
Preston Badeer
@pbadeer
530 Followers · 676 Following · 134 Media · 2K Statuses
I post about the intersection of 🦾AI, 🤖LLMs, 📊data products, and 📈data engineering.
Turning data into products👉🏻
Joined March 2015
Can we get @JoinAutopilot_ to create a @levelsio tracker? Or @marc_louvion, or both? These guys love sharing their data, and many folks want to follow their lead. This would be a killer partnership. 🔥
OK, I increased the recurring investment to $10,000/week. The only reason I don't go all-in with $600,000 is this: This money is the fruit of 7 years of entrepreneurship failures. If the market crashes tomorrow, I won't be able to sleep. I'm going to invest almost everything I
Chain of Continuous Thought looks dope, very excited to try models trained this way.
🔥 This is sick. Using code to run simulations is way too uncommon IMO. So many amazing discoveries can be made by developing a simple simulation framework (even without LLMs).
.@Microsoft just dropped TinyTroupe! Described as "an experimental Python library that allows the simulation of people with specific personalities, interests, and goals." These agents can listen, reply back, and go about their lives in simulated TinyWorld environments.
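To make that point concrete, here is a tiny, hypothetical agent simulation. This is not TinyTroupe's actual API; the class and field names are mine, purely to illustrate how little code an LLM-free simulation framework needs.

```python
# Hypothetical illustration, not TinyTroupe's API: a minimal agent simulation
# where "people" with a sociability trait interact over discrete time steps.
import random
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    sociability: float  # 0..1, chance of starting a conversation each step
    conversations: int = 0

class TinyWorldSketch:
    def __init__(self, people):
        self.people = people

    def step(self):
        # each person may strike up a conversation with a random other person
        for person in self.people:
            if random.random() < person.sociability:
                other = random.choice([p for p in self.people if p is not person])
                person.conversations += 1
                other.conversations += 1

    def run(self, steps=100):
        for _ in range(steps):
            self.step()
        return {p.name: p.conversations for p in self.people}

world = TinyWorldSketch([Person("Ada", 0.8), Person("Grace", 0.3), Person("Alan", 0.5)])
print(world.run())  # even a toy model like this surfaces interaction patterns
```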
Swarm is cool but definitely a tutorial. Explicitly not for production and not a library, just an example.
This came unexpectedly! @OpenAI released Swarm, a lightweight library for building multi-agent systems. Swarm provides a stateless abstraction to manage interactions and handoffs between multiple agents and does not use the Assistants API. 🤔 How it works: 1️⃣ Define Agents, each
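For context on the "define agents + handoffs" pattern, here is a minimal sketch roughly following the example style in the Swarm repo's README; the agent names and instructions here are illustrative.

```python
# Minimal Swarm sketch: an English agent that hands off to a Spanish agent.
from swarm import Swarm, Agent

spanish_agent = Agent(
    name="Spanish Agent",
    instructions="You only speak Spanish.",
)

def transfer_to_spanish_agent():
    """Hand the conversation off to the Spanish-speaking agent."""
    return spanish_agent

english_agent = Agent(
    name="English Agent",
    instructions="You only speak English.",
    functions=[transfer_to_spanish_agent],
)

client = Swarm()
response = client.run(
    agent=english_agent,
    messages=[{"role": "user", "content": "Hola, ¿cómo estás?"}],
)
print(response.messages[-1]["content"])
```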
FINALLY got access to @CerebrasSystems. They ain't kidding, it's even faster than @GroqInc. 🤯 I'm getting 447 tok/s on Llama 3.1 70B with JSON parsing. 0.95s round trip!
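A rough sketch of how you'd reproduce a timing-plus-JSON check like that. Assumptions, not from the tweet: Cerebras exposes an OpenAI-compatible endpoint at api.cerebras.ai/v1 and the model id is "llama3.1-70b".

```python
# Hedged sketch: time one round trip and parse JSON from the response.
import json, os, time
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",          # assumed endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],
)

start = time.perf_counter()
resp = client.chat.completions.create(
    model="llama3.1-70b",                            # assumed model id
    messages=[{"role": "user", "content":
               "Return only a JSON object with keys 'city' and 'population' for Omaha."}],
)
elapsed = time.perf_counter() - start

data = json.loads(resp.choices[0].message.content)   # raises if the model adds prose around the JSON
tokens = resp.usage.completion_tokens
print(f"{tokens} tokens in {elapsed:.2f}s ≈ {tokens / elapsed:.0f} tok/s, parsed: {data}")
```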
This is specific to the Instruct models: https://t.co/HDG7zVhtKE. However, if you're having trouble with Llama 3.1 Instruct 8B on any JSON-mode tasks, I recommend trying 70B before increasing the complexity of your pipeline or changing models entirely.
llama.com
Llama 3.1 - the most capable open model.
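One way to act on that advice is a simple validate-and-fall-back loop. The sketch below assumes a generic OpenAI-compatible endpoint serving Llama 3.1 Instruct; the model ids are placeholders for whatever your provider uses.

```python
# Hedged sketch: ask 8B first, fall back to 70B if the output isn't valid JSON.
import json
from openai import OpenAI

client = OpenAI()  # point base_url / api_key at your Llama 3.1 provider

def extract_json(prompt, models=("llama-3.1-8b-instruct", "llama-3.1-70b-instruct")):
    """Try the smaller model first; escalate to 70B on invalid JSON."""
    for model in models:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        try:
            return json.loads(resp.choices[0].message.content)
        except json.JSONDecodeError:
            continue  # this model didn't return clean JSON, try the next one
    raise ValueError("No model returned valid JSON")
```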
Struggling with Llama 3.1 8B? I wish I had seen this sooner. Meta: "We recommend using Llama 70B-instruct or Llama 405B-instruct for applications that combine conversation and tool calling. Llama 8B-Instruct can not reliably maintain a conversation alongside tool calling
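A hedged sketch of what "conversation plus tool calling" with 70B-instruct looks like, using the standard OpenAI-style tools schema; the provider, endpoint, and model id are assumptions, not from the tweet.

```python
# Hedged sketch: one tool definition plus a conversational user turn.
from openai import OpenAI

client = OpenAI()  # point at whichever provider serves Llama 3.1 70B-instruct

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="llama-3.1-70b-instruct",  # placeholder model id
    messages=[{"role": "user", "content": "Should I bring an umbrella in Omaha today?"}],
    tools=tools,
)
# 70B can weave tool calls into a conversation; per Meta, 8B often can't do both reliably
print(resp.choices[0].message.tool_calls)
```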
My Python people, if you've been waiting for the right time to move to uv for package management, now is the time.
uv 0.4.0 is out now 🚢🚢🚢 It includes first-class support for Python projects that aren't intended to be built into Python _packages_, which is common for web applications, data science projects, etc.
30% is a huge improvement over the previous hype around 13% (Devin), but still not something I'd consider anywhere close to production-ready. Fast progress though! I hope we get some solid open source options in this space as the commercial ones improve.
I'm excited to share that we've built the world's most capable AI software engineer, achieving 30.08% on SWE-Bench – ahead of Amazon and Cognition. This model is so much more than a benchmark score: it was trained from the start to think and behave like a human SWE.
Great update on open source, with expanded details in the replies 👇
What a massive week for Open Source AI: We finally managed to beat closed source fair and square! 1. Meta Llama 3.1 405B, 70B & 8B—The latest in the llama series, this version (base + instruct) comes with multilingual (8 languages) support, a 128K context, and an even more
For commercial AI use cases, open source models are everything. This details just how much of a leader Llama/Meta is in that space.
Huge congrats to @AIatMeta on the Llama 3.1 release! Few notes: Today, with the 405B model release, is the first time that a frontier-capability LLM is available to everyone to work with and build on. The model appears to be GPT-4 / Claude 3.5 Sonnet grade and the weights are
This library deserves more attention 👇🏻
You don't need an H100 to run Llama-3-405b. 2 MacBooks and 1 Mac Studio will do the job, with @exolabs_ to aggregate the memory/compute. I'm ready for you, Llama-3-405b.
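A back-of-envelope check on why that hardware combo is plausible, assuming 4-bit quantized weights, two 128 GB MacBook Pros, and a 192 GB Mac Studio (my numbers, not the tweet's), and ignoring KV-cache and activation overhead.

```python
# Rough arithmetic: 405B params at ~0.5 bytes/param (4-bit) vs. pooled unified memory.
params = 405e9
weights_gb = params * 0.5 / 1e9      # ≈ 203 GB of quantized weights
unified_memory_gb = 2 * 128 + 192    # = 448 GB across the three machines
print(f"{weights_gb:.0f} GB of weights vs {unified_memory_gb} GB of pooled memory")
```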
Another open source win ✅
NVIDIA Transitions Fully Towards Open-Source GPU Kernel Modules https://t.co/ymsTYZKWIe
#linux
This is neat, but IMO all of these have a long way to go before being part of daily workflows for anything beyond startups and indie hackers.
Introducing Claude Engineer 2.0, with agents! 🚀 Biggest update yet with the addition of a code editor and code execution agents, and dynamic editing. When editing files (especially large ones), Engineer will direct a coding agent, and the agent will provide changes in batches.