Anton McGonnell Profile
Anton McGonnell

@aton2006

Followers: 558
Following: 5K
Media: 15
Statuses: 682

Product @SambaNovaAI #GenerativeAI

Palo Alto
Joined August 2009
@aton2006
Anton McGonnell
2 months
RT @SambaNovaAI: We just keep moving up! 📈 131K Context Length is now available on #Llama4 Maverick! Unlock use cases with Maverick for…
0
10
0
@aton2006
Anton McGonnell
3 months
RT @MetaforDevs: Apply to participate in the LlamaCon Hackathon in SF May 3rd-4th 🦙. We’ve partnered with @cerebral_valley, @Shack15sf, @neb…
0
11
0
@aton2006
Anton McGonnell
3 months
RT @SambaNovaAI: Another 🏆. We're honored to be selected for the @Forbes 2025 #ForbesAI50 List! "AI is the most transformative technology…
0
2
0
@aton2006
Anton McGonnell
3 months
Fastest in the world! Maverick coming next.
@SambaNovaAI
SambaNova Systems
3 months
🦙 Llama 4 Scout from @AIatMeta is now available on SambaNova Cloud! Fastest inference clocked at 697+ t/s. Llama 4 Maverick will be out next week, followed by higher context lengths up to 128K! 🚀 Try it now on SambaNova Cloud 👇
1
1
12
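For anyone following the "Try it now" pointer above: SambaNova Cloud is generally reachable through an OpenAI-compatible API, so a first call could look roughly like the sketch below. The base URL and the model ID Llama-4-Scout-17B-16E-Instruct are assumptions here, not details confirmed by the thread; check the SambaNova Cloud docs for the exact values.

```python
# Minimal sketch of calling Llama 4 Scout on SambaNova Cloud.
# Assumes an OpenAI-compatible endpoint and the model ID shown below
# (both are guesses to verify against the SambaNova Cloud docs).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.sambanova.ai/v1",   # assumed OpenAI-compatible endpoint
    api_key=os.environ["SAMBANOVA_API_KEY"],  # your SambaNova Cloud key
)

response = client.chat.completions.create(
    model="Llama-4-Scout-17B-16E-Instruct",   # assumed model ID
    messages=[{"role": "user",
               "content": "Summarize the Llama 4 Scout release in one sentence."}],
)
print(response.choices[0].message.content)
```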
@aton2006
Anton McGonnell
3 months
RT @SambaNovaAI: We've teamed up with @AIatMeta to unleash the power of both Llama 4 models! 🦙⚡️ Get ready for lightning-fast AI magic—dro…
0
16
0
@aton2006
Anton McGonnell
3 months
Very proud of the @SambaNovaAI team as we partner with @AIatMeta on this huge release for the AI community. Big updates to come!
@AIatMeta
AI at Meta
3 months
Today is the start of a new era of natively multimodal AI innovation. Today, we’re introducing the first Llama 4 models: Llama 4 Scout and Llama 4 Maverick — our most advanced models yet and the best in their class for multimodality.
Llama 4 Scout
• 17B-active-parameter model…
[image attached]
0
0
6
@aton2006
Anton McGonnell
3 months
RT @weights_biases: LLM evaluation at scale isn’t just about accuracy anymore. It’s about speed, cost, and hardware efficiency—and @SambaN…
0
9
0
@aton2006
Anton McGonnell
4 months
RT @SambaNovaAI: Speed demon alert! We're out here turning DeepSeek R1 into a Formula 1 race! 🏎️💨
✅ Now up to 250 tps
✅ Available for anyo…
0
6
0
@aton2006
Anton McGonnell
4 months
RT @SambaNovaAI: 📣 We heard you, devs — DeepSeek R1 671B is now available! 🚀 Our @deepseek_ai launch on SambaNova Cloud received overwhel…
0
36
0
@aton2006
Anton McGonnell
4 months
RT @RobRizk1: @SambaNovaAI is the undefeated GOAT for running extremely fast inference at SCALE, not for prototyping or running demos. 10…
0
3
0
@aton2006
Anton McGonnell
4 months
We don’t just win on speed, we also win on total system throughput, which is ultimately what matters for tokenomics. The significance of this can’t be overstated. Speed matters for user experience, throughput matters for cost efficiency. Many believed that the GPU disrupters…
@SambaNovaAI
SambaNova Systems
4 months
SN40L crushes H200 in real-world #AI inference! 🦾 We measured @deepseek_ai's R1 with SGLang 0.4.2 on 1 node of H200, & guess what - SN40L completely smashes H200's Pareto frontier:
☑️ 5.7x faster (201 tps vs 35 tps)
☑️ Reasoning model (30s vs 171s to generate 6k tokens)
[image attached]
1
0
4
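The speed-versus-throughput point above is easy to see with a back-of-the-envelope calculation: per-stream tokens/sec sets how responsive one user's session feels, while aggregate tokens/sec across all concurrent streams is what divides the hardware cost. A rough sketch, with every number a hypothetical placeholder rather than anything measured in the thread:

```python
# Back-of-the-envelope sketch of why aggregate throughput, not per-stream
# speed, drives cost per token. All numbers are hypothetical placeholders.

def cost_per_million_tokens(system_cost_per_hour: float,
                            total_throughput_tps: float) -> float:
    """Dollar cost to generate one million tokens at a given aggregate throughput."""
    tokens_per_hour = total_throughput_tps * 3600
    return system_cost_per_hour / tokens_per_hour * 1_000_000

per_stream_tps = 200        # what a single user experiences (latency / UX)
concurrent_streams = 32     # hypothetical number of simultaneous streams
aggregate_tps = per_stream_tps * concurrent_streams  # what the operator is paying for
# (In practice per-stream speed usually drops as batch size grows; this
#  simplification only illustrates that the two quantities are different.)

print(f"${cost_per_million_tokens(50.0, aggregate_tps):.2f} per 1M tokens")
```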
@aton2006
Anton McGonnell
4 months
RT @bitdeep_: I need to find a way to plug this into my cursor/aider ASAP. It's @SambaNovaAI
[image attached]
0
1
0
@aton2006
Anton McGonnell
5 months
RT @RobRizk1: The fastest uncensored DEEPSEEK R1 in the world is here! 3.5X faster and this is only the beginning! @SambaNovaAI is the…
0
3
0
@aton2006
Anton McGonnell
5 months
RT @RobRizk1: @sambanova is one of the most innovative companies in the world building the fastest chip to power autonomous ai agents in th…
0
2
0
@aton2006
Anton McGonnell
5 months
RT @RobRizk1: Fastest reasoning model, fastest moving team & fastest enterprise support @SambaNovaAI.
0
2
0
@aton2006
Anton McGonnell
5 months
DeepSeek 671B R1 is the most important open source model ever created. However, inference capacity for it has been severely limited due to the inefficiency of GPUs in running these very large, very sparse models. SambaNova is going to solve that problem, scaling out global…
@SambaNovaAI
SambaNova Systems
5 months
🏎️⚡️Ka-chow⚡️ The fastest DeepSeek-R1 671B on SambaNova Cloud — running at 198 t/s!
✅ 3X faster & 5X more efficient than the latest GPUs
✅ Running on 1 rack efficiently (16 RDUs)
✅ Hosted in secure US data centers
✅ 100X the global capacity by the end of 2025
@deepseek_ai #AI
4
6
32
@aton2006
Anton McGonnell
5 months
RT @julien_c: You can now filter the list of all models on HF by whether they're supported by / hooked to your favorite Inference provider…
0
25
0
@aton2006
Anton McGonnell
5 months
Maybe the best instruct model available in open source today. Try it out!
@SambaNovaAI
SambaNova Systems
5 months
📣 Welcome another new addition to SambaNova Cloud: @allen_ai's Tülu 3!
1⃣ Runs on SambaNova RDU with 1 rack & low power
2⃣ Model performs better than @deepseek_ai V3
3⃣ We are the fastest provider of Tülu 3
Learn more about our Tülu 3 launch here ⬇️
0
0
1
@aton2006
Anton McGonnell
5 months
RT @ClementDelangue: Now you can run millions of open-source models (including @deepseek_ai R1) in production with the best performance dir…
0
40
0
@aton2006
Anton McGonnell
5 months
Thanks to @julien_c and the @huggingface team for this partnership. Huggingface is the connective tissue that holds open source AI together. With this release, many of the best models on Huggingface are more accessible than ever, and when using the @SambaNovaAI API, they will be…
@SambaNovaAI
SambaNova Systems
5 months
⚡️ We've partnered with @HuggingFace to bring lightning fast inference speeds.
🤗 10x faster on @AIatMeta's Llama 3 & @Alibaba_Qwen
🤗 #SambaNovaCloud models #Llama Guard & #Qwen QwQ
🤗 Minimal code changes, switch to a faster provider for HF #devs
🤗 @deepseek_ai 🔜
Try it 👇
0
0
8
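The "minimal code changes" line above points at Hugging Face's Inference Providers integration; a minimal sketch of routing a chat completion through SambaNova via huggingface_hub is below. The provider string, the required huggingface_hub version, and the Qwen/QwQ-32B model ID are assumptions to verify against the Hugging Face docs, not details confirmed in the thread.

```python
# Minimal sketch of the "switch to a faster provider" path:
# sending a Hugging Face chat completion through SambaNova as the
# inference provider. Assumes huggingface_hub >= 0.28 and an HF token
# with Inference Providers enabled; the model ID is an illustrative guess.
from huggingface_hub import InferenceClient

client = InferenceClient(provider="sambanova", api_key="hf_...")  # your HF token

completion = client.chat_completion(
    messages=[{"role": "user",
               "content": "In one line, what does an inference provider do?"}],
    model="Qwen/QwQ-32B",  # assumed model ID; pick any model the provider serves
    max_tokens=128,
)
print(completion.choices[0].message.content)
```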