
Anton McGonnell
@aton2006
Followers: 558 · Following: 5K · Media: 15 · Statuses: 682
Product @SambaNovaAI #GenerativeAI
Palo Alto
Joined August 2009
RT @SambaNovaAI: We just keep moving up! 📈 131K Context Length is now available on #Llama4 Maverick! Unlock use cases with Maverick for…
0
10
0
RT @MetaforDevs: Apply to participate in the LlamaCon Hackathon in SF May 3rd-4th🦙. We’ve partnered with @cerebral_valley, @Shack15sf, @neb….
0
11
0
RT @SambaNovaAI: Another 🏆. We're honored to be selected for the @Forbes 2025 #ForbesAI50 List! "AI is the most transformative technology…
0
2
0
Fastest in the world! Maverick coming next.
🦙 Llama 4 Scout from @AIatMeta is now available on SambaNova Cloud! Fastest inference clocked at 697+ t/s. Llama 4 Maverick will be out next week, followed by higher context lengths up to 128K! 🚀 Try it now on SambaNova Cloud 👇
1
1
12
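As a rough illustration of what "try it now" looks like in practice, here is a minimal sketch of calling a Llama 4 model through SambaNova Cloud's OpenAI-compatible API. The base URL, model identifier, and environment-variable name below are assumptions for illustration, not details confirmed by the tweet.

import os
from openai import OpenAI

# Assumed OpenAI-compatible endpoint for SambaNova Cloud; check the current docs for the exact URL.
client = OpenAI(
    base_url="https://api.sambanova.ai/v1",
    api_key=os.environ["SAMBANOVA_API_KEY"],  # hypothetical env var holding your key
)

response = client.chat.completions.create(
    model="Llama-4-Scout-17B-16E-Instruct",  # assumed model id for Llama 4 Scout
    messages=[{"role": "user", "content": "Summarize what Llama 4 Scout is in two sentences."}],
)
print(response.choices[0].message.content)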
RT @SambaNovaAI: We've teamed up with @AIatMeta to unleash the power of both Llama 4 models! 🦙⚡️. Get ready for lightning-fast AI magic—dro….
0
16
0
Very proud of the @SambaNovaAI team as we partner with @AIatMeta on this huge release for the AI community. Big updates to come!
Today is the start of a new era of natively multimodal AI innovation. Today, we’re introducing the first Llama 4 models: Llama 4 Scout and Llama 4 Maverick — our most advanced models yet and the best in their class for multimodality.
Llama 4 Scout
• 17B-active-parameter model
0
0
6
RT @weights_biases: LLM evaluation at scale isn’t just about accuracy anymore. It’s about speed, cost, and hardware efficiency—and @SambaN….
0
9
0
RT @SambaNovaAI: Speed demon alert! We're out here turning DeepSeek R1 into a Formula 1 race! 🏎️💨
✅ Now up to 250 tps
✅ Available for anyo…
0
6
0
RT @SambaNovaAI: 📣 We heard you, devs — DeepSeek R1 671B is now available! 🚀 Our @deepseek_ai launch on SambaNova Cloud received overwhel…
0
36
0
RT @RobRizk1: @SambaNovaAI is the undefeated GOAT for running extremely fast inference at SCALE, not for prototyping or running demos. 10…
0
3
0
We don’t just win on speed, we also win on total system throughput, which is ultimately what matters for tokenomics. The significance of this can’t be overstated. Speed matters for user experience, throughput matters for cost efficiency. Many believed that the GPU disrupters…
SN40L crushes H200 in real-world #AI inference! 🦾 We measured @deepseek_ai's R1 with SGLang 0.4.2 on 1 node of H200, & guess what - SN40L completely smashes H200's Pareto frontier:
☑️ 5.7x faster (201 tps vs 35 tps)
☑️ Reasoning model (30s vs 171s to generate 6k tokens)
1
0
4
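For readers checking the math: those latency numbers follow directly from the measured rates, since generating roughly 6,000 tokens at 201 tokens/s takes about 6000 / 201 ≈ 30 s, while the same generation at 35 tokens/s takes about 6000 / 35 ≈ 171 s.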
RT @bitdeep_: I need to find a way to plug this into my cursor/aider ASAP. It's @SambaNovaAI
0
1
0
RT @RobRizk1: The fastest uncensored DEEPSEEK R1 in the world is here! 3.5X faster and this is only the beginning! @SambaNovaAI is the…
0
3
0
RT @RobRizk1: @sambanova is one of the most innovative companies in the world building the fastest chip to power autonomous ai agents in th….
0
2
0
RT @RobRizk1: Fastest reasoning model, fastest moving team & fastest enterprise support @SambaNovaAI.
0
2
0
DeepSeek 671B R1 is the most important open source model ever created. However, inference capacity for it has been severely limited due to the inefficiency of GPUs in running these very large, very sparse models. SambaNova is going to solve that problem, scaling out global…
🏎️⚡️Ka-chow⚡️ The fastest DeepSeek-R1 671B on SambaNova Cloud — running at 198 t/s!
✅ 3X faster & 5X more efficient than the latest GPUs
✅ Running on 1 rack efficiently (16 RDUs)
✅ Hosted in secure US data centers
✅ 100X the global capacity by the end of 2025
@deepseek_ai #AI
4
6
32
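A hedged sketch of how one might sanity-check a throughput claim like "198 t/s" from the client side, by streaming from the same assumed OpenAI-compatible endpoint. The model id is an assumption, and a word-based rate measured over the network will undershoot any server-side tokens/s figure.

import os
import time
from openai import OpenAI

client = OpenAI(
    base_url="https://api.sambanova.ai/v1",  # assumed endpoint
    api_key=os.environ["SAMBANOVA_API_KEY"],  # hypothetical env var
)

start = time.time()
pieces = []
stream = client.chat.completions.create(
    model="DeepSeek-R1",  # assumed model id
    messages=[{"role": "user", "content": "Explain mixture-of-experts routing in one paragraph."}],
    stream=True,
)
for chunk in stream:
    # Some chunks (e.g. the final one) may carry no content delta.
    if chunk.choices and chunk.choices[0].delta.content:
        pieces.append(chunk.choices[0].delta.content)
elapsed = time.time() - start

text = "".join(pieces)
# Word count is a crude proxy for tokens, so treat this as a lower bound.
print(f"~{len(text.split()) / elapsed:.0f} words/s over {elapsed:.1f}s")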
Maybe the best instruct model available in open source today. Try it out!
📣 Welcome another new addition to SambaNova Cloud: @allen_ai's Tülu 3!
1⃣ Runs on SambaNova RDU with 1 rack & low power
2⃣ Model performs better than @deepseek_ai V3
3⃣ We are the fastest provider of Tülu 3
Learn more about our Tülu 3 launch here ⬇️
0
0
1
RT @ClementDelangue: Now you can run millions of open-source models (including @deepseek_ai R1) in production with the best performance dir….
0
40
0
Thanks to @julien_c and the @huggingface team for this partnership. Hugging Face is the connective tissue that holds open source AI together. With this release, many of the best models on Hugging Face are more accessible than ever, and when using the @SambaNovaAI API, they will be…
⚡️ We've partnered with @HuggingFace to bring lightning fast inference speeds.
🤗 10x faster on @AIatMeta's Llama 3 & @Alibaba_Qwen
🤗 #SambaNovaCloud models #Llama Guard & #Qwen QwQ
🤗 Minimal code changes, switch to a faster provider for HF #devs
🤗 @deepseek_ai 🔜
Try it 👇
0
0
8
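Since the Hugging Face integration routes requests to partner inference providers, here is a minimal sketch of what "minimal code changes" might look like with the huggingface_hub client. The model id is illustrative, and the provider/api_key arguments reflect the Inference Providers interface generally rather than anything stated in the tweet.

import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="sambanova",            # route the request to SambaNova's hosted inference
    api_key=os.environ["HF_TOKEN"],  # hypothetical env var holding a Hugging Face token
)

out = client.chat_completion(
    model="meta-llama/Llama-3.3-70B-Instruct",  # illustrative model id
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out.choices[0].message.content)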