Jonathan Ross

@JonathanRoss321

Followers: 14,186
Following: 99
Media: 70
Statuses: 513

CEO & Founder, Groq®™

Joined November 2021
Pinned Tweet
@JonathanRoss321
Jonathan Ross
2 months
😉
@GroqInc
Groq Inc
2 months
Still faster.
21
37
440
10
4
83
@JonathanRoss321
Jonathan Ross
17 days
Nvidia hit 100k developers in 7 years. Our goal was to hit 100k developers in 7 weeks. It's been 6 weeks, and...
Tweet media one
64
62
1K
@JonathanRoss321
Jonathan Ross
3 months
What do @GroqInc 's LPUs cost? So much curiosity! We're very comfortable with this pricing and performance - and no, the chips/cards don't cost anywhere near $20,000 😂 #Groqspeed
Tweet media one
34
41
398
@JonathanRoss321
Jonathan Ross
5 months
Hey @ElonMusk , read this👇 #grok and #GroqOn
@JonathanRoss321
Jonathan Ross
6 months
Hey @ElonMusk , Watch This    #GroqOn 🤘 #grok
8
30
97
134
118
270
@JonathanRoss321
Jonathan Ross
2 months
@GroqInc has had a liiiiitle bit of user growth this week. #GroqSpeed
Tweet media one
27
30
332
@JonathanRoss321
Jonathan Ross
2 months
@__tinygrad__ @GroqInc What would we get if you couldn't do it?
2
5
262
@JonathanRoss321
Jonathan Ross
2 months
Today at GroqHQ. Reshare if you want @MistralAI running on @GroqInc LPUs
Tweet media one
6
23
255
@JonathanRoss321
Jonathan Ross
2 months
We'll run @xai 's model on @GroqInc LPUs when it's renamed to Slartibartfast.
20
12
254
@JonathanRoss321
Jonathan Ross
2 months
We're shipping them as fast as we can make them! More @GroqInc #LPU s coming!
@GroqInc
Groq Inc
2 months
More LPU™ systems coming in hot! We hear you, world: you want that ultra-low latency speed. The team is building as fast as they can. We're increasing capacity and speed daily. More announcements coming next week, stay tuned!
Tweet media one
42
40
486
10
8
165
@JonathanRoss321
Jonathan Ross
17 days
We heard the demand for increased rate limits! This partnership adds another 21,600 LPUs this year to GroqCloud, with the option to add an additional 108,000 LPUs next year. For comparison: this will 8x the capacity this year vs. what's available on GroqCloud today, which will…
@GroqInc
Groq Inc
17 days
We're excited to partner with @EWPESG to build an #AI Compute Center for Europe in Norway. This puts us on track to deliver 50% of the world's #inference compute capacity via GroqCloud™, running on our AI infrastructure - the LPU™ Inference Engine. Read more:…
3
10
92
12
13
140
@JonathanRoss321
Jonathan Ross
2 months
Looking forward to @nvidia GTC next week! Apparently there are numerous exciting and significant last minute changes to the event aimed at answering how Nvidia will respond to concerns raised by how popular some inference focused AI chip startup has recently become. 👀 What…
14
6
138
@JonathanRoss321
Jonathan Ross
3 months
Lunch in Paris. Nothing to see here. @arthurmensch & @tlacroix6
Tweet media one
10
8
133
@JonathanRoss321
Jonathan Ross
3 months
@jiayq @GroqInc We're very comfortable at our current pricing for Token as a Service. Very. Is everyone else comfortable with their pricing for TaaS? 😉
Tweet media one
7
10
125
@JonathanRoss321
Jonathan Ross
2 months
😉
@sundeep
sunny madra
2 months
A new partnership between @aramco and @GroqInc to build GroqCloud inference capabilities together announced at @LEAPandInnovate
22
39
344
10
12
121
@JonathanRoss321
Jonathan Ross
15 days
We broke @ArtificialAnlys 's graph for Llama3 8B. Sometimes we get a lucky shot on the benchmarks at the high end of our variance, but this is real, we have some we've logged that are even a liiiiiittle bit faster. Who knows, maybe someday this could be our mean perf 🤔 Maybe.
Tweet media one
9
9
107
@JonathanRoss321
Jonathan Ross
3 months
Wow on the press recently, thanks ( @Gizmodo , @stratechery , @SemiAnalysis_ ) for the coverage of @GroqInc . 🙏 We're 2 months into providing early access to our LPU™ systems. Since those recent publications, we pushed a software update last night that gets more than 2x the throughput…
10
17
106
@JonathanRoss321
Jonathan Ross
6 months
Hey @ElonMusk , Watch This    #GroqOn 🤘 #grok
8
30
97
@JonathanRoss321
Jonathan Ross
3 months
Thank you for the shoutout of @GroqInc on @theallinpod ! @friedberg w.r.t. getting everything right all at once, when asked what our secret sauce is, we say, "No, we have 11 herbs and spices, and it takes every one of them to do what we do." As for luck, here's a small excerpt…
Tweet media one
Tweet media two
@theallinpod
The All-In Podcast
3 months
E167: $NVDA smashes earnings (again), $GOOG's AI disaster, @GroqInc 's LPU breakthrough & more (0:00) bestie intros: banana boat! (2:34) nvidia's terminal value and bull/bear cases in the context of the history of the internet (27:26) groq's big week, training vs. inference,…
152
105
864
7
4
97
@JonathanRoss321
Jonathan Ross
17 days
@notfncladvice You're absolutely right. The reason we got to 100,000 developers >50x faster was we made it almost effortless to build AI applications. No low level C++ code, no CUDA kernels, no need to understand computer architecture. We also deployed all the HW and automated the dev ops. 😂
3
3
96
@JonathanRoss321
Jonathan Ross
13 days
Thank you for the shout out @AndrewYNg - at @GroqInc we're committed both to driving the cost of compute to zero, and the speed of compute to ∞ If you haven't tried ludicrous speed yet, try it at And if you find yourself in Mountain View, you're invited…
@AndrewYNg
Andrew Ng
14 days
Much has been said about many companies’ desire for more compute (as well as data) to train larger foundation models. I think it’s under-appreciated that we have nowhere near enough compute available for inference on foundation models as well. Years ago, when I was leading teams…
43
248
1K
6
11
97
@JonathanRoss321
Jonathan Ross
3 months
Did someone say they were worried about @Groq 's cost/ability to scale? 👀 #GroqSpeed
@AravSrinivas
Aravind Srinivas
3 months
trending on crunchbase h/t @wagieeacc
Tweet media one
18
17
269
14
10
78
@JonathanRoss321
Jonathan Ross
6 months
I have two suggestions for Elon:
1. Slartibartfast is a far more appropriate name for a snarky chatbot.
2. He should run the #LLM on #Groq ™ so he can provide sarcasm at speed.
Read my recent blog post to #grok why on both fronts: #GroqOn
14
38
67
@JonathanRoss321
Jonathan Ross
1 month
'Nuff said @RaiseSummit
Tweet media one
4
6
68
@JonathanRoss321
Jonathan Ross
17 days
@skryl_alex We've signed deals to deploy 240,000 LPUs.
5
2
63
@JonathanRoss321
Jonathan Ross
9 days
There are at least five levels of AI App Autonomy:
1. Ideate: draft, template, etc.
2. Inform: retrieve information reliably
3. Verify: check our work
4. Update: make requested changes
5. Decide: make decisions
What level does your app target?
5
6
63
@JonathanRoss321
Jonathan Ross
2 months
We thought the universe needed to bend a bit more in both space and time. This will do it.
@chamath
Chamath Palihapitiya
2 months
Groq news! Happy to announce that @DefinitiveIO has been acquired by @GroqInc so that we can accelerate Groq’s cloud offering. Was lucky to be the seed and A for both of these companies. @sundeep will now be GM of the Groq Cloud and work closely with @JonathanRoss321 to…
76
50
623
5
8
62
@JonathanRoss321
Jonathan Ross
6 months
@rowancheung Hey @rowancheung , another competitive difference is responsiveness. LLMs run faster on Groq®'s LPU™ chips than any other hardware, so if you want better answers fast let @elonmusk know that you want @xai to run at #GroqSpeed . That, or you can wait, and wait, and wait for them to…
26
25
36
@JonathanRoss321
Jonathan Ross
2 months
@IanCutress @nvidia @groq I've met Jensen before, and he's had his team updating GTC specifically in response to Groq this week, so not knowing enough about Groq seems unlikely. That said, *** @GroqInc runs 70B parameter models faster than @nvidia runs 7B parameter models. *** Try it:…
Tweet media one
2
9
52
@JonathanRoss321
Jonathan Ross
3 months
@lulumeservey Lulu, have you been reading our internal comms plans? 😂 Credit goes to @lifebypixels our head of Brand and @andycunningham4 our fractional CMO, and the whole @GroqInc team.
2
0
42
@JonathanRoss321
Jonathan Ross
6 months
Hey @ElonMusk , if you want to compete with OpenAI #ChatGPT , send us the @Xai #LLM and we'll run it on Groq™ to show X users what best-in-class #GenAI is really like. Our #LPU  ™ Inference Engine = speed and quality. #GroqOn #grok
5
16
40
@JonathanRoss321
Jonathan Ross
2 months
@kami_ayani @adamscochran Yes, we do our arithmetic in FP16 and store weights in FP8 now. That said, it's a 70B and 8x7B on that site. We do have a (not released) 180B parameter model running at 200T/s with FP16 multiplies, and the performance of an MoE tends to be similar to…
2
6
40
@JonathanRoss321
Jonathan Ross
5 months
When you start a startup, someday you hope to get popular. How popular? Well, having a fake crypto currency named after you is a good sign you're popular. There is no crypto currency related to Groq.
20
15
32
@JonathanRoss321
Jonathan Ross
16 days
@karpathy No, but if you want to multiply and accumulate happiness, try an LPU.
4
1
38
@JonathanRoss321
Jonathan Ross
2 months
Cute kid. DM me, we'll get you access, and you can just include him in whatever demo you do, that'll be enough 😂
@thegarrettscott
Garrett Scott 🕳
3 months
@GroqInc @StonkyOli I am once again offering my first born for access
Tweet media one
3
0
17
4
1
34
@JonathanRoss321
Jonathan Ross
1 month
Most things that are hard are counterintuitive.
4
2
34
@JonathanRoss321
Jonathan Ross
20 days
Hope this isn't on account of us...
@danielnewmanUV
Daniel Newman
20 days
How far they run… How fast they fall. And no, don’t panic. $NVDA will be absolutely positively 👌 $SMCI looks even worse. 👀
Tweet media one
6
1
19
4
1
33
@JonathanRoss321
Jonathan Ross
5 months
@ylecun To be fair, I took @ylecun 's course at NYU in 2007/8, most of the course was about CNNs, and I didn't get the value of convolutions until a few years later - you were a little ahead :)
2
0
31
@JonathanRoss321
Jonathan Ross
17 days
@Ahmad_Al_Dahle We credit Llama3 with accelerating it a week faster than we projected. Thank you @AIatMeta 🦙🦙🦙
1
0
32
@JonathanRoss321
Jonathan Ross
3 months
Deepgram knocked it out of the park! Enabling real-time voice to text from 12,000km away. Part of any serious voice solution.
@DeepgramAI
Deepgram
3 months
Huge shoutout to Jonathan Ross at @GroqInc for this incredible LUI demo on @CNN ! We're honored to be a part of your stack & we can't wait to see where this astounding technology will go next 🚀 Thanks again @JonathanRoss321 for the mention 😄 Demo:
2
6
36
2
5
28
@JonathanRoss321
Jonathan Ross
5 months
@PatrickMoorhead @AMD @nvidia @Signal_65 Looks like Nvidia went big on input tokens and small on output tokens because the MI300X has better memory bandwidth, and will look better than Nvidia on output. I wouldn't call these "typical" settings, typically the output is longer. #BenchmarkShenanigans
2
2
28
@JonathanRoss321
Jonathan Ross
3 months
Love it! @rauchg @vercel 🔻
@rauchg
Guillermo Rauch
3 months
Two @vercel AI SDK playground additions: 1️⃣ @groqinc , a lightning fast inference platform powered by custom hardware
10
18
268
0
4
30
@JonathanRoss321
Jonathan Ross
3 months
A little over 24 hours left until my talk at @WorldGovSummit - first time speaking to a Prime Minister, let alone multiple, let alone from a stage. As usual, @GroqInc has something new and exciting to show. Wish us luck! 😁👍 #GroqSpeed
7
2
28
@JonathanRoss321
Jonathan Ross
5 months
Thrilled to talk with @SavIsSavvy and @LisaMartinTV for #SuperCloud5 . Great conversation about why whiteboards were banned at @GroqInc , #AI safety, and how #GenAI may increase empathy in the world. Oh, also, why the #Groq ™ brand is "wow!"
@theCUBE
theCUBE
5 months
The llama is back! 🦙 #Supercloud5 rock stars. A curious wardrobe change for #theCUBE 's Savannah Peterson as Jonathan Ross, CEO of @GroqInc , known for its mascot llama, stops by our studios today. 💬 Join the real-time conversations! #LiveTechNews
Tweet media one
Tweet media two
0
3
12
4
10
25
@JonathanRoss321
Jonathan Ross
3 months
@lukaszbyjos @GroqInc We'll save you the electric bill, and send you some foot warmers instead.
1
0
26
@JonathanRoss321
Jonathan Ross
6 months
Hey @SamA , We hear that speed is crucial for engagement. So at Groq™ we created the LPU™ Inference Engine. Wherever you land, whatever you build, it will be 10x #BetterOnGroq . Let's talk. 🤙 #AI #GenAI #LLM #GroqOn
5
11
26
@JonathanRoss321
Jonathan Ross
6 months
#Groq ™ running next to @elonmusk 's model. Do you #grok it?
2
5
23
@JonathanRoss321
Jonathan Ross
13 days
@bensima asked what @GroqInc should sell in its merch store. Any coffee brands interested in some co-marketing? Or @redbull ?
Tweet media one
7
2
26
@JonathanRoss321
Jonathan Ross
5 months
@ramahluwalia Correction - @GroqInc doesn't compete with @OpenAI , we uniquely provide a service to affordably run very, very large language models ultra fast. Anyone can run on Groq and get a massive speed boost. #GroqOn 🚀🚀🚀
4
8
23
@JonathanRoss321
Jonathan Ross
6 months
@ylecun @tegmark @RishiSunak @vonderleyen Meta's Open Source models saved Groq. 2 years ago we got a panicked call from a CEO deploying the then-largest LLM service; he couldn't get any GPUs. They couldn't share the models with us, so for 2 years... nothing. Then Llama. Now we have insane demand for LPU chips. Thank you!
0
8
22
@JonathanRoss321
Jonathan Ross
6 months
@elonmusk @icreatelife Hey @ElonMusk , Groq®'s compute for LLMs has increased more than 8x in the last 30 days. LPU™ Inference Engine for the win! That's 800% more @GroqInc vs. 44% more @xAI per month.🤔Want to keep up? How about running xAI on #OnGroq ?
1
11
21
@JonathanRoss321
Jonathan Ross
3 months
🫶
@rauchg
Guillermo Rauch
3 months
Can't get over how fast @groqinc is Demo:
15
12
361
2
3
22
@JonathanRoss321
Jonathan Ross
3 months
Try #GroqSpeed for yourself at . A question we at Groq used to get before we had LLMs running on our #LPU ™ Inference Engine was why #LLMs need to run faster than reading speed. No one asks that anymore.
7
4
20
@JonathanRoss321
Jonathan Ross
6 months
@pmddomingos We love Nvidia, for training models. For inference, well, we have another offering from @GroqInc .
1
4
16
@JonathanRoss321
Jonathan Ross
29 days
Clever new benchmark from Artificial Analysis.
@ArtificialAnlys
Artificial Analysis
29 days
Per Token pricing is not always 🍏-to-🍏, when normalized provider pricing increases up to +19% When tokenizing the same text, different tokenizers output a different number of tokens and therefore have different ‘tokenizer efficiency’. Because we pay per token, this means we…
Tweet media one
3
2
20
0
1
19
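The tokenizer-efficiency normalization Artificial Analysis describes can be sketched with hypothetical numbers (the token counts and prices below are made up for illustration; only the "+19% tokens" figure comes from the tweet above):

```python
# Hypothetical illustration: different tokenizers split the same text into
# different numbers of tokens, so price-per-token alone is not apples-to-apples.
text_token_counts = {"provider_a": 1000, "provider_b": 1190}  # same input text
price_per_million = {"provider_a": 0.50, "provider_b": 0.45}  # USD per 1M tokens

# Effective cost to process the identical text on each provider:
for p in sorted(text_token_counts):
    cost = text_token_counts[p] / 1e6 * price_per_million[p]
    print(p, cost)
# provider_b's nominally cheaper rate ends up costing more once its less
# efficient tokenizer (+19% tokens on the same text) is accounted for.
```

Under these assumed numbers, provider_a costs $0.0005 for the text while provider_b costs $0.0005355, inverting the ranking implied by the headline per-token price.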
@JonathanRoss321
Jonathan Ross
3 months
@mattduhon
Matt Duhon
3 months
@GroqInc Thanks for access! The speed is insane! Note how similar the calls are. Makes it super easy to get going..
Tweet media one
Tweet media two
3
2
31
0
1
18
@JonathanRoss321
Jonathan Ross
15 days
Tweet media one
0
1
17
@JonathanRoss321
Jonathan Ross
3 months
@slitdrip @jiayq @GroqInc No need to imagine, we did the comparison 😉
2
2
15
@JonathanRoss321
Jonathan Ross
3 months
@felix_red_panda We're comfortable at these prices. 😉
Tweet media one
1
0
16
@JonathanRoss321
Jonathan Ross
2 months
@canipeonyou @nvidia Nvidia's been talking to a lot of CFOs recently - explaining why they're still economical for inference. We believe in show rather than tell, so let's count how many LPUs get deployed by the end of 2025. That'll answer that question 😁
0
0
15
@JonathanRoss321
Jonathan Ross
3 months
@GroqInc 's #LPU just showed superior performance over GPUs. How do you think the biggest GPU vendor will respond?
Bring it - real perf gain
97
Pay Analysts to sow FUD
64
Threaten shipping delays
13
Say they're faster anyway
97
12
3
14
@JonathanRoss321
Jonathan Ross
2 months
We now have definitive proof that fast tokens are worth more than slow tokens.
@thegarrettscott
Garrett Scott 🕳
3 months
@GroqInc @StonkyOli I am once again offering my first born for access
Tweet media one
3
0
17
3
2
14
@JonathanRoss321
Jonathan Ross
2 months
Love Brussels. It may be Medieval Portland.
Tweet media one
1
0
12
@JonathanRoss321
Jonathan Ross
5 months
@ramahluwalia and I start getting philosophical around timestamp 41:17. What's next after the Turing test? What is insight and can AI have it? Can an AI experience a Eureka! moment? What is sentience, and are we even sentient?
@ramahluwalia
Ram Ahluwalia CFA, Lumida
5 months
AI Chip Wars: LPUs, TPUs & GPUs with @JonathanRoss321 , Founder @GroqInc
10
24
48
4
3
11
@JonathanRoss321
Jonathan Ross
5 months
@samlakig @satnam6502 Your math seems to be accurate :) The weights for Mixtral are smaller, the KV cache is a little larger (32K), but overall, close. The reason we're so low power is that we keep the model in the on-chip memory (SRAM). HBM requires almost 10x the energy per token at this latency.
2
1
9
@JonathanRoss321
Jonathan Ross
6 months
@ramahluwalia Hey @satyanadella , Have you considered @GroqInc 's LPU™ Inference Engine? ;) #GroqSpeed
1
4
9
@JonathanRoss321
Jonathan Ross
4 months
We love the work @lmsysorg is doing! Paged Attention, Chatbot Arena, LMSYS-Chat-1M. Very cool. I'm really impressed with their performance on #llama2 70B in the video. Of course, LPU vs. GPU, I think we all know how this video is going to end. #GroqSpeed
2
1
10
@JonathanRoss321
Jonathan Ross
3 months
@tomjaguarpaw @Russell_AGI @GroqInc We have plenty of capacity: As for our next-gen chip (in addition to those 1 million LPUs), we're the lead customer for Samsung's 4nm plant in Taylor, Texas, which is what Tom Ellis shared. We're going very, very high volume 😁
2
1
10
@JonathanRoss321
Jonathan Ross
6 months
Why AI is going to be good for humanity, what hardware allows Large Language Models to run 10x faster, and why the semiconductor shortage won't stop AI. Now stop scrolling, and learn why Elon Musk should *Watch This* video. via @YouTube @theCUBE @GroqInc #GroqOn
2
4
8
@JonathanRoss321
Jonathan Ross
5 months
@samlakig @satnam6502 We measure in joules per token. Under full load we're expecting it to get down to less than 3J / Token
1
0
9
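Joules per token converts directly into sustained power draw once you fix a throughput. As a rough hypothetical check, pairing the sub-3 J/token figure above with the 300 tokens/second throughput quoted elsewhere on this page (for a different model, so this is an assumption, not a measurement):

```python
# Energy per token x tokens per second = joules per second = watts.
joules_per_token = 3.0      # upper bound quoted in the tweet above
tokens_per_second = 300.0   # throughput figure quoted elsewhere on this page
watts = joules_per_token * tokens_per_second
print(watts)  # 900.0 W sustained under these assumed numbers
```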
@JonathanRoss321
Jonathan Ross
5 months
Love the wow factor! #GroqSpeed
@GroqInc
Groq Inc
5 months
We love when #Groqsters show people our demo out in the wild. This was last night at Schiphol airport and it's really fun to see the reaction of end-users to #groqspeed . Try it yourself over at and reach out for API access requests.
0
2
11
0
2
8
@JonathanRoss321
Jonathan Ross
3 months
4
2
7
@JonathanRoss321
Jonathan Ross
6 months
Please no one name any models grizzly or honey badger.
@GroqInc
Groq Inc
6 months
Will you see the Groq llama again in real-time today at #SC23 ?! If you spot us, come say hi and tag us in your pics with #GroqOn . Then be sure to go see Llama-2 70B running at 300 tokens per second at the Groq booth, #1681 !
Tweet media one
1
3
25
0
1
9
@JonathanRoss321
Jonathan Ross
3 months
@sharongoldman @GroqInc @mattshumer_ @nvidia Thank you @sharongoldman . I asked the LLM to summarize, but it said the content was best read in full 😉 That said, trying anyway: #LPU == Insanely Fast Inference
6
1
7
@JonathanRoss321
Jonathan Ross
2 months
0
0
8
@JonathanRoss321
Jonathan Ross
5 months
@NaveenGRao @chamath Thanks! We're huge fans of what you're doing over at Mosaic/Databricks :)
0
0
7
@JonathanRoss321
Jonathan Ross
6 months
@ashleevance We'll let you judge #GroqOn v. #grok :)
11
1
7
@JonathanRoss321
Jonathan Ross
4 months
@GroqInc 's LPU™ chips run LLMs at crazy speeds, e.g. 70B parameter models at 300 tokens / second. How fast do you think we can run a 7B parameter model like @MistralAI 's 7B model? Watch below to find out 👇 😂
2
1
7
@JonathanRoss321
Jonathan Ross
5 months
@TonyHawkersCRC -- We built our own chips - that's how we're faster. The video you saw was Meta's 70B parameter model running on our LPU™ Inference Engine, rather than on GPUs.
2
3
5
@JonathanRoss321
Jonathan Ross
3 months
@Samjhudson83 @GroqInc That's Llama2, Mixtral is more affordable.
2
1
7
@JonathanRoss321
Jonathan Ross
3 months
@JoshMiller656 @IntuitMachine It was the only picture of a GPU we could find that hadn't been photoshopped to look more regular than it is 😉
1
0
6
@JonathanRoss321
Jonathan Ross
6 months
Hey @SamA , glad to see you back at @OpenAI . As you've said, technology magnifies differences, and the difference here👇 is clear. #BetterOnGroq When the dust settles, let's build something together. #GroqOn
1
3
7
@JonathanRoss321
Jonathan Ross
6 months
Hey @ElonMusk , we brought a live llama to #SC23 . Her name is Bunny. She's sassy – like your bot. 🦙 #Groq ™ chips are super fast at running Llamas... how about a race between Bunny and the #Cybertruck ? #grok , #GroqOn , #Meta Watch this:
1
2
7
@JonathanRoss321
Jonathan Ross
19 days
@KhazzanYassine @GroqInc @sundeep @GavinSherry Very well done. The results are speedy and good.
0
0
5