bubble boi
@bubbleboi
Followers: 25K · Following: 90K · Media: 751 · Statuses: 29K
Brand Ambassador @thru_xyz. Consigliere @UntoLabs.
New York, NY
Joined June 2017
Doubled lab space, and the team will be 75 engineers strong. Amazing progress and lots to do. Recently added hiring for production, Circuit Design / PDK, and open-source chip design software.
We make it ourselves, and we make the machine to make it. Want to join? Hiring EE, ME, software, and process engineers. We made the machine that puts the atoms there; it’s laptop-sized and draws 90 watts. Now we’re automating and building 10. Zoom in, they’re not pixels.
Replies: 7 · Reposts: 6 · Likes: 187
Imagine being a $5T company and approving a post made of pure distilled copium like this
We’re delighted by Google’s success — they’ve made great advances in AI and we continue to supply to Google. NVIDIA is a generation ahead of the industry — it’s the only platform that runs every AI model and does it everywhere computing is done. NVIDIA offers greater…
Replies: 15 · Reposts: 10 · Likes: 230
A 5 nm tape-out is about $50M, definitely doable for a Series A; 7 nm is about half that. If you’re good at designing chips you can probably get away with it at 7 nm, but having access to IP will be an issue.
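A quick back-of-envelope check of the arithmetic above, as a minimal Python sketch. The tape-out costs are the tweet’s own estimates; the Series A round size is a hypothetical assumption chosen only for illustration.

```python
# Figures from the tweet above; the round size is a hypothetical assumption.
TAPEOUT_5NM = 50e6             # ~$50M for a 5 nm tape-out (tweet's estimate)
TAPEOUT_7NM = TAPEOUT_5NM / 2  # "about half that" at 7 nm

series_a = 60e6  # hypothetical Series A, for illustration only

for node, cost in [("5 nm", TAPEOUT_5NM), ("7 nm", TAPEOUT_7NM)]:
    left = series_a - cost
    print(f"{node}: tape-out ${cost/1e6:.0f}M leaves "
          f"${left/1e6:.0f}M of a ${series_a/1e6:.0f}M round")
```

Which matches the tweet’s point: 5 nm eats nearly the whole round, while 7 nm leaves real runway (modulo the IP-licensing issue it flags).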
@bubbleboi Just so we're clear, we're talking about an industry that requires billions in capital expenditures, with no new (non-big-tech) entrant in decades. Can't wait to see the $20B seed round for this vaunted startup to build their fab!
Replies: 1 · Reposts: 0 · Likes: 3
You can have it designed & emulated in 3 months. I didn’t say tape out, I said “make.”
@bubbleboi You’re an absolute joke. Never actually been part of a tape-out and bring-up, have you? Not one TPU or GPU engineer would ever say something so asinine. 3 months? 😂
Replies: 3 · Reposts: 0 · Likes: 10
Agree with this, block floating point format is a must-have for training and inference. There are a few other design decisions I would make that would 8-12x throughput while keeping TDP low.
TPU v7 costs around $15K for Google. Capital costs for Google are 1/3rd on TPUs compared to Nvidia hardware. Interconnect bandwidth 600 GB/s (ICI), 1000W, 4.6 PFLOPS fp8, 192GB HBM3E. Google can squeeze out higher MFU on TPUs due to JAX/XLA compared to Nvidia on GPUs. (U won't…
Replies: 1 · Reposts: 0 · Likes: 9
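For readers who haven’t met the block floating point (BFP) format mentioned two tweets up: a block of values shares a single exponent, so each element stores only a small integer mantissa, which keeps matmul-heavy training and inference hardware cheap. Below is a minimal, illustrative sketch in Python; the block size and mantissa width are assumptions, not any particular chip’s format.

```python
import numpy as np

def bfp_quantize(x, block_size=16, mantissa_bits=8):
    """Quantize a 1-D array to block floating point: one shared exponent
    per block of `block_size` values, integer mantissas per element."""
    x = x.astype(np.float64)
    pad = (-len(x)) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)

    # Shared exponent: enough to cover the largest magnitude in the block.
    max_abs = np.abs(blocks).max(axis=1, keepdims=True)
    exp = np.ceil(np.log2(np.maximum(max_abs, 1e-38)))

    # Scale so mantissas fit in [-2^(m-1), 2^(m-1)-1], then round to integers.
    scale = 2.0 ** (exp - (mantissa_bits - 1))
    mant = np.clip(np.round(blocks / scale),
                   -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1)
    return (mant * scale).reshape(-1)[:len(x)]  # dequantized values

x = np.random.randn(64).astype(np.float32)
err = np.abs(bfp_quantize(x) - x).max()
print(f"max abs error with 8-bit mantissas: {err:.5f}")
```

Taking the quoted TPU v7 figures at face value, $15K for 4.6 PFLOPS of fp8 works out to roughly $3.3K per PFLOP, which is the kind of ratio behind the one-third-the-capital-cost claim.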
@bubbleboi nobody knows how to make a computer anymore. except bubble
Replies: 0 · Reposts: 1 · Likes: 3
You can make a TPU in 3 months if you’re good. The original TPU architecture was created by Norm Jouppi in less than 12 months. I will reiterate a common theme on this account, which is that people don’t know how computers work anymore lol.
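For context on the “original TPU architecture” reference: the first TPU’s core was a systolic matrix-multiply array, a grid of multiply-accumulate cells that weights sit in while activations and partial sums flow through. Here is a minimal Python emulation of that dataflow, an illustration only, not Jouppi’s actual design.

```python
import numpy as np

def systolic_matmul(A, B):
    """Emulate a weight-stationary systolic array computing A @ B.
    Cell (i, j) holds weight B[i, j]; each cycle, one slice of
    activations streams through and partial sums accumulate."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    acc = np.zeros((n, m))
    for cycle in range(k):
        # Every cell multiplies the activation passing it by its held
        # weight and adds the product to the running partial sum.
        acc += np.outer(A[:, cycle], B[cycle, :])
    return acc

A = np.random.randn(4, 3)
B = np.random.randn(3, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
```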
@bubbleboi a startup will “just make” a tpu
Replies: 20 · Reposts: 9 · Likes: 334
Surprised I get this much attention on here. Besides being a buffoon I don’t have much to offer.
Replies: 6 · Reposts: 1 · Likes: 14
@bubbleboi Yes. And the winner of the LLM race will be a company currently raising Series A money that has worked out how to do inference without burning a whole city’s worth of energy to create a funny cat video. Brute-force LLMs won’t succeed. LLMs built with intuition will.
Replies: 1 · Reposts: 2 · Likes: 16
Felt this
Replies: 1 · Reposts: 0 · Likes: 2
Something NVIDIA & Google do better than anyone else is software-hardware-system co-design, and not just optimizing hardware for current model architectures, but predicting future ones. Back in early 2022, when NVIDIA started the design process for NVL72, MoE (Mixture of Experts)…
Replies: 15 · Reposts: 12 · Likes: 236
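Since the tweet above leans on MoE (Mixture of Experts): the idea is to replace one large feed-forward layer with many smaller “experts” plus a learned router that sends each token to only its top-k of them, so per-token compute stays flat while total parameters grow. A toy routing sketch in Python; the sizes, the tanh “experts,” and the router are illustrative assumptions.

```python
import numpy as np

def moe_forward(x, W_router, experts, k=2):
    """Route each token to its top-k experts and mix their outputs
    by softmax-normalized router scores. Toy dense version."""
    logits = x @ W_router                      # (tokens, n_experts)
    topk = np.argsort(logits, axis=1)[:, -k:]  # indices of the k best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        scores = logits[t, topk[t]]
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()               # softmax over the chosen k
        for w, e in zip(weights, topk[t]):
            out[t] += w * np.tanh(x[t] @ experts[e])  # tiny one-layer "expert"
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens = 8, 4, 5
x = rng.normal(size=(tokens, d))
W_router = rng.normal(size=(d, n_experts))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
print(moe_forward(x, W_router, experts).shape)  # (5, 8)
```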
$500 to whoever can get me a bag of HBM potato chips from SK Hynix.
I’m considering selling my SK hynix shares. SK hynix just released something called ‘HBM Chips’ at convenience stores. With things already turning out like this for HBM4, is this really the time for them to be doing something like that?
Replies: 3 · Reposts: 0 · Likes: 16
Ilya popped the AI bubble. It’s over.
“From 2012 to 2020, it was the age of research. From 2020 to 2025, it was the age of scaling. Is the belief that if you just 100x the scale, everything would be transformed? I don't think that's true. It's back to the age of research again, just with big computers.” @ilyasut
Replies: 17 · Reposts: 11 · Likes: 265