
@fclc
@FelixCLC_
Followers
2K
Following
73K
Media
1K
Statuses
19K
Perf and ASM nerd drifted off to ML side quests. Care about clean specifications, HPC, Floating Point and the BLAS. Eng @tenstorrent, prev Weather @Canada
Silly goose migrating all over
Joined August 2011
Hey folks; going quiet on this account for a little bit whilst I reset. Best place to reach me is Discord or Signal/WhatsApp.
5
0
44
THE FIRST SCIENCE GALAXY LIVES
1
0
17
I'm slowly converging on the view that the biggest problem in standards for hardware and software is thinking the same tooling should serve embedded devices as application-tier devices. You see this in RISC-V, you see it in the C/C++ standard, you see it in operating system tooling.
9
4
81
Very fun looking back on the slides (available here: https://t.co/CMSOrmNZMF) and thinking back on the context that talk was written in, namely the then-existing proposed AVX10.N/M spec, where M could be {128, 256, 512}.
github.com
Holds slides and recordings of any talks I've done since 2023 - FCLC/Talks
1
2
8
Hard to believe, but 2 years ago to the day I gave my @easy_build TechTalk on #AVX10 and the history of SIMD on x86.
1
2
25
Happy thanksgiving all! Wishing you and your families a lovely Autumn filled with love and contentment!
0
0
19
In the meantime, highly recommend this thesis that came across my feed: The current breadth of #HPC tools, approaches and more:
1
1
13
nerd snipe: PLEASE go learn about how cable modems/DOCSIS/etc. work, it's the foundation of broadband internet and it's so interesting. you'd be surprised how much physics and math go into it, it is not just EE at all
16
13
425
Downside of being locked in is no longer seeing the forest for the trees. You lose sight of the work that matters.
3
0
16
The real open secret is that leading SOTA models are well into the 10T+ range and have been for a while (6+ months)
0
0
4
I think this is the closest anyone has gotten to admitting that OpenAI is doing multi-T models? Granted, it's the biggest open secret in the industry, but fun to see confirmed nonetheless.
Just announced: @Microsoft delivers the world's first at-scale NVIDIA GB300 NVL72 production cluster, providing the supercomputing engine needed for OpenAI to train multitrillion-parameter models in days, not weeks. Learn more: https://t.co/ndcp1YSSLC
3
0
25
Hosting a @Reddit AMA today from 3-4 Eastern Time. Spacewalking to aliens to writing thrillers, I'm ready - ask me anything! https://t.co/0BFsfFuobx
9
32
296