Emad Barsoum (@EmadBarsoumPi)
Corporate Vice President, AI at AMD. Joined October 2014.
Followers: 537 · Following: 840 · Media: 12 · Statuses: 726
Emad Barsoum (@EmadBarsoumPi) · 10 hours
Proud of the team and outstanding work!!! Extending the Instella family of models with a text-to-image model trained from scratch on @AMD MI300X; fully open (dataset, training code, checkpoints, and a detailed blog) to help reproducibility and push research forward. Not only that, …
Emad Barsoum (@EmadBarsoumPi) · 2 days
Looking forward to this collaboration!!!
AI at AMD (@AIatAMD) · 5 days
We’re thrilled to collaborate with the @HazyResearch lab at @StanfordAILab, led by Chris Ré, to power Minions, their cutting-edge agentic framework tackling the cost-accuracy tradeoff in modern AI systems. This innovation is enabled on AMD Ryzen AI, thanks to seamless integration with …
[image attached]
Emad Barsoum (@EmadBarsoumPi) · 6 days
RT @realSharonZhou: The story of hybrid architectures is honestly fascinating! I've been diving deep into why Transformers became the defau…
Emad Barsoum (@EmadBarsoumPi) · 6 days
Fastest on-device foundation models, outstanding work by @ramin_m_h and the @LiquidAI_ team!!! With day-0 support on AMD Ryzen AI. @AIatAMD
AI at AMD (@AIatAMD) · 6 days
LFM2 (Liquid Foundation Model 2) from Liquid AI just dropped today with 3 model weights: 350M, 700M, and 1.2B. LFM2 is specifically designed to provide the fastest on-device gen-AI experience in the industry. What's more, LFM2 has been optimized on AMD Ryzen AI on day 0 and works…
Emad Barsoum (@EmadBarsoumPi) · 6 days
Want to train a text-to-image diffusion model from scratch in less than a day? With deferred patch masking, introduced by MicroDiT to reduce sequence length, a high-compression latent space introduced by DC-AE that achieves a 32x compression ratio, and improved representation alignment…
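To make the deferred patch masking mentioned above concrete: the idea is that only a fraction of the latent patch tokens reaches the expensive transformer backbone, so the sequence length it processes shrinks. The sketch below is illustrative only (made-up shapes, keep ratio, and function names, not the Instella or MicroDiT code); in MicroDiT the masking is "deferred" in the sense that a lightweight patch mixer roughly sees all patches before the drop, which is omitted here.

```python
import torch

def mask_patches(tokens: torch.Tensor, keep_ratio: float = 0.25):
    """Randomly keep `keep_ratio` of the patch tokens in each sample.

    tokens: (batch, seq_len, dim) latent patch embeddings.
    Returns the kept tokens plus the indices needed to scatter them back later.
    """
    b, n, d = tokens.shape
    n_keep = max(1, int(n * keep_ratio))
    scores = torch.rand(b, n, device=tokens.device)      # random score per patch
    keep_idx = scores.topk(n_keep, dim=1).indices        # (b, n_keep)
    kept = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, d))
    return kept, keep_idx

# e.g. a 32x32 latent grid from a heavily compressing autoencoder -> 1024 tokens,
# of which only 25% reach the transformer backbone during most of training.
x = torch.randn(8, 1024, 768)
kept, idx = mask_patches(x, keep_ratio=0.25)
print(kept.shape)  # torch.Size([8, 256, 768])
```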
Emad Barsoum (@EmadBarsoumPi) · 7 days
RT @Vultr: ⚕ Run your healthcare AI stack – your way. 🤖 Migrate to Vultr Cloud + @AMD Instinct™ GPUs with support from FluidCloud. Full d…
Emad Barsoum (@EmadBarsoumPi) · 8 days
Looking forward to seeing what @__tinygrad__ builds!!!
the tiny corp (@__tinygrad__) · 9 days
Our MI350X machines are here, thanks @AMD! They are just two racks down from their MI300X friends.
[image attached]
Emad Barsoum (@EmadBarsoumPi) · 8 days
Great to see that; hope more ML frameworks add no-GIL Python support!!! Great job @vllm_project
vLLM (@vllm_project) · 8 days
vLLM runs on free-threaded Python! A group of engineers from @Meta’s Python runtime language team has shown that it’s possible to run vLLM on the nogil distribution of Python. We’re incredibly excited to embrace this future technique and be early adopters 😍
[image attached]
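For readers unfamiliar with free-threaded ("no-GIL") Python mentioned above: the point is that CPU-bound Python threads can actually run in parallel instead of serializing on the interpreter lock. Below is a toy illustration, not vLLM code; it runs on any Python, but only shows a multi-thread speedup on a CPython 3.13+ free-threaded build.

```python
import sysconfig
import threading
import time

def busy(n: int = 5_000_000) -> int:
    # Pure-Python CPU-bound work: serializes under the GIL,
    # runs in parallel on a free-threaded build.
    total = 0
    for i in range(n):
        total += i
    return total

# 1 on a free-threaded build, 0/None on a standard GIL build.
print("free-threaded build:", sysconfig.get_config_var("Py_GIL_DISABLED"))

start = time.perf_counter()
threads = [threading.Thread(target=busy) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"4 CPU-bound threads took {time.perf_counter() - start:.2f}s")
```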
Emad Barsoum (@EmadBarsoumPi) · 8 days
Our paper `X-EcoMLA: Upcycling Pre-Trained Attention into MLA for Efficient and Extreme KV Compression` got accepted to COLM 2025. Congratulations to Guihong Li, Mehdi Rezagholizadeh, Mingyu Yang, and @vikramappia. Inspired by DeepSeek MLA, can we use MLA on an already…
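For context on the MLA idea the paper builds on: instead of caching full per-head keys and values, the hidden state is down-projected into a small shared latent that is cached, then up-projected back to K and V at attention time, shrinking the KV cache. The sketch below is a rough illustration with made-up sizes and module names, not the X-EcoMLA method itself; the paper's contribution of upcycling an existing model's pre-trained attention weights into this form is not shown here.

```python
import torch
import torch.nn as nn

class LatentKV(nn.Module):
    """MLA-style KV compression sketch: cache a low-rank latent instead of full K/V."""

    def __init__(self, d_model: int = 4096, d_latent: int = 512, n_heads: int = 32):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.down = nn.Linear(d_model, d_latent, bias=False)  # compress hidden state
        self.up_k = nn.Linear(d_latent, d_model, bias=False)  # reconstruct keys
        self.up_v = nn.Linear(d_latent, d_model, bias=False)  # reconstruct values

    def forward(self, h: torch.Tensor):
        # h: (batch, seq, d_model)
        b, s, _ = h.shape
        c_kv = self.down(h)  # (batch, seq, d_latent): this is what gets cached
        k = self.up_k(c_kv).view(b, s, self.n_heads, self.d_head)
        v = self.up_v(c_kv).view(b, s, self.n_heads, self.d_head)
        return c_kv, k, v

h = torch.randn(2, 16, 4096)
c_kv, k, v = LatentKV()(h)
# Caching c_kv (512 values per token) instead of full K+V (2 x 4096) is ~16x smaller here.
print(c_kv.shape, k.shape)
```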
Emad Barsoum (@EmadBarsoumPi) · 9 days
Awesome to see, @MangoBoost_Inc!!!
MangoBoost (@MangoBoost_Inc) · 15 days
🔥 MangoBoost LLMBoost™ cut Llama2-70B training time in half in @Dell's test with AMD MI300X. Multi-node AI training, made easy. #Dell #AMD #GPU #LLM #AI #GenAI #MLPerf #Training
Emad Barsoum (@EmadBarsoumPi) · 14 days
RT @Zuby_Tech: AMD: PlayStation 6 will "Push the boundaries of real time game graphics." #PlayStation6 #PS6 #ProjectAmethyst #Amethyst #…
Emad Barsoum (@EmadBarsoumPi) · 14 days
RT @swayaminsync: I am just quoting here: "Tomorrow US time, launching AMD GPU support, single binary which runs on CPU, Nvidia GPUs and A…
Emad Barsoum (@EmadBarsoumPi) · 14 days
RT @rohanpaul_ai: Zero Cost Checkpoint (ZCC). Conventional checkpointing slows large jobs because saving model state steals compute and ban…
Emad Barsoum (@EmadBarsoumPi) · 14 days
RT @AnushElangovan: ZCC + torchft is getting to a first-principles approach on how we would do fault-tolerant tra…
Emad Barsoum (@EmadBarsoumPi) · 15 days
RT @AnushElangovan: Don't forget AMD. You just have to try it. And flux-fast go brrr on MI300X with AITER kernels. Thanks @adyaman for…
Emad Barsoum (@EmadBarsoumPi) · 16 days
Looking forward to seeing what you build, great work @_Alex_Borghi_!!!
Alexandre Borghi (@_Alex_Borghi_) · 16 days
Finally had time to try the new AMD Developer Cloud. Running GRPO on 8 MI300X worked like a charm. @AMD @AIatAMD
[image attached]
Emad Barsoum (@EmadBarsoumPi) · 16 days
A great fireside chat, learning about the inception of the Transformer architecture from its co-creator!!! Thanks @ashVaswani for the great talk and @KarimBhalwani for moderating the chat. @AIatAMD
Karim (@KarimBhalwani) · 16 days
Wonderful moderating a fireside chat with @ashVaswani, the co-creator of the Transformer, at @AMD's Advancing AI event. Grateful that @AIatAMD gets to learn from and partner with @essential_ai every day to advance frontier infrastructure and open science. We've had a lot of…
[image attached]
Emad Barsoum (@EmadBarsoumPi) · 16 days
RT @gpusteve: @AnushElangovan AMD Forge. bc - u know - ROCm.
Emad Barsoum (@EmadBarsoumPi) · 16 days
Agree!!!
@PPLSOPTIMISMCEO · 18 days
AMD's AI software, ROCm, will inevitably outgrow Nvidia's CUDA because it is open source rather than proprietary. There are limits to what can be done exclusively in-house; when you isolate yourself, you limit development potential, especially when larger communities join.
Emad Barsoum (@EmadBarsoumPi) · 16 days
Awesome to see, great work @ZDi____!!!
ZD1908 (@ZDi____) · 19 days
More AMD yaps. This time I test out FP8 training with ROCm's version of Transformer Engine. And it just works. Is it good? Yes. Speeds up? Yes, _but_. Link in replies.
[image attached]