Ramin Hasani
@ramin_m_h
Followers
5K
Following
7K
Media
115
Statuses
2K
building @liquidai
New York, USA
Joined July 2012
Over the last 90 days we shipped hard at @LiquidAI_. 🚢 🐘 LFM2 tiny instances: the fastest on-device models at 350M, 700M, and 1.2B, with a flagship new architecture. 🐸 LEAP: our device AI platform, from use case to model deployment on phones and laptops in 5 minutes. 👁️ LFM2 Vision language
7
16
119
Last week we (@LiquidAI) announced Liquid Labs! 🧠 What is Liquid Labs? Here's your answer: Read more: https://t.co/ez46Zh1b62 Video credit: @mlech26l @ramin_m_h @loo_noel @mihirbafna14 @961014dltkdg
1
7
45
Today we start rolling out Sidekick Pulse. We run the largest HSTU model in the world together with the heaviest LLMs overnight, analyze the customer’s entire business, find ways to improve it, and notify them of the changes needed. You should see the things it discovers...
4
5
118
Watch @LiquidAI_ CEO @ramin_m_h and our Head of Global Marketing @maddsey talk edge AI, foundation models, and why execution matters. https://t.co/KrEHcV4F6C
2
4
15
Grok Play: @nacloos and I attended the xAI hackathon (500+ participants) to work on Grok Play, a platform for humans to compete against and with AI agents in games to improve LLMs' ability to generate game-related code. We ended up winning 1st place in one of the tracks. 🚀
Grok Play: Enjoy and create multiplayer games where your Grok Owl can climb the leaderboard by playing against you, your friends, your friends' Owls, and itself. @nacloos @961014dltkdg
6
4
18
An incredible NeurIPS, always better together! 🤝
Here's our @ShopifyEng NeurIPS 2025 wrapped: ✅ Presented our work on Recommendation Foundation Models in partnership with @NVIDIAAI and @LiquidAI ✅ Announced Tangle, our open-source ML experimentation platform ✅ 15 booth presentations on Sidekick, Tangle, our global catalogue,
0
2
26
If you spent the week on Saturn and just got back to Earth, this is the technical report by @liquidai that explains how efficient AI models are built. You are welcome. https://t.co/g8vHc7eqjh
5
18
106
@AlgomashAI @liquidai Aha, yes, we've seen the same phenomenon at a lower parameter scale, with LFM2-8B-A1B beating dense models of 3-4B parameters. You can check the benchmarks here https://t.co/iX2G7i12X4 and in more detail in the LFM2 report that the Liquid team released last week
liquid.ai
We are releasing LFM2-8B-A1B, our first on-device Mixture-of-Experts (MoE) with 8.3B total parameters and 1.5B active parameters per token. By activating only a sparse subset of parameters during...
0
1
1
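For readers unfamiliar with how a sparse Mixture-of-Experts keeps the active parameter count per token far below the total, here is a minimal, hypothetical PyTorch sketch of top-k expert routing. It is not the LFM2-8B-A1B implementation; the `SparseMoELayer` name, expert count, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Toy Mixture-of-Experts feed-forward layer with top-k routing.

    Only `top_k` experts run per token, so the active parameter count
    per token is a small fraction of the total parameter count.
    """

    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (batch, seq, d_model)
        scores = self.router(x)                        # (B, S, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1) # keep the k best experts per token
        weights = F.softmax(weights, dim=-1)           # normalize over the chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

# Usage: route a batch of token embeddings through the sparse layer.
layer = SparseMoELayer()
y = layer(torch.randn(2, 16, 512))
print(y.shape)  # torch.Size([2, 16, 512])
```

Because only `top_k` of the `num_experts` feed-forward blocks execute for a given token, a model with a large total parameter count touches only a small active subset per token, which is the general idea behind the "8.3B total, 1.5B active" framing above.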
Today we introduce Liquid Labs, our advanced research unit, with the goal of understanding and building efficient and adaptive intelligence systems. Liquid Labs consolidates our existing research efforts at Liquid across the architecture of foundation models, multimodality,
18
34
241
The @LiquidAI LFM2 Tech Report is now live! It's a 51-page behemoth 🐘 with the full recipe for how we pre-, mid-, and post-trained LFM2, across all modalities: text, vision, and audio 🚀 📄 https://t.co/Fm90fRdLTa 🤗
huggingface.co
The LFM2 Tech Report is now live on arXiv! We share everything from our novel hardware-in-the-loop architecture design, pre-training, and knowledge distillation, to the post-training recipe for small models. > 🤗LFM2 class of models has over 3.3M downloads > ⚛️LFM2 nanos from
2
6
23
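The report above mentions knowledge distillation as part of training small models. As background only, here is a minimal sketch of the textbook distillation objective (temperature-softened teacher targets mixed with the hard-label loss); the temperature, mixing weight, and `distillation_loss` name are illustrative assumptions, not the recipe from the report.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Weighted mix of cross-entropy on ground-truth labels and KL divergence
    between temperature-softened teacher and student distributions."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients stay comparable across temperatures
    return alpha * ce + (1 - alpha) * kl

# Usage with dummy logits over a 100-token vocabulary.
student = torch.randn(8, 100, requires_grad=True)
teacher = torch.randn(8, 100)
labels = torch.randint(0, 100, (8,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
```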
Thrilled to work on LFM2 right after my PhD. Proud of the team’s push on architecture + multimodal extensions. The tech report is out, hope you enjoy the read... And yes, there’s more coming. Stay tuned!
2
1
16
MIT offshoot Liquid AI releases blueprint for enterprise-grade small-model training https://t.co/jcdlKRgsZ6
venturebeat.com
0
8
14
Glad to finally get the LFM2 Technical Report out! You will find interesting details on our architecture, pre-training and post-training procedures, and multimodal and retrieval variants. https://t.co/d6OEhxlZqk
arxiv.org
We present LFM2, a family of Liquid Foundation Models designed for efficient on-device deployment and strong task capabilities. Using hardware-in-the-loop architecture search under edge latency...
0
2
17
LFM2 Technical Report dropped! 🥳 It provides details about the LFM2 architecture, pre-training, post-training, and the vision, audio, and ColBERT models. It's 51 pages long, have fun!
7
37
168
The LFM2 Tech Report is now live on arXiv! We share everything from our novel hardware-in-the-loop architecture design, pre-training, and knowledge distillation, to the post-training recipe for small models. > 🤗LFM2 class of models has over 3.3M downloads > ⚛️LFM2 nanos from
6
50
200
Now you have all the ingredients to build a powerful and efficient on-device foundation model! Enjoy!
3
1
29
This year at NeurIPS we have around 25 Liquid scientists joining the event with papers, presentations, and a big announcement. Find us at the exhibition hall to learn more and to join our social event. 🌊
1
9
55
What if BabyAGI didn't need the cloud? We took @yoheinakajima's BabyAGI loop and ran it locally on an iPhone over a small language model (LFM2-350M). No OpenAI key. No servers. Just the @RunAnywhereAI SDK + @liquidai LFM2-350M running everything fully on-device. #edgeai
16
16
70
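As a rough illustration of what an on-device agent loop like the one above can look like, here is a minimal, hypothetical sketch of a BabyAGI-style task queue driven by a local model. The `local_generate` stub stands in for whatever call your on-device runtime exposes; it is not the RunAnywhereAI SDK API, and the prompts and defaults are illustrative assumptions.

```python
from collections import deque

def local_generate(prompt: str) -> str:
    """Stand-in for an on-device small-model call; wire this to your
    local inference runtime (the real SDK call will differ)."""
    return "placeholder model output"

def babyagi_loop(objective: str, max_steps: int = 5) -> list:
    """Minimal BabyAGI-style loop: pop a task, execute it with the local
    model, ask the model for follow-up tasks, and repeat."""
    tasks = deque(["Draft an initial plan for the objective."])
    results = []
    for _ in range(max_steps):
        if not tasks:
            break
        task = tasks.popleft()
        # Execute the current task with the local model.
        result = local_generate(f"Objective: {objective}\nTask: {task}\nResult:")
        results.append((task, result))
        # Ask the model to propose follow-up tasks, one per line.
        follow_ups = local_generate(
            f"Objective: {objective}\nLast result: {result}\n"
            "List any new tasks, one per line:"
        )
        tasks.extend(t.strip() for t in follow_ups.splitlines() if t.strip())
    return results

# With a real model behind local_generate, this loop runs entirely offline.
print(babyagi_loop("Summarize today's notes without touching the cloud"))
```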