InclusionAI

@TheInclusionAI

Followers 602 · Following 33 · Media 6 · Statuses 71

InclusionAI (IAI) envisions AGI as humanity's shared milestone. See @AntLingAGI model series, and OSS projects like AReaL & AWorld https://t.co/0pp0iXVeA8

Joined March 2025
@TheInclusionAI
InclusionAI
6 days
Officially introducing Ring-1T, reaching silver-medal level on IMO 2025. 🚀🚀🚀⚡️⚡️⚡️
@AntLingAGI
Ant Ling
6 days
🚀We officially release Ring-1T, the open-source trillion-parameter thinking model built on the Ling 2.0 architecture. Ring-1T achieves silver-level IMO reasoning through pure natural language reasoning. → 1 T total / 50 B active params · 128 K context window → Reinforced by
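For readers who want to try it, below is a minimal sketch of loading Ring-1T through Hugging Face transformers. The repo id "inclusionAI/Ring-1T" is assumed from the announcement, and a 1T-total / 50B-active MoE still needs a multi-GPU node even with sparse activation.

```python
# Minimal sketch, assuming the checkpoint lives at "inclusionAI/Ring-1T"
# and ships custom Ling 2.0 modeling code (hence trust_remote_code).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inclusionAI/Ring-1T"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the checkpoint's native dtype
    device_map="auto",       # shard the MoE across available GPUs
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```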
@TheInclusionAI
InclusionAI
6 days
We are excited to share a new milestone — we've open-sourced dInfer, a high-performance inference framework for diffusion language models (dLLMs). 🚀10.7X speedup over NVIDIA’s diffusion model framework Fast-dLLM. 🧠1,011 tokens per second in single-batch inference — on the
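As context for the 1,011 tokens-per-second figure: single-batch decode throughput is conventionally measured as new tokens divided by wall-clock generation time. The harness below is a generic sketch of that measurement, not dInfer's own benchmark code; generate_fn is a placeholder for whatever engine is under test.

```python
# Generic timing sketch for single-batch decode throughput.
# generate_fn is a placeholder for the engine being benchmarked.
import time

def decode_throughput(generate_fn, prompt, max_new_tokens=512):
    """Run one generation and return new tokens per second."""
    start = time.perf_counter()
    new_token_ids = generate_fn(prompt, max_new_tokens)  # list of generated ids
    elapsed = time.perf_counter() - start
    return len(new_token_ids) / elapsed

# Example: tps = decode_throughput(my_engine.generate, "Hello", 512)
```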
@TheInclusionAI
InclusionAI
10 days
Here is the chat interface:
@TheInclusionAI
InclusionAI
11 days
One of the largest non-thinking models ever open-sourced 🚀🚀🚀
@AntLingAGI
Ant Ling
11 days
🚀 Ling-1T — Trillion-Scale Efficient Reasoner Introducing Ling-1T, the first flagship non-thinking model in the Ling 2.0 series — 1 Trillion total parameters with ≈ 50 B active per token, trained on 20 T+ reasoning-dense tokens. Highlights → Evo-CoT curriculum +
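A sketch of serving Ling-1T with vLLM follows; the repo id and the tensor-parallel degree are illustrative assumptions, not a published recipe.

```python
# Sketch, assuming the checkpoint is published as "inclusionAI/Ling-1T".
# tensor_parallel_size depends entirely on your hardware.
from vllm import LLM, SamplingParams

llm = LLM(
    model="inclusionAI/Ling-1T",  # assumed repo id
    tensor_parallel_size=8,
    trust_remote_code=True,
)
params = SamplingParams(temperature=0.7, max_tokens=1024)
outputs = llm.generate(["Build a responsive pricing page in React."], params)
print(outputs[0].outputs[0].text)
```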
@TheInclusionAI
InclusionAI
20 days
A remarkable moment in scaling!
@AntLingAGI
Ant Ling
20 days
🚀 Ring-1T-preview: Deep Thinking, No Waiting The first 1 trillion open-source thinking model -> Early results in natural language: AIME25/92.6, HMMT25/84.5, ARC-AGI-1/50.8, LCB/78.3, CF/94.7 -> Solved IMO25 Q3 in one shot, with partial solutions for Q1/Q2/Q4/Q5 Still evolving!
@TheInclusionAI
InclusionAI
23 days
Ring-flash-linear-2.0: cost-effective and lightning-fast ⚡️
@AntLingAGI
Ant Ling
23 days
🚀Meet Ring-flash-linear-2.0 & Ring-mini-linear-2.0 --> ultra-fast, SOTA reasoning LLMs with hybrid linear attentions --> 2x faster than same-size MoE & 10x faster than 32B models --> Enhanced with advanced RL methods Try the future of reasoning!
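The speed claims come from the linear-attention half of the hybrid: instead of O(T²) softmax attention, a kernel feature map lets attention be computed as a running outer-product state updated in constant time per token. The sketch below is the textbook recurrence (Katharopoulos et al., 2020), shown for illustration; it is not InclusionAI's actual kernel.

```python
# Textbook linear-attention recurrence: O(T) in sequence length.
# Illustrates the general technique, not Ring-flash-linear's kernel.
import numpy as np

def phi(x):
    """elu(x) + 1 feature map, keeping features positive."""
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V, eps=1e-6):
    """Q, K, V: (T, d) float arrays; returns (T, d) in one left-to-right pass."""
    T, d = Q.shape
    S = np.zeros((d, d))    # running sum of phi(k) v^T
    z = np.zeros(d)         # running sum of phi(k), for normalization
    out = np.zeros_like(V)
    for t in range(T):
        q, k = phi(Q[t]), phi(K[t])
        S += np.outer(k, V[t])
        z += k
        out[t] = (q @ S) / (q @ z + eps)
    return out
```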
@zzqsmall
zzqsmall
26 days
Amazing try with our baby MoE model; a good reason to buy a new iPhone 17.
@awnihannun
Awni Hannun
27 days
Managed to get Ling Mini 16B (1.4B active) running on my iPhone Air. It runs very fast with MLX. It's a DWQ of Ling Mini quantized to 3 bits-per-weight. A 16B model running on an Air at this speed is pretty awesome:
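For anyone reproducing this on Apple silicon, the mlx-lm flow looks roughly like the sketch below. The repo id is a hypothetical placeholder; substitute the actual DWQ quant that was published.

```python
# Sketch using mlx-lm's load/generate API on Apple silicon.
# The repo id below is a hypothetical placeholder.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Ling-mini-2.0-3bit-DWQ")  # hypothetical id
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Explain MoE routing in two sentences."}],
    add_generation_prompt=True,
    tokenize=False,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```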
@TheInclusionAI
InclusionAI
26 days
Nice work! We released Ling-flash-2.0 and Ring-flash-2.0; try them out and talk to us. 😇
@adrgrondin
Adrien Grondin
27 days
Another demo of the iPhone 17 Pro’s on-device LLM performance. This time with Ling mini 2.0 by @TheInclusionAI, a 16B MoE model with 1.4B active parameters running at ~120 tok/s. Thanks to @awnihannun for the MLX DWQ 2-bit quants
@TheInclusionAI
InclusionAI
30 days
We will keep you guys informed about our hub.
@AdinaYakup
Adina Yakup
30 days
Ring-flash-2.0 🔥 MoE thinking model released by @TheInclusionAI https://t.co/sx0YUt2lnE ✨ 100B total / 6.1B active · MIT license ✨ Powered by self-developed IcePop for stable long-horizon RL ✨ Multi-stage training: SFT + RLVR + RLHF
@TheInclusionAI
InclusionAI
30 days
🚀🚀🚀 Ring-flash-2.0 demonstrates a new breakthrough in long-CoT RL training on MoE models.
@AntLingAGI
Ant Ling
30 days
We open-source Ring-flash-2.0 — the thinking version of Ling-flash-2.0. --> SOTA reasoning in math, code, logic & beyond. --> 100B-A6B, 200+ tok/s on 4×H20 GPUs. --> Powered by "icepop"🧊, solving RL instability in MoE LLMs.
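Based on the public description, icepop targets the drift between the trainer's and the inference engine's token probabilities, which destabilizes long-horizon RL on MoE models, by dropping gradients for tokens whose probability ratio leaves a trusted band. The band width and loss form below are assumptions for illustration, not the released recipe.

```python
# Rough sketch of an icepop-style mask (assumed details, not the
# released recipe): drop the gradient for tokens whose train/inference
# probability ratio falls outside a trusted band.
import torch

def icepop_mask(train_logprobs, infer_logprobs, low=0.5, high=2.0):
    """Keep tokens with p_train / p_infer inside [low, high]."""
    ratio = torch.exp(train_logprobs - infer_logprobs)
    return (ratio >= low) & (ratio <= high)

def masked_pg_loss(train_logprobs, infer_logprobs, advantages):
    mask = icepop_mask(train_logprobs, infer_logprobs).float()
    # Plain policy-gradient term; masked tokens contribute nothing.
    return -(mask * advantages * train_logprobs).sum() / mask.sum().clamp(min=1.0)
```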
@TheInclusionAI
InclusionAI
1 month
Small activation, big performance: a significant milestone for MoE LLMs. 🚀🚀🚀
@AntLingAGI
Ant Ling
1 month
⚡️Ling-flash-2.0⚡️ is now open source. 100B MoE LLM • only 6.1B active params --> 3x faster than 36B dense (200+ tok/s on H20) --> Beats ~40B dense LLM on complex reasoning --> Powerful coding and frontend development Small activation. Big performance.
@AQ_MedAI
AQ-MedAI
1 month
AQ Med AI team says hi to everyone!👋🏻 We’re on a mission to bring more MedAI breakthroughs to the world. 🦾 We invite all researchers, developers, and Med AI geeks to join us on this journey, transforming cutting-edge research into real-world impact. 🚀💥 🔗 GitHub, HuggingFace,
@TheInclusionAI
InclusionAI
1 month
cool
@bigeagle_xd
🐻熊狸
1 month
Congrats on the release! My wife contributed to the post-training part 🥳
@TheInclusionAI
InclusionAI
1 month
More reasoning work coming soon 🚀
@TheInclusionAI
InclusionAI
1 month
Yes!
@AdinaYakup
Adina Yakup
1 month
Ring-mini-2.0 🔥 Latest reasoning model by @InclusionAI666 @AntLing20041208 https://t.co/VKqy21CXgD ✨ 16B/1.4B active - MIT license ✨ Trained on 20T tokens of high-quality data ✨ 128K context length ✨ Reasoning with CoT + RL