
InclusionAI
@TheInclusionAI
Followers 602 · Following 33 · Media 6 · Statuses 71
InclusionAI (IAI) envisions AGI as humanity's shared milestone. See @AntLingAGI model series, and OSS projects like AReaL & AWorld https://t.co/0pp0iXVeA8
Joined March 2025
Officially introducing Ring-1T, reaching silver-medal level on IMO 2025. 🚀🚀🚀⚡️⚡️⚡️
🚀We officially release Ring-1T, the open-source trillion-parameter thinking model built on the Ling 2.0 architecture. Ring-1T achieves silver-medal-level IMO results through pure natural-language reasoning. → 1T total / 50B active params · 128K context window → Reinforced by …
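For anyone who wants to poke at it: a minimal loading sketch with Hugging Face transformers, assuming the checkpoint is published as inclusionAI/Ring-1T with a standard chat template (both assumptions from the announcement; a 1T-parameter MoE realistically needs a multi-GPU node).

```python
# Hedged quick-start sketch, not official usage: the repo id, chat template,
# and trust_remote_code are assumptions; a 1T MoE needs device_map sharding.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inclusionAI/Ring-1T"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # take the dtype the checkpoint ships with
    device_map="auto",    # shard experts across all visible GPUs
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```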
We are excited to share a new milestone — we've open-sourced dInfer, a high-performance inference framework for diffusion language models (dLLMs). 🚀10.7X speedup over NVIDIA's diffusion model framework Fast-dLLM. 🧠1,011 tokens per second in single-batch inference — on the …
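The throughput numbers make more sense with a picture of how dLLMs decode: instead of emitting one token per forward pass, they start from a block of mask tokens and commit several positions per denoising step. A toy sketch of that idea follows; it is not dInfer's API, just the general mechanism under a stand-in model(x) -> logits call.

```python
# Toy parallel diffusion-style decoding (illustrative only, NOT dInfer's API):
# each step commits the masked positions the model is most confident about,
# so many tokens land per forward pass instead of one.
import torch

def toy_diffusion_decode(model, x, mask_id, steps=8):
    # x: (1, L) LongTensor with mask_id at the positions to generate
    total = int((x == mask_id).sum())
    for step in range(steps):
        masked = (x == mask_id)
        remaining = int(masked.sum())
        if remaining == 0:
            break
        # even share per step; flush whatever is left on the final step
        commit = remaining if step == steps - 1 else min(remaining, max(1, total // steps))
        logits = model(x)                        # (1, L, vocab); stand-in call
        conf, pred = logits.softmax(-1).max(-1)  # per-position confidence & argmax
        conf = conf.masked_fill(~masked, -1.0)   # only masked slots may be committed
        top = conf.view(-1).topk(commit).indices
        x.view(-1)[top] = pred.view(-1)[top]     # write most-confident tokens
    return x
```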
One of the largest non-thinking models ever open-sourced 🚀🚀🚀
🚀 Ling-1T — Trillion-Scale Efficient Reasoner. Introducing Ling-1T, the first flagship non-thinking model in the Ling 2.0 series — 1 trillion total parameters with ≈50B active per token, trained on 20T+ reasoning-dense tokens. Highlights → Evo-CoT curriculum + …
A remarkable moment in scaling!
🚀 Ring-1T-preview: Deep Thinking, No Waiting. The first open-source trillion-parameter thinking model. -> Early results in natural language: AIME25/92.6, HMMT25/84.5, ARC-AGI-1/50.8, LCB/78.3, CF/94.7 -> Solved IMO25 Q3 in one shot, with partial solutions for Q1/Q2/Q4/Q5. Still evolving!
Ring-flash-linear-2.0: cost-effective, and fast as a flash ⚡️
🚀Meet Ring-flash-linear-2.0 & Ring-mini-linear-2.0 --> ultra-fast, SOTA reasoning LLMs with hybrid linear attention --> 2x faster than same-size MoE models & 10x faster than 32B dense models --> Enhanced with advanced RL methods. Try the future of reasoning!
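The "linear" in the name is where the speed comes from: linear attention swaps the quadratic softmax(QKᵀ)V for a feature-map form φ(Q)(φ(K)ᵀV), which associativity lets you compute with running sums in O(n), with constant-size decode state instead of a growing KV cache. A generic sketch below; this is the textbook kernel-feature-map variant, not the exact hybrid attention inside the Ring linear models.

```python
# Generic causal linear attention, shown only to illustrate the O(n) trick;
# NOT the specific hybrid attention used in Ring-flash-linear-2.0.
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, seq, dim); phi(x) = elu(x) + 1 keeps features positive
    q, k = F.elu(q) + 1.0, F.elu(k) + 1.0
    # prefix sums give causality; real kernels stream this state in O(1) memory
    kv = torch.einsum("bsd,bse->bsde", k, v).cumsum(dim=1)  # running sum of k v^T
    z = k.cumsum(dim=1)                                     # running sum of k
    num = torch.einsum("bsd,bsde->bse", q, kv)
    den = torch.einsum("bsd,bsd->bs", q, z).unsqueeze(-1) + eps
    return num / den
```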
Models (international): https://t.co/gQKCWSAEeg · Models (China): https://t.co/CeSOozHoTB · Use cases: https://t.co/uldNoieQSE · API reference: docs.siliconflow.cn ("Creates a model response for the given chat conversation.")
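If that docs link is the chat-completions reference it appears to be, calling a hosted model should look like any OpenAI-compatible endpoint. A hedged sketch; the base_url and model id below are assumptions, not verified values.

```python
# Hedged sketch of an OpenAI-compatible chat-completions call; the endpoint
# URL and model id are assumptions taken from the docs link, not verified.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.siliconflow.cn/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)
resp = client.chat.completions.create(
    model="inclusionAI/Ling-flash-2.0",        # assumed model id
    messages=[{"role": "user", "content": "Summarize the Ling 2.0 series."}],
)
print(resp.choices[0].message.content)
```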
Nice work! We released Ling-flash-2.0 and Ring-flash-2.0; try them out and talk to us. 😇
Another demo of the iPhone 17 Pro's on-device LLM performance. This time with Ling mini 2.0 by @TheInclusionAI, a 16B MoE model with 1.4B active parameters running at ~120 tok/s. Thanks to @awnihannun for the MLX DWQ 2-bit quants.
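That demo stack is reproducible in spirit with mlx-lm on Apple silicon. The load/generate calls below are standard mlx-lm API, but the 2-bit DWQ repo name is a guess; check the mlx-community uploads for the actual quant.

```python
# Sketch of running a quantized Ling mini 2.0 locally with mlx-lm; the repo id
# is hypothetical, so substitute the real DWQ upload from mlx-community.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Ling-mini-2.0-2bit-DWQ")  # hypothetical id
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello from an on-device MoE!"}],
    add_generation_prompt=True,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True))
```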
We will keep you guys informed about our hub.
Ring-flash-2.0 🔥 MoE thinking model released by @TheInclusionAI
https://t.co/sx0YUt2lnE ✨ 100B total / 6.1B active · MIT license ✨ Powered by self-developed IcePop for stable long-horizon RL ✨ Multi-stage training: SFT + RLVR + RLHF
🚀🚀🚀 Ring-flash-2.0 marks a new breakthrough in long-CoT RL training on MoE models.
We open-source Ring-flash-2.0 — the thinking version of Ling-flash-2.0. --> SOTA reasoning in math, code, logic & beyond. --> 100B-A6B, 200+ tok/s on 4×H20 GPUs. --> Powered by "icepop"🧊, solving RL instability in MoE LLMs.
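The posts credit "icepop" with taming RL instability on MoE models; the mechanism described is handling the gap between the trainer's and the inference engine's token probabilities. A toy reading of that idea, masking tokens whose probability ratio leaves a trust band, is sketched below. The band, the loss form, and the masking rule are my assumptions, not the released algorithm.

```python
# Toy discrepancy masking in the spirit of the icepop description: drop tokens
# whose trainer-vs-sampler probability ratio drifts out of bounds so they do
# not contribute unstable gradients. All specifics here are assumptions.
import torch

def masked_pg_loss(train_logp, sampler_logp, advantages, low=0.5, high=2.0):
    # train_logp / sampler_logp: (T,) log-probs of the sampled tokens under
    # the training graph vs. the inference engine; advantages: (T,)
    ratio = torch.exp(train_logp - sampler_logp)
    keep = ((ratio > low) & (ratio < high)).float()   # in-band tokens only
    loss = -(advantages * train_logp)                 # simple REINFORCE-style term
    return (loss * keep).sum() / keep.sum().clamp(min=1.0)
```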
Small activation, big performance: a significant milestone for MoE LLMs. 🚀🚀🚀
⚡️Ling-flash-2.0⚡️ is now open source. 100B MoE LLM • only 6.1B active params --> 3x faster than 36B dense (200+ tok/s on H20) --> Beats ~40B dense LLM on complex reasoning --> Powerful coding and frontend development Small activation. Big performance.
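The "3x faster than 36B dense" claim tracks per-token compute: decode FLOPs scale with active parameters, not total. Rough arithmetic at ~2 FLOPs per active parameter per token, ignoring bandwidth and kernel effects, which is why the measured speedup (3x) is smaller than the raw FLOPs ratio:

```python
# Back-of-envelope decode cost: ~2 FLOPs per active parameter per token.
# Directional only; real throughput also depends on memory bandwidth,
# batching, and kernel efficiency, hence 3x measured vs ~6x theoretical.
active_moe = 6.1e9     # Ling-flash-2.0 active params per token (from the post)
dense = 36e9           # dense comparison model (from the post)
ratio = (2 * dense) / (2 * active_moe)
print(f"per-token FLOPs ratio (dense / MoE): {ratio:.1f}x")  # ≈ 5.9x
```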
AQ Med AI team says hi to everyone! 👋🏻 We're on a mission to bring more MedAI breakthroughs to the world. 🦾 We invite all researchers, developers, and Med AI geeks to join us on this journey, transforming cutting-edge research into real-world impact. 🚀💥 🔗 GitHub, HuggingFace, …
Yes!
Ring-mini-2.0 🔥 Latest reasoning model by @InclusionAI666 @AntLing20041208
https://t.co/VKqy21CXgD ✨ 16B total / 1.4B active · MIT license ✨ Trained on 20T tokens of high-quality data ✨ 128K context length ✨ Reasoning with CoT + RL