OpenMMLab (@OpenMMLab)

Followers: 6K · Following: 371 · Media: 232 · Statuses: 693

From MMDetection to AI Exploration. Empowering AI research and development with OpenMMLab. Discord: https://t.co/BWaz5KtF5e

Joined June 2020
InternLM (@intern_lm) · 2 days
🔥LMDeploy v0.10.0 released! 😊Supercharges OpenAI’s GPT-OSS MXFP4 models. 😊Delivers exceptional performance for GPT-OSS models on V100 and higher GPUs. 😊On H800 & A100, LMDeploy outperforms vLLM across all scenarios—faster, more efficient inference! 🤗 https://t.co/bPJfr9rz5p
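For anyone wanting to try the release, here is a minimal sketch using LMDeploy's Python pipeline API. The GPT-OSS checkpoint ID and generation settings are assumptions for illustration, not from the announcement; check the LMDeploy v0.10.0 release notes for the options MXFP4 checkpoints actually need.

    # Minimal sketch, assuming the openai/gpt-oss-20b checkpoint on Hugging Face
    # and default engine settings; consult the LMDeploy docs before relying on it.
    from lmdeploy import pipeline, GenerationConfig

    pipe = pipeline("openai/gpt-oss-20b")  # assumed model ID
    gen_cfg = GenerationConfig(max_new_tokens=256, temperature=0.7)
    outputs = pipe(["Explain MXFP4 quantization in one paragraph."], gen_config=gen_cfg)
    print(outputs[0].text)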
OpenMMLab (@OpenMMLab) · 23 days
🔥China’s Open-source VLMs boom—Intern-S1, MiniCPM-V-4, GLM-4.5V, Step3, OVIS 🧐Join the AI Insight Talk with @huggingface, @OpenCompassX, @ModelScope2022 and @ZhihuFrontier 🚀Tech deep-dives & breakthroughs 🚀Roundtable debates ⏰Aug 21, 5 AM PDT 📺Live: https://t.co/brweSm4yT5
OpenCompass (@OpenCompassX) · 1 month
🚀 Introducing #CompassVerifier: A unified and robust answer verifier for #LLMs evaluation and #RLVR! ✨LLM progress is bottlenecked by weak evaluation, looking for an alternative to rule-based verifiers? CompassVerifier can handle multiple domains including math, science, and
InternLM (@intern_lm) · 1 month
Our paper won an Outstanding Paper Award at ACL 2025. Try our best open-source multimodal reasoning model Intern-S1 at https://t.co/qlUYXkrNCn. This 241B MoE model combines strong general-task capabilities with state-of-the-art performance on a wide range of scientific tasks,
InternLM (@intern_lm) · 2 months
🚀Introducing Intern-S1, our most advanced open-source multimodal reasoning model yet! 🥳Strong general-task capabilities + SOTA performance on scientific tasks, rivaling leading closed-source commercial models. 🥰Built upon a 235B MoE language model and a 6B Vision encoder.
InternLM (@intern_lm) · 2 months
🚀 Introducing #POLAR: Bring Reward Model into a New Pre-training Era! ✨ Say goodbye to reward models with poor generalization! POLAR (Policy Discriminative Learning) is a groundbreaking pre-training paradigm that trains reward models to distinguish policy distributions,
Tiezhen WANG (@Xianbao_QIAN) · 3 months
We invited 3 top HF daily papers authors to deliver talks. Topics of this session: Reinforcement Learning. Speakers:
- Qi-Chen Zhao — Absolute Zero Reasoner: self-play RL that reaches SOTA reasoning with zero external data
- Shu-Huai Ren — MiMo-VL: Xiaomi’s unified and
InternLM (@intern_lm) · 4 months
🥳Trained through #InternBootcamp, #InternThinker now combines pro-level Go skills with transparent reasoning. 😉In each game, it acts as a patient, insightful coach—analyzing the board, comparing moves, and clearly explaining each decision. 🤗Try it now: https://t.co/dMJEbOui5q
InternLM (@intern_lm) · 4 months
🥳Introducing #InternBootcamp, an easy-to-use and extensible library for training large reasoning models. Unlimited automatic question generation and result verification. Over 1,000 verifiable tasks covering logic, puzzles, algorithms, games, and more. 🤗 https://t.co/uLtrAghjUz
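The core loop the tweet describes (generate a verifiable question, then check the model's answer automatically) can be illustrated with a toy sketch. None of the names below come from the InternBootcamp API; treat the whole block as a hypothetical illustration and see the linked repo for the real interface.

    # Hypothetical illustration only: none of these names are from InternBootcamp.
    import random

    def generate_task(seed: int) -> tuple[str, int]:
        """Toy verifiable task: an arithmetic question with a known gold answer."""
        rng = random.Random(seed)
        a, b = rng.randint(2, 99), rng.randint(2, 99)
        return f"What is {a} + {b}?", a + b

    def verify(model_answer: str, gold: int) -> bool:
        """Rule-based verifier: accept iff the reply contains the exact gold answer."""
        return str(gold) in model_answer

    question, gold = generate_task(seed=0)
    reply = "The answer is 101."          # stand-in for a model completion
    print(question, verify(reply, gold))  # the boolean becomes the reward signal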
OpenGVLab (@opengvlab) · 5 months
🥳We have released #InternVL3, an advanced #MLLM series ranging from 1B to 78B, on @huggingface. 😉InternVL3-78B achieves a score of 72.2 on the MMMU benchmark, setting a new SOTA among open-source MLLMs. ☺️Highlights: - Native multimodal pre-training: Simultaneous language and
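A minimal loading sketch for one of the released checkpoints follows. The Hugging Face ID, dtype, and the chat() call pattern are assumptions taken from the public model cards, not from this tweet; InternVL checkpoints ship custom modeling code, hence trust_remote_code=True.

    # Minimal loading sketch; model ID and dtype are assumptions.
    import torch
    from transformers import AutoModel, AutoTokenizer

    model_id = "OpenGVLab/InternVL3-8B"  # assumed Hugging Face ID; pick a size that fits your GPU
    model = AutoModel.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,   # InternVL ships custom modeling code
        device_map="auto",
    )
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    # Inference goes through the chat() helper documented on the model card,
    # roughly model.chat(tokenizer, pixel_values, question, generation_config);
    # verify the exact signature on the card before relying on it.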
OpenMMLab (@OpenMMLab) · 6 months
🥳#FaceShot generates animations for your "imaginary friends", like a teddy bear, and brings them to life! 😉Project page: https://t.co/xfes0NKN2u 😉Paper link: https://t.co/hP3VTW7VrT 😉Code: https://t.co/NG0Tx81QkP
OpenCompass (@OpenCompassX) · 7 months
🥳#StructFlowBench is a structurally annotated multi-turn benchmark that leverages a structure-driven generation paradigm to enhance the simulation of complex dialogue scenarios. 🥳StructFlowBench is now part of the #CompassHub! 😉Feel free to download and explore it—available
InternLM (@intern_lm) · 7 months
🥳Thrilled to release the full RL training code of #OREAL! 😊Now you can fully reproduce the results of OREAL-7B/32B. Using #DeepSeek-R1-Distill-Qwen-32B, you can further obtain a model that scores 95.6 on MATH-500! 🤗Code: https://t.co/64qos7qln3 🤗Based on:
Link preview: github.com · A Next-Generation Training Engine Built for Ultra-Large MoE Models - InternLM/xtuner
InternLM (@intern_lm) · 7 months
🥳Introducing #OREAL, a new RL method for math reasoning. 😊With OREAL, a 7B model achieves 94.0 pass@1 on MATH-500, matching many 32B models, while OREAL-32B achieves 95.0 pass@1, surpassing #DeepSeek-R1 Distilled models. 🤗Paper/Model/Data: https://t.co/IVD72J8BHN
OpenGVLab (@opengvlab) · 7 months
🚀 Introducing #InternVideo 2.5 - The Video Multimodal AI That Sees Longer & Smarter! ✨ Handles videos 6x longer than predecessors ✨ Pinpoints objects/actions with surgical precision ✨ Trained on 300K+ hours of diverse video data 📈 Outperforms SOTA on multiple benchmarks &
InternLM (@intern_lm) · 8 months
🥳🚀🥳Try it now on:
InternLM (@intern_lm) · 8 months
🚀Introducing InternLM3-8B-Instruct with Apache License 2.0.
- Trained on only 4T tokens, saving more than 75% of the training cost.
- Supports deep thinking for complex reasoning and normal mode for chat.
Model: @huggingface https://t.co/HKQacyasyN
GitHub: https://t.co/kh5znHasc8
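A minimal chat sketch follows, under the assumption that the linked checkpoint is internlm/internlm3-8b-instruct on Hugging Face and fits in bf16 on a single GPU; verify both against the model card.

    # Minimal chat sketch; model ID and dtype are assumptions from the announcement link.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "internlm/internlm3-8b-instruct"  # assumed Hugging Face ID
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,   # InternLM3 ships custom modeling code
        device_map="auto",
    )

    messages = [{"role": "user", "content": "Summarize the Apache-2.0 license in two sentences."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))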
OpenCompass (@OpenCompassX) · 9 months
🚀 Shocking: o1-mini scores just 15.6% on AIME under strict, real-world metrics. 🚨 📈 Introducing G-Pass@k: A metric that reveals LLMs' performance consistency across trials. 🌐 LiveMathBench: Challenging LLMs with contemporary math problems, minimizing data leaks. 🔍 Our
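As a rough sketch of the idea, G-Pass@k measures whether a model stays correct across repeated trials rather than just once. The estimator below assumes the metric asks for at least ⌈τ·k⌉ correct completions among k drawn from n samples; check the LiveMathBench paper for the exact definition.

    # Sketch of a G-Pass@k-style estimator (assumed definition, not the authors' code).
    from math import ceil, comb

    def g_pass_at_k(n: int, c: int, k: int, tau: float) -> float:
        """Hypergeometric estimate: P(at least ceil(tau*k) correct among k draws),
        given n sampled completions of which c are correct."""
        need = ceil(tau * k)
        hits = sum(comb(c, j) * comb(n - c, k - j) for j in range(need, min(c, k) + 1))
        return hits / comb(n, k)

    # Example: 16 samples, 10 correct, require 75% of 8 drawn completions to be correct.
    print(round(g_pass_at_k(n=16, c=10, k=8, tau=0.75), 3))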
InternLM (@intern_lm) · 9 months
🥳InternLM-XComposer2.5-OmniLive, a comprehensive multimodal system for long-term streaming video and audio interactions.
- Real-time visual & auditory understanding
- Long-term memory formation
- Natural voice interaction
Code: https://t.co/yF1W6wzeNH
Model: https://t.co/139DuHZvqo