Eigen AI

@Eigen_AI_Labs

Followers: 405 · Following: 16 · Media: 3 · Statuses: 12

Built by researchers and engineers from MIT, we are pursuing Artificial Efficient Intelligence (AEI). Try GPT-OSS support: https://t.co/BQfsnXIGFo.

Palo Alto, CA
Joined July 2025
@Eigen_AI_Labs
Eigen AI
3 months
🚀 Founded by four dedicated MIT graduates, Eigen AI is the world's first company focusing on AEI – Artificial Efficient Intelligence, making AI accessible for all. Today OpenAI dropped GPT-OSS. We teamed up with our partners SGLang @lmsysorg and @NVIDIA to deliver open-source …
@MiniMax__AI
MiniMax (official)
9 days
We’re open-sourcing MiniMax M2 — Agent & Code Native, at 8% of Claude Sonnet's price and ~2x faster ⚡ Free globally for a limited time via MiniMax Agent & API - Advanced Coding Capability: engineered for end-to-end developer workflows, with strong capability across a wide range of applications …
@Eigen_AI_Labs
Eigen AI
5 days
🚀 Releasing open-source Eigen-Banana-Qwen-Image-Edit: 4-second ⚡ instruction-based image edits, trained on Pico-Banana-400K. Super fast with high image-editing quality. Open-source LoRA for Diffusers/DiffSynth-Studio + enterprise stack (EigenTrain/Inference/Deploy). Feel free …
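The release above ships the editor as a LoRA for Diffusers/DiffSynth-Studio. Below is a minimal Diffusers-style sketch of how such an adapter would typically be applied; the Hub repo id for the LoRA, the 4-step setting, and the exact call signature are assumptions for illustration, not details taken from the release.

    import torch
    from diffusers import DiffusionPipeline
    from PIL import Image

    # Base editor: Qwen-Image-Edit. DiffusionPipeline.from_pretrained resolves the
    # concrete pipeline class from the repo's model_index.json.
    pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16)

    # Hypothetical repo id for the released LoRA adapter (placeholder, not confirmed).
    pipe.load_lora_weights("eigen-ai-labs/Eigen-Banana-Qwen-Image-Edit")
    pipe.to("cuda")

    source = Image.open("input.png")
    # Few-step inference is assumed here to line up with the "4 seconds" claim; tune as needed.
    edited = pipe(image=source, prompt="turn the sky into a sunset", num_inference_steps=4).images[0]
    edited.save("edited.png")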
@Eigen_AI_Labs
Eigen AI
24 days
The future of AI isn’t about size — it’s about intelligence design. Our EIGEN-1 system shows how agentic architecture can outperform bigger models through smarter reasoning. Efficient. Adaptive. Human-aligned. #EigenAI #AEI #DeepSeek
@XiangruTang
Rob Tang
28 days
🚨 Eigen-1 gets 48.3% (Pass@1) & 61.74% (Pass@5) on the "Humanity's Last Exam" (HLE) gold subset from @FutureHouseSF, using DeepSeek V3.1. Previous results: Grok 4 → 30.2%, GPT-5 → 22.8%, Gemini 2.5 Pro → 18.8%. 📎 https://t.co/4Fhcp8VTBG The future isn't bigger models; it's smarter agentic design! 🚀
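The quoted tweet does not say how Pass@1 and Pass@5 were computed. If they follow the standard unbiased estimator from the Codex paper (Chen et al., 2021), the arithmetic looks like the sketch below; the attempt counts in the example are illustrative, not numbers from the HLE run.

    from math import comb

    def pass_at_k(n: int, c: int, k: int) -> float:
        # Probability that at least one of k samples, drawn from n attempts of
        # which c are correct, solves the task.
        if n - c < k:
            return 1.0
        return 1.0 - comb(n - c, k) / comb(n, k)

    # Illustrative only: 5 attempts per question, 2 of them correct.
    print(pass_at_k(n=5, c=2, k=1))  # 0.4
    print(pass_at_k(n=5, c=2, k=5))  # 1.0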
@Eigen_AI_Labs
Eigen AI
2 months
SGLang @lmsysorg now delivers major performance boosts for gpt-oss-120b: 🚀
• 2.1× higher prefill throughput
• 2.25× higher decode throughput
• Optimized for Hopper, Blackwell & MI350 GPUs
Try it out on our GPT-OSS Playground! 👉 https://t.co/TkIh63zBdK See the detailed …
@lmsysorg
LMSYS Org
2 months
3 weeks ago, @OpenAI released gpt-oss, matching o4-mini in capability. After Day-0 support, SGLang delivered major perf boosts: optimized attention, MoE, all-reduce fusion, quantization, and Eagle-3. 🚀 Up to 2.1× prefill and 2.25× decode throughput vs Day-0! 👇
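For context on what querying such a deployment looks like: SGLang exposes an OpenAI-compatible endpoint, so a client call against a gpt-oss-120b server is roughly the sketch below. The launch command in the comment, the local port, and the model id are typical defaults and assumptions, not details from these tweets or the Playground.

    from openai import OpenAI

    # Assumes an SGLang server is already running locally, e.g. started with:
    #   python -m sglang.launch_server --model-path openai/gpt-oss-120b --tp 8
    # (model path, tensor-parallel size, and port 30000 are illustrative defaults).
    client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

    resp = client.chat.completions.create(
        model="openai/gpt-oss-120b",
        messages=[{"role": "user", "content": "In one sentence, what is prefill vs. decode throughput?"}],
        max_tokens=128,
    )
    print(resp.choices[0].message.content)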
@lmsysorg
LMSYS Org
3 months
Reminder: SGLang x AMD SF Meetup is this Friday (Aug 22) at Shack15 🎉 Hands-on GPU workshop, talks from AMD/xAI/SGLang, food + networking. Don’t miss it! 📣 Seats are filling fast - if you haven’t registered yet, now’s the time. Bring a friend! 👉 Register here: …
@Eigen_AI_Labs
Eigen AI
3 months
Check out our blog post on the architecture deep-dive of GPT-OSS → https://t.co/zCiOhAhBjm
5 days ago → we pulled off Day 0 serving for OpenAI’s GPT-OSS-120B & 20B on Hopper 🖥️ + Blackwell ⚡ — fast, stable, production-ready the moment the weights dropped.
🌀 Tuned Attention …
@lmsysorg
LMSYS Org
3 months
SGLang is now officially supporting OpenAI’s new GPT-OSS model!
@OpenAI
OpenAI
3 months
Our open models are here. Both of them. https://t.co/9tFxefOXcg
@Eigen_AI_Labs
Eigen AI
3 months
Proud to collaborate with the SGLang team; it was a lot of fun!
@lmsysorg
LMSYS Org
3 months
🚀 We are thrilled to announce that SGLang now supports OpenAI's latest open-weight model 'gpt-oss-120b', on both Hopper and Blackwell GPUs. Thanks to the collaborative efforts from @Eigen_AI_Labs, @nvidia, SGLang @lmsysorg, and the OSS community! SGLang support landed within 4 …
@Eigen_AI_Labs
Eigen AI
4 months
🚀 Excited to see our collaboration with @lmsysorg bring Multiple Token Prediction (MTP) in SGLang to production! Proud to support faster, smarter open-source LLM serving. #EigenAI #MTP #SGLang #LLMinfra #ModelServing #DeepSeek #OpenSourceAI #AskChatGPT
@lmsysorg
LMSYS Org
4 months
🚀 Summer Fest Day 5: Multiple Token Prediction in SGLang, by @Eigen_AI_Labs and the SGLang Team. 1.6× throughput, same quality — open-source & production-ready! We’ve integrated MTP into SGLang, unlocking up to 60% higher output throughput for models like DeepSeek V3, with zero quality …
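DeepSeek V3's MTP head can act as a draft model for speculative decoding, which is consistent with the "same quality" claim above, since speculative decoding verifies drafts against the target model. A hedged launch sketch is below; the flag names follow SGLang's EAGLE-style speculative-decoding options as commonly documented and may differ across versions, so treat them as assumptions to verify against the SGLang docs.

    import subprocess

    # Launch an SGLang server that uses the model's MTP module as the draft model.
    # Flag names/values are assumptions; check `python -m sglang.launch_server --help`.
    subprocess.run([
        "python", "-m", "sglang.launch_server",
        "--model-path", "deepseek-ai/DeepSeek-V3",
        "--tp", "8",
        "--trust-remote-code",
        "--speculative-algorithm", "EAGLE",
        "--speculative-num-steps", "1",
        "--speculative-eagle-topk", "1",
        "--speculative-num-draft-tokens", "2",
    ])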