Inception

@_inception_ai

Followers: 15K · Following: 56 · Media: 25 · Statuses: 131

Pioneering a new generation of LLMs.

Joined February 2025
@_inception_ai
Inception
1 month
Mercury is refreshed – with across-the-board improvements in coding, instruction following, math, and knowledge recall. Start building responsive, in-the-flow AI solutions! Read more: https://t.co/QyTVaHAIue
14
22
145
@StefanoErmon
Stefano Ermon
1 day
Thanks for the shoutout to @_inception_ai, Dan! Totally agree there’s enormous headroom left in model and inference design, especially with diffusion language models🚀
@realDanFu
Dan Fu
3 days
My response to @Tim_Dettmers' great post last week arguing that we won't reach AGI because of resource limitations. My take: there's a ton of headroom in today's systems, and it's too early to say that we're limited in any real sense. There's so much to do! https://t.co/yBFqGGaPmv
2
1
40
@_inception_ai
Inception
7 days
Autoregressive LLMs share a structural bottleneck: they generate tokens sequentially. The AI Journal explains how dLLMs parallelize token generation and deliver high-quality responses fast enough for ultra-low latency coding and voice workflows. Proud to see Mercury featured:
aijourn.com
Today's large language models (LLMs) all share a structural bottleneck: they generate tokens sequentially. One. At. A. Time. 
0
2
26
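To make the contrast above concrete, here is a toy, hypothetical Python sketch of the two decoding styles: an autoregressive loop whose number of model calls grows with sequence length, versus an iterative denoiser whose call count is fixed by its step schedule. The vocabulary, step counts, and random stand-in for the model are illustrative assumptions, not Inception's implementation.

```python
# Hypothetical toy sketch (not Inception's actual decoder) contrasting sequential
# autoregressive decoding with parallel iterative denoising.
import math
import random

VOCAB = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]
MASK = "<mask>"

def autoregressive_decode(length=8):
    """One token per model call: the number of calls grows with sequence length."""
    out = []
    for _ in range(length):
        out.append(random.choice(VOCAB))  # stand-in for a forward pass
    return out, length

def diffusion_decode(length=8, steps=3):
    """Start fully masked and refine all positions together over a few steps."""
    seq = [MASK] * length
    masked = list(range(length))
    per_step = math.ceil(length / steps)
    for _ in range(steps):
        # One model call: propose tokens for every masked position in parallel,
        # then commit a slice of them (a crude stand-in for a confidence schedule).
        random.shuffle(masked)
        commit, masked = masked[:per_step], masked[per_step:]
        for i in commit:
            seq[i] = random.choice(VOCAB)  # stand-in for a denoising update
    return seq, steps

if __name__ == "__main__":
    _, ar_calls = autoregressive_decode()
    _, dl_calls = diffusion_decode()
    print(f"model calls: autoregressive={ar_calls}, diffusion={dl_calls}")
```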
@readsail
SAIL
14 days
@joycech3n from @_inception_ai on dLLMs
0
1
12
@readsail
SAIL
14 days
Had the pleasure of interviewing @joycech3n from @_inception_ai at NeurIPS this week. Check out her take on why latency is important!
0
1
9
@_inception_ai
Inception
10 days
Our CEO @StefanoErmon sat down with @CoreyNoles of @theneurondaily at #AWSreInvent to discuss how Mercury brings diffusion to language, unlocking faster, scalable, real-time AI for coding and text. They also discussed what diffusion means for the next generation of AI.
2
1
16
@_inception_ai
Inception
11 days
We met so many amazing people at #NeurIPS2025! Thanks @MayfieldFund for co-hosting our event!
1
3
37
@_inception_ai
Inception
12 days
Thanks @JayaGup10 and @FoundationCap! Wonderful conversations. It was a great event!
@JayaGup10
Jaya Gupta
13 days
Awesome @FoundationCap and @_inception_ai happy hour co-hosted with @thekaransinghal!
1
1
16
@elonmusk
Elon Musk
1 month
@StefanoErmon @_inception_ai Diffusion will obviously work on any bitstream. With text, since humans read from first word to last, there is just the question of whether the delay to first sentence for diffusion is worth it. That said, the vast majority of AI workload will be video understanding and
134
194
2K
@_inception_ai
Inception
29 days
Mercury is now available on Azure AI Foundry! This means you can leverage Mercury's speeds with the security of a private Azure instance and all the features of the broader Azure ecosystem. Read more: https://t.co/6Hotow3ruD #dLLM #AzureAI
5
6
33
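For developers trying Mercury out, hosted deployments like this are commonly exposed through an OpenAI-compatible chat-completions interface. The sketch below assumes that interface; the base URL, model name, and environment variables are placeholders to verify against the Azure AI Foundry or Inception docs for your deployment.

```python
# Hypothetical sketch of calling a Mercury deployment through an
# OpenAI-compatible chat-completions endpoint. The base URL, model name,
# and environment variables are placeholders, not confirmed values.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ["MERCURY_ENDPOINT"],  # e.g. your Azure deployment URL
    api_key=os.environ["MERCURY_API_KEY"],
)

response = client.chat.completions.create(
    model="mercury",  # placeholder deployment/model name
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
)
print(response.choices[0].message.content)
```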
@_inception_ai
Inception
1 month
Tired of slow, one-word-at-a-time AI? ⏳ Our founder, @StefanoErmon, joined @acremades on the Dealmakers podcast to discuss a new foundation for AI: diffusion LLMs. Hear the story of how his Stanford lab's research led to Inception & 10x faster, parallel generation. Listen
podcasts.apple.com
Podcast Episode · DealMakers · 11/11/2025 · 27m
2
3
34
@samar_a_khanna
Samar Khanna
1 month
Mercury is now much better at agentic tasks! You can try out the blazing speed of our dLLMs on your coding agents 😎 Here's a little zombie shooter game I spun up using Mercury with @goose_oss. Go ahead and diffuse with goose @_inception_ai
0
1
20
@_inception_ai
Inception
1 month
Mercury runs five times faster than Claude 4.5 Haiku at less than one-fourth the price, while delivering higher quality.
0
7
46
@_inception_ai
Inception
1 month
Today’s LLMs are painfully slow and expensive. They are autoregressive and spit out words sequentially. One. At. A. Time. Our dLLMs generate text in parallel, delivering answers up to 10X faster. Now we’ve raised $50M to scale them. Full story from @russellbrandom in
techcrunch.com
Diffusion models already power AI image generators, but Inception thinks they can be even more powerful applied in software development.
15
47
436
@_inception_ai
Inception
2 months
🚀We've partnered with ProxyAI! Our Mercury Coder dLLM is now the default for ProxyAI's autocomplete, next edit, and auto apply tooling, providing developers with lightning-fast and accurate code edits. Read more: https://t.co/eWtOFVXpgk #AI #DiffusionModels #dLLM
tryproxy.io
1
2
14
@karpathy
Andrej Karpathy
2 months
Nice, short post illustrating how simple text (discrete) diffusion can be. Diffusion (i.e. parallel, iterated denoising, top) is the pervasive generative paradigm in image/video, but autoregression (i.e. go left to right bottom) is the dominant paradigm in text. For audio I've
@nathanbarrydev
Nathan Barry
2 months
BERT is just a Single Text Diffusion Step! (1/n) When I first read about language diffusion models, I was surprised to find that their training objective was just a generalization of masked language modeling (MLM), something we’ve been doing since BERT from 2018. The first
274
549
5K
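The thread's core observation, that BERT-style masked language modeling is a single masked-diffusion step, can be sketched in a few lines: sample a masking level t per example instead of fixing it, predict all masked tokens in one parallel pass, and reweight the loss. The shapes and weighting below are illustrative assumptions, not any specific paper's exact recipe.

```python
# Simplified sketch: masked-diffusion training as a generalization of BERT-style MLM.
# With t fixed at roughly 0.15 instead of sampled, this reduces to plain MLM.
import torch
import torch.nn.functional as F

def masked_diffusion_loss(model, tokens, mask_id):
    """tokens: (batch, seq_len) int64 ids. model maps ids -> logits (batch, seq_len, vocab)."""
    b, n = tokens.shape
    t = torch.rand(b, 1)                 # per-example masking level in (0, 1)
    mask = torch.rand(b, n) < t          # mask each position with probability t
    corrupted = tokens.masked_fill(mask, mask_id)
    logits = model(corrupted)            # one parallel denoising prediction
    loss = F.cross_entropy(logits[mask], tokens[mask], reduction="none")
    # Reweight by 1/t so lightly masked examples are not under-counted (a common choice).
    weight = (1.0 / t).expand(b, n)[mask]
    return (weight * loss).mean()
```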
@_inception_ai
Inception
2 months
Our CEO @StefanoErmon joined the Infinite Curiosity Podcast and shared how our Mercury diffusion LLMs deliver faster, cheaper models and why diffusion is reshaping coding, reasoning, and multimodal AI. Thanks for having him on @PrateekVJoshi! https://t.co/9gTrf5IEMV
1
0
18
@_inception_ai
Inception
2 months
We’re in! We are now part of the #AWSGenAIAccelerator2025. We’re looking forward to working with @AWSstartups to deliver ultra-fast and efficient diffusion large language models.
0
4
10
@_inception_ai
Inception
3 months
Honored that our co-founder @adityagrover_ has been named to the 2025 Mayfield | Divot AI List! Thank you to @MayfieldFund and @StartupGrind for the recognition alongside 50 innovators shaping the future of AI. See the full list here:
divot.org
The AI List will spotlight change makers and rising stars: startups, builders, researchers, industry leaders, media voices, policymakers, and more.
0
0
3