
Karan Goel (@krandiash)
7K Followers · 4K Following · 101 Media · 1K Statuses
founder ceo @cartesia_ai, phd @stanfordailab, @mldcmu @iitdelhi alum
San Francisco, CA · Joined January 2010
It's been a pretty cool year, and we couldn't have asked for more amazing folks to partner with. We're building the real-time intelligence future of AI, brains that can run anywhere, and the model architectures to support it. Audio is just the start. Join us!
We've raised $27M from Index Ventures, Lightspeed, Factory, Conviction, SVA, General Catalyst, A* and our wonderful angels. Cartesia's audio models power the next generation of voice agents, digital media, and assistants across startups and large enterprises. Our mission is to…
16 replies · 14 reposts · 157 likes
RT @chenwanch1: One of my favorite moments at #ICML2025 was being able to witness @_albertgu and the @cartesia_ai team’s reaction to Mamba…
0 replies · 8 reposts · 0 likes
Meet the team behind the fight against Big Token.
📢 ICML friends — don’t miss this! Albert Gu, June Hwang, and Brandon Wang will be hosting a meet & greet at the Cartesia booth to chat about their recent H-Net paper:
🗓️ Thursday, July 17
🕛 12:00 PM
📍 Cartesia AI Booth | ICML Expo Floor
Come by to…
0 replies · 0 reposts · 26 likes
RT @SarahChieng: do you ever have a fish to fry but your hands are too dirty? our new intern @imbaime built a crazy real-time voice to bro…
0 replies · 16 reposts · 0 likes
RT @_albertgu: I'm at ICML for the week!! come find the @cartesia_ai booth to chat about architectures, tokenizers, voice AI, etc. @sukjun…
0 replies · 4 reposts · 0 likes
RT @_albertgu: Cool demo and really nice blog post on H-Net inference: > On stage2_XL, this completely flipped. In…
0 replies · 29 reposts · 0 likes
I’ll be at ICML this week. Reach out if you’d like to chat about:
👉 research at @cartesia_ai
👉 alternate architectures and tokenizer-free hierarchies
👉 the future of voice and multimodal interaction
We also have a booth where you can come by and say hi to the team.
0 replies · 4 reposts · 57 likes
RT @_albertgu: This was an incredibly important project to me - I’ve wanted to solve it for years, but had no idea how. This was all @sukju…
0 replies · 18 reposts · 0 likes
RT @fluorane: happy to announce that we've gotten rid of tokenizers! especially excited with what we've replaced them with: end-to-end tra…
0 replies · 52 reposts · 0 likes
At Cartesia, we've always believed that model architectures remain a fundamental bottleneck in building truly intelligent systems: intelligence that can interact and reason over massive amounts of context over decade-long timescales. This research is an important step in our…
We're excited to announce a new research release from the Cartesia team, as part of a long-term collaboration to advance deep learning architectures. We've always believed that model architectures remain a fundamental bottleneck in building truly intelligent systems. H-Nets are…
0 replies · 10 reposts · 64 likes
We've been tirelessly working for the last few years to change how machines think. H-Net unlocks the next step of architecture scaling, bringing together many of the lessons learned in the last 5 years of SSM research to break fundamentally new ground in learning end to end.
Tokenization is just a special case of "chunking" - building low-level data into high-level abstractions - which is in turn fundamental to intelligence. Our new architecture, which enables hierarchical *dynamic chunking*, is not only tokenizer-free, but simply scales better.
2 replies · 16 reposts · 124 likes
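To make "hierarchical dynamic chunking" concrete, here is a minimal, hypothetical sketch of the idea: a learned scorer proposes chunk boundaries over a raw byte sequence, and positions between boundaries are pooled into variable-length chunks for a higher-level model. The names and choices here (`boundary_scorer`, a hard threshold, mean pooling) are illustrative assumptions, not the H-Net design, which learns chunking end to end.

```python
# Illustrative sketch only: a learned boundary scorer + pooling, NOT the
# actual H-Net architecture. All names and hyperparameters are invented.
import torch
import torch.nn as nn

class DynamicChunker(nn.Module):
    def __init__(self, d_model: int, threshold: float = 0.5):
        super().__init__()
        self.boundary_scorer = nn.Linear(d_model, 1)  # per-position boundary logit
        self.threshold = threshold

    def forward(self, x: torch.Tensor):
        # x: (seq_len, d_model) embeddings of raw bytes
        probs = torch.sigmoid(self.boundary_scorer(x)).squeeze(-1)
        boundaries = probs > self.threshold
        boundaries[0] = True                                    # first byte opens a chunk
        chunk_ids = torch.cumsum(boundaries.long(), dim=0) - 1  # chunk index per position
        n_chunks = int(chunk_ids[-1].item()) + 1
        # Mean-pool byte embeddings within each chunk.
        pooled = torch.zeros(n_chunks, x.size(-1))
        counts = torch.zeros(n_chunks, 1)
        pooled.index_add_(0, chunk_ids, x)
        counts.index_add_(0, chunk_ids, torch.ones(x.size(0), 1))
        return pooled / counts, chunk_ids

x = torch.randn(16, 32)             # 16 "bytes" with 32-dim embeddings
chunks, ids = DynamicChunker(32)(x)
print(chunks.shape)                 # e.g. torch.Size([5, 32]); varies with random init
```

A hard threshold is used here only to keep the sketch short; making the boundary decision differentiable, so the chunker trains jointly with the rest of the model, is exactly the hard part the H-Net work addresses.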
RT @sukjun_hwang: Tokenization has been the final barrier to truly end-to-end language models. We developed the H-Net: a hierarchical netw…
0 replies · 651 reposts · 0 likes
This blog is an excellent read on where the future of the field and Cartesia’s research is headed.
I converted one of my favorite talks I've given over the past year into a blog post: "On the Tradeoffs of SSMs and Transformers" (or: tokens are bullshit). In a few days, we'll release what I believe is the next major advance for architectures.
0 replies · 0 reposts · 29 likes
RT @_albertgu: I converted one of my favorite talks I've given over the past year into a blog post. "On the Tradeoffs of SSMs and Transfor…
0 replies · 113 reposts · 0 likes
RT @_albertgu: I really like this result: an elegant framing and solution to significantly improve length generalization in recurrent model…
0 replies · 15 reposts · 0 likes
RT @rbuit_: Despite theoretically handling long contexts, existing recurrent models still fall short: they may fail to generalize past the…
0 replies · 33 reposts · 0 likes
RT @joshim5: I've been working on variants of this problem for my entire career, and have only dreamed of designing molecules on the comput…
0 replies · 17 reposts · 0 likes
RT @daraladje: All the minds in the world. Now one question away. We’ve raised $16M from @sequoia to build @withdelphi - create your digi…
0 replies · 86 reposts · 0 likes