
hazyresearch (@HazyResearch)
Followers: 8K · Following: 3K · Media: 34 · Statuses: 2K
A research group in @StanfordAILab working on the foundations of machine learning & systems. https://t.co/JHK58TDorG Ostensibly supervised by Chris Ré
Stanford, CA · Joined August 2012
RT @_goel_arnav: I just saw @_albertgu call the major AI labs "Big Token" and it has to be the most hilarious shit ever lol.
RT @qizhengz_alex: Excited to share our latest at ICML 2025: pushing LoRA fine-tuning to below 2 bits (as low as 1.15 bits), unlocking up t….
RT @AIatAMD: We’re thrilled to collaborate with the @HazyResearch @StanfordAILab, led by Chris Ré, to power Minions, their cutting-edge age….
RT @sukjun_hwang: Tokenization has been the final barrier to truly end-to-end language models. We developed the H-Net: a hierarchical netw….
RT @_albertgu: Tokenization is just a special case of "chunking" - building low-level data into high-level abstractions - which is in turn….
RT @cartesia_ai: We're excited to announce a new research release from the Cartesia team, as part of a long-term collaboration to advance d….
RT @krandiash: At Cartesia, we've always believed that model architectures remain a fundamental bottleneck in building truly intelligent sy….
RT @KumbongHermann: Happy to share that our HMAR code and pre-trained models are now publicly available. Please try them out here: code: h….
RT @togethercompute: Announcing DeepSWE 🤖: our fully open-sourced, SOTA software engineering agent trained purely with RL on top of Qwen3-3….
RT @Azaliamirh: Introducing Weaver, a test time scaling method for verification! Weaver shrinks the generation-verification gap through a….
RT @ekellbuch: LLMs can generate 100 answers, but which one is right? Check out our latest work closing the generation-verification gap by….
RT @togethercompute: New Notebook: LLM Evals with Batch Inference! The new batch API is perfect for running large benchmarks - 50% cost sa….
RT @SohamGovande: Chipmunks can now hop across multiple GPU architectures (sm_80, sm_89, sm_90). You can get a 1.4-3x lossless speedup when….
RT @BeidiChen: Say hello to Multiverse — the Everything Everywhere All At Once of generative modeling. 💥 Lossless, adaptive, and gloriousl….
RT @james_y_zou: Excited to introduce Open Data Scientist: ✅ outperforms Gemini data science agent ✅ solves real Kaggle tasks ✅ fully open sou….
RT @cartesia_ai: 👑 We’re #1! Sonic-2 leads @Labelbox’s Speech Generation Leaderboard topping out in speech quality, word error rate, and na….
RT @ajratner: Scale alone is not enough for AI data. Quality and complexity are equally critical. Excited to support all of these for LLM d….