Chenhui Zhang
@danielz2333
Followers
477
Following
9K
Media
16
Statuses
4K
Engineering @googledeepmind | Prev. @mitidss @IllinoisCDS @IllinoisStat | Views are my own
Cambridge, MA
Joined April 2013
Everything is a world model if you squint hard enough.
29
112
869
@IsaacKing314 @hamish_todd Disclaimer: I'm dropping an end-to-end Lean proof tonight.
0
0
0
Performance Hints: Over the years, my colleague Sanjay Ghemawat and I have done a fair bit of diving into performance tuning of various pieces of code. We wrote an internal Performance Hints document a couple of years ago as a way of identifying some general principles, and we've
103
1K
8K
Grok pointed me to this fascinating research paper titled "Position: The Current AI Conference Model is Unsustainable! Diagnosing the Crisis of Centralized AI Conferences". It argues that the current paradigm of organizing conferences at a single location is pretty ridiculous and
Oh yes let's fly 30,000 people to Sydney! Crazy what irresponsible locations they choose for "AI conferences", which in reality are just glorified holidays
3
6
45
One of my favorite moments from Yejin Choi's NeurIPS keynote was this point: "it looks like a minor detail, but one thing I learned since joining and spending time at NVIDIA is that all these, like, minor details, implementation details matter a lot" -- I think this is
22
77
1K
Will be at the Google Booth until 3 p.m.!
We have more interactive demos this afternoon. Stop by the kiosks at the #NeurIPS2025 Google booth from 1pm - 5pm to learn more about: • AlphaEarth Foundations: Planetary Geospatial Insights through Satellite Embeddings • Radiology Report Structuring Powered by LangExtract and
0
0
3
Super excited for Google Workspace Studio. I've been playing with the early versions for months and it is super useful to connect Docs, Gmail, etc. with Gemini https://t.co/vzTbWGcTdd
145
169
3K
How can neural nets learn from experience without the scalar reward bottleneck? Feedback Descent enables long-term iterative improvement from text feedback. Blog post: https://t.co/GNgoZWhVTK Paper:
arxiv.org
We introduce Feedback Descent, a framework that optimizes text artifacts -- prompts, code, and molecules -- through structured textual feedback, rather than relying solely on scalar...
Following the Text Gradient at Scale: We wrote a @StanfordAILab blog post about the limitations of RL methods that learn solely from scalar rewards + a new method that addresses this. Blog: https://t.co/rJ1IcBKDoR Paper: https://t.co/75pHtElyk3
7
68
549
Long-term investments in basic university research are behind many of the innovations we take for granted today (TCP/IP, RISC processors, ...). A conversation between Magdalena Balazinska, Partha Ranganathan, Urs Hölzle and me on academia's impact on Google and the journey from
12
68
658
AlphaEarth Foundations functions like a virtual satellite, integrating huge amounts of Earth observation data into a unified digital representation to generate maps and monitoring systems from local to global scales. See it in action at the #NeurIPS2025 Google booth at 5 PM.
12
75
442
Who made the terrible conference app for #NeurIPS2025?
You can vibe code a better app than this.
0
0
0
https://t.co/1wSsXqHUV0 No better time than Thanksgiving to build : )
aistudio.google.com
The fastest path from prompt to production with Gemini
193
74
2K
3 years ago we could showcase AI's frontier w. a unicorn drawing. Today we do so w. AI outputs touching the scientific frontier: https://t.co/ALJvCFsaie Use the doc to judge for yourself the status of AI-aided science acceleration, and hopefully be inspired by a couple examples!
74
213
1K
Here's a fun example that shows how clever Nano Banana Pro is and how well it teams up with Gemini 3. The task is to hide Waldo in a busy crowd. It requires precise editing that preserves every detail in this high-resolution image, along with strong visual reasoning: you can see how
4
13
94
Leaving Meta and PyTorch: I'm stepping down from PyTorch and leaving Meta on November 17th. tl;dr: Didn't want to be doing PyTorch forever; it seemed like the perfect time to transition right after I got back from a long leave and the project built itself around me. Eleven years
498
587
11K
When we began applying diffusion to language in my lab at Stanford, many doubted it could work. That research became the Mercury diffusion LLM: 10X faster, more efficient, and now the foundation of @_inception_ai. Proud to raise $50M with support from top investors.
Today's LLMs are painfully slow and expensive. They are autoregressive and spit out words sequentially. One. At. A. Time. Our dLLMs generate text in parallel, delivering answers up to 10X faster. Now we've raised $50M to scale them. Full story from @russellbrandom in
40
81
1K