danielz2333 Profile Banner
Chenhui Zhang Profile
Chenhui Zhang

@danielz2333

Followers
477
Following
9K
Media
16
Statuses
4K

Engineering @googledeepmind | Prev. @mitidss @IllinoisCDS @IllinoisStat | Views are my own

Cambridge, MA
Joined April 2013
@wzihanw
Zihan "Zenus" Wang
8 days
Everything is a world model if you squint hard enough.
29
112
869
@danielz2333
Chenhui Zhang
7 days
šŸ¤”šŸ¤”šŸ¤”
@davidmbudden
Budden
7 days
@IsaacKing314 @hamish_todd Disclaimer: I'm dropping an end-to-end Lean proof tonight.
0
0
0
@JeffDean
Jeff Dean
8 days
Performance Hints

Over the years, my colleague Sanjay Ghemawat and I have done a fair bit of diving into performance tuning of various pieces of code. We wrote an internal Performance Hints document a couple of years ago as a way of identifying some general principles and we've
103
1K
8K
@zhzHNN
Hunter Zhang, Ph.D
21 days
Does @NeurIPSConf have the best poster design award? This is the one.
6
10
131
@danielz2333
Chenhui Zhang
20 days
šŸ»šŸ»
@JustinLin610
Junyang Lin
20 days
great experience in san diego neurips 2025. one of the best ideas i heard, and one i have long agreed with, is "research as a product", or as i would call it, "model as a product". as a product, what u r building should be generating productivity and providing value to the
0
0
1
@bookwormengr
GDP
21 days
One of the GOATs @JustinLin610
2
1
51
@NielsRogge
Niels Rogge
21 days
Grok pointed me to this fascinating research paper titled "Position: The Current AI Conference Model is Unsustainable! Diagnosing the Crisis of Centralized AI Conferences" It argues that the current paradigm of organizing conferences at a single location is pretty ridiculous and
@NielsRogge
Niels Rogge
21 days
Oh yes let's fly 30,000 people to Sydney! Crazy what irresponsible locations they choose for "AI conferences", which in reality are just glorified holidays
3
6
45
@chanwoopark20
Chanwoo Park
22 days
One of my favorite moments from Yejin Choi’s NeurIPS keynote was her point as follows: "it looks like a minor detail, but one thing I learned since joining and spending time at NVIDIA is that all these, like, minor details, implementation details matter a lot" -- I think this is
22
77
1K
@danielz2333
Chenhui Zhang
23 days
Will be at the Google Booth until 3 p.m.!
@GoogleResearch
Google Research
23 days
We have more interactive demos this afternoon — stop by the kiosks at the #NeurIPS2025 Google booth from 1pm - 5pm to learn more about: → AlphaEarth Foundations: Planetary Geospatial Insights through Satellite Embeddings → Radiology Report Structuring Powered by LangExtract and
0
0
3
@OfficialLoganK
Logan Kilpatrick
24 days
Super excited for Google Workspace Studio, I’ve been playing with the early versions for months and it is super useful to connect Docs, Gmail, etc with Gemini https://t.co/vzTbWGcTdd
145
169
3K
@chelseabfinn
Chelsea Finn
24 days
How can neural nets learn from experience without the scalar reward bottleneck? Feedback descent enables long-term iterative improvement from text feedback. Blog post: https://t.co/GNgoZWhVTK Paper:
arxiv.org
We introduce \textit{Feedback Descent}, a framework that optimizes text artifacts -- prompts, code, and molecules -- through structured textual feedback, rather than relying solely on scalar...
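The core idea in the abstract — iterating on a text artifact using structured textual feedback rather than a single scalar reward — can be sketched as a simple loop. This is a toy illustration of the general pattern only, not the paper's Feedback Descent algorithm; the `critic` and `editor` functions and their rules are hypothetical stand-ins for LLM calls.

```python
# Toy sketch of feedback-driven iteration (NOT the paper's algorithm):
# instead of a scalar reward, the critic returns text that says *what*
# to change, and the editor applies that change to the artifact.

def critic(artifact):
    """Return textual feedback, or None when the artifact is acceptable."""
    if "please" not in artifact:
        return "add a polite 'please'"
    if not artifact.endswith("."):
        return "end with a period"
    return None

def editor(artifact, feedback):
    """Revise the artifact according to the feedback (toy rules)."""
    if feedback == "add a polite 'please'":
        return "please " + artifact
    if feedback == "end with a period":
        return artifact + "."
    return artifact

artifact = "summarize this report"
history = []
for _ in range(10):  # iterate until the critic is satisfied
    feedback = critic(artifact)
    if feedback is None:
        break
    history.append(feedback)
    artifact = editor(artifact, feedback)

print(artifact)  # "please summarize this report."
```

The point of the pattern is that the feedback channel carries a direction of improvement, not just a magnitude, so each iteration knows what to fix rather than only how badly it scored.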
@yoonholeee
Yoonho Lee
26 days
Following the Text Gradient at Scale

We wrote a @StanfordAILab blog post about the limitations of RL methods that learn solely from scalar rewards + a new method that addresses this Blog: https://t.co/rJ1IcBKDoR Paper: https://t.co/75pHtElyk3
7
68
549
@JeffDean
Jeff Dean
25 days
Long-term investments in basic university research are behind many of the innovations we take for granted today (TCP/IP, RISC processors, ...). A conversation between Magdalena Balazinska, Partha Ranganathan, Urs Hölzle and me on academia's impact on Google and the journey from
12
68
658
@GoogleResearch
Google Research
25 days
AlphaEarth Foundations functions like a virtual satellite, integrating huge amounts of Earth observation data into a unified digital representation to generate maps and monitoring systems from local to global scales. See it in action at the #NeurIPS2025 Google booth at 5 PM.
12
75
442
@danielz2333
Chenhui Zhang
24 days
Who made the terrible conference app for #NeurIPS2025 šŸ˜… You can vibe code a better app than this.
0
0
0
@OfficialLoganK
Logan Kilpatrick
1 month
https://t.co/1wSsXqHUV0 No better time than thanksgiving to build : )
aistudio.google.com
The fastest path from prompt to production with Gemini
193
74
2K
@SebastienBubeck
Sebastien Bubeck
1 month
3 years ago we could showcase AI's frontier w. a unicorn drawing. Today we do so w. AI outputs touching the scientific frontier: https://t.co/ALJvCFsaie Use the doc to judge for yourself the status of AI-aided science acceleration, and hopefully be inspired by a couple examples!
74
213
1K
@sainingxie
Saining Xie
1 month
Here's a fun example that shows how clever nano🍌 pro is and how well it teams up with Gemini 3ļøāƒ£. The task is to hide Waldo in a busy crowd. It requires precise editing that preserves every detail in this high resolution image, along with strong visual reasoning: you can see how
4
13
94
@DynamicWebPaige
šŸ‘©ā€šŸ’» Paige Bailey
1 month
gonna be a good week, y'all :)
52
31
915
@soumithchintala
Soumith Chintala
2 months
Leaving Meta and PyTorch

I'm stepping down from PyTorch and leaving Meta on November 17th. tl;dr: Didn't want to be doing PyTorch forever, seemed like the perfect time to transition right after I got back from a long leave and the project built itself around me. Eleven years
498
587
11K
@StefanoErmon
Stefano Ermon
2 months
When we began applying diffusion to language in my lab at Stanford, many doubted it could work. That research became Mercury diffusion LLM: 10X faster, more efficient, and now the foundation of @_inception_ai. Proud to raise $50M with support from top investors.
@_inception_ai
Inception
2 months
Today’s LLMs are painfully slow and expensive. They are autoregressive and spit out words sequentially. One. At. A. Time. Our dLLMs generate text in parallel, delivering answers up to 10X faster. Now we’ve raised $50M to scale them. Full story from @russellbrandom in
40
81
1K
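The speed claim in the tweet comes down to call counts: an autoregressive decoder makes one model call per token, while a diffusion-style decoder updates every position on each of a small number of denoising passes. Below is an illustrative contrast of the two loop shapes only — not Inception's implementation; all function names and the toy "models" are hypothetical.

```python
# Contrast the call pattern of autoregressive vs parallel (diffusion-style)
# decoding. The toy step functions stand in for real model forward passes.

def autoregressive_decode(step_fn, length):
    """One model call per token: `length` sequential calls."""
    tokens, calls = [], 0
    for _ in range(length):
        tokens.append(step_fn(tokens))  # next token depends on all prior ones
        calls += 1
    return tokens, calls

def diffusion_decode(denoise_fn, length, num_steps):
    """Each call refines every position at once: only `num_steps` calls."""
    seq = ["<mask>"] * length
    calls = 0
    for _ in range(num_steps):
        seq = denoise_fn(seq)
        calls += 1
    return seq, calls

# Toy stand-ins: the autoregressive step emits the next indexed token;
# the denoiser fills all positions in a single pass.
ar_tokens, ar_calls = autoregressive_decode(lambda ctx: f"t{len(ctx)}", 16)
dl_tokens, dl_calls = diffusion_decode(
    lambda seq: [f"t{i}" for i in range(len(seq))], 16, num_steps=4)

print(ar_calls, dl_calls)  # 16 sequential calls vs 4 parallel passes
```

Since the sequential loop's latency grows with output length while the parallel loop's grows only with the number of denoising steps, the gap widens as outputs get longer — the source of speedups like the "up to 10X" figure claimed above.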