Andrew M. Dai
@iamandrewdai
Followers: 2K · Following: 488 · Media: 11 · Statuses: 118
Deep learning/LLM principal researcher (director) at Google DeepMind. Gemini Data Area Lead. Chinese name: 戴明博
San Francisco, CA
Joined February 2011
Today, I am launching @axiommathai. At Axiom, we are building a self-improving superintelligent reasoner, starting with an AI mathematician.
182
265
2K
We just launched Project Mariner in Google Search AI Mode, one of the most fun projects we've worked on lately! Behind every search query, an agentic system works quietly for you. It is maybe one of the most profound interfaces ever built.
1/5 Today, we’re introducing new agentic & personalization features in AI Mode that make Search even more useful and tailored to you. More info on the latest updates below 🧵
9
12
272
Make every thinking problem embarrassingly parallel! Looking forward to seeing what people discover.
Gemini 2.5 Deep Think now available for Ultra subscribers! Great at tackling problems that require creativity & planning, it finds the best answer by considering, revising & combining many ideas at once. A faster variation of the model that just achieved IMO gold-level. Enjoy!
0
0
11
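The "embarrassingly parallel" framing here is essentially best-of-N sampling: generate many independent candidate solutions at once, then keep the one a verifier prefers. A minimal sketch of that idea follows; `generate_candidate` and `score` are hypothetical placeholders, not the actual Deep Think system.

```python
# Sketch: parallel candidate generation + selection (best-of-N).
from concurrent.futures import ThreadPoolExecutor
import random

def generate_candidate(problem: str, seed: int) -> str:
    # Placeholder: in practice this would be a high-temperature model call.
    rng = random.Random(seed)
    return f"candidate #{rng.randint(0, 99)} for: {problem}"

def score(candidate: str) -> float:
    # Placeholder verifier / reward model; here just a dummy heuristic.
    return float(len(candidate) % 7)

def solve(problem: str, n: int = 16) -> str:
    # Each candidate is independent of the others, so generation
    # parallelizes trivially across threads (or machines).
    with ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(lambda s: generate_candidate(problem, s), range(n)))
    return max(candidates, key=score)

if __name__ == "__main__":
    print(solve("Prove that the sum of two even integers is even."))
```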
Gemini Deep Think is all you need for an official IMO gold medal! Congrats to all the participants!
Excited to share that a scaled-up version of Gemini Deep Think achieves gold-medal standard at the International Mathematical Olympiad. This result is official and certified by the IMO organizers. Watch this space, more to come soon! https://t.co/4KynCY6M6C
1
3
178
Today we are rolling out our first Gemini Embedding model, which ranks #1 on the MTEB leaderboard, as a generally available stable model. It is priced at $0.15 per million tokens and ready for production use at scale!
143
269
3K
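For reference, a minimal sketch of calling the embedding model from Python. It assumes the google-genai SDK, the model id "gemini-embedding-001", and an API key in the environment; check the current docs before relying on any of these names.

```python
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY / GOOGLE_API_KEY from the environment

docs = [
    "Retrieval corpus passage one.",
    "Retrieval corpus passage two.",
]
resp = client.models.embed_content(model="gemini-embedding-001", contents=docs)

# One embedding vector per input document.
vectors = [e.values for e in resp.embeddings]
print(len(vectors), len(vectors[0]))
```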
Very excited to share that @windsurf_ai co-founders @_mohansolo & Douglas Chen, and some of their talented team, have joined @GoogleDeepMind to help advance our work on agentic coding in Gemini. Welcome to our new teammates from Windsurf!
theverge.com
Key researchers are joining Google DeepMind, too.
12
85
1K
'Semi-supervised sequence learning' is 10 years old too. Great to see language model pretraining and supervised finetuning have since been working out for some folks. https://t.co/XHGkxMuexx w/ @quocleix (Work started off as an accident.)
"A Neural Conversational Model" is 10 years old, w/ @quocleix . TL;DR you can train a chatbot with a large neural network (~500M params!). Samples 👇 This paper was received with mixed reviews, but I'm glad all the critics are now riding the LLM wave 🌊 https://t.co/sPO47hv1Gz
0
6
58
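The recipe that paper popularized is simple: pretrain a language model on unlabeled text with next-token prediction, then reuse its weights to initialize a supervised model trained on a much smaller labeled set. A toy PyTorch sketch of that two-stage setup (dummy data, not the paper's original code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, NUM_CLASSES = 1000, 64, 2

class LM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.LSTM(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, VOCAB)  # next-token prediction head

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)

# --- Stage 1: unsupervised pretraining on "unlabeled" sequences ---
lm = LM()
opt = torch.optim.Adam(lm.parameters(), lr=1e-3)
unlabeled = torch.randint(0, VOCAB, (32, 20))  # stand-in for a large corpus
for _ in range(20):
    logits = lm(unlabeled[:, :-1])
    loss = F.cross_entropy(logits.reshape(-1, VOCAB), unlabeled[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Stage 2: supervised finetuning, initialized from the pretrained LM ---
clf_head = nn.Linear(DIM, NUM_CLASSES)
finetune_params = list(lm.embed.parameters()) + list(lm.rnn.parameters()) + list(clf_head.parameters())
opt = torch.optim.Adam(finetune_params, lr=1e-4)
labeled_x = torch.randint(0, VOCAB, (8, 20))
labeled_y = torch.randint(0, NUM_CLASSES, (8,))
for _ in range(20):
    h, _ = lm.rnn(lm.embed(labeled_x))
    loss = F.cross_entropy(clf_head(h[:, -1]), labeled_y)  # classify from the last state
    opt.zero_grad()
    loss.backward()
    opt.step()
```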
It turns out LLM data is more like oil than coal, if you refine it properly. Congratulations to the contributors of the many researcher-years of work!
1
5
47
Very excited to finally be able to talk about Gemini 2.5 (nebula) with big gains in coding! It's one of the biggest improvements we've ever seen. Let's keep breaking more walls! And the release is so big it broke the Spanish leaderboard...
BREAKING: Gemini 2.5 Pro is now #1 on the Arena leaderboard - the largest score jump ever (+40 pts vs Grok-3/GPT-4.5)! 🏆 Tested under codename "nebula"🌌, Gemini 2.5 Pro ranked #1🥇 across ALL categories and UNIQUELY #1 in Math, Creative Writing, Instruction Following, Longer
2
6
96
Try our new #1 ranked LMSYS model: Gemini Flash Thinking! Thinking, Flash and Slow!
Introducing Gemini 2.0 Flash Thinking, an experimental model that explicitly shows its thoughts. Built on 2.0 Flash’s speed and performance, this model is trained to use thoughts to strengthen its reasoning. And we see promising results when we increase inference time
0
2
14
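On the inference-time point: the current Gemini API exposes a per-request thinking budget you can dial up. A hedged sketch below, assuming the google-genai SDK and a thinking-capable model id such as "gemini-2.5-flash"; the 2.0 Flash Thinking experimental model predates this config, so treat the names as illustrative only.

```python
from google import genai
from google.genai import types

client = genai.Client()  # expects GEMINI_API_KEY / GOOGLE_API_KEY in the environment

resp = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed thinking-capable model id
    contents="How many positive integers n satisfy n^2 < 200?",
    config=types.GenerateContentConfig(
        # Larger budget = more tokens spent reasoning before answering.
        thinking_config=types.ThinkingConfig(thinking_budget=2048),
    ),
)
print(resp.text)
```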
Very apt test-of-time talk by @ilyasut! But there's still much more value and capability to come from refining crude data. @NeurIPSConf
0
0
10
Like most things in life, Project Mariner only happened thanks to a legendary team of dreamers who were willing to work through the stack to make Gemini truly agentic and capable of collaborating with computers in the same way as humans! <3
We are investing in the frontiers of agentic capabilities with a few early prototypes. Project Mariner is built with Gemini 2.0 and is able to understand and reason across information - pixels, text, code, images + forms - on your browser screen, and then uses that info to
8
14
125
Gemini 2.0 has landed! Complete with agentic capabilities. This model really impressed us during development.
🎊 Gemini 2.0 is here! 🎊 An AI model for the agentic era. The blog post is chock full of announcements: Gemini 2.0 Flash, Project Astra, Project Mariner, developer features for building agents, agents in games, and more! Watch the videos in the blog! https://t.co/SESXSq5hB8
0
0
13
Chat with the Gemini team about all things data & research! We'll be at our booth tomorrow @NeurIPSConf at noon answering your questions and describing how the data area interacts with all the other parts of Gemini!
Other members of the Gemini team and I are looking forward to chatting with @NeurIPSConf attendees tomorrow at noon at the @GoogleDeepMind / @GoogleResearch booth!
0
1
12
Over a decade ago, Google embarked on a journey to build a useful quantum computer. Today, with our latest quantum chip, Willow, we're closer to harnessing the power of quantum mechanics for real-world impact. Learn more about Willow below ⬇️ https://t.co/4QTh61HW2g
research.google
32
145
1K
Never underestimate the impact of data📀! Congratulations and thanks to contributors both inside and outside Gemini for getting us to 🥇 across the board.
What a way to celebrate one year of incredible Gemini progress -- #1🥇across the board on overall ranking, as well as on hard prompts, coding, math, instruction following, and more, including with style control on. Thanks to the hard work of everyone in the Gemini team and
3
5
88
This time-tested paper truly showed the power of supervised sequence learning. Congratulations!!
Such a great honor, thanks a lot @NeurIPSConf and congrats to my esteemed co-authors @ilyasut & @quocleix! The 2014 talk also stood the test of time IMO. Here is a slide from it (powerful models of today == large transformers). Believe it or not this talk was controversial at the
0
0
5