Andrew M. Dai

@iamandrewdai

Followers: 2K
Following: 488
Media: 11
Statuses: 118

Deep learning/LLM principal researcher (director) at Google DeepMind. Gemini Data Area Lead. Chinese name: 戴明博

San Francisco, CA
Joined February 2011
@CarinaLHong
Carina Hong
2 months
Today, I am launching @axiommathai. At Axiom, we are building a self-improving superintelligent reasoner, starting with an AI mathematician.
182
265
2K
@anmol01gulati
Anmol Gulati
3 months
We just launched Project Mariner in Google Search AI Mode — one of the most fun projects we've worked on lately! Behind every search query, an agentic system works quietly for you — maybe one of the most profound interfaces ever built.
@rmstein
Robby Stein
3 months
1/5 Today, we’re introducing new agentic & personalization features in AI Mode that make Search even more useful and tailored to you. More info on the latest updates below 🧵
9
12
272
@iamandrewdai
Andrew M. Dai
4 months
Make every thinking problem embarrassingly parallel! Looking forward to seeing what people discover.
@demishassabis
Demis Hassabis
4 months
Gemini 2.5 Deep Think now available for Ultra subscribers! Great at tackling problems that require creativity & planning, it finds the best answer by considering, revising & combining many ideas at once. A faster variation of the model that just achieved IMO gold-level. Enjoy!
0
0
11
@iamandrewdai
Andrew M. Dai
4 months
Gemini Deep Think is all you need for an official IMO gold medal! Congrats to all the participants!
@quocleix
Quoc Le
4 months
Excited to share that a scaled-up version of Gemini DeepThink achieves gold-medal standard at the International Mathematical Olympiad. This result is official and certified by the IMO organizers. Watch this space, more to come soon! https://t.co/4KynCY6M6C
1
3
178
@OfficialLoganK
Logan Kilpatrick
4 months
Today we are rolling out our first Gemini Embedding model, which ranks #1 on the MTEB leaderboard, as a generally available stable model. It is priced at $0.15 per million tokens and ready for at-scale production use!
143
269
3K
@koraykv
koray kavukcuoglu
4 months
Very excited to share that @windsurf_ai co-founders @_mohansolo & Douglas Chen, and some of their talented team, have joined @GoogleDeepMind to help advance our work in agentic coding in Gemini. Welcome to our new teammates from Windsurf!
theverge.com
Key researchers are joining Google DeepMind, too.
12
85
1K
@iamandrewdai
Andrew M. Dai
5 months
'Semi-supervised sequence learning' is also 10 years old. Great to see that language model pretraining and supervised finetuning have since been working out for some folks. https://t.co/XHGkxMuexx w/ @quocleix (Work started off as an accident).
@OriolVinyalsML
Oriol Vinyals
5 months
"A Neural Conversational Model" is 10 years old, w/ @quocleix . TL;DR you can train a chatbot with a large neural network (~500M params!). Samples 👇 This paper was received with mixed reviews, but I'm glad all the critics are now riding the LLM wave 🌊 https://t.co/sPO47hv1Gz
0
6
58
@iamandrewdai
Andrew M. Dai
5 months
It turns out LLM data is more like oil than coal, if you refine it properly. Congratulations to the contributors of the many researcher-years of work!
1
5
47
@iamandrewdai
Andrew M. Dai
8 months
Very excited to finally be able to talk about Gemini 2.5 (nebula) with big gains in coding! It's one of the biggest improvements we've ever seen. Let's keep breaking more walls! And the release is so big it broke the Spanish leaderboard...
@arena
lmarena.ai
8 months
BREAKING: Gemini 2.5 Pro is now #1 on the Arena leaderboard - the largest score jump ever (+40 pts vs Grok-3/GPT-4.5)! 🏆 Tested under codename "nebula"🌌, Gemini 2.5 Pro ranked #1🥇 across ALL categories and UNIQUELY #1 in Math, Creative Writing, Instruction Following, Longer
2
6
96
@iamandrewdai
Andrew M. Dai
11 months
Try our new #1-ranked LMSYS model: Gemini Flash Thinking! Thinking, Flash and Slow!
@JeffDean
Jeff Dean
11 months
Introducing Gemini 2.0 Flash Thinking, an experimental model that explicitly shows its thoughts. Built on 2.0 Flash’s speed and performance, this model is trained to use thoughts to strengthen its reasoning. And we see promising results when we increase inference time
0
2
14
@iamandrewdai
Andrew M. Dai
11 months
Our most advanced model is now available to try!
@JeffDean
Jeff Dean
11 months
If you're a Gemini Advanced user, try out our new gemini-exp-1206 model. It's a significant improvement across a range of different topic areas (select it from the model drop-down in the Gemini Advanced UI). 🎉
0
1
11
@iamandrewdai
Andrew M. Dai
11 months
Very apt test-of-time talk by @ilyasut! But there's still much more value and capability to come from refining crude data. @NeurIPSConf
0
0
10
@anmol01gulati
Anmol Gulati
11 months
Like most things in life, Project Mariner only happened thanks to a legendary team of dreamers who were willing to work through the stack to make Gemini truly agentic and capable of collaborating with computers in the same way as humans! <3
@sundarpichai
Sundar Pichai
11 months
We are investing in the frontiers of agentic capabilities with a few early prototypes. Project Mariner is built with Gemini 2.0 and is able to understand and reason across information - pixels, text, code, images + forms - on your browser screen, and then uses that info to
8
14
125
@iamandrewdai
Andrew M. Dai
11 months
Gemini 2.0 has landed! Complete with agentic capabilities. This model really impressed us during development.
@JeffDean
Jeff Dean
11 months
🎊 Gemini 2.0 is here! 🎊 An AI model for the agentic era. The blog post is chock full of announcements: Gemini 2.0 Flash, Project Astra, Project Mariner, developer features for building agents, agents in games, and more! Watch the videos in the blog! https://t.co/SESXSq5hB8
0
0
13
@iamandrewdai
Andrew M. Dai
11 months
Chat with the Gemini team about all things data & research! We'll be at our booth tomorrow @NeurIPSConf at noon answering your questions and describing how it interacts with all the other parts of Gemini!
@JeffDean
Jeff Dean
11 months
I and other members of the Gemini team are looking forward to chatting with @NeurIPSConf attendees at the @GoogleDeepMind / @GoogleResearch booth tomorrow at noon!
0
1
12
@JeffDean
Jeff Dean
11 months
Over a decade ago, Google embarked on a journey to build a useful quantum computer. Today, with our latest quantum chip, Willow, we're closer to harnessing the power of quantum mechanics for real-world impact. Learn more about Willow below ⬇️ https://t.co/4QTh61HW2g
research.google
32
145
1K
@iamandrewdai
Andrew M. Dai
11 months
Never underestimate the impact of data📀! Congratulations and thanks to contributors both inside and outside Gemini for getting us to 🥇 across the board.
@JeffDean
Jeff Dean
11 months
What a way to celebrate one year of incredible Gemini progress -- #1🥇across the board on overall ranking, as well as on hard prompts, coding, math, instruction following, and more, including with style control on. Thanks to the hard work of everyone in the Gemini team and
3
5
88
@iamandrewdai
Andrew M. Dai
1 year
This time-tested paper truly showed the power of supervised sequence learning. Congratulations!!
@OriolVinyalsML
Oriol Vinyals
1 year
Such a great honor, thanks a lot @NeurIPSConf and congrats to my esteemed co-authors @ilyasut & @quocleix! The 2014 talk also stood the test of time IMO. Here is a slide from it (powerful models of today == large transformers). Believe it or not this talk was controversial at the
0
0
5