Michael Zhang

@michaelrzhang

Followers: 2K · Following: 1K · Media: 53 · Statuses: 439

neural networks / robotics research @amazonscience @UofT. Prev: @UCBerkeley @VectorInst. Journey before destination.

Boston
Joined August 2017
@michaelrzhang
Michael Zhang
16 days
I'm at NeurIPS! 🌴 I've been thinking more about how large models can synergize with approaches that provide guarantees, like classical planning and combinatorial optimization. Please reach out if you want to chat or catch up!
Replies: 0 · Retweets: 0 · Likes: 18
@kchonyc
Kyunghyun Cho
9 days
i gave a keynote talk at NeurIPS'25 just last week. here's the slide deck (link below) i've used to share my thoughts on who we are and what we do.
Replies: 3 · Retweets: 28 · Likes: 248
@pcastr
Pablo Samuel Castro
11 days
Sixth, and last, #runconference at #NeurIPS2025 had the best turnout yet! Thanks to everyone who came out, until the next conference! The second picture is of those who stayed until the end 😅🏅 🤖🏃🏾👋🏾
@pcastr
Pablo Samuel Castro
12 days
Fifth #runconference at #NeurIPS2025, good turnout again! Tomorrow is the last run (at least with me), so if you've had FOMO this week, you have one more chance! 🤖🏃🏾
Replies: 9 · Retweets: 7 · Likes: 98
@michaelrzhang
Michael Zhang
14 days
You can make your own sticker at the @AmazonScience booth!
Replies: 0 · Retweets: 1 · Likes: 8
@jxmnop
dr. jack morris
14 days
Wondering how to attend an ML conference the right way? Ahead of NeurIPS 2025 (30k attendees!), here are ten pro tips: 1. Your main goals: (i) meet people (ii) regain excitement about work (iii) learn things – in that order. 2. Make a list of papers you like
Replies: 28 · Retweets: 127 · Likes: 1K
@michaelrzhang
Michael Zhang
17 days
The Rocky Mountains are so cool
Replies: 1 · Retweets: 0 · Likes: 8
@lazar_atan
Lazar Atanackovic
19 days
I’ll be at @NeurIPSConf this week! I am also actively recruiting grad students for my group at U Alberta and Amii 🇨🇦. If you’re interested in ML/generative models for bio🧫 and biochemistry🧬, feel free to reach out! Looking forward to catching up with everyone.
Replies: 0 · Retweets: 5 · Likes: 30
@michaelrzhang
Michael Zhang
30 days
I love this description of Toronto: "It’s the unlikely combination of friendliness, energy, culture, ease of movement, safety, and sheer human variety."
Replies: 0 · Retweets: 1 · Likes: 14
@TacoCohen
Taco Cohen
2 months
Exactly. I learned a ton of math during my PhD, and it was fun and easy *because I had a goal* to use it in my research. Coding it up is also a great way to detect gaps in your understanding. Totally different from learning in class. Another common fallacy is that you need to
@jeremyphoward
Jeremy Howard
2 months
This is empirically incorrect. Hundreds of thousands of https://t.co/GEOZunWoXj students have learned the required math for ML as they go. By *far* the biggest problem we've seen is from people who try to learn the math first. They learn the wrong stuff & have no context.
Replies: 22 · Retweets: 67 · Likes: 1K
@michaelrzhang
Michael Zhang
2 months
What is the MNIST / CIFAR-10 / ImageNet equivalent for post-training/RL algorithms?
Replies: 0 · Retweets: 0 · Likes: 4
@michaelrzhang
Michael Zhang
4 months
Life update: I've recently moved to Boston and started a job @AmazonScience! I'm excited to explore - please share local recs and let me know if you want to grab coffee! (picture: White Mountains, NH)
Replies: 14 · Retweets: 2 · Likes: 195
@phil_fradkin
Phil Fradkin
4 months
The news is out! We're starting Blank Bio to build a computational toolkit powered by RNA foundation models. If you want to see me flip between being eerily still and overly animated, check out the video below! The core hypothesis is that RNA is the most customizable molecule
@ycombinator
Y Combinator
4 months
Blank Bio (@blankbio_) is building foundation models to power a computational toolkit for RNA therapeutics, starting with mRNA design and expanding to target ID, biomarker discovery, and more. https://t.co/7VRxSRgSKK Congrats on the launch, @hsu_jonny, @phil_fradkin & @ianshi3!
Replies: 13 · Retweets: 25 · Likes: 178
@lilianweng
Lilian Weng
7 months
Giving your models more time to think before prediction, like via smart decoding, chain-of-thought reasoning, latent thoughts, etc., turns out to be quite effective for unblocking the next level of intelligence. New post is here :) “Why we think”:
lilianweng.github.io
Special thanks to John Schulman for a lot of super valuable feedback and direct edits on this post. Test time compute (Graves et al. 2016, Ling et al. 2017, Cobbe et al. 2021) and Chain-of-thought...
Replies: 105 · Retweets: 474 · Likes: 3K
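
[Editor's note: a minimal sketch of one recipe from the test-time-compute family the post surveys — sample several chain-of-thought traces and majority-vote the final answers (self-consistency). The sample_fn and extract_answer callables are hypothetical stand-ins for a model API and an answer parser, not anything from the post itself.]

from collections import Counter
from typing import Callable

def self_consistency(
    sample_fn: Callable[[str], str],       # hypothetical LLM call: prompt -> completion
    extract_answer: Callable[[str], str],  # hypothetical parser: CoT trace -> final answer
    question: str,
    n_samples: int = 16,
) -> str:
    """Spend more test-time compute: sample several chain-of-thought
    traces and majority-vote their final answers."""
    prompt = f"Q: {question}\nLet's think step by step."
    answers = [extract_answer(sample_fn(prompt)) for _ in range(n_samples)]
    # Raising n_samples buys more "thinking time"; accuracy typically
    # improves until the vote saturates.
    return Counter(answers).most_common(1)[0][0]

Raising n_samples is the knob that trades extra inference compute for accuracy, which is the scaling axis the post is about.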
@alexalbert__
Alex Albert
8 months
We wrote up what we've learned about using Claude Code internally at Anthropic. Here are the most effective patterns we've found (many apply to coding with LLMs generally):
Replies: 59 · Retweets: 542 · Likes: 5K
@AndrewYNg
Andrew Ng
9 months
Some people today are discouraging others from learning programming on the grounds AI will automate it. This advice will be seen as some of the worst career advice ever given. I disagree with the Turing Award and Nobel prize winner who wrote, “It is far more likely that the
deeplearning.ai
The Batch AI News and Insights: Some people today are discouraging others from learning programming on the grounds AI will automate it.
Replies: 521 · Retweets: 3K · Likes: 12K
@emollick
Ethan Mollick
10 months
The past 18 months have seen the most rapid change in human written communication ever. By September 2024, 18% of financial consumer complaints, 24% of press releases, 15% of job postings & 14% of UN press releases showed signs of LLM writing. And the method undercounts true use.
Replies: 28 · Retweets: 278 · Likes: 1K
@Shalev_lif
Shalev Lifshitz @ NeurIPS 🌴
10 months
Hot off the Servers 🔥💻 --- we’ve found a new approach for scaling test-time compute! Multi-Agent Verification (MAV) scales the number of verifier models at test-time, which boosts LLM performance without any additional training. Now we can scale along two dimensions: by
Replies: 9 · Retweets: 50 · Likes: 257
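
[Editor's note: a rough sketch of the mechanism as the tweet describes it — sample n candidate outputs, let m verifier models each vote on every candidate, and keep the most-approved one. The verifiers callables are assumed stand-ins for separate verifier models, not the paper's actual interface.]

from typing import Callable, Sequence

def best_of_n_with_verifiers(
    candidates: Sequence[str],                   # n sampled outputs from the base model
    verifiers: Sequence[Callable[[str], bool]],  # m verifier models, each casting a vote
) -> str:
    """Return the candidate approved by the most verifiers."""
    return max(candidates, key=lambda c: sum(v(c) for v in verifiers))

The two test-time scaling knobs are then len(candidates) and len(verifiers): the "scale along two dimensions" point in the tweet.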
@alexalbert__
Alex Albert
10 months
One of the things we've been most impressed by internally at Anthropic is Claude 3.7 Sonnet's one-shot code generation ability. Here are a few of my favorite examples I've seen on here over the past day:
Replies: 73 · Retweets: 188 · Likes: 4K
@michaelrzhang
Michael Zhang
10 months
87 and 97 both getting golden goals is poetic.
Replies: 0 · Retweets: 1 · Likes: 8
@JJWatt
JJ Watt
10 months
It’s just incredible how much of a home run 4 Nations has been for the NHL and hockey in general. Friends who never watched a hockey game in their lives reaching out asking what the plan is for tonight’s game, what food we’re ordering, etc. Definition of growing the game.
Replies: 1K · Retweets: 4K · Likes: 80K