
Adam Fisch
@adamjfisch
Followers: 1K · Following: 488 · Media: 19 · Statuses: 302
Research Scientist @ Google DeepMind | Formerly: PhD @ MIT EECS.
Joined August 2017
Work co-led with @ml_angelopoulos, whom we had the pleasure of briefly hosting here at @GoogleDeepMind for this collaboration, together with my GDM and GR colleagues @jacobeisenstein, @JonathanBerant, and Alekh Agarwal.
Replies: 2 · Reposts: 1 · Likes: 3
RT @JonathanBerant: Hi ho! New work: With amazing collabs @jacobeisenstein @jdjdhekchbdjd @adamjfisch @ddua17 @fan…
Replies: 0 · Reposts: 17 · Likes: 0
RT @stats_stephen: Important topic, but this is more of a quick-start guide. For cutting-edge research on LLM evals, see these papers usin…
Replies: 0 · Reposts: 6 · Likes: 0
RT @ml_angelopoulos: 🚨 New Textbook on Conformal Prediction 🚨 “The goal of this book is to teach the reader about…
Replies: 0 · Reposts: 90 · Likes: 0
Check out our new paper on Recursive Transformers. Great having Sangmin here at @GoogleDeepMind to lead it! Particularly excited about the potential of continuous depth-wise batching for much better early-exiting batch throughput.
🚀 Excited to share our latest research @GoogleDeepMind on ♻️ Recursive Transformers! We make smaller LMs by "sharing parameters" across layers. A novel serving paradigm, ✨ Continuous Depth-wise Batching, with 🏃 Early-Exiting could significantly boost their decoding speed! 🧵👇
Replies: 2 · Reposts: 4 · Likes: 30
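A minimal, illustrative sketch of the core idea (one set of layer weights reused across depth, plus a per-token early exit); all names, shapes, and the confidence-threshold exit rule here are stand-ins, not the paper's actual architecture or API:

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 16, 100
W = rng.normal(scale=0.1, size=(d, d))        # ONE shared layer's weights
W_out = rng.normal(scale=0.1, size=(d, vocab))

def shared_block(h):
    # Stand-in for a full transformer layer; the point is that the same
    # W is reused at every depth ("sharing parameters" across layers).
    return np.tanh(h @ W) + h

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def recursive_forward(h, max_loops=4, threshold=0.9):
    """Apply the shared block up to max_loops times; exit early once the
    output distribution is confident enough (hypothetical exit rule)."""
    for depth in range(1, max_loops + 1):
        h = shared_block(h)
        probs = softmax(h @ W_out)
        if probs.max() >= threshold:          # confident -> stop looping
            return probs, depth
    return probs, max_loops

probs, depth = recursive_forward(rng.normal(size=d))
print(f"exited at depth {depth}")
```

Because every loop reuses the same weights, tokens that are currently at different depths can share a single forward batch, which is what makes continuous depth-wise batching (and hence better early-exit throughput) possible.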
RT @aviral_kumar2: This work was led by the amazing @setlur_amrith during his internship at Google Research. With @nagpalchirag, @adamjfisc…
Replies: 0 · Reposts: 2 · Likes: 0
RT @aviral_kumar2: 🚨 New paper led by @setlur_amrith on process rewards for reasoning! Our PRMs that model a specific notion of "progress" re…
Replies: 0 · Reposts: 19 · Likes: 0
RT @setlur_amrith: 🚨 Exciting new results with dense process reward models (PRMs) for reasoning. Our PRMs scale ✅ search compute by 1.5-5x…
Replies: 0 · Reposts: 41 · Likes: 0
@GoogleDeepMind @GoogleResearch @ml_angelopoulos Check out the paper for more details! Fun work done together with a great team: @maynez_joshua, @rhofour, @bhuwandhingra, @amirgloberson, and @professorwcohen.
Replies: 0 · Reposts: 0 · Likes: 3
@GoogleDeepMind @GoogleResearch @ml_angelopoulos In particular, when the data / autorater performance is heterogeneous (we partition the data based on autorater confidence), we find that this stratified prediction-powered approach gives substantially tighter confidence intervals for parameters of interest, such as the mean LLM accuracy.
Replies: 1 · Reposts: 0 · Likes: 3
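To make the stratification concrete: a hedged sketch of how per-stratum prediction-powered estimates (see the PPI tweet just below) might be combined, with strata defined by autorater-confidence buckets; `stratified_ppi_mean` and its inputs are hypothetical names, not the paper's code:

```python
import numpy as np

def stratified_ppi_mean(strata, weights):
    """Combine per-stratum PPI mean estimates via stratified sampling.

    strata:  list of (y_labeled, yhat_labeled, yhat_unlabeled) numpy
             arrays per stratum, e.g. one stratum per autorater-confidence
             bucket (all names here are illustrative)
    weights: population proportion of each stratum (should sum to 1)
    """
    theta, var = 0.0, 0.0
    for (y, yhat, yhat_u), w in zip(strata, weights):
        n, N = len(y), len(yhat_u)
        rect = y - yhat                       # rectifier: autorater error
        theta += w * (yhat_u.mean() + rect.mean())
        # Independent strata: variances add with squared weights.
        var += w**2 * (yhat_u.var(ddof=1) / N + rect.var(ddof=1) / n)
    return theta, np.sqrt(var)                # estimate and its std. error
```

High-confidence strata contribute near-zero rectifier variance, so the pooled interval tightens relative to treating the data as one homogeneous pool.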
@GoogleDeepMind @GoogleResearch The PPI work of @ml_angelopoulos et al. allows us to leverage the labeled data to debias the automatic predictions, so that we can get precise, valid confidence intervals for important population parameters. We further improve these estimates using stratified sampling.
Replies: 1 · Reposts: 0 · Likes: 4
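For readers who want the mechanics: a minimal sketch of the prediction-powered estimate of a mean, following the PPI recipe at a high level; the function name and the 95% normal interval are illustrative choices, not necessarily those of the paper:

```python
import numpy as np
from scipy import stats

def ppi_mean_ci(y_labeled, yhat_labeled, yhat_unlabeled, alpha=0.05):
    """Prediction-powered confidence interval for a population mean.

    y_labeled:      human labels on the small labeled set (size n)
    yhat_labeled:   autorater predictions on that same labeled set
    yhat_unlabeled: autorater predictions on the large unlabeled set (size N)
    """
    n, N = len(y_labeled), len(yhat_unlabeled)
    rectifier = y_labeled - yhat_labeled      # measured autorater bias
    # Cheap predictions carry the bulk of the estimate; the labeled data
    # debias it.
    theta = yhat_unlabeled.mean() + rectifier.mean()
    se = np.sqrt(yhat_unlabeled.var(ddof=1) / N + rectifier.var(ddof=1) / n)
    z = stats.norm.ppf(1 - alpha / 2)
    return theta, (theta - z * se, theta + z * se)
```

Because N can be huge and the rectifier variance shrinks as the autorater improves, the resulting interval can be far tighter than one built from the n human labels alone.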
@GoogleDeepMind @GoogleResearch Reliable LLM eval is challenging. We can use auto metrics (e.g., LLM-as-a-judge), which are cheap but possibly inaccurate. Or we can do manual annotation, which is more accurate but expensive. The tradeoffs can vary depending on the subdomain (some are easier than others)!
Replies: 1 · Reposts: 1 · Likes: 4
Excited to share new work from @GoogleDeepMind / @GoogleResearch on improving LLM evals using ML predictions together with a simple but effective stratified sampling approach that strategically divides the underlying data for better performance. Paper:
Replies: 5 · Reposts: 25 · Likes: 125
RT @raymin0223: 🚨 Check out our new paper, Block Transformer! We propose an efficient architecture with Global-to-Local language modeling…
Replies: 0 · Reposts: 30 · Likes: 0