Yee Whye Teh
@yeewhye
Followers
25K
Following
2K
Media
30
Statuses
1K
Find me @[email protected] Professor at @OxCSML, @oxfordstats and Research Director at @GoogleDeepMind. All opinions are my own.
Oxford, England
Joined January 2017
We're looking for an exceptional junior researcher in AI/ML with strong interests in diversity, equity and inclusion to fill this role, funded by a generous donation from @GoogleDeepMind. Deadline for applications: 5 July 2024.
🔎New Senior Postdoc Research Associate Position! We’re looking for someone to be a role model, ambassador & point of contact for emerging talent in the field of #AI or #MachineLearning. 🔗 For more details & to apply, visit: https://t.co/Zni5TqLB7P
#postdoctoralresearch
Excited to share our #NeurIPS2025 paper: Rao-Blackwellised Reparameterisation Gradients! We propose R2-G2 as a general-purpose gradient estimator for latent Gaussians and as the Rao-Blackwellisation of reparam gradients. Joint work with @thdbui @GeorgeDeligian9 @yeewhye
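For background: R2-G2 itself is defined in the paper, but the reparameterisation gradient it Rao-Blackwellises is standard. A minimal NumPy sketch for a latent Gaussian, using a toy objective f(z) = z² (all names here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparam_grad(mu, sigma, n_samples=10_000):
    """Monte Carlo reparameterisation gradients of E_q[f(z)] for
    q = N(mu, sigma^2) and the toy objective f(z) = z**2."""
    # reparameterise: z = mu + sigma * eps, eps ~ N(0, 1), so
    #   d/dmu    E[f(z)] = E[f'(z)]
    #   d/dsigma E[f(z)] = E[f'(z) * eps]
    eps = rng.standard_normal(n_samples)
    z = mu + sigma * eps
    df = 2 * z  # f'(z) for f(z) = z**2
    return df.mean(), (df * eps).mean()

g_mu, g_sigma = reparam_grad(1.0, 0.5)
# analytic check: E[z^2] = mu^2 + sigma^2, so the true gradients
# are 2*mu = 2.0 and 2*sigma = 1.0
```

The paper's contribution is conditioning out part of this Monte Carlo noise (Rao-Blackwellisation); the estimator above is the baseline being improved.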
Want to get an LLM agent to succeed in an OOD environment? We tackle the hardest case with SPA (Self-Play Agent). No extra data, tools, or stronger models. Pure self-play. We first internalize a world model via Self-Play, then we learn how to win by RL. Like a child playing
Thrilled to introduce MARCOS, our new paper that redefines AI reasoning! Arxiv: https://t.co/DKXmA4nPlo Key Breakthroughs: 🚀 15.7x faster inference than token-based CoT. 🏆 4.7% higher accuracy on GSM8K. 🧠 The first continuous reasoning method to outperform traditional CoT.
GEM❤️Tinker GEM, an environment suite with a unified interface, works perfectly with Tinker, the API by @thinkymachines that handles the heavy lifting of distributed training. In our latest release of GEM, we 1. supported Tinker and 5 more RL training frameworks 2. reproduced
I'm so excited about StochasTok. It's such a simple and effective method, leading to big wins for sub-token understanding in LLMs, with very little loss in terms of code complexity, compute cost, or overall performance. Great work @anyaasims!
🚀Introducing “StochasTok: Improving Fine-Grained Subword Understanding in LLMs”!🚀 LLMs are incredible but still struggle disproportionately with subword tasks, e.g., for character counts, wordplay, multi-digit numbers, fixing typos… Enter StochasTok, led by @anyaasims! [1/]
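The thread doesn't spell out the mechanism, and the details are in the paper; purely to illustrate what stochastic tokenisation can mean, here is a toy sketch (the function name, vocabulary, and splitting rule are assumptions, not the paper's algorithm): occasionally re-split a token into in-vocabulary sub-tokens that decode to the same string, so the model sees varied subword decompositions.

```python
import random

def stochastic_split(tokens, vocab, p=0.1, rng=None):
    """Toy stochastic tokenisation: with probability p, re-split a
    token into two in-vocabulary sub-tokens that decode to the same
    string (illustrative only; not the paper's exact algorithm)."""
    rng = rng or random.Random(0)
    out = []
    for tok in tokens:
        if len(tok) > 1 and rng.random() < p:
            cut = rng.randrange(1, len(tok))
            left, right = tok[:cut], tok[cut:]
            # only accept splits whose halves are real vocab entries
            if left in vocab and right in vocab:
                out.extend([left, right])
                continue
        out.append(tok)
    return out

vocab = {"strawberry", "straw", "berry", "st", "rawberry"}
toks = stochastic_split(["strawberry"], vocab, p=1.0)
# whichever split (or none) is chosen, the decoded text is unchanged
```

The key invariant is that every resampled tokenisation decodes to the identical string, so only the model's view of subword structure changes.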
Postdoc and research engineer opportunities working with Yee Whye!
Postdoctoral fellowships and research engineer positions available for an Oxford+Singapore project on uncertainty quantification in LLMs! https://t.co/nvAOuqmn0l Oxford deadline is Feb 26. Pls apply if interested, forward to your contacts, contact me if you have questions 🙏🙏
📣 Jobs alert: UQ in LLMs! We're looking to hire a Postdoctoral Fellow and a Research Engineer to work on uncertainty quantification in LLMs. The project is a collaboration between @UniofOxford (@yeewhye), @NTUsg (Luke Ong) and @NUSingapore (@WeeSunLee) #LLMs #hiring
Congrats, well-deserved!
Excited to share that I recently defended my DPhil 🎉 Huge thanks to my supervisors @tom_rainforth and @yeewhye, all my co-authors, especially @AdamEFoster, collaborators and mentors. Thanks to my assessors @maosbot and @samikaski for the interesting and stimulating discussion.
For AI students, researchers & entrepreneurs in London: https://t.co/e9leeQEhVb Join our Autumn Summit on Open Problems in AI! What are the next set of challenges in AI research? What are key opportunities for application? Researchers and leaders from the UK’s top AI
Thank you @ulrichpaquet for your vision, leadership and hard work!
Congratulations to the first graduates from the AI for Science Master's program at @AIMSacza 🎓 Last year, we partnered with AIMS to provide full scholarships, equipment and compute to students, giving them access to advanced studies in mathematics, AI and machine learning. 📚
❤️
What a lovely workshop on the robustness of LLMs by @ELLISforEurope Oxford. Fantastic speakers. Beautiful @KebleOxford (always love going there). Thanks @yeewhye (the CEO :P) and the wonderful team for organizing this! Learned more than at overwhelmingly big conferences
RobustLLMs workshop just started :)
📢 We are about to kick off the RobustLLMs workshop in Oxford! The workshop features an amazing speaker line-up. The talks will also be streamed on zoom. Don't miss out! 🚀 Zoom 🔗 https://t.co/A0mxNXBpzk Workshop details https://t.co/UTVylxHufu
We are excited to showcase Women in Machine Learning, highlighting Jessica Schrouff, a Senior Research Scientist at Google DeepMind! 🌟 If you or someone you know is excelling in Machine Learning, we want to showcase your work too. Fill out the Google form: https://t.co/j3SO8qQjww
Come join us at the #ICML2024 poster session to learn more about our paper Context-Guided Diffusion for Out-of-Distribution Molecular and Protein Design 📍Poster: Jul 25, 1130am, Hall C #111 📄 https://t.co/sy3lIGT8wT 💻 https://t.co/MemZdi6rHF w/ @leoklarner @OPIGlets @yeewhye
Generative models for molecular optimization & protein design often rely on data-driven guidance functions for conditional sample generation. Our new #ICML2024 paper presents a simple but effective approach to improve their performance in OOD settings. https://t.co/Q1iLAFjDjY
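The context-guided method itself is in the paper; as generic background on guidance functions, here is a toy 1-D sketch (plain NumPy; the score model, guidance target, and sampler settings are all assumptions) of how a data-driven guidance gradient is typically added to an unconditional score during sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

def score_model(x, t):
    # stand-in for a learned unconditional score; here the exact
    # score of N(0, 1): grad_x log p(x) = -x  (toy assumption)
    return -x

def guidance_grad(x, t, target=2.0):
    # toy data-driven guidance, grad_x log p(y | x): pulls samples
    # toward a desired property value `target`
    return target - x

def guided_sample(n_steps=500, step=0.01):
    """Langevin-style sampler using the combined score:
    unconditional score plus guidance gradient."""
    x = rng.standard_normal()
    for t in range(n_steps):
        drift = score_model(x, t) + guidance_grad(x, t)
        x = x + step * drift + np.sqrt(2 * step) * rng.standard_normal()
    return x

# combined score is -x + (2 - x) = 2 - 2x, so guided samples
# concentrate around x = 1 rather than the unconditional mean 0
xs = [guided_sample() for _ in range(200)]
```

The OOD failure mode the paper addresses arises when such a guidance function is queried far from its training data; the sketch above only shows the in-distribution mechanics being guided.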
Second day at EEML starts with @yeewhye's deep-dive tutorial on the latest and greatest in Bayesian Deep Learning. I am reminded of his fantastic NeurIPS'17 keynote on the same topic, which was among the highlights for me back then 🥳