Stephan Mandt
@StephanMandt
Followers: 3K · Following: 1K · Media: 41 · Statuses: 501
AI Professor @UCIrvine | Formerly @blei_lab, @Princeton | #GenAI, #Compression, #AI4Science | General Chair @aistats_conf 2025 | AI Resident @ChanZuckerberg
Irvine, California
Joined March 2015
Congrats to @FelixDrRelax, Yang Meng, and Lukas Laskowski on a NeurIPS Spotlight! 🎉 A simple idea made practical, demonstrated on event sequences, for efficiently modeling mixed discrete-continuous data with transformers.
Check out our #Neurips Spotlight paper. Point process modeling made simple. 👇
How to model event sequences with real-world variety: mixed data types, different lengths, …? Meet FlexTPP, a unified transformer framework with discrete & continuous heads for health care, complex annotations, and more! NeurIPS spotlight, Fri 11am #2102! https://t.co/swgDRCOInk
Amid all the review frustration, a big shoutout to all reviewers and area chairs. Peer feedback is a crucial step in developing papers---and it takes serious time and effort. As authors, let’s appreciate the process!
When a single telescope is projected to stream ~62 exabytes of data every year, we need better compression. Learned compression is one answer--check out our new project page here:
Made a pretty website for our ICLR 2025 work AstroCompress: neural compression for space telescopes + 320 GB of ML-ready astro image data. https://t.co/BpIvo2sZmn Has links to paper, data, code, Jupyter notebook, reviews, & ICLR video presentation.
Huge thanks to Laura Manduchi, Clara Meister & Kushagra Pandey, who led the 2-year effort of writing “On the Challenges and Opportunities in Generative AI” involving 27 authors. Coming out of a 2023 Dagstuhl Seminar I co-organized with @vincefort, @liyzhen2 & @sirbayes.
Exciting news! Our paper "On the Challenges and Opportunities in Generative AI" has been accepted to TMLR 2025. 📄
I had the pleasure of giving a talk and sharing some recent work on diffusion + compression (together with @justuswill and @StephanMandt) at the Learn to Compress workshop at #isit2025. Here are my slides: https://t.co/yycVA3gkkG Thanks again for the invitation!
✨New edition of our community-building workshop series!✨ Tomorrow at @CVPR, we invite speakers to share their stories, values, and approaches for navigating a crowded and evolving field, especially for early-career researchers. Cheeky title🤭: How to Stand Out in the
11th June at CVPR 2025 // 12:45pm - 6pm // Room 209 A-C
In this #CVPR2025 edition of our community-building workshop series, we focus on supporting the growth of early-career researchers. Join us tomorrow (Jun 11) at 12:45 PM in Room 209. Schedule: https://t.co/1fKzplQrU5 We have an exciting lineup of invited talks and candid
Thanks for the great collaboration!
ICML 25 paper on variational guidance for diffusion models accepted. Happy to share that our diffusion model guidance paper with @farrinsofian, @kpandey008, @felixDrRelax, and @StephanMandt on casting control for guidance as variational inference with auxiliary variables was
TL;DR: Guidance = variational optimal control. I'm excited to share the outcomes of this collaboration with @Tkaraletsos at the @ChanZuckerberg_ Initiative. All credit to my amazing students @farrinsofian and @kpandey008!
🚀 News! Our recent #ICML2025 paper “Variational Control for Guidance in Diffusion Models” introduces a simple yet powerful method for guidance in diffusion models — and it doesn't need model retraining or extra networks. 📄 Paper: https://t.co/nixanKxs9W 💻 Code:
#AISTATS2025 day 3 keynote by Akshay Krishnamurthy about how to do theory research on inference time compute 👍 @aistats_conf
Back from @aistats_conf in Thailand to my Zurich sabbatical—sipping coffee in the same spots Einstein once did. What a journey! Huge thanks to my entire AISTATS team: reviewers, ACs, senior ACs, and Chairs. It’s been amazing to work with you!
And last but not least... the Best Student Paper Award at #AISTATS 2025 goes to Daniel Marks and Dario Paccagnan for "Pick-to-Learn and Self-Certified Gaussian Process Approximations". Congratulations!
The #AISTATS 2025 Test of Time Award goes to ... 🥁 ... Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, Zhuowen Tu, for "Deeply Supervised Nets"! Congratulations!
Congrats!!
Big congrats to Charles Margossian and Lawrence Saul for winning the #AISTATS 2025 Best Paper Award! "Variational Inference in Location-Scale Families: Exact Recovery of the Mean and Correlation Matrix"
#AISTATS2025 is off to a strong start! First keynote: Chris Holmes rethinks Bayesian inference through the lens of predictive distributions—introducing tools like martingale posteriors. 🌴🌴🤖🎓
Proud to share TranscriptFormer - a generative model for single-cell transcriptomics, trained on 112 million cells across 12 species spanning 1.5 billion years of evolution. TranscriptFormer is distinct in being both generative and multi-species. Built entirely on raw gene
Single-cell transcriptomics has revolutionized our understanding of cellular diversity, but integrating this knowledge across evolutionary distances remains challenging. Here we present TranscriptFormer…
Congratulations to @MetodJazbec and the team on winning the ICLR QUESTION workshop's Best Paper Award! 🎓
Great to see that our Generative Uncertainty won the best paper award at the ICLR QUESTION workshop ( https://t.co/5O1iUK2Kmv). If you're interested in what Bayesian/ensembling methods can bring to the world of diffusion models, check out the paper 👇 https://t.co/41SII4NAUq
AABI 2025 has just concluded. We had a great time connecting with each other and listening to great talks about probabilistic ML and its applications in Singapore 🇸🇬 Looking forward to AABI 2026! Stay tuned!