
Kushagra Pandey
@kpandey008
Followers: 98 · Following: 74 · Media: 4 · Statuses: 60
CS PhD at @UCIrvine | I like building Generative Models | Prev. intern @Bosch_AI @nvidia | Alumni @IITKanpur @iitbbs | Opinions are my own.
Irvine, California
Joined October 2021
Excited to present some recent work on developing "Fast Samplers for Inverse Problems in Iterative Refinement Models" accepted at NeurIPS'24. A short 🧵
Exciting news! Our paper "On the Challenges and Opportunities in Generative AI" has been accepted to TMLR 2025. 📄
arxiv.org
The field of deep generative modeling has grown rapidly in the last few years. With the availability of massive amounts of training data coupled with advances in scalable unsupervised learning...
We'll present GenMol tomorrow morning at #ICML2025. Come check out our poster and chat about discrete diffusion and small molecule generation. See you at 11 am, West Exhibition Hall B2-B3 #W-113
📢📢 Elucidated Rolling Diffusion Models (ERDM) How can we stably roll out diffusion models for sequence generation in data-scarce dynamical systems? We elucidate the design of rolling diffusion, inspired by prob. flow ODEs and nonisotropic noise. 📄 https://t.co/bQ1IKnZDpj
📢 Test-time Scaling of SDE Diffusion Models Does optimizing the noise trajectory improve sample quality? Significantly. We propose ϵ-greedy search, a simple contextual bandit method matching optimal MCTS in noise space. 📄 https://t.co/ON3ohhttIF 💻 https://t.co/jN1Na1lUkp
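Not the paper's implementation — just a minimal NumPy sketch of the ε-greedy idea mentioned above: at each step, explore a random candidate noise with probability ε, otherwise exploit the candidate with the highest reward. The `reward` function, candidate count, and vector size here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy_noise(candidates, reward_fn, eps=0.1):
    """Pick the next sampling noise: explore with prob. eps, else exploit best reward."""
    if rng.random() < eps:
        return candidates[rng.integers(len(candidates))]  # explore: random candidate
    rewards = [reward_fn(z) for z in candidates]
    return candidates[int(np.argmax(rewards))]            # exploit: best-scoring candidate

# Toy setup: 8 candidate noise vectors; reward = cosine similarity to a target direction.
target = rng.standard_normal(16)
reward = lambda z: float(z @ target / (np.linalg.norm(z) * np.linalg.norm(target)))
candidates = [rng.standard_normal(16) for _ in range(8)]
best = epsilon_greedy_noise(candidates, reward, eps=0.0)  # eps=0: pure exploitation
```

In the actual method the reward would come from the task (e.g. a quality or likelihood score) and the search runs along the whole SDE noise trajectory; see the linked code for details.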
@ArashVahdat Heavy-tailed diffusion models: a few lines of code to improve the ability of your diffusion model to handle extreme events in heavy-tailed distributions. tl;dr: replace your Gaussian distribution with a tuned Student-t one. @ArashVahdat #uncv2025 #cvpr2025
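The tl;dr above (swap Gaussian noise for a tuned Student-t) can be sketched generically. This is an illustrative NumPy snippet, not the paper's implementation; the choice of `nu` and the unit-variance rescaling are my assumptions to make the swap drop-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def heavy_tailed_noise(shape, nu=5.0):
    """Student-t noise with nu degrees of freedom, rescaled to unit variance.
    For nu > 2 a Student-t has variance nu / (nu - 2), so dividing by its
    standard deviation makes this a drop-in replacement for N(0, 1) noise
    while keeping heavier tails (more probable extreme events)."""
    t = rng.standard_t(nu, size=shape)
    return t / np.sqrt(nu / (nu - 2.0))

# Usage: wherever a sampler draws eps = rng.standard_normal(shape), swap in:
eps_heavy = heavy_tailed_noise((4, 4), nu=5.0)
```

Smaller `nu` gives heavier tails (and infinite variance for `nu <= 2`), so in practice `nu` would be tuned to the tail behavior of the data.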
ICML 25 paper on variational guidance for diffusion models accepted. Happy to share that our diffusion model guidance paper with @farrinsofian, @kpandey008, @felixDrRelax, and @StephanMandt, on casting control for guidance as variational inference with auxiliary variables, was accepted.
🚀 News! Our recent #ICML2025 paper “Variational Control for Guidance in Diffusion Models” introduces a simple yet powerful method for guidance in diffusion models — and it doesn’t need model retraining or extra networks. 📄 Paper: https://t.co/nixanKxs9W 💻 Code:
Excited to share our recent #icml2025 work on training-free guidance in diffusion models with ideas from optimal control. The method is simple, compatible with latent and pixel space diffusion, and task-agnostic (for differentiable costs/rewards). https://t.co/KzkBw6uMbG
🔥📔 This week at #ICLR2025, our fundamental generative AI research team (GenAIR) is (co-)presenting 11 papers, 6 of which were developed or led primarily by our team members. Below, I am listing our main papers with a one-sentence summary.
📢 EquiVDM: Equivariant Video Diffusion Models with Temporally Consistent Noise What's the role of equivariance in video diffusion models? and how can warped noise help with it? Is sampling from equivariant video models any easier? Project: https://t.co/HQ0arwtBnR w/ Chao Liu.
Excited to present some recent work on developing "Progressive Compression with Universally Quantized Diffusion Models", accepted as an Oral at ICLR'25. 🧵 1/4
My GTC talk highlighting some of the Gen AI for science projects from my team at NVIDIA and the lessons we've learned along the way is now publicly available. Bonus: I used some new pictures of Peanut in the presentation. https://t.co/dc5NIU5QRk
nvidia.com
Diffusion models have transformed generative AI by enabling breakthroughs in diverse applications, such as images, videos, and speech
Can discrete diffusion-based generative models outperform GPT-based next-token predictive models in molecular tasks? Learn about GenMol, NVIDIA’s new #DrugDiscovery Swiss Army Knife! paper: https://t.co/C1xxHMcHE9 blog: https://t.co/r5JfQj4dUx demo:
developer.nvidia.com
Traditional computational drug discovery relies almost exclusively on highly task-specific computational models for hit identification and lead optimization. Adapting these specialized models to new…
🚀 Internship Opportunity @ NVIDIA 🚀 We are seeking highly motivated Ph.D. students in #ML and #AI to join the Fundamentals of Gen-AI Research Group as summer #interns in 2025! #NVIDIA #GenerativeAI #PhDInternship
I am happy to announce that the first draft of my RL tutorial is now available. https://t.co/SjMdabl0yW
Thrilled to share that my group at #UCI is partnering with the @ChanZuckerberg Initiative to advance generative AI for science and cell biology! With world-class domain expertise and compute support, we’re pushing boundaries— all research outcomes will be open source. 🚀
Excited to share we are partnering with new AI residents @StephanMandt, @FrancescoLocat8, @fpcasale and their labs at @UCIrvine, @ISTAustria, and @HelmholtzMunich in support of our work at @cziscience to build virtual cell models that can accelerate scientific discovery and
My group will be hiring several PhD students next year. I can’t reveal the details before the official announcement at NeurIPS, but this involves an exciting collaboration with a well-known non-profit on #AI4Science and serious compute power. Stay tuned and apply at UCI!
[5/n] Joint work with my amazing collaborators Ruihan Yang and @StephanMandt. Code: https://t.co/vjL52Tw3FB Project Page + more results:
[4/n] Also works for noisy and non-linear inverse problems. Some qualitative results for noisy superres (NFE=5) 👇
[3/n] The result? High-fidelity reconstructions in as few as 5 steps (with comparable performance to methods requiring 4-200x more compute) on inverse problems like deblurring, super-res and inpainting. Some qualitative results (NFE=5) 👇
[2/n] We present "Conditional Conjugate Integrators", a conditional extension of our recent ICLR work ( https://t.co/SDeIKXbCJS), which projects guided diffusion dynamics to a more "friendly" space for sampling.