
Shreyas Padhy
@shreyaspadhy
Followers: 374 · Following: 1K · Media: 14 · Statuses: 188
PhD student at the University of Cambridge. Ex @GoogleAI Resident, @jhubme and @iitdelhi. I like the math of machine learning & neuroscience. Also DnD.
Cambridge, England
Joined July 2011
Check out this paper, with some really interesting insights, led by the excellent @JiajunHe614 and @YuanqiD. TLDR: Neural density samplers really need guidance imposed through Langevin annealing to make them work well.
Working on sampling and seeking a neural network ansatz? Longing for simulation-free* training approaches? We review neural samplers and present a "failed" attempt, with pitfalls and promises! Joint work with @JiajunHe614 (co-lead), Francisco Vargas …. 🧵1/n
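For readers unfamiliar with the Langevin annealing guidance mentioned in the TLDR, here is a minimal, self-contained sketch of annealed Langevin dynamics on a toy 1-D bimodal target. The target, annealing schedule, and step size are illustrative choices for this sketch, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: 1-D mixture of two unit-variance Gaussians at ±3.
MEANS = np.array([-3.0, 3.0])

def grad_log_target(x):
    # d/dx log( 0.5 N(x; -3, 1) + 0.5 N(x; 3, 1) ), computed analytically.
    w = np.exp(-0.5 * (x[:, None] - MEANS) ** 2)   # unnormalised responsibilities
    w /= w.sum(axis=1, keepdims=True)
    return (w * (MEANS - x[:, None])).sum(axis=1)

def grad_log_prior(x):
    # Broad N(0, 5^2) reference distribution to anneal away from.
    return -x / 25.0

def annealed_langevin(n_samples=2000, n_levels=50, n_steps=20, eps=0.05):
    """Run Langevin dynamics along a geometric annealing path
    pi_beta ∝ prior^(1-beta) * target^beta, with beta going 0 -> 1."""
    x = 5.0 * rng.standard_normal(n_samples)        # start from the prior
    for beta in np.linspace(0.0, 1.0, n_levels):
        for _ in range(n_steps):
            score = (1 - beta) * grad_log_prior(x) + beta * grad_log_target(x)
            x = x + eps * score + np.sqrt(2 * eps) * rng.standard_normal(n_samples)
    return x

samples = annealed_langevin()
# Annealing lets the chain populate both well-separated modes,
# which plain Langevin from a single initialisation often fails to do.
print(samples.mean(), (samples > 0).mean())
```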
Thanks for the kind words, @ArnaudDoucet1! I wanted to shout out some other great work in the same vein as ours: @julberner, @lorenz_richter, @MarcinSendera et al.; @msalbergo et al.; @junhua_c et al.
Tweeting again about sampling: my favourite 2024 Monte Carlo paper is by F. Vargas, @shreyaspadhy, D. Blessing & N. Nüsken. They propose a "simple" loss to learn the drift you need to add to Langevin to follow a fixed probability path.
Come chat with folks from our group!
We're excited to be at #NeurIPS Vancouver! See the papers we'll be presenting at the main conference below:
RT @AtinaryTech: Atinary @ #NeurIPS in Vancouver this week 🍁. Connect with our #AI #ML researchers @VictorSabanza & @shreyaspadhy. Our rese….
Finally, we're presenting a new symmetry-aware generative model that discovers which (approximate) symmetries exist in data for improved data efficiency. Catch co-author @JamesAllingham on Friday, Dec 13, 06:00 at Poster Session 6 East, #2500.
I'll be at NeurIPS next week, presenting our work "A Generative Model of Symmetry Transformations." In it, we propose a symmetry-aware generative model that discovers which (approximate) symmetries are present in a dataset, and can be leveraged to improve data efficiency. 🧵⬇️
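As a rough intuition for what "discovering which (approximate) symmetries exist in data" means, here is a toy invariance score: transform the data by a candidate symmetry and measure how much a summary statistic changes. This is an illustration of the idea only, not the generative model from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dataset: 2-D points symmetric under reflection about the y-axis,
# but NOT under 90-degree rotation (anisotropic: wider in x than in y).
pts = rng.standard_normal((5000, 2)) * np.array([2.0, 0.5])

def symmetry_score(data, transform):
    """Crude invariance score: distance between the empirical covariance of
    the data and of the transformed data (near 0 = approximately symmetric)."""
    t = data @ transform.T
    return np.linalg.norm(np.cov(data.T) - np.cov(t.T))

reflect = np.array([[-1.0, 0.0], [0.0, 1.0]])     # x -> -x
rotate90 = np.array([[0.0, -1.0], [1.0, 0.0]])    # 90-degree rotation

print(symmetry_score(pts, reflect))   # small: reflection is a symmetry
print(symmetry_score(pts, rotate90))  # large: rotation is not
```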
Second, we're presenting our work on speeding up marginal likelihood estimation in GPs by up to 72x without sacrificing performance! Catch co-author @JihaoAndreasLin on Thursday, Dec 12, 16:30 at Poster Session 4 East, #3910.
"Improving Linear System Solvers for Hyperparameter Optimisation in Iterative Gaussian Processes": three techniques to accelerate marginal likelihood training in GPs by up to 72x without sacrificing performance! Check out our paper here: . (1/6)
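The "iterative" in iterative Gaussian processes refers to solving the GP linear system with matrix-vector products (e.g. conjugate gradients) rather than a dense Cholesky factorisation. A minimal sketch of that core idea on a toy problem follows; the kernel, data, and solver settings are illustrative, and this is not one of the paper's three acceleration techniques:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel on 1-D inputs.
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def conjugate_gradient(matvec, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A,
    given only a function computing A @ v."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy regression data.
X = np.linspace(-3, 3, 200)
y = np.sin(X) + 0.1 * rng.standard_normal(200)

noise = 0.01
K = rbf_kernel(X, X)
matvec = lambda v: K @ v + noise * v   # (K + sigma^2 I) v, no factorisation

alpha = conjugate_gradient(matvec, y)  # representer weights for the GP mean

# Sanity check: the CG solve matches a direct dense solve.
direct = np.linalg.solve(K + noise * np.eye(200), y)
print(np.max(np.abs(alpha - direct)))
```

Because CG only touches the kernel matrix through matvecs, the same code scales to structured or batched kernels where forming and factorising the full matrix would be the bottleneck.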
First, we're presenting our paper on efficient fine-tuning of diffusion models for conditional generation. Find us (w/ @DenkerAlexander, Francisco Vargas) on Thursday, Dec 12, 11:00 at Poster Session 3 East, #2500.
I’m really excited to be attending NeurIPS and presenting our work on efficient fine-tuning of pre-trained diffusion models for SOTA conditional generation. Come chat with us on 12th Dec (Thursday) at 11am! Thread below (🧵)
We’re looking forward to chatting with folks at NeurIPS this year about this work! Work done with excellent collaborators @DenkerAlexander, Francisco Vargas, @DidiKieran, @SimMat20, @vdutor, @BarbanoRiccardo, @MathieuEmile, @julia_tweeting_, and @pl219_Cambridge!