Alex Thiery

@alexxthiery

Followers 1K · Following 3K · Media 52 · Statuses 353

Associate Prof. of Statistics & Machine Learning, National University of Singapore (NUS)

Singapore
Joined April 2009
@ArnaudDoucet1
Arnaud Doucet
27 days
🔥 WANTED: Student Researcher to join me, @ValentinDeBort1, @thjashin, @liwenliang, @ArthurGretton at DeepMind London. You'll be working on Multimodal Diffusions for science. Apply here
6
46
315
@alexxthiery
Alex Thiery
1 month
I'm teaching Bayesian Optimization for the first time this semester, so finally had to implement it myself. Fun stuff, with a few gotchas along the way!
0
2
15
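For the curious, a minimal numpy sketch of the kind of loop described above: Gaussian-process regression with an RBF kernel plus an expected-improvement acquisition on a 1-D toy objective. The kernel, toy function, grid-based acquisition search and hyperparameters are illustrative choices, not the course's actual implementation.

# Minimal Bayesian optimisation sketch: GP posterior + expected improvement (illustrative only).
import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, length=0.3, scale=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return scale * np.exp(-0.5 * d2 / length**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(x_test, x_test)) - np.sum(v**2, axis=0)
    return mean, np.maximum(var, 1e-12)

def expected_improvement(mean, var, y_best):
    # EI for minimisation: E[max(y_best - f, 0)] under the GP posterior
    sd = np.sqrt(var)
    z = (y_best - mean) / sd
    return (y_best - mean) * norm.cdf(z) + sd * norm.pdf(z)

def objective(x):                        # toy 1-D function to minimise
    return np.sin(3 * x) + 0.5 * x**2

rng = np.random.default_rng(0)
x_obs = rng.uniform(-2, 2, size=3)       # small initial design
y_obs = objective(x_obs)
grid = np.linspace(-2, 2, 400)           # one classic gotcha: in higher dimensions you need a real inner optimiser, not a grid

for _ in range(15):
    mean, var = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mean, var, y_obs.min()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

print("best x:", x_obs[np.argmin(y_obs)], "best value:", y_obs.min())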
@StatCOupdates
Stat.CO Papers
4 months
Art B. Owen, "Better bootstrap t confidence intervals for the mean". [https://t.co/kqdORNEPTP]
0
2
11
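For reference, a small numpy sketch of the classical bootstrap-t interval for the mean that work like this builds on; the sample and constants below are made up.

# Classical bootstrap-t confidence interval for the mean (the textbook baseline).
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(size=50)             # made-up skewed sample
n, B, alpha = len(x), 5000, 0.05
xbar, se = x.mean(), x.std(ddof=1) / np.sqrt(n)

t_star = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=n, replace=True)
    t_star[b] = (xb.mean() - xbar) / (xb.std(ddof=1) / np.sqrt(n))

# Invert the studentised bootstrap distribution (note the flipped quantiles).
lo = xbar - np.quantile(t_star, 1 - alpha / 2) * se
hi = xbar - np.quantile(t_star, alpha / 2) * se
print(f"bootstrap-t {100 * (1 - alpha):.0f}% CI: ({lo:.3f}, {hi:.3f})")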
@rob_cornish
Rob Cornish
4 months
I'm looking for talented and ambitious PhD students to join me at Nanyang Technological University Singapore to work on safe and robust AI systems! Full scholarships covering tuition and a stipend are available, and are open to local and international students alike.
5
18
79
@cambUP_maths
Cambridge University Press - Mathematics
6 months
Scalable Monte Carlo for Bayesian Learning, by Professor Paul Fearnhead, Dr Christopher Nemeth, Professor Chris J. Oates and Dr Chris Sherlock. A clear and intuitive introduction to advanced topics in Markov chain Monte Carlo, with a focus on scalability. 📚 https://t.co/Ytszyn0mp1
1
4
11
@k_neklyudov
Kirill Neklyudov
6 months
Why do we keep sampling from the same distribution the model was trained on? We rethink this old paradigm by introducing Feynman-Kac Correctors (FKCs) – a flexible framework for controlling the distribution of samples at inference time in diffusion models, without re-training!
arxiv.org
While score-based generative models are the model of choice across diverse domains, there are limited tools available for controlling inference-time behavior in a principled manner, e.g. for...
@martoskreto
Marta Skreta
6 months
🧵(1/6) Delighted to share our @icmlconf 2025 spotlight paper: the Feynman-Kac Correctors (FKCs) in Diffusion. Picture this: it’s inference time and we want to generate new samples from our diffusion model. But we don’t want to just copy the training data – we may want to sample
1
28
137
@DynamicsSIAM
SIAM Activity Group on Dynamical Systems
6 months
Lecture notes: "Data-driven approaches to inverse problems" (by Carola-Bibiane Schönlieb, Zakhar Shumaylov): https://t.co/C4XrUCMNZ9 [Comments: Notes from Machine Learning: From Data to Mathematical Understanding (CIME 2023)]
arxiv.org
Inverse problems are concerned with the reconstruction of unknown physical quantities using indirect measurements and are fundamental across diverse fields such as medical imaging, remote sensing,...
2
34
154
@alexxthiery
Alex Thiery
6 months
Interesting paper: "Sequential Monte Carlo approximations of Wasserstein-Fisher-Rao gradient flows" by Francesca Crucinio & Sahani Pathiraja https://t.co/HCJ1DwPH5F
arxiv.org
We consider the problem of sampling from a probability distribution $\pi$. It is well known that this can be written as an optimisation problem over the space of probability distributions in which...
0
0
7
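For context, and in notation that may well differ from the paper's: sampling from $\pi$ can be cast as minimising $\mathrm{KL}(\rho\,\|\,\pi)$ over the space of probability distributions, and the Wasserstein-Fisher-Rao gradient flow of that objective combines a transport (Wasserstein) term with a birth-death (Fisher-Rao) term:

\min_{\rho}\ \mathrm{KL}(\rho \,\|\, \pi) = \int \rho \,\log\frac{\rho}{\pi},
\qquad
\partial_t \rho_t
  = \nabla \cdot \Big( \rho_t \, \nabla \log \frac{\rho_t}{\pi} \Big)
    \;-\; \rho_t \Big( \log \frac{\rho_t}{\pi} - \mathrm{KL}(\rho_t \,\|\, \pi) \Big).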
@alexxthiery
Alex Thiery
6 months
Geodesic path vs. KL(pi, target) gradient flow under the Fisher-Rao metric: can you tell which is which?
6
4
44
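A hint for playing along, again with notation of my own choosing: the Fisher-Rao geodesic between two densities is a great-circle interpolation of their square roots, whereas the Fisher-Rao gradient flow of KL(ρ||π) is the pure birth-death term from the flow above:

\sqrt{\rho_t} = \frac{\sin\big((1-t)\theta\big)\,\sqrt{\rho_0} + \sin(t\theta)\,\sqrt{\rho_1}}{\sin\theta},
\qquad \cos\theta = \int \sqrt{\rho_0 \, \rho_1},
\qquad
\partial_t \rho_t = -\,\rho_t \Big( \log\frac{\rho_t}{\pi} - \mathrm{KL}(\rho_t \,\|\, \pi) \Big).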
@alexxthiery
Alex Thiery
1 year
One #postdoc position is still available at the National University of Singapore @NUSingapore to work on sampling, high-dimensional data-assimilation, and diffusion/flow models. Applications are open until the end of January. Details: https://t.co/yMILtiJTz6
0
6
38
@alexxthiery
Alex Thiery
1 year
Variational approximation with Gaussian mixtures is looking cute! So here it's just gradient descent on KL(q||p) for optimising the mixture's means, covariances & weights...
@alexxthiery
Alex Thiery
1 year
Gaussian approximation of a target distribution: mean-field versus full-covariance! Shown below: a simple gradient descent on KL(q||p)
3
21
197
@alexxthiery
Alex Thiery
1 year
Gaussian approximation of a target distribution: mean-field versus full-covariance! Shown below: a simple gradient descent on KL(q||p)
1
6
58
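A rough sketch of what such an experiment might look like: stochastic gradient descent on KL(q||p) for a full-covariance Gaussian q = N(mu, L L^T) via the reparameterisation trick, assuming the target's score is available in closed form (a toy banana-shaped target is used here; names, step sizes and iteration counts are illustrative). Masking the gradient of L to its diagonal gives the mean-field run, and adding several components with weights would give the mixture experiment above.

# SGD on KL(q||p) for q = N(mu, L L^T), using reparameterised samples z = mu + L eps.
import numpy as np

def grad_log_p(z):
    # score of the toy target p(x, y) ∝ exp(-x^2/2 - (y - x^2)^2/2)  (illustrative "banana")
    x, y = z[..., 0], z[..., 1]
    return np.stack([-x + 2.0 * x * (y - x**2), -(y - x**2)], axis=-1)

def kl_gradients(mu, L, eps):
    z = mu + eps @ L.T                                       # samples from q, shape (S, 2)
    s = grad_log_p(z)
    g_mu = -s.mean(axis=0)                                   # dKL/dmu = -E[grad log p(z)]
    g_L = -(s[:, :, None] * eps[:, None, :]).mean(axis=0)    # -E[grad log p(z) eps^T]
    g_L -= np.diag(1.0 / np.diag(L))                         # gradient of the negative-entropy term
    return g_mu, np.tril(g_L)                                # keep L lower triangular
    # mean-field variant: use np.diag(np.diag(g_L)) instead of np.tril(g_L)

rng = np.random.default_rng(2)
mu, L, step = np.zeros(2), np.eye(2), 0.02
for _ in range(2000):
    eps = rng.standard_normal((64, 2))
    g_mu, g_L = kl_gradients(mu, L, eps)
    mu -= step * g_mu
    L -= step * g_L
    L[np.diag_indices(2)] = np.maximum(np.diag(L), 1e-3)     # crude safeguard: keep the scales positive

print("mu:", mu)
print("Sigma:", L @ L.T)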
@OnlineMCSeminar
Monte Carlo Seminar
1 year
📢 Hi friends! We’re launching a weekly online seminar on Monte Carlo methods, starting on October 1st with Prof. Persi Diaconis. Join us every Tuesday! For more details, visit our website: https://t.co/5AmhPmte7C or subscribe to our mailing list https://t.co/PdpquTV40Z. Welcome!
sites.google.com
Upcoming Seminar Presentations. All seminars are on Tuesdays: [ 8:30 am PT ] = [ 11:30 am ET ] = [ 4:30 pm London ] = [ 5:30 pm Paris ] = [ 0:30 am Beijing +1d ]. Subscribe to our mailing list and...
0
28
103
@EmtiyazKhan
Emtiyaz Khan
1 year
We have two open post-doc positions. You don't have to be a Bayesian, but somebody interested in working at the intersection of DL, Bayes, and optimization. https://t.co/DJ4xwxjhWc Interest in understanding deep learning and continual lifelong learning is a plus!
2
35
123
@canaesseth
Christian A. Naesseth
1 year
I’m hiring a postdoc to work with me on exciting projects in generative modelling (AI) and/or uncertainty quantification. You'll be part of a great team, embedded in @AmlabUva and the UvA-Bosch Delta Lab. Apply here: https://t.co/nJ2phFRqIr RT appreciated! #ML #GenAI
1
30
95
@johnleibniz
Fang Han
2 years
I put it in my book draft:
3
8
121
@alexxthiery
Alex Thiery
2 years
5,000 Hamiltonian trajectories on a double torus, all starting from the same initial position but moving in various directions! #MCMC
@alexxthiery
Alex Thiery
2 years
This morning's fun: running 30k Random Walk Metropolis #mcmc chains in parallel to explore a double torus!
2
14
147
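One way to mimic the quoted experiment with nothing but vectorised numpy: run all the Random Walk Metropolis chains in parallel against a density sharply concentrated around an implicit genus-2 surface. The surface used below is a thin tube around a figure-eight curve, a stand-in for the double torus; the equation, tolerance and step size are illustrative rather than the original settings.

# 30k Random-Walk-Metropolis chains in parallel, hugging an implicit genus-2 surface.
import numpy as np

def level(x):
    # f = 0 is (roughly) a tube around the planar figure-eight curve a^2 (a^2 - 1) + b^2 = 0.
    a, b, c = x[:, 0], x[:, 1], x[:, 2]
    return (a**2 * (a**2 - 1) + b**2) ** 2 + c**2 - 0.02

def log_target(x, eps=0.02):
    return -level(x) ** 2 / (2 * eps**2)    # mass concentrated near the level set {f = 0}

rng = np.random.default_rng(3)
n_chains, n_steps, step = 30_000, 2_000, 0.05
x = rng.normal(scale=0.1, size=(n_chains, 3))      # every chain starts near the origin
lp = log_target(x)

acc = 0.0
for _ in range(n_steps):
    prop = x + step * rng.standard_normal(x.shape)
    lp_prop = log_target(prop)
    accept = np.log(rng.uniform(size=n_chains)) < lp_prop - lp
    x[accept], lp[accept] = prop[accept], lp_prop[accept]
    acc += accept.mean() / n_steps

print("average acceptance rate:", acc)
# x now holds 30k surface-hugging samples, ready for a 3-D scatter plot.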
@alexxthiery
Alex Thiery
2 years
Doing the approach above (with 100k points) on that double-torus example gives a relative error of about 0.5%
0
0
0
@alexxthiery
Alex Thiery
2 years
I give you 100k points independently & uniformly distributed on a surface in R^3. How to estimate the total area without doing any triangulation etc.? Crude: compute the average distance to the 100th (say) nearest neighbour + do some algebra (a rough sketch of this is below). Anything much better?
@alexxthiery
Alex Thiery
2 years
This morning's fun: running 30k Random Walk Metropolis #mcmc chains in parallel to explore a double torus!
1
1
8
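A quick sketch of the crude estimator, checked on a unit sphere (area 4π) rather than the double torus so the answer is easy to verify. Locally the points look like a planar Poisson process of intensity N/A, so π R_k² · N/A is approximately Gamma(k, 1) for the distance R_k to the k-th nearest neighbour, which gives the estimate A ≈ N π · mean(R_k²) / k.

# Surface-area estimate from uniform points via k-th nearest-neighbour distances.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
n, k = 100_000, 100

pts = rng.standard_normal((n, 3))                  # uniform points on the unit sphere
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

tree = cKDTree(pts)
dist, _ = tree.query(pts, k=k + 1)                 # k + 1 because each point is its own nearest neighbour
r_k = dist[:, -1]

area_hat = n * np.pi * np.mean(r_k**2) / k
print("estimate:", area_hat, " true:", 4 * np.pi,
      " relative error:", abs(area_hat / (4 * np.pi) - 1))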