Max Dax Profile

Max Dax (@maximilian_dax)
PhD student in Machine Learning @MPI_IS
Joined June 2021

Followers: 257 · Following: 57 · Media: 5 · Statuses: 42

Max Dax (@maximilian_dax) · 3 years ago
Can you trust deep learning for scientific inference? And what can you do when results are inaccurate? We address these questions for ML-based inference of complex gravitational wave models and get highly accurate and reliable results. 1/12

Max Dax (@maximilian_dax) · 1 year ago
RT @AnnalenaKofler: New tutorial alert: Run our new introductory DINGO tutorial locally or on Google Colab 1/8 http….

Max Dax (@maximilian_dax) · 1 year ago
RT @nihar_gupte: Hi everyone! Today I wanted to announce a paper that we have been working on for over a year now. We found some signs of e….

Max Dax (@maximilian_dax) · 2 years ago
RT @gpapamak: 📢 New paper on diffusion models for simulation-based inference, to appear at #ICML2023. tl;dr: Diff….

Max Dax (@maximilian_dax) · 3 years ago
Huge thanks to my collaborators, in particular @stephen_r_green. Check out our paper. 12/12

Max Dax (@maximilian_dax) · 3 years ago
Dingo-IS is an efficient standalone inference method, and at the same time a valuable tool for evaluating ML-based inference. Our analysis shows that Dingo performs well for most, but not all, GW events. This evaluation has been greatly simplified by Dingo-IS. 11/12.

Max Dax (@maximilian_dax) · 3 years ago
Dingo(-IS) easily scales to extremely costly GW models, for which MCMC needs several months of computation. Initial Dingo results are available in seconds to minutes. Importance sampling requires 20 minutes to 10 hours, depending on the GW model. 10/12

Max Dax (@maximilian_dax) · 3 years ago
We evaluate Dingo-IS on 42 real GW events and achieve sample efficiencies of ~10%. We further find that out-of-distribution events (e.g., data contaminated with noise artefacts) and adversarial attacks are identified with very low sample efficiencies. 9/12

Max Dax (@maximilian_dax) · 3 years ago
The Dingo density is normalized, so the Bayesian evidence can be computed directly from the normalization of the importance weights. We find a 10-fold reduction in statistical uncertainty compared to nested sampling, enabling model comparison with unprecedented precision. 8/12
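
In code, that evidence estimate is just a stable log-mean of the weights; a minimal sketch (assuming unnormalized log-weights log_w = log L(θ_i) + log π(θ_i) − log q(θ_i), as in the weighting step further down the thread; not the actual Dingo code):

```python
import numpy as np
from scipy.special import logsumexp

def log_evidence(log_w):
    """log Z ~= log(mean of w_i), computed stably in log space.

    Valid because the flow proposal q is exactly normalized,
    so E_q[L * prior / q] equals the evidence Z.
    """
    log_w = np.asarray(log_w, dtype=float)
    return logsumexp(log_w) - np.log(len(log_w))
```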

Max Dax (@maximilian_dax) · 3 years ago
The final result is likelihood-based and thus free from network inaccuracies. The Dingo proposal can be assessed with the effective sample size. Dingo-IS achieves ~100 times higher sample efficiencies than MCMC, while also being fully parallelizable. 7/12.
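
The sample-efficiency diagnostic here is the standard effective-sample-size ratio; a minimal sketch using the Kish estimator (which may differ in detail from the paper's exact definition):

```python
import numpy as np

def sample_efficiency(w):
    """n_eff / n with n_eff = (sum w)^2 / sum(w^2); equals 1.0 iff all weights are equal."""
    w = np.asarray(w, dtype=float)
    return w.sum() ** 2 / ((w ** 2).sum() * len(w))
```

A perfect proposal gives uniform weights and efficiency 1.0; a proposal that misses posterior mass concentrates the weight on a few samples and drives the efficiency toward 1/n.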

Max Dax (@maximilian_dax) · 3 years ago
This relies on two key properties. First, the Dingo density can be evaluated exactly. Second, Dingo trains with the mass-covering (forward) KL divergence. Dingo thus infers heavy-tailed distributions, covering the entire posterior support without missing secondary modes. 6/12.
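
For reference, the mass-covering objective in standard notation (notation mine, not verbatim from the paper): the forward KL averaged over datasets,

```latex
\mathrm{KL}(p \,\|\, q)
  = \mathbb{E}_{p(d)}\,\mathbb{E}_{p(\theta \mid d)}
    \big[\log p(\theta \mid d) - \log q(\theta \mid d)\big]
```

Minimizing this over q is equivalent to maximizing E_p[log q(θ|d)], which diverges if q assigns vanishing density anywhere the true posterior has mass; this is why the learned q ends up heavy-tailed and mode-covering.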

Max Dax (@maximilian_dax) · 3 years ago
The idea is to interpret Dingo results as the proposal distribution for importance sampling. We first sample from Dingo, and then assign the ratio of posterior density (~ likelihood * prior) and Dingo density as an importance weight. 5/12
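
The weighting step as a minimal code sketch (hypothetical `flow`, `log_likelihood`, and `log_prior` interfaces; not the actual Dingo API):

```python
import numpy as np

def importance_sample(flow, data, log_likelihood, log_prior, n=10_000):
    """Draw from the flow proposal and weight by posterior / proposal."""
    theta = flow.sample(n, context=data)         # proposal samples from q(theta|d)
    log_q = flow.log_prob(theta, context=data)   # exact proposal density
    # unnormalized posterior over proposal: w = L(theta) * prior(theta) / q(theta)
    log_w = log_likelihood(theta, data) + log_prior(theta) - log_q
    return theta, np.exp(log_w - log_w.max())    # weights, rescaled for stability
```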

Max Dax (@maximilian_dax) · 3 years ago
But how do we know whether specific Dingo results are accurate, given that it builds on black-box neural networks? And what can we do when results are slightly off? Addressing these questions is essential when using ML for scientific downstream analyses. 4/12.

Max Dax (@maximilian_dax) · 3 years ago
Our work builds on Dingo which trains a flow with millions of gravitational-wave (GW) simulations to infer astrophysical parameters from GW data. Dingo matches MCMC in accuracy while reducing inference times from days to just 20 seconds. 3/12
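
Schematically, that training stage reduces to maximum likelihood on simulated pairs; a minimal PyTorch-style sketch (all names hypothetical, and the real Dingo pipeline additionally handles noise realizations, detector projection, and data conditioning):

```python
def train_epoch(flow, simulation_loader, optimizer):
    """One forward-KL training pass over simulated (theta, strain) pairs.

    `flow` is any conditional density estimator exposing
    log_prob(theta, context); `simulation_loader` yields batches of
    source parameters and the corresponding simulated GW strain.
    """
    for theta, strain in simulation_loader:
        loss = -flow.log_prob(theta, context=strain).mean()  # NLL = forward KL up to a constant
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```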

Max Dax (@maximilian_dax) · 3 years ago
With @stephen_r_green, Gair, @MPuerrer, @WildbergerJonas, @jakhmack, Buonanno, @bschoelkopf. We perform inference with normalizing flows (neural networks) and then apply importance weights. The result is free from network inaccuracies and includes comprehensive diagnostics. 2/12.

Max Dax (@maximilian_dax) · 3 years ago
RT @bkmi13: Today @maximilian_dax will give a talk about the impressive work on Group equivariant neural posterior estimation and his work….

Max Dax (@maximilian_dax) · 4 years ago
RT @KyleCranmer: We have 30 min for poster session #2, and then we will be back with invited talks from @SuryaGanguli and @laurezanna and a c….

Max Dax (@maximilian_dax) · 4 years ago
RT @jakhmack: The ML for Physical Sciences Workshop on Monday looks awesome! Of course, particularly excited about @maximilian_dax's talk….

Max Dax (@maximilian_dax) · 4 years ago
RT @JGIBristol: Data Science Seminar Series with the Heilbronn Institute. Don't forget to sign up for the upcoming seminar. 6 Dec: Simulation….