Felix Draxler
@FelixDrRelax
Followers 115 · Following 34 · Media 6 · Statuses 23
Machine Learning PostDoc at UC Irvine with Stephan Mandt. PhD from Heidelberg University. Generative models, normalizing flows w/ cell bio applications
Heidelberg, Germany
Joined March 2012
Check out our #NeurIPS Spotlight paper. Point process modeling made simple. 👇
0 replies · 2 reposts · 6 likes
Great joint work with Yang, Kai, @Lasklu1, @YiboYang, @StephanMandt @Tkaraletsos! Thanks to @ChanZuckerberg, Hasso-Plattner-Institute & @UCIrvine!
0 replies · 0 reposts · 4 likes
How to model event sequences with real-world variety: mixed data types, different lengths, …? Meet FlexTPP, a unified transformer framework with discrete & continuous heads for health care, complex annotations and more! NeurIPS spotlight, Fri 11am #2102! https://t.co/swgDRCOInk
1 reply · 3 reposts · 10 likes
🚀 News! Our recent #ICML2025 paper “Variational Control for Guidance in Diffusion Models” introduces a simple yet powerful method for guidance in diffusion models — and it doesn’t need model retraining or extra networks. 📄 Paper: https://t.co/nixanKxs9W 💻 Code:
3 replies · 18 reposts · 100 likes
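As background, the generic pattern behind training-free guidance is to bias the sampler's drift with the gradient of a differentiable cost. The toy below shows that pattern on a Langevin sampler with an exact standard-normal score; it is not the paper's variational control method, and the score, cost, and parameters are made-up illustrations.

```python
# Generic gradient-based guidance in a score-based sampler (toy example).
# NOT the variational control method from the paper.
import torch

def score(x):
    # Exact score of a standard-normal "model": grad log N(x; 0, I) = -x.
    return -x

target = torch.tensor([2.0, 2.0])

def cost(x):
    # Hypothetical guidance cost pulling samples toward `target`.
    return ((x - target) ** 2).sum(dim=-1)

def guided_langevin(n=1000, steps=500, eps=1e-2, lam=1.0):
    x = torch.randn(n, 2)
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        g = torch.autograd.grad(cost(x).sum(), x)[0]  # guidance gradient
        drift = score(x) - lam * g                    # biased drift
        x = x + 0.5 * eps * drift + eps ** 0.5 * torch.randn_like(x)
    return x.detach()

# With lam=1 the stationary mean is 2*lam*target/(1+2*lam) = (4/3, 4/3).
print(guided_langevin().mean(dim=0))
```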
Our new preprint trains any neural network architecture as a generative model via maximum likelihood: https://t.co/bgj2SUo9fL Free-form flows (FFF) work well and sample fast. We showcase this on SBI and molecule generation. Thanks to @PeterSorrenson, Armand, Lea and Ullrich!
0 replies · 1 repost · 16 likes
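To make the objective concrete: a toy version of the idea trains an unconstrained encoder by maximum likelihood, with a decoder regularized to invert it. The preprint's actual contribution is an efficient gradient estimator for the log-determinant term; the sketch below sidesteps that by computing the exact 2-D Jacobian, so it illustrates the loss, not the estimator. Architecture and hyperparameters are assumptions.

```python
# Toy illustration of the free-form flows objective: maximum likelihood for
# an unconstrained encoder f, plus a reconstruction term keeping g ≈ f^{-1}.
# The exact per-sample Jacobian is affordable only because D = 2.
import torch
import torch.nn as nn

torch.manual_seed(0)
D = 2
f = nn.Sequential(nn.Linear(D, 64), nn.Tanh(), nn.Linear(64, D))  # encoder
g = nn.Sequential(nn.Linear(D, 64), nn.Tanh(), nn.Linear(64, D))  # approximate inverse
opt = torch.optim.Adam([*f.parameters(), *g.parameters()], lr=1e-3)

x = torch.randn(16, D) @ torch.tensor([[2.0, 0.0], [0.5, 0.3]])   # toy correlated data

for step in range(100):
    z = f(x)
    logdet = torch.stack([
        torch.slogdet(torch.autograd.functional.jacobian(f, xi, create_graph=True))[1]
        for xi in x])                                  # exact log|det f'(x)|
    nll = 0.5 * (z ** 2).sum(dim=1) - logdet           # -log N(z; 0, I) - log|det|, up to const
    recon = ((g(z) - x) ** 2).sum(dim=1)               # keep the decoder an inverse
    loss = (nll + 100.0 * recon).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```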
New paper out from a group in Heidelberg that extends the work @johannbrehmer & I did on Manifold-learning flows, but for unrestricted (aka non-invertible) auto-encoders. They also suggest a different way to avoid a failure mode we identified https://t.co/1zVXIRA3kF
Link preview (arxiv.org): "Normalizing Flows explicitly maximize a full-dimensional likelihood on the training data. However, real data is typically only supported on a lower-dimensional manifold, leading the model to expend..."
1 reply · 12 reposts · 52 likes
We also discuss limits of existing mathematical guarantees for Normalizing Flows. To learn more and discuss: Come to Hall J #331 on Thu 1 Dec, 11:30am-1pm ( https://t.co/TJLj8OXmup). Join the oral panel on Wed 7 Dec, 11:45am-12pm CST ( https://t.co/VQNu4hf95d)
0 replies · 0 reposts · 1 like
Check out our #NeurIPS2022 oral paper: We show, theoretically and experimentally, that Normalizing Flows whiten the data covariance exponentially fast. This settles one of the two terms of the KL divergence objective. https://t.co/ezo708td5q
1 reply · 0 reposts · 6 likes
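A quick way to see the whitening effect in action (a toy check, not the paper's experiments): fit a single affine layer z = Ax + b by maximum likelihood against a standard normal and watch the latent covariance approach the identity.

```python
# Toy check: maximum-likelihood training of one affine layer whitens the data.
import torch

torch.manual_seed(0)
x = torch.randn(2048, 2) @ torch.tensor([[1.5, 0.0], [0.8, 0.4]])  # correlated data
A = torch.eye(2, requires_grad=True)
b = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([A, b], lr=1e-2)

for step in range(1001):
    z = x @ A.T + b
    # NLL under N(0, I) plus the change-of-variables term -log|det A|
    nll = 0.5 * (z ** 2).sum(dim=1).mean() - torch.slogdet(A)[1]
    opt.zero_grad(); nll.backward(); opt.step()
    if step % 250 == 0:
        cov = torch.cov(z.detach().T)
        print(step, torch.linalg.norm(cov - torch.eye(2)).item())  # -> 0
```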
Check out our paper on the Role of a Single Affine Layer in Normalizing Flows, awarded an Honorable Mention at GCPR 2020: Video: https://t.co/iXRkUxtT6r Paper: https://t.co/M2VqBK75yS
0 replies · 1 repost · 6 likes
PS. When we examine the optimization landscape, we're looking for a linear form of "mode connectivity." In doing so, we build on foundational work by @feldrify et al. & @tim_garipov et al. showing that neural network optima are connected by nonlinear paths of nonincreasing error.
0 replies · 2 reposts · 13 likes
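The linear probe itself is short to write down: train two networks from different seeds, then evaluate the loss along the straight segment between their parameter vectors. A minimal sketch with placeholder models and data:

```python
# Probe LINEAR mode connectivity: loss along the line between two optima.
# Models, data, and training here are illustrative placeholders.
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

x, y = torch.randn(256, 10), torch.randint(0, 2, (256,))
models = []
for seed in (0, 1):                       # two independently trained optima
    torch.manual_seed(seed)
    m = make_model()
    opt = torch.optim.Adam(m.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        nn.functional.cross_entropy(m(x), y).backward()
        opt.step()
    models.append(m)

probe = make_model()
sd_a, sd_b = models[0].state_dict(), models[1].state_dict()
for alpha in torch.linspace(0, 1, 11):
    # theta(alpha) = (1 - alpha) * theta_a + alpha * theta_b
    probe.load_state_dict({k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a})
    with torch.no_grad():
        loss = nn.functional.cross_entropy(probe(x), y)
    print(f"alpha={alpha:.1f}  loss={loss.item():.3f}")
```

A low, flat loss profile along the segment would indicate linear mode connectivity; a bump in the middle indicates a barrier.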
We're publishing the code to our ICML 2018 paper "Essentially No Barriers in Neural Network Energy Landscape" ( https://t.co/UjV538hzec) at https://t.co/CEqUY3Jtt5. Check out our talk at @icmlconf on Wed 5pm.
Link preview (github.com): fdraxler/PyTorch-AutoNEB: PyTorch AutoNEB implementation to identify minimum energy paths, e.g. in neural network loss landscapes.
0 replies · 9 reposts · 23 likes
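For context, AutoNEB builds on the nudged elastic band (NEB) method: a chain of "images" between two minima is relaxed using the perpendicular component of the true force plus spring forces along the path. Below is a from-scratch toy of one such relaxation on a made-up 2-D energy surface; it does not use the repo's API.

```python
# Toy nudged-elastic-band relaxation on a 2-D energy surface, illustrating
# the idea behind AutoNEB. From-scratch sketch, NOT the repo's API.
import torch

def energy(p):
    # Two minima near (-1, 0) and (1, 0) with a Gaussian bump at the origin.
    x, y = p[..., 0], p[..., 1]
    return (x ** 2 - 1) ** 2 + 2.0 * y ** 2 + 5.0 * torch.exp(-(x ** 2 + y ** 2) / 0.1)

n_images, lr, k = 12, 0.02, 1.0
t = torch.linspace(0, 1, n_images).unsqueeze(1)
path = (1 - t) * torch.tensor([[-1.0, 0.0]]) + t * torch.tensor([[1.0, 0.0]])
path = path + 0.01 * torch.randn_like(path)   # tiny perturbation to break symmetry
path[0], path[-1] = torch.tensor([-1.0, 0.0]), torch.tensor([1.0, 0.0])

for _ in range(500):
    p = path.detach().requires_grad_(True)
    grad = torch.autograd.grad(energy(p).sum(), p)[0]
    new = path.clone()
    for i in range(1, n_images - 1):               # endpoints stay fixed
        tau = path[i + 1] - path[i - 1]
        tau = tau / tau.norm()                      # local tangent of the band
        g_perp = grad[i] - (grad[i] @ tau) * tau    # perpendicular true force
        spring = k * ((path[i + 1] - path[i]).norm()
                      - (path[i] - path[i - 1]).norm()) * tau
        new[i] = path[i] - lr * g_perp + lr * spring
    path = new

print(energy(path).max())   # estimated barrier height along the band
```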
Speaking of being modeled on Facebook: the blue and white of #archeonline comes from our parish youth group's flag! #dioezesanrat
0 replies · 1 repost · 0 likes
Media literacy also means not sharing data with big companies like Facebook, Google & Co. without a second thought. #dioezesanrat
1 reply · 0 reposts · 1 like
#dioezesanrat Poll among the panelists: who has a Facebook, Twitter, ... account?
0 replies · 0 reposts · 1 like
#dioezesanrat Man. Unfortunately you can't make out anything the panelists are saying... adjust the volume?
0 replies · 0 reposts · 0 likes