Bayesian Methods Research Group
@bayesgroup
Followers: 3K
Following: 83
Media: 78
Statuses: 244
Research in Bayesian Deep Learning, Reinforcement Learning, Optimization, Structured Prediction, Drug Discovery and more
Joined July 2017
Not a mirage but our new paper!
New paper alert! Our new paper on ArXiv: "MiAD: Mirage Atom Diffusion for De Novo Crystal Generation". We unlock the ability to dynamically add or remove atoms during generation in diffusion models for better materials discovery. https://t.co/9uWpxqcS7b 1/5
0
0
3
New paper! #NeurIPS2025
COSMOS is OUT! @NeurIPSconf 2025! COSMOS achieves up to 2x faster text generation compared to other diffusion models, utilizing up to 8x compression in text representations for superior efficiency. Paper: https://t.co/UF4TNZpq4t Code: https://t.co/zonTgWVXNF (1/6)
0
0
1
While frontier labs are announcing their new models, we also want to be part of this parade. So, we're happy to announce gfnx: a JAX-first library with environments and a single-file baseline implementation for GFlowNet research.
3
10
20
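As a rough illustration of the kind of single-file baseline a library like this targets, here is a minimal trajectory-balance loss in JAX. This is a generic sketch, not the gfnx API: the names tb_loss, log_Z, log_pf_steps, log_pb_steps, and log_reward are illustrative assumptions, and fixed-length trajectories are assumed for simplicity.

```python
# Minimal trajectory-balance (TB) loss sketch in JAX -- illustrative only, not the gfnx API.
# Assumes per-step log-probabilities were already gathered for one fixed-length trajectory.
import jax
import jax.numpy as jnp

def tb_loss(log_Z, log_pf_steps, log_pb_steps, log_reward):
    """Squared trajectory-balance residual for a single trajectory.

    log_Z        : scalar, learnable estimate of the log partition function
    log_pf_steps : [T] forward-policy log-probs log P_F(s_{t+1} | s_t)
    log_pb_steps : [T] backward-policy log-probs log P_B(s_t | s_{t+1})
    log_reward   : scalar, log R(x) of the terminal state
    """
    residual = log_Z + jnp.sum(log_pf_steps) - log_reward - jnp.sum(log_pb_steps)
    return residual ** 2

# Batch over trajectories and differentiate the mean loss as usual.
batched_tb_loss = jax.vmap(tb_loss, in_axes=(None, 0, 0, 0))
grad_wrt_log_Z = jax.grad(lambda z, pf, pb, r: jnp.mean(batched_tb_loss(z, pf, pb, r)))
```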
1/ Can we efficiently learn the destruction process of diffusion samplers? Can we learn not just the drift, but also the variance for all transition kernels? We answer YES in our recent paper "Adaptive Destruction Processes for Diffusion Samplers" (Oral at NeurIPS 2025 FPI
1
9
17
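For context on the thread above: in a diffusion sampler the destruction (noising) process is usually a fixed Markov chain. A minimal sketch of what learning both the drift and the variance of a Gaussian transition kernel can look like is given below; the notation mu_theta, sigma_theta is illustrative and not taken from the paper.

```latex
% Gaussian transition kernel of a destruction process with learnable
% drift and variance (illustrative notation, not the paper's):
q_\theta(x_{t+1} \mid x_t)
  = \mathcal{N}\!\bigl(x_{t+1};\; x_t + \mu_\theta(x_t, t)\,\Delta t,\;
                       \sigma_\theta^2(x_t, t)\,\Delta t \, I\bigr),
\qquad t = 0, \dots, T-1 .
```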
Check out our new paper!
(1/n) The usual assumption in GFlowNet environments is acyclicity. Have you ever wondered if it can be relaxed? Does the existing GFlowNet theory translate to the non-acyclic case? Is efficient training possible? We shed new light on these questions in our latest work! @icmlconf
0
0
3
Check out our new work!
New paper alert! Our new paper on ArXiv: "DreamBooth DPO: Controlled Optimization of Personalized Diffusion Models". It addresses the core trade-off in personalized T2I: concept fidelity vs. prompt alignment, without any human-curated data. https://t.co/nT7nC7RtiM 1/5
0
0
2
Check out our new work!
New paper alert! Our new paper on ArXiv: "ImageReFL: Balancing Quality and Diversity in Human-Aligned Diffusion Models". It tackles a key challenge in diffusion models: aligning with human preferences without collapsing diversity. https://t.co/gJaqMAi2b8 1/5
0
0
3
Neural Flow Diffusion Models at #NeurIPS2024 tomorrow! Discover how to build learnable noising processes for straight-line generative trajectories, end-to-end and without simulations! West Ballroom A-D #6809, Fri 13 Dec, 4:30 pm to 7:30 pm. https://t.co/v0wW6VLKCL
Excited to share our new work on Neural Flow Diffusion Models: a general, end-to-end, simulation-free framework that works with arbitrary noising processes and even enables learning them! https://t.co/nGbIIuEqzs 1/11
1
12
73
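Schematically, the "learnable noising process" above replaces fixed forward marginals with a time- and data-dependent transformation of Gaussian noise. The sketch below is a simplified paraphrase of that idea; the notation F_phi is illustrative.

```latex
% Fixed vs. learnable forward (noising) process, schematically
% (notation F_\varphi is illustrative):
\text{fixed: } q(z_t \mid x) = \mathcal{N}\bigl(z_t;\; \alpha_t x,\; \sigma_t^2 I\bigr),
\qquad
\text{learnable: } z_t = F_\varphi(\varepsilon, t, x),\quad \varepsilon \sim \mathcal{N}(0, I),
% so that q_\varphi(z_t \mid x) is the pushforward of the noise through F_\varphi.
```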
Check out our new paper! To be presented at #NeurIPS2024 by @KateLobacheva this Friday (poster #2408 / Poster Session 5 East / 13 Dec, 11 am to 2 pm PST)
Starting training with a large learning rate benefits generalization, but why? In our new #NeurIPS2024 paper, we investigate its role in navigating the loss landscape and its effect on feature learning! 1/7 Paper: https://t.co/nzkkMk22hA Poster: https://t.co/aL7rqBjyq4
0
0
3
HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach by Maxim Nikolaev, Mikhail Kuznetsov, Dmitry Vetrov, @ai_alanov
https://t.co/Uz4Jhb1hdL
0
0
2
Neural Flow Diffusion Models: Learnable Forward Process for Improved Diffusion Modelling by Grigory Bartosh, Dmitry Vetrov, Christian A. Naesseth https://t.co/8J99VLhrdV
arxiv.org
Conventional diffusion models typically rely on a fixed forward process, which implicitly defines complex marginal distributions over latent variables. This can often complicate the reverse...
1
1
1
Group and Shuffle: Efficient Structured Orthogonal Parametrization by Mikhail Gorbunov, Nikolay Yudin, Vera Soboleva, @ai_alanov, Alexey Naumov, Maxim Rakhuba https://t.co/Mr4PHYIFf0
arxiv.org
The increasing size of neural networks has led to a growing demand for methods of efficient fine-tuning. Recently, an orthogonal fine-tuning paradigm was introduced that uses orthogonal matrices...
1
0
1
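The snippet above describes the orthogonal fine-tuning paradigm: keep the pretrained weight frozen and learn an orthogonal matrix that rotates it. Below is a minimal JAX sketch of that general idea using a Cayley parametrization; it is a generic illustration, not the Group-and-Shuffle structure proposed in the paper, and the names cayley_orthogonal and finetuned_weight are made up for this example.

```python
# Generic orthogonal fine-tuning sketch in JAX (Cayley parametrization).
# Illustrates the paradigm only -- NOT the Group-and-Shuffle structure from the paper.
import jax.numpy as jnp

def cayley_orthogonal(A):
    """Map an unconstrained square matrix A to an orthogonal matrix Q.

    With S = A - A^T skew-symmetric, Q = (I - S)(I + S)^{-1} satisfies Q^T Q = I.
    """
    S = A - A.T
    I = jnp.eye(A.shape[0])
    # solve gives (I + S)^{-1}(I - S); the factors commute, so this equals Q above.
    return jnp.linalg.solve(I + S, I - S)

def finetuned_weight(W_pretrained, A):
    """Rotate the frozen pretrained weight by a learned orthogonal matrix."""
    Q = cayley_orthogonal(A)   # only A receives gradients
    return Q @ W_pretrained    # W' = Q W preserves column norms and pairwise angles

# Initializing A = 0 gives S = 0 and Q = I, so fine-tuning starts exactly
# at the pretrained weights.
W = 0.01 * jnp.ones((768, 768))   # stand-in for a pretrained projection matrix
A = jnp.zeros((768, 768))
W_prime = finetuned_weight(W, A)
```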
Where Do Large Learning Rates Lead Us? A Feature Learning Perspective by @irsadrtdinov, Maxim Kodryan, Eduard Pokonechny, @KateLobacheva, Dmitry Vetrov (stay tuned for the full paper, previously: https://t.co/Iu3EsHJT8z)
openreview.net
It is conventional wisdom that using large learning rates (LRs) early in training improves generalization. Following a line of research devoted to understanding this effect mechanistically, we...
1
0
0
Did you know that networks trained with different learning rates extract different features (and a different number of them!) from the data? Come by our poster at HiLD Workshop #ICML2024 tomorrow to discuss it with @irsadrtdinov! Paper: https://t.co/AHWFaK5wog 1/6
openreview.net
It is conventional wisdom that using large learning rates (LRs) early in training improves generalization. Following a line of research devoted to understanding this effect mechanistically, we...
3
9
45
I will be presenting our NeurIPS-2023 paper https://t.co/rTUdEZcin3 at @ml_collective this Friday, March 8, 10am PT / 7pm CET! If you haven't decided yet whether to stay in the pre-train basin or not, you definitely need to see this talk!
1
1
11
Check out our new paper!
News from the GFlowNet world: our paper "Generative Flow Networks as Entropy-Regularized RL" was honored with an oral presentation at #AISTATS2024! Long story short, our result can be described by this picture.
0
0
3
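The picture itself is not included in this text export. At a high level, the correspondence in the title can be sketched as follows; this is only a rough schematic of the general "sampling proportional to a reward equals reward-plus-entropy maximization" fact, not the paper's precise statement, which also accounts for the DAG structure and the backward policy.

```latex
% Rough schematic only, not the paper's exact result:
\pi^\star(x) \propto R(x)
\quad\Longleftrightarrow\quad
\pi^\star = \arg\max_{\pi}\; \mathbb{E}_{x \sim \pi}\bigl[\log R(x)\bigr] + \mathcal{H}(\pi).
```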
At #NeurIPS2023? Come check out our latest work!
Large learning rates improve generalization, but are they all beneficial? The short answer is no; for more details, check out our paper at the #NeurIPS2023 Mathematics of Modern Machine Learning (M3L) Workshop! Paper: https://t.co/517xCsfrWA 1/4
0
0
1
Can we improve ensembles in the transfer learning setup by exploring the target task loss landscape? Find out in our new #NeurIPS2023 paper! Joint work with Ildus Sadrtdinov, Dmitrii Pozdeev, and Dmitry Vetrov. Paper: https://t.co/y14hXqdIaa 1/7
1
12
57