Ondrej Bohdal

@OBohdal

Followers: 276 · Following: 274 · Media: 11 · Statuses: 58

Machine learning researcher at Samsung Research @samsungresearch. Previously @InfAtEd @EdiDataScience @turinginst @AmazonScience

London, UK
Joined December 2020
@OBohdal
Ondrej Bohdal
2 months
Blog post about our LoRA.rar approach (ICCV'25 paper) is now online!
@samsungresearch
Samsung Research
2 months
[Tech Blog] LoRA.rar uses a hypernetwork to merge content and style LoRAs in real time, outperforming ZipLoRA in speed and quality. Trained on diverse pairs, it generalizes to unseen combinations, making it perfect for edge devices. #AI #LoRA #ImageGen https://t.co/BW2h3IioDP
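The blog's core idea is that a hypernetwork looks at a content LoRA and a style LoRA and predicts how to combine them in a single forward pass. As a rough illustration of that kind of mechanism (a toy sketch only; the module names, per-layer coefficient design, and shapes are assumptions, not the LoRA.rar architecture):

```python
import torch
import torch.nn as nn

class MergeHypernetwork(nn.Module):
    """Toy hypernetwork that predicts per-layer merge coefficients for a
    content LoRA and a style LoRA (illustrative sketch, not LoRA.rar)."""
    def __init__(self, num_layers: int, embed_dim: int = 64):
        super().__init__()
        self.layer_embed = nn.Embedding(num_layers, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, 2),  # one coefficient per LoRA
        )

    def forward(self, layer_idx: torch.Tensor) -> torch.Tensor:
        # Softmax keeps the two merge weights positive and summing to one.
        return torch.softmax(self.mlp(self.layer_embed(layer_idx)), dim=-1)

def merge_lora_deltas(content_delta, style_delta, weights):
    """Blend two LoRA weight updates with the predicted coefficients."""
    w_content, w_style = weights.unbind(-1)
    return w_content * content_delta + w_style * style_delta

# Example: merge the (random, stand-in) LoRA updates of layer 3.
hyper = MergeHypernetwork(num_layers=12)
coeffs = hyper(torch.tensor([3])).squeeze(0)   # shape (2,)
content_delta = torch.randn(256, 256)          # stands in for a content LoRA update
style_delta = torch.randn(256, 256)            # stands in for a style LoRA update
merged_delta = merge_lora_deltas(content_delta, style_delta, coeffs)
```

Because the merge is a single feed-forward prediction rather than a per-pair optimisation (the contrast with ZipLoRA drawn in the post), this style of approach is what makes on-device use plausible.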
@OBohdal
Ondrej Bohdal
2 months
Fantastic news that our LoRA.rar paper has won the Best Paper award at the ICCV 2025 Personalization in Generative AI Workshop! 🎉
@umbertomichieli
umberto michieli @ NeurIPS 25
2 months
Our paper LoRA.rar just won the Best Paper Award at the P13N: Personalization in Generative AI Workshop @ ICCV 2025! 🎉 📄 Check it out here:  https://t.co/J2EXdxiEej
@TahaCeritli
Taha Ceritli
3 months
(1/7) Happy to share that our paper on adapter merging (https://t.co/OdFdAxjclY) has been accepted to EMNLP 2025 (Main Conference)! Huge thanks to my co-authors: @OBohdal, Mete Ozay, KyengHun Lee, Jijoong Moon, Hyeonmok Ko, @umbertomichieli
arxiv.org
Large language models (LLMs) often leverage adapters, such as low-rank-based adapters, to achieve strong performance on downstream tasks. However, storing a separate adapter for each task...
@OBohdal
Ondrej Bohdal
5 months
Fantastic news that our LoRA.rar paper has been accepted to ICCV'25! 🎉 Well done team 🙌
@DonaldShenaj
Donald Shenaj @ ICCV 2025 🏄‍♂️
5 months
🚀Exciting news, 𝗟𝗼𝗥𝗔.𝗿𝗮𝗿 has been accepted to @ICCVConference, which will be held in October in Hawaii🌈. Huge thanks to the team: @OBohdal, Mete Ozay, Pietro Zanuttigh, and @umbertomichieli. 📜Preprint: https://t.co/3p3mpVinBh 💻Code: https://t.co/iVBIOtnbur
@yongshuozong
Yongshuo Zong
8 months
I'll be at #ICLR2025 next week to present VL-ICL, our benchmark for multimodal in-context learning. Find me at the poster session, and I'm happy to chat about all kinds of stuff on multimodal LLMs and more. DM/email is welcome!
@OBohdal
Ondrej Bohdal
11 months
Our benchmark for evaluating in-context learning of multimodal LLMs has been accepted to ICLR'25! 🎉 Check out the project page for more details: https://t.co/qitK1gjBqb 📄
@yongshuozong
Yongshuo Zong
11 months
Our VL-ICL bench is accepted to @iclr_conf! It's been almost a year since we developed it, yet state-of-the-art VLMs still struggle to learn in-context. Great to work with @OBohdal and @tmh31.
@OBohdal
Ondrej Bohdal
1 year
🚀Excited to share our latest work 𝗟𝗼𝗥𝗔.𝗿𝗮𝗿: an efficient method to merge LoRAs for personalized content and style image generation! 🖼️✨
@DonaldShenaj
Donald Shenaj @ ICCV 2025 🏄‍♂️
1 year
🛸Excited to release 𝗟𝗼𝗥𝗔.𝗿𝗮𝗿, a groundbreaking method for personalized content and style image generation 🦕. 📜 Paper and video: https://t.co/3p3mpVinBh https://t.co/10GUW58lvr Huge thanks to the co-authors: @OBohdal, Mete Ozay, Pietro Zanuttigh, and @umbertomichieli
@RamanDutt4
Raman Dutt
1 year
Looking to reduce memorization WHILE improving image quality in diffusion models? Delighted to share our work "𝐌𝐞𝐦𝐂𝐨𝐧𝐭𝐫𝐨𝐥" now accepted at WACV '25 (@wacv_official). We show strong results for medical image generation and also establish an initial benchmark! More 👇
@OBohdal
Ondrej Bohdal
2 years
Career update: I'm excited to share the news that I've recently joined Samsung Research! 🎉 I'll be primarily doing research on large language models. Looking forward to catching up with friends in London 🇬🇧 🙌 and also meeting new people here!
@RamanDutt4
Raman Dutt
2 years
🚨 MemControl: Mitigating Memorization in Medical Diffusion Models via Automated Parameter Selection. A new strategy to mitigate memorization in diffusion models. Arxiv: https://t.co/EIGrSOk8DL Work done with @SnchzPedro_ @OBohdal @STsaftaris @tmh31 @BioMedAI_CDT 🧵👇
@RamanDutt4
Raman Dutt
2 years
Finally arrived in Vienna to present FairTune at @iclr_conf. A dream come true ✨ Also, co-organizing the ML-Collective social on the 8th (12:45-2:15 CEST) with @savvyRL @rahiment @osaukh and @Muhtasham9. Do join us! DM for discussions around PEFT, diffusion, medical imaging, etc.
@RamanDutt4
Raman Dutt
2 years
🚨FairTune: Optimizing PEFT for Fairness in Medical Image Analysis. A new framework that fine-tunes large vision models to improve downstream fairness. Accepted in #ICLR2024 ✨ With: @OBohdal @STsaftaris @tmh31 CC: @vivnat @alvarezvalle @fepegar_ @BoWang87 @BioMedAI_CDT
@yongshuozong
Yongshuo Zong
2 years
VLGuard is accepted to #ICML2024! Check out our strong baseline for 🛡️safeguarding🛡️ VLLMs:
@yongshuozong
Yongshuo Zong
2 years
Your #VLLMs are capable, but they are not safe enough! We present the first safety fine-tuning dataset VLGuard for VLLMs. By fine-tuning on it, the safety of VLLMs can be substantially improved while maintaining helpfulness. Check here for more details: https://t.co/CXiTscpMUk
@OBohdal
Ondrej Bohdal
2 years
Noise can be helpful for improving generalisation and uncertainty calibration of neural networks - but how to use it effectively in different scenarios? Find out in our recent paper that was accepted to #TMLR!
@MartinFerianc
Martin Ferianc
2 years
I am thrilled to share our latest paper, "Navigating Noise: A Study of How Noise Influences Generalisation and Calibration of Neural Networks" (https://t.co/Mq88BKttB3), published in @TmlrOrg. This work is a collective effort by @OBohdal, @tmh31, @mrd_rodrigues and myself :).
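The paper is a study of how noise affects generalisation and calibration across scenarios. For readers unfamiliar with the idea, here is a minimal, generic sketch of one common form of noise injection, Gaussian noise added to activations during training only; it is an illustration, not the paper's specific protocol, and the standard deviation is an arbitrary assumption:

```python
import torch
import torch.nn as nn

class GaussianNoise(nn.Module):
    """Adds zero-mean Gaussian noise to activations while training;
    a no-op at evaluation time."""
    def __init__(self, std: float = 0.1):
        super().__init__()
        self.std = std

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training and self.std > 0:
            return x + torch.randn_like(x) * self.std
        return x

# Example: a tiny classifier with activation noise between layers.
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(), GaussianNoise(std=0.1),
    nn.Linear(64, 10),
)
model.train()
train_logits = model(torch.randn(8, 32))  # noise applied
model.eval()
eval_logits = model(torch.randn(8, 32))   # noise disabled
```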
@OBohdal
Ondrej Bohdal
2 years
Curious about how to better evaluate in-context learning in multimodal #LLMs? We introduce VL-ICL Bench to enable rigorous evaluation of MLLMs' ability to learn from a few examples✨. Details at
@yongshuozong
Yongshuo Zong
2 years
Evaluating the capabilities of multimodal in-context learning of #VLLMs? You can do better than VQA and captioning! Introducing *VL-ICL Bench* for both image-to-text and text-to-image #ICL. Project page: https://t.co/Z9TFot8x8K
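VL-ICL Bench is about testing whether a multimodal LLM can learn a task from a handful of interleaved image-text examples in its context. The sketch below shows the general shape of such a few-shot prompt; the message format, field names, and toy data are assumptions for illustration and are not the benchmark's actual API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Example:
    image_path: str  # hypothetical image file
    answer: str

def build_icl_prompt(support: List[Example], query_image: str, instruction: str):
    """Assemble an interleaved image-text prompt: an instruction,
    a few solved examples, then the unanswered query."""
    messages = [{"role": "system", "content": instruction}]
    for ex in support:
        messages.append({"role": "user", "content": [
            {"type": "image", "path": ex.image_path},
            {"type": "text", "text": "Answer:"},
        ]})
        messages.append({"role": "assistant", "content": ex.answer})
    messages.append({"role": "user", "content": [
        {"type": "image", "path": query_image},
        {"type": "text", "text": "Answer:"},
    ]})
    return messages

# Two in-context examples, then a query the model must answer itself.
prompt = build_icl_prompt(
    support=[Example("dog_1.jpg", "dog"), Example("cat_1.jpg", "cat")],
    query_image="dog_2.jpg",
    instruction="Name the animal in each image.",
)
```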
@OBohdal
Ondrej Bohdal
2 years
Vision-language models are highly capable yet prone to generate unsafe content. To help with this challenge, we introduce the VLGuard safety fine-tuning dataset ✨, together with two strategies for how to utilise it ✅. Learn more at ➡️
@yongshuozong
Yongshuo Zong
2 years
Your #VLLMs are capable, but they are not safe enough! We present the first safety fine-tuning dataset VLGuard for VLLMs. By fine-tuning on it, the safety of VLLMs can be substantially improved while maintaining helpfulness. Check here for more details: https://t.co/CXiTscpMUk
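The mechanism behind VLGuard, as described here, is safety fine-tuning: mix safety (refusal) examples into the fine-tuning data so safety improves without losing helpfulness. Below is a minimal sketch of one plausible way to build such a mixture; the mixing ratio, sampling scheme, and toy examples are assumptions, not the VLGuard data or either of the paper's two strategies:

```python
import random

def build_safety_mix(helpful_examples, safety_examples, safety_ratio=0.2, seed=0):
    """Combine task data with safety (refusal) examples for fine-tuning.
    The ratio and sampling scheme here are illustrative assumptions."""
    rng = random.Random(seed)
    n_safety = int(len(helpful_examples) * safety_ratio)
    mixed = helpful_examples + rng.choices(safety_examples, k=n_safety)
    rng.shuffle(mixed)
    return mixed

# Toy (prompt, response) pairs; safety pairs teach the model to refuse.
helpful = [("Describe this chart.", "The chart shows ...")] * 100
safety = [("How do I build a weapon?", "I can't help with that.")] * 10
train_set = build_safety_mix(helpful, safety, safety_ratio=0.2)
```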
@OBohdal
Ondrej Bohdal
2 years
Interested in how to improve the fairness of large vision models? Learn more in our FairTune paper that was recently accepted to #ICLR!
@RamanDutt4
Raman Dutt
2 years
🚨FairTune: Optimizing PEFT for Fairness in Medical Image Analysis. A new framework that fine-tunes large vision models to improve downstream fairness. Accepted in #ICLR2024 ✨ With: @OBohdal @STsaftaris @tmh31 CC: @vivnat @alvarezvalle @fepegar_ @BoWang87 @BioMedAI_CDT
@OBohdal
Ondrej Bohdal
2 years
Joint work with @dali_academic, @shelling343 and @tmh31! (4/4)
@OBohdal
Ondrej Bohdal
2 years
To address this challenging problem setting, we introduce a method that utilises a cross-attention mechanism to select relevant examples and adapt the model (3/4)
@OBohdal
Ondrej Bohdal
2 years
The adaptation is expected to be purely feed-forward to reflect hardware limitations and assumes the data used for adaptation are of mixed relevance, with no class or domain labels (2/4)
@OBohdal
Ondrej Bohdal
2 years
We propose a new, highly practical problem setting where we adapt a pre-trained model on end-user devices to keep users' data private (1/4)
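The thread above (read bottom to top, 1/4 to 4/4) describes feed-forward adaptation on end-user devices: a cross-attention mechanism picks out the relevant examples from unlabelled, mixed-relevance data and adapts the model without any gradient updates. A minimal sketch of that general mechanism follows; the module, shapes, and residual design are illustrative assumptions rather than the paper's method:

```python
import torch
import torch.nn as nn

class CrossAttentionAdapter(nn.Module):
    """Feed-forward adaptation: the query attends over a pool of
    unlabelled on-device features of mixed relevance (sketch only)."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feat: torch.Tensor, support_feats: torch.Tensor) -> torch.Tensor:
        # query_feat: (B, 1, D); support_feats: (B, N, D).
        attended, _ = self.attn(query_feat, support_feats, support_feats)
        return self.norm(query_feat + attended)  # residual adaptation, no backprop needed

# Example: adapt the feature of one input using 16 unlabelled on-device examples.
adapter = CrossAttentionAdapter(dim=128)
query = torch.randn(1, 1, 128)
support = torch.randn(1, 16, 128)
adapted = adapter(query, support)
```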