Farrin Marouf Sofian
@farrinsofian
Followers: 47 · Following: 40 · Media: 3 · Statuses: 11
CS PhD at @UCIrvine | Building Generative Models | AI resident at @ChanZuckerberg | Prev. intern @Adobe
Irvine, CA
Joined September 2022
BREAKING: MIT just analyzed 300 AI deployments worth $40 billion & the results are devastating. Turns out, 95% of enterprise AI projects deliver zero measurable business impact. Here's what the data revealed: (hint: the pattern matches every major technology bubble we've seen)
[7/n] 🙌 Huge shoutout to my amazing collaborators: @kpandey008, @felixDrRelax, @StephanMandt, and @Tkaraletsos Catch us in Vancouver at #ICML2025. Let’s talk diffusion models, control theory, and beyond!
[6/n] ✨ Plug-and-play with large-scale models like Stable Diffusion! Our method can be extended to tasks like style guidance with latent diffusion models — no retraining, no new components.
[5/n] 🔥 Results: on two challenging classes of inverse problems, non-linear and blind (yes, our method extends trivially to blind inverse problems), we outperform SOTA by 35% and 44%, respectively. Some results on the blind image deblurring task here 👇
[4/n] 🧠 Our method brings test-time compute to diffusion models: more optimization steps yield better guidance, at the cost of runtime, without changing the pretrained base model.
[3/n] 🎯 At each diffusion step, we compute control signals via a simple optimization, which guides the diffusion process toward targeted outcomes. Surprisingly, it takes just 2–3 optimization steps in practice (see viz above).
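The per-step control optimization in [3/n] can be illustrated with a minimal NumPy sketch. This is NOT the paper's exact algorithm: the update rule, step sizes, and the toy score/objective functions below are all illustrative assumptions, meant only to show the shape of the idea (a few gradient steps refine a control vector added to the score at each reverse-diffusion step).

```python
import numpy as np

def guided_denoise_step(x_t, score_fn, objective_grad, sigma, n_opt_steps=3, lr=0.1):
    """One reverse-diffusion step with an optimized control signal.

    Generic sketch: at each diffusion step we run a small optimization for
    a control vector u that is added to the model's score, steering the
    sample toward a target objective. Only 2-3 inner steps are used, echoing
    the observation in the thread.
    """
    u = np.zeros_like(x_t)  # control signal, refined by a few gradient steps
    for _ in range(n_opt_steps):
        # candidate denoised point under the current control
        x_pred = x_t + sigma**2 * (score_fn(x_t) + u)
        # chain rule: d objective / d u = sigma^2 * (d objective / d x_pred)
        u -= lr * sigma**2 * objective_grad(x_pred)
    # take the controlled reverse step; the pretrained score_fn is untouched
    return x_t + sigma**2 * (score_fn(x_t) + u)

# Toy demo: guide samples toward the origin with objective ||x||^2 / 2.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
score = lambda v: -v       # stand-in for a pretrained score model
obj_grad = lambda v: v     # gradient of ||x||^2 / 2
x_guided = guided_denoise_step(x, score, obj_grad, sigma=0.5)
x_plain = x + 0.5**2 * score(x)  # same step without control
print(np.linalg.norm(x_guided) < np.linalg.norm(x_plain))
```

The guided step lands closer to the objective's minimum than the plain step, while the "pretrained" score function is never modified.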
[2/n] 💡 Inspired by ideas in variational optimal control, our method: ✅ Works with any pretrained diffusion model (pixel or latent space) ✅ Is task-agnostic (for any task that can be posed as a differentiable objective) ✅ Beats prior methods on inverse problems & style guidance
[1/n]❓Tired of guidance methods that require retraining or extra models? Classifier-based/free guidance comes with hefty costs, task-specific model training, or rigid design choices (especially when using diffusion posterior approximation methods). There’s a simpler way — Let's
🚀 News! Our recent #ICML2025 paper “Variational Control for Guidance in Diffusion Models” introduces a simple yet powerful method for guidance in diffusion models — and it doesn’t need model retraining or extra networks. 📄 Paper: https://t.co/nixanKxs9W 💻 Code: