
Uriel Singer
@urielsinger
Followers: 348 · Following: 132 · Media: 5 · Statuses: 47
Research Scientist @ Meta AI Research
Joined October 2015
Excited to share our work, Set Block Decoding! A new paradigm combining next-token prediction and masked (or discrete diffusion) models, allowing parallel decoding without any architectural changes and with an exact KV cache. Arguably one of the simplest ways to accelerate LLMs!
DTM vs FM. Lots of interest in how Difference Transition Matching (DTM) connects to Flow Matching (FM). Here is a short animation that illustrates Theorem 1 in our paper: for a very small step size (1/T), DTM converges to an Euler step of FM.
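In symbols (a sketch using standard FM notation, where u_t is the learned velocity field; the precise statement is Theorem 1 in the paper):

```latex
% Euler discretization of the FM ODE with step size 1/T:
x_{t+\frac{1}{T}} \;=\; x_t + \frac{1}{T}\, u_t(x_t)
% DTM instead *samples* the transition x_{t+1/T} ~ p( . | x_t );
% as T -> infinity, this transition concentrates on the Euler update above,
% so DTM recovers FM in the small-step limit.
```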
[1/n] New paper alert! Excited to introduce Transition Matching (TM)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
If you're curious to dive deeper into Transition Matching (TM), a great starting point is understanding the similarities and differences between Difference Transition Matching (DTM) and Flow Matching (FM).
This paper is awesome. Flow matching for flow matching! No more coarse-to-fine generation: coarse and fine details emerge together during generation. Results look super promising, especially when you see how the images evolve.
The Difference Transition Matching (DTM) process is so simple to illustrate, you can calculate it on a whiteboard! At each step: draw all lines connecting source and target (shaded); list those intersecting the current state (yellow); sample a line from the list (green).
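As a toy 1D sketch of that whiteboard procedure (illustrative only; the tolerance, the pairing of source and target samples, and the fallback rule are assumptions for the example, not the paper's algorithm):

```python
# Toy 1D version of the whiteboard steps: a "line" is the straight path
# x(t) = (1 - t) * x0 + t * x1 between a source sample x0 and a target
# sample x1. At each step, keep the lines consistent with the current
# state, sample one, and follow it for one step.
import random

def dtm_step(x_t, t, dt, pairs, tol=0.05):
    # List the lines whose position at time t matches the current state.
    consistent = [(x0, x1) for x0, x1 in pairs
                  if abs((1 - t) * x0 + t * x1 - x_t) <= tol]
    if not consistent:  # fall back to the closest line
        consistent = [min(pairs,
                          key=lambda p: abs((1 - t) * p[0] + t * p[1] - x_t))]
    x0, x1 = random.choice(consistent)          # sample a line from the list
    return (1 - (t + dt)) * x0 + (t + dt) * x1  # follow it for one step

random.seed(0)
# Source: points near 0; target: two modes at -1 and +1.
pairs = [(random.gauss(0, 0.1), random.choice([-1.0, 1.0])) for _ in range(200)]
x, T = pairs[0][0], 10
for k in range(T):
    x = dtm_step(x, k / T, 1 / T, pairs)
print(x)  # ends near one of the target modes
```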
**Transition Matching** is a new iterative generative paradigm that uses Flow Matching or AR models to transition between intermediate generation states, improving both generation quality and speed!
Check out our team's latest work, led by @urielsinger and @shaulneta!
Introducing Transition Matching (TM), a new generative paradigm that unifies Flow Matching and autoregressive models into one framework, boosting both quality and speed! Thank you for the great collaboration @shaulneta @itai_gat @lipmanya
Exciting news from #ICML2025 & #ICCV2025!
- VideoJAM accepted as *oral* at #ICML2025 (top 1%)
- Two talks at #ICCV2025: interpretability in the generative era; video customization
- Organizing two #ICCV2025 workshops: structural priors for vision; long video gen
VideoJAM is our new framework for improved motion generation from @AIatMeta. We show that video generators struggle with motion because the training objective favors appearance over dynamics. VideoJAM directly addresses this **without any extra data or scaling**.
Excited to share our recent work on corrector sampling in language models! A new sampling method that mitigates error accumulation by iteratively revisiting tokens in a window of previously generated text. With: @shaulneta @urielsinger @lipmanya Link: https://t.co/54etkhxNEK
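A rough sketch of the idea (a toy illustration, not the paper's exact procedure; `toy_sample` is a made-up stand-in for a language model):

```python
# Toy sketch of window-based corrector sampling: after each new token, the
# sampler revisits a window of the most recent tokens and resamples them,
# giving earlier mistakes a chance to be corrected.
import random

def toy_sample(context, pos=None):
    """Made-up stand-in for an LM: prefers tokens that alternate."""
    if pos is None:
        prev = context[-1] if context else None
    else:
        prev = context[pos - 1] if pos > 0 else None
    if prev in ("a", "b"):
        return "b" if prev == "a" else "a"
    return random.choice(["a", "b"])

def generate_with_corrector(n_tokens, window=3):
    seq = []
    for _ in range(n_tokens):
        seq.append(toy_sample(seq))              # ordinary next-token step
        start = max(0, len(seq) - window)
        for i in range(start, len(seq)):         # corrector pass over the window
            seq[i] = toy_sample(seq, pos=i)
    return seq

random.seed(0)
out = generate_with_corrector(8)
print("".join(out))
```

The corrector pass is what distinguishes this from plain left-to-right sampling: tokens inside the trailing window stay revisable until they scroll out of it.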
Beyond excited to share FlowMo! We found that the latent representations of video models implicitly encode motion information, and can guide the model toward coherent motion at inference time. Very proud of @ariel__shaulov @itayhzn for this work! Plus, it's open source!
1/ Text-to-video models generate stunning visuals, but... motion? Not so much. You get extra limbs, objects popping in and out... In our new paper, we present FlowMo -- an inference-time method that reduces temporal artifacts without retraining or architectural changes.
The longer a reasoning LLM thinks, the more likely it is to be correct, right? Apparently not. Presenting our paper: "Don't Overthink it. Preferring Shorter Thinking Chains for Improved LLM Reasoning". Link: https://t.co/Zsp3BD0TU5 1/n
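One way to picture a "prefer shorter chains" selection rule (a hypothetical sketch of the idea, not necessarily the paper's exact method; the chains and answers below are invented):

```python
# Sample several reasoning chains, keep only the m shortest, and
# majority-vote their final answers.
from collections import Counter

def pick_answer(chains, m=3):
    """chains: list of (reasoning_text, final_answer) tuples."""
    shortest = sorted(chains, key=lambda c: len(c[0]))[:m]  # m shortest chains
    votes = Counter(answer for _, answer in shortest)
    return votes.most_common(1)[0][0]

chains = [
    ("a very long and winding chain of thought about the problem", "17"),
    ("short chain", "42"),
    ("a medium length chain here", "42"),
    ("another quite long, meandering chain of thought as well", "17"),
]
ans = pick_answer(chains, m=2)
print(ans)
```

Besides any accuracy effect, selecting among the shortest chains also cuts decoding cost, since long chains can be abandoned early.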
We are presenting 3 orals and 1 spotlight at #ICLR2025 on two primary topics: generalizing the data-driven flow matching algorithm to jump processes, arbitrary discrete corruption processes, and beyond; and highly scalable algorithms for reward-driven learning settings.
Like text-to-video, but hate the hallucinations? This paper is the answer you are looking for!
This work was done during my internship at @AIatMeta. Huge thanks to my amazing collaborators @urielsinger @amit_zhr @YKirstain @adam_polyak90 Yaniv Taigman @liorwolf and @ShellySheynin. Check out the project page for many more results and details:
Introducing Still-Moving, our work from @GoogleDeepMind that lets you apply *any* image customization method to video models! Personalization (DreamBooth), stylization (StyleDrop), ControlNet: ALL in one method! Plus... you can control the amount of generated motion.
Thrilled to announce that our paper has been accepted for an Oral presentation at #ECCV2024! See you in Milan! With @urielsinger, @YKirstain, @ShellySheynin, @adam_polyak90, @deviparikh, and @taigman
Thrilled to share that our paper has been accepted to #ECCV2024!
Meta presents Video Editing via Factorized Diffusion Distillation. We introduce Emu Video Edit (EVE), a model that establishes a new state of the art in video editing without relying on any supervised video editing data. To develop EVE we separately train an image editing
Excited to share our recent work! We propose an unsupervised method that achieves a new state of the art in text-based video editing. Check it out: https://t.co/c2rAWnJhJL W/ the amazing @urielsinger, @YKirstain, @ShellySheynin, @adam_polyak90, @deviparikh, and @taigman
fdd-video-edit.github.io
Thank you @_akhaliq for sharing our recent work, Emu Video Edit, on video editing! Project page: https://t.co/uv60osIJCo An amazing collaboration with @amit_zhr, @YKirstain, @ShellySheynin, @adam_polyak90, @deviparikh, and @taigman
fdd-video-edit.github.io
Thanks for sharing our work! Joint work with @OmriPuny, @itai_gat, Brian Karrer, @urielsinger and @lipmanya
D-Flow: Differentiating through Flows for Controlled Generation. Taming the generation outcome of state-of-the-art Diffusion and Flow-Matching (FM) models without having to re-train a task-specific model unlocks a powerful tool for solving inverse problems, conditional