Uriel Singer Profile
Uriel Singer

@urielsinger

Followers: 348 · Following: 132 · Media: 5 · Statuses: 47

Research Scientist @ Meta AI Research

Joined October 2015
@helibenhamu
Heli Ben-Hamu
3 days
Excited to share our work Set Block Decoding! A new paradigm combining next-token-prediction and masked (or discrete diffusion) models, allowing parallel decoding without any architectural changes and with exact KV cache. Arguably one of the simplest ways to accelerate LLMs!
3
20
91
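The decoding idea in the tweet above (committing a whole block of future tokens per forward pass instead of one token at a time) can be mimicked with a toy stand-in model. Everything here (`toy_model`, `block_decode`, the `MASK` sentinel) is a hypothetical illustration of block-parallel decoding in general, not the paper's Set Block Decoding algorithm:

```python
# Toy sketch of block-parallel decoding (illustrative only, not the
# Set Block Decoding method from the paper). A dummy "model" fills
# every masked position of a block in one call, so a sequence of
# length n needs n / block_size forward passes instead of n.

MASK = -1  # sentinel for a not-yet-decoded position

def toy_model(prefix, masked_block):
    # Hypothetical stand-in for a trained masked/next-token predictor:
    # deterministically continues the sequence with consecutive integers.
    start = prefix[-1] + 1 if prefix else 0
    return [start + i for i in range(len(masked_block))]

def block_decode(n_tokens, block_size):
    seq, calls = [], 0
    while len(seq) < n_tokens:
        block = [MASK] * min(block_size, n_tokens - len(seq))
        seq.extend(toy_model(seq, block))  # fill the whole block in parallel
        calls += 1
    return seq, calls

seq, calls = block_decode(8, block_size=4)
print(seq, calls)  # 8 tokens in 2 "forward passes"
```

With `block_size=1` the same loop degenerates to ordinary next-token decoding, which is the comparison the tweet is making.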
@shaulneta
Neta Shaul
2 months
DTM vs FM 👇 Lots of interest in how Difference Transition Matching (DTM) connects to Flow Matching (FM). Here is a short animation that illustrates Theorem 1 in our paper: For a very small step size (1/T), DTM converges to an Euler step of FM.
@shaulneta
Neta Shaul
2 months
[1/n] New paper alert! 🚀 Excited to introduce Transition Matching (TM)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model 🤯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
2
49
324
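The convergence claim in the DTM vs FM thread above rests on the FM Euler step itself, which is easy to sketch numerically. A minimal sketch, assuming a single source/target pair and the straight-line path x_t = (1-t)·x0 + t·x1, whose velocity is x1 - x0 (an assumption for illustration, not the paper's general setting):

```python
# Minimal numeric sketch of the Flow Matching Euler step referenced in
# Theorem 1: T Euler steps of size 1/T along the linear-path velocity
# transport the source point to the target point.
def euler_integrate(x0, x1, T):
    x = x0
    for _ in range(T):
        v = x1 - x0            # FM velocity along a straight-line path
        x = x + (1.0 / T) * v  # one Euler step of size 1/T
    return x

x = euler_integrate(-2.0, 3.0, T=100)
print(x)  # reaches the target as the steps accumulate
```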
@shaulneta
Neta Shaul
2 months
If you're curious to dive deeper into Transition Matching (TM) ✨🔍, a great starting point is understanding the similarities and differences between Difference Transition Matching (DTM) and Flow Matching (FM) 💡.
@shaulneta
Neta Shaul
2 months
[1/n] New paper alert! 🚀 Excited to introduce Transition Matching (TM)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model 🤯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
2
17
128
@ArarMoab
moab.arar
2 months
This paper is awesome. 🔥 Flow-matching for flow-matching! ❌ No more coarse-to-fine generation. 🚀 Coarse and fine details emerge together during generation. 🏆 Results look super promising, especially when you see how the images evolve.
@shaulneta
Neta Shaul
2 months
[1/n] New paper alert! 🚀 Excited to introduce Transition Matching (TM)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model 🤯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
0
2
19
@shaulneta
Neta Shaul
2 months
The Difference Transition Matching (DTM) process is so simple to illustrate that you can calculate it on a whiteboard! At each step: Draw all lines connecting source and target (shaded) ⬇️ List those intersecting with the current state (yellow) ⬇️ Sample a line from the list (green)
@shaulneta
Neta Shaul
2 months
[1/n] New paper alert! 🚀 Excited to introduce Transition Matching (TM)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model 🤯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
2
17
139
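The whiteboard procedure above (draw all source-target lines, keep those intersecting the current state, sample one) can be mimicked in 1-D with a handful of points. `dtm_toy` below is a hypothetical rendering of that geometric picture, not the trained DTM model:

```python
import random

# Toy 1-D rendering of the whiteboard procedure: "lines" are all
# source-target pairs, and at each step we sample a line passing
# through the current state and follow it for one time step.
def dtm_toy(sources, targets, T=50, tol=1e-6, seed=0):
    rng = random.Random(seed)
    x = rng.choice(sources)  # start from a source sample
    for k in range(T):
        t = k / T
        # lines (x0, x1) whose point (1-t)*x0 + t*x1 intersects x
        hits = [(x0, x1) for x0 in sources for x1 in targets
                if abs((1 - t) * x0 + t * x1 - x) < tol]
        x0, x1 = rng.choice(hits)            # sample one such line
        t_next = (k + 1) / T
        x = (1 - t_next) * x0 + t_next * x1  # move along it
    return x

x = dtm_toy(sources=[-1.0, 1.0], targets=[-2.0, 0.0, 2.0])
print(x)  # the walk terminates exactly on one of the targets
```

At t=1 the interpolation collapses onto the chosen line's endpoint, so the final state is always a target sample, which matches the picture in the animation.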
@lipmanya
Yaron Lipman
2 months
**Transition Matching** is a new iterative generative paradigm that uses Flow Matching or AR models to transition between intermediate generation states, leading to improved generation quality and speed!
@shaulneta
Neta Shaul
2 months
[1/n] New paper alert! 🚀 Excited to introduce Transition Matching (TM)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model 🤯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
0
19
131
@itai_gat
Itai Gat
2 months
Check out our team's latest work, led by @urielsinger and @shaulneta!
@shaulneta
Neta Shaul
2 months
[1/n] New paper alert! 🚀 Excited to introduce Transition Matching (TM)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model 🤯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
0
2
17
@urielsinger
Uriel Singer
2 months
Introducing Transition Matching (TM), a new generative paradigm that unifies Flow Matching and autoregressive models into one framework, boosting both quality and speed! Thank you for the great collaboration @shaulneta @itai_gat @lipmanya
@shaulneta
Neta Shaul
2 months
[1/n] New paper alert! 🚀 Excited to introduce Transition Matching (TM)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model 🤯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
2
4
21
@hila_chefer
Hila Chefer
3 months
Exciting news from #ICML2025 & #ICCV2025 🥳 - 🥇 VideoJAM accepted as *oral* at #ICML2025 (top 1%) - Two talks at #ICCV2025 ☝️interpretability in the generative era ✌️video customization - Organizing two #ICCV2025 workshops ☝️structural priors for vision ✌️long video gen 🧵👇
@hila_chefer
Hila Chefer
7 months
VideoJAM is our new framework for improved motion generation from @AIatMeta. We show that video generators struggle with motion because the training objective favors appearance over dynamics. VideoJAM directly addresses this **without any extra data or scaling** 👇🧵
17
23
189
@itai_gat
Itai Gat
3 months
Excited to share our recent work on corrector sampling in language models! A new sampling method that mitigates error accumulation by iteratively revisiting tokens in a window of previously generated text. With: @shaulneta @urielsinger @lipmanya Link: https://t.co/54etkhxNEK
4
24
89
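The tweet's one-line description (iteratively revisiting tokens in a window of previously generated text) suggests the following toy sketch. The scoring function and window pass here are hypothetical stand-ins for illustration, not the paper's actual sampler:

```python
# Toy sketch of the revisit-a-window idea: after drafting a sequence,
# re-score each token in a trailing window given *both* its left and
# right context, and replace it when a better token exists. The scorer
# is a made-up stand-in for a real model's token probabilities.
VOCAB = "abcd"

def score(seq, i, tok):
    # Hypothetical scorer: rewards tokens that differ from both neighbors.
    left = seq[i - 1] if i > 0 else None
    right = seq[i + 1] if i + 1 < len(seq) else None
    return (tok != left) + (tok != right)

def correct_window(seq, window):
    seq = list(seq)
    for i in range(max(0, len(seq) - window), len(seq)):
        best = max(VOCAB, key=lambda tok: score(seq, i, tok))
        if score(seq, i, best) > score(seq, i, seq[i]):
            seq[i] = best  # revisit and fix the token
    return "".join(seq)

print(correct_window("aaba", window=3))  # → "acba"
```

The point of the window is that a token committed early can still be revised once its right context exists, which is what mitigates error accumulation in left-to-right sampling.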
@hila_chefer
Hila Chefer
3 months
Beyond excited to share FlowMo! We found that the latent representations of video models implicitly encode motion information, and can guide the model toward coherent motion at inference time. Very proud of @ariel__shaulov @itayhzn for this work! Plus, it's open source! 🥳
@itayhzn
Itay Hazan
3 months
🧵1/ Text-to-video models generate stunning visuals, but… motion? Not so much. You get extra limbs, objects popping in and out... In our new paper, we present FlowMo -- an inference-time method that reduces temporal artifacts without retraining or architectural changes. 👇
8
13
104
@MichaelHassid
Michael Hassid
3 months
The longer a reasoning LLM thinks, the more likely it is to be correct, right? Apparently not. Presenting our paper: "Don't Overthink it. Preferring Shorter Thinking Chains for Improved LLM Reasoning". Link: https://t.co/Zsp3BD0TU5 1/n
7
37
113
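Taking only the title's heuristic at face value (the paper's actual selection procedure may differ), a minimal sketch of preferring the shortest of several sampled thinking chains:

```python
# Toy sketch of the heuristic named in the paper title: among several
# candidate reasoning chains for the same question, return the answer
# attached to the shortest chain. The chains below are made-up data.
def prefer_shortest(chains):
    # chains: list of (reasoning_text, answer) pairs
    reasoning, answer = min(chains, key=lambda c: len(c[0]))
    return answer

chains = [
    ("step1 ... step12, therefore", "41"),
    ("2*3=6, 6*7=42", "42"),
    ("long meandering derivation with backtracking", "40"),
]
print(prefer_shortest(chains))  # → "42"
```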
@RickyTQChen
Ricky T. Q. Chen
5 months
We are presenting 3 orals and 1 spotlight at #ICLR2025 on two primary topics: On generalizing the data-driven flow matching algorithm to jump processes, arbitrary discrete corruption processes, and beyond. And on highly scalable algorithms for reward-driven learning settings.
1
28
231
@urielsinger
Uriel Singer
7 months
Like text-to-video, but hate the hallucinations? This paper is the answer you are looking for!
@hila_chefer
Hila Chefer
7 months
This work was done during my internship at @AIatMeta 🎉 Huge thanks to my amazing collaborators @urielsinger @amit_zhr @YKirstain @adam_polyak90 Yaniv Taigman @liorwolf and @ShellySheynin. Check out the project page for many more results and details:
1
0
13
@hila_chefer
Hila Chefer
1 year
Introducing ✨Still-Moving✨, our work from @GoogleDeepMind that lets you apply *any* image customization method to video models 🎥 Personalization (DreamBooth) 🐶, stylization (StyleDrop) 🎨, ControlNet 🖼️, ALL in one method! Plus… you can control the amount of generated motion 🏃‍♀️ 🧵👇
5
67
295
@amit_zhr
Amit Zohar
1 year
Thrilled to announce that our paper has been accepted for an Oral presentation at #ECCV2024! See you in Milan! With @urielsinger, @YKirstain, @ShellySheynin, @adam_polyak90, @deviparikh, and @taigman
4
16
62
@amit_zhr
Amit Zohar
1 year
Thrilled to share that our paper has been accepted to #ECCV2024! 🚀🚀🎉
@_akhaliq
AK
1 year
Meta presents Video Editing via Factorized Diffusion Distillation. We introduce Emu Video Edit (EVE), a model that establishes a new state-of-the-art in video editing without relying on any supervised video editing data. To develop EVE we separately train an image editing
1
8
32
@amit_zhr
Amit Zohar
1 year
Excited to share our recent work! 🎥📝 We propose an unsupervised method that achieves a new state-of-the-art in text-based video editing 🚀 Check it out: https://t.co/c2rAWnJhJL W/ the amazing @urielsinger, @YKirstain, @ShellySheynin, @adam_polyak90, @deviparikh, and @taigman
fdd-video-edit.github.io
@_akhaliq
AK
1 year
Meta presents Video Editing via Factorized Diffusion Distillation. We introduce Emu Video Edit (EVE), a model that establishes a new state-of-the-art in video editing without relying on any supervised video editing data. To develop EVE we separately train an image editing
0
12
79
@urielsinger
Uriel Singer
1 year
Thank you @_akhaliq for sharing our recent work, Emu Video Edit, on video editing! Project page: https://t.co/uv60osIJCo An amazing collaboration with @amit_zhr, @YKirstain, @ShellySheynin, @adam_polyak90, @deviparikh, and @taigman
fdd-video-edit.github.io
@_akhaliq
AK
1 year
Meta presents Video Editing via Factorized Diffusion Distillation. We introduce Emu Video Edit (EVE), a model that establishes a new state-of-the-art in video editing without relying on any supervised video editing data. To develop EVE we separately train an image editing
1
14
52
@helibenhamu
Heli Ben-Hamu
2 years
Thanks for sharing our work! Joint work with @OmriPuny, @itai_gat, Brian Karrer, @urielsinger and @lipmanya
@_akhaliq
AK
2 years
D-Flow: Differentiating through Flows for Controlled Generation. Taming the generation outcome of state-of-the-art Diffusion and Flow-Matching (FM) models without having to re-train a task-specific model unlocks a powerful tool for solving inverse problems, conditional
0
7
33