Yaron Lipman Profile
Yaron Lipman

@lipmanya

Followers: 4K · Following: 1K · Media: 25 · Statuses: 322

Research scientist @AIatMeta (FAIR), prev/visiting @WeizmannScience. Interested in generative models and deep learning of irregular/geometric data.πŸŽ—οΈ

Israel
Joined August 2014
@lipmanya
Yaron Lipman
1 year
A new (and comprehensive) Flow Matching guide and codebase released! Join us tomorrow at 9:30AM @NeurIPSConf for the FM tutorial to hear more... https://t.co/uaDy00wEw6 https://t.co/ceJlUiTuWO
9
108
521
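The tutorial's core recipe (regress a velocity field along a simple probability path, then integrate an ODE to sample) can be sketched in a few lines. A minimal 1-D sketch, assuming a Gaussian source and Gaussian target so the marginal velocity has a closed form; nothing here is taken from the released codebase:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (assumed for illustration): source p0 = N(0, 1),
# target p1 = N(3, 0.5^2), independent coupling, and the linear path
# x_t = (1 - t) x0 + t x1 used by standard Flow Matching.
m, s = 3.0, 0.5

def marginal_velocity(x, t):
    # Closed-form marginal velocity u_t(x) = E[x1 - x0 | x_t = x] for this
    # Gaussian pair -- the quantity a Flow Matching network is trained to regress.
    var = (1 - t) ** 2 + (t * s) ** 2
    return m + ((t * s**2 - (1 - t)) / var) * (x - t * m)

# Sampling: Euler-integrate the ODE dx/dt = u_t(x) from t = 0 to t = 1.
x = rng.normal(size=5000)
T = 200
for i in range(T):
    x = x + (1 / T) * marginal_velocity(x, i / T)

print(x.mean(), x.std())  # close to 3.0 and 0.5
```

In practice the closed-form velocity is replaced by a neural network trained with the conditional Flow Matching regression loss; the sampling loop is unchanged.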
@bose_joey
Joey Bose @ NeurIPS2025
27 days
Come do a PhD with me πŸ˜€! Promise of fun science and great coffee β˜•
@giladturok
Gilad
27 days
I like the way @joeybos lays out his vision for PhD supervision! Seems intense and rewarding.
31
72
742
@lipmanya
Yaron Lipman
2 months
New work led by @peholderrieth showing how to transform an already trained flow matching model to a stochastic transition/posterior model that can still be sampled via an efficient ODE solver!
@peholderrieth
Peter Holderrieth
2 months
New work: β€œGLASS Flows: Transition Sampling for Alignment of Flow and Diffusion Models”. GLASS generates images by sampling stochastic Markov transitions with ODEs - allowing us to boost text-image alignment for large-scale models at inference time! https://t.co/unsuG3mYer [1/7]
0
6
61
@peholderrieth
Peter Holderrieth
2 months
New work: β€œGLASS Flows: Transition Sampling for Alignment of Flow and Diffusion Models”. GLASS generates images by sampling stochastic Markov transitions with ODEs - allowing us to boost text-image alignment for large-scale models at inference time! https://t.co/unsuG3mYer [1/7]
4
62
253
@adiyossLC
Yossi Adi
2 months
We release Code World Model (CWM)! πŸ‘©β€πŸ’»πŸŒŽπŸ“Š A coding LLM designed to advance code generation research through agentic reasoning and world-model-based planning. Super excited about this release and proud of the team’s work! πŸ˜ƒ See Gab's post for more info πŸ‘‡
@syhw
Gabriel Synnaeve
2 months
(🧡) Today, we release Meta Code World Model (CWM), a 32-billion-parameter dense LLM that enables novel research on improving code generation through agentic reasoning and planning with world models. https://t.co/BJSUCh2vtg
0
12
50
@FelixKreuk
Felix Kreuk
2 months
1/ We released CWM, a 32B dense LLM for coding, agentic use, and, more importantly, to further World-Modeling research. To support this research, we release the pre-training, sft and rl model weights, along with inference code and the tech report. See:
@syhw
Gabriel Synnaeve
2 months
(🧡) Today, we release Meta Code World Model (CWM), a 32-billion-parameter dense LLM that enables novel research on improving code generation through agentic reasoning and planning with world models. https://t.co/BJSUCh2vtg
1
7
39
@syhw
Gabriel Synnaeve
2 months
(🧡) Today, we release Meta Code World Model (CWM), a 32-billion-parameter dense LLM that enables novel research on improving code generation through agentic reasoning and planning with world models. https://t.co/BJSUCh2vtg
60
314
2K
@_akhaliq
AK
3 months
Set Block Decoding is a Language Model Inference Accelerator
3
6
55
@lipmanya
Yaron Lipman
3 months
The blue vertical lines in the animation indicate block starts/ends; the method generates block-by-block.
0
0
0
@lipmanya
Yaron Lipman
3 months
A new paper showing a simple method for accelerating LLMs with a short fine-tune and **no** architecture changes….
@helibenhamu
Heli Ben-Hamu
3 months
Excited to share our work Set Block Decoding! A new paradigm combining next-token-prediction and masked (or discrete diffusion) models, allowing parallel decoding without any architectural changes and with exact KV cache. Arguably one of the simplest ways to accelerate LLMs!
1
2
25
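The idea the thread describes, filling several positions of a masked block per model call instead of one token at a time, can be sketched with a stand-in "model". Everything below (the `toy_predict` heuristic, its confidence rule, the unmask-top-k loop) is hypothetical scaffolding to show the control flow, not the paper's actual method:

```python
MASK = -1

def toy_predict(seq):
    """Hypothetical stand-in for the LM: for every masked position, return a
    predicted token and a confidence. Here the toy rule continues an arithmetic
    sequence (each token = nearest filled token on the left, plus the distance),
    with confidence decaying with that distance."""
    preds, confs = {}, {}
    for i, tok in enumerate(seq):
        if tok != MASK:
            continue
        j = i - 1
        while seq[j] == MASK:
            j -= 1
        preds[i] = seq[j] + (i - j)
        confs[i] = 1.0 / (i - j)
    return preds, confs

def decode_block(prefix, block_size, k=2):
    """Fill one block of masked tokens, unmasking the k most confident
    positions per model call (instead of one token per call as in NTP)."""
    seq = list(prefix) + [MASK] * block_size
    calls = 0
    while MASK in seq:
        preds, confs = toy_predict(seq)
        top = sorted(preds, key=lambda i: -confs[i])[:k]
        for i in top:
            seq[i] = preds[i]
        calls += 1
    return seq, calls

out, calls = decode_block([0, 1, 2], block_size=4, k=2)
print(out, calls)  # fills 4 tokens in 2 model calls instead of 4
```

The speedup comes entirely from unmasking multiple positions per forward pass; with k=1 this degenerates back to sequential next-token prediction.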
@itai_gat
Itai Gat
3 months
Check out our recent work Set Block Decoding! Super simple modeling, combining NTP and discrete diffusion, allowing parallel decoding without any architectural changes and with exact KV cache! Arxiv:
[Link card: arxiv.org] Autoregressive next token prediction language models offer powerful capabilities but face significant challenges in practical deployment due to the high computational and memory costs of...
@helibenhamu
Heli Ben-Hamu
3 months
Excited to share our work Set Block Decoding! A new paradigm combining next-token-prediction and masked (or discrete diffusion) models, allowing parallel decoding without any architectural changes and with exact KV cache. Arguably one of the simplest ways to accelerate LLMs!
0
3
23
@helibenhamu
Heli Ben-Hamu
3 months
Excited to share our work Set Block Decoding! A new paradigm combining next-token-prediction and masked (or discrete diffusion) models, allowing parallel decoding without any architectural changes and with exact KV cache. Arguably one of the simplest ways to accelerate LLMs!
3
25
114
@jbhuang0604
Jia-Bin Huang
4 months
Explaining Flow Matching in 4 minutes
9
130
1K
@shaulneta
Neta Shaul
5 months
DTM vs FMπŸ‘‡ Lots of interest in how Difference Transition Matching (DTM) connects to Flow Matching (FM). Here is a short animation that illustrates Theorem 1 in our paper: For a very small step size (1/T), DTM converges to an Euler step of FM.
@shaulneta
Neta Shaul
5 months
[1/n] New paper alert! πŸš€ Excited to introduce π“π«πšπ§π¬π’π­π’π¨π§ 𝐌𝐚𝐭𝐜𝐑𝐒𝐧𝐠 (π“πŒ)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model🀯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
2
50
329
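Schematically, and with notation assumed here rather than copied from the paper: DTM learns a transition kernel over a short step of size 1/T, and the theorem the animation illustrates says that sampling this kernel collapses, as T grows, to the deterministic Euler update of the Flow Matching ODE:

```latex
% As the step size 1/T -> 0, a DTM transition reduces to the Euler step
% of the Flow Matching ODE \dot{x}_t = u_t(x_t):
x_{t+\frac{1}{T}} \;\sim\; p_{t+\frac{1}{T}\mid t}\!\left(\cdot \mid x_t\right)
\;\xrightarrow[\,T\to\infty\,]{}\;
x_{t+\frac{1}{T}} \;=\; x_t + \tfrac{1}{T}\, u_t(x_t).
```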
@shaulneta
Neta Shaul
5 months
If you're curious to dive deeper into Transition Matching (TM)βœ¨πŸ”, a great starting point is understanding the similarities and differences between πƒπ’πŸπŸπžπ«πžπ§πœπž π“π«πšπ§π¬π’π­π’π¨π§ 𝐌𝐚𝐭𝐜𝐑𝐒𝐧𝐠 (πƒπ“πŒ) and Flow Matching (FM)πŸ’‘.
@shaulneta
Neta Shaul
5 months
[1/n] New paper alert! πŸš€ Excited to introduce π“π«πšπ§π¬π’π­π’π¨π§ 𝐌𝐚𝐭𝐜𝐑𝐒𝐧𝐠 (π“πŒ)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model🀯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
2
17
129
@shaulneta
Neta Shaul
5 months
[1/n] New paper alert! πŸš€ Excited to introduce π“π«πšπ§π¬π’π­π’π¨π§ 𝐌𝐚𝐭𝐜𝐑𝐒𝐧𝐠 (π“πŒ)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model🀯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
5
46
290
@shaulneta
Neta Shaul
5 months
The Difference Transition Matching (DTM) process is so simple to illustrate, you can calculate it on a whiteboard! At each step: draw all lines connecting source and target (shaded) ⬇️ list those intersecting with the current state (yellow) ⬇️ sample a line from the list (green)
@shaulneta
Neta Shaul
5 months
[1/n] New paper alert! πŸš€ Excited to introduce π“π«πšπ§π¬π’π­π’π¨π§ 𝐌𝐚𝐭𝐜𝐑𝐒𝐧𝐠 (π“πŒ)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model🀯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
2
17
138
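The whiteboard procedure above is concrete enough to run in 1-D. A small sketch, assuming a Gaussian source, a two-point target, and a tolerance for "intersecting"; all of these choices are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D version of the "whiteboard" DTM procedure from the thread.
# Source and target samples define straight lines x(t) = (1-t) x0 + t x1;
# at each step we keep the lines passing near the current state and
# sample one of them to move along.
x0s = rng.normal(size=4000)               # source samples: N(0, 1)
x1s = rng.choice([-2.0, 2.0], size=4000)  # target samples: two points

def dtm_step(x, t, dt, tol):
    lines = (1 - t) * x0s + t * x1s                  # each line's value at time t
    hits = np.flatnonzero(np.abs(lines - x) < tol)   # lines through x (approx.)
    i = rng.choice(hits)                             # sample one intersecting line
    return (1 - (t + dt)) * x0s[i] + (t + dt) * x1s[i]  # follow it one step

T = 20
x = rng.normal()
for k in range(T):
    x = dtm_step(x, k / T, 1 / T, tol=0.1)
print(x)  # ends at (essentially) -2 or +2
```

Because each step moves the state exactly onto the sampled line, that line is always within tolerance at the next step, so the hit list is never empty; at t = 1 the state lands on one of the target samples.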
@urielsinger
Uriel Singer
5 months
Introducing Transition Matching (TM) β€” a new generative paradigm that unifies Flow Matching and autoregressive models into one framework, boosting both quality and speed! Thank you for the great collaboration @shaulneta @itai_gat @lipmanya
@shaulneta
Neta Shaul
5 months
[1/n] New paper alert! πŸš€ Excited to introduce π“π«πšπ§π¬π’π­π’π¨π§ 𝐌𝐚𝐭𝐜𝐑𝐒𝐧𝐠 (π“πŒ)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model🀯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
2
4
22
@itai_gat
Itai Gat
5 months
Check out our team's latest work, led by @urielsinger and @shaulneta!
@shaulneta
Neta Shaul
5 months
[1/n] New paper alert! πŸš€ Excited to introduce π“π«πšπ§π¬π’π­π’π¨π§ 𝐌𝐚𝐭𝐜𝐑𝐒𝐧𝐠 (π“πŒ)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model🀯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
0
2
17
@lipmanya
Yaron Lipman
5 months
**Transition Matching** is a new iterative generative paradigm that uses Flow Matching or AR models to transition between intermediate generation states, improving both generation quality and speed!
@shaulneta
Neta Shaul
5 months
[1/n] New paper alert! πŸš€ Excited to introduce π“π«πšπ§π¬π’π­π’π¨π§ 𝐌𝐚𝐭𝐜𝐑𝐒𝐧𝐠 (π“πŒ)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model🀯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
0
19
131
@guanhorng_liu
Guan-Horng Liu
5 months
Adjoint-based diffusion samplers have simple & scalable objectives w/o impt weight complication. Like many, though, they solve degenerate SchrΓΆdinger bridges, despite all being SB-inspired. πŸ“’ Proudly introduce #Adjoint #SchrΓΆdinger #Bridge #Sampler, a full SB-based sampler that
3
39
224