Amit Zohar Profile
Amit Zohar

@amit_zhr

Followers: 204
Following: 185
Media: 3
Statuses: 56

AI Researcher @ Meta AI

Tel Aviv
Joined November 2018
@amit_zhr
Amit Zohar
1 year
Thrilled to share results from the Movie Gen models we've been working on these past few months, and particularly the Movie Gen Edit model for precise editing! 🚀🚀
@AIatMeta
AI at Meta
1 year
🎥 Today we’re premiering Meta Movie Gen: the most advanced media foundation models to-date. Developed by AI research teams at Meta, Movie Gen delivers state-of-the-art results across a range of capabilities. We’re excited for the potential of this line of research to usher in
5
13
112
@roibar_on
Roi Bar-On
4 months
1/9 Excited to share EditP23! 🎨 Finally, a single tool for ALL your 3D editing needs: ✅ Pose & Geometry Changes ✅ Object Additions ✅ Global Style Transformations ✅ Local Modifications All driven by one simple 2D image edit. It's mask-free ✨ and works in seconds ⚡️. 🧵
2
28
94
@amit_zhr
Amit Zohar
5 months
Awesome work that may very well start a new paradigm for media generation!
@shaulneta
Neta Shaul
5 months
[1/n] New paper alert! 🚀 Excited to introduce 𝐓𝐫𝐚𝐧𝐬𝐢𝐭𝐢𝐨𝐧 𝐌𝐚𝐭𝐜𝐡𝐢𝐧𝐠 (𝐓𝐌)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model🤯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
0
0
6
@amit_zhr
Amit Zohar
6 months
Amazing work by Hila!!
@hila_chefer
Hila Chefer
6 months
Exciting news from #ICML2025 & #ICCV2025 🥳 - 🥇 VideoJAM accepted as *oral* at #ICML2025 (top 1%) - Two talks at #ICCV2025 ☝️interpretability in the generative era ✌️video customization - Organizing two #ICCV2025 workshops ☝️structural priors for vision ✌️long video gen 🧵👇
1
0
7
@MichaelHassid
Michael Hassid
6 months
The longer a reasoning LLM thinks, the more likely it is to be correct, right? Apparently not. Presenting our paper: “Don’t Overthink it. Preferring Shorter Thinking Chains for Improved LLM Reasoning”. Link: https://t.co/Zsp3BD0TU5 1/n
7
37
113
@guy_yariv
Guy Yariv
9 months
I'm thrilled to announce that Through-The-Mask (TTM) has been accepted to #CVPR2025! TTM is an I2V generation framework that leverages mask-based motion trajectories to enhance object-specific motion and maintain consistency, especially in multi-object scenarios More details👇
@guy_yariv
Guy Yariv
11 months
[1/8] Recent work has shown impressive Image-to-Video (I2V) generation results. However, accurately articulating multiple interacting objects and complex motions remains challenging. In our new work, we take a step toward addressing this challenge.
7
7
44
@robertarail
Roberta Raileanu
10 months
Super excited to share 🧠MLGym 🦾 – the first Gym environment for AI Research Agents 🤖🔬 We introduce MLGym and MLGym-Bench, a new framework and benchmark for evaluating and developing LLM agents on AI research tasks. The key contributions of our work are: 🕹️ Enables the
16
121
493
@giffmana
Lucas Beyer (bl16)
10 months
This is extremely cool! They find diffusion loss is not very sensitive to motion. Thus they fine-tune videogen models with additional explicit motion prediction, making the model generate much more coherent videos. Also, Hila has been doing consistently good work, follow her!
@hila_chefer
Hila Chefer
10 months
VideoJAM is our new framework for improved motion generation from @AIatMeta We show that video generators struggle with motion because the training objective favors appearance over dynamics. VideoJAM directly addresses this **without any extra data or scaling** 👇🧵
6
22
277
@_akhaliq
AK
10 months
Meta just dropped VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models. Comparison with OpenAI Sora and Kling
17
127
684
@amit_zhr
Amit Zohar
10 months
🚀 Our latest work, VideoJAM, introduces a new method to enhance motion in any T2V model, significantly improving its motion and physics. We also train a DiT model that, combined with VideoJAM, achieves a new SOTA in motion generation! 🔥 https://t.co/WZm5RLfgyX
hila-chefer.github.io
VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models
@hila_chefer
Hila Chefer
10 months
This work was done during my internship at @AIatMeta 🎉 Huge thanks to my amazing collaborators @urielsinger @amit_zhr @YKirstain @adam_polyak90 Yaniv Taigman @liorwolf and @ShellySheynin Check out the project page for many more results and details:
1
1
15
@Or__Tal
Or Tal
11 months
Release Announcement!📢💣 🥁🎷JASCO 🎶🪇 training & inference code + model weights are out! Paper📜: https://t.co/exolVO1sJV Samples🔊: https://t.co/uPe7QPk9DB Code🐍: https://t.co/NvhKDFZGoU Models🤗: https://t.co/cEiw2nt41D @lonziks @itai_gat @FelixKreuk @adiyossLC
huggingface.co
5
20
71
@guy_yariv
Guy Yariv
11 months
[1/8] Recent work has shown impressive Image-to-Video (I2V) generation results. However, accurately articulating multiple interacting objects and complex motions remains challenging. In our new work, we take a step toward addressing this challenge.
7
27
80
@dtrinh
Danny Trinh
1 year
VERY excited about the era of generative AR we're bringing to life. Check out this preview! It's early but so damn promising — this isn't "AI slop"... it's unlocking Creators' imaginations on their own videos. Change your wardrobe, scene, lighting etc. with little expertise. PS
24
18
215
@imisra_
Ishan Misra
1 year
MovieGen powering Instagram's video editing features :)
theverge.com
A new background or a silly hat is just a text prompt away.
2
7
65
@chihyaoma
Kevin Chih-Yao Ma
1 year
Movie Gen claims to be the state-of-the-art in text-to-video generation, outperforming Sora, Kling, Gen3, and more. But how can you trust the results? Today, we're releasing 1003 videos and their prompts - no cherry-picking allowed. Our goal? To set a new standard for evaluating
@AIatMeta
AI at Meta
1 year
As detailed in the Meta Movie Gen technical report, today we’re open sourcing Movie Gen Bench: two new media generation benchmarks that we hope will help to enable the AI research community to progress work on more capable audio and video generation models. Movie Gen Video Bench
2
8
53
@imisra_
Ishan Misra
1 year
Two exciting updates on Movie Gen (1) MovieGenBench containing thousands of *random* generations for benchmarking video/audio tasks :) (2) Folks in Hollywood (Casey Affleck, Blumhouse productions) took Movie Gen for a spin:
ai.meta.com
We’re sharing initial results from our work with award-winning production company Blumhouse and select creators—part of a pilot program focused on creative industry feedback.
@AIatMeta
AI at Meta
1 year
As detailed in the Meta Movie Gen technical report, today we’re open sourcing Movie Gen Bench: two new media generation benchmarks that we hope will help to enable the AI research community to progress work on more capable audio and video generation models. Movie Gen Video Bench
3
14
148
@Andrew__Brown__
Andrew Brown
1 year
So how did we get to these amazing videos for Meta Movie Gen? One of the things I’m proudest of is that we released a very detailed technical report (https://t.co/FU2PzloDhr…) Let's dive into a technical summary of what we did & learnt 🧵 1/n https://t.co/BJPvf7wC9v
ai.meta.com
Meta Movie Gen is our latest research breakthrough that allows you to use simple text inputs to create videos and sounds, edit existing videos or transform your personal image into a unique video.
@AIatMeta
AI at Meta
1 year
🎥 Today we’re premiering Meta Movie Gen: the most advanced media foundation models to-date. Developed by AI research teams at Meta, Movie Gen delivers state-of-the-art results across a range of capabilities. We’re excited for the potential of this line of research to usher in
25
158
1K
@YKirstain
Yuval Kirstain
1 year
So proud to be part of the Movie Gen project, pushing GenAI boundaries! Two key insights: 1. Amazing team + high-quality data + clean, scalable code + general architecture + GPUs go brr = SOTA video generation. 2. Video editing *without* supervised data: train a *single* model
6
25
153
@adam_polyak90
Adam Polyak
1 year
Excited to share our progress on Movie Gen, a SOTA model for video generation! 🎥✨ I worked on this project as part of a cutting-edge team 🔥, pushing the boundaries of video editing ✂️— all without supervised data. Can’t wait to show you what’s next! 🚀🎬
@AIatMeta
AI at Meta
1 year
🎥 Today we’re premiering Meta Movie Gen: the most advanced media foundation models to-date. Developed by AI research teams at Meta, Movie Gen delivers state-of-the-art results across a range of capabilities. We’re excited for the potential of this line of research to usher in
3
8
47
@imisra_
Ishan Misra
1 year
We released 92 pages worth of detail including how to benchmark these models! Super critical for the scientific progress in this field :) We'll also release evaluation benchmarks next week to help the research community 💪
@soumithchintala
Soumith Chintala
1 year
yes, Meta released a full scientific paper on MovieGen, with a lot of details that'll help the field move forward.
10
35
427