Amit Zohar (@amit_zhr)
204 followers · 185 following · 3 media · 56 statuses
Thrilled to share results from the Movie Gen models we've been working on these past few months, and particularly the Movie Gen Edit model for precise editing! 🚀🚀
🎥 Today we’re premiering Meta Movie Gen: the most advanced media foundation models to date. Developed by AI research teams at Meta, Movie Gen delivers state-of-the-art results across a range of capabilities. We’re excited for the potential of this line of research to usher in…
💬 5 · 🔁 13 · ❤️ 112
1/9 Excited to share EditP23! 🎨 Finally, a single tool for ALL your 3D editing needs: ✅ Pose & Geometry Changes ✅ Object Additions ✅ Global Style Transformations ✅ Local Modifications All driven by one simple 2D image edit. It's mask-free ✨ and works in seconds ⚡️. 🧵
💬 2 · 🔁 28 · ❤️ 94
Awesome work that may very well start a new paradigm for media generation!
[1/n] New paper alert! 🚀 Excited to introduce 𝐓𝐫𝐚𝐧𝐬𝐢𝐭𝐢𝐨𝐧 𝐌𝐚𝐭𝐜𝐡𝐢𝐧𝐠 (𝐓𝐌)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model🤯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
💬 0 · 🔁 0 · ❤️ 6
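Since the tweet compresses the idea into one line, here is a minimal, heavily hedged sketch of how I read it: the Gaussian kernel that carries a diffusion/flow sampler from one timestep to the next is itself replaced by a small learned generative model. Everything below (class name, the inner Euler-integrated flow, step counts) is an illustrative assumption, not the paper's architecture or API.

```python
import torch

class TransitionModel(torch.nn.Module):
    """Toy velocity field for the *inner* flow that samples x_{t+1} given x_t."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2 * dim + 1, 128), torch.nn.SiLU(),
            torch.nn.Linear(128, dim),
        )

    def forward(self, z, x_t, s):
        # z: inner-flow state, x_t: current outer-chain state (conditioning),
        # s: inner flow time in [0, 1], shape (batch, 1).
        return self.net(torch.cat([z, x_t, s], dim=-1))

@torch.no_grad()
def sample(model, dim, num_transitions=4, inner_steps=16):
    x = torch.randn(1, dim)              # outer chain starts from noise
    for _ in range(num_transitions):
        # One outer "timestep": instead of a single Gaussian denoising step,
        # integrate a small learned flow (Euler steps) conditioned on x.
        z = torch.randn(1, dim)
        ds = 1.0 / inner_steps
        for i in range(inner_steps):
            s = torch.full((1, 1), i * ds)
            z = z + ds * model(z, x, s)
        x = z                            # the inner flow's endpoint is the next outer state
    return x

x = sample(TransitionModel(8), dim=8)
```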
The longer a reasoning LLM thinks, the more likely it is to be correct, right? Apparently not. Presenting our paper: “Don’t Overthink it. Preferring Shorter Thinking Chains for Improved LLM Reasoning”. Link: https://t.co/Zsp3BD0TU5 1/n
💬 7 · 🔁 37 · ❤️ 113
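A minimal sketch of the recipe as the tweet frames it: sample several chains and prefer the shorter ones. The selection rule below (majority vote over the m shortest of k samples) is my assumption for illustration; the paper pins down the exact decoding strategy.

```python
from collections import Counter

def shortest_majority_answer(chains, m=3):
    """chains: list of (reasoning_text, final_answer) pairs from k sampled
    generations. Keep the m shortest chains and majority-vote their answers."""
    shortest = sorted(chains, key=lambda c: len(c[0]))[:m]
    votes = Counter(answer for _, answer in shortest)
    return votes.most_common(1)[0][0]

# Usage: sample k chains from the LLM at temperature > 0, then
# answer = shortest_majority_answer(samples, m=3)
```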
I'm thrilled to announce that Through-The-Mask (TTM) has been accepted to #CVPR2025! TTM is an I2V generation framework that leverages mask-based motion trajectories to enhance object-specific motion and maintain consistency, especially in multi-object scenarios. More details 👇
[1/8] Recent work has shown impressive Image-to-Video (I2V) generation results. However, accurately articulating multiple interacting objects and complex motions remains challenging. In our new work, we take a step toward addressing this challenge.
💬 7 · 🔁 7 · ❤️ 44
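A hypothetical sketch of the two-stage flow the announcement describes: first derive mask-based motion trajectories, then condition I2V generation on them. Function names, signatures, and the tensor shape are illustrative assumptions, not TTM's actual interfaces.

```python
def generate_video(image, prompt, trajectory_model, video_model):
    # Stage 1 (assumed): per-object segmentation masks over time, i.e. an
    # explicit motion plan for each object. Shape: [T, num_objects, H, W].
    mask_trajectories = trajectory_model(image, prompt)

    # Stage 2 (assumed): image-to-video generation conditioned on those
    # trajectories, so object-specific motion is guided rather than implicit.
    return video_model(image, prompt, masks=mask_trajectories)
```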
Super excited to share 🧠MLGym 🦾 – the first Gym environment for AI Research Agents 🤖🔬 We introduce MLGym and MLGym-Bench, a new framework and benchmark for evaluating and developing LLM agents on AI research tasks. The key contributions of our work are: 🕹️ Enables the…
💬 16 · 🔁 121 · ❤️ 493
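To make the "Gym environment for research agents" framing concrete, here is a self-contained toy with the standard reset/step contract. This is not MLGym's API; the environment, task, and reward below are stand-ins.

```python
import random

class ToyResearchEnv:
    """Stand-in 'research task': the agent proposes a learning rate and the
    reward is the negative validation loss of a fake experiment."""

    def reset(self):
        self.steps_left = 5
        return {"task": "pick a learning rate in [0, 1] minimizing loss"}

    def step(self, action: float):
        loss = (action - 0.3) ** 2 + random.gauss(0, 0.01)  # optimum near 0.3
        self.steps_left -= 1
        return {"last_loss": loss}, -loss, self.steps_left == 0, {}

env = ToyResearchEnv()
obs, done, best = env.reset(), False, float("-inf")
while not done:
    action = random.random()                    # a real agent would query an LLM
    obs, reward, done, info = env.step(action)
    best = max(best, reward)
print("best reward:", best)
```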
This is extremely cool! They find diffusion loss is not very sensitive to motion. Thus they fine-tune videogen models with additional explicit motion prediction, making the model generate much more coherent videos. Also, Hila has been doing consistently good work, follow her!
VideoJAM is our new framework for improved motion generation from @AIatMeta. We show that video generators struggle with motion because the training objective favors appearance over dynamics. VideoJAM directly addresses this **without any extra data or scaling** 👇🧵
💬 6 · 🔁 22 · ❤️ 277
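A hedged sketch of the training idea the two posts above describe: add an explicit motion-prediction term next to the usual denoising objective so the model cannot ignore dynamics. The motion target (e.g. optical flow), the joint prediction head, the toy forward process, and the loss weight are all assumptions here; the paper gives the real formulation.

```python
import torch
import torch.nn.functional as F

def add_noise(x, noise, t):
    # Toy linear forward process for the sketch; real models use a scheduler.
    a = t.view(-1, *([1] * (x.dim() - 1)))
    return (1 - a) * x + a * noise

def videojam_style_loss(model, x_video, motion_target, t, noise, w_motion=1.0):
    """x_video: clean video latents [B, T, C, H, W]; motion_target: e.g.
    optical flow computed from the clean video; t: timesteps in [0, 1]."""
    x_noisy = add_noise(x_video, noise, t)
    pred_noise, pred_motion = model(x_noisy, t)            # joint prediction heads
    loss_appearance = F.mse_loss(pred_noise, noise)        # standard denoising term
    loss_motion = F.mse_loss(pred_motion, motion_target)   # explicit motion term
    return loss_appearance + w_motion * loss_motion
```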
Meta just dropped VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models. Comparison with OpenAI Sora and Kling:
💬 17 · 🔁 127 · ❤️ 684
🚀 Our latest work, VideoJAM, introduces a new method to enhance motion in any T2V model, significantly improving its motion and physics. We also train a DiT model that, combined with VideoJAM, achieves a new SOTA in motion generation! 🔥 https://t.co/WZm5RLfgyX
🔗 hila-chefer.github.io · VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models
This work was done during my internship at @AIatMeta 🎉 Huge thanks to my amazing collaborators @urielsinger @amit_zhr @YKirstain @adam_polyak90 Yaniv Taigman @liorwolf and @ShellySheynin Check out the project page for many more results and details:
💬 1 · 🔁 1 · ❤️ 15
Release Announcement!📢💣 🥁🎷JASCO 🎶🪇 training & inference code + model weights are out! Paper📜: https://t.co/exolVO1sJV Samples🔊: https://t.co/uPe7QPk9DB Code🐍: https://t.co/NvhKDFZGoU Models🤗: https://t.co/cEiw2nt41D
@lonziks @itai_gat @FelixKreuk @adiyossLC
💬 5 · 🔁 20 · ❤️ 71
[1/8] Recent work has shown impressive Image-to-Video (I2V) generation results. However, accurately articulating multiple interacting objects and complex motions remains challenging. In our new work, we take a step toward addressing this challenge.
💬 7 · 🔁 27 · ❤️ 80
VERY excited about the era of generative AR we're bringing to life. Check out this preview! It's early but so damn promising — this isn't "AI slop"... it's unlocking Creators' imaginations on their own videos. Change your wardrobe, scene, lighting, etc. with little expertise. PS…
💬 24 · 🔁 18 · ❤️ 215
MovieGen powering Instagram's video editing features :)
🔗 theverge.com · A new background or a silly hat is just a text prompt away.
💬 2 · 🔁 7 · ❤️ 65
Movie Gen claims to be the state-of-the-art in text-to-video generation, outperforming Sora, Kling, Gen3, and more. But how can you trust the results? Today, we're releasing 1003 videos and their prompts - no cherry-picking allowed. Our goal? To set a new standard for evaluating…
As detailed in the Meta Movie Gen technical report, today we’re open sourcing Movie Gen Bench: two new media generation benchmarks that we hope will help to enable the AI research community to progress work on more capable audio and video generation models. Movie Gen Video Bench…
💬 2 · 🔁 8 · ❤️ 53
Two exciting updates on Movie Gen: (1) MovieGenBench, containing thousands of *random* generations for benchmarking video/audio tasks :) (2) Folks in Hollywood (Casey Affleck, Blumhouse Productions) took Movie Gen for a spin:
🔗 ai.meta.com · We’re sharing initial results from our work with award-winning production company Blumhouse and select creators—part of a pilot program focused on creative industry feedback.
As detailed in the Meta Movie Gen technical report, today we’re open sourcing Movie Gen Bench: two new media generation benchmarks that we hope will help to enable the AI research community to progress work on more capable audio and video generation models. Movie Gen Video Bench…
💬 3 · 🔁 14 · ❤️ 148
So how did we get to these amazing videos for Meta Movie Gen? One of the things I’m proudest of is that we released a very detailed technical report ( https://t.co/FU2PzloDhr…) Let’s dive into a technical summary of what we did & learnt 🧵 1/n https://t.co/BJPvf7wC9v
🔗 ai.meta.com · Meta Movie Gen is our latest research breakthrough that allows you to use simple text inputs to create videos and sounds, edit existing videos or transform your personal image into a unique video.
🎥 Today we’re premiering Meta Movie Gen: the most advanced media foundation models to date. Developed by AI research teams at Meta, Movie Gen delivers state-of-the-art results across a range of capabilities. We’re excited for the potential of this line of research to usher in…
💬 25 · 🔁 158 · ❤️ 1K
So proud to be part of the Movie Gen project, pushing GenAI boundaries! Two key insights: 1. Amazing team + high-quality data + clean, scalable code + general architecture + GPUs go brr = SOTA video generation. 2. Video editing *without* supervised data: train a *single* model…
💬 6 · 🔁 25 · ❤️ 153
Excited to share our progress on Movie Gen, a SOTA model for video generation! 🎥✨ I worked on this project as part of a cutting-edge team 🔥, pushing the boundaries of video editing ✂️— all without supervised data. Can’t wait to show you what’s next! 🚀🎬
🎥 Today we’re premiering Meta Movie Gen: the most advanced media foundation models to date. Developed by AI research teams at Meta, Movie Gen delivers state-of-the-art results across a range of capabilities. We’re excited for the potential of this line of research to usher in…
💬 3 · 🔁 8 · ❤️ 47
We released 92 pages worth of detail including how to benchmark these models! Super critical for the scientific progress in this field :) We'll also release evaluation benchmarks next week to help the research community 💪
yes, Meta released a full scientific paper on MovieGen, with a lot of details that'll help the field move forward.
💬 10 · 🔁 35 · ❤️ 427