Adam Polyak Profile
Adam Polyak

@adam_polyak90

Followers
164
Following
299
Media
2
Statuses
41

Joined November 2014
@adam_polyak90
Adam Polyak
1 year
Excited to share our progress on Movie Gen, a SOTA model for video generation! 🎥✨ I worked on this project as part of a cutting-edge team 🔥, pushing the boundaries of video editing ✂️— all without supervised data. Can’t wait to show you what’s next! 🚀🎬
@AIatMeta
AI at Meta
1 year
🎥 Today we’re premiering Meta Movie Gen: the most advanced media foundation models to-date. Developed by AI research teams at Meta, Movie Gen delivers state-of-the-art results across a range of capabilities. We’re excited for the potential of this line of research to usher in
3
8
47
@lipmanya
Yaron Lipman
5 months
**Transition Matching** is a new iterative generative paradigm that uses Flow Matching or AR models to transition between intermediate generation states, leading to improved generation quality and speed!
@shaulneta
Neta Shaul
5 months
[1/n] New paper alert! 🚀 Excited to introduce **Transition Matching (TM)**! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model🤯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
0
19
131
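The idea in the tweets above — replacing a fixed short-timestep update with a learned transition kernel between intermediate states — can be illustrated with a toy numpy sketch. Everything here is hypothetical and illustrative: `toy_transition` is a stand-in for the learned model, and the pull-toward-target dynamics are invented for the demo, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_transition(x, t, n_steps):
    # Stand-in for a learned transition model: pull the current state
    # toward a target mode with a bit of stochasticity (illustrative only).
    target = np.ones_like(x)
    alpha = 1.0 / (n_steps - t)  # step size grows as generation finishes
    return x + alpha * (target - x) + 0.01 * rng.standard_normal(x.shape)

def generate(shape, n_steps=8):
    """Iterative generation: each step samples from a transition kernel
    between intermediate states, rather than applying a fixed
    short-timestep flow/diffusion update."""
    x = rng.standard_normal(shape)  # start from pure noise
    for t in range(n_steps):
        x = toy_transition(x, t, n_steps)
    return x
```

The key design point versus plain Flow Matching is that each transition is itself generated by a model, so a single step can express a richer (non-Gaussian, multi-modal) jump between states.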
@Ahmad_Al_Dahle
Ahmad Al-Dahle
8 months
Introducing our first set of Llama 4 models! We’ve been hard at work doing a complete re-design of the Llama series. I’m so excited to share it with the world today and mark another major milestone for the Llama herd as we release the *first* open source models in the Llama 4
320
915
6K
@guy_yariv
Guy Yariv
9 months
I'm thrilled to announce that Through-The-Mask (TTM) has been accepted to #CVPR2025! TTM is an I2V generation framework that leverages mask-based motion trajectories to enhance object-specific motion and maintain consistency, especially in multi-object scenarios More details👇
@guy_yariv
Guy Yariv
11 months
[1/8] Recent work has shown impressive Image-to-Video (I2V) generation results. However, accurately articulating multiple interacting objects and complex motions remains challenging. In our new work, we take a step toward addressing this challenge.
7
7
44
@adam_polyak90
Adam Polyak
10 months
🚀 Introducing VideoJAM – a framework that instills a strong motion prior into any video model! By denoising an optical flow derivative alongside pixels, VideoJAM teaches models to generate coherent motion and physics with high-quality visuals. 📽️
@hila_chefer
Hila Chefer
10 months
VideoJAM is our new framework for improved motion generation from @AIatMeta. We show that video generators struggle with motion because the training objective favors appearance over dynamics. VideoJAM directly addresses this **without any extra data or scaling** 👇🧵
2
0
11
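The joint objective described above — denoising a motion signal alongside pixels — can be sketched in a few lines of numpy, using frame-to-frame differences as a crude stand-in for the optical-flow derivative. The function names and the λ weighting are hypothetical, for illustration only, not the paper's actual formulation.

```python
import numpy as np

def temporal_diff(video):
    # Crude motion proxy: frame-to-frame differences, shape (T-1, H, W).
    return video[1:] - video[:-1]

def joint_appearance_motion_loss(pred, target, lam=0.5):
    """Combined objective in the spirit of VideoJAM: penalize both
    pixel (appearance) error and motion (temporal-difference) error,
    so the model cannot score well on appearance alone."""
    appearance = np.mean((pred - target) ** 2)
    motion = np.mean((temporal_diff(pred) - temporal_diff(target)) ** 2)
    return appearance + lam * motion
```

The point of the extra term is exactly what Hila's thread argues: a plain pixel loss is weakly sensitive to motion errors, so adding an explicit motion term rebalances the objective toward dynamics.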
@giffmana
Lucas Beyer (bl16)
10 months
This is extremely cool! They find diffusion loss is not very sensitive to motion. Thus they fine-tune videogen models with additional explicit motion prediction, making the model generate much more coherent videos. Also, Hila has been doing consistently good work, follow her!
@hila_chefer
Hila Chefer
10 months
VideoJAM is our new framework for improved motion generation from @AIatMeta. We show that video generators struggle with motion because the training objective favors appearance over dynamics. VideoJAM directly addresses this **without any extra data or scaling** 👇🧵
6
22
277
@_akhaliq
AK
10 months
Meta just dropped VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models. Comparison with OpenAI Sora and Kling
17
127
684
@hila_chefer
Hila Chefer
10 months
VideoJAM is our new framework for improved motion generation from @AIatMeta. We show that video generators struggle with motion because the training objective favors appearance over dynamics. VideoJAM directly addresses this **without any extra data or scaling** 👇🧵
61
201
1K
@Or__Tal
Or Tal
11 months
Release Announcement!📢💣 🥁🎷JASCO 🎶🪇 training & inference code + model weights are out! Paper📜: https://t.co/exolVO1sJV Samples🔊: https://t.co/uPe7QPk9DB Code🐍: https://t.co/NvhKDFZGoU Models🤗: https://t.co/cEiw2nt41D @lonziks @itai_gat @FelixKreuk @adiyossLC
huggingface.co
5
20
71
@adam_polyak90
Adam Polyak
11 months
Great work on image-to-video generation led by the amazing @guy_yariv during his internship with our team 🖼️➡️👤➡️🎥 Paper: https://t.co/f46Y9mlIor page:
@guy_yariv
Guy Yariv
11 months
[1/8] Recent work has shown impressive Image-to-Video (I2V) generation results. However, accurately articulating multiple interacting objects and complex motions remains challenging. In our new work, we take a step toward addressing this challenge.
0
0
5
@guy_yariv
Guy Yariv
11 months
[1/8] Recent work has shown impressive Image-to-Video (I2V) generation results. However, accurately articulating multiple interacting objects and complex motions remains challenging. In our new work, we take a step toward addressing this challenge.
7
27
80
@imisra_
Ishan Misra
1 year
MovieGen powering Instagram's video editing features :)
theverge.com
A new background or a silly hat is just a text prompt away.
2
7
65
@dtrinh
Danny Trinh
1 year
VERY excited about the era of generative AR we're bringing to life. Check out this preview! It's early but so damn promising — this isn't "AI slop"... it's unlocking Creators' imaginations on their own videos. Change your wardrobe, scene, lighting etc. with little expertise. PS
24
18
215
@Andrew__Brown__
Andrew Brown
1 year
So how did we get to these amazing videos for Meta Movie Gen? One of the things I'm proudest of is that we released a very detailed technical report ( https://t.co/FU2PzloDhr…) Let's dive into a technical summary of what we did & learnt 🧵 1/n https://t.co/BJPvf7wC9v
ai.meta.com
Meta Movie Gen is our latest research breakthrough that allows you to use simple text inputs to create videos and sounds, edit existing videos or transform your personal image into a unique video.
@AIatMeta
AI at Meta
1 year
🎥 Today we’re premiering Meta Movie Gen: the most advanced media foundation models to-date. Developed by AI research teams at Meta, Movie Gen delivers state-of-the-art results across a range of capabilities. We’re excited for the potential of this line of research to usher in
25
158
1K
@jpineau1
Joelle Pineau
1 year
Sharing some of our latest work on generative AI! The video editing features and sound generation are especially exciting. And it comes with a full research paper.
@AIatMeta
AI at Meta
1 year
🎥 Today we’re premiering Meta Movie Gen: the most advanced media foundation models to-date. Developed by AI research teams at Meta, Movie Gen delivers state-of-the-art results across a range of capabilities. We’re excited for the potential of this line of research to usher in
3
13
91
@imisra_
Ishan Misra
1 year
We released 92 pages worth of detail including how to benchmark these models! Super critical for the scientific progress in this field :) We'll also release evaluation benchmarks next week to help the research community 💪
@soumithchintala
Soumith Chintala
1 year
yes, Meta released a full scientific paper on MovieGen, with a lot of details that'll help the field move forward.
10
35
427
@YKirstain
Yuval Kirstain
1 year
So proud to be part of the Movie Gen project, pushing GenAI boundaries! Two key insights: 1. Amazing team + high-quality data + clean, scalable code + general architecture + GPUs go brr = SOTA video generation. 2. Video editing *without* supervised data: train a *single* model
6
25
153
@ShellySheynin
Shelly Sheynin
1 year
I’m thrilled and proud to share our model, Movie Gen, that we've been working on for the past year, and in particular, Movie Gen Edit, for precise video editing. 😍 Look how Movie Gen edited my video!
@AIatMeta
AI at Meta
1 year
🎥 Today we’re premiering Meta Movie Gen: the most advanced media foundation models to-date. Developed by AI research teams at Meta, Movie Gen delivers state-of-the-art results across a range of capabilities. We’re excited for the potential of this line of research to usher in
56
91
828