Ariel Shaulov
@ariel__shaulov
Followers: 30 · Following: 15 · Media: 0 · Statuses: 7
MSc @ TAU | AI researcher @ Mentee Robotics
Joined July 2023
Update: Our paper “FlowMo: Variance-Based Flow Guidance for Coherent Motion in Video Generation” is accepted to #NeurIPS2025! Paper: https://t.co/Gy89h76X1s Project page:
In response to questions from our previous tweet, we are sharing a behind-the-scenes view of the same task. This video shows the MenteeBot’s head camera view in the top-left, along with its “thoughts” and decision-making output in the bottom-left, offering a direct view of the …
Thrilled to share that two papers got into #NeurIPS2025 🎉 ✨ FlowMo (my first last-author paper 🤩) ✨ Revisiting LRP I’m immensely proud of the students, who not only led great papers but also grew and developed so much throughout the process 👇
Beyond excited to share FlowMo! We found that the latent representations of video models implicitly encode motion information, and can guide the model toward coherent motion at inference time. Very proud of @ariel__shaulov @itayhzn for this work! Plus, it’s open source! 🥳
Exciting news from #ICML2025 & #ICCV2025 🥳 - 🥇 VideoJAM accepted as *oral* at #ICML2025 (top 1%) - Two talks at #ICCV2025 ☝️interpretability in the generative era ✌️video customization - Organizing two #ICCV2025 workshops ☝️structural priors for vision ✌️long video gen 🧵👇
VideoJAM is our new framework for improved motion generation from @AIatMeta We show that video generators struggle with motion because the training objective favors appearance over dynamics. VideoJAM directly addresses this **without any extra data or scaling** 👇🧵
FlowMo Variance-Based Flow Guidance for Coherent Motion in Video Generation
🚀 Just dropped our latest work: “FlowMo – Variance-Based Flow Guidance for Coherent Motion in Video Generation” 🎥✨ 📝 https://t.co/Gy89h76X1s
#AI #VideoGeneration #DiffusionModels
🧵1/ Text-to-video models generate stunning visuals, but… motion? Not so much. You get extra limbs, objects popping in and out... In our new paper, we present FlowMo -- an inference-time method that reduces temporal artifacts without retraining or architectural changes. 👇
🚀 Introducing Diffusion-Based Attention Warping for Consistent 3D Scene Editing – a method that ensures view-consistent 3D edits from a single reference image, done in collaboration with @liorwolf 🌐 Project page: https://t.co/FW9xovC1px 📄 Paper: https://t.co/R1RQKP3nGI 1/5