Shiran Zada Profile
Shiran Zada

@ShiranZada

Followers
131
Following
85
Media
11
Statuses
43

Google DeepMind

Joined January 2021
@ShiranZada
Shiran Zada
6 months
🚀 Excited to launch a powerful new capability in Flow: Style Ingredients — bring your favorite visual style into motion. Watch it in action 👇 #Veo #StyleTransfer #Flow #GoogleDeepMind #Filmmaking #VideoGeneration
1
4
18
@GoogleDeepMind
Google DeepMind
4 months
What if you could not only watch a generated video, but explore it too? 🌐 Genie 3 is our groundbreaking world model that creates interactive, playable environments from a single text prompt. From photorealistic landscapes to fantasy realms, the possibilities are endless. 🧵
831
3K
14K
@GoogleDeepMind
Google DeepMind
4 months
🔘 Long-horizon consistency Environments created remain largely consistent over several minutes, with visual memory extending as far as 1️⃣ minute in the past. This ability is critical to enable AI agents to learn about the world, and provides humans with an immersive
11
52
695
@inbar_mosseri
Inbar Mosseri
5 months
Excited to share that TokenVerse won the Best Paper Award at SIGGRAPH 2025! 🎉 TokenVerse enables personalization of complex visual concepts, from objects and materials to poses and lighting; each can be extracted from a single image and recomposed into a coherent result. 👇
9
22
210
@ShiranZada
Shiran Zada
5 months
Proud to be part of the amazing team that made it happen. MatanCohen, @kusichan @ShulAsaf OriKelner, @Yxp52492, @inbar_mosseri, AlexRavAcha, @talidekel, @YHoshen, @GarbuzItzhak, JessicaGallegos, @navinsarmaphoto, BarakMeiri, MichaelChang, @trippedout and many others!
1
0
0
@ShiranZada
Shiran Zada
5 months
From research to the red carpet 🎬 Our video object insertion tech is now in ANCESTRA, a film that premiered at Tribeca Festival! This is a huge milestone — turning cutting-edge AI into real cinematic storytelling. #VideoEditing #TribecaFestival #Veo #DeepMind #Filmmaking
@GoogleDeepMind
Google DeepMind
5 months
The first film from our partnership with @primordialsoup_ - a storytelling venture founded by visionary director Darren Aronofsky - is debuting at @Tribeca. Directed by Eliza McNitt, ANCESTRA uses traditional filmmaking alongside Veo, our generative video model. Take a look ↓
1
0
0
@ShiranZada
Shiran Zada
6 months
Proud to have built this with an outstanding team: @Roni_Paiss, @nikoskolot, @YuliaRubanova, @inbar_mosseri, @sserenazz, @tkipf, @acoadmarmon, @philipphenzler, Jieru Hu
0
0
2
@ShiranZada
Shiran Zada
6 months
Simply upload a reference image, and Flow will generate videos that match its look and feel — instantly. Whether it’s cinematic, artistic, or abstract style — your vision, now in motion. Go try it in Flow:
1
0
0
@ShiranZada
Shiran Zada
6 months
Adding objects is coming soon. Stay tuned!👀
@GoogleDeepMind
Google DeepMind
6 months
Add and remove objects 🚫 Insert or remove items or characters in your videos, all while matching the consistency and style of your scene. 🚀 We can remove a spaceship from the backdrop. 🦆 And add a rubber duck to a panning shot.
0
0
1
@ShiranZada
Shiran Zada
6 months
Is it just a dream? #Veo3
0
1
16
@ShiranZada
Shiran Zada
6 months
Want to control the characters, objects, and the scene? Simply use the reference-powered video generation tool.
1
0
2
@ShiranZada
Shiran Zada
6 months
Got a style image you love? Use it to generate videos that match its vibe.
0
0
5
@ShiranZada
Shiran Zada
6 months
Veo 3 now has sound and Veo 2 can control the style of the video given a reference image. Go try it on Flow: https://t.co/yi14Cb0UAX
@GoogleDeepMind
Google DeepMind
6 months
Video, meet audio. 🎥🤝🔊 With Veo 3, our new state-of-the-art generative video model, you can add soundtracks to clips you make. Create talking characters, include sound effects, and more while developing videos in a range of cinematic styles. 🧵
3
3
32
@ShiranZada
Shiran Zada
7 months
Excited to share our #SIGGRAPH2025 paper: #TokenVerse 🎉 A new way to personalize text-to-image models by combining objects, styles, poses, lighting — pulling each concept from a different image and blending them into a single generation.
@DanielGaribi
Daniel Garibi
7 months
Excited to share that "TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space" got accepted to SIGGRAPH 2025! It tackles disentangling complex visual concepts from as little as a single image and re-composing concepts across multiple images into a coherent
2
0
14
@hila_chefer
Hila Chefer
10 months
VideoJAM is our new framework for improved motion generation from @AIatMeta We show that video generators struggle with motion because the training objective favors appearance over dynamics. VideoJAM directly addresses this **without any extra data or scaling** 👇🧵
61
202
1K
@natanielruizg
Nataniel Ruiz
1 year
It's time to share something exciting that we've been working on at Google. ReCapture is a method to generate new versions of a user-provided video with customized camera trajectories. It's basically post-capture cinematography for all. Project links are below. 🧵
17
70
417
@JingweiMa2
Jingwei Ma
1 year
We are excited to introduce "VidPanos: Generative Panoramic Videos from Casual Panning Videos" VidPanos converts phone-captured panning videos into (fully playing) video panoramas, instead of the usual (static) image panoramas. Website: https://t.co/epKYFNuXGf Paper:
2
11
45
@hila_chefer
Hila Chefer
1 year
Introducing✨Still-Moving✨—our work from @GoogleDeepMind that lets you apply *any* image customization method to video models🎥 Personalization (DreamBooth)🐶stylization (StyleDrop) 🎨 ControlNet🖼️—ALL in one method! Plus… you can control the amount of generated motion🏃‍♀️ 🧵👇
5
68
295
@omer_tov
omer tov
2 years
Seeing the project we’ve been working so hard on getting well received by the community is so fulfilling! @omerbartal @hila_chefer Charles Herrmann @Roni_Paiss @ShiranZada @arielephrat @JunhwaHur Yuanzhen Li, Tomer Michaeli @oliver_wang2 @DeqingSun @talidekel @InbarMosseri
6
7
29
@_akhaliq
AK
2 years
TikTok presents Depth Anything Unleashing the Power of Large-Scale Unlabeled Data paper page: https://t.co/nLAcATJTUI demo: https://t.co/wCusdKMdc4 Depth Anything is trained on 1.5M labeled images and 62M+ unlabeled images jointly, providing the most capable Monocular Depth
35
393
2K