
Meera Hahn
@MeeraHahn
Followers: 284
Following: 139
Media: 2
Statuses: 34
Research Scientist @GoogleAI · PhD in Computer Science @GeorgiaTech · Undergrad @EmoryUniversity
Atlanta, GA
Joined November 2019
Exciting new work from @sihyun_yu and our team at Google DeepMind! Memory-Augmented Latent Transformers (MALT) Diffusion, a new diffusion model specialized for long video generation!
arxiv.org
Diffusion models are successful for synthesizing high-quality videos but are limited to generating short clips (e.g., 2-10 seconds). Synthesizing sustained footage (e.g., over minutes) still...
3
16
113
RT @NithishKannen: Check out our tech report on proactive T2I agents that ask clarification questions to reduce uncertainty! With this ag…
0
22
0
RT @natanielruizg: legit treflip by Veo 2 with just one error (wheel inversion in the middle). legs move realistically for a treflip 🤯. ska…
0
6
0
RT @ziwphd: Many thanks to @MeeraHahn, Wenjun Zeng, Nithish Kannen, Rich Galt, Kartikeya Badola, @_beenkim for making this happen! Stay tu…
0
1
0
RT @ziwphd: Tired of endless prompt tweaking? We've released a tech report on proactive text-to-image agents powered by #Gemini @GoogleDeep…
0
5
0
RT @agrimgupta92: 6/ Finally, our model can be used to generate videos with consistent 3D camera motion.
0
16
0
RT @agrimgupta92: 2/ website: Our approach has two key design decisions. First, we use a causal encoder to compres…
0
13
0
RT @agrimgupta92: We introduce W.A.L.T, a diffusion model for photorealistic video generation. Our model is a transformer trained on image…
0
248
0
RT @ptsi: If you’re having trouble keeping up with Video AI😅, there have been 5 state-of-the-art generative video models released *in last…
0
671
0
Have you ever wondered about emergent intelligence in robotic agents? This work shows interesting emergent intelligence and behaviors in blind navigation agents! Blind agents learn maps as they navigate. This allows them to navigate as successfully as an agent with vision.
How do 'map-less' agents navigate? They learn to build implicit maps of their environment in their hidden state! We study 'blind' AI navigation agents and find the following 🧵.
0
1
15
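The "implicit map in hidden state" idea can be illustrated with a minimal sketch (this is not the paper's architecture; all names and dimensions below are hypothetical): a recurrent policy that receives only egomotion and goal-offset readings, so whatever map-like memory it builds must live in its recurrent hidden vector.

```python
import torch
import torch.nn as nn

class BlindNavAgent(nn.Module):
    """Sketch of a 'blind' navigation agent: no visual input, only
    low-dimensional egomotion + goal sensing. Any spatial memory the
    agent accumulates has to be encoded in the GRU hidden state h."""
    def __init__(self, obs_dim=4, hidden_dim=512, num_actions=4):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)  # embed (dx, dy, dtheta, dist-to-goal)
        self.rnn = nn.GRUCell(hidden_dim, hidden_dim)  # hidden state acts as the implicit map
        self.policy = nn.Linear(hidden_dim, num_actions)

    def forward(self, obs, h):
        x = torch.relu(self.encoder(obs))
        h = self.rnn(x, h)                  # fold the new observation into memory
        return self.policy(h), h            # action logits + updated memory

# Unrolling an episode: the hidden state is the only place where
# information about previously visited locations can persist.
agent = BlindNavAgent()
h = torch.zeros(1, 512)
for step in range(10):
    obs = torch.randn(1, 4)                # stand-in for real sensor readings
    logits, h = agent(obs, h)
    action = torch.argmax(logits, dim=-1)
```

Probing such a hidden state with a separately trained decoder is one way this line of work reads out whether map-like structure has emerged.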
RT @maxxu05: How can we fill in missing pulsative sensor data? Prior state-of-the-art fails in our novel setting, despite its well-defined…
0
11
0
RT @michellehuang42: i trained an ai chatbot on my childhood journal entries - so that i could engage in real-time dialogue with my "inner…
0
7K
0
RT @sstj389: Dense self-supervised learning from multiple 3D viewpoints → dense feature representations that generalize both to novel objec…
0
20
0
✨Transformer-based Localization from Embodied Dialog with Large-scale Pre-training✨ has been accepted as an oral at @aaclmeeting! w/ @RehgJim
1
5
19
RT @natanielruizg: Today, along with my collaborators at @GoogleAI, we announce DreamBooth! It allows a user to generate a subject of choic…
0
404
0
RT @sstj389: A new year, a new shameless twitter plug: Check out our Toys4K 3D object dataset. 4K instances, 105 categories, 15+ instances…
0
12
0
Excited to present our #neurips work NRNS!
#NeurIPS21 paper on No RL No Simulation (NRNS): Learning to Navigate without Navigating! NRNS not only beats RL/IL algorithms in simulation but when it comes to real-world… there is no sim2real required! Webpage: Code: (1/2)
0
3
9