Franziska Meier
@_kainoa_
Followers: 1K · Following: 582 · Media: 13 · Statuses: 206
Research Scientist and Manager at @MetaAI (FAIR). My research focuses on Lifelong Learning for Robotics.
California, USA
Joined January 2019
FAIR robotics has released a large scale, high quality, human annotated dataset to foster research on scene understanding. We release 130k+ language annotations on existing (open-sourced) scene scans. Find the dataset here: https://t.co/oMWc32VrH9
0 · 3 · 9
I'm looking to hire a great infra or devops engineer who cares deeply about research productivity + team velocity. Think dev containers, fast build systems for monorepos. The metric of success is how fast a new joiner can ship a new feature to our prod robot fleet, or how fast
11 · 8 · 124
At #ICML2025, 16 Jul, 11 AM: We present Meta Locate 3D, a model for accurate object localization in 3D environments. Meta Locate 3D can help robots accurately understand their surroundings and interact more naturally with humans. Demo, model, paper: https://t.co/8ZhV21TDxq
5 · 15 · 54
TRI's latest Large Behavior Model (LBM) paper landed on arxiv last night! Check out our project website: https://t.co/n0qmDRivRH One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the
8 · 109 · 491
Join my team at @genesistxai ! 🧬 We're forging AI foundation models to unlock groundbreaking therapies for patients with severe diseases. We're hiring ML Scientists, Engineers, TPMs & Interns in foundation models, #LLMs, #RL, #diffusion models, and other cutting-edge areas of
genesis.ml
Genesis is full of multi-dimensional, curious, and open-minded people. Join us to be a part of a world-class team of innovators and drug hunters changing the way we find treatments for severe...
1 · 9 · 113
Our vision is for AI that uses world models to adapt in new and dynamic environments and efficiently learn new skills. We’re sharing V-JEPA 2, a new world model with state-of-the-art performance in visual understanding and prediction. V-JEPA 2 is a 1.2 billion-parameter model,
82 · 353 · 2K
Robots need touch in human-like hands to reach the goal of general manipulation. However, approaches today either don't use tactile sensing or use a specific architecture per tactile task. Can 1 model improve many tactile tasks? 🌟Introducing Sparsh-skin: https://t.co/DgTq9OPMap 1/6
3 · 57 · 237
The new @Tesla_Optimus release looks intriguing! The behavior of the robot definitely looks very smooth. But the messaging is confusing and I would love to get some clarity on (a) the type of video data used, (b) the amount of teleop needed & (c) the level of generality.
3 · 2 · 70
Introducing Meta Locate 3D: a model for accurate object localization in 3D environments. Learn how Meta Locate 3D can help robots accurately understand their surroundings and interact more naturally with humans. You can download the model and dataset, read our research paper,
33 · 213 · 1K
1/ Despite having access to rich 3D inputs, embodied agents still rely on 2D VLMs—due to the lack of large-scale 3D data and pre-trained 3D encoders. We introduce UniVLG, a unified 2D-3D VLM that leverages 2D scale to improve 3D scene understanding. https://t.co/DGGtYYPaQi
1 · 28 · 137
New work from the Robotics team at @AIatMeta . Want to be able to tell your robot to bring you the keys from the table in the living room? Try out Locate 3D! Interactive demo: https://t.co/aS9WPPmhcF Model, code & dataset: https://t.co/oMWc32VrH9
0 · 6 · 50
👀Interested in fairness in health applications? Check out our open Postdoc position on the topic. 🎯Deadline is 31st of October. 🗒️Apply here:
candidate.hr-manager.net
🎥Announcing the Fairness of AI in Medical Imaging (FAIMI) YouTube channel! 🎥 We are happy to share our YouTube channel ( https://t.co/gK6q38nvKF) which now contains videos and playlists from multiple FAIMI events. Thanks to Dewinda Julianensi Rumala and Emma Stanley for helping
0 · 1 · 3
We have an open postdoc position in the Embodied AI team at FAIR. Interested candidates with a strong track record of projects and publications in top-tier ML, vision, or robotics conferences are encouraged to apply using the link below. https://t.co/j8eBBHKrsn
0 · 10 · 52
I am looking for an intern for 2024 to work on the Cortex project in @AIatMeta 's Embodied AI team! Relevant skills include: experience with LLMs/VLMs, EAI simulators such as Habitat, and RL. DM or email me at mikaelhenaff [at] meta [dot] com ✨ #AI #InternshipOpportunity #LLM
3 · 17 · 78
A challenge in robotics research is the prevalence of misleading demos—those trained on test data, pre-scripted, or heavily tuned for a single environment. This can make genuinely innovative work seem less impressive, because people have already seen more 'impressive' demos in the past.
7 · 13 · 153
A nice way to end the year with some good news: 66 Good News Stories You Didn't Hear About in 2023
8 · 102 · 329