
Xavier Puig
@xavierpuigf
Followers 1K · Following 973 · Media 41 · Statuses 288
Research Scientist at FAIR @AIatMeta working on EmbodiedAI | PhD @MIT_CSAIL
San Francisco, CA
Joined October 2011
Thrilled to announce Habitat 3.0, an Embodied AI simulator to study human-robot interaction at scale! Habitat 3.0 is designed to train and evaluate agents that perform tasks alongside humans. It includes:
- Humanoid simulation
- Human interaction tools
- Multi-agent benchmarks
1/6
Today we’re announcing Habitat 3.0, Habitat Synthetic Scenes Dataset and HomeRobot — three major advancements in the development of social embodied AI agents that can cooperate with and assist humans in daily tasks. More details on these announcements ➡️
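For readers new to Habitat, simulators in this line of work expose a gym-style episode loop, roughly as in the minimal sketch below. The config path is a placeholder, not a confirmed Habitat 3.0 config name.

```python
import habitat

# Gym-style episode loop with habitat-lab; "benchmark/multi_agent/example.yaml"
# is a placeholder path, not a confirmed Habitat 3.0 config.
env = habitat.Env(config=habitat.get_config("benchmark/multi_agent/example.yaml"))
observations = env.reset()
while not env.episode_over:
    # Random placeholder policy; a trained agent would map observations to actions.
    observations = env.step(env.action_space.sample())
env.close()
```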
Check out our workshop on Continual Robot Learning from Humans at #RSS2025, with amazing speakers covering topics including learning from human visual demonstrations, generative models for continual robot learning, and the role of LLMs in embodied contexts.
The #RSS2025 Workshop on Continual Robot Learning from Humans is happening on June 21. We have an amazing lineup of speakers discussing how we can enable robots to acquire new skills and knowledge from humans continuously. Join us in person and on Zoom (info on our website)!
RT @YXWangBot: 🤖 Do VLA models really listen to language instructions? Maybe not 👀
🚀 Introducing our RSS paper: CodeDiffuser -- using VLM…
RT @tianminshu: 🚀 Excited to introduce SimWorld: an embodied simulator for infinite photorealistic world generation 🏙️ populated with diver…
I will be talking at the #CVPR2025 workshop on Humanoid Agents, tomorrow June 11th at 9:30 am. I will discuss how humanoid agents can help us improve human-robot collaboration. See you there!
RT @RoozbehMottaghi: I'll be giving two talks at the #CVPR2025 workshops: 3D LLM/VLA and POETS.
3d-llm-vla.github.io
Bridging Language, Vision and Action in 3D Environments. Join us at CVPR 2025 in Nashville, TN, USA to explore the integration of language and 3D perception.
RT @ZhaoMandi: DexMachina lets us perform a functional comparison between different dexterous hands: we evaluate 6 hands on 4 challenging l…
I will be at ICLR to present PARTNR. Reach out if you want to talk about our work at FAIR or interesting problems in Robotics!
We released PARTNR, the largest benchmark to study human-robot collaboration in households, with 100K+ natural language tasks!
PARTNR tests agents on key capabilities including:
🔍 Perceiving dynamic environments
🎯 Task planning and skill execution
🤝 Coordination with humans
RT @RamRamrakhya: 🚨New Preprint 🚨
Embodied agents 🤖 operating in indoor environments must interpret ambiguous and under-specified human in…
How do we enable agents to perform tasks even when those tasks are underspecified? In this work, led by @RamRamrakhya, we train VLA agents via RL to decide whether to act in the environment or ask clarifying questions, enabling them to handle ambiguous instructions.
🚨New Preprint 🚨
Embodied agents 🤖 operating in indoor environments must interpret ambiguous and under-specified human instructions. A capable household robot 🤖 should recognize ambiguity and ask relevant clarification questions to infer the user's intent accurately, leading…
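The core trade-off can be pictured with a toy sketch (my illustration, not the paper's code or reward design): give the action space an explicit "ask" action with a small penalty, so asking pays off exactly when acting on an ambiguous instruction is risky.

```python
import random

ASK_PENALTY = -0.1   # small cost per clarifying question (assumed value)
WRONG_GUESS = -1.0   # acting on the wrong interpretation (assumed value)
SUCCESS = 1.0

def episode(ask_prob: float, ambiguous: bool) -> float:
    """One toy episode: asking resolves ambiguity at a small cost;
    acting on an ambiguous instruction succeeds only half the time."""
    reward = 0.0
    if ambiguous and random.random() < ask_prob:
        reward += ASK_PENALTY
        ambiguous = False
    reward += SUCCESS if (not ambiguous or random.random() < 0.5) else WRONG_GUESS
    return reward

# Monte-Carlo comparison on ambiguous instructions: never-ask vs always-ask.
for p in (0.0, 1.0):
    avg = sum(episode(p, True) for _ in range(10_000)) / 10_000
    print(f"ask_prob={p}: avg reward ≈ {avg:.2f}")  # ≈ 0.00 vs ≈ 0.90
```

Under these toy numbers the always-ask policy dominates on ambiguous instructions; an RL-trained agent learns to ask only when needed, since the penalty makes questions wasteful on clear instructions.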
RT @chuanyang_jin: How to achieve human-level open-ended machine Theory of Mind?
Introducing #AutoToM: a fully automated and open-ended To…
RT @AIatMeta: Meta PARTNR is a benchmark for planning and reasoning in embodied multi-agent tasks. This large-scale human and robot collabo…
RT @SniperPaper: The trained policy can be integrated with a high-level planner for real-world applications. By combining our object manipu…
🪑How do you train robots to move furniture? This requires robots to synchronize whole-body movements, making teleoperation or RL approaches challenging. Check out this amazing work by @SniperPaper, using human demonstrations to train robots to move furniture in the real world!
We've seen robots move like our favorite athletes. We've watched them fold clothes and do the dishes. Now, it's time for robots to help you move furniture. Introducing RobotMover—a learning framework that enables robots to acquire object-moving skills from human demonstrations.
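As a hedged aside on the general recipe (this is generic behavior cloning, not RobotMover's actual method or data): learning from demonstrations can be as simple as fitting a policy to recorded (state, command) pairs.

```python
import numpy as np

# Synthetic stand-in for demonstration data; a real pipeline would log
# robot/object pose features and the demonstrated whole-body commands.
rng = np.random.default_rng(0)
states = rng.normal(size=(500, 12))   # hypothetical state features
expert_W = rng.normal(size=(12, 6))   # hidden "expert" mapping
actions = states @ expert_W           # demonstrated commands

# Least-squares behavior cloning: argmin_W ||states @ W - actions||^2
W, *_ = np.linalg.lstsq(states, actions, rcond=None)
policy = lambda s: s @ W

print(np.allclose(policy(states), actions, atol=1e-6))  # True: clones the expert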
RT @ChongZitaZhang: This is the mobile manipulation I want to see. You can only get this via RL.
We also release the largest dataset of real humans doing tasks in PARTNR, both alone and in collaboration with other humans. We hope our work enables progress in human-robot collaboration in such complex scenarios.
Paper:
Code:
github.com
A repository accompanying the PARTNR benchmark for using Large Planning Models (LPMs) to solve Human-Robot Collaboration or Robot Instruction Following tasks in the Habitat simulator. - facebookres...
We released PARTNR, the largest benchmark to study human-robot collaboration in households, with 100K+ natural language tasks!
PARTNR tests agents on key capabilities including:
🔍 Perceiving dynamic environments
🎯 Task planning and skill execution
🤝 Coordination with humans
Meta PARTNR is a research framework supporting seamless human-robot collaboration. Building on our research with Habitat, we’re open sourcing a large-scale benchmark, dataset and large planning model that we hope will enable the community to effectively train social robots.
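As a rough picture of what a benchmark at this scale implies (the field names and API below are my illustration, not the released PARTNR schema):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CollabTask:
    instruction: str   # natural-language task description
    scene_id: str      # household scene identifier

def success_rate(solve: Callable[[CollabTask], bool], tasks: List[CollabTask]) -> float:
    """Aggregate metric: fraction of tasks the agent completes."""
    return sum(solve(t) for t in tasks) / len(tasks)

# Stub agent that "solves" everything, just to exercise the loop.
tasks = [CollabTask("Put the dishes in the sink", "scene_0001"),
         CollabTask("Help me move the chairs to the patio", "scene_0002")]
print(success_rate(lambda t: True, tasks))  # -> 1.0
```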
RT @jiaman01: 🤖 Introducing Human-Object Interaction from Human-Level Instructions! First complete system that generates physically plausib…
RT @ManlingLi_: [NeurIPS D&B Oral] Embodied Agent Interface: Benchmarking LLMs for Embodied Agents. A single line of code to evaluate your…
RT @AIatMeta: Additionally, looking towards the future, we’re releasing PARTNR: a benchmark for Planning And Reasoning Tasks in humaN-Robot…