
Amir Bar (@_amirbar)
2K Followers · 11K Following · 50 Media · 590 Statuses
Postdoc at Meta (FAIR). Prev: PhD at TAU and Berkeley AI Research.
NYC · Joined March 2016
RT @9LdROhjZE56jSh9: 🚨 Excited to announce our ICCV 2025 Workshop: Reliable and Interactive World Model (RIWM 2025) — Call for Papers is no…
0 replies · 3 reposts · 0 likes
Check out PEVA 🌎, our recent attempt to build a world model for human body control.
What would a World Model look like if we started from a real embodied agent acting in the real world? It has to have: 1) a real, physically grounded, and complex action space, not just abstract control signals; and 2) diverse, real-life scenarios and activities. Or in short: it has to…
0 replies · 2 reposts · 27 likes
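The PEVA tweet above argues for a physically grounded action space. A minimal sketch of what that interface might look like, as an illustration only (not PEVA's actual architecture; the joint count, dimensions, and recurrent backbone are all assumptions): the action is a whole-body kinematic state rather than a discrete control token, and the model predicts the next egocentric frame conditioned on past frames plus that action.

```python
# Illustrative sketch only -- NOT PEVA's actual implementation.
# The "action" is a whole-body pose change (per-joint 3D rotation deltas),
# and the model predicts the next frame embedding from past frames + actions.
import torch
import torch.nn as nn

NUM_JOINTS = 24              # assumed: SMPL-style body with 24 joints
ACTION_DIM = NUM_JOINTS * 3  # per-joint axis-angle rotation deltas

class BodyConditionedWorldModel(nn.Module):
    def __init__(self, frame_dim=768, hidden=1024):
        super().__init__()
        self.action_proj = nn.Linear(ACTION_DIM, hidden)
        self.frame_proj = nn.Linear(frame_dim, hidden)
        self.dynamics = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, frame_dim)  # next-frame embedding

    def forward(self, frame_embs, actions):
        # frame_embs: (B, T, frame_dim) embeddings of past egocentric frames
        # actions:    (B, T, ACTION_DIM) body-pose deltas taken at each step
        x = self.frame_proj(frame_embs) + self.action_proj(actions)
        h, _ = self.dynamics(x)
        return self.head(h[:, -1])

model = BodyConditionedWorldModel()
pred = model(torch.randn(2, 8, 768), torch.randn(2, 8, ACTION_DIM))
print(pred.shape)  # torch.Size([2, 768])
```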
RT @chris_j_paxton: World models are such an interesting topic. Really fun discussion about how they can be used for navigation with @_amir…
0 replies · 5 reposts · 0 likes
RT @ericjang11: We've made substantial progress on our action-conditioned video generation model, aka the "1X World Model", and we show tha…
0 replies · 24 reposts · 0 likes
(Un)surprisingly, @ylecun didn't mind 😅 and neither did @trevordarrell, which was a bit reassuring. Anyway, it's nice to see the outcome is an award. If you're interested in hearing more, come tomorrow (Sat) to Oral Session 4B (ExHall A2, 1:00-2:15) and visit poster #396 (Hall D, 5-7pm).
1 reply · 0 reposts · 8 likes
NWM intersects with multiple communities (video generation, 3D vision, robotics, RL, representation learning), and it seemed to piss off everyone equally. I remember telling @GaoyueZhou and @dans_t123: "lower expectations, this paper is a 99% reject."
1 reply · 0 reposts · 10 likes
Navigation World Models won the Best Paper Honorable Mention Award at #CVPR2025 ☺️. It is my first postdoc paper since joining Yann's lab at @AIatMeta, so I am very excited. It was also extremely fun working with @GaoyueZhou, @dans_t123, @trevordarrell (and @ylecun). Fun story:
Congratulations to the #CVPR2025 Honorable Mentions for Best Paper! @GoogleDeepMind, @UCBerkeley, @UMich, @AIatMeta, @nyuniversity, @berkeley_ai, #AllenInstituteforAI, @UW, #UniversityCollegeLondon, @UniversityLeeds, @ZJU_China, @NTUsg, @PKU1898, @Huawei Singapore Research Center
26 replies · 20 reposts · 268 likes
heading to Nashville to attend @CVPR tomorrow. looking forward to meeting old & new friends and chatting about #WorldModels.
5 replies · 1 repost · 56 likes
RT @WilliamRudmanjr: When vision-language models answer questions, are they truly analyzing the image or relying on memorized facts? We int…
0 replies · 4 reposts · 0 likes
RT @geopavlakos: Make sure to check out Hanwen's @hanwenjiang1 latest work! 🚀 We introduce RayZer, a self-supervised model for novel view s…
0 replies · 5 reposts · 0 likes
Need a strong feature extractor for your upcoming NeurIPS paper? We got you 😉
We are open-sourcing all the models in Web-SSL, from ViT-L to ViT-7B! It was super fun to train and play with these massive ViTs. Models: GitHub: Huge credit to @DavidJFan for putting these models together!
0 replies · 0 reposts · 40 likes
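For anyone wanting to try the open-sourced Web-SSL ViTs as feature extractors, a minimal usage sketch under assumptions: the checkpoint id below is hypothetical, and the exact model names and loading instructions are in the actual release (the tweet's links were stripped by the scrape).

```python
# Minimal usage sketch -- the checkpoint id is a HYPOTHETICAL placeholder;
# consult the Web-SSL GitHub / Hugging Face release for the real names.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

CKPT = "facebook/webssl-vit-l"  # hypothetical id, see the actual release

processor = AutoImageProcessor.from_pretrained(CKPT)
model = AutoModel.from_pretrained(CKPT).eval()

img = Image.new("RGB", (224, 224))  # stand-in image
inputs = processor(images=img, return_tensors="pt")
with torch.no_grad():
    feats = model(**inputs).last_hidden_state  # (1, num_tokens, dim)
print(feats.shape)
```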
WORLDMEM: Adding memory to world models.
Thanks for sharing, @_akhaliq! For more information: 📜 ArXiv: 🤗 Hugging Face: 🌐 🧑‍💻 GitHub: 🚀 Demo:
0 replies · 0 reposts · 10 likes
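The general idea behind adding memory to a world model, sketched below as an illustration only (not WORLDMEM's actual code; the class, pose dimensionality, and retrieval rule are assumptions): keep a bank of previously generated frames keyed by the agent's state, and retrieve the closest entries to condition the next prediction so revisited places stay consistent.

```python
# Illustrative sketch of a frame-memory bank for a world model
# (NOT the WORLDMEM implementation).
import torch

class FrameMemory:
    def __init__(self):
        self.keys, self.values = [], []   # state keys, frame features

    def write(self, state, frame_feat):
        self.keys.append(state)
        self.values.append(frame_feat)

    def read(self, query_state, k=4):
        # retrieve the k stored frames whose states are closest to the query
        keys = torch.stack(self.keys)                    # (N, state_dim)
        dists = torch.cdist(query_state[None], keys)[0]  # (N,)
        idx = dists.topk(k=min(k, len(self.keys)), largest=False).indices
        return torch.stack([self.values[i] for i in idx])

mem = FrameMemory()
for _ in range(10):
    mem.write(torch.randn(7), torch.randn(256))  # 7-D pose, 256-D feature
context = mem.read(torch.randn(7))               # features to condition on
print(context.shape)  # torch.Size([4, 256])
```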
Excited to share that our paper on Navigation World Models was selected for an Oral presentation at CVPR! Code & models:
Happy to share our new work on Navigation World Models! 🔥🔥 Navigation is a fundamental skill of agents with visual-motor capabilities. We train a single World Model across multiple environments and diverse agent data. w/ @GaoyueZhou, Danny Tran, @trevordarrell and @ylecun.
3 replies · 7 reposts · 104 likes
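A conceptual sketch of the navigation-world-model interface described above, purely illustrative (not the released NWM code; the action parameterization and the MLP dynamics are assumptions): given the current observation embedding and a navigation action, predict the next observation embedding, and chain predictions to simulate candidate trajectories for planning.

```python
# Illustrative interface sketch -- NOT the released NWM architecture.
import torch
import torch.nn as nn

ACTION_DIM = 3  # assumed parameterization: (dx, dy, dyaw)

class NavigationWorldModel(nn.Module):
    def __init__(self, obs_dim=768, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + ACTION_DIM, hidden), nn.GELU(),
            nn.Linear(hidden, obs_dim),
        )

    def step(self, obs_emb, action):
        # predict the next observation embedding after taking `action`
        return self.net(torch.cat([obs_emb, action], dim=-1))

    @torch.no_grad()
    def rollout(self, obs_emb, actions):
        # simulate a trajectory by feeding predictions back in
        traj = []
        for a in actions:
            obs_emb = self.step(obs_emb, a)
            traj.append(obs_emb)
        return torch.stack(traj)

wm = NavigationWorldModel()
future = wm.rollout(torch.randn(768),
                    [torch.randn(ACTION_DIM) for _ in range(5)])
print(future.shape)  # torch.Size([5, 768])
```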
FAIR is probably the only lab outside of academia where research projects can start like this.
[7/8] This side project started in October when @TongPetersb, @_amirbar, and I were thinking about the rise of CLIP as a popular vision encoder for MLLMs. The community often assumes that language supervision is the primary reason for CLIP's strong performance. However, we…
3 replies · 6 reposts · 112 likes
Kudos to the authors @DavidJFan @TongPetersb @JiachenAI @koustuvsinha @liuzhuang1234 @endernewton, Michael Rabbat, Nicolas Ballas, @ylecun @sainingxie.
0 replies · 0 reposts · 3 likes
RT @DavidJFan: Can visual SSL match CLIP on VQA? Yes! We show with controlled experiments that visual SSL can be competitive even on OCR/C…
0 replies · 94 reposts · 0 likes