Amir Bar Profile
Amir Bar

@_amirbar

Followers: 2K · Following: 11K · Media: 50 · Statuses: 590

Postdoc at Meta (FAIR). Prev: PhD at TAU and Berkeley AI Research.

NYC
Joined March 2016
@_amirbar
Amir Bar
2 days
RT @9LdROhjZE56jSh9: 🚨 Excited to announce our ICCV 2025 Workshop: Reliable and Interactive World Model (RIWM 2025) — Call for Papers is no…
@_amirbar
Amir Bar
6 days
Check out PEVA 🌎, our recent attempt to build a world model for human body control.
@YutongBAI1002
Yutong Bai
7 days
What would a World Model look like if we start from a real embodied agent acting in the real world? It has to have: 1) A real, physically grounded and complex action space—not just abstract control signals. 2) Diverse, real-life scenarios and activities. Or in short: it has to…
@_amirbar
Amir Bar
12 days
RT @chris_j_paxton: World models are such an interesting topic. Really fun discussion about how they can be used for navigation with @_amir…
@_amirbar
Amir Bar
17 days
RT @ericjang11: We've made substantial progress on our action-conditioned video generation model, aka the "1X World Model", and we show tha…
@_amirbar
Amir Bar
20 days
(Un)surprisingly, @ylecun didn't mind 😅 and neither did @trevordarrell, which was a bit reassuring. Anyway--it's nice to see the outcome is an award. If you're interested in hearing more, come tomorrow (Sat) to Oral Session 4B (ExHall A2, 1:00-2:15) and visit poster #396 (Hall D, 5-7pm).
@_amirbar
Amir Bar
20 days
NWM intersects with multiple communities (video generation, 3D vision, robotics, RL, representation learning) and it seemed to piss off everyone equally. I remember telling @GaoyueZhou and @dans_t123: "lower expectations, this paper is a 99% reject".
@_amirbar
Amir Bar
20 days
Navigation World Models won the Best Paper Honorable Mention Award at #CVPR2025 ☺️. It is my first postdoc paper since joining Yann's lab at @AIatMeta, so I am very excited. It was also extremely fun working with @GaoyueZhou, @dans_t123, @trevordarrell (and @ylecun). Fun story:
@CVPR
#CVPR2025
20 days
Congratulations to the #CVPR2025 Honorable Mentions for Best Paper! @GoogleDeepMind, @UCBerkeley, @UMich, @AIatMeta, @nyuniversity, @berkeley_ai, #AllenInstituteforAI, @UW, #UniversityCollegeLondon, @UniversityLeeds, @ZJU_China, @NTUsg, @PKU1898, @Huawei Singapore Research Center
@_amirbar
Amir Bar
24 days
Heading to Nashville to attend @CVPR tomorrow. Looking forward to meeting old & new friends and chatting about #WorldModels.
@_amirbar
Amir Bar
1 month
RT @WilliamRudmanjr: When vision-language models answer questions, are they truly analyzing the image or relying on memorized facts? We int…
@_amirbar
Amir Bar
2 months
a NeurIPS 2025 nightmare ☠️
@_amirbar
Amir Bar
2 months
RT @geopavlakos: Make sure to check out Hanwen's @hanwenjiang1 latest work! 🚀 We introduce RayZer, a self-supervised model for novel view s…
@_amirbar
Amir Bar
2 months
Need a strong feature extractor for your upcoming NeurIPS paper? We got you 😉
@TongPetersb
Peter Tong
2 months
We are open-sourcing all the models in Web-SSL, from ViT-L to ViT-7B! It was super fun to train and play with these massive ViTs. Models: GitHub: Huge credit to @DavidJFan for putting these models together!
@_amirbar
Amir Bar
2 months
Our code & pretrained models:
@ylecun
Yann LeCun
3 months
New paper from FAIR+NYU: Q: Is language supervision required to learn effective visual representations for multimodal tasks? A: No. ⬇️⬇️⬇️
@_amirbar
Amir Bar
3 months
WORLDMEM: Adding memory to world models.
@zeqi_xiao
Zeqi Xiao
3 months
Thanks for sharing! @_akhaliq For more information: 📜 ArXiv: 🤗 Hugging Face: 🌐 🧑‍💻 GitHub: 🚀 Demo:
@_amirbar
Amir Bar
3 months
Excited to share that our paper on Navigation World Models was selected for an Oral presentation at CVPR! Code & models:
@_amirbar
Amir Bar
7 months
Happy to share our new work on Navigation World Models! 🔥🔥 Navigation is a fundamental skill of agents with visual-motor capabilities. We train a single World Model across multiple environments and diverse agent data. w/ @GaoyueZhou, Danny Tran, @trevordarrell and @ylecun.
@_amirbar
Amir Bar
3 months
RT @ylecun: New paper from FAIR+NYU: Q: Is language supervision required to learn effective visual representations for multimodal tasks?…
@_amirbar
Amir Bar
3 months
FAIR is probably the only lab outside of academia where research projects can start like this.
@DavidJFan
David Fan
3 months
[7/8] This side project started in October when @TongPetersb, @_amirbar, and I were thinking about the rise of CLIP as a popular vision encoder for MLLMs. The community often assumes that language supervision is the primary reason for CLIP's strong performance. However, we…
@_amirbar
Amir Bar
3 months
Kudos to the authors @DavidJFan @TongPetersb @JiachenAI @koustuvsinha @liuzhuang1234 @endernewton, Michael Rabbat, Nicolas Ballas, @ylecun @sainingxie.
@_amirbar
Amir Bar
3 months
CLIP is arguably the leading pretraining paradigm in computer vision. In a new preprint, we show that vision-only SSL models trained on web data can match CLIP on VQA tasks, despite not using language. Paper: Project Page:
@_amirbar
Amir Bar
3 months
RT @DavidJFan: Can visual SSL match CLIP on VQA? Yes! We show with controlled experiments that visual SSL can be competitive even on OCR/C…