Omar Alama عمر الأعمى
@OmarAlama
Followers: 217
Following: 2K
Media: 20
Statuses: 180
ECE Vision and Robot Perception PhD @CarnegieMellon @AirLabCMU
Joined July 2012
Want to push the online 🌎 understanding & search capabilities of robots? Introducing RayFronts 🌟→
💡 Semantics within & beyond depth sensing
🏃‍♂️ Online & real-time mapping
🔍 Querying with images & text
⚙️ Operating in any environment
https://t.co/n8B3FM0pOC
The trick →🧵👇
2
6
25
↗️ Many don't realize that RayFronts can operate at any depth range. Yes, even at 0 depth range‼️ We don't show this visual anywhere in the paper or website. We probably should. Catch our RayFronts presentation live at #IROS2025, Tuesday 16:45-16:50!
1
0
4
At first I thought ZED-X stereo depth was unusable for flying drones. The intense vibrations can make depth estimation go crazy 👎. But with aggressive voxel grid filtering, you can get something reasonable: not for fine reconstruction, but enough for autonomy 🤖 (sketch below). See you at #IROS2025!
0
0
3
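A minimal sketch of the kind of voxel-grid filtering described in the tweet above, assuming Open3D and a point cloud already back-projected from the stereo depth map. This is an illustrative recipe, not the author's actual pipeline, and the voxel size and outlier parameters are made-up values.

```python
import numpy as np
import open3d as o3d

def filter_stereo_cloud(points_xyz: np.ndarray,
                        voxel_size: float = 0.25,
                        nb_neighbors: int = 20,
                        std_ratio: float = 1.5) -> o3d.geometry.PointCloud:
    """Aggressively downsample and de-noise a vibration-corrupted stereo cloud."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)

    # Coarse voxel grid: each occupied voxel is replaced by the centroid of its
    # points, which averages out high-frequency depth jitter.
    pcd = pcd.voxel_down_sample(voxel_size=voxel_size)

    # Drop isolated speckles that survive the averaging.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=nb_neighbors,
                                            std_ratio=std_ratio)
    return pcd

# Synthetic noisy points standing in for a back-projected ZED-X frame.
pts = np.random.randn(50_000, 3) * np.array([5.0, 5.0, 0.2]) + np.array([0.0, 0.0, 3.0])
filtered = filter_stereo_cloud(pts)
print(len(filtered.points), "points after filtering")
```

The trade-off matches the tweet: a coarse voxel size throws away fine geometry but leaves a map stable enough for obstacle avoidance and planning.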
➡️RayFronts can be viewed as view-conditioned semantic frontiers, where the viewing angle determines the semantic feature. But I find rays more straightforward. I'll visualize the alternate interpretation someday. For now, getting ready to present RayFronts at #IROS2025 🔥!
0
0
2
⛔️ Stop throwing away far-range semantics; encode them as rays instead (toy sketch below)! 🔥 Excited to present RayFronts at #IROS2025 in Hangzhou, China! 🎥 Catch us in the live presentation next Tuesday, 16:45-16:50, Track 9.
0
2
6
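To make the "encode them as rays" idea concrete, here is a toy sketch rather than the RayFronts implementation: an observation whose depth exceeds the sensing range is stored as origin + unit bearing + semantic feature, while an in-range observation lands in a semantic voxel; a text or image query then ranks rays by cosine similarity. The feature vectors and the query embedding are placeholders for whatever language-aligned encoder is used.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class SemanticRay:
    origin: np.ndarray     # (3,) camera position in the world frame
    direction: np.ndarray  # (3,) unit bearing toward the observation
    feature: np.ndarray    # (D,) language-aligned feature of the pixel/segment

def add_observation(rays, voxels, origin, direction, depth, feature,
                    max_range=10.0, voxel_size=0.5):
    """In-range observations become localized voxels; far-range ones become rays."""
    direction = direction / np.linalg.norm(direction)
    if np.isfinite(depth) and depth <= max_range:
        point = origin + depth * direction
        voxels[tuple(np.floor(point / voxel_size).astype(int))] = feature
    else:
        rays.append(SemanticRay(origin, direction, feature))

def query_rays(rays, query_embedding, top_k=3):
    """Rank stored rays by cosine similarity to a text/image query embedding."""
    feats = np.stack([r.feature for r in rays])
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = feats @ q
    best = np.argsort(-scores)[:top_k]
    return [(rays[i], float(scores[i])) for i in best]
```

The returned bearings are exactly what a planner can use to head toward an object it has seen but never ranged, which is the behavior the announcement tweet calls "semantics beyond depth sensing".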
🐦‍⬛ Very excited for RAVEN, which truly supports long-term and long-range aerial outdoor object-goal navigation in unseen environments. 🔥 Check out Seungchan's post to see it in action!
We introduce RAVEN, a 3D open-set memory-based behavior tree framework for aerial outdoor semantic navigation. RAVEN not only navigates reliably toward detected targets, but also performs long-range semantic reasoning and LVLM-guided informed search.
0
0
1
We have completed the SLAM Handbook "From Localization and Mapping to Spatial Intelligence" and released it online: https://t.co/AnKa398nyw. The handbook will be published by Cambridge University Press. [1/n]
4
76
292
Could it be related to the #DINOv3 observation that longer training schedules push the visual features to lose spatial structure? 👀
0
0
0
Why do vision #LLM backbones produce spatially incoherent feature maps, while SSL and CLIP-like architectures have much better spatial features? 🤔 Features below extracted from #InternVL3 and visualized with PCA (generic sketch below).
1
0
0
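For anyone wanting to reproduce this kind of visualization, here is a generic sketch of PCA feature visualization; the InternVL3 feature extraction itself is not shown, and the grid size and feature dimension are placeholder values.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_feature_image(patch_feats: np.ndarray, grid_hw: tuple) -> np.ndarray:
    """Map (H*W, D) patch features to an (H, W, 3) pseudo-color image.

    patch_feats: per-patch tokens from any ViT-style backbone (extraction not shown).
    grid_hw:     (H, W) patch grid of the encoder, e.g. (32, 32) for a 448 px
                 image with 14 px patches.
    """
    h, w = grid_hw
    rgb = PCA(n_components=3).fit_transform(patch_feats)          # (H*W, 3)
    rgb = (rgb - rgb.min(0)) / (rgb.max(0) - rgb.min(0) + 1e-8)   # per-channel [0, 1]
    return rgb.reshape(h, w, 3)

# Random features standing in for real backbone outputs.
vis = pca_feature_image(np.random.randn(32 * 32, 1024), (32, 32))
```

Spatially coherent features show up as smooth color regions that follow objects; incoherent ones look like noise, which is the contrast the tweet is pointing at.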
"When a measure becomes a target, it ceases to be a good measure" Goodhart's law. Seems more and more true with these LLM and LVLM benchmarks. Higher numbers don't always reflect real world performance.
0
0
4
Want to learn how to empower 🤖 with real-time scene understanding and exploration capabilities? Catch me, @hocherie1 & @QiuYuhengQiu presenting RayFronts at the #RSS2025 SemRob Workshop (OHE 122) & Epstein Plaza at 10:00 am PST today! https://t.co/yE90CQVU4y
0
4
13
I was waiting for the AI to make a mistake the whole time, and was shocked by the quality. It was even simplifying new concepts introduced in our paper with analogies. Really impressive tool, @NotebookLM. Listen to the full podcast here: https://t.co/cDmrqKuwHl
Very surprised by the quality of podcast-style overviews generated by @NotebookLM. The RayFronts team tried them out and we were amazed by the quality and accuracy of the explanation. Some couldn't tell it was AI-generated. Should the AirLab start its own podcast channel? 📷
0
0
6
RayFronts code has been released! https://t.co/wecp43Gx8l
🤖 Guide your robot with semantics within & beyond depth.
🖼️ Stop using slow SAM crops + CLIP pipelines. RayFronts gets dense language-aligned features in one forward pass.
🚀 Test your mapping ideas in our pipeline!
github.com
[IROS 2025] Source code for "RayFronts: Open-Set Semantic Ray Frontiers for Online Scene Understanding and Exploration" - RayFronts/RayFronts
0
6
15
#ICRA2025 alert! 🚨🥳 Congratulations to Yuheng Qiu, Yutian Chen, Zihao Zhang, Wenshan Wang, and Sebastian Scherer on winning the Best Conference Paper Award for "MAC-VO: Metrics-Aware Covariance for Learning-Based Stereo Visual Odometry"! #CMUrobotics
https://t.co/qkGCHOA8dM
4
14
90
🚀 Thrilled to present ViSafe, a vision-only airborne collision avoidance system that achieved drone-to-drone avoidance at 144 km/h. In an era of congested airspace and growing autonomy, reliable self-separation is paramount 🧵👇
3
16
64
A recurring pattern: people discover very simple questions that language models like ChatGPT fail to answer. The companies behind the models take notice and retrain the model to answer them correctly. People are impressed and say "now it understands." And the cycle repeats. The reality is that, so far, language models still fit the saying "memorized, not understood."
At what point should it be reasonable to expect coherent answers to these? How far beyond PhD-level reasoning must we climb?
1
1
4
SIGLIP wins over CLIP even in dense tasks like zero-shot open-vocab semantic segmentation on Replica. Using the RayFronts encoder (NA attention + RADIO @PavloMolchanov + SIGLIP @giffmana) projection to the CLS token gives you SoTA performance. No more SAM+crop+CLIP business (sketch below).
1
6
34
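A hedged sketch of what "no SAM+crop+CLIP" looks like in practice, assuming dense per-patch features already projected into the same embedding space as the text encoder (the RADIO/SIGLIP encoder itself is assumed, not reproduced): every patch is labeled by its most similar class prompt, so segmentation falls out of a single forward pass plus one matrix multiply.

```python
import numpy as np

def zero_shot_segment(dense_feats: np.ndarray, text_embeds: np.ndarray) -> np.ndarray:
    """Per-patch zero-shot open-vocabulary segmentation by cosine similarity.

    dense_feats: (H, W, D) patch features already projected into the text
                 embedding space (encoder assumed, not shown).
    text_embeds: (C, D) embeddings of C class prompts such as "a photo of a chair".
    Returns an (H, W) array of class indices.
    """
    h, w, d = dense_feats.shape
    f = dense_feats.reshape(-1, d)
    f = f / np.linalg.norm(f, axis=1, keepdims=True)
    t = text_embeds / np.linalg.norm(text_embeds, axis=1, keepdims=True)
    sims = f @ t.T                       # (H*W, C) cosine similarities
    return sims.argmax(axis=1).reshape(h, w)

# One image forward pass yields dense_feats; no per-segment crops are needed.
seg = zero_shot_segment(np.random.randn(32, 32, 768), np.random.randn(20, 768))
```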
My first research paper of my PhD journey. The idea of the project, simply put, is to give a robot or drone the ability to understand everything around it in any environment, so you can ask the robot, with any text or any image, about anything it has seen. If the object is within its depth-sensing range, it can localize it precisely; if it is farther away, it gives its direction.
6
3
21
Getting tired of visual language navigation in indoor environments? Check out RayFronts 🏹, open-set semantic mapping that also handles observations beyond depth-sensing range.
0
2
7
Super excited about this work pushing the boundary of online semantic mapping!! One step closer to making robots see the world 🌎🚀 Check out Omar's thread for a lot more eye candy 🤩 and impressive results!
0
2
15