Ainaz Eftekhar

@ainaz_eftekhar

Followers 332 · Following 379 · Media 17 · Statuses 40

Computer Science PhD @UW, Student Researcher @allen_ai

Joined September 2020
@ainaz_eftekhar
Ainaz Eftekhar
17 days
🎉 Excited to share that our paper “Convergent Functions, Divergent Forms” will be presented at NeurIPS 2025🤖 in San Diego! We present LOKI, a compute-efficient framework for co-evolving robot morphologies🦾 and control policies⚙️. LOKI discovers diverse, high-performing robot morphologies.
@ainaz_eftekhar
Ainaz Eftekhar
17 days
🎥 Check out the website + paper here ⬇️
🔗 https://t.co/GfD4KOt0Vp
🔗 https://t.co/plQCUmWwYN
A huge thank-you to my amazing collaborators: Hyeonseong Jeon, @aaronwalsman, @KuoHaoZeng, Ali Farhadi, and @RanjayKrishna.
@ainaz_eftekhar
Ainaz Eftekhar
17 days
🧵7/7 Across downstream tasks — agility, stability, manipulation — LOKI shows superior adaptability at both the morphology and policy levels. Top 1–10 LOKI morphologies achieve up to 2× higher rewards than DERL on bump and push-box incline tasks 💪
@ainaz_eftekhar
Ainaz Eftekhar
17 days
🧵6/7 LOKI discovers a wide spectrum of locomotion behaviors, maintaining both quality and diversity: quadrupeds, bipeds, crab-like, cheetah, spinner, crawler, and more — while traditional methods collapse to a single type due to global selection pressure and premature convergence.
@ainaz_eftekhar
Ainaz Eftekhar
17 days
🧵5/7 Results 📈: LOKI is far more sample-efficient:
- ~78% fewer simulation steps (20B → 4.6B)
- 40% fewer training FLOPs per design
- Explores ~780× more morphologies than prior evolution-based methods
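A quick arithmetic check on the quoted step reduction, as a minimal Python sanity check (numbers taken from the tweet):

```python
# Sanity check of the simulation-step reduction quoted above
prior_steps, loki_steps = 20e9, 4.6e9
reduction = (prior_steps - loki_steps) / prior_steps
print(f"{reduction:.0%}")  # 77%, consistent with the quoted ~78%
```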
@ainaz_eftekhar
Ainaz Eftekhar
17 days
🧵4/7 How it works:
1️⃣ LOKI clusters 500k morphologies in a learned latent space, grouping similar designs.
2️⃣ Each cluster trains a shared policy on a pool of elite designs, enabling efficient evaluation of new morphologies without retraining.
3️⃣ Morphologies and policies then co-evolve within each cluster. A rough sketch of steps 1️⃣–2️⃣ follows below.
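Here is that clustered-evaluation idea sketched in Python, with scikit-learn's KMeans standing in for the learned latent clustering; `train_shared_policy` and `evaluate` are hypothetical helpers, not the paper's code:

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_evaluation(latents, morphologies, n_clusters,
                         train_shared_policy, evaluate, n_elites=8):
    """Group similar designs, train one shared policy per cluster,
    then score any design with its cluster's policy (no retraining)."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(latents)
    shared_policies = {}
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        elites = members[:n_elites]  # placeholder for the paper's elite pool
        shared_policies[c] = train_shared_policy([morphologies[i] for i in elites])
    scores = [evaluate(m, shared_policies[labels[i]])
              for i, m in enumerate(morphologies)]
    return scores, shared_policies
```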
@ainaz_eftekhar
Ainaz Eftekhar
17 days
🧵3/7 We introduce LOKI (Locally Optimized Kinematic Instantiations) — an efficient framework for discovering diverse, high-performing morphologies that generalize to unseen tasks. Our key idea💡: reuse controllers + expand search efficiently through clustered co-evolution.
@ainaz_eftekhar
Ainaz Eftekhar
17 days
🧵2/7 Quality-Diversity (QD) algorithms help to avoid local optima by maintaining behavioral diversity — finding many distinct, high-performing solutions. But... diversity comes at the cost💸 of evaluating many more designs (using design-specific controllers).
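For context, a minimal MAP-Elites-style sketch (a canonical QD algorithm, not LOKI itself); `fitness`, `behavior_descriptor`, and `mutate` are placeholder callables. Note that every archive insertion costs a full design evaluation, which is exactly the expense the tweet points out:

```python
import random

def map_elites(seed_designs, iterations, fitness, behavior_descriptor, mutate):
    """MAP-Elites: keep the best design found for each behavior niche,
    preserving diversity alongside quality."""
    archive = {}  # behavior niche -> (fitness, design)

    def try_insert(design):
        niche = behavior_descriptor(design)  # e.g. gait type, body-size bin
        f = fitness(design)                  # one full controller evaluation
        if niche not in archive or f > archive[niche][0]:
            archive[niche] = (f, design)

    for d in seed_designs:
        try_insert(d)
    for _ in range(iterations):
        parent = random.choice(list(archive.values()))[1]
        try_insert(mutate(parent))           # each iteration = one more evaluation
    return archive
```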
@ainaz_eftekhar
Ainaz Eftekhar
17 days
🧵1/7 Brain–body co-design methods jointly optimize both control and design for a given task. They are typically framed as a bi-level optimization🧩: an outer loop searches for morphologies (using evolutionary strategies), while an inner loop trains a control policy for each candidate morphology.
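A minimal sketch of that bi-level structure, with hypothetical `train_policy`, `evaluate`, and `mutate` helpers (not the paper's code):

```python
import random

def co_design(population, generations, train_policy, evaluate, mutate):
    """Bi-level brain-body co-design: the outer loop evolves morphologies,
    the inner loop trains a fresh controller for every design."""
    for _ in range(generations):
        scored = []
        for morph in population:
            policy = train_policy(morph)  # inner loop: per-design controller
            scored.append((evaluate(morph, policy), morph))
        scored.sort(key=lambda s: s[0], reverse=True)
        elites = [m for _, m in scored[: max(1, len(scored) // 2)]]
        # outer loop: keep elites, refill with mutated offspring
        population = elites + [mutate(random.choice(elites))
                               for _ in range(len(population) - len(elites))]
    return population
```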
@allen_ai
Ai2
3 months
🤖✨ What if models that take action in the physical world could think through your instructions? Meet MolmoAct, our new fully open Action Reasoning Model (ARM) that does just that. 🧵
@ainaz_eftekhar
Ainaz Eftekhar
8 months
📢We're organizing a workshop at #RSS2025 on Mobile Manipulation—bringing together researchers and practitioners pushing the frontier of MoMA. Hope to see you in LA!
@DJiafei
Jiafei Duan
8 months
📢Excited to announce our #RSS2025 workshop: Mobile Manipulation: Emerging Opportunities & Contemporary Challenges in LA! 🤖🦿🚗 We’re bringing together leading voices from academia & industry to explore the frontier of MoMA—where mobility meets dexterity for real-world robots.
@ainaz_eftekhar
Ainaz Eftekhar
11 months
A huge thank-you to my amazing collaborators @KuoHaoZeng, @ehsanik, @rosemhendrix, @LucaWeihs, @anikembhavi, @RanjayKrishna, and others at @Ai2Prior @allen_ai.
@ainaz_eftekhar
Ainaz Eftekhar
11 months
RING is ready to navigate a wide range of robots straight out of the box☑️. We'll release our pretrained policies, making them accessible for deployment on your robots!🚀 👉 Project page: https://t.co/Rcjy7Ep5HP 📰 Arxiv:
arxiv.org: Modern robots vary significantly in shape, size, and sensor configurations used to perceive and interact with their environments. However, most navigation policies are embodiment-specific; a...
@ainaz_eftekhar
Ainaz Eftekhar
11 months
Embodiment-adaptive: RING dynamically adapts navigation strategies based on the robot's physical attributes (e.g., collider height).
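One standard way to achieve this kind of adaptivity is to feed the embodiment parameters to the policy as an extra input. A toy sketch (hypothetical, not RING's actual architecture):

```python
import torch
import torch.nn as nn

class EmbodimentConditionedPolicy(nn.Module):
    """Toy policy conditioned on embodiment attributes
    (e.g. collider height, camera pose) alongside observations."""
    def __init__(self, obs_dim, emb_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs, embodiment):
        # concatenate observation features with embodiment attributes
        return self.net(torch.cat([obs, embodiment], dim=-1))

# Usage: same weights, different robots, different behavior
policy = EmbodimentConditionedPolicy(obs_dim=512, emb_dim=4, n_actions=6)
```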
@ainaz_eftekhar
Ainaz Eftekhar
11 months
The result? Zero-shot generalization to unseen robots including Stretch RE-1, LoCoBot, Unitree Go1, and even humans! It competes with (and even surpasses) embodiment-specialized policies trained for individual robots.
@ainaz_eftekhar
Ainaz Eftekhar
11 months
RING is trained entirely in simulation with 1M+ diverse random embodiments (body size, camera settings, rotation pivot points) in the AI2-THOR simulator. 🏠📸
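The randomization might look roughly like this; the attribute names and ranges are illustrative guesses, not AI2-THOR's actual API:

```python
import random

def sample_random_embodiment():
    """Sample one random embodiment: body size, camera settings,
    and rotation pivot, per the axes named in the tweet."""
    return {
        "body_size":     [random.uniform(0.2, 1.5) for _ in range(3)],  # x, y, z (m)
        "camera_height": random.uniform(0.3, 1.8),                      # meters
        "camera_fov":    random.uniform(60.0, 120.0),                   # degrees
        "pivot_offset":  [random.uniform(-0.3, 0.3) for _ in range(2)], # rotation pivot
    }

embodiments = [sample_random_embodiment() for _ in range(5)]  # 1M+ during training
```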
@ainaz_eftekhar
Ainaz Eftekhar
11 months
Modern robots come in all shapes, sizes, and sensor configurations. Yet, most navigation policies are tailored to specific embodiments. A policy trained on one robot rarely generalizes well to others. What if one policy could work for all?🌍 RING is a universal navigation policy that does exactly that.
@ainaz_eftekhar
Ainaz Eftekhar
11 months
🎉 Excited to introduce "The One RING: a Robotic Indoor Navigation Generalist" – our latest work on achieving cross-embodiment generalization in robot visual navigation! 🤖🌍 RING is a universal navigation policy trained entirely in simulation on diverse, random embodiments at scale.
@ainaz_eftekhar
Ainaz Eftekhar
1 year
Molmo's here!🎉
@allen_ai
Ai2
1 year
Meet Molmo: a family of open, state-of-the-art multimodal AI models. Our best model outperforms proprietary systems, using 1000x less data. Molmo doesn't just understand multimodal data—it acts on it, enabling rich interactions in both the physical and virtual worlds. Try it
@ainaz_eftekhar
Ainaz Eftekhar
2 years
🎉 Very excited to present our recent work on “Selective🔍 Visual Representations for Embodied-AI🤖” next week at ICLR in Vienna🇦🇹!! 📣📣 Important update! Our code and pretrained models are now available through our project website 🌐: https://t.co/HLfTF9zFJo🚀 👋Come to my poster!
@ainaz_eftekhar
Ainaz Eftekhar
2 years
Embodied-AI 🤖 models employ general-purpose vision backbones such as CLIP to encode observations. How can we make visual perception more task-driven for embodied AI? We introduce a parameter-efficient approach that selectively filters visual representations for Embodied-AI tasks.
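To illustrate the general idea of task-conditioned filtering (a hypothetical sketch of the concept, not the paper's actual method): learn a small gate over frozen backbone features and keep only what the task needs.

```python
import torch
import torch.nn as nn

class TaskConditionedFilter(nn.Module):
    """Toy module: a learned gate over frozen CLIP features that
    keeps only task-relevant visual information."""
    def __init__(self, feat_dim, task_dim, hidden=128):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(feat_dim + task_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim), nn.Sigmoid(),
        )

    def forward(self, clip_features, task_embedding):
        # per-dimension mask in [0, 1], conditioned on the task
        mask = self.gate(torch.cat([clip_features, task_embedding], dim=-1))
        return clip_features * mask  # filtered representation
```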