Dexmal (@Dexmal_AI)
188 Followers · 1 Following · 5 Media · 22 Statuses
Build Intelligent, Useful and Trustworthy Robots to Make Our Life Better
Joined September 2025
(1/N) 💡Introducing Dexbotic, an open-source, PyTorch-based toolbox for Vision-Language-Action (VLA) models. 🚀 Built for embodied AI researchers, it delivers a unified, end-to-end codebase to accelerate VLA development and evaluation.
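A "unified codebase" for many VLA policies usually means a shared policy interface that the training and evaluation loops program against. Here is a minimal sketch of that idea; the class and method names are illustrative assumptions, not Dexbotic's actual API.

```python
from abc import ABC, abstractmethod
from typing import List

class VLAPolicy(ABC):
    """Hypothetical shared interface so experiments can swap policies freely."""
    @abstractmethod
    def predict_action(self, image, instruction: str) -> List[float]:
        ...

class DummyPi0Style(VLAPolicy):
    """Stand-in policy for illustration; returns a fixed 7-DoF action."""
    def predict_action(self, image, instruction: str) -> List[float]:
        return [0.0] * 7

def run_episode(policy: VLAPolicy, instruction: str, steps: int = 3):
    # The rollout loop depends only on the shared interface, not on the
    # concrete policy, which is what makes policies interchangeable.
    return [policy.predict_action(image=None, instruction=instruction)
            for _ in range(steps)]

actions = run_episode(DummyPi0Style(), "pick up the red block")
print(len(actions), len(actions[0]))  # 3 7
```

The same `run_episode` would work unchanged for any other policy implementing the interface, which is the property a unified toolbox is after.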
Dexmal was thrilled to be a Gold Partner at #COSCon2025. We shared our Full-Stack Open-Source solutions in Embodied AI software, open hardware, and @RoboChallengeAI. Big thanks to the community. Let’s keep pushing the boundaries of open-source robotics! 🚀 #Dexmal #EmbodiedAI
🌍Introducing RAGNet: A massive benchmark & framework for reasoning-based robotic grasping. RAGNet tackles open-world data scarcity with: 🖼️ 273k images (Wild/Robot/Sim) 🧠 26k functional instructions (e.g., "something to drink") Check out our paper: https://t.co/GYBHbkmocJ
arxiv.org
General robotic grasping systems require accurate object affordance perception in diverse open-world scenarios following human instructions. However, current studies suffer from the problem of...
🤖Most robots get confused when objects change scale or viewpoint. Not GeoVLA. By explicitly modeling 3D geometry alongside visual semantics, our new framework adapts seamlessly to the physical world.🌍 🔗 Explore the work:
Dexmal presents ManiAgent: An agentic architecture for general robotic manipulation. By leveraging multi-agent collaboration, ManiAgent crushes long-horizon tasks: 📈 86.8% on SimplerEnv 🦾 95.8% real-world success Check out the code 👇 https://t.co/6FwKkWEBsl
📢 RoboChallenge Committee Established! To bridge the gap to Action Intelligence, we need real benchmarks. 🦾 Partners: @Dexmal_AI, @huggingface, BAAI, @AgiBot_zhiyuan, @GalaxeaDynamics, @XSquareRobot, Qwen, @gosimfoundation & others. Defining the standard for real-world robot eval. 🌐
How to make robotic manipulation robust against noisy depth data? 🤔 Meet SpatialActor (AAAI26 Oral 🌟). Our method is a disentangled framework that explicitly decouples semantics and geometry. The result? Smarter, more reliable robots🤖 Check it out 👇
arxiv.org
Robotic manipulation requires precise spatial understanding to interact with objects in the real world. Point-based methods suffer from sparse sampling, leading to the loss of fine-grained...
How to enable robots to have memory? We introduce MemoryVLA: a Cognition-Memory-Action framework boosting long-horizon robotic manipulation by modeling temporal context with working memory and PCMB. Check our paper for details: https://t.co/GQftpxzXmd
arxiv.org
Temporal context is essential for robotic manipulation because such tasks are inherently non-Markovian, yet mainstream VLA models typically overlook it and struggle with long-horizon, temporally...
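The core intuition, that a non-Markovian task needs the policy to condition on recent history rather than a single frame, can be illustrated with a toy bounded working memory. This is a deliberate simplification for illustration, not the paper's PCMB mechanism.

```python
from collections import deque

class WorkingMemory:
    """Toy bounded memory: keeps the last k observation summaries so a
    policy can condition on temporal context instead of only the current
    frame (a simplification of the working-memory idea)."""
    def __init__(self, capacity: int = 4):
        self.buf = deque(maxlen=capacity)  # old entries drop off automatically

    def update(self, obs_summary):
        self.buf.append(obs_summary)

    def context(self):
        # What the policy would see at this step: current + recent past.
        return list(self.buf)

mem = WorkingMemory(capacity=3)
for t in range(5):
    mem.update(f"obs_{t}")
print(mem.context())  # ['obs_2', 'obs_3', 'obs_4']
```

The bounded capacity is the key design choice: it keeps inference cost constant per step while still exposing enough history to disambiguate long-horizon tasks.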
🤖We've released RT-VLA, our optimized inference code for the Pi0 model by @physical_int, achieving up to 30 fps on a single RTX 4090! Check it out on GitHub:
github.com
Running VLA at 30Hz frame rate and 480Hz trajectory frequency - Dexmal/realtime-vla
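The repo tagline pairs a 30 Hz frame rate with a 480 Hz trajectory frequency. Assuming the standard action-chunking setup (each inference emits a chunk of low-level actions that covers the gap until the next inference), the numbers imply a 16-action chunk and a ~33 ms latency budget per inference:

```python
frame_rate_hz = 30    # VLA inference calls per second (per the repo tagline)
trajectory_hz = 480   # low-level action commands per second

# Under action chunking, each inference must supply enough actions to
# cover the interval until the next inference completes.
actions_per_chunk = trajectory_hz // frame_rate_hz
print(actions_per_chunk)  # 16

# Per-inference latency budget required to sustain the frame rate:
budget_ms = 1000 / frame_rate_hz
print(round(budget_ms, 1))  # 33.3
```

So "30 fps on a single RTX 4090" amounts to fitting the full VLA forward pass inside roughly 33 ms, with each pass amortized over 16 trajectory steps.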
Thrilled to see Hugging Face co-founder Thomas Wolf @Thom_Wolf visiting the @RoboChallenge_AI booth at #IROS2025! Great discussions with @Dexmal_AI’s Co-founder Fan Haoqiang on the future of #RoboChallenge. Exciting steps ahead for the robotics community! 🤖🚀
(11/N) 📩Join the community. Code, test, contribute. As Linus Torvalds said: “Software evolution requires collective wisdom.” Let’s build the future of embodied AI together. Discord: https://t.co/KB23EbZp0E
discord.com
Check out the Dexbotic community on Discord: meet its roughly 28 members and enjoy free voice and text chat.
(10/N)🎯Our goal with Dexbotic is to build the foundational layer for general-purpose robot intelligence. Hugging Face: https://t.co/V8kz0OkfqO
huggingface.co
(9/N) 🔅We’re committed to expanding the Dexbotic ecosystem—integrating more base models, sim2real tools, and real-world deployment support. GitHub: https://t.co/3N4A23ikuf
github.com
Dexbotic: Open-Source Vision-Language-Action Toolbox - Dexmal/dexbotic
(8/N) 🎻Key features of DOS-W1: 🔘Fully open-source hardware design. 🔘Extensive quick-release, modular, and replaceable components. 🔘Low cost. 🔘Ergonomic design tailored to data collectors to reduce fatigue.
(7/N) 🤖We also offer our first robot product: Dexbotic Open Source W1 (DOS-W1). Integrating hardware design with embodied intelligence, DOS-W1 is not just an execution terminal but an open-source intelligent platform.
(6/N) Compatible with popular VLA policies, including: 🔘Pi0 🔘OpenVLA-OFT 🔘CogACT 🔘MemoryVLA 🔘MUVLA 🔘…and growing
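Supporting a growing list of policies under one roof is typically done with a registry: each policy registers itself under a name, and experiments pick one by string. A toy sketch of the pattern; the names below are illustrative, not Dexbotic's actual registry API.

```python
# Hypothetical policy registry, a common pattern in toolboxes that host
# many interchangeable policies.
POLICY_REGISTRY = {}

def register_policy(name):
    """Decorator that records a policy class under a lookup name."""
    def wrap(cls):
        POLICY_REGISTRY[name] = cls
        return cls
    return wrap

@register_policy("pi0")
class Pi0Stub:
    pass

@register_policy("memoryvla")
class MemoryVLAStub:
    pass

def build_policy(name):
    # Experiments select a policy by name from their config.
    return POLICY_REGISTRY[name]()

print(sorted(POLICY_REGISTRY))  # ['memoryvla', 'pi0']
```

Adding a new policy then means adding one decorated class, with no changes to the training or evaluation loops.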
(5/N) Key Features of Dexbotic: ✅ Unified modular VLA framework ✅ Powerful pretrained foundation models ✅ Experiment-centric development ✅ Cloud & local training support ✅ Diverse robot training and deployment
(4/N) 🌖Go from idea to result in minimal steps. Modify a single Exp script to launch new experiments, no more rewriting pipelines. Plus, use our high-performance pretrained models to boost your VLA policies from the start. See our tech report:
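The "modify a single Exp script" workflow usually boils down to an experiment config object where a new run is just the base config plus a few overrides. A minimal sketch of that pattern; all field and checkpoint names here are assumptions, not Dexbotic's real Exp schema.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ExpConfig:
    # Illustrative fields only; the actual Exp script fields may differ.
    policy: str = "pi0"
    lr: float = 1e-4
    batch_size: int = 32
    pretrained: str = "dexbotic-base"  # hypothetical checkpoint name

def launch(cfg: ExpConfig) -> str:
    # A real launcher would build the dataset, model, and trainer from cfg;
    # here we just render the run name the config would produce.
    return f"{cfg.policy}_lr{cfg.lr}_bs{cfg.batch_size}"

# A "new experiment" is the base config with a couple of overrides,
# leaving the pipeline code untouched.
base = ExpConfig()
sweep = replace(base, lr=5e-5, batch_size=64)
print(launch(sweep))  # pi0_lr5e-05_bs64
```

Freezing the dataclass keeps configs immutable, so every run name maps to exactly one reproducible setting.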
(3/N) 💡Architecture Overview Data Layer: Dexdata format unifies multimodal inputs and optimizes storage. Model Layer: Integrates strong pretrained VLMs and supports policies like π0. Experiment Layer: Config-driven scripts enable fast iteration without compromising stability.
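At the bottom of that stack, "unifies multimodal inputs" means mapping heterogeneous sources (different robots, sims, cameras) into one record layout that the model layer consumes. A toy illustration in the spirit of the Dexdata idea; the field names are assumptions, not the real format.

```python
def to_record(image_bytes: bytes, instruction: str,
              action: list, meta: dict) -> dict:
    """Normalize one timestep from any source into a single schema."""
    return {
        "obs/image": image_bytes,        # raw encoded image, stored once
        "task/instruction": instruction, # language goal for the VLA
        "action": action,                # normalized action vector
        "meta": meta,                    # robot type, episode id, etc.
    }

rec = to_record(b"\x89PNG...", "open the drawer",
                [0.1] * 7, {"robot": "DOS-W1"})
print(sorted(rec))  # ['action', 'meta', 'obs/image', 'task/instruction']
```

Once every source emits this schema, the model and experiment layers never need source-specific branches, which is what makes config-driven iteration possible.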
(2/N) 🧩VLA research today is fragmented: inconsistent setups, irreproducible benchmarks, and outdated base models slow progress. Dexbotic changes that by supporting multiple VLA policies in one environment. Reproduce, compare, and extend with ease. Website: https://t.co/VLCqsvCzlP