Dexmal

@Dexmal_AI

Followers: 188
Following: 1
Media: 5
Statuses: 22

Build Intelligent, Useful and Trustworthy Robots to Make Our Life Better

Joined September 2025
@Dexmal_AI
Dexmal
2 months
(1/N) 💡Introducing Dexbotic—an open-source, PyTorch-based toolbox for Vision-Language-Action (VLA) models. 🚀 Built for embodied AI researchers, it delivers a unified, end-to-end codebase to accelerate VLA development and evaluation.
7
2
14
@Dexmal_AI
Dexmal
7 days
Dexmal was thrilled to be a Gold Partner at #COSCon2025. We shared our Full-Stack Open-Source solutions in Embodied AI software, open hardware, and @RoboChallengeAI. Big thanks to the community. Let’s keep pushing the boundaries of open-source robotics! 🚀 #Dexmal #EmbodiedAI
0
0
1
@Dexmal_AI
Dexmal
12 days
🌍Introducing RAGNet: A massive benchmark & framework for reasoning-based robotic grasping. RAGNet tackles open-world data scarcity with:
🖼️ 273k images (Wild/Robot/Sim)
🧠 26k functional instructions (e.g., "something to drink")
Check out our paper: https://t.co/GYBHbkmocJ
arxiv.org
General robotic grasping systems require accurate object affordance perception in diverse open-world scenarios following human instructions. However, current studies suffer from the problem of...
0
0
0
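To make the "functional instruction" idea above concrete, here is a minimal sketch of how a RAGNet-style grasping sample could be represented and filtered by function. The schema (field names, split labels) is an assumption for illustration only, not the dataset's actual format; see the linked paper for the real structure.

```python
# Hypothetical sketch of a RAGNet-style sample; field names are assumptions,
# not the dataset's real schema.
from dataclasses import dataclass
from typing import List

@dataclass
class GraspSample:
    image_path: str        # frame from the Wild / Robot / Sim splits
    instruction: str       # functional instruction, e.g. "something to drink"
    affordance_mask: str   # path to the region a gripper should target
    source: str            # "wild", "robot", or "sim"

def filter_by_function(samples: List[GraspSample], keyword: str) -> List[GraspSample]:
    """Select samples whose functional instruction mentions a keyword."""
    return [s for s in samples if keyword in s.instruction.lower()]

samples = [
    GraspSample("imgs/0001.jpg", "something to drink", "masks/0001.png", "wild"),
    GraspSample("imgs/0002.jpg", "something to cut bread with", "masks/0002.png", "sim"),
]
print(filter_by_function(samples, "drink"))  # -> the drink-related sample only
```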
@Dexmal_AI
Dexmal
13 days
🤖Most robots get confused when objects change scale or viewpoint. Not GeoVLA. By explicitly modeling 3D geometry alongside visual semantics, our new framework adapts seamlessly to the physical world.🌍 🔗 Explore the work:
0
0
0
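A minimal PyTorch sketch of the general idea in the tweet above: encode explicit 3D geometry (e.g. points) in its own branch and fuse it with visual-semantic features before predicting actions. Module names and dimensions here are assumptions; this is a conceptual illustration, not GeoVLA's actual architecture.

```python
# Conceptual sketch only: explicit 3D-geometry branch fused with 2D semantics.
import torch
import torch.nn as nn

class GeometryAwarePolicy(nn.Module):
    def __init__(self, vis_dim=512, geo_dim=256, act_dim=7):
        super().__init__()
        self.geo_encoder = nn.Sequential(            # per-point MLP, then pooling
            nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, geo_dim))
        self.fuse = nn.Linear(vis_dim + geo_dim, 512)
        self.action_head = nn.Linear(512, act_dim)   # e.g. 6-DoF pose + gripper

    def forward(self, vis_feat, points):
        # vis_feat: (B, vis_dim) semantic features from a vision-language backbone
        # points:   (B, N, 3) 3D points carrying scale/viewpoint cues explicitly
        geo_feat = self.geo_encoder(points).max(dim=1).values   # (B, geo_dim)
        fused = torch.relu(self.fuse(torch.cat([vis_feat, geo_feat], dim=-1)))
        return self.action_head(fused)

policy = GeometryAwarePolicy()
actions = policy(torch.randn(2, 512), torch.randn(2, 1024, 3))
print(actions.shape)  # torch.Size([2, 7])
```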
@Dexmal_AI
Dexmal
14 days
Dexmal presents ManiAgent: An agentic architecture for general robotic manipulation. By leveraging multi-agent collaboration, ManiAgent crushes long-horizon tasks:
📈 86.8% on SimplerEnv
🦾 95.8% real-world success
Check out the code 👇 https://t.co/6FwKkWEBsl
0
0
0
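As a rough illustration of the multi-agent idea, the sketch below chains a perception agent, a planning agent, and an action agent. The roles and interfaces are invented for illustration and do not reflect ManiAgent's actual design; in practice each stub would be backed by a VLM, an LLM, or a low-level policy.

```python
# Conceptual agentic pipeline: perception -> planning -> action (stubs only).
from typing import List

def perception_agent(observation: str) -> dict:
    """Describe the scene (in practice: a VLM call)."""
    return {"objects": ["mug", "kettle"], "scene": observation}

def planning_agent(scene: dict, task: str) -> List[str]:
    """Decompose a long-horizon task into sub-steps (in practice: an LLM call)."""
    return [f"locate {scene['objects'][0]}", "grasp it", f"finish: {task}"]

def action_agent(step: str) -> str:
    """Map a sub-step to a low-level action call (in practice: a policy/skill)."""
    return f"executed<{step}>"

def run_pipeline(observation: str, task: str) -> List[str]:
    scene = perception_agent(observation)
    plan = planning_agent(scene, task)
    return [action_agent(step) for step in plan]

print(run_pipeline("table with a mug and a kettle", "pour water into the mug"))
```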
@RoboChallengeAI
RoboChallenge
22 days
📢 RoboChallenge Committee Established! To bridge the gap to Action Intelligence, we need real benchmarks. 🦾 Partners: @Dexmal_AI, @huggingface, BAAI, @AgiBot_zhiyuan, @GalaxeaDynamics, @XSquareRobot, Qwen, @gosimfoundation & others. Define the standard for real-world robot eval. 🌐
1
4
3
@Dexmal_AI
Dexmal
23 days
How do you make robotic manipulation robust against noisy depth data? 🤔 Meet SpatialActor (AAAI26 Oral 🌟). Our method is a disentangled framework that explicitly decouples semantics and geometry. The result? Smarter, more reliable robots. 🤖 Check it out 👇
arxiv.org
Robotic manipulation requires precise spatial understanding to interact with objects in the real world. Point-based methods suffer from sparse sampling, leading to the loss of fine-grained...
0
0
0
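A toy sketch of the disentangling idea: keep semantic (RGB) and geometric (depth) features in separate branches so that noise in depth does not corrupt the semantic representation, and fuse them only at the action head. This is a conceptual illustration under assumed dimensions, not SpatialActor's implementation.

```python
# Toy illustration: late fusion of separate semantic and geometric branches.
import torch
import torch.nn as nn

class DisentangledManipPolicy(nn.Module):
    def __init__(self, rgb_dim=512, depth_dim=128, act_dim=7):
        super().__init__()
        self.semantic_proj = nn.Linear(rgb_dim, 256)        # appearance / language grounding
        self.geometry_proj = nn.Sequential(                 # depth stays in its own branch
            nn.Linear(depth_dim, 256), nn.LayerNorm(256))   # normalization tempers depth noise
        self.head = nn.Linear(512, act_dim)

    def forward(self, rgb_feat, depth_feat):
        sem = torch.relu(self.semantic_proj(rgb_feat))
        geo = torch.relu(self.geometry_proj(depth_feat))
        return self.head(torch.cat([sem, geo], dim=-1))     # fuse only at the head

policy = DisentangledManipPolicy()
noisy_depth = torch.randn(4, 128) + 0.3 * torch.randn(4, 128)  # simulate sensor noise
print(policy(torch.randn(4, 512), noisy_depth).shape)          # torch.Size([4, 7])
```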
@Dexmal_AI
Dexmal
1 month
How can we enable robots to have memory? We introduce MemoryVLA: a Cognition-Memory-Action framework boosting long-horizon robotic manipulation by modeling temporal context with working memory and PCMB. Check our paper for details: https://t.co/GQftpxzXmd
arxiv.org
Temporal context is essential for robotic manipulation because such tasks are inherently non-Markovian, yet mainstream VLA models typically overlook it and struggle with long-horizon, temporally...
2
0
6
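The sketch below illustrates the working-memory concept in isolation: a bounded buffer of past observation features from which the current step retrieves the most similar entries, giving the policy temporal context instead of a single frame. Buffer size and retrieval scheme are assumptions for illustration and do not describe the actual PCMB.

```python
# Working-memory sketch: store past features, retrieve the most relevant ones.
from collections import deque
import torch
import torch.nn.functional as F

class WorkingMemory:
    def __init__(self, capacity: int = 32):
        self.buffer = deque(maxlen=capacity)   # oldest entries drop out automatically

    def write(self, feat: torch.Tensor) -> None:
        self.buffer.append(feat.detach())

    def read(self, query: torch.Tensor, top_k: int = 4) -> torch.Tensor:
        """Return the top-k stored features most similar to the current query."""
        if not self.buffer:
            return query.unsqueeze(0)
        mem = torch.stack(list(self.buffer))                  # (T, D)
        sims = F.cosine_similarity(mem, query.unsqueeze(0))   # (T,)
        idx = sims.topk(min(top_k, len(self.buffer))).indices
        return mem[idx]

memory = WorkingMemory()
for t in range(10):                      # simulate a short rollout
    obs_feat = torch.randn(256)
    context = memory.read(obs_feat)      # past features relevant to this step
    memory.write(obs_feat)
print(context.shape)                     # e.g. torch.Size([4, 256])
```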
@Dexmal_AI
Dexmal
2 months
🤖We've released RT-VLA, our optimized inference code for the Pi0 model by @physical_int, achieving up to 30 fps on a single RTX 4090! Check it out on GitHub:
github.com
Running VLA at 30Hz frame rate and 480Hz trajectory frequency - Dexmal/realtime-vla
4
1
6
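For context, here is a generic way to measure a policy's control rate with standard PyTorch settings (eval mode, no_grad, fp16 on GPU, CUDA-synchronized timing) using a stand-in model. It shows how a 30 fps target could be checked, not what RT-VLA actually does to reach it.

```python
# Generic throughput harness with a dummy model; not RT-VLA's optimizations.
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32  # fp16 only on GPU

model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 7))
model = model.to(device=device, dtype=dtype).eval()
dummy_obs = torch.randn(1, 1024, device=device, dtype=dtype)

with torch.no_grad():
    for _ in range(10):                      # warm-up iterations
        model(dummy_obs)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(100):
        model(dummy_obs)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{100 / elapsed:.1f} inferences/sec")  # compare against the 30 fps target
```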
@RoboChallengeAI
RoboChallenge
2 months
Thrilled to see Hugging Face co-founder Thomas Wolf @Thom_Wolf visiting the @RoboChallenge_AI booth at #IROS2025! Great discussions with @Dexmal_AI’s Co-founder Fan Haoqiang on the future of #RoboChallenge. Exciting steps ahead for the robotics community! 🤖🚀
2
1
14
@Dexmal_AI
Dexmal
2 months
(11/N) 📩Join the community. Code, test, contribute. As Linus Torvalds said: “Software evolution requires collective wisdom.” Let’s build the future of embodied AI—together. Discord: https://t.co/KB23EbZp0E
discord.com
Check out the Dexbotic community on Discord: hang out with nearly 28 members and enjoy free voice and text chat.
0
0
0
@Dexmal_AI
Dexmal
2 months
(10/N) 🎯Our goal with Dexbotic is to build the foundational layer for general-purpose robot intelligence. Hugging Face: https://t.co/V8kz0OkfqO
huggingface.co
1
0
0
@Dexmal_AI
Dexmal
2 months
(9/N) 🔅We’re committed to expanding the Dexbotic ecosystem—integrating more base models, sim2real tools, and real-world deployment support. GitHub: https://t.co/3N4A23ikuf
github.com
Dexbotic: Open-Source Vision-Language-Action Toolbox - Dexmal/dexbotic
1
0
0
@Dexmal_AI
Dexmal
2 months
(8/N) 🎻Key features of DOS-W1:
🔘Fully open-source hardware design.
🔘Extensive quick-release, modular, and replaceable components.
🔘Low cost.
🔘Ergonomic design tailored to data collectors to reduce fatigue.
1
0
0
@Dexmal_AI
Dexmal
2 months
(7/N) 🤖We also offer our first robot product: Dexbotic Open Source W1 (DOS-W1). By integrating hardware design with embodied intelligence, DOS-W1 is not just an execution terminal but an open-source intelligent platform.
1
0
0
@Dexmal_AI
Dexmal
2 months
(6/N) Compatible with popular VLA policies, including:
🔘Pi0
🔘OpenVLA-OFT
🔘CogACT
🔘MemoryVLA
🔘MUVLA
🔘…and growing
1
0
0
@Dexmal_AI
Dexmal
2 months
(5/N) Key Features of Dexbotic:
✅ Unified modular VLA framework
✅ Powerful pretrained foundation models
✅ Experiment-centric development
✅ Cloud & local training support
✅ Diverse robot training and deployment
1
0
0
@Dexmal_AI
Dexmal
2 months
(4/N) 🌖Go from idea to result in minimal steps. Modify a single Exp script to launch new experiments—no more rewriting pipelines. Plus, use our high-performance pretrained models to boost your VLA policies from the start. Please see the tech report:
1
0
0
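Purely as an illustration of the "single Exp script" workflow described in (4/N), the sketch below shows one small declarative file naming the policy, pretrained weights, data, and training settings. DexboticExp, run_exp, and every field name here are hypothetical, not Dexbotic's actual API; consult the GitHub repo and tech report for the real entry points.

```python
# Hypothetical Exp-script sketch; names and fields are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DexboticExp:                      # one experiment = one small config object
    name: str = "pi0_pick_and_place"
    policy: str = "pi0"                 # which VLA policy to instantiate
    pretrained: str = "dexbotic/pi0-base"
    dataset: str = "dexdata://pick_and_place_v1"
    train: dict = field(default_factory=lambda: {
        "batch_size": 64, "lr": 1e-4, "max_steps": 50_000})

def run_exp(exp: DexboticExp) -> None:
    """Stand-in launcher: a real framework would build the model, data, and trainer."""
    print(f"[{exp.name}] training {exp.policy} from {exp.pretrained} "
          f"on {exp.dataset} with {exp.train}")

if __name__ == "__main__":
    run_exp(DexboticExp())              # change a field, launch a new experiment
```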
@Dexmal_AI
Dexmal
2 months
(3/N) 💡Architecture Overview
Data Layer: Dexdata format unifies multimodal inputs and optimizes storage.
Model Layer: Integrates strong pretrained VLMs and supports policies like π0.
Experiment Layer: Config-driven scripts enable fast iteration without compromising stability.
1
0
0
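To illustrate what a unified data layer can buy, here is a sketch of a single multimodal sample record carrying images, proprioception, language, and actions together, so every policy consumes the same structure. The field names and shapes are assumptions for illustration and are not the actual Dexdata format.

```python
# Illustrative unified-sample record; not the real Dexdata schema.
from dataclasses import dataclass
import numpy as np

@dataclass
class UnifiedSample:
    images: dict            # camera name -> HxWx3 uint8 array
    proprio: np.ndarray     # joint positions / gripper state
    instruction: str        # natural-language task description
    action: np.ndarray      # target action chunk for the policy

sample = UnifiedSample(
    images={"wrist_cam": np.zeros((224, 224, 3), dtype=np.uint8)},
    proprio=np.zeros(8, dtype=np.float32),
    instruction="put the red block in the bowl",
    action=np.zeros((16, 7), dtype=np.float32),   # 16-step chunk of 7-DoF actions
)
print(sample.instruction, sample.action.shape)
```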
@Dexmal_AI
Dexmal
2 months
(2/N) 🧩VLA research today is fragmented. Inconsistent setups, irreproducible benchmarks, and outdated base models slow progress. Dexbotic changes that, supporting multiple VLA policies under one environment. Reproduce, compare, and extend with ease. Website: https://t.co/VLCqsvCzlP
1
0
0