Jianlan Luo Profile
Jianlan Luo

@jianlanluo

Followers
1K
Following
120
Media
45
Statuses
115

Previously: Postdoc @berkeley_ai; Google X @Theteamatx; PhD from @UCBerkeley

Berkeley, CA
Joined January 2013
@jianlanluo
Jianlan Luo
11 months
We present HIL-SERL, a reinforcement learning framework for training general-purpose vision-based robotic manipulation policies directly in the real world. It effectively addresses a wide range of challenging manipulation tasks: dynamic manipulation, dual-arm coordination,
8
31
302
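As an illustrative aside, here is a minimal Python sketch of what a human-in-the-loop real-world RL training loop of this kind might look like. All interfaces here (env, policy, replay_buffer, reward_classifier, get_human_intervention) are hypothetical placeholders for illustration, not the released HIL-SERL API.

# Rough, hypothetical sketch of a human-in-the-loop real-world RL loop,
# in the spirit of the framework described above. Not the actual code.
def train_hil_rl(env, policy, replay_buffer, reward_classifier, num_steps=20_000):
    obs = env.reset()
    for _ in range(num_steps):
        action = policy.sample(obs)

        # A human operator may override the policy's action at any time
        # (e.g., via a teleoperation device); the correction is recorded.
        human_action = env.get_human_intervention()  # None if no intervention
        executed = human_action if human_action is not None else action

        next_obs = env.step(executed)
        # Success is judged by a learned image-based reward classifier rather
        # than a hand-engineered state-based reward.
        reward = float(reward_classifier(next_obs))
        done = reward > 0.5

        replay_buffer.add(obs, executed, reward, next_obs, done,
                          intervened=human_action is not None)

        # Off-policy updates (e.g., SAC-style) interleaved with data collection.
        policy.update(replay_buffer.sample(batch_size=256))

        obs = env.reset() if done else next_obs
    return policy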
@jianlanluo
Jianlan Luo
20 days
HIL-SERL appears in Science Robotics today! It learns complex robotic skills with reinforcement learning directly in the real world in just 1-2 hours. Website: https://t.co/BBkivJmXGG https://t.co/rZE0K0FQxH
6
19
136
@jianlanluo
Jianlan Luo
1 month
Super fun project! It folds boxes and packs them now.
@agibotworld
AgiBot Research
1 month
World models for robotics should learn, act, and evaluate in one loop. We're releasing Genie Envisioner (GE): a unified, video‑generative platform that integrates prediction, policy learning, and neural simulation.
0
0
3
@svlevine
Sergey Levine
5 months
This was a fun project! It's also nice to work together with @jianlanluo and his new colleagues at @AgiBot_zhiyuan to get such a variety of skills running with one cross-embodiment VLA.
@physical_int
Physical Intelligence
5 months
We are excited to share new experiments with AgiBot @AgiBot_zhiyuan on multi-task, multi-embodiment VLAs! With one model that can perform many tasks with both two-finger grippers and multi-fingered hands, we take another step toward one model for all robots and tasks.
2
7
103
@AgiBot_zhiyuan
AgiBot
5 months
AgiBot 🤝 Physical Intelligence @physical_int
1
5
10
@jianlanluo
Jianlan Luo
5 months
Very happy to work with some old and new friends at @physical_int and @AgiBot_zhiyuan to make this happen! One generalist policy for dexterous manipulation (hand + gripper); more videos are available at:
@physical_int
Physical Intelligence
5 months
We are excited to share new experiments with AgiBot @AgiBot_zhiyuan on multi-task, multi-embodiment VLAs! With one model that can perform many tasks with both two-finger grippers and multi-fingered hands, we take another step toward one model for all robots and tasks.
0
0
25
@jianlanluo
Jianlan Luo
7 months
And it works really well on these very difficult tasks! I'll highlight a few where the initial configurations were intentionally set up to be infeasible to complete directly. ReflectVLM was able to generate plans that move the blocking objects out of the way and then replan.
1
0
1
@jianlanluo
Jianlan Luo
7 months
At inference time, it uses the trained "base" model together with the diffusion model to perform planning; the reflected outcome can revise the action proposed by the base model. In this regard, it can be seen as a lightweight yet effective inference-time computation method!
1
0
3
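As an aside, the propose-imagine-reflect loop described in the tweet above could be sketched roughly as follows in Python. base_vlm, diffusion_model, and their methods are hypothetical interfaces used only for illustration, not the actual ReflectVLM code.

# Hedged sketch of inference-time reflection: propose, imagine, revise.
def plan_step(obs, goal, base_vlm, diffusion_model, num_revisions=1):
    # 1. The fine-tuned base VLM proposes an action plan from the current image.
    action = base_vlm.propose(obs, goal)
    for _ in range(num_revisions):
        # 2. The diffusion model "imagines" the future observation that would
        #    result from executing the proposed action.
        imagined_obs = diffusion_model.predict(obs, action)
        # 3. The VLM reflects on the imagined outcome and may revise the action.
        action = base_vlm.reflect(obs, goal, action, imagined_obs)
    return action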
@jianlanluo
Jianlan Luo
7 months
ReflectVLM uses a pre-trained VLM such as LLaVA to generate an initial action plan; it also uses a diffusion model to imagine the future outcomes of executing that plan and reflects on those outcomes when fine-tuning the model.
1
1
8
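In the same spirit, a hedged sketch of how a single reflection example for fine-tuning might be assembled: the VLM's initial plan is paired with a diffusion-imagined outcome and a revised target plan. All names here are illustrative assumptions rather than the paper's implementation.

# Hypothetical construction of one reflection training example.
def build_reflection_example(obs, goal, base_vlm, diffusion_model, revise_plan):
    initial_plan = base_vlm.propose(obs, goal)
    imagined_obs = diffusion_model.predict(obs, initial_plan)
    # The training target is an improved plan given the imagined outcome
    # (e.g., from an expert or a post-hoc correction procedure).
    target_plan = revise_plan(obs, goal, initial_plan, imagined_obs)
    return {
        "prompt": (obs, goal, initial_plan, imagined_obs),
        "target": target_plan,
    }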
@jianlanluo
Jianlan Luo
7 months
Out of 100 test assembly sets, we found that SoTA commercial VLMs such as GPT-o1 and Gemini 2.0 Thinking struggle with these tasks, which require a nuanced understanding of intricate physics. Our method, ReflectVLM, achieves 6x better performance on these tasks.
1
0
2
@jianlanluo
Jianlan Luo
7 months
We consider the problem of high-level robotic task planning, where a planner needs to reactively choose the right low-level skills. We procedurally generate many interlocking objects to be assembled, so the benchmark presents a true challenge for physical reasoning and planning.
1
1
3
@jianlanluo
Jianlan Luo
7 months
VLMs are great and could potentially be used to solve robotic planning problems. But can they really solve multi-stage long-horizon planning problems that require sophisticated reasoning about nuanced physics? In ReflectVLM, we tackle this problem🧵
2
23
132
@berkeley_ai
Berkeley AI Research
7 months
Work from BAIR researchers in the RAIL lab, led by @svlevine.
@UCBerkeley
UC Berkeley
7 months
UC Berkeley researchers devised a fast and precise way to teach robots tasks like assembling a motherboard or an IKEA drawer. 🤖 https://t.co/IxELGtkC1m
3
3
45
@jianlanluo
Jianlan Luo
7 months
HIL-SERL/SERL is featured by Berkeley News! @svlevine @CharlesXu0124 @real_ZheyuanHu @JeffreyWu13579
@UCBerkeley
UC Berkeley
7 months
UC Berkeley researchers devised a fast and precise way to teach robots tasks like assembling a motherboard or an IKEA drawer. 🤖 https://t.co/IxELGtkC1m
0
0
12
@jianlanluo
Jianlan Luo
8 months
NB!
0
0
1
@jianlanluo
Jianlan Luo
8 months
The end-to-end part back then was pixel to torque, which is actually different from what we are doing today. But this paper did inspire me to work on robot learning and changed my career; hard to imagine it has been 10 years!!
@chelseabfinn
Chelsea Finn
8 months
Disappointed with your ICLR paper being rejected? Ten years ago today, Sergey and I finished training some of the first end-to-end neural nets for robot control 🤖 We submitted the paper to RSS on January 23, 2015. It was rejected for being "incremental" and "unlikely to have
0
0
9
@QuanVng
Quan Vuong
9 months
I'm hiring exceptional researchers, engineers (both research and full-stack) at @physical_int. Please apply on our website or DM me with questions. Referrals very appreciated!
10
25
234
@jianlanluo
Jianlan Luo
9 months
What she said is not acceptable at all. This is nothing but racism. PC: Internet
@NeurIPSConf
NeurIPS Conference
9 months
NeurIPS acknowledges that the cultural generalization made by the keynote speaker today reinforces implicit biases by making generalisations about Chinese scholars. This is not what NeurIPS stands for. NeurIPS is dedicated to being a safe space for all of us. We want to address
1
0
4