Ruohan Zhang Profile
Ruohan Zhang

@RuohanZhang76

Followers: 1,048 · Following: 425 · Media: 6 · Statuses: 75

Postdoc @stanfordsvl @StanfordAILab; robot, brain, art; climbing, soccer, cooking, music, dance

Joined September 2021
Pinned Tweet
@RuohanZhang76
Ruohan Zhang
6 months
Introducing our new work at @corl_conf 2023, a novel brain-robot interface system: NOIR (Neural Signal Operated Intelligent Robots). Website: Paper: 🧠🤖
@RuohanZhang76
Ruohan Zhang
6 months
Envisioned by many, the brain-robot interface (BRI) stands out as a thrilling but challenging research topic. It is an exciting time for BRI research: both brain signal decoding and robot intelligence have improved substantially thanks to modern machine learning.
@RuohanZhang76
Ruohan Zhang
6 months
NOIR is a general-purpose, intelligent BRI system that enables humans to use their brain signals to command robots to perform 20 challenging everyday activities, such as cooking, cleaning, playing games with friends, and petting a (robot) dog.
@RuohanZhang76
Ruohan Zhang
6 months
With a 10-minute calibration for each session, 3 human participants successfully accomplished 20 long-horizon tasks (4-15 subtasks each). On average, each task required 1.8 attempts to succeed, with an average completion time of 20 minutes.
@RuohanZhang76
Ruohan Zhang
6 months
NOIR uses non-invasive EEG devices to record brain activity. We decode human intentions: which object to interact with (via steady-state visually evoked potentials, SSVEP), how to interact with it (motor imagery), and where to interact (motor imagery).
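The thread does not include implementation details, but SSVEP-based target selection is commonly done with canonical correlation analysis (CCA) against frequency-tagged reference signals. Below is a minimal, generic sketch of that standard approach, not the NOIR code; the channel count, sampling rate, and stimulus frequencies are assumed for illustration.

```python
"""Toy SSVEP decoder sketch (generic CCA approach, not the NOIR implementation).

Assumptions (all hypothetical): 8-channel EEG sampled at 250 Hz, and each
selectable object is tagged with one flicker frequency in STIM_FREQS.
"""
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250.0                             # sampling rate (Hz), assumed
STIM_FREQS = [8.0, 10.0, 12.0, 15.0]   # one flicker frequency per object, assumed
N_HARMONICS = 2

def reference_signals(freq, n_samples, fs=FS, n_harmonics=N_HARMONICS):
    """Sine/cosine references at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(refs, axis=1)          # (n_samples, 2 * n_harmonics)

def classify_ssvep(eeg_epoch):
    """eeg_epoch: (n_samples, n_channels). Returns the index of the decoded object."""
    scores = []
    for freq in STIM_FREQS:
        refs = reference_signals(freq, eeg_epoch.shape[0])
        cca = CCA(n_components=1)
        cca.fit(eeg_epoch, refs)
        u, v = cca.transform(eeg_epoch, refs)
        scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
    return int(np.argmax(scores))

# Example with random data standing in for a real 2-second recording:
decoded = classify_ssvep(np.random.randn(int(2 * FS), 8))
```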
@RuohanZhang76
Ruohan Zhang
6 months
NOIR's effectiveness is further improved by few-shot robot learning algorithms built on foundation models, which allow the system to adapt to individual users, predict their intentions, and reduce human time and effort.
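The tweet does not say how the few-shot prediction works; one common recipe is retrieval over a small per-user memory of foundation-model embeddings. The sketch below illustrates only that generic idea, it is not the NOIR algorithm, and `embed` is a hypothetical stand-in for a frozen pretrained encoder.

```python
"""Minimal retrieval-style sketch of few-shot intention prediction (illustrative only)."""
import numpy as np

def embed(observation):
    # Placeholder: a real system would call a frozen foundation-model encoder here.
    return np.asarray(observation, dtype=float)

class IntentionMemory:
    """Stores a user's past (observation embedding, chosen object/skill) pairs."""
    def __init__(self):
        self.keys, self.labels = [], []

    def add(self, observation, chosen_skill):
        self.keys.append(embed(observation))
        self.labels.append(chosen_skill)

    def predict(self, observation):
        """Suggest the skill whose stored context is most similar to the current one."""
        if not self.keys:
            return None
        q = embed(observation)
        sims = [k @ q / (np.linalg.norm(k) * np.linalg.norm(q) + 1e-8) for k in self.keys]
        return self.labels[int(np.argmax(sims))]

memory = IntentionMemory()
memory.add([1.0, 0.0, 0.2], "Pick(mug)")        # a few examples from this user
memory.add([0.0, 1.0, 0.1], "Open(microwave)")
print(memory.predict([0.9, 0.1, 0.3]))           # -> "Pick(mug)"
```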
@RuohanZhang76
Ruohan Zhang
6 months
NOIR holds significant potential to augment human capabilities and to enable critical assistive technology for individuals who require everyday support. We hope NOIR paves the way for future BRI research!
@RuohanZhang76
Ruohan Zhang
6 months
Decoded human intention signals are communicated to our robots. These robots are equipped with 14 pre-defined, parameterized primitive skills, such as Pick(object, x, y, z).
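Only Pick(object, x, y, z) is named in the thread; below is a minimal sketch of how such a library of parameterized primitives could be represented and dispatched. The Place skill and the dispatch logic are illustrative placeholders, not the actual NOIR skill set.

```python
"""Sketch of a parameterized primitive-skill library (illustrative, not NOIR's)."""
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class SkillCall:
    name: str                    # e.g. "Pick"
    obj: str                     # object decoded from brain signals
    params: Tuple[float, ...]    # e.g. an (x, y, z) location decoded via motor imagery

def pick(obj: str, x: float, y: float, z: float) -> None:
    print(f"Picking {obj} at ({x:.2f}, {y:.2f}, {z:.2f})")   # robot motion would go here

def place(obj: str, x: float, y: float, z: float) -> None:
    print(f"Placing {obj} at ({x:.2f}, {y:.2f}, {z:.2f})")

SKILLS: Dict[str, Callable[..., None]] = {"Pick": pick, "Place": place}

def execute(call: SkillCall) -> None:
    """Route a decoded intention to the corresponding parameterized primitive."""
    SKILLS[call.name](call.obj, *call.params)

execute(SkillCall(name="Pick", obj="mug", params=(0.4, -0.1, 0.8)))
```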
@RuohanZhang76
Ruohan Zhang
2 years
Proud to be part of the amazing BEHAVIOR team; hope to see you at our tutorial on Monday!
@drfeifei
Fei-Fei Li
2 years
Do you want to learn to train and evaluate embodied AI solutions for 1000 household tasks in a realistic simulator? Join our BEHAVIOR Tutorial at #ECCV2022: Benchmarking Embodied AI Solutions in Natural Tasks! Time: Monday, Oct 24th, 14:00 local time (4:00 Pacific Time)
@RuohanZhang76
Ruohan Zhang
2 years
Check out our recent article at @gradientpub on training decision-making AI with human guidance! The original JAAMAS review paper was written with Faraz Torabi, @GarrettWarnell, and @PeterStone_TX.
@gradientpub
The Gradient
2 years
How do humans transfer their knowledge and skills to artificial decision-making agents? What kind of knowledge and skills should humans provide, and in what format? @RuohanZhang76, a postdoc at @StanfordSVL and @StanfordAILab, provides a summary: 👇
@RuohanZhang76
Ruohan Zhang
2 years
Excited to be part of this NeurIPS workshop. If you are interested in attention, please consider submitting your work!
@attentioneurips
AllThingsAttention
2 years
We invite you to submit papers (up to 9 pages for long papers and up to 5 pages for short papers, excluding references and appendix) in the NeurIPS 2022 format. All submissions will be managed through the OpenReview submission website. See the Call for Papers. 4/N
@RuohanZhang76
Ruohan Zhang
10 months
Check out our new work on using foundation models for robot manipulation!
@wenlong_huang
Wenlong Huang
10 months
How to harness foundation models for *generalization in the wild* in robot manipulation? Introducing VoxPoser: use LLM+VLM to label affordances and constraints directly in 3D perceptual space for zero-shot robot manipulation in the real world! 🌐 🧵👇
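As a rough illustration of the idea in the quoted thread, and not the released VoxPoser code, the toy sketch below writes affordance and constraint values into a 3D voxel grid and moves toward the highest-value voxel; the grid size, bump locations, and weights are all made up.

```python
"""Toy 3D value map for manipulation (illustrative only, not VoxPoser).

In the real method an LLM/VLM labels which regions are desirable (affordances)
and which must be avoided (constraints); here those labels are hard-coded.
"""
import numpy as np

GRID = 32                                   # voxels per axis, assumed discretization
value = np.zeros((GRID, GRID, GRID))

def add_gaussian(center, weight, sigma=3.0):
    """Add a smooth positive (affordance) or negative (constraint) bump."""
    idx = np.indices((GRID, GRID, GRID))    # (3, GRID, GRID, GRID) voxel coordinates
    d2 = sum((idx[i] - center[i]) ** 2 for i in range(3))
    value[:] += weight * np.exp(-d2 / (2 * sigma ** 2))

add_gaussian(center=(24, 10, 8), weight=+1.0)   # e.g. "the drawer handle" (affordance)
add_gaussian(center=(16, 16, 8), weight=-2.0)   # e.g. "stay away from the vase" (constraint)

next_waypoint = np.unravel_index(np.argmax(value), value.shape)
print("move toward voxel", next_waypoint)
```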
@RuohanZhang76
Ruohan Zhang
2 years
In honor of our upcoming @NeurIPSConf workshop on "All Things Attention" (submission deadline extended to **Oct 3**), I present a thread on attention and decision making in AI!
@attentioneurips
AllThingsAttention
2 years
The submission deadline has been extended to Oct 3, 2022 (11:59 PM AoE). Consider submitting your work to our @NeurIPSConf workshop @attentioneurips. See details here:
@RuohanZhang76
Ruohan Zhang
6 months
@HarryXu12 @corl_conf Lol thanks Huazhe!
@RuohanZhang76
Ruohan Zhang
1 year
Real-world performance is amazing.
@dchaplot
Devendra Chaplot
1 year
Robot visual navigation in unseen homes is hard: end-to-end RL works well in sim but gets only 23% real-world success. Today, in the first real-world empirical study of visual navigation, we show Modular Learning achieves 90% success in unseen homes! 1/N
@RuohanZhang76
Ruohan Zhang
2 years
Please come by to see our spotlight paper at #NeurIPS2021 on Thursday!
@RuohanZhang76
Ruohan Zhang
1 year
At @NeurIPSConf today; I haven’t been to an in-person conference in years. Let’s catch up!
@RuohanZhang76
Ruohan Zhang
2 years
Glad to be part of the team, and thanks to @sunfanyun for leading the effort. Come talk to us at @NeurIPSConf this year!
@sunfanyun
Fan-Yun Sun
2 years
How can we effectively predict the dynamics of multi-agent systems? 💥 Identify the relationships. 💥 We are excited to share IMMA at #NeurIPS2022, a SOTA forward prediction model that infers agent relationships -- simply by observing their behavior. 1/
@RuohanZhang76
Ruohan Zhang
2 years
@kylehkhsu Go Kyle!
@RuohanZhang76
Ruohan Zhang
2 years
@yukez Congratulations!
@RuohanZhang76
Ruohan Zhang
1 year
@chenwang_j It’s a pleasure to work with you and this team. The key insight is that, for robot learning from humans, the data used to train the high-level planner and the low-level visuomotor skills can be different. Play data is a good candidate for learning to plan.
@chenwang_j
Chen Wang
1 year
How to teach robots to perform long-horizon tasks efficiently and robustly🦾? Introducing MimicPlay - an imitation learning algorithm that uses "cheap human play data". Our approach unlocks both real-time planning through raw perception and strong robustness to disturbances!🧵👇
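A minimal sketch of the high-level/low-level split described above, not the actual MimicPlay architecture: a planner (which could be trained on cheap human play data) emits a latent plan, and a separate visuomotor policy conditioned on that plan outputs robot actions. All dimensions and layer sizes below are arbitrary.

```python
"""Minimal two-level policy split (illustrative only, not MimicPlay itself)."""
import torch
import torch.nn as nn

OBS_DIM, GOAL_DIM, PLAN_DIM, ACT_DIM = 64, 16, 32, 7   # arbitrary sizes

class HighLevelPlanner(nn.Module):
    """Maps observation + goal to a latent plan; could be trained on human play data."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + GOAL_DIM, 128), nn.ReLU(), nn.Linear(128, PLAN_DIM))

    def forward(self, obs, goal):
        return self.net(torch.cat([obs, goal], dim=-1))

class LowLevelPolicy(nn.Module):
    """Maps observation + latent plan to a robot action; trained on robot demonstrations."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + PLAN_DIM, 128), nn.ReLU(), nn.Linear(128, ACT_DIM))

    def forward(self, obs, plan):
        return self.net(torch.cat([obs, plan], dim=-1))

planner, policy = HighLevelPlanner(), LowLevelPolicy()
obs, goal = torch.randn(1, OBS_DIM), torch.randn(1, GOAL_DIM)
action = policy(obs, planner(obs, goal))      # (1, ACT_DIM) action for the robot
```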
@RuohanZhang76
Ruohan Zhang
1 year
Amazing work!
@jerryptang
Jerry Tang
1 year
Our language decoding paper (@AmandaLeBel3 @shaileeejain @alex_ander) is out! We found that it is possible to use functional MRI scans to predict the words that a user was hearing or imagining when the scans were collected.
@RuohanZhang76
Ruohan Zhang
1 year
@dabelcs here as well!
@RuohanZhang76
Ruohan Zhang
8 months
@ShiqiZhang7 oh congrats Shiqi!