Edward Johns

@Ed__Johns

Followers: 3K · Following: 897 · Media: 193 · Statuses: 400

Associate Professor and Director of the Robot Learning Lab at Imperial College London.

London, UK
Joined January 2019
@Ed__Johns
Edward Johns
1 month
RT @DJiafei: 1/ 🚀 Announcing #GenPriors — the CoRL 2025 workshop on Generalizable Priors for Robot Manipulation! 📍 Seoul, Korea 📅 Sat 27 S…
@Ed__Johns
Edward Johns
2 months
I am very happy and proud today that my student Norman Di Palo (@normandipalo) has just passed his PhD viva! Norman has had a very creative few years in my lab and I've learned a lot from his insights and curiosity. Congratulations Norman, it has been a pleasure working with you.
[Image attached]
@Ed__Johns
Edward Johns
2 months
Vitalis and I had a cracking chat about "Instant Policy" with @micoolcho and @chris_j_paxton. Thanks for the invite! If you want to know how to really get in-context learning working in robotics, @vitalisvos19 and I take you through it in the video below 👇
@RoboPapers
RoboPapers
2 months
Ep#13 with @vitalisvos19 & @Ed__Johns on Instant Policy: In-Context Imitation Learning via Graph Diffusion. Co-hosted by @chris_j_paxton & @micoolcho
@Ed__Johns
Edward Johns
2 months
RT @RoboPapers: Full episode coming soon! Geeking out with @vitalisvos19 & @Ed__Johns on Instant Policy: In-Context Imitation Learning via…
@Ed__Johns
Edward Johns
3 months
A few years ago, humanoids with legs walking around the ICRA exhibition were the new thing. This time, it’s the year of the hands! Tons and tons of humanoid hands! #ICRA2025
[Four images attached]
@Ed__Johns
Edward Johns
3 months
At #ICRA2025, I'm about to present: "R+X: Retrieval and Execution from Everyday Human Videos". By using a VLM for retrieval and in-context IL for execution, robots can now learn from just videos of humans! Led by @geopgs and @normandipalo. See:
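To make the retrieve-then-execute idea above concrete, here is a minimal runnable sketch of such a pipeline. The data layout, the keyword matcher standing in for the VLM, and every function name below are illustrative assumptions, not the released R+X code or API.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class HumanClip:
    description: str                                   # what the human does in the clip
    obs_action_pairs: List[Tuple] = field(default_factory=list)  # (obs, action) pairs extracted from the video

def retrieve(clips: List[HumanClip], command: str) -> List[HumanClip]:
    # Stand-in for the VLM retrieval step: a naive text match.
    # R+X itself queries a vision-language model over the stored human videos.
    return [c for c in clips if c.description.lower() in command.lower()]

def execute(retrieved: List[HumanClip], live_observation):
    # Stand-in for the execution step: an in-context imitation-learning policy
    # would condition on the retrieved clips plus the live observation and
    # output robot actions, with no task-specific training.
    context = [c.obs_action_pairs for c in retrieved]
    return {"num_context_clips": len(context), "observation": live_observation}

library = [HumanClip("open the drawer"), HumanClip("pick up the mug")]
print(execute(retrieve(library, "pick up the mug"), live_observation="scene point cloud"))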
@Ed__Johns
Edward Johns
3 months
RT @normandipalo: we open sourced the code to transform human videos into robot trajectories, so you can train robots with your hands 👐🏻. w….
@Ed__Johns
Edward Johns
3 months
Tomorrow morning at #ICRA2025, I will be presenting our findings on whether robots can learn dual-arm tasks from just a single demonstration. (Spoiler: they can!) Come along! This was led by my excellent PhD student Yilong Wang. Paper & videos here:
@Ed__Johns
Edward Johns
3 months
Code: Paper: Thanks @geopgs for the code, and @normandipalo and @pitvit_ for the collaboration!
@Ed__Johns
Edward Johns
3 months
In advance of presenting our ICRA 2025 paper tomorrow, we have released our code for "R+X: Retrieval and Execution from Everyday Human Videos". You can now use this code yourself to extract robot actions just from videos of human demonstrations! See below for more info 👇👇👇
@Ed__Johns
Edward Johns
4 months
Thank you very much to @Andrey__Kolobov, @shahdhruv_, @AlexBewleyAI, and the rest of the organisers for delivering an excellent and packed-out workshop!
@Ed__Johns
Edward Johns
4 months
Vitalis (@vitalisvos19) and I were really honoured today to win the Best Paper Award at the ICLR 2025 Robot Learning Workshop for our paper “Instant Policy”. The video below shows Instant Policy in action… A single demonstration is all you need! See:
@Ed__Johns
Edward Johns
4 months
This was led by my excellent student Vitalis Vosylius (@vitalisvos19), in the final project of his PhD. To read the paper and see more videos, please visit (7/7)
[Image attached]
@Ed__Johns
Edward Johns
4 months
Beyond just regular imitation learning, we also discovered two intriguing downstream applications: (1) Cross-embodiment transfer from human-hand demonstrations to robot policies. (2) Zero-shot transfer to language-defined tasks without needing large language-annotated datasets.
@Ed__Johns
Edward Johns
4 months
One very exciting aspect of Instant Policy is that the network can be trained primarily with "pseudo-demonstrations": arbitrary trajectories with random objects, all in simulation. And we found very promising scaling laws: we can continue to generate these pseudo-demonstrations…
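As a rough illustration of what a "pseudo-demonstration" could look like, the toy generator below samples a random object point cloud and an arbitrary straight-line gripper trajectory towards it, emitting (observation, action) pairs. This is an assumed simplification (translations only, not full relative gripper poses) and not the paper's actual generation procedure.

import numpy as np

def make_pseudo_demo(num_points=256, num_steps=10, rng=None):
    """Generate one hypothetical pseudo-demonstration: a random 'object'
    point cloud plus an arbitrary trajectory that ends on it."""
    rng = np.random.default_rng() if rng is None else rng
    # Random object: a small cluster of points at a random location.
    centre = rng.uniform(-0.3, 0.3, size=3)
    cloud = centre + 0.02 * rng.standard_normal((num_points, 3))
    # Arbitrary trajectory: straight-line interpolation from a random start
    # position towards the object (real pseudo-demos could be far more varied).
    start = rng.uniform(-0.5, 0.5, size=3)
    waypoints = np.linspace(start, centre, num_steps)
    demo = []
    for t in range(num_steps - 1):
        obs = cloud                                  # observation: the point cloud
        action = waypoints[t + 1] - waypoints[t]     # action: relative gripper motion
        demo.append((obs, action))
    return demo

demo = make_pseudo_demo()
print(len(demo), demo[0][0].shape, demo[0][1].shape)  # 9 (256, 3) (3,)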
@Ed__Johns
Edward Johns
4 months
The figure below shows our network architecture, which jointly expresses the context (demonstrations, as sequences of observations and actions), the current observation, and the future actions. Observations are point clouds, and actions are relative gripper poses. During…
[Figure: network architecture]
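To pin down the inputs and outputs described above, here is a hypothetical sketch of the data structures involved. The names, shapes, and placeholder prediction function are assumptions rather than the paper's code, which uses a graph-diffusion network.

from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Step:
    point_cloud: np.ndarray        # (N, 3) observation at one timestep
    rel_gripper_pose: np.ndarray   # (4, 4) action: relative gripper pose

@dataclass
class PolicyInput:
    context: List[List[Step]]      # demonstrations: each a sequence of (obs, action) steps
    current_obs: np.ndarray        # (N, 3) point cloud for the live scene

def predict_future_actions(inp: PolicyInput, horizon: int = 8) -> np.ndarray:
    # Placeholder: returns identity poses; the real model denoises the future
    # actions jointly with the context and current observation.
    return np.tile(np.eye(4), (horizon, 1, 1))

inp = PolicyInput(context=[[Step(np.zeros((256, 3)), np.eye(4))]],
                  current_obs=np.zeros((256, 3)))
print(predict_future_actions(inp).shape)   # (8, 4, 4)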
@Ed__Johns
Edward Johns
4 months
Instant Policy can learn some tasks with one demonstration. But if a task requires more than one demonstration, it can condition on however many demonstrations are required. In this video, we show how the robot continues to improve and generalise its learned behaviour as further demonstrations are provided.
@Ed__Johns
Edward Johns
4 months
In-context learning is where a trained model accepts examples of a new task (the "context") at its input, and can then make predictions for that same task given a new instance of it, without any further training or weight updates. Achieving this in robotics is very exciting:
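A minimal, self-contained sketch of that interface (not the Instant Policy architecture): the demonstration (observation, action) pairs enter the network as input tokens alongside a query observation, and the frozen model predicts an action with no weight updates. All dimensions and module choices here are illustrative assumptions.

import torch
import torch.nn as nn

class InContextPolicy(nn.Module):
    # Toy stand-in for an in-context policy: demonstrations are part of the
    # input, and predictions for a new observation need no further training.
    def __init__(self, obs_dim=8, act_dim=3, d_model=64):
        super().__init__()
        self.embed_pair = nn.Linear(obs_dim + act_dim, d_model)   # context tokens
        self.embed_query = nn.Linear(obs_dim, d_model)            # query token
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, act_dim)

    def forward(self, context_obs, context_act, query_obs):
        ctx = self.embed_pair(torch.cat([context_obs, context_act], dim=-1))
        qry = self.embed_query(query_obs).unsqueeze(1)
        tokens = torch.cat([ctx, qry], dim=1)     # demos + query in one sequence
        return self.head(self.encoder(tokens)[:, -1])  # action for the query

policy = InContextPolicy().eval()
with torch.no_grad():                              # no training / weight updates
    demo_obs = torch.randn(1, 5, 8)                # one demonstration, 5 steps
    demo_act = torch.randn(1, 5, 3)
    new_obs = torch.randn(1, 8)
    action = policy(demo_obs, demo_act, new_obs)
print(action.shape)  # torch.Size([1, 3])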
@Ed__Johns
Edward Johns
4 months
Today, we are presenting "Instant Policy" in our #ICLR oral! Below is a single uncut video: after just one demonstration for a task, the robot has learned that task instantly. So… we've achieved in-context learning in robotics! See: (1/7) 🧵