Karthik Mahadevan
@karthikm0
Followers
572
Following
890
Media
8
Statuses
123
Human-Robot Interaction Researcher | PhD candidate in the @dgpToronto lab at @UofT.
Toronto, Canada
Joined April 2018
I am on the job market, seeking tenure-track or industry research positions starting in 2025. My research combines human-computer interaction and robotics—please visit https://t.co/POmSPUd2H9 for updated publications and CV. Feel free to reach out if interested. RT appreciated!
1
39
96
Congrats to @karthikm0 (and amazing co-authors!) on the Best Paper Award at #HRI2025 for "ImageInThat: Manipulating Images to Convey User Instructions to Robots." The paper proposes direct manipulation of images as a new paradigm to instruct robots. 🔗 https://t.co/VUc8UvKV3x
0
7
43
💼 I'm on the job market for tenure-track faculty positions or industry research scientist roles, focusing on HCI, Human-AI interaction, Creativity Support, and Educational Technology. Please reach out if hiring or aware of relevant opportunities! RT appreciated! 🧵 (1/n)
3
62
161
Collect robot demos from anywhere through AR! Excited to introduce 🎯DART, a Dexterous AR Teleoperation interface that lets anyone teleoperate robots in cloud-hosted simulation. With DART, anyone can collect robot demos anywhere, anytime, for multiple robots and tasks in one…
4
47
228
I'm on the job market 👀 seeking TT faculty and post-doc positions starting Fall 2025 to continue my research in family-centered design of socially interactive systems 👀 I wrote a "blog" announcing this & my reflections on our latest RO-MAN'24 publication: https://t.co/0eYh3l8Zvs
5
21
100
What’s the future of #HCI + #AI innovation? I believe it’s bright! Had some fun writing this article on drawing parallels with the world of mixed martial arts 💪👊 https://t.co/yFgUGiqyJe
0
9
29
Happy to announce 2 #CHI2024 papers from @ExiiUW, @uwhci, & @UWCheritonCS! First, @nonsequitoria & I show that constraining how many words can be highlighted in a document reader can improve reading comprehension. ⭐️📃 Details here:
nikhitajoshi.ca
Research from University of Waterloo HCI: Constrained Highlighting in a Document Reader can Improve Reading Comprehension
1
9
57
📢📢📢 A pulse of light takes ~3ns to pass through a Coke bottle, roughly 100 million times less time than it takes you to blink. Our work lets you fly around this 3D scene at the speed of light, revealing propagating wavefronts of light that are invisible to the naked eye, from any viewpoint!
3
40
258
✨ Introducing Keypoint Action Tokens. 🤖 We translate visual observations and robot actions into a "language" that off-the-shelf LLMs can ingest and output. This transforms LLMs into *in-context, low-level imitation learning machines*. 🚀 Let me explain. 👇🧵
4
28
161
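A minimal sketch of the token idea in the tweet above: serialize keypoints and actions as text so an off-the-shelf LLM can imitate in context from a few demonstrations. The token formats and function names below are illustrative assumptions, not the paper's implementation.

```python
# Sketch: render keypoints and actions as text "tokens" and build a
# few-shot prompt for in-context imitation. Formats are assumptions.

def keypoints_to_tokens(keypoints):
    """Render 2D keypoints [(x, y), ...] as a compact text token string."""
    return " ".join(f"<kp {x:.0f} {y:.0f}>" for x, y in keypoints)

def actions_to_tokens(actions):
    """Render end-effector actions [(dx, dy, dz), ...] as text tokens."""
    return " ".join(f"<act {dx:.2f} {dy:.2f} {dz:.2f}>" for dx, dy, dz in actions)

def build_prompt(demos, new_keypoints):
    """Few-shot prompt: each demo is an observation -> action pair in token form."""
    lines = ["Map observations to actions."]
    for kps, acts in demos:
        lines.append(f"OBS: {keypoints_to_tokens(kps)}")
        lines.append(f"ACT: {actions_to_tokens(acts)}")
    lines.append(f"OBS: {keypoints_to_tokens(new_keypoints)}")
    lines.append("ACT:")  # the LLM would complete this line with action tokens
    return "\n".join(lines)

demos = [([(120, 80), (200, 150)], [(0.01, 0.00, -0.02)])]
print(build_prompt(demos, [(130, 85), (195, 160)]))
```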
Through a weeklong, immersive program at @UofTCompSci’s Dynamic Graphics Project lab, high school students got to know more about graduate school and what it’s like to be a computer science researcher. https://t.co/ywwssZtEd4
1
5
17
How do we get robots to efficiently explore diverse scenes and answer realistic questions? e.g., is the dishwasher in the kitchen open❓ 👇Explore until Confident — know where to explore (with VLMs) and when to stop exploring (with guarantees) https://t.co/hvPEtr3Evu
1
11
71
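As an illustrative sketch of that exploration loop, assuming a hypothetical VLM scoring function (not the paper's API, and without the calibrated statistical guarantees the actual work provides):

```python
# Sketch: explore candidate views, keep the best VLM answer so far, and
# stop once its confidence clears a threshold. `vlm_answer_confidence`
# is a hypothetical stand-in for a real VLM query.

import random

def vlm_answer_confidence(view):
    """Hypothetical: return (answer, confidence in [0, 1]) for a camera view."""
    return "dishwasher is open", random.uniform(0.0, 1.0)

def explore_until_confident(frontier_views, threshold=0.9, max_steps=20):
    best_answer, best_conf = None, 0.0
    for view in frontier_views[:max_steps]:
        answer, conf = vlm_answer_confidence(view)
        if conf > best_conf:
            best_answer, best_conf = answer, conf
        if best_conf >= threshold:  # stop exploring once confident enough
            break
    return best_answer, best_conf

views = [f"view_{i}" for i in range(30)]
print(explore_until_confident(views))
```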
Our @HCI_Bath group at @UniofBath (ranked 🇬🇧 top 5) is searching for a rockstar Lecturer/Assistant Professor with interests in AR/VR, fabrication, interaction techniques, wearables, BCI, and AI/ML 🎉 ⏰ Deadline: April 5th 💼 Apply here: https://t.co/pVfbRE7IOP
#HCI #CHI2024 #UIST2024
HCI hiring alert! Come and work with us in @HCI_Bath at @UniofBath - we have a lecturer (Assistant Professor) position open to align with our interests in AR/VR, fabrication, wearables etc. Deadline is April 5. Please share with any great candidates!
1
7
25
Fantastic talk by Carolina Parada @carolina_parada from Google Deepmind on using LLMs to control and teach robots. LLMs seem to be the hammer we’ve been looking for in personal robotics. @HRI_Conference
0
3
25
(9/9) This work was done during an internship at Google Deepmind Robotics, where I was supervised by many wonderful people, including Jonathan Chien, Noah Brown, Zhuo Xu, @carolina_parada, @xf1280, @andyzeng_, @leilatakayama, and @DorsaSadigh.
0
0
5
(8/9) Please see our HRI2024 paper: https://t.co/1jSV6z6j38 and website: https://t.co/PQDuwgnhl3 for more details.
1
0
3
(7/9) GenEM can also generate more complex behaviors by composing previously learned behaviors, adapt to user feedback, and generate behaviors for robots with different affordances.
1
0
5
(6/9) Our results suggest that participants did not perceive the GenEM behaviors as significantly worse than the professional animator's (and in some cases rated them better).
1
0
4
(5/9) We ran two studies where participants watched videos of robot behaviors ranging from simple (e.g., nodding) to complex (e.g., observing a human demonstration). We compared GenEM (w/o user feedback) and GenEM++ (w/ user feedback) to behaviors made by a professional animator.
1
0
4
(4/9) However, since the generated behavior may not always align with the user's intention, users can provide feedback to iteratively refine it.
1
0
3
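A minimal sketch of the refinement loop described in (4/9), assuming a hypothetical LLM call (not GenEM's actual prompts or code): generate a behavior from the instruction, then fold each round of user feedback back into the prompt and regenerate.

```python
# Sketch: iterative behavior refinement from user feedback.
# `call_llm` is a hypothetical stub, not GenEM's implementation.

def call_llm(prompt):
    """Hypothetical LLM call; a real system would query a language model."""
    return f"behavior generated for: {prompt[:60]}..."

def generate_behavior(instruction, feedback_history):
    """Build a prompt from the instruction plus all feedback so far."""
    prompt = f"Instruction: {instruction}\n"
    for fb in feedback_history:
        prompt += f"User feedback: {fb}\n"
    prompt += "Produce an expressive robot behavior."
    return call_llm(prompt)

def refine_with_feedback(instruction, feedback_rounds):
    """Regenerate the behavior after each round of user feedback."""
    feedback_history = []
    behavior = generate_behavior(instruction, feedback_history)
    for feedback in feedback_rounds:
        feedback_history.append(feedback)
        behavior = generate_behavior(instruction, feedback_history)
    return behavior

print(refine_with_feedback("acknowledge the person entering the room",
                           ["nod more slowly", "add a small head turn"]))
```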