Eric Rosen

@_ericrosen

Followers
1K
Following
1K
Media
216
Statuses
1K

Robotics Research Scientist @ Robotics and AI Institute (RAI) | Making robots smarter for everyone | CS PhD from @BrownUniversity 🤖

Boston, MA
Joined July 2019
@_ericrosen
Eric Rosen
3 years
CoRL was really fun this year! My notes for @corl_conf are available here:
Tweet media one
4
68
341
@_ericrosen
Eric Rosen
18 days
RT @GeorgiaChal: 🚨 We're hiring a Postdoc in Robot Learning @ PEARL Lab, TU Darmstadt 🚨. Join our ERC-funded project SIREN (Structured Inte….
0
12
0
@_ericrosen
Eric Rosen
1 month
🚨 Fall 2025 Internships 🚨 Do you want to research:
- Fault-tolerant robot manipulation
- Long-horizon reasoning
- Skill composition and task understanding
Then consider interning at RAI for Fall 2025! Please DM me for more details / share for others to see!
Tweet media one
4
14
83
@_ericrosen
Eric Rosen
2 months
Typo correction: Should be “Boolean operators AND and OR”, not “Boolean operators AND and NOT” 😆.
0
0
0
@_ericrosen
Eric Rosen
2 months
De Morgan’s laws let you turn the Boolean operators AND and OR into each other via NOT. Did you know that they also apply to NOR and NAND? Can you prove it?
Tweet media one
1
0
1
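The NAND/NOR version of the laws above can be checked exhaustively with a truth table; a minimal sketch (not from the thread):

```python
from itertools import product

def nand(a, b):
    # NAND: true unless both inputs are true
    return not (a and b)

def nor(a, b):
    # NOR: true only when both inputs are false
    return not (a or b)

# De Morgan for NAND/NOR:
#   NOT (a NAND b) == (NOT a) NOR  (NOT b)
#   NOT (a NOR  b) == (NOT a) NAND (NOT b)
for a, b in product([False, True], repeat=2):
    assert (not nand(a, b)) == nor(not a, not b)
    assert (not nor(a, b)) == nand(not a, not b)
print("De Morgan holds for NAND/NOR on all inputs")
```

Four input combinations cover all cases, so the loop is a complete proof for two variables.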
@_ericrosen
Eric Rosen
8 months
RT @NaveenManwani17: 🚨Paper Alert 🚨. ➡️Paper Title: Verifiably Following Complex Robot Instructions with Foundation Models. 🌟Few pointers f….
0
3
0
@_ericrosen
Eric Rosen
8 months
Excited to see more works combining foundation models with task and motion planning for robots! Great job @Benedict_Q!
@Benedict_Q
Benedict Quartey
9 months
🚨 What is the best way to use foundation models in robotics? Our new work shows that combining LLMs & VLMs with ideas from formal methods leads to robots that can verifiably follow complex, open-ended instructions in the real world. 🌍 We evaluate on over 150 tasks 🚀 🧵 (1/4)
1
1
16
@_ericrosen
Eric Rosen
9 months
RT @lucacarlone1: If you are applying for #gradschool and have last-minute questions about your application, I'm willing to offer office ho….
0
119
0
@_ericrosen
Eric Rosen
10 months
🤖 🧠 If you’re interested in learning abstractions and planning, definitely check out the #LEAP2024 workshop @corl_conf and consider submitting! Looking forward to #CoRL2024!
0
1
16
@_ericrosen
Eric Rosen
10 months
You can check out the GitHub repo at the link below. 🚨 I made this repo at the end of my PhD and it was mostly used for learning, so if you're having issues with it / annoyed by the code jankiness, please feel free to reach out and ask questions!
Tweet media one
0
0
0
@_ericrosen
Eric Rosen
10 months
If you don't have a Spot, that's fine! The codebase lets you build NLMap from an offline dataset, and includes some example RGBD data collected from Spot already for you! Below are some example RGBD images + an example multi-view pointcloud generated from the dataset. 4/🧵
Tweet media one
Tweet media two
Tweet media three
1
0
1
@_ericrosen
Eric Rosen
10 months
`nlmap_spot` provides utilities, such as:
1. Collecting RGBD and pose data from Spot
2. Generating multiview color pointclouds
3. Generating a NLMap from offline data
4. Visualizing 2D/3D detections from language queries
5. Controlling Spot to do object picking
3/🧵
Tweet media one
1
0
0
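The core idea behind item 4, querying a scene with language, is to score stored image-region embeddings against a text embedding by cosine similarity. A minimal sketch of that idea, with placeholder random vectors standing in for the CLIP/ViLD features the real library uses (none of these names are from the repo's API):

```python
import numpy as np

def cosine_sim(query, regions):
    # cosine similarity between one query vector and each row of a matrix
    query = query / np.linalg.norm(query)
    regions = regions / np.linalg.norm(regions, axis=1, keepdims=True)
    return regions @ query

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in NLMap these would be VLM embeddings of
# detected scene regions and of the language query (e.g. "coffee maker").
region_embeddings = rng.standard_normal((5, 512))  # 5 detected regions
query_embedding = rng.standard_normal(512)

scores = cosine_sim(query_embedding, region_embeddings)
best_region = int(np.argmax(scores))  # region most similar to the query
```

The same scoring works for any number of regions, which is what makes the representation open-vocabulary: new queries need no retraining, just a new text embedding.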
@_ericrosen
Eric Rosen
10 months
If you're not familiar with NLMap, check out the thread below, but to take a quote from the paper: "NLMap is an open-vocabulary, queryable semantic representation based on ViLD and CLIP". Detections from VLMs are back-projected into a 3D scene. 2/🧵
Tweet media one
Tweet media two
@BoyuanChen0
Boyuan Chen
3 years
How can we ground large language models (LLMs) with the surrounding scene for real-world robotic planning? Our work NLMap-Saycan allows LLMs to see and query objects in the scene, enabling real robot operations unachievable by previous methods. Link: 1/6
1
0
0
@_ericrosen
Eric Rosen
10 months
🤖 A year ago, I wrote `nlmap_spot`, a library for creating a natural-language-queryable scene representation using VLMs (NLMap), and utils for object picking with Spot. Below are 3D detections of "book shelf" and 2D detections of "coffee maker". More info in the GitHub repo below 👇 1/🧵
Tweet media one
Tweet media two
1
2
7
@_ericrosen
Eric Rosen
11 months
I love robot learning approaches that embrace modularity; skill libraries make me feel 🥰. Awesome job!
Tweet media one
@arankomatsuzaki
Aran Komatsuzaki
11 months
🚀 Google unveils "Achieving Human Level Competitive Robot Table Tennis"! 🤖 The robot won 100% vs. beginners and 55% vs. intermediate players, showcasing solid amateur human-level performance. Check out the details:
Tweet media one
0
0
10
@_ericrosen
Eric Rosen
11 months
Survey paper on grounding language for robots. It looks at the spectrum from grounding language to discrete symbols, to continuous embeddings, and everything in between!
@jasonxyliu
Jason Liu @RSS
11 months
How do robots understand natural language?. #IJCAI2024 survey paper on robotic language grounding. We situated papers into a spectrum w/ two poles, grounding language to symbols and high-dimensional embeddings. We discussed tradeoffs, open problems & exciting future directions!
Tweet media one
0
0
9
@_ericrosen
Eric Rosen
1 year
RT @thomas_weng: The AI Institute is hiring! Check out the careers page and feel free to reach out to me :).
0
5
0
@_ericrosen
Eric Rosen
1 year
I’ve recently seen many papers on VLMs for robotics mix up object position with object pose. Pose is position + orientation. If you use a VLM to get a bounding box of an object, you may have the object position, but you don’t necessarily have its orientation, so it’s not pose.
0
3
17
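The position-vs-pose distinction above can be made concrete: a position is three numbers, while a pose adds an orientation (e.g. a unit quaternion). A minimal sketch with illustrative types (not from any particular robotics library):

```python
from dataclasses import dataclass

@dataclass
class Position:
    # 3D location only
    x: float
    y: float
    z: float

@dataclass
class Pose:
    # position + orientation; orientation as a unit quaternion (w, x, y, z)
    position: Position
    orientation: tuple

# A bounding-box center from a VLM gives you (at best) a Position...
bbox_center = Position(0.4, -0.1, 0.8)

# ...but building a Pose requires an orientation the detector never
# provided; here we just assume the identity rotation.
grasp_pose = Pose(bbox_center, (1.0, 0.0, 0.0, 0.0))
```

The type system makes the gap explicit: nothing in a bounding box tells you which way the object is facing, so any orientation you attach is an assumption.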
@_ericrosen
Eric Rosen
1 year
For conferences, I often hear “prioritize networking, you can read papers / watch talks whenever”. Networking is def important, but I think paying deep attention to talks / taking notes is underrated. Helps me remember the works longer + makes connecting with the authors easier!.
0
0
9
@_ericrosen
Eric Rosen
1 year
Sounds like a great PhD opportunity to me if you’re interested in spatial AI!.
@AjdDavison
Andrew Davison
1 year
This is an opportunity to do a PhD with me at Imperial College, fully funded and starting in October this year. Apply via the link below by 12th June next week. On-sensor vision will be very important to the future of low power vision in robotics + AR/VR.
0
0
0
@_ericrosen
Eric Rosen
1 year
How to represent task specifications is an important problem in robotics, and it is especially interesting with the increased popularity of LLMs! Looking forward to this RSS workshop! 😊
@jasonxyliu
Jason Liu @RSS
1 year
Submit to our #RSS2024 workshop on “Robotic Tasks and How to Specify Them? Task Specification for General-Purpose Intelligent Robots” by June 12th. Join our discussion on what constitutes various task specifications for robots, in what scenarios they are most effective and more!
Tweet media one
0
0
6