Jenn Grannen

@jenngrannen

452 Followers · 144 Following · 16 Media · 42 Statuses

@StanfordAILab PhD, previously @ToyotaResearch, @berkeley_ai. I can teach your robot new tricks.

Palo Alto, CA
Joined July 2022
Jenn Grannen @jenngrannen · 1 month
Such a powerful use of robotics. Love seeing this new paper take real steps toward making assistive feeding robots a reality.
Quoting Rajat Kumar Jenamani @rkjenamani · 1 month
Most assistive robots live in labs. We want to change that. FEAST enables care recipients to personalize mealtime assistance in-the-wild, with minimal researcher intervention across diverse in-home scenarios. 🏆 Outstanding Paper & Systems Paper Finalist @RoboticsSciSys. 🧵 1/8
Jenn Grannen @jenngrannen · 2 months
RT @LerrelPinto: We have developed a new tactile sensor, called e-Flesh, with a simple working principle: measure deformations in 3D printa…
Jenn Grannen @jenngrannen · 2 months
Worked with Sidd for years and can honestly say he was born to be a mentor. Any student would be incredibly lucky to have his guidance through a PhD. They’re in amazing hands 🥹🎓.
Quoting Siddharth Karamcheti @siddkaramcheti · 2 months
Thrilled to share that I'll be starting as an Assistant Professor at Georgia Tech (@ICatGT / @GTrobotics / @mlatgt) in Fall 2026. My lab will tackle problems in robot learning, multimodal ML, and interaction. I'm recruiting PhD students this next cycle – please apply/reach out!
Jenn Grannen @jenngrannen · 2 months
ProVox builds on our Vocal Sandbox framework (Oral at @corl_conf '24), but emphasizes personalization: adapting to each user's unique goals and preferences. Check out the original tweet for the first chapter of the Vocal Sandbox story! 🧵 6/7
Quoting Jenn Grannen @jenngrannen · 9 months
Introducing 🆚 Vocal Sandbox: a framework for building adaptable robot collaborators that learn new 🧠 high-level behaviors and 🦾 low-level skills from user feedback in real time. ✅ Appearing today at @corl_conf as an Oral Presentation (Session 3, 11/6 5pm). 🧵 (1/6)
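To make the idea concrete, here is a minimal, hypothetical sketch of a Vocal-Sandbox-style collaborator: new high-level behaviors are taught at interaction time as compositions of existing low-level skills. All class, method, and skill names below are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch: a robot collaborator that can be taught new
# high-level behaviors as compositions of known low-level skills.
from typing import Callable, Dict, List, Tuple


class TeachableRobot:
    def __init__(self) -> None:
        # Low-level skills: primitive actions the robot already knows.
        self.skills: Dict[str, Callable[..., None]] = {}
        # High-level behaviors: named sequences of (skill, args) steps,
        # learned from user feedback during the interaction itself.
        self.behaviors: Dict[str, List[Tuple[str, tuple]]] = {}

    def add_skill(self, name: str, fn: Callable[..., None]) -> None:
        self.skills[name] = fn

    def teach_behavior(self, name: str, steps: List[Tuple[str, tuple]]) -> None:
        """Learn a new high-level behavior as a list of (skill, args) steps."""
        for skill, _ in steps:
            if skill not in self.skills:
                raise ValueError(f"Unknown low-level skill: {skill}")
        self.behaviors[name] = steps

    def run(self, name: str) -> None:
        # Execute the taught behavior by replaying its skill sequence.
        for skill, args in self.behaviors[name]:
            self.skills[skill](*args)


robot = TeachableRobot()
robot.add_skill("pick", lambda obj: print(f"picking {obj}"))
robot.add_skill("place", lambda obj, loc: print(f"placing {obj} at {loc}"))

# A user verbally teaches "pack the apple" as a composition of known skills.
robot.teach_behavior("pack the apple",
                     [("pick", ("apple",)), ("place", ("apple", "lunch bag"))])
robot.run("pack the apple")
```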
Jenn Grannen @jenngrannen · 2 months
In a user study packing household lunch bags, ProVox led to:
🍱 38.7% faster task completion
🧍‍♀️ 31.9% less user burden
👍 27.3% higher ease of use
🧵 5/7
Jenn Grannen @jenngrannen · 2 months
Armed with personalized context, ProVox's LLM-powered proactive planner suggests helpful next steps such as "Should I pack the hand sanitizer next?" No more micromanaging! It's not just reactive; it's your collaborative partner. 🧵 4/7
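For intuition, a minimal sketch of what an LLM-backed proactive planner could look like: given the user's stated goal and the current task state, ask a language model to propose the single most helpful next step. The prompt format, function name, and `llm` callable are assumptions for illustration, not ProVox's actual code.

```python
# Illustrative sketch: propose the next helpful step instead of waiting
# for a command. Any text-completion callable can stand in for `llm`.
from typing import Callable, List


def suggest_next_step(goal: str, packed: List[str], remaining: List[str],
                      llm: Callable[[str], str]) -> str:
    prompt = (
        f"You are a proactive robot assistant. The user's goal: {goal}.\n"
        f"Items already packed: {', '.join(packed) or 'none'}.\n"
        f"Items still on the table: {', '.join(remaining)}.\n"
        "Suggest the single most helpful next step as a short question."
    )
    return llm(prompt)


def fake_llm(prompt: str) -> str:
    # Stub standing in for a real language-model call.
    return "Should I pack the hand sanitizer next?"


print(suggest_next_step("pack a lunch bag", ["sandwich"],
                        ["hand sanitizer", "apple"], fake_llm))
```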
Jenn Grannen @jenngrannen · 2 months
At the heart of ProVox is a meta-prompting protocol: a quick, natural way for users to tell the robot their goals, preferred behaviors, and vocabulary before any physical action starts. This early "handshake" sets the stage for seamless collaboration. 🧵 3/7
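As a rough illustration of such a handshake, the sketch below folds a user's goals, preferred behaviors, and vocabulary into a system prompt before execution begins. Field names and prompt layout are assumptions, not ProVox's implementation.

```python
# Illustrative sketch: a meta-prompting "handshake" that conditions the
# planner on user-specific context for the rest of the session.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class UserProfile:
    goal: str                                                # task objective
    preferences: List[str] = field(default_factory=list)     # preferred behaviors
    vocabulary: Dict[str, str] = field(default_factory=dict) # user term -> object


def build_system_prompt(profile: UserProfile) -> str:
    prefs = "\n".join(f"- {p}" for p in profile.preferences)
    vocab = "\n".join(f'- "{term}" means {obj}'
                      for term, obj in profile.vocabulary.items())
    return (
        f"Goal: {profile.goal}\n"
        f"Preferred behaviors:\n{prefs}\n"
        f"User vocabulary:\n{vocab}\n"
        "Act proactively, but respect these preferences throughout the task."
    )


profile = UserProfile(
    goal="pack a lunch bag",
    preferences=["grasp soft items gently", "ask before handling liquids"],
    vocabulary={"the blue thing": "the blue water bottle"},
)
print(build_system_prompt(profile))
```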
Jenn Grannen @jenngrannen · 2 months
Every user is different. Some want full autonomy. Others want granular control. ProVox adapts to both, personalizing behavior down to how you want the robot to grasp and move objects. Collaboration shouldn't be one-size-fits-all. 🧵 2/7
Jenn Grannen @jenngrannen · 2 months
Meet ProVox: a proactive robot teammate that gets you 🤖❤️‍🔥 ProVox models your goals and expectations before a task starts, enabling personalized, proactive help for smoother, more natural collaboration. All powered by LLM commonsense. Recently accepted at @ieeeras R-AL! 🧵 1/7
Jenn Grannen @jenngrannen · 2 months
Saying hi 👋 to @exploratorium's new feathery friend. Congrats @michelllepan and @CatieCuan!!
Jenn Grannen @jenngrannen · 2 months
RT @YueminMao: 🤖📦 Want to move many items FAST with your robot? Use a tray. But at high speeds, objects may fall off 💥. Introducing our ne…
Jenn Grannen @jenngrannen · 2 months
RT @priyasun_: How can we move beyond static-arm lab setups and learn robot policies in our messy homes? We introduce HoMeR, an imitation l…
Jenn Grannen @jenngrannen · 5 months
RT @ShuangL13799063: Video generation is powerful but too slow for real-world robotic tasks. How can we enable both video and action gener…
Jenn Grannen @jenngrannen · 7 months
So excited to try this out!!
Quoting Kevin Zakka @kevin_zakka · 7 months
The ultimate test of any physics simulator is its ability to deliver real-world results. With MuJoCo Playground, we've combined the very best: MuJoCo's rich and thriving ecosystem, massively parallel GPU-accelerated simulation, and real-world results across a diverse range of…
Jenn Grannen @jenngrannen · 8 months
RT @suneel_belkhale: Want a smaller VLA that performs better? We just released some core improvements to OpenVLA, like: MiniVLA: 7x small…
github.com · OpenVLA: An open-source vision-language-action model for robotic manipulation (Stanford-ILIAD/openvla-mini)
Jenn Grannen @jenngrannen · 8 months
Excited to share @StanfordHAI's article on our Vocal Sandbox work! Looking forward to pushing Vocal Sandbox out into real-world settings (bakery 🥐 / mall 👕 / library 📚) next!
Quoting Stanford HAI @StanfordHAI · 8 months
A new robot system called Vocal Sandbox is the first of many systems that promise to help integrate robots into our daily lives. Learn about the prototype that @Stanford researchers presented at the 8th annual Conference on Robot Learning.
Jenn Grannen @jenngrannen · 9 months
Today from 3-4pm, I'll be presenting our work again at the Language and Robot Learning Workshop. Come check out my poster and say hi! :)
Quoting Jenn Grannen @jenngrannen · 9 months
Introducing 🆚 Vocal Sandbox: a framework for building adaptable robot collaborators that learn new 🧠 high-level behaviors and 🦾 low-level skills from user feedback in real time. ✅ Appearing today at @corl_conf as an Oral Presentation (Session 3, 11/6 5pm). 🧵 (1/6)
Jenn Grannen @jenngrannen · 9 months
This is joint work with @siddkaramcheti (on the faculty job market!) and in collaboration with @suvir_m, @percyliang, and @DorsaSadigh. For more details, check out our paper and website! Paper: Website: (6/6)
Jenn Grannen @jenngrannen · 9 months
What can we do with hours of interaction? LEGO stop-motion! The robot operates the camera + the user focuses on creativity: directing, writing a script, arranging LEGO. Over a 2hr interaction, we shoot a 52s film, with 43% of frames shot autonomously from 40 user commands. (5/6)