Kyle🤖🚀🦭

@KyleMorgenstein

Followers 15K · Following 196K · Media 2K · Statuses 35K

Full of childlike wonder. Teaching robots manners. UT Austin PhD candidate. 🆕 RL Intern @ Apptronik. Past: Boston Dynamics AI Institute, NASA JPL, MIT ‘20.

he/him
Joined September 2018
@KyleMorgenstein
Kyle🤖🚀🦭
1 year
when you argue with me about control theory this is who you’re arguing with
5
5
111
@KyleMorgenstein
Kyle🤖🚀🦭
2 days
RT @EugeneVinitsky: Personally I would have trouble getting up in the morning if my job was "make sure the bot can be antisemitic" but that…
0
1
0
@KyleMorgenstein
Kyle🤖🚀🦭
2 days
RT @EugeneVinitsky: If you work at xai, you can just quit. You can get a job almost anywhere. What on earth are you doing.
0
40
0
@KyleMorgenstein
Kyle🤖🚀🦭
2 days
RT @tom_jiahao: Introducing Muscle v0 -- infinite degrees of freedom, from @DaxoRobotics. A different mountain to climb - with a far more b…
0
72
0
@KyleMorgenstein
Kyle🤖🚀🦭
3 days
in middle school I helped raise $20,000 for Kony 2012.
@frozenaesthetic
Moon Dragon
3 months
Share a piece of lore about yourself
0
0
5
@KyleMorgenstein
Kyle🤖🚀🦭
3 days
RT @EugeneVinitsky: Still in stealth but our team has grown to 20 and we're still hiring. If you're interested in joining the research fron…
0
15
0
@KyleMorgenstein
Kyle🤖🚀🦭
3 days
RT @kiwi_sherbet: Many roboticists focus on designing human-like hands, but we took a closer look at the fingers. Human fingers are soft, r…
0
12
0
@KyleMorgenstein
Kyle🤖🚀🦭
3 days
deleted almost 7000 lines of code from my research codebase. good weekend.
1
0
27
@KyleMorgenstein
Kyle🤖🚀🦭
3 days
sorry to my advisor my next paper will be delayed by 1 hour and 36 minutes.
@DiscussingFilm
DiscussingFilm
6 days
‘POKEMON: THE FIRST MOVIE’ is now free to watch on YouTube. 🔗:
2
1
64
@KyleMorgenstein
Kyle🤖🚀🦭
5 days
unless your task is finite horizon! most learning libraries don’t differentiate, and most end users would never think about it unless they get really in the weeds with the math.
@KyleMorgenstein
Kyle🤖🚀🦭
12 days
@jsuarez5341 it’s ultimately a question of how you define your state-value function for the critic; most RL texts present the definitions for both finite and infinite horizons but most code bases don’t differentiate based on task (but they should, it absolutely makes a difference).
0
0
9
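A quick numerical illustration of why the finite/infinite-horizon distinction matters (not from the thread; gamma, horizon, and reward are made-up placeholders): for a task paying a constant reward, the finite-horizon value depends on time-to-go, while the infinite-horizon value is a single constant, so a critic that ignores time cannot fit both.

```python
import numpy as np

gamma, T, r = 0.99, 100, 1.0

# Finite-horizon value of a constant-reward task: a function of time-to-go T - t.
t = np.arange(T)
v_finite = (1 - gamma ** (T - t)) / (1 - gamma) * r

# Infinite-horizon (stationary) value: one number for every state.
v_infinite = r / (1 - gamma)

# Near the start of the episode the finite-horizon value approaches the
# infinite-horizon one, but near the time limit it collapses toward r.
print(v_finite[0], v_finite[-1], v_infinite)
```

This is why a finite-horizon critic should also condition on the timestep (or time-to-go), while an infinite-horizon critic can stay stationary.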
@KyleMorgenstein
Kyle🤖🚀🦭
5 days
here’s a great example: in many PPO implementations for robotics we use infinite-horizon value bootstrapping because we derive the algo with a finite-horizon critic but then use it for infinite horizon tasks like velocity tracking. this isn’t standard but helps, UNLESS.
@jsuarez5341
Joseph Suarez (e/🐡)
12 days
@KyleMorgenstein State based rewards that peak in the target state. This isn't how reward works in the rest of RL and it looks like robotics hacks around this with a modification to gae, which also doesn't work in the rest of RL.
3
1
11
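A minimal sketch of the bootstrap-on-timeout trick Kyle is describing, under some assumptions not stated in the thread: the rollout stores separate `terminated`/`truncated` flags, and `next_values[t]` holds the critic's value of the state reached at step t (on a truncation, the final observation before reset, not the reset state). Function and variable names are hypothetical, not from any particular codebase.

```python
import numpy as np

def gae_with_timeout_bootstrap(rewards, values, next_values,
                               terminated, truncated,
                               gamma=0.99, lam=0.95):
    """GAE where time-limit resets bootstrap from the critic.

    terminated[t]: true task termination -> future value is 0.
    truncated[t]:  time-limit reset -> the task would have continued,
                   so keep gamma * V(s_{t+1}) in the TD error.
    Both flags cut the lambda trace, since the trajectory ends either way.
    """
    T = len(rewards)
    adv = np.zeros(T)
    acc = 0.0
    for t in reversed(range(T)):
        not_terminal = 0.0 if terminated[t] else 1.0
        delta = rewards[t] + gamma * next_values[t] * not_terminal - values[t]
        episode_continues = 0.0 if (terminated[t] or truncated[t]) else 1.0
        acc = delta + gamma * lam * episode_continues * acc
        adv[t] = acc
    return adv
```

Treating a time-limit reset as a true termination would instead zero out the bootstrap term, penalizing the policy for surviving until the clock runs out, which is exactly the wrong signal on infinite-horizon tasks like velocity tracking.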
@KyleMorgenstein
Kyle🤖🚀🦭
5 days
when training RL policies for robotics, what are some common pitfalls people hit? what feels mysterious or hard to intuit? or what do you intuit but not have a better explanation for? starting to outline a blog more explicitly.
@KyleMorgenstein
Kyle🤖🚀🦭
15 days
I wish there was a good venue to write/present about RL “tricks”. how PD gains affect action scale, how to tune reward functions, actor STD, etc. there’s good intuition for all of it, grounded both in learning theory and robot dynamics, but I don’t often see good explanations.
7
6
175
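One concrete instance of the "PD gains affect action scale" point, as a sketch (all gains, scales, and names are illustrative placeholders, not from any particular codebase): with position-target actions, the torque a unit policy action produces is roughly kp * action_scale, so the PD stiffness and the action scale trade off directly.

```python
import numpy as np

# A common locomotion setup: the policy outputs joint-position offsets
# around a default pose, and a PD loop converts them to torques.
kp, kd = 20.0, 0.5      # PD gains (illustrative)
action_scale = 0.25     # radians per unit of policy output (illustrative)

def pd_torque(action, q, q_dot, q_default):
    """tau = kp * (q_target - q) - kd * q_dot, with q_target = default + scale * action."""
    q_target = q_default + action_scale * action
    return kp * (q_target - q) - kd * q_dot

# At the default pose and zero velocity, a unit action yields
# kp * action_scale = 5.0 N*m: doubling kp is (at this operating point)
# equivalent to doubling the action scale, plus stiffer disturbance rejection.
tau = pd_torque(np.array([1.0]), np.zeros(1), np.zeros(1), np.zeros(1))
print(tau)
```

This is one reason retuning kp usually forces retuning the policy's action std and torque penalties as well: the effective output distribution of the whole policy-plus-PD stack has changed.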
@KyleMorgenstein
Kyle🤖🚀🦭
6 days
RT @robot_in_space2: A lot of people, especially practitioners, think they understand RL because they know PPO (or DDPG or SAC). I used to…
0
2
0
@KyleMorgenstein
Kyle🤖🚀🦭
6 days
part of it: I see robot people in ML classes all the time, but I hardly ever see ML people in robotic dynamics or control theory classes. that should change!
1
0
24
@KyleMorgenstein
Kyle🤖🚀🦭
6 days
which group does better directly depends on whether or not you can abstract away physics. robot arms in free space are pretty simple (physically) so you see all the big VLA methods there. locomotion has weird hybrid dynamics, so roboticists have historically had greater success.
3
0
13
@KyleMorgenstein
Kyle🤖🚀🦭
6 days
it feels like in robot learning there are two groups:
> learning people who treat robotics as a problem to solve
> roboticists using learning as a tool
we need both camps to kiss.
10
2
130
@KyleMorgenstein
Kyle🤖🚀🦭
6 days
the paper is “Learning Agile and Dynamic Motor Skills for Legged Robots”, Hwangbo et al., 2019.
0
0
16
@KyleMorgenstein
Kyle🤖🚀🦭
6 days
reading the original Anymal RL walking paper is like taking a time machine to the ancient past:
> TRPO
> uses control theory notation
> justifies learning architecture with physical intuition and not just vibes (communication delays, mechanical response time, motor dynamics)
3
3
79
@KyleMorgenstein
Kyle🤖🚀🦭
7 days
new OMAM chat we’re so back.
0
0
1
@KyleMorgenstein
Kyle🤖🚀🦭
7 days
it’s genuinely so embarrassing that the biggest physical ailment in my life atm is a broken nail because I too ravenously tried to open a banana.
1
0
9
@KyleMorgenstein
Kyle🤖🚀🦭
8 days
** focus will be on “Advanced Skills by Learning Locomotion and Local Navigation End-to-End” from Rudin et al., 2022.
0
0
6