
Jitendra MALIK (@JitendraMalikCV)
Followers: 5K · Following: 11 · Media: 1 · Statuses: 33 · Joined December 2021
Angjoo Kanazawa @akanazawa and I taught CS 280, graduate computer vision, this semester at UC Berkeley. We found a combination of classical and modern CV material that worked well, and are happy to share our lecture material from the class. Enjoy!
8
101
748
Enjoy watching a humanoid walking around UC Berkeley. It only looks inebriated :-)
Our new system trains humanoid robots using data from cell phone videos, enabling skills such as climbing stairs and sitting on chairs in a single policy. (w/ @redstone_hong @junyi42 @davidrmcall)
1
2
90
I'm happy to post course materials for my class at UC Berkeley "Robots that Learn", taught with the outstanding assistance of @ToruO_O. Lecture videos at Lecture notes & other course materials at
17
248
1K
Happy to share these exciting new results on video synthesis of humans in movement. Arguably, these establish the power of having explicit 3D representations. Popular video generation models like Sora don't do that, making it hard for the resulting video to be 4D consistent.
I’ve dreamt of creating a tool that could animate anyone with any motion from just ONE image… and now it’s a reality! 🎉 Super excited to introduce updated 3DHM: Synthesizing Moving People with 3D Control. 🕺💃 3DHM can generate human videos from a single real or synthetic human
0
7
70
Touché, Sergey!
Lots of memorable quotes from @JitendraMalikCV at CoRL; the most significant one, of course, is: “I believe that Physical Intelligence is essential to AI” :) I did warn you, Jitendra, that out-of-context quotes are fair game. Some liberties taken wrt capitalization.
0
0
54
RT @Cinnabar233: Fun collaboration w/ @antoniloq, @carlo_sferrazza, @HaozhiQ, @jane_h_wu, @pabbeel, @JitendraMalikCV. Check out our paper at…
0
1
0
RT @vonekels: Please see the website for more details. synNsync 🪩 is joint work with my awesome ✨co-authors✨: @LeaMue27 @brjathu @geopavlakos…
0
1
0
Autoregressive modeling is not just for language; it can equally be used to model human behavior. This paper shows how.
Please see the website for more details. synNsync 🪩 is joint work with my awesome ✨co-authors✨: @LeaMue27 @brjathu @geopavlakos @shiryginosar @akanazawa @JitendraMalikCV. Website 🖥️: Data 💾: Arxiv 📜: 🧵 6/6
0
2
84
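The tweet above frames human motion synthesis as next-token prediction, the same objective used for language models. Below is a hedged illustration only, not the synNsync architecture: the motion tokenizer, vocabulary size, and model dimensions are invented for the example. It sketches a small causal transformer trained to predict the next token of a discretized motion sequence.

```python
# Minimal sketch: language-model-style next-token prediction over motion tokens.
# All sizes and the "tokenized mocap" input are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB = 512      # e.g., codes from a VQ-style pose tokenizer (assumed)
CONTEXT = 64     # number of past motion tokens the model conditions on

class MotionLM(nn.Module):
    def __init__(self, vocab=VOCAB, d_model=128, nhead=4, nlayers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.pos = nn.Embedding(CONTEXT, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=256,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, tokens):
        # tokens: (batch, time) integer codes of past poses
        b, t = tokens.shape
        pos = torch.arange(t, device=tokens.device)
        x = self.embed(tokens) + self.pos(pos)
        # Causal mask so each position only attends to earlier motion tokens.
        mask = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        x = self.encoder(x, mask=mask)
        return self.head(x)          # logits over the next motion token

# One training step: exactly the language-modeling objective, applied to motion.
model = MotionLM()
opt = torch.optim.Adam(model.parameters(), lr=3e-4)
tokens = torch.randint(0, VOCAB, (8, CONTEXT))   # stand-in for tokenized mocap
opt.zero_grad()
logits = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB),
                                   tokens[:, 1:].reshape(-1))
loss.backward()
opt.step()
print(float(loss))
```

At generation time one would sample tokens autoregressively and decode them back to poses; that decoding step depends entirely on the motion tokenizer, which is not shown here.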
RT @PeterStone_TX: 10 years after DQN, what are deep RL’s impacts on robotics? Which robotic problems have seen the most thrilling real-wor…
0
100
0
RT @neerjathakkar: It was great to work with Karttikeya Mangalam, @andrea_bajcsy and @JitendraMalikCV! Project: Arx…
0
1
0
RT @ToruO_O: Imitation learning works™ – but you need good data 🥹 How to get high-quality visuotactile demos from a bimanual robot with mul…
0
76
0
Another success of sim-to-real for training robot policies! This task, using two multi-fingered hands, requires considerable dexterity, and is hopefully representative of other household tasks that we wish to solve in the future.
Achieving bimanual dexterity with RL + Sim2Real! TL;DR: We train two robot hands to twist bottle lids using deep RL followed by sim-to-real. A single policy trained with simple simulated bottles can generalize to drastically different real-world objects.
1
3
63
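The quoted tweet describes the now-standard sim-to-real recipe: train a single policy in simulation over randomized physics and object parameters, then deploy it on real hardware. As a toy, hedged sketch of the domain-randomization idea only (the simulator stub, parameter ranges, reward, and placeholder policy below are invented; the actual work uses a full physics simulator and deep RL), here is how per-episode randomization typically looks:

```python
# Toy sketch of domain randomization for sim-to-real transfer.
# BottleTwistSimStub and its parameter ranges are hypothetical stand-ins.
import numpy as np

class BottleTwistSimStub:
    """Stand-in for a physics simulator of the two-hand lid-twisting task."""
    def __init__(self, lid_friction, lid_radius, mass):
        self.lid_friction = lid_friction
        self.lid_radius = lid_radius
        self.mass = mass
        self.angle = 0.0  # how far the lid has been twisted (rad)

    def reset(self):
        self.angle = 0.0
        return np.array([self.angle, self.lid_radius, self.mass], dtype=np.float32)

    def step(self, action):
        # Finger torque advances the lid; heavier, higher-friction lids turn slower.
        torque = float(np.clip(action, -1.0, 1.0))
        self.angle += torque * self.lid_radius / (self.lid_friction * self.mass)
        obs = np.array([self.angle, self.lid_radius, self.mass], dtype=np.float32)
        reward = self.angle              # progress on twisting the lid
        done = self.angle > 2 * np.pi    # one full turn
        return obs, reward, done

def sample_randomized_env(rng):
    """Domain randomization: each episode gets a differently parameterized bottle,
    so a single policy must work across the whole distribution, not one instance."""
    return BottleTwistSimStub(
        lid_friction=rng.uniform(0.2, 1.5),
        lid_radius=rng.uniform(0.02, 0.06),
        mass=rng.uniform(0.05, 0.5),
    )

rng = np.random.default_rng(0)
for episode in range(3):
    env = sample_randomized_env(rng)     # new physics parameters every episode
    obs, done, ret = env.reset(), False, 0.0
    while not done:
        action = 1.0                     # placeholder policy; RL would choose this
        obs, reward, done = env.step(action)
        ret += reward
    print(f"episode {episode}: return {ret:.2f}")
```

The point of resampling lid_friction, lid_radius, and mass each episode is that no single parameter setting is ever "the" environment, so the learned policy has to be robust across the spread, which is what lets it transfer to real bottles it never saw in simulation.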
Want to make your photorealistic 3D avatar dance like your favorite actor? Check this out!
Super excited to announce our new work: Synthesizing Moving People with 3D Control (3DHM) 💡 Why is 3DHM unique? With 3D Control, 3DHM can animate a 𝗿𝗮𝗻𝗱𝗼𝗺 human photo with 𝗮𝗻𝘆 poses in a 𝟯𝟲𝟬-𝗱𝗲𝗴𝗿𝗲𝗲 camera view and 𝗮𝗻𝘆 camera azimuths from 𝗮𝗻𝘆 video!
0
2
34
RT @AIatMeta: Together with the Ego4D consortium, today we're releasing Ego-Exo4D, the largest ever public dataset of its kind to support r…
0
237
0
RT @YutongBAI1002: How far can we go with vision alone? Excited to reveal our Large Vision Model! Trained with 420B tokens, effective scal…
0
158
0
RT @Karttikeya_m: Every CV guy I know has privately admitted at some point that current video datasets do not really seem to care about tim…
0
30
0