Nate Gillman

@GillmanLab

Followers 757 · Following 236 · Media 25 · Statuses 94

ML researcher, interning @Google, PhD-ing @BrownUniversity

Joined August 2021
@GillmanLab
Nate Gillman
1 month
Ever wish you could turn your video generator into a controllable physics simulator? We're thrilled to introduce Force Prompting! Animate any image with physical forces and get fine-grained control, without needing any physics simulator or 3D assets at inference. 🧵(1/n)
8
66
302
@GillmanLab
Nate Gillman
4 days
RT @davisblalock: Deep learning training is a mathematical dumpster fire. But it turns out that if you *fix* the math, everything kinda ju…
0
147
0
@GillmanLab
Nate Gillman
8 days
RT @Koven_Yu: 🤩 WorldScore is accepted to #ICCV2025. Benchmark your 3D/4D/video world models with WorldScore!
0
6
0
@GillmanLab
Nate Gillman
9 days
RT @Koven_Yu: #ICCV2025 🤩3D world generation is cool, but it is cooler to play with the worlds using 3D actions 👆💨, and see what happens! —…
0
37
0
@GillmanLab
Nate Gillman
14 days
RT @frankzydou: Check out 🌟Vid2Sim: Generalizable, Video-based Reconstruction of Appearance, Geometry & Physics for Mesh-Free Simulation #C…
0
27
0
@GillmanLab
Nate Gillman
20 days
RT @zhang_yunzhi: (1/n) Time to unify your favorite visual generative models, VLMs, and simulators for controllable visual generation—Intro…
0
63
0
@GillmanLab
Nate Gillman
22 days
RT @sainingxie: Had a great time at this CVPR community-building workshop---lots of fun discussions and some really important insights for…
0
61
0
@GillmanLab
Nate Gillman
25 days
RT @ShijieWang20: I'm in #CVPR2025! Fri, 13 Jun, 4-6 PM CAT, poster session 2, at ExHall D Poster #230. Come and have a chat about our wor…
0
1
0
@GillmanLab
Nate Gillman
28 days
RT @openworldlabs: In this blog post we will summarize some of our findings with training autoencoders for diffusion! We also share some nu…
0
16
0
@GillmanLab
Nate Gillman
1 month
RT @drsrinathsridha: Existing 3D human manipulation datasets are valuable, but are limited in scale and diversity. At #CVPR2025, we will in…
0
29
0
@GillmanLab
Nate Gillman
1 month
alternative/better paper title: "Use The Force!" (thanks @danbgoldman)
@GillmanLab
Nate Gillman
1 month
This was a collaboration between @BrownCSDept and @GoogleDeepMind, joint work with @mik3fr33man @dakshces @DeqingSun @jesu9 + two other collaborators who aren't on X yet, Charles and Evan!
1
0
7
@GillmanLab
Nate Gillman
1 month
We've released all datasets, code, and model weights to foster further research on interactive and physically plausible video generation models. Explore more on our project page, including fun interactive demos like this one 👶 (n/n)
1
0
8
@GillmanLab
Nate Gillman
1 month
Force Prompting can recreate some demos from prior works that use a physics simulator at inference! Our method doesn't require any 3D asset or simulator at inference time: just upload your image and choose a wind force/angle, or a poking location and force/angle. (7/n)
1
0
11
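As a rough illustration of that inference interface, here is a minimal Python sketch. The function names, the dictionary encoding, and the commented-out generate call are assumptions made for exposition, not the released Force Prompting API; only the inputs themselves (wind force/angle, poke location and force/angle) come from the tweet above.

```python
import math

# Illustrative sketch only: these names are assumptions, not the released API.
# A force prompt reduces to a few scalars supplied alongside the input image.

def make_wind_prompt(angle_deg: float, magnitude: float) -> dict:
    """Encode a global wind force as a direction vector plus a strength."""
    theta = math.radians(angle_deg)
    return {
        "type": "wind",
        "direction": (math.cos(theta), math.sin(theta)),
        "magnitude": magnitude,  # assumed normalized to [0, 1]
    }

def make_poke_prompt(x: float, y: float, angle_deg: float, magnitude: float) -> dict:
    """Encode a localized point force applied at pixel coordinates (x, y)."""
    theta = math.radians(angle_deg)
    return {
        "type": "poke",
        "location": (x, y),
        "direction": (math.cos(theta), math.sin(theta)),
        "magnitude": magnitude,
    }

# Hypothetical usage; the real entry point lives in the released code:
# video = model.generate(image, force_prompt=make_wind_prompt(30.0, 0.8))
```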
@GillmanLab
Nate Gillman
1 month
Beyond mass, Force Prompting also seems to understand physical affordances (e.g. a train follows its track when pushed) and object atomicity (poke any part of a train, and the whole train moves). This opens up exciting avenues for interactive world models. (6/n)
2
1
9
@GillmanLab
Nate Gillman
1 month
Two important design choices we uncovered during our ablation studies: using motion-specific keywords during training (e.g. "wind"/"blow"/"breeze"), and building strategic diversity into the synthetic training data (e.g. multiple backgrounds, balls/flags in different colors). (5/n)
1
0
9
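A hedged sketch of what those two choices could look like in a data pipeline: randomize the motion keyword and the scene attributes for every training clip. The vocabulary lists and the caption template below are assumptions for illustration; the released datasets define the real ones.

```python
import random

# Assumed vocabularies, not the paper's actual lists.
WIND_SYNONYMS = ["wind", "blow", "breeze", "gust"]
BACKGROUNDS = ["grass field", "beach", "city street", "desert"]
COLORS = ["red", "blue", "green", "yellow"]

def sample_training_caption() -> str:
    """Randomize the motion keyword and scene attributes for each clip."""
    verb = random.choice(WIND_SYNONYMS)
    color = random.choice(COLORS)
    background = random.choice(BACKGROUNDS)
    return f"a {color} flag on a {background}, moved by a strong {verb}"
```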
@GillmanLab
Nate Gillman
1 month
And we've seen some emergent "intuitive physics" too! Our model demonstrates a degree of mass understanding, where a lighter object will move farther than a heavier one under the same force. Check out the examples! 👇 (4/n)
1
1
11
@GillmanLab
Nate Gillman
1 month
Force Prompting offers diverse control! One model handles localized point forces (like a precise poke 👆) and another handles global wind force fields (like wind sweeping across a scene 💨). Full control, right in your hands. (3/n)
1
1
9
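One plausible way to feed those two control types to a video model is to rasterize each force into per-pixel conditioning channels, as sketched below. This is a guess at an encoding, not the paper's method; the only grounded part is the distinction between a localized poke and a global wind field.

```python
import numpy as np

def wind_field(h: int, w: int, direction, magnitude: float) -> np.ndarray:
    """Global wind: the same (dx, dy, strength) vector at every pixel."""
    field = np.zeros((h, w, 3), dtype=np.float32)
    field[..., 0] = direction[0] * magnitude
    field[..., 1] = direction[1] * magnitude
    field[..., 2] = magnitude
    return field

def point_force_field(h: int, w: int, center, direction,
                      magnitude: float, sigma: float = 8.0) -> np.ndarray:
    """Localized poke: strength decays with a Gaussian around the contact point."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    weight = magnitude * np.exp(-dist2 / (2.0 * sigma ** 2))
    return np.stack(
        [direction[0] * weight, direction[1] * weight, weight], axis=-1
    ).astype(np.float32)
```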
@GillmanLab
Nate Gillman
1 month
How does it work? We found that even with limited but strategically generated synthetic training data (think: Blender videos of flags + rolling balls), video generation models can learn to generalize and apply physical forces across diverse scenes. It's surprisingly robust! (2/n)
1
1
17
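To make the "limited but strategically generated" idea concrete, here is a small sketch of a synthetic-clip specification one might hand to a Blender rendering script. The field names and parameter ranges are assumptions; only the asset types (flags and rolling balls) come from the tweet.

```python
import random
from dataclasses import dataclass

@dataclass
class SyntheticClipSpec:
    """Assumed parameters for one rendered training clip."""
    asset: str            # "flag" or "ball", per the thread
    color: str
    background: str
    force_angle_deg: float
    force_magnitude: float

def sample_specs(n: int) -> list[SyntheticClipSpec]:
    """Draw n randomized scene/force combinations for rendering."""
    colors = ["red", "blue", "green", "yellow"]
    backgrounds = ["field", "beach", "street"]
    return [
        SyntheticClipSpec(
            asset=random.choice(["flag", "ball"]),
            color=random.choice(colors),
            background=random.choice(backgrounds),
            force_angle_deg=random.uniform(0.0, 360.0),
            force_magnitude=random.uniform(0.1, 1.0),
        )
        for _ in range(n)
    ]
```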