
Chris Rockwell
@_crockwell
Followers: 622 · Following: 1K · Media: 25 · Statuses: 136
PhD student in #ComputerVision at @UmichCSE. Views are my own.
Ann Arbor, MI
Joined March 2013
Excited to share Lightspeed, a photorealistic, synthetic dataset with ground-truth pose, used for benchmarking alongside DynPose-100K! Now available for download: Paper accepted to #CVPR2025:
Ever wish YouTube had 3D labels? Introducing DynPose-100K, an Internet-scale collection of diverse videos annotated with camera pose! Applications include camera-controlled video generation and learned dynamic pose estimation. Download:
3 replies · 31 retweets · 197 likes
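For context on how a dataset with ground-truth pose is typically used for benchmarking: predicted relative poses are scored by angular error in rotation and translation direction. A minimal sketch of those standard metrics (not the authors' evaluation code):

```python
import numpy as np

def rotation_error_deg(R_pred, R_gt):
    """Geodesic angle between two rotation matrices, in degrees."""
    # trace(R_pred @ R_gt.T) = 1 + 2*cos(theta) for the relative rotation.
    cos_theta = (np.trace(R_pred @ R_gt.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def translation_error_deg(t_pred, t_gt):
    """Angle between translation directions (scale is unobservable in
    monocular relative pose, so only the direction is compared)."""
    t_pred = t_pred / np.linalg.norm(t_pred)
    t_gt = t_gt / np.linalg.norm(t_gt)
    return np.degrees(np.arccos(np.clip(float(np.dot(t_pred, t_gt)), -1.0, 1.0)))

# Example: a prediction 10 degrees off about the z-axis scores ~10.0.
a = np.radians(10.0)
R_gt = np.array([[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0,        0.0,       1.0]])
print(rotation_error_deg(np.eye(3), R_gt))  # ~10.0
```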
RT @jin_linyi: Hello! If you are interested in dynamic 3D or 4D, don't miss the oral session 3A at 9 am on Saturday: @zhengqi_li will be…
0 replies · 6 retweets · 0 likes
RT @ayshrv: Excited to share our CVPR 2025 paper on cross-modal space-time correspondence! We present a method to match pixels across diff…
0 replies · 28 retweets · 0 likes
RT @_YimingDou: Ever wondered how a scene sounds when you interact with it? Introducing our #CVPR2025 work "Hearing Hands: Generating So…
0 replies · 34 retweets · 0 likes
RT @jespark0: Can AI image detectors keep up with new fakes? Mostly, no. Existing detectors are trained using a handful of models. But the…
0 replies · 9 retweets · 0 likes
RT @dangengdg: Hello! If you like pretty images and videos and want a rec for CVPR oral session, you should def go to Image/Video Gen, Frid…
0 replies · 16 retweets · 0 likes
RT @chenhsuanlin: Cameras are key to modeling our dynamic 3D visual world. Can we unlock the dynamic 3D Internet?! DynPose-100K is our…
0 replies · 10 retweets · 0 likes
More results + webpage: abs: Thanks to my great collaborators @jt_tung, @TsungYiLinCV, @liu_mingyu, David Fouhey and @chenhsuanlin.
0 replies · 0 retweets · 6 likes
RT @chenhsuanlin: NVIDIA Cosmos -- our World Foundation Model platform! Super excited to have made core contributions in multiple aspects…
0 replies · 18 retweets · 0 likes
RT @yen_chen_lin: Video generation models exploded onto the scene in 2024, sparked by the release of Sora from OpenAI. I wrote a blog post…
0 replies · 109 retweets · 0 likes
RT @jin_linyi: Introducing Stereo4D. A method for mining 4D from internet stereo videos. It enables large-scale, high-quality, dynamic, *…
0 replies · 106 retweets · 0 likes
RT @dangengdg: What happens when you train a video generation model to be conditioned on motion? Turns out you can perform "motion prompti…
0 replies · 148 retweets · 0 likes
RT @ayshrv: We present Global Matching Random Walks, a simple self-supervised approach to the Tracking Any Point (TAP) problem, accepted to…
0 replies · 23 retweets · 0 likes
RT @SarahJabbour_: Presenting DEPICT: Diffusion-Enabled Permutation Importance for Image Classification Tasks #ECCV2024. We use permutatio…
0 replies · 12 retweets · 0 likes
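For readers unfamiliar with the underlying idea: permutation importance scores a feature by shuffling it and measuring the resulting drop in model performance. DEPICT (per the tweet) adapts this to image classifiers; the sketch below shows only the classic tabular version, using scikit-learn's built-in implementation, not DEPICT's diffusion-based variant:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit a model, then shuffle each feature on held-out data and record
# how much the accuracy drops: larger drop = more important feature.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
print(result.importances_mean[:5])  # mean score drop per feature
```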
Excited to present our #CVPR2024 *Highlight* FAR on Friday at 10:30 a.m., Arch 4A-E Poster #31. Please feel free to stop by! FAR significantly improves correspondence-based methods using end-to-end pose prediction, making it applicable to many SOTA approaches!
Presenting FAR: Flexible, Accurate and Robust 6DoF Relative Camera Pose Estimation #CVPR2024. FAR builds upon complementary Solver and Learning-Based works, yielding accurate *and* robust pose!
0 replies · 2 retweets · 13 likes
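The two posts above describe FAR as combining a correspondence/solver-based pose estimate with an end-to-end learned one. As a rough illustration of that general idea only (this is not FAR's actual architecture; blend_poses and the scalar weight w are hypothetical, and FAR learns its fusion end-to-end), fusing the two estimates with a confidence weight might look like:

```python
import numpy as np

def blend_poses(R_solver, t_solver, R_learned, t_learned, w):
    """Hypothetical fusion of a solver pose and a learned pose.
    w in [0, 1]: w=1 trusts the solver, w=0 trusts the network."""
    # Translations: simple linear blend.
    t = w * t_solver + (1.0 - w) * t_learned
    # Rotations: interpolate on the manifold via the relative rotation
    # (minimal axis-angle version; assumes the relative angle < pi).
    R_rel = R_learned.T @ R_solver
    angle = np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0))
    if angle < 1e-8:
        return R_solver, t
    axis = np.array([R_rel[2, 1] - R_rel[1, 2],
                     R_rel[0, 2] - R_rel[2, 0],
                     R_rel[1, 0] - R_rel[0, 1]]) / (2.0 * np.sin(angle))
    # Rodrigues' formula for a rotation by w*angle about `axis`.
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R_step = np.eye(3) + np.sin(w * angle) * K + (1 - np.cos(w * angle)) * (K @ K)
    return R_learned @ R_step, t
```

In the degenerate case where correspondences fail (the solver pose is garbage), a low w falls back to the robust learned estimate; when correspondences are dense and clean, a high w recovers the solver's accuracy, which is the accurate-and-robust trade-off the announcement describes.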
RT @tiangeluo: We've curated a 1-million 3D-Captioning dataset for Objaverse(-XL), correcting 200k potential misalignments in the original…
0 replies · 19 retweets · 0 likes