
Basile Van Hoorick (@basilevanh)
189 Followers · 33 Following · 9 Media · 21 Statuses · Joined August 2018
Excited to share our new paper on large-angle monocular dynamic novel view synthesis! Given a single RGB video, we propose a method that can imagine what that scene would look like from any other viewpoint. Website: Paper: 🧵(1/5)
5 replies · 30 reposts · 135 likes
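To make the "any other viewpoint" query above a bit more concrete, here is a minimal sketch, assuming the target viewpoint is expressed as a rigid transform relative to the input camera; the helper name, the azimuth/elevation parameterization, and the zero translation are illustrative assumptions, not the paper's actual interface.

# Hedged sketch (not from the paper): build a relative camera pose for a large-angle
# viewpoint change of the kind such a method is asked to render.
import numpy as np

def orbit_pose(azimuth_deg: float, elevation_deg: float) -> np.ndarray:
    """4x4 rigid transform: rotate the viewpoint around the scene's vertical (y) axis
    by azimuth_deg, then tilt about the horizontal (x) axis by elevation_deg.
    Translation is left at zero purely for illustration."""
    az, el = np.deg2rad(azimuth_deg), np.deg2rad(elevation_deg)
    rot_az = np.array([[ np.cos(az), 0.0, np.sin(az)],
                       [ 0.0,        1.0, 0.0       ],
                       [-np.sin(az), 0.0, np.cos(az)]])
    rot_el = np.array([[1.0, 0.0,         0.0        ],
                       [0.0, np.cos(el), -np.sin(el)],
                       [0.0, np.sin(el),  np.cos(el)]])
    pose = np.eye(4)
    pose[:3, :3] = rot_el @ rot_az
    return pose

# Example: request a 90-degree sideways, slightly elevated view of the same dynamic scene.
target_pose = orbit_pose(azimuth_deg=90.0, elevation_deg=15.0)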
We are grateful to be awarded an oral presentation -- please come by Wed 10/2 at 1:30pm (I believe we are the first talk in the oral session), as well as the poster session afterward (number 156) at 4:30pm! #ECCV2024 🎉
[Quoted tweet: the paper announcement above.]
3 replies · 3 reposts · 27 likes
Visit our webpage at for many more results! Datasets, code, and pretrained models coming soon. Many thanks to my amazing collaborators: @ChrisWu6080, @EgeOzguroglu, @KyleSargentAI, @ruoshi_liu, @ptokmakov, @achalddave, Changxi Zheng, @cvondrick. 🧵(5/5).
2 replies · 0 reposts · 8 likes
Feel free to come by our poster (number 104 in Nord room) Thursday morning starting at 10:30am! 😃
[Quoted tweet: the Zero-1-to-3 announcement below.]
0 replies · 3 reposts · 15 likes
Check out our #zero123 live demo here! Big shoutout to collaborators: @ruoshi_liu (first author), @ChrisWu6080, @ptokmakov, @ZakharovSergeyN, and @cvondrick. Hope to see you all next week at ICCV! ;-) 🧵(4/4)
huggingface.co
0 replies · 1 repost · 2 likes
Happy to share our #ICCV2023 paper on 3D reconstruction from a single image! In Zero-1-to-3, we teach diffusion models to control the camera viewpoint, which enables novel view synthesis applications. Website: Paper: 🧵(1/n)
3 replies · 40 reposts · 204 likes
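As a rough illustration of what "teaching a diffusion model to control the camera viewpoint" can involve, the sketch below conditions a toy denoiser on a relative (azimuth, elevation, radius) pose. This is a generic sketch under my own assumptions (module names, dimensions, and the omission of timestep and image conditioning), not the Zero-1-to-3 architecture.

# Hedged sketch: a toy denoiser conditioned on a relative camera pose
# (delta azimuth, delta elevation, delta radius). NOT the Zero-1-to-3 code;
# all names and sizes are illustrative, and timestep/image conditioning is omitted.
import torch
import torch.nn as nn

class PoseConditionedDenoiser(nn.Module):
    def __init__(self, img_channels: int = 3, cond_dim: int = 256):
        super().__init__()
        self.pose_embed = nn.Sequential(
            nn.Linear(3, cond_dim), nn.SiLU(), nn.Linear(cond_dim, cond_dim))
        self.encoder = nn.Conv2d(img_channels, 64, kernel_size=3, padding=1)
        self.cond_proj = nn.Linear(cond_dim, 64)
        self.decoder = nn.Conv2d(64, img_channels, kernel_size=3, padding=1)

    def forward(self, noisy_image: torch.Tensor, rel_pose: torch.Tensor) -> torch.Tensor:
        # noisy_image: (B, 3, H, W); rel_pose: (B, 3) relative viewpoint change.
        h = torch.relu(self.encoder(noisy_image))
        cond = self.cond_proj(self.pose_embed(rel_pose))    # (B, 64)
        h = h + cond[:, :, None, None]                      # inject pose into every pixel
        return self.decoder(h)                              # predicted noise

# Usage: ask the (untrained) denoiser for noise estimates under two viewpoint changes.
model = PoseConditionedDenoiser()
x_noisy = torch.randn(2, 3, 64, 64)
rel_pose = torch.tensor([[1.0, 0.2, 0.0], [-0.5, 0.1, 0.1]])
eps_hat = model(x_noisy, rel_pose)   # (2, 3, 64, 64)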
After some polishing, the code has been published on GitHub ;-).
github.com · basilevh/tcow: Tracking through Containers and Occluders in the Wild (CVPR 2023) - Official Implementation
[Quoted tweet: the TCOW announcement below.]
0 replies · 0 reposts · 1 like
Visit our project webpage at for many more results, as well as links to the datasets, code, and pretrained models! Joint work with @ptokmakov, Simon Stent, Jie Li, and @cvondrick. 🧵 (6/n)
1 reply · 0 reposts · 0 likes
Excited to share our #CVPR2023 paper on tracking with object permanence in video! In TCOW, we propose both a model and a dataset for localizing objects regardless of their visibility. Website: Paper: 🧵 (1/n)
2 replies · 17 reposts · 69 likes
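To give one concrete reading of "localizing objects regardless of their visibility": a tracker of this kind can be scored with per-frame mask IoU against ground-truth masks that are annotated even while the target is hidden inside a container or behind an occluder. The snippet below is a generic, hedged illustration of such a metric, not the TCOW evaluation code; the function names and the all-frames averaging are my assumptions.

# Hedged illustration (not the TCOW codebase): mask IoU averaged over all frames,
# including frames where the target is fully contained or occluded, so a tracker that
# loses the object when it disappears is penalized.
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """pred, gt: boolean (H, W) masks for a single frame."""
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both empty: trivially correct
    return float(np.logical_and(pred, gt).sum()) / float(union)

def video_mask_iou(pred_masks: np.ndarray, gt_masks: np.ndarray) -> float:
    """pred_masks, gt_masks: boolean (T, H, W) mask sequences for one target."""
    return float(np.mean([mask_iou(p, g) for p, g in zip(pred_masks, gt_masks)]))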