Basile Van Hoorick Profile
Basile Van Hoorick

@basilevanh

Followers: 189 · Following: 33 · Media: 9 · Statuses: 21

Joined August 2018
@basilevanh
Basile Van Hoorick
1 year
Excited to share our new paper on large-angle monocular dynamic novel view synthesis! Given a single RGB video, we propose a method that can imagine what that scene would look like from any other viewpoint. Website: Paper: 🧵(1/5)
5 replies · 30 retweets · 135 likes
@basilevanh
Basile Van Hoorick
10 months
We are grateful to be awarded an oral presentation; please come by Wed 10/2 at 1:30pm (I believe we are the first talk in the oral session) as well as the poster session afterward (number 156) at 4:30pm! #ECCV2024 🎉
@basilevanh
Basile Van Hoorick
1 year
Visit our webpage for many more results! Datasets, code, and pretrained models coming soon. Many thanks to my amazing collaborators: @ChrisWu6080, @EgeOzguroglu, @KyleSargentAI, @ruoshi_liu, @ptokmakov, @achalddave, Changxi Zheng, @cvondrick. 🧵(5/5)
2 replies · 0 retweets · 8 likes
@basilevanh
Basile Van Hoorick
1 year
Apart from robotics and related scenes, it also works quite well on driving scenarios! In general, we believe our framework can help unlock powerful applications in rich dynamic scene understanding, perception for embodied AI, and interactive 3D video viewing. 🧵(4/5)
2 replies · 0 retweets · 7 likes
@basilevanh
Basile Van Hoorick
1 year
Although out-of-distribution generalization is highly challenging, we show promising zero-shot results on real-world examples. In particular, our model exhibits object permanence capabilities, which can be observed by shifting the virtual camera upward in this video. 🧵(3/5)
1 reply · 1 retweet · 7 likes
@basilevanh
Basile Van Hoorick
1 year
GCD works by equipping a video-to-video generative model with camera perspective controls. We condition the Stable Video Diffusion architecture on an input video along with a freely chosen relative camera pose, and finetune it on paired synthetic multi-view data. 🧵(2/5)
1 reply · 0 retweets · 8 likes
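For intuition, here is one plausible way to wire in such camera control, sketched in PyTorch below: embed the relative pose and add it to the diffusion timestep embedding so every block sees the target viewpoint. The module names and sizes are illustrative assumptions, not the released GCD code.

# Minimal sketch (not the released GCD code): condition a video diffusion
# U-Net on a relative camera pose by embedding the pose and adding it to
# the timestep embedding. Names and sizes are assumptions.
import torch
import torch.nn as nn

class PoseEmbedding(nn.Module):
    """Map a relative pose (flattened 3x4 extrinsics = 12 values) to the
    width of the diffusion timestep embedding."""
    def __init__(self, pose_dim=12, embed_dim=1280):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(pose_dim, embed_dim),
            nn.SiLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, rel_pose):
        return self.mlp(rel_pose)  # (B, embed_dim)

pose_embed = PoseEmbedding()
t_emb = torch.randn(4, 1280)         # timestep embedding from the backbone
rel_pose = torch.randn(4, 12)        # user-chosen relative pose per sample
cond = t_emb + pose_embed(rel_pose)  # conditioning seen by every U-Net block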
@basilevanh
Basile Van Hoorick
2 years
Feel free to come by our poster (number 104 in the Nord room) Thursday morning starting at 10:30am! 😃
@basilevanh
Basile Van Hoorick
2 years
Check out our #zero123 live demo here! Big shoutout to collaborators: @ruoshi_liu (first author), @ChrisWu6080, @ptokmakov, @ZakharovSergeyN, and @cvondrick. Hope to see you all next week at ICCV! ;-) 🧵(4/4)
Link: huggingface.co
0 replies · 1 retweet · 2 likes
@basilevanh
Basile Van Hoorick
2 years
Specifically, we finetune Stable Diffusion, which already has useful 2D image priors thanks to being trained on billion-scale data. This pipeline achieves strong zero-shot performance on objects with complex geometry and artistic styles. 🧵(3/n)
1 reply · 0 retweets · 3 likes
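As a rough sketch of what such viewpoint conditioning can look like (the dimensions and fusion layer below are illustrative assumptions): encode the relative camera transform as (polar, azimuth, radius), with the periodic azimuth passed as sine and cosine, and fuse it with the CLIP embedding of the input view.

# Rough sketch of viewpoint conditioning in the spirit of Zero-1-to-3;
# exact layout and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

def encode_relative_pose(d_polar, d_azimuth, d_radius):
    # Azimuth is periodic, so pass it as sin/cos to avoid a wrap-around seam.
    return torch.stack(
        [d_polar, torch.sin(d_azimuth), torch.cos(d_azimuth), d_radius], dim=-1)

clip_dim = 768
fuse = nn.Linear(clip_dim + 4, clip_dim)  # project back to the CLIP width

clip_embed = torch.randn(2, clip_dim)     # CLIP embedding of the input view
pose = encode_relative_pose(torch.randn(2), torch.randn(2), torch.randn(2))
cond = fuse(torch.cat([clip_embed, pose], dim=-1))  # cross-attention context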
@basilevanh
Basile Van Hoorick
2 years
We leverage a recently released large-scale dataset of 3D objects, called Objaverse, from which we render images from random perspectives. We then train an image-to-image translation network to convert one viewpoint into another. 🧵(2/n)
1 reply · 0 retweets · 1 like
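A small NumPy sketch of how such a training pair might be constructed (the sampling ranges are assumptions): sample two random viewpoints around the object, and the relative transform between them becomes the conditioning label the network must realize.

# Sketch of constructing a paired training example: two random viewpoints
# around an object, with their relative pose as the conditioning label.
# Sampling ranges are illustrative assumptions.
import numpy as np

def sample_view(rng):
    return {
        "polar": rng.uniform(0.1, np.pi - 0.1),   # stay away from the poles
        "azimuth": rng.uniform(0.0, 2.0 * np.pi),
        "radius": rng.uniform(1.5, 2.5),
    }

rng = np.random.default_rng(0)
src, tgt = sample_view(rng), sample_view(rng)
rel_pose = {k: tgt[k] - src[k] for k in src}  # what the network is conditioned on
# render(src) would be the network input, render(tgt) the training target.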
@basilevanh
Basile Van Hoorick
2 years
Happy to share our #ICCV2023 paper on 3D reconstruction from a single image! In Zero-1-to-3, we teach diffusion models to control the camera viewpoint, which enables novel view synthesis applications. Website: Paper: 🧵(1/n)
3 replies · 40 retweets · 204 likes
@basilevanh
Basile Van Hoorick
2 years
Come by poster number 138 at #CVPR2023 tomorrow afternoon at 4:30pm! 🎉
@basilevanh
Basile Van Hoorick
2 years
After some polishing, the code has been published on GitHub ;-).
Link: github.com · basilevh/tcow · Tracking through Containers and Occluders in the Wild (CVPR 2023), Official Implementation
@basilevanh
Basile Van Hoorick
2 years
P.S. Also check out our earlier related work on Revealing Occlusions with 4D Neural Fields! That paper is essentially about video-to-4D generation, but requires depth input; in contrast, we demonstrate that TCOW works in the wild too. 🧵 (7/7)
0 replies · 1 retweet · 0 likes
@basilevanh
Basile Van Hoorick
2 years
Visit our project webpage for many more results, as well as links to the datasets, code, and pretrained models! Joint work with @ptokmakov, Simon Stent, Jie Li, and @cvondrick. 🧵 (6/n)
1 reply · 0 retweets · 0 likes
@basilevanh
Basile Van Hoorick
2 years
Still, since object permanence remains far from solved, we release our benchmarks and invite the research community to continue working on this intriguing problem. 🧵 (5/n).
1 reply · 0 retweets · 0 likes
@basilevanh
Basile Van Hoorick
2 years
However, TCOW shines at handling total occlusion and/or containment, highly challenging scenarios that require advanced spatiotemporal reasoning. Cup shuffling games are especially tricky, yet our model is beginning to handle them. 🧵 (4/n)
1 reply · 1 retweet · 2 likes
@basilevanh
Basile Van Hoorick
2 years
Despite being trained only on synthetic data (using the Kubric simulator), TCOW performs quite well in complex real-world scenes. For example, see the rhino below, which keeps its nose and horns (i.e. amodal completion) throughout the partial occlusion. 🧵 (3/n)
1 reply · 0 retweets · 0 likes
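For readers unfamiliar with Kubric, building and rendering a scene looks roughly like its hello-world example, lightly trimmed below; the actual TCOW data generation pipeline is of course far more elaborate.

# Adapted from Kubric's hello-world example (not the TCOW pipeline):
# render both RGB and segmentation, the kind of ground truth that
# supervised mask training relies on.
import kubric as kb
from kubric.renderer.blender import Blender as KubricRenderer

scene = kb.Scene(resolution=(256, 256))
renderer = KubricRenderer(scene)

scene += kb.Cube(name="floor", scale=(10, 10, 0.1), position=(0, 0, -0.1))
scene += kb.Sphere(name="ball", scale=1, position=(0, 0, 1.0))
scene += kb.DirectionalLight(name="sun", position=(-1, -0.5, 3),
                             look_at=(0, 0, 0), intensity=1.5)
scene += kb.PerspectiveCamera(name="camera", position=(3, -1, 4),
                              look_at=(0, 0, 1))

frame = renderer.render_still()
kb.write_png(frame["rgba"], "output/helloworld.png")
kb.write_palette_png(frame["segmentation"], "output/helloworld_seg.png")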
@basilevanh
Basile Van Hoorick
2 years
Our framework distinguishes containment from occlusion events by predicting a separate segmentation mask for each, as visualized in the video above. 🧵 (2/n)
1 reply · 0 retweets · 0 likes
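One way to realize "a separate mask for each": decode shared spatiotemporal video features into independent (non-exclusive) channels for the target, its occluder, and its container. The head below is a minimal sketch under that assumption, not the actual TCOW architecture.

# Minimal sketch (not the actual TCOW architecture): predict separate,
# non-exclusive masks for the target, its occluder, and its container
# from shared spatiotemporal video features.
import torch
import torch.nn as nn

mask_head = nn.Conv3d(in_channels=256, out_channels=3, kernel_size=1)

feats = torch.randn(1, 256, 16, 64, 64)  # (B, C, T, H, W) video features
logits = mask_head(feats)                # (B, 3, T, H, W)
target, occluder, container = logits.unbind(dim=1)
probs = torch.sigmoid(logits)            # sigmoid: masks may overlap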
@basilevanh
Basile Van Hoorick
2 years
Excited to share our #CVPR2023 paper on tracking with object permanence in video! In TCOW, we propose both a model and a dataset for localizing objects regardless of their visibility. Website: Paper: 🧵 (1/n)
2 replies · 17 retweets · 69 likes