Haotong Lin Profile
Haotong Lin

@HaotongLin

Followers: 353
Following: 144
Media: 4
Statuses: 24

A PhD student at the State Key Laboratory of CAD & CG, Zhejiang University.

Joined July 2021
@sainingxie
Saining Xie
8 days
papers are kind of like movies: the first one is usually the best, and the sequels tend to get more complicated but not really more exciting. But that totally doesn't apply to the DepthAnything series. @bingyikang's team somehow keeps making things simpler and more scalable each
@bingyikang
Bingyi Kang
8 days
After a year of teamwork, we're thrilled to introduce Depth Anything 3 (DA3)! 🚀 Aiming for human-like spatial perception, DA3 extends monocular depth estimation to any-view scenarios, including single images, multi-view images, and video. In pursuit of minimal modeling, DA3
5
40
520
@HaotongLin
Haotong Lin
1 month
Thank you for sharing our work! Marigold is really cool! However, it's somewhat limited by the image VAE: many flying points appear just after encoding a perfect ground-truth depth. Pixel-space diffusion to the rescue 🚀 (see the sketch below)
@AntonObukhov1
Anton Obukhov
1 month
Pixel-Perfect-Depth: the paper aims to fix Marigold's loss of sharpness induced by the VAE, using VFMs (VGGT/DAv2) and a DiT-based pixel decoder to refine the predictions and achieve clean depth discontinuities. Video by the authors.
2
3
56
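The flying-points complaint above is easy to probe. Below is a minimal Python sketch, assuming the diffusers library's AutoencoderKL and the stabilityai/sd-vae-ft-mse checkpoint (illustrative choices, not the actual Marigold pipeline): it round-trips a clean synthetic depth map through an image VAE and counts pixels that land far from the ground truth.

```python
# Minimal sketch of the "flying points" claim: encode a perfect synthetic depth
# map with a pretrained image VAE, decode it back, and count pixels that deviate
# sharply. The checkpoint and the 3-channel depth packing are assumptions for
# illustration; this is not the Marigold pipeline itself.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

# Ground-truth depth: two fronto-parallel planes with a sharp discontinuity.
depth = torch.ones(1, 1, 256, 256)
depth[..., :, 128:] = 3.0  # right half at 3 m, left half at 1 m

# Normalize to [-1, 1] and replicate to 3 channels, as image VAEs expect.
d_min, d_max = depth.min(), depth.max()
x = ((depth - d_min) / (d_max - d_min) * 2 - 1).repeat(1, 3, 1, 1)

with torch.no_grad():
    latents = vae.encode(x).latent_dist.mode()      # 8x spatial downsampling
    recon = vae.decode(latents).sample.mean(dim=1)  # average back to 1 channel

# Map the reconstruction back to metric depth and flag large deviations.
recon_depth = (recon + 1) / 2 * (d_max - d_min) + d_min
flying = ((recon_depth - depth[:, 0]).abs() > 0.25).float().mean()  # 25 cm threshold, arbitrary
print(f"fraction of 'flying' pixels after the VAE round-trip: {flying.item():.4%}")
```

Most of the error concentrates around the depth discontinuity, where the VAE's lossy compression rings; those pixels become the "flying points" when the depth map is unprojected to 3D.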
@YuxiXiaohenry
Yuxi Xiao
5 months
🚀 We release SpatialTrackerV2: the first feedforward model for dynamic 3D reconstruction and 3D point tracking, all at once! Reconstruct dynamic scenes and predict pixel-wise 3D motion in seconds.
🔗 Webpage: https://t.co/B8widtJ6DT
🔍 Online Demo: https://t.co/sY9iO7wCgT
5
90
465
@zhenjun_zhao
Zhenjun Zhao
4 months
Towards Depth Foundation Model: Recent Trends in Vision-Based Depth Estimation @realzhenxu, Hongyu Zhou, @pengsida, @HaotongLin, @ghy990324, @jiahaoshao1, Peishan Yang, Qinglin Yang, Sheng Miao, @XingyiHe1, Yifan Wang, Yue Wang, @ruizhen_hu, @yiyi_liao_, @XiaoweiZhou5, Hujun Bao
0
12
63
@HaotongLin
Haotong Lin
7 months
Wow, thank you for crediting our work! Thrilled to see our project PromptDepthAnything being used in your latest release. This is awesome! Best of luck with the new version!
@ChrisAtKIRI
Chris make some 3D scans
7 months
Are you tired of the low quality of iPhone lidar scans? I am! And that is why we are bringing this cutting-edge iPhone lidar scan enhancement function into production! With the guidance of normals and depth, the geometry can now reach the next level! Showcases:
2
1
33
@pablovelagomez1
Pablo Vela
10 months
Recently, I've been playing with my iPhone ToF sensor, but the problem has always been the abysmal resolution (256x192). The team behind DepthAnything released PromptDepthAnything that fixes this. Using @Polycam3D to collect the raw data, @Gradio to generate a UI, and
29
214
2K
@XingyiHe1
Xingyi He
10 months
Excited to share our work MatchAnything: we pre-train strong universal image matching models that exhibit remarkable generalizability on unseen multi-modality matching and registration tasks.
Project page: https://t.co/o5GisUJ7RT
Hugging Face Demo: https://t.co/qbz33QBulI
19
160
817
@HaotongLin
Haotong Lin
11 months
(2/2) Something interesting I found: recent monocular depth methods like Depth Pro can reconstruct highly detailed depth, but these depths are inconsistent in 3D, leading to poor reconstruction. In contrast, our approach with low-cost LiDAR guidance yields 3D-consistent depth (see the sketch below).
0
1
6
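The 3D-inconsistency point above can be made concrete with a small worked example. The sketch below uses entirely synthetic numbers (the intrinsics, poses, and depths are made up for illustration): it unprojects two frames' depth maps into world space and shows how a per-frame scale error, invisible in either depth map alone, surfaces as disagreeing geometry.

```python
# Synthetic illustration of cross-view depth consistency: two cameras look at
# the same plane; frame B's depth is off by 5%, so the unprojected geometry
# disagrees even though each depth map is individually smooth and plausible.
import numpy as np

def unproject(depth, K, c2w):
    """Lift an HxW depth map to world-space points of shape (H*W, 3)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    pts_cam = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts_cam @ c2w[:3, :3].T + c2w[:3, 3]

K = np.array([[500.0,   0.0, 128.0],
              [  0.0, 500.0, 128.0],
              [  0.0,   0.0,   1.0]])
c2w_a = np.eye(4)
c2w_b = np.eye(4)
c2w_b[0, 3] = 0.1  # camera B sits 10 cm to the right of camera A

depth_a = np.full((256, 256), 2.0)  # frame A: plane estimated at 2.00 m
depth_b = np.full((256, 256), 2.1)  # frame B: same plane estimated at 2.10 m

pts_a = unproject(depth_a, K, c2w_a)
pts_b = unproject(depth_b, K, c2w_b)
# For a fronto-parallel plane, the inconsistency shows up directly in world z:
print("mean |z_a - z_b|:", np.abs(pts_a[:, 2] - pts_b[:, 2]).mean())  # ~0.10 m
```

A metric LiDAR prompt pins down the scale of every frame, which is why the guided depths fuse into a consistent point cloud while per-frame monocular estimates drift.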
@HaotongLin
Haotong Lin
11 months
Check out our new work, Prompt Depth Anything, which achieves accurate metric depth estimation at up to 4K resolution! Thanks to all our collaborators! (A usage sketch follows below.)
@bingyikang
Bingyi Kang
11 months
Want to use Depth Anything, but need metric depth rather than relative depth? Thrilled to introduce Prompt Depth Anything, a new paradigm for accurate metric depth estimation with up to 4K resolution. 👉 Key Message: Depth foundation models like DA have already internalized rich
2
6
42
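For reference, Prompt Depth Anything is released as the PromptDA repo; a hedged usage sketch follows. The module paths, checkpoint name, and I/O helpers are recalled from the repo README and may be inexact, so treat them as assumptions and check the repository before use.

```python
# Hedged usage sketch for Prompt Depth Anything (PromptDA): a low-resolution
# metric LiDAR depth map "prompts" the depth foundation model, which returns
# metric depth at full image resolution. API details recalled from the README;
# verify against https://github.com/DepthAnything/PromptDA before relying on them.
import torch
from promptda.promptda import PromptDA
from promptda.utils.io_wrapper import load_image, load_depth

device = "cuda" if torch.cuda.is_available() else "cpu"
model = PromptDA.from_pretrained("depth-anything/promptda_vitl").to(device).eval()

image = load_image("assets/example_images/image.jpg").to(device)         # RGB frame
prompt = load_depth("assets/example_images/arkit_depth.png").to(device)  # 192x256 ARKit LiDAR, meters

with torch.no_grad():
    depth = model.predict(image, prompt)  # metric depth at image resolution, meters
```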
@realzhenxu
Zhen Xu
1 year
(1/8) Ever wanted to create an avatar of yourself that interacts realistically with different lighting? In our CVPR 2024 Highlight 🌟 paper, we present a method for creating relightable and animatable avatars from only sparse/monocular video.
Project Page: https://t.co/95CmyibzIh
3
24
116
@HaotongLin
Haotong Lin
2 years
A really cool project!
@realzhenxu
Zhen Xu
2 years
🌟 Introducing EasyVolcap - our Python & PyTorch library for neural volumetric video!
🛠 Features:
- Easy-to-organize volumetric video pipelines
- 4D data management system
- High-performance 4D viewer
- More to come ...
🔗 Code: https://t.co/ltte6r3qLY #EasyVolcap #4K4D
0
0
4
@HaotongLin
Haotong Lin
2 years
Excited to unveil Im4D at #SIGGRAPHAsia2023! Im4D: High-Fidelity and Real-Time Novel View Synthesis for Dynamic Scenes. Visit our project page to explore more cool demos! https://t.co/PsGEEoQxxI
0
1
18
@HaotongLin
Haotong Lin
3 years
Photorealistic rendering of dynamic scenes at interactive frame rates. Check out our SIGGRAPH Asia 2022 paper: Efficient Neural Radiance Fields for Interactive Free-viewpoint Video
Project page: https://t.co/yfNSBygsoL
GitHub: https://t.co/lhlqNnt7tC
1
1
13