Changkun (David) Liu
@liu_changkun
Followers: 20 · Following: 87 · Media: 0 · Statuses: 23
CSE PhD @ HKUST; BE @ SJTU
Hong Kong
Joined March 2023
PLANA3R: Zero-shot Metric Planar 3D Reconstruction via Feed-Forward Planar Splatting @liu_changkun, Bin Tan, Zeran Ke, Shangzhan Zhang, Jiachen Liu, Ming Qian, @NanXue7, Yujun Shen, Tristan Braud tl;dr: ViT-Base; depth and normal -> supervision; PlanarSplatting -> rendered depth
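The tl;dr above compresses the pipeline into a few arrows. A rough, hypothetical sketch of how such a feed-forward planar reconstructor could be trained is below; the class and function names (PlanaSketch, plane_head, render_fn) and the per-plane parameterization are placeholders and assumptions, not the released PLANA3R code. Only the high-level flow (ViT features -> planar primitives -> splatted depth/normals -> depth and normal supervision) follows the tweet.

```python
# Hypothetical sketch of a PLANA3R-style training step (assumed names and shapes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlanaSketch(nn.Module):
    def __init__(self, vit_backbone: nn.Module, embed_dim: int = 768, num_planes: int = 64):
        super().__init__()
        self.backbone = vit_backbone                      # e.g. a ViT-Base encoder (assumed)
        # Per-plane parameters: normal (3) + offset (1) + center (3); an assumed layout.
        self.plane_head = nn.Linear(embed_dim, num_planes * 7)
        self.num_planes = num_planes

    def forward(self, img_pair: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(img_pair)                   # (B, tokens, embed_dim)
        planes = self.plane_head(feats.mean(dim=1))       # pool tokens, predict plane params
        return planes.view(img_pair.shape[0], self.num_planes, 7)

def training_step(model, img_pair, gt_depth, gt_normal, render_fn):
    """Predict planar primitives, splat them to depth/normal maps, and supervise
    with depth + normal losses only (no plane-level labels), as the tweet states."""
    planes = model(img_pair)
    # render_fn stands in for a PlanarSplatting-style differentiable rasterizer.
    pred_depth, pred_normal = render_fn(planes)           # (B,1,H,W), (B,3,H,W) assumed
    depth_loss = F.l1_loss(pred_depth, gt_depth)
    normal_loss = (1.0 - F.cosine_similarity(pred_normal, gt_normal, dim=1)).mean()
    return depth_loss + normal_loss
```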
Paper: https://t.co/DSUD3wOaSp Project: https://t.co/gnJWrYoBcp Code: https://t.co/0FFTTAvCJR Demos: https://t.co/yEO0YOh3MP
PLANA3R: Zero-shot Metric Planar 3D Reconstruction via Feed-Forward Planar Splatting Abstract: This paper addresses metric 3D reconstruction of indoor scenes by exploiting their inherent geometric regularities with compact representations. Using planar 3D primitives…
Led by Changkun Liu, Bin Tan, and myself, with amazing collaborators Zeran Ke, Shangzhan Zhang, Jiachen Liu, Ming Qian, Yujun Shen, and Tristan Braud. Big thanks to everyone who made PLANA3R happen! Project page: https://t.co/WwV4xiyzSp Code:
github.com
[NeurIPS 2025] the official project page of a paper, "PLANA3R: Zero-shot Metric Planar 3D Reconstruction via Feed-Forward Planar Splatting" - lck666666/plana3r
Our PLANA3R sets a new SOTA on ScanNetV2, Matterport3D, and NYUv2-Plane. Beyond the metrics, high-level semantics emerge from only depth and normal supervision, producing 3D plane segmentations that often look better than the ground truth.
Happy to announce that our NeurIPS 2025 paper, PLANA3R, is now public! It's a continuation of our PlanarSplatting (CVPR 2025), but this time we go feed-forward, specializing in structured 3D reconstruction with planar primitives from two-view images.
As an AC of NeurIPS this year, I've seen that over 95% of reviewers engaged in author discussions after three rounds of reminders. However, one of our NeurIPS 2025 (main track) submissions received all positive reviews except for a single negative one (reject). We've responded
Animating 4D objects is complex: traditional methods rely on handcrafted, category-specific rigging representations. What if we could learn unified, category-agnostic, and scalable 4D motion representations from raw, unlabeled data? Introducing CANOR at #CVPR2025: a
Code dropped, and the paper is accepted at ICLR 2025. Note: during the review process the paper was renamed to "GS-CPR: Efficient Camera Pose Refinement via 3D Gaussian Splatting". Links in comments.
Paper (OpenReview; the arXiv version is not yet updated): https://t.co/xbrcUWDDls Project: https://t.co/xxvAQVFtHv Code:
github.com
[ICLR 2025] Official repo of "GS-CPR: Efficient Camera Pose Refinement via 3D Gaussian Splatting" - XRIM-Lab/GS-CPR
Code for our #ICLR2025 paper is now available: https://t.co/WbMxeazQIm Note: during the ICLR review process, we renamed our framework from GSLoc to GS-CPR in the camera-ready version, following reviewer comments.
I'm excited that our paper got into ICLR 2025! Great work done by @DavidLA05031686! This is another great work on camera localisation from our group @AVLOxford (other works are mostly from @ShuaiC8 and @wenjing_bian)
Paper accepted to #ICLR2025! "GSLoc: Efficient Camera Pose Refinement via 3D Gaussian Splatting" TL;DR: a novel test-time camera pose refinement framework leveraging 3DGS as the scene representation and MASt3R for 2D matching. https://t.co/zK99FUQhCy
A colleague alerted me to an ICLR 2025 submission @iclr_conf that is directly plagiarized from our CVPR 2023 @CVPR paper. #plagiarism #ICLR2025 Our paper: https://t.co/5vVlXKNknB ICLR 2025 submission:
openreview.net
Point cloud registration is a critical and challenging task in computer vision. It is difficult to avoid poor local minima since the cost function is significantly non-convex. Correspondences...
tl;dr: query -> pre-trained pose estimator -> initial pose -> pre-trained 3DGS + exposure-adaptive affine color transformation -> rendered image + depth; rendered image -> MASt3R -> 2D-2D matching; matches + rendered depth -> PnP + RANSAC -> refined pose
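Spelled out, that chain might look like the sketch below. This is a hypothetical outline, not the released GS-CPR code: estimate_initial_pose, render_gaussians, and mast3r_match are assumed stand-ins for the pre-trained pose estimator, the 3DGS renderer (with its exposure-adaptive affine color transform), and MASt3R matching, and the camera-to-world pose convention is an assumption.

```python
# Hypothetical sketch of a GS-CPR/GSLoc-style test-time pose refinement loop.
import numpy as np
import cv2

def backproject(pts2d: np.ndarray, depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Lift 2D pixels of the rendered view to camera-frame 3D points via rendered depth."""
    z = depth[pts2d[:, 1].astype(int), pts2d[:, 0].astype(int)]
    x = (pts2d[:, 0] - K[0, 2]) * z / K[0, 0]
    y = (pts2d[:, 1] - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

def refine_pose(query_img, K, estimate_initial_pose, render_gaussians, mast3r_match):
    # 1. Coarse pose from a pre-trained absolute pose estimator (assumed helper).
    init_pose = estimate_initial_pose(query_img)          # assumed 4x4 camera-to-world
    # 2. Render image + depth from the pre-trained 3DGS at that pose (assumed helper,
    #    exposure-adaptive affine color transform applied inside).
    rendered_img, rendered_depth = render_gaussians(init_pose)
    # 3. 2D-2D matches between the query and the rendered image via MASt3R (assumed helper).
    pts_query, pts_rendered = mast3r_match(query_img, rendered_img)   # (N,2), (N,2)
    # 4. Lift rendered-view matches to 3D with the rendered depth, move them to
    #    world coordinates, then refine the pose with PnP + RANSAC.
    pts3d_cam = backproject(pts_rendered, rendered_depth, K)
    R, t = init_pose[:3, :3], init_pose[:3, 3]
    pts3d_world = pts3d_cam @ R.T + t
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        pts3d_world.astype(np.float64), pts_query.astype(np.float64), K, None)
    return (rvec, tvec) if ok else None
```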
GSLoc: Efficient Camera Pose Refinement via 3D Gaussian Splatting Changkun Liu, @ShuaiC8, @ysbhalgat, Siyan Hu, @ziruiwang_, Ming Cheng, @viprad, Tristan Braud https://t.co/2UTdrKWxrH
GSLoc: Efficient Camera Pose Refinement via 3D Gaussian Splatting Paper: https://t.co/3kC3Toy9Zl Project: https://t.co/nFiihGcwXR 1 | 2