Changil Kim
@_ChangilKim
Followers: 344 · Following: 172 · Media: 0 · Statuses: 29
I am hiring research interns for 2025 at Meta. Please apply via the following link and drop me a line if you're interested in working on cutting-edge research problems in neural rendering, 3D reconstruction, gen AI, etc. https://t.co/Yxcz7CvOPo
3D Gaussian Splatting is awesome! BUT, the 3D Gaussian primitives often have difficulty reconstructing fine appearance details. Meet Textured Gaussians! Our RGBA Textured Gaussians enhance 3D scene appearance modeling. Small change, BIG improvement!
Check out our work on Textured Gaussians that I've been cooking with my brilliant intern @BrianCChao and other amazing collaborators! Paper: https://t.co/xyUbGxHnjr Webpage:
arxiv.org
3D Gaussian Splatting (3DGS) has recently emerged as a state-of-the-art 3D reconstruction and rendering technique due to its high-quality results and fast training and rendering time. However,...
Excited to share our new paper on improving Gaussian splatting novel-view synthesis quality through augmenting Gaussians with RGBA texture maps! (1/n)
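The core idea of augmenting Gaussians with RGBA texture maps can be sketched as follows. This is a minimal illustration of the concept, not the paper's implementation: the helper names, the bilinear sampling, and the multiplicative modulation of a Gaussian's base color/opacity by a sampled texel are all my assumptions.

```python
import numpy as np

def sample_gaussian_texture(texture, uv):
    """Bilinearly sample an RGBA texture of shape (H, W, 4) at uv in [0, 1]^2.

    Hypothetical helper illustrating per-Gaussian RGBA textures; the
    paper's actual texture parameterization may differ.
    """
    h, w, _ = texture.shape
    x = np.clip(uv[0] * (w - 1), 0, w - 1)
    y = np.clip(uv[1] * (h - 1), 0, h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = texture[y0, x0] * (1 - fx) + texture[y0, x1] * fx
    bot = texture[y1, x0] * (1 - fx) + texture[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def shade_splat(base_rgb, base_alpha, texture, uv):
    """Modulate a Gaussian's constant color/opacity with a texel,
    giving spatially varying appearance across the splat."""
    texel = sample_gaussian_texture(texture, uv)
    rgb = base_rgb * texel[:3]      # spatially varying color
    alpha = base_alpha * texel[3]   # spatially varying opacity
    return rgb, alpha
```

With an all-ones texture this reduces to vanilla 3DGS shading, which is one way to see why the change is small yet expressive.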
Beyond photos/videos, what's the next medium for sharing visual experiences? NeRF/3DGS are great but only capture a static 3D world and ignore ambient scene dynamics. In this paper, we show how we can reconstruct/render these essential scene elements even with a single
We're looking for summer interns 2024 in Reality Labs at Meta. Apply via the link below and reach out to me if you're excited about neural rendering, scene reconstruction, view synthesis, and generative models. https://t.co/5HGzzz1KME
Check out our work on real-time NeRF rendering for VR! https://t.co/SxreTyoOah
Introducing VR-NeRF (@SIGGRAPHAsia 2023): multi-camera rig for multi-view HDR capture; perceptual HDR optimization + level of detail; real-time multi-GPU VR rendering. Project: https://t.co/2Kjpez8mcK Paper: https://t.co/5ZSQ0vp1qv Dataset: https://t.co/nngHuWpRjX
Time flies!
Best Paper Award at #3DV2016 Depth from Gradients in Dense Light Fields for Object Reconstruction Kaan Yucer, Changil Kim @_ChangilKim, Alexander Sorkine-Hornung, Olga Sorkine-Hornung @OlgaSorkineH
#TBThursday #3DV2024
Andreas is presenting our work on Local Radiance Fields at CVPR this morning! Come to Poster 6 to check out our work! Project page:
Turn your casual videos into immersive 3D rendering! How? (1) Modeling scenes with multiple LOCAL radiance fields (2) Optimizing poses PROGRESSIVELY. See you at the AM poster session today! Video: https://t.co/Si1yj3fBVz Web: https://t.co/4k5bPZxlKs
#CVPR2023
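The "multiple local radiance fields" idea above can be sketched greedily: walk the progressively estimated camera trajectory and start a new local field whenever the camera leaves the current field's coverage. The radius threshold and the allocation rule here are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def allocate_local_fields(camera_positions, radius=1.0):
    """Greedy sketch of allocating local radiance fields along a
    camera trajectory. A new field is anchored whenever the camera
    moves more than `radius` away from the last field's center.
    Returns the field centers and a per-frame field assignment.
    """
    centers, assignment = [], []
    for p in camera_positions:
        if not centers or np.linalg.norm(p - centers[-1]) > radius:
            centers.append(np.asarray(p, dtype=float))
        assignment.append(len(centers) - 1)
    return centers, assignment
```

Processing frames in order (rather than all at once) is what lets pose estimation proceed progressively alongside field allocation.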
Ben (@imarhombus) is presenting our CVPR highlight paper on HyperReel, a high-fidelity 6-DoF neural volumetric video representation, this morning in the exhibition hall (Poster 13). Come check out our work! Project webpage:
New video! Check out HyperReel: High-Fidelity 6-DoF Video! (#CVPR2023 highlight) HyperReel achieves: fast rendering, high-fidelity free-view synthesis, and memory efficiency. https://t.co/oK3BxOtj2w
We presented our work on joint optimization of spatiotemporal radiance fields and camera poses for dynamic scenes on Tuesday at CVPR. The code is available now! See the project webpage:
SUPER excited to present ROBUST Dynamic Radiance Fields! Existing methods cannot handle casual videos where SfM does not work. Our work improves the robustness and can create 3D videos from ANY video. Come find us in the AM session! https://t.co/AQW3fmPxU4
#CVPR2023
#CVPR2023 is mostly posters this year, which is good! Poster sessions can be great. But a lot of people are not so great at presenting their poster. Here's some of the advice I give grad students when preparing for poster sessions.
We just released the code and pretrained models (with MIT license!) https://t.co/z0YnzEZOx3 as well as all the video data/results used in our work: https://t.co/KeLjy2T0Db Check out the explainer video:
New video! Check out the video to learn how to create immersive 3D rendering from casual videos! https://t.co/Si1yj3fBVz
Check out HyperReel (#CVPR2023 highlight)! HyperReel is a 6-DoF video representation with high-fidelity quality, fast rendering speed, and a small memory footprint! Project: https://t.co/mKbAUcIULW Paper: https://t.co/RW7VevMFlV Code: https://t.co/N9xjvdITVR
Check out our fun work on robust view synthesis! Using a casual video as input, our method jointly (1) estimates accurate camera poses and (2) reconstructs multiple local radiance fields for a large scene. Project: https://t.co/4k5bPZxlKs
Excited to share our #CVPR2023 paper on synthesizing new views along a camera trajectory from a **single image**! How? The good old epipolar constraints in a pose-guided diffusion model! Paper: https://t.co/B5rGOHRfcE Project: https://t.co/ududk1pDpv
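The epipolar constraint referenced here is classical two-view geometry: given the relative pose between the input and target views, a pixel in one view maps to a line in the other, which can then guide where a diffusion model attends. A minimal sketch of that geometry, assuming shared intrinsics K (the function names are mine, not the paper's):

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]], dtype=float)

def fundamental_matrix(K, R, t):
    """Fundamental matrix F mapping a pixel in view 1 to its epipolar
    line in view 2, for relative pose (R, t) and shared intrinsics K."""
    E = skew(t) @ R                  # essential matrix
    Kinv = np.linalg.inv(K)
    return Kinv.T @ E @ Kinv

def epipolar_line(F, x):
    """Line coefficients (a, b, c) with a*u + b*v + c = 0 in view 2,
    for pixel x = (u, v) in view 1 (homogenized internally)."""
    return F @ np.array([x[0], x[1], 1.0])
```

For a pure horizontal translation with identity intrinsics, the epipolar line of a pixel is its own image row in the other view, which is a quick sanity check on the convention.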
We've just released the code & all results for our Neural Light Fields work that will appear at CVPR 2022! Code & Results: https://t.co/UUsYmcFYmL Project: https://t.co/OzBehwHtgJ
@imarhombus @jbhuang0604 @MZollhoefer @JPKopf
github.com
This repository contains the code for Learning Neural Light Fields with Ray-Space Embedding Networks. - facebookresearch/neural-light-fields
We are hiring research interns for 2022. Apply via the linked webpage and drop me a line!
Interested in vision/graphics/ML? The Computational Photography group is looking for research interns!! Come work with us! It would be so much fun! Please retweet/share with students who may be interested! https://t.co/VFknJAIiAO