Tianchang Shen Profile
Tianchang Shen (@TianchangS)
Followers: 341 · Following: 47 · Media: 29 · Statuses: 60
Joined May 2020
Tianchang Shen (@TianchangS) · 1 year ago
Generating nice meshes in AI pipelines is hard. Our #SIGGRAPHAsia2024 paper proposes a new representation which guarantees manifold connectivity, and even supports polygonal meshes -- a big step for downstream editing and simulation. (1/N) SpaceMesh: https://t.co/Puw7atMeW5
4 replies · 43 reposts · 153 likes
Huan Ling (@HuanLing6) · 15 days ago
1/ #NVIDIAGTC We’re excited to share that the ChronoEdit-14B model and the 8-step Distillation LoRA (4s/image on H100) are released today. 🤗 Model https://t.co/X3diGAY42p 🤗 Demo https://t.co/2xfiRo6wij 💡ChronoEdit brings temporal reasoning to the image editing task. It achieves SOTA…
5 replies · 35 reposts · 106 likes
Huan Ling (@HuanLing6) · 1 month ago
🕹️ We are excited to introduce "ChronoEdit: Towards Temporal Reasoning for Image Editing and World Simulation". ChronoEdit reframes image editing as a video generation task to encourage temporal consistency. It leverages a temporal reasoning stage that denoises with “video…
6 replies · 37 reposts · 140 likes
Sherwin Bahmani (@sherwinbahmani) · 2 months ago
📢 Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation. Got only one or a few images and wondering whether recovering the 3D environment is a reconstruction or a generation problem? Why not do it with a generative reconstruction model! We show that a…
19 replies · 71 reposts · 251 likes
Jiahui Huang (@huangjh_hjh) · 3 months ago
[1/N] 🎥 We've made available a powerful spatial AI tool named ViPE: Video Pose Engine, to recover camera motion, intrinsics, and dense metric depth from casual videos! Running at 3–5 FPS, ViPE handles cinematic shots, dashcams, and even 360° panoramas. 🔗 https://t.co/1mGDxwgYJt
13 replies · 105 reposts · 452 likes
Zian Wang (@zianwang97) · 5 months ago
🚀 We just open-sourced Cosmos DiffusionRenderer! This major upgrade brings significantly improved video de-lighting and re-lighting—powered by NVIDIA Cosmos and enhanced data curation. Released under Apache 2.0 and Open Model License. Try it out! 🔗 https://t.co/h87zZhodp0
↳ Quoting Zian Wang (@zianwang97) · 10 months ago
🚀 Introducing DiffusionRenderer, a neural rendering engine powered by video diffusion models. 🎥 Estimates high-quality geometry and materials from videos, synthesizes photorealistic light transport, enables relighting and material editing with realistic shadows and reflections
9 replies · 123 reposts · 525 likes
Huan Ling (@HuanLing6) · 5 months ago
We are excited to share Cosmos-Drive-Dreams 🚀 A bold new synthetic data generation (SDG) pipeline powered by world foundation models—designed to synthesize rich, challenging driving scenarios at scale. Models, Code, Dataset, and Toolkit are released. Website: …
11 replies · 44 reposts · 107 likes
Tianchang Shen (@TianchangS) · 5 months ago
📢 GEN3C is now open-sourced, with code released under Apache 2.0 and model weights under the NVIDIA Open Model License! 🚀 Along with it, we're releasing a GUI tool that lets you specify your desired video trajectory in 3D — come play with it and generate your own! The…
↳ Quoting Xuanchi Ren (@xuanchi13) · 8 months ago
🚀Excited to introduce GEN3C #CVPR2025, a generative video model with an explicit 3D cache for precise camera control. 🎥It applies to multiple use cases, including single-view and sparse-view NVS🖼️ and challenging settings like monocular dynamic NVS and driving simulation🚗.
1 reply · 28 reposts · 137 likes
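[Editor's note] The tweets above name GEN3C's key ingredient, an explicit 3D cache, without spelling it out. Below is a minimal sketch of the general idea only, not GEN3C's actual implementation: lift observed pixels into a world-space point cloud using depth, re-render that cache along the requested camera trajectory, and use the renders to condition the video model. The function names and the pinhole camera model are assumptions for illustration.

```python
import numpy as np

def unproject(depth, K, cam_to_world):
    """Lift a depth map into a world-space point cloud (pinhole model)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T            # per-pixel camera-space rays
    pts_cam = rays * depth.reshape(-1, 1)      # scale rays by metric depth
    pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=1)
    return (pts_h @ cam_to_world.T)[:, :3]

def render_cache(points, K, world_to_cam, h, w):
    """Z-buffer splat of cached points into a target view; returns depth."""
    pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    pts_cam = (pts_h @ world_to_cam.T)[:, :3]
    front = pts_cam[:, 2] > 1e-6               # discard points behind camera
    uvz = pts_cam[front] @ K.T
    uv = np.round(uvz[:, :2] / uvz[:, 2:3]).astype(int)
    depth_img = np.full((h, w), np.inf)
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    for (u, v), z in zip(uv[ok], uvz[ok, 2]):
        if z < depth_img[v, u]:                # keep the nearest surface
            depth_img[v, u] = z
    return depth_img
```

Under this framing, the diffusion model only has to fill disocclusions and harmonize appearance, which is what makes the camera control precise.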
Tianchang Shen (@TianchangS) · 7 months ago
FlexiCubes is now under Apache 2.0! 🎉 We've been excited to see FlexiCubes extracting high-quality meshes across the community in projects like TRELLIS and TripoSF --- now it's available with a more permissive license. Let's keep building. 💙 👉 FlexiCubes is in NVIDIA…
0 replies · 24 reposts · 121 likes
AK (@_akhaliq) · 8 months ago
Nvidia just released Cosmos-Transfer1 on Hugging Face: Conditional World Generation with Adaptive Multimodal Control.
8 replies · 97 reposts · 515 likes
Tianchang Shen (@TianchangS) · 8 months ago
Want precise control over the camera trajectory in your generated videos? Need to edit or remove objects in the scene? Check out how we leverage 3D in video models to make it happen! 🎉
↳ Quoting Xuanchi Ren (@xuanchi13) · 8 months ago
🚀Excited to introduce GEN3C #CVPR2025, a generative video model with an explicit 3D cache for precise camera control. 🎥It applies to multiple use cases, including single-view and sparse-view NVS🖼️ and challenging settings like monocular dynamic NVS and driving simulation🚗.
0 replies · 1 repost · 15 likes
Jay Wu (@jayzhangjiewu) · 8 months ago
Excited to share our #CVPR2025 paper: Difix3D+. Difix3D+ reimagines 3D reconstruction with single-step diffusion, distilling 2D generative priors for realistic novel view synthesis from large viewpoint shifts. 📄Paper: https://t.co/2qk0LP16Di 🌐Website: https://t.co/5O5XZWoJ5E
6 replies · 49 reposts · 224 likes
AK (@_akhaliq) · 8 months ago
Nvidia just dropped GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control.
6 replies · 74 reposts · 387 likes
Nick Sharp (@nmwsharp) · 1 year ago
We found a way to generate manifold, polygonal meshes from feature vectors at points -- even if the vectors are random, you are still guaranteed to get a manifold mesh! How? Halfedge meshes, permutations, spacetimes, and more! Check out the 🧵. This project was a blast!
↳ Quoting Tianchang Shen (@TianchangS) · 1 year ago
Generating nice meshes in AI pipelines is hard. Our #SIGGRAPHAsia2024 paper proposes a new representation which guarantees manifold connectivity, and even supports polygonal meshes -- a big step for downstream editing and simulation. (1/N) SpaceMesh: https://t.co/Puw7atMeW5
4 replies · 34 reposts · 176 likes
Tianchang Shen (@TianchangS) · 1 year ago
We’re excited about the possibilities that new representations offer for learning with meshes—there is still much to do! Come see our talk at SIGGRAPH Asia in Tokyo to learn more! https://t.co/Puw7atMeW5 (9/N)
0 replies · 0 reposts · 1 like
Tianchang Shen (@TianchangS) · 1 year ago
Our point cloud-to-mesh model can also be applied to mesh repair by casting it as “mesh inpainting,” without fine-tuning.
1 reply · 0 reposts · 6 likes
Tianchang Shen (@TianchangS) · 1 year ago
We further evaluate our model on the ShapeNet dataset. Our method generates sharp and compact polygonal meshes that match the input conditions and are guaranteed to be manifold. (7/N)
1 reply · 0 reposts · 3 likes
Tianchang Shen (@TianchangS) · 1 year ago
Trained on the ABC dataset, our model generates high-quality meshes with vertices and edges that align accurately with sharp features, highlighting the advantage of directly generating meshes as the output representation. (6/N)
1 reply · 0 reposts · 7 likes
Tianchang Shen (@TianchangS) · 1 year ago
We integrate SpaceMesh with a diffusion model to generate meshes conditioned on geometry provided as a point cloud. Given the same input geometry, our model can generate different styles of meshes depending on the distribution it was trained on. (5/N)
1 reply · 0 reposts · 4 likes
Tianchang Shen (@TianchangS) · 1 year ago
An exciting finding from our project is that the recently proposed spacetime distance [Law and Lucas 2023] is highly effective in representing mesh connectivity. Here, we compare it with other commonly used alternatives for representing graph connectivity. (4/N)
1 reply · 0 reposts · 5 likes
Tianchang Shen (@TianchangS) · 1 year ago
The big idea is to embed discrete halfedge connectivity via a continuous vector space. We carefully define a mapping which translates continuous feature vectors per-vertex into mesh connectivity. Now generating connectivity just means generating feature vectors at points! (3/N)
1 reply · 0 reposts · 4 likes