Leo Bringer

@leo_bringer

Followers: 59
Following: 777
Media: 6
Statuses: 55

ML Researcher @CraftyApesVFX 🦧 - Prev Associate Researcher @UMich 〽️ - 3D Vision, Diffusion Models, VideoGen & Character Animation 🤼‍♀️

New York, NY
Joined May 2023
@leo_bringer
Leo Bringer
1 day
Such a cool feature!! Being able to import a 3D reconstructed scene as a mesh in Blender is a game changer. I guess the next step is 4D dynamic scenes with deformable meshes.
@Almorgand
Alexandre Morgand
2 days
"MILo: Mesh-In-the-Loop Gaussian Splatting for Detailed and Efficient.Surface Reconstruction". TL;DR: differentiably extract a mesh including both vertex locations and connectivity only from Gaussian parameters. Gradient flow from the mesh to GS
0
0
1
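A minimal sketch of the import step described above, assuming Blender 3.3+ (where the built-in OBJ importer is exposed as bpy.ops.wm.obj_import); the file path is a hypothetical placeholder:

```python
import bpy

# Import the reconstructed mesh (e.g. exported from a Gaussian-splatting
# pipeline such as MILo). The path is hypothetical.
bpy.ops.wm.obj_import(filepath="/path/to/reconstructed_scene.obj")

# The importer leaves the new objects selected; smooth-shade the meshes
# so the reconstructed surface reads better in the viewport.
for obj in bpy.context.selected_objects:
    if obj.type == 'MESH':
        bpy.context.view_layer.objects.active = obj
        bpy.ops.object.shade_smooth()
```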
@leo_bringer
Leo Bringer
23 days
RT @eustachelb: Transformers support is coming today! 🤗
0
5
0
@leo_bringer
Leo Bringer
26 days
Inspired by:
• Gen3C →
• Diffusion-as-Shader →
0
0
1
@leo_bringer
Leo Bringer
26 days
5. Rendering: Finally, we feed the depth video into a video diffusion model – basically using it as the renderer. This neural renderer “paints over” each coarse frame with higher-quality details, adding realistic textures and lighting. Because it’s guided by the depth and the…
1
0
1
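The thread doesn't name the video model used; as a stand-in, here is a hedged per-frame sketch using the diffusers depth ControlNet as the "neural renderer". A real pipeline would use a temporally consistent video diffusion model; the model IDs and prompt here are assumptions:

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Depth ControlNet acts as the stand-in neural renderer: it paints
# textures and lighting over each frame while respecting the depth layout.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet, torch_dtype=torch.float16).to("cuda")

depth_frames = [Image.open(f"depth/{i:04d}.png") for i in range(16)]
frames = []
for depth in depth_frames:
    # Re-seeding per frame keeps appearance roughly consistent; a true
    # video model would enforce temporal coherence properly.
    g = torch.Generator("cuda").manual_seed(0)
    frames.append(pipe("a car driving down a sunlit street",
                       image=depth, generator=g).images[0])
```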
@leo_bringer
Leo Bringer
26 days
4. Depth Video (Coarse Render): From the animated 3D scene, we add a precise camera trajectory and render a depth map for each frame. This “depth video” looks rough, but it encodes the exact 3D structure and motion. These depth frames act as a blueprint, providing geometric…
1
0
1
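A minimal sketch of this depth-video step, using trimesh + pyrender as an assumed renderer (any renderer with a z-buffer would do); the camera trajectory is a hypothetical dolly move:

```python
import numpy as np
import trimesh
import pyrender

# Load the animated scene's geometry for the current frame set.
tri = trimesh.load("scene.obj", force='mesh')
scene = pyrender.Scene()
scene.add(pyrender.Mesh.from_trimesh(tri))

camera = pyrender.PerspectiveCamera(yfov=np.pi / 3.0)
cam_node = scene.add(camera, pose=np.eye(4))

# Hypothetical dolly-back trajectory: one 4x4 camera pose per frame.
camera_trajectory = [
    trimesh.transformations.translation_matrix([0.0, 0.0, 2.0 + 0.05 * t])
    for t in range(16)
]

r = pyrender.OffscreenRenderer(640, 480)
depth_video = []
for pose in camera_trajectory:
    scene.set_pose(cam_node, pose)
    _, depth = r.render(scene)   # depth: (480, 640) float array
    depth_video.append(depth)
r.delete()
```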
@leo_bringer
Leo Bringer
26 days
3. Animation: To bring the scene to life, we rig and animate the 3D models. Essentially, we leverage standard 3D animation tools and open-source ML rigging & animation models to choreograph how everything moves. For example, we can make a character walk or have a car drive across…
1
0
1
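A minimal bpy sketch of the keyframing idea; the object name and motion values are hypothetical, and a real character rig would animate armature bones rather than the object transform:

```python
import bpy

obj = bpy.data.objects["Character"]  # hypothetical imported asset
scene = bpy.context.scene
scene.frame_start, scene.frame_end = 1, 120

# Drive the asset across the scene with location keyframes.
for frame, x in [(1, 0.0), (60, 4.0), (120, 8.0)]:
    obj.location.x = x
    obj.keyframe_insert(data_path="location", frame=frame)
```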
@leo_bringer
Leo Bringer
26 days
2. 3D Scene Reconstruction: We use images or prompts to generate the 3D scene. Using single-view 3D reconstruction, we can generate a rough 3D mesh of the entire environment and its main assets. Foreground elements (like characters or cars) can be extracted into separate 3D…
1
0
1
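A structure-only sketch of this step; the reconstruction function is a hypothetical placeholder, since the thread doesn't name the exact single-view model used:

```python
import trimesh

def single_view_recon(image_path: str) -> trimesh.Trimesh:
    """Hypothetical stand-in for a single-view 3D reconstruction model."""
    raise NotImplementedError

# Rough mesh of the whole environment, plus foreground assets
# reconstructed separately so they can be animated independently.
environment = single_view_recon("frame_full.png")
car = single_view_recon("frame_car_crop.png")

scene = trimesh.Scene()
scene.add_geometry(environment, node_name="environment")
scene.add_geometry(car, node_name="car")
scene.export("scene.glb")
```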
@leo_bringer
Leo Bringer
26 days
1. The starting point of this project is the observation that accurate depth videos generated from 3D reconstructed scenes provide rich, structured conditioning signals that make the video generation process more "3D-aware" without being overly constraining.
1
0
1
@leo_bringer
Leo Bringer
26 days
Controllable 3D-aware Video Generation: I'm developing a pipeline that builds a 3D scene and uses a controllable video model as a renderer. Why? To improve video-generation controllability over both scene composition and camera motion. The goal is to use the…
2
0
6
@leo_bringer
Leo Bringer
28 days
RT @Michael_J_Black: It's clear that video diffusion models know a lot about the 3D world, material properties, and lighting. The trick is…
0
11
0
@leo_bringer
Leo Bringer
29 days
Presented our poster at CVPR 2025 / HuMoGen this week in Nashville! Amazing to share ideas with the community and see our work in motion 🤸 Thanks to the @humogen11384 organizers for making it all happen. 🔗 #CVPR2025 #HuMoGen #AI #MotionPrediction
@leo_bringer
Leo Bringer
1 month
🚀 Our paper **MDMP** has been accepted at CVPR’25 - HuMoGen 🚀. We propose a multi-modal diffusion model that fuses textual action descriptions and 3D skeletal data to generate long-term human motion predictions, with interpretable uncertainty — paving the way for safer and…
0
2
11
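An illustrative PyTorch sketch of the text + skeleton fusion idea (not MDMP's actual architecture; all shapes and module sizes are assumptions for demonstration):

```python
import torch
import torch.nn as nn

class MultiModalDenoiser(nn.Module):
    """Illustrative only: fuse a text embedding with 3D skeletal frames
    as conditioning for a diffusion denoiser over motion sequences."""
    def __init__(self, n_joints=22, d_model=256, text_dim=512):
        super().__init__()
        self.skel_proj = nn.Linear(n_joints * 3, d_model)  # joints -> tokens
        self.text_proj = nn.Linear(text_dim, d_model)      # e.g. CLIP text emb
        self.time_emb = nn.Embedding(1000, d_model)        # diffusion timestep
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, n_joints * 3)

    def forward(self, noisy_motion, t, text_emb):
        # noisy_motion: (B, T, n_joints*3); t: (B,); text_emb: (B, text_dim)
        tokens = self.skel_proj(noisy_motion)
        cond = (self.text_proj(text_emb) + self.time_emb(t)).unsqueeze(1)
        x = torch.cat([cond, tokens], dim=1)        # prepend conditioning token
        return self.head(self.backbone(x)[:, 1:])   # per-frame noise prediction
```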
@leo_bringer
Leo Bringer
1 month
RT @forgegfx: Open Sourcing Forge: 3D Gaussian splat rendering for web developers! 3DGS has become a dominant paradigm for differentiable…
0
58
0
@leo_bringer
Leo Bringer
2 months
RT @DeemosTech: 🚨 Paper Alert. Our recent breakthrough CAST: Component-Aligned 3D Scene Reconstruction from an RGB Image has been accepted…
0
64
0
@leo_bringer
Leo Bringer
2 months
RT @OdinLovis: I am really super happy to show you my research that transforms 3D volumetric capture of man capture with @kartel_ai and with…
0
11
0
@leo_bringer
Leo Bringer
3 months
Very interesting article on Spatial Intelligence and 3D Awareness through Point Tracking: From my experience with current point-tracking techniques, the main issue that I've run into for generating a video by animating an image conditioned on Point…
0
0
1
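One way to extract such point tracks is CoTracker (an assumption; the article's exact method isn't specified), following the torch.hub usage from its README:

```python
import torch

# Load CoTracker via torch.hub (per the facebookresearch/co-tracker README).
cotracker = torch.hub.load("facebookresearch/co-tracker", "cotracker2")

# Placeholder clip; substitute real frames as (B, T, C, H, W) float values.
video = torch.zeros(1, 16, 3, 256, 256)

# Track a regular grid of query points across the clip.
pred_tracks, pred_visibility = cotracker(video, grid_size=10)
# pred_tracks: (B, T, N, 2) pixel coordinates, usable as a motion
# conditioning signal when animating a still image.
```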
@leo_bringer
Leo Bringer
4 months
I have been testing it in Blender and Nuke, and the 3D camera-trajectory estimates from DPVO+SLAM are pretty impressive; they could be very useful for matchmoving.
@davsca1
Davide Scaramuzza
9 months
Check out our #IROS2024 paper "Deep Visual Odometry with Events and Frames," the new state of the art in Visual Odometry, which outperforms learning-based image methods (DROID-SLAM, DPVO), model-based methods (ORB-SLAM, DSO) and event-based methods (DEVO, EDS) by up to 60%
0
0
0
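A bpy sketch of bringing an estimated camera trajectory into Blender for matchmoving; the pose format and values are hypothetical stand-ins for a DPVO/SLAM export:

```python
import bpy

cam = bpy.data.objects["Camera"]
cam.rotation_mode = 'QUATERNION'

# Hypothetical trajectory: (frame, location, quaternion (w, x, y, z)),
# as might be parsed from a visual-odometry pose export.
trajectory = [
    (1,  (0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)),
    (24, (0.5, 0.0, 0.1), (0.996, 0.087, 0.0, 0.0)),
]

# Keyframe the camera so the estimated motion drives the Blender view.
for frame, loc, quat in trajectory:
    cam.location = loc
    cam.rotation_quaternion = quat
    cam.keyframe_insert(data_path="location", frame=frame)
    cam.keyframe_insert(data_path="rotation_quaternion", frame=frame)
```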
@leo_bringer
Leo Bringer
4 months
RT @akanazawa: Exciting news! MegaSAM code is out 🔥 & the updated Shape of Motion results with MegaSAM are really impressive! A year ago I d…
0
170
0
@leo_bringer
Leo Bringer
4 months
RT @sidahuj: 🛠️ Blender MCP update: Bring in high-quality assets to Blender through just prompts, thanks to an integration with @polyhave…
0
222
0