Chongjie (CJ) Ye
@ychngji6
Followers: 2K · Following: 1K · Media: 49 · Statuses: 484
Ph.D. student supervised by Prof. Xiaoguang Han
Joined November 2022
Hi3DGen has been accepted to #ICCV2025. See you all in Hawaii! 🌴🥥
Excited to announce Hi👋3DGen v0.1 - my first 3D generation project! Released at https://t.co/7w3yMJAG3V. Huge thanks to the teams of @DeemosTech, @VastAIResearch, and @BytedanceTalk for their guidance. We sincerely believe open collaboration drives AI innovation forward!
💬 3 · 🔁 15 · ❤️ 140
Tested several scenes using SAM 3D. Very nice instance quality! Clearly a lot of effort went into curating the training dataset. A question comes to mind: how much training data do we need for a 3D foundation model? Do we see a clear scaling law in 3D?
Today we’re excited to unveil a new generation of Segment Anything Models:
1️⃣ SAM 3 enables detecting, segmenting and tracking of objects across images and videos, now with short text phrases and exemplar prompts. 🔗 Learn more about SAM 3: https://t.co/tIwymSSD89
2️⃣ SAM 3D …
💬 0 · 🔁 1 · ❤️ 6
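CJ's scaling-law question has a concrete shape: in the LLM literature, a "clear scaling law" means validation loss falling as a power law in dataset size. Here is a minimal sketch of how one might test that for a 3D foundation model; every number below is made up for illustration, and the power-law form is borrowed from LLM scaling-law work, not from any SAM 3D result:

```python
# Hypothetical sketch: what "a clear scaling law on 3D" would look like in practice.
# The (dataset_size, val_loss) points are invented; a real study would sweep
# training-set sizes and fit the same power law L(N) = a * N**(-alpha) used
# in LLM scaling-law papers.
import numpy as np

dataset_sizes = np.array([1e4, 1e5, 1e6, 1e7])      # number of 3D training assets (hypothetical)
val_losses    = np.array([0.80, 0.55, 0.38, 0.26])  # validation loss at each size (hypothetical)

# A power law is a straight line in log-log space: log L = log a - alpha * log N.
slope, intercept = np.polyfit(np.log(dataset_sizes), np.log(val_losses), deg=1)
alpha, a = -slope, np.exp(intercept)
print(f"fitted L(N) ~ {a:.2f} * N^(-{alpha:.3f})")

# If the fit stays tight across orders of magnitude, that is the "clear scaling
# law"; extrapolating it estimates how much data a target loss would require.
```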
Meta released 3D estimation models built on SAM3, so I went over the basics and tested the demos. Commercial use is also allowed. Trying out Meta's new "SAM3 3D" series of 3D generation models https://t.co/zNk4rek8Q7
note.com
Meta has released new 3D generation models. Both the code and the models can be used commercially. This post covers an overview and a hands-on try of the official demo. Two models: the release consists of two models built on the general-purpose segmentation model Segment Anything Model 3 (SAM3): "SAM3 3D Objects" and "SAM3 3D Body". SAM3 3D Objects is for obj…
Introducing SAM 3D, the newest addition to the SAM collection, bringing common sense 3D understanding of everyday images. SAM 3D includes two models:
🛋️ SAM 3D Objects for object and scene reconstruction
🧑🤝🧑 SAM 3D Body for human pose and shape estimation
Both models achieve …
💬 0 · 🔁 104 · ❤️ 436
📍GeoGuessr isn't just a game; it's a massive test of complex visual reasoning and world knowledge. Very satisfying to see that my efforts helped Gemini pass this test and beat human pros for the first time! Still a long way to go, but a dream milestone just unlocked 🔓
💬 2 · 🔁 1 · ❤️ 44
⚡ So excited to share what we've been cooking up at Meta Superintelligence Labs! Introducing SAM 3D 🗿🗿🗿 We’ve officially taken SAM to the next dimension: 2D images ➡️ detailed 3D scene reconstructions (with full instances segmented out). Yes it’s open-weights, open-recipe.
💬 2 · 🔁 7 · ❤️ 62
Introducing 📦𝗔𝗿𝘁𝗶𝗟𝗮𝘁𝗲𝗻𝘁🔧 (SIGGRAPH Asia 2025) — a high-quality 3D diffusion model that explicitly models object articulation, paving the way for richer, more realistic assets in embodied AI and simulation:
– Generates fully articulated 3D objects
– Physically …
💬 2 · 🔁 36 · ❤️ 162
Look what we have been cooking for you: #Gemini3! ✨ Beyond other capabilities, Gemini 3's spatial understanding and world knowledge are also truly next-level! Incredible to see the progress, and proud to have helped chart some of those new territories!! 🚀
This is Gemini 3: our most intelligent model that helps you learn, build and plan anything. It comes with state-of-the-art reasoning capabilities, world-leading multimodal understanding, and enables new agentic coding experiences. 🧵
💬 1 · 🔁 2 · ❤️ 60
Had a lot of fun chatting with @lennysan!
Dr. Fei-Fei Li (@drfeifei) is known as the “godmother of AI.” For the past two decades, she’s been at the center of AI’s most significant breakthroughs, including:
- Spearheading ImageNet, the dataset that sparked the AI explosion we’re living through right now.
- Leading work …
💬 15 · 🔁 60 · ❤️ 500
We were curious about our false positive rate, so we ran all ICLR 2022 reviews (pre-ChatGPT) as a baseline.
- Lightly AI-edited FPR: 1 in 1,000
- Moderately AI-edited FPR: 1 in 5,000
- Heavily AI-edited FPR: 1 in 10,000
- Fully AI-generated: no false positives
ICLR authors, want to check if your reviews are likely AI generated? ICLR reviewers, want to check if your paper is likely AI generated? Here are AI detection results for every ICLR paper and review from @pangramlabs! It seems that ~21% of reviews may be AI?
💬 12 · 🔁 31 · ❤️ 386
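To put those quoted rates in perspective: the number of innocent, fully human-written reviews that get flagged scales linearly with corpus size. A back-of-envelope sketch; the review count below is a hypothetical placeholder, not the actual ICLR figure:

```python
# Back-of-envelope: expected false positives at the quoted FPRs.
# n_reviews is a hypothetical placeholder; the real ICLR 2022 review
# count is not given in the thread.
n_reviews = 10_000

fprs = {
    "lightly AI-edited":    1 / 1_000,
    "moderately AI-edited": 1 / 5_000,
    "heavily AI-edited":    1 / 10_000,
}

for label, fpr in fprs.items():
    # Expected flags among human-written reviews = corpus size * FPR.
    print(f"{label}: ~{n_reviews * fpr:.0f} expected false positives per {n_reviews:,} reviews")
```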
🚀 GaussianArt (3DV 2026) is here! A single-stage unified geometry–motion model that finally scales articulated reconstruction to 20+ parts with order-of-magnitude higher accuracy. Evaluated on MPArt-90, the largest articulated benchmark to date. Code + project page below 👇 🔗
💬 1 · 🔁 23 · ❤️ 158
papers are kind of like movies: the first one is usually the best, and the sequels tend to get more complicated but not really more exciting. But that totally doesn’t apply to the DepthAnything series. @bingyikang's team somehow keeps making things simpler and more scalable each …
After a year of team work, we're thrilled to introduce Depth Anything 3 (DA3)! 🚀 Aiming for human-like spatial perception, DA3 extends monocular depth estimation to any-view scenarios, including single images, multi-view images, and video. In pursuit of minimal modeling, DA3 …
💬 5 · 🔁 40 · ❤️ 519
After a year of team work, we're thrilled to introduce Depth Anything 3 (DA3)! 🚀 Aiming for human-like spatial perception, DA3 extends monocular depth estimation to any-view scenarios, including single images, multi-view images, and video. In pursuit of minimal modeling, DA3 …
💬 80 · 🔁 490 · ❤️ 4K
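For anyone who wants to try the single-image case that the Depth Anything line started from, the Hugging Face depth-estimation pipeline is a minimal entry point. This sketch uses a Depth Anything V2 checkpoint as a stand-in, since DA3's own checkpoints and API are not shown in the thread; the image path is a placeholder:

```python
# Minimal monocular depth sketch using the Hugging Face "depth-estimation" pipeline.
# The checkpoint is a Depth Anything V2 model used as a stand-in; DA3's release
# may ship different weights and APIs. "scene.jpg" is a placeholder path.
from PIL import Image
from transformers import pipeline

depth = pipeline("depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf")

image = Image.open("scene.jpg")
result = depth(image)

# result["depth"] is a PIL image of the relative depth map;
# result["predicted_depth"] is the raw tensor.
result["depth"].save("scene_depth.png")
```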
Beautiful work on Depth Anything 3 by @HaotongLin, @bingyikang, and the team! btw I thought it would be named Depth Anything v3 😃
💬 4 · 🔁 14 · ❤️ 193
Forget old datasets: Kinematify turns any image or text into a 3D model of a movable object.
💬 1 · 🔁 23 · ❤️ 109
Today we are launching Marble – a multimodal world model that lets you create and edit 3D worlds.
Introducing Marble by World Labs: a foundation for a spatially intelligent future. Create your world at https://t.co/V267VJu1H9
💬 5 · 🔁 8 · ❤️ 89
May I build a world in the world built by World Labs?
Introducing Marble by World Labs: a foundation for a spatially intelligent future. Create your world at https://t.co/V267VJu1H9
💬 1 · 🔁 1 · ❤️ 6
I'm so proud of our team @theworldlabs!!! This is the beginning of a new era for interactive, editable, consistent, and beautiful 3D world models 🚀🚀🚀
Introducing Marble by World Labs: a foundation for a spatially intelligent future. Create your world at https://t.co/V267VJu1H9
💬 5 · 🔁 2 · ❤️ 46
Marble is now generally available. Experience seamless world building, starting from text, images, videos, or 3D: to build and create, or just to enjoy.
Introducing Marble by World Labs: a foundation for a spatially intelligent future. Create your world at https://t.co/V267VJu1H9
💬 5 · 🔁 5 · ❤️ 44