Chongjie(CJ) Ye Profile
Chongjie(CJ) Ye

@ychngji6

Followers
2K
Following
1K
Media
49
Statuses
484

Ph.D. student supervised by Prof. Xiaoguang Han

Joined November 2022
@ychngji6
Chongjie(CJ) Ye
5 months
Hi3DGen has been accepted to #ICCV2025. See you all in Hawaii! 🌴🥥
@ychngji6
Chongjie(CJ) Ye
8 months
Excited to announce Hi👋3DGen v0.1 - my first 3D generation project! Released at https://t.co/7w3yMJAG3V. Huge thanks to the teams of @DeemosTech, @VastAIResearch, and @BytedanceTalk for their guidance. We sincerely believe open collaboration drives AI innovation forward!
3
15
140
@LuLing26466911
Lu Ling
1 day
Tested several scenes using SAM-3D, very nice instance quality! Clearly a lot of effort went into curating the training dataset. A question comes to mind: how much training data do we need for a 3D foundation model? Do we see a clear scaling law in 3D?
@AIatMeta
AI at Meta
2 days
Today we’re excited to unveil a new generation of Segment Anything Models: 1️⃣ SAM 3 enables detecting, segmenting and tracking of objects across images and videos, now with short text phrases and exemplar prompts. 🔗 Learn more about SAM 3: https://t.co/tIwymSSD89 2️⃣ SAM 3D
0
1
6
@koguGameDev
kogu
2 days
Meta has released a 3D estimation model built on SAM3, so I reviewed the overview and tested the demo. Commercial use is also permitted. Trying out Meta's new 3D generation models, the "SAM3 3D" series https://t.co/zNk4rek8Q7
note.com
Meta has released new 3D generation models. Both the code and the models are available for commercial use. This time I reviewed the overview and tried the official demo to get a feel for them. Two models: the release consists of two models, "SAM3 3D Objects" and "SAM3 3D Body", both based on the general-purpose segmentation model Segment Anything Model 3 (SAM3). SAM3 3D Objects...
@AIatMeta
AI at Meta
2 days
Introducing SAM 3D, the newest addition to the SAM collection, bringing common sense 3D understanding of everyday images. SAM 3D includes two models: 🛋️ SAM 3D Objects for object and scene reconstruction 🧑‍🤝‍🧑 SAM 3D Body for human pose and shape estimation Both models achieve
0
104
436
@songyoupeng
Songyou Peng
2 days
📍GeoGuessr isn't just a game; it's a massive test of complicated visual reasoning and world knowledge. Very satisfied to see my efforts helped Gemini pass this test and beat human pros for the first time! Still a long way to go, but a dream milestone just unlocked 🔓
@scaling01
Lisan al Gaib
3 days
Gemini 3 Pro is the first LLM to beat professional human players at GeoGuessr
2
1
44
@iamsashasax
Sasha Sax
2 days
⚡ So excited to share what we've been cooking up at Meta Superintelligence Labs! Introducing SAM 3D 🗿🗿🗿 We’ve officially taken SAM to the next dimension: 2D images ➡️ detailed 3D scene reconstructions (with full instances segmented out). Yes it’s open-weights, open-recipe.
2
7
62
@XingangP
Xingang Pan
2 days
Introducing 📦𝗔𝗿𝘁𝗶𝗟𝗮𝘁𝗲𝗻𝘁🔧 (SIGGRAPH Asia 2025) — a high-quality 3D diffusion model that explicitly models object articulation, paving the way for richer, more realistic assets in embodied AI and simulation: – Generates fully articulated 3D objects – Physically
2
36
162
@songyoupeng
Songyou Peng
3 days
Look what we have been cooking for you #Gemini3 ! ✨ Beyond other capabilities, Gemini 3's spatial understanding and world knowledge are also truly next-level! Incredible to see the progress, and proud to have helped chart some of those new territories!!🚀
@GoogleDeepMind
Google DeepMind
3 days
This is Gemini 3: our most intelligent model that helps you learn, build and plan anything. It comes with state-of-the-art reasoning capabilities, world-leading multimodal understanding, and enables new agentic coding experiences. 🧵
1
2
60
@drfeifei
Fei-Fei Li
5 days
Had a lot of fun chatting with @lennysan !
@lennysan
Lenny Rachitsky
5 days
Dr. Fei-Fei Li (@drfeifei) is known as the “godmother of AI.” For the past two decades, she’s been at the center of AI’s most significant breakthroughs, including: - Spearheading ImageNet, the dataset that sparked the AI explosion we’re living through right now. - Leading work
15
60
500
@max_spero_
Max Spero
5 days
We were curious about our false positive rate, so we ran all ICLR 2022 reviews (pre-ChatGPT) as a baseline. Lightly AI-edited FPR: 1 in 1,000 Moderately AI-edited FPR: 1 in 5,000 Heavily AI-edited FPR: 1 in 10,000 Fully AI-generated: No false positives
@gneubig
Graham Neubig
6 days
ICLR authors, want to check if your reviews are likely AI generated? ICLR reviewers, want to check if your paper is likely AI generated? Here are AI detection results for every ICLR paper and review from @pangramlabs! It seems that ~21% of reviews may be AI?
12
31
386
@HaoZhao_AIRSUN
Hao Zhao
5 days
🚀 GaussianArt (3DV 2026) is here! A single-stage unified geometry–motion model that finally scales articulated reconstruction to 20+ parts with order-of-magnitude higher accuracy. Evaluated on MPArt-90, the largest articulated benchmark to date. Code + project page below 👇 🔗
1
23
158
@sainingxie
Saining Xie
7 days
papers are kind of like movies: the first one is usually the best, and the sequels tend to get more complicated but not really more exciting. But that totally doesn’t apply to the DepthAnything series. @bingyikang's team somehow keeps making things simpler and more scalable each
@bingyikang
Bingyi Kang
7 days
After a year of team work, we're thrilled to introduce Depth Anything 3 (DA3)! 🚀 Aiming for human-like spatial perception, DA3 extends monocular depth estimation to any-view scenarios, including single images, multi-view images, and video. In pursuit of minimal modeling, DA3
5
40
519
@bingyikang
Bingyi Kang
7 days
After a year of team work, we're thrilled to introduce Depth Anything 3 (DA3)! 🚀 Aiming for human-like spatial perception, DA3 extends monocular depth estimation to any-view scenarios, including single images, multi-view images, and video. In pursuit of minimal modeling, DA3
80
490
4K
@_akhaliq
AK
7 days
Depth Anything 3 Recovering the Visual Space from Any Views
7
112
762
@jianyuan_wang
Jianyuan
7 days
Beautiful work Depth Anything 3 by @HaotongLin, @bingyikang, and the team! btw I thought it would be named Depth Anything v3 😃
4
14
193
@robotsdigest
Robots Digest 🤖
9 days
Forget old datasets: Kinematify turns any image or text into a 3D model of a movable object.
1
23
109
@jcjohnss
Justin Johnson
9 days
Today we are launching Marble – a multimodal world model that lets you create and edit 3D worlds.
@theworldlabs
World Labs
9 days
Introducing Marble by World Labs: a foundation for a spatially intelligent future. Create your world at https://t.co/V267VJu1H9
5
8
89
@drfeifei
Fei-Fei Li
8 days
Check out how we used Marble to make the launch video for Marble! It’s so fun to work behind the scenes to use this technology to help create and tell stories!
@theworldlabs
World Labs
8 days
Today, we’re sharing a behind-the-scenes look at how we used Marble to create the worlds you see in our launch video.
27
71
457
@atasteoff
Shilong Liu
8 days
May I build a world in the world built by World Labs?
@theworldlabs
World Labs
9 days
Introducing Marble by World Labs: a foundation for a spatially intelligent future. Create your world at https://t.co/V267VJu1H9
1
1
6
@_akhaliq
AK
8 days
apply_texture_qwen_image_edit_2509
5
16
195
@DanielChao4
Daniel Chao
9 days
I'm so proud of our team @theworldlabs !!! This the beginning of a new era for interactive, editable, consistent, and beautiful 3D world models 🚀🚀🚀
@theworldlabs
World Labs
9 days
Introducing Marble by World Labs: a foundation for a spatially intelligent future. Create your world at https://t.co/V267VJu1H9
5
2
46
@chlassner
Christoph Lassner
9 days
Marble is now generally available. Experience seamless world building, starting from text, images, videos or 3D - to build and create or just to enjoy.
@theworldlabs
World Labs
9 days
Introducing Marble by World Labs: a foundation for a spatially intelligent future. Create your world at https://t.co/V267VJu1H9
5
5
44