Min-Hung (Steve) Chen

@CMHungSteven

2K Followers · 31K Following · 58 Media · 752 Statuses

Senior Research Scientist, NVR TW @NVIDIAAI @NVIDIA (Project Lead: DoRA, EoRA) | Ph.D. @GeorgiaTech | Multimodal AI | https://t.co/dKaEzVoTfZ

Taipei City, Taiwan
Joined July 2011
@CMHungSteven
Min-Hung (Steve) Chen
3 years
(1/N) Are you looking for #Vision #Transformer papers in various areas? Check out this list of papers covering a broad range of tasks! https://t.co/kMThHeO7Gg Feel free to share with others 😀 @Montreal_AI @machinelearnflx @hardmaru @ak92501 @arankomatsuzaki @omarsar0
github.com
An ultimately comprehensive paper list of Vision Transformer/Attention, including papers, codes, and related websites - cmhungsteve/Awesome-Transformer-Attention
Replies: 3 · Reposts: 31 · Likes: 167
@JosmyFaure1
Josmy Faure
5 days
🎬 ICCV'25 just wrapped and we're rolling into EMNLP'25! Our paper "MovieCORE: Cognitive Reasoning in Movies" was accepted as an Oral presentation 🚀 #EMNLP2025 #NLP #ComputerVision #VideoUnderstanding #VisionLanguageModels #AI #MachineLearning #DeepLearning #VLM #LLM
Replies: 1 · Reposts: 1 · Likes: 6
@SimonXinDong
X. Dong
14 days
We at NVIDIA present "Length Penalty Done Right": cut CoT length by 3/4 without sacrificing accuracy, using only RL. This makes DeepSeek-R1-7B run ~8 times faster on AIME-24 while maintaining the same accuracy.
Replies: 8 · Reposts: 29 · Likes: 245
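The thread doesn't give the exact objective; a common way to realize such a length penalty in RL post-training is to subtract a term proportional to chain-of-thought length from the task reward. A minimal sketch, assuming a binary correctness reward and an illustrative penalty coefficient `alpha` (names and values are hypothetical, not the paper's formulation):

```python
def length_penalized_reward(correct: bool, cot_len: int,
                            max_len: int = 8192, alpha: float = 0.5) -> float:
    """Toy length-penalized reward for RL post-training.

    Hypothetical shaping: subtract a penalty proportional to the
    chain-of-thought length, but only for correct answers, so the
    policy is nudged toward concise reasoning rather than toward
    short-but-wrong outputs.
    """
    reward = 1.0 if correct else 0.0
    if correct:
        reward -= alpha * min(cot_len / max_len, 1.0)  # clipped linear penalty
    return reward
```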
@zhoubolei
Bolei Zhou
19 days
Welcome to the workshop at ICCV. In the afternoon session, I will give a talk on our effort toward learning physical AI for sidewalk autonomy.
@chen_yiting_TW
Yi-Ting Chen
19 days
📣 Join us for the ICCV'25 X-Sense Workshop at Hawai'i Convention Center @ Room 323C on Monday, Oct. 20! Link: https://t.co/FV7wCU92sY
Replies: 0 · Reposts: 2 · Likes: 18
@katielulula
Katie Luo
20 days
If you're at #ICCV2025, Hawaii, make sure to drop by the X-Sense workshop at Hawai'i Convention Center @ Room 323C on Monday, Oct. 20. Join us for a discussion on the future of x-modal sensing! 📸📍 Link:
Replies: 2 · Reposts: 2 · Likes: 6
@CMHungSteven
Min-Hung (Steve) Chen
20 days
#ICCV2025 is around the corner! Don't hesitate to visit @HsukuangChiu's V2V-GoT poster @ X-Sense Workshop to learn about our latest LLM-based Cooperative Driving work! Workshop: https://t.co/WaIl0uuJij V2V-GoT: https://t.co/u9tFpmoxuB @ICCVConference #V2V #LLM #iccv25 #NVIDIA #CMU
@HsukuangChiu
Hsu-kuang Chiu
20 days
Excited to have a poster presentation for our latest research V2V-GoT at #ICCV2025 X-Sense Workshop! 🗓 Date & Time: Oct 20th, Monday, 11:40am ~ 12:30pm 📍 Location: Exhibition Hall II (No 188 ~ 210) 🌐 Paper, code, and dataset: https://t.co/GTnWrShw80 #NVIDIA #CMU
Replies: 0 · Reposts: 0 · Likes: 9
@CMHungSteven
Min-Hung (Steve) Chen
20 days
#ICCV2025 is around the corner! Don't hesitate to visit @JosmyFaure1's HERMES poster to learn about our latest efficient video understanding work! 🌐 Website: https://t.co/LFQgh9mbfC
@JosmyFaure1
Josmy Faure
22 days
🎉 Excited for #ICCV2025! We'll present HERMES, our cognitive-inspired framework that makes video models both faster and smarter. 📍 Poster Session: 🗓️ Thu. 23 Oct. 🕓 11:15 AM - 1:15 PM local time 📌 Honolulu Convention Center, Exhibit Hall I #2114
Replies: 0 · Reposts: 0 · Likes: 13
@KBlueleaf
็ฅ็€้’่‘‰@LyCORIS
25 days
(1/6) I built KohakuHub, a fully self-hosted HF alternative with HF compatibility and a familiar experience. 🐧 Host your own data, keep your workflow. More information in the repository and our community! 🔗 https://t.co/7ZBOkDK5FJ 💬 https://t.co/HVjx3Rg9XA
Replies: 7 · Reposts: 20 · Likes: 114
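KohakuHub claims HF API compatibility; standard `huggingface_hub` tooling can usually be pointed at a self-hosted endpoint via the `HF_ENDPOINT` environment variable. A minimal sketch, assuming your deployment implements the same REST API the client expects (the URL and repo name are placeholders):

```python
import os

# Point huggingface_hub at a self-hosted hub instead of huggingface.co.
# (Hypothetical URL; substitute your own KohakuHub deployment.)
os.environ["HF_ENDPOINT"] = "https://hub.example.org"

from huggingface_hub import snapshot_download

# Download a repo snapshot from the self-hosted hub; this works only if
# the server implements the REST API that huggingface_hub expects.
local_path = snapshot_download(repo_id="my-org/my-model")
print(local_path)
```

Setting `HF_ENDPOINT` before importing `huggingface_hub` matters, since the client reads the endpoint at import time.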
@gm8xx8
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
30 days
Tina proved that LoRA can match or surpass full-parameter RL. Tora builds directly on that result, turning it into a full framework. Built on torchtune, it extends RL post-training to LoRA, QLoRA, DoRA, and QDoRA under one interface, with GRPO, FSDP, and compile support.
@gm8xx8
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
7 months
Tina: Tiny Reasoning Models via LoRA. LoRA-RL tuned 1.5B models on curated reasoning data, achieving +20% gains and 43% Pass@1 (AIME24) at $9 total cost. Outperforms full-parameter RL on DeepSeek-R1-Distill-Qwen-1.5B. LoRA-based RL yields better performance with less compute.
Replies: 4 · Reposts: 29 · Likes: 302
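For context on why LoRA-based RL is so cheap: LoRA freezes the base weights W and trains only a low-rank update ΔW = BA, so the optimizer touches a tiny fraction of the parameters. A minimal PyTorch sketch of a LoRA linear layer (rank, scaling, and init follow common practice; this is not Tora's actual code):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 32.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # only the adapter is trained
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank            # standard LoRA scaling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap a projection inside a transformer block.
layer = LoRALinear(nn.Linear(4096, 4096), rank=16)
```

Because B starts at zero, the wrapped layer initially behaves exactly like the frozen base model, and RL gradients flow only through A and B.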
@CMHungSteven
Min-Hung (Steve) Chen
1 month
[#hiring] I'm seeking PhD #Interns for 2026 at #NVIDIAResearch Taiwan! If interested, please send your CV and cover letter to minhungc [at] nvidia [dot] com. 🔎 Research topics: Efficient Video/4D Understanding & Reasoning. 📍 Location: Taiwan / Remote (mainly APAC) #internships
Replies: 1 · Reposts: 17 · Likes: 112
@CMHungSteven
Min-Hung (Steve) Chen
2 months
[#EMNLP2025] Super excited to share MovieCORE @emnlpmeeting (Oral) - New #VideoUnderstanding Benchmark on System-2 Reasoning! 👉 Check the original post from @JosmyFaure1 for more details! 📷 Project: https://t.co/pmR8WCunyW #VLM #LLM #Video #multimodal #AI #NVIDIA #NTU #NTHU
@JosmyFaure1
Josmy Faure
2 months
🚀 New Benchmark Alert! Our paper MovieCORE: COgnitive REasoning in Movies is accepted at #EMNLP2025 (Oral) 🎉 Movies aren't just "what happened", they're why it happened, how characters feel, and what it means. MovieCORE tests Vision-Language Models on System-2 reasoning.
Replies: 0 · Reposts: 1 · Likes: 18
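The thread doesn't describe the evaluation protocol; a generic video-QA loop of the kind such benchmarks typically use might look like the sketch below (the `vlm`, `dataset`, and `judge` interfaces are hypothetical, not MovieCORE's official harness):

```python
def evaluate_video_qa(vlm, dataset, judge):
    """Generic video-QA evaluation loop (illustrative only).

    Assumed interfaces: `dataset` yields (video, question, reference)
    triples, `vlm.generate` returns a free-form answer, and `judge.score`
    rates that answer against the reference (e.g., 0.0-1.0).
    """
    scores = []
    for video, question, reference in dataset:
        answer = vlm.generate(video, question)      # open-ended answer
        scores.append(judge.score(answer, reference))
    return sum(scores) / len(scores)                # mean benchmark score
```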
@CMHungSteven
Min-Hung (Steve) Chen
2 months
📣 Still Open for Submissions - X-Sense Workshop @ICCVConference! 📅 Deadline: September 8, 2025, 09:59 AM GMT 📍 Submission Portal: https://t.co/l2y89nLVx4 🌐 More info: https://t.co/3LRd386Bkm #ICCV2025 #ICCV #ICCV25 #CFP #NYCU #Cornell #NVIDIA #USYD #MIT #UCSD #TUDelft #UCLA
openreview.net
Welcome to the OpenReview homepage for ICCV 2025 Workshop X-Sense
@zhoubolei
Bolei Zhou
3 months
Call for submissions: come join us for this ICCV workshop in Hawaii!
Replies: 0 · Reposts: 2 · Likes: 12
@miran_heo
Miran Heo
2 months
Thanks @_akhaliq for sharing our work! Check out more details:
@_akhaliq
AK
2 months
Nvidia presents Autoregressive Universal Video Segmentation Model
Replies: 0 · Reposts: 8 · Likes: 46
@miran_heo
Miran Heo
2 months
We connect the autoregressive pipeline of LLMs with streaming video perception. Introducing AUSM: Autoregressive Universal Video Segmentation Model. A step toward unified, scalable video perception, inspired by how LLMs unified NLP. 📝
arxiv.org
Recent video foundation models such as SAM2 excel at prompted video segmentation by treating masks as a general-purpose primitive. However, many real-world settings require unprompted segmentation...
Replies: 2 · Reposts: 28 · Likes: 142
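The analogy in the tweet maps next-token prediction to next-frame mask prediction: each frame's masks are predicted conditioned on a running state carried over from previous frames. A toy sketch of that streaming loop, with a hypothetical model interface (not AUSM's actual API):

```python
def stream_segmentation(model, frames, state=None):
    """Toy autoregressive loop: predict each frame's masks conditioned
    on a running state, analogous to next-token prediction in an LLM.

    `model` is an assumed interface mapping (frame, state) ->
    (masks, new_state); this is not AUSM's published API.
    """
    outputs = []
    for frame in frames:           # frames arrive as a stream
        masks, state = model(frame, state)
        outputs.append(masks)      # unprompted, per-frame segmentation
    return outputs
```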
@weichiuma
Wei-Chiu Ma
2 months
Ego-exo sensing is definitely the future and has a lot of potential. Join us and explore this together!
@chen_yiting_TW
Yi-Ting Chen
2 months
📣 Call for Submissions - X-Sense Workshop #ICCV2025! We have extended the submission deadline! Feel free to submit your accepted papers. Papers are "non-archival"! Deadline: Sep. 8, 09:59 AM GMT
Replies: 0 · Reposts: 4 · Likes: 15