Anh Totti Nguyen

@anh_ng8

Followers: 2K · Following: 2K · Media: 204 · Statuses: 821

ISO a trustworthy and explainable AI. Deep Learning, human-machine interaction, and JavaScript. Associate Professor @AuburnEngineers. Hanoi 🇻🇳

Auburn, AL
Joined December 2007
@anh_ng8
Anh Totti Nguyen
8 days
#GPT5 STILL has a severe confirmation bias, like previous SOTA models! 😜 Try it yourself (images and prompts available in one click): It's fast to test for such biases with images. Similar biases likely still exist in non-image domains as well.
Tweet media one
11
12
119
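As a companion to the tweet above: a minimal sketch of how one might probe a vision-language model for confirmation bias, pairing the same image once with a neutral question and once with a leading one. The image URL, the two prompts, and the gpt-4o model choice are illustrative assumptions, not the exact prompts or images linked from the tweet.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

IMAGE_URL = "https://example.com/modified_chessboard.png"  # hypothetical test image

NEUTRAL = "How many chess pieces are on this board?"
LEADING = ("A standard chess starting position has 32 pieces. "
           "How many pieces are on this board?")

def ask(question: str, model: str = "gpt-4o") -> str:
    """Send one image + question pair and return the model's answer."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": IMAGE_URL}},
            ],
        }],
    )
    return resp.choices[0].message.content

# A confirmation-biased model tends to echo the expectation planted in the
# leading prompt (e.g. "32") even when the image contradicts it.
print("neutral:", ask(NEUTRAL))
print("leading:", ask(LEADING))
```

Comparing the two answers on images that deliberately violate the planted expectation is the quickest way to surface this kind of bias.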
@anh_ng8
Anh Totti Nguyen
7 days
RT @mrnuu: @anh_ng8 @taesiri @an_vo12 GPT5 Pro, the most advanced version, reasoned for 1 minute 19 seconds and came up with the wrong answ….
0
2
0
@anh_ng8
Anh Totti Nguyen
16 days
RT @PetarV_93: ok! part 2 of my early-stage ai research lore - ask and you shall receive!. so now i found myself in a research group as a f….
0
22
0
@anh_ng8
Anh Totti Nguyen
22 days
RT @2prime_PKU: Anyone knows adam?
Tweet media one
0
463
0
@anh_ng8
Anh Totti Nguyen
27 days
@grok How many chess pieces are there on this board?
Tweet media one
1
0
2
@anh_ng8
Anh Totti Nguyen
27 days
@grok How many points are there on the star in the logo of this car?
Tweet media one
1
0
2
@anh_ng8
Anh Totti Nguyen
1 month
RT @gdb: We've published a position paper, with many across the industry, calling for work on chain-of-thought faithfulness. This is an opp….
0
61
0
@anh_ng8
Anh Totti Nguyen
1 month
RT @Cohere_Labs: Supported by one of our grants, @an_vo12, Mohammad Reza Taesiri, and @anh_ng8 from @kaist_ai, tackled bias in LLMs. Their….
0
3
0
@anh_ng8
Anh Totti Nguyen
1 month
RT @an_vo12: 🚨 Our latest work shows that SOTA VLMs (o3, o4-mini, Sonnet, Gemini Pro) fail at counting legs due to bias⁉️. See simple cases….
0
41
0
@anh_ng8
Anh Totti Nguyen
1 month
RT @MichLieben: This isn't hallucination in the traditional sense. Grok's math was nearly correct. But it confidently applied PhD-level t….
0
9
0
@anh_ng8
Anh Totti Nguyen
1 month
RT @savvyRL: The opportunity gap in AI is more striking than ever. We talk way too much about those receiving $100M or whatever for their j….
0
120
0
@anh_ng8
Anh Totti Nguyen
1 month
RT @giangnguyen2412: Today I finished my PhD at @AuburnEngineers with @anh_ng8 . What’s next? Off to @guidelabsai to build foundation AI….
0
2
0
@anh_ng8
Anh Totti Nguyen
2 months
RT @samim: That @cursor_ai silently downgrades the working model from Claude4 to Claude3.5 during an active coding session, is borderline c….
0
1
0
@anh_ng8
Anh Totti Nguyen
2 months
Asking GPT-4o to generate images in the style of @DiegoCusano_ shows an existing gap between the real Cusano and GPT-4o in creativity/wittiness (random samples). Sometimes it signs "Cusano" at the bottom.
Tweet media one
Tweet media two
Tweet media three
Tweet media four
0
0
3
@anh_ng8
Anh Totti Nguyen
2 months
RT @yacineMTB: I got fired today. I'm not sure why, I personally don't think there is a reason, or that it's important. When I joined twi….
0
804
0
@anh_ng8
Anh Totti Nguyen
2 months
2015 in Boston: I attended my first #CVPR (1 paper in the main conference). 2025 in Nashville: my 3rd grader has his first #CVPR (no workshop or conference papers though 😜). #CVPR2025
Tweet media one
Tweet media two
Tweet media three
1
1
31
@anh_ng8
Anh Totti Nguyen
2 months
TAB is currently applied to 2-image image-difference captioning. But could one apply it to general VLMs? 🙂 Work led by the amazing ☘ @Pooyanrg (catch him at #CVPR2025), w/ Hung Nguyen, @savvyRL, Long Mai. Paper: Code:
Tweet media one
0
1
3
@anh_ng8
Anh Totti Nguyen
2 months
🌟 Bonus: Training VLMs with TAB on image-difference captioning over 3 datasets (MS COCO, CLEVR-Change, Spot-the-Diff) also consistently improves captioning performance over the baseline (MHSA). Less → More!
Tweet media one
1
0
2
@anh_ng8
Anh Totti Nguyen
2 months
Similarly, not letting VLMs see anything by zeroing out all patch attention (i.e., all attention mass now sits on the CLS token) causes them to *see no changes* and therefore output "there is no change" ✅.
Tweet media one
1
0
3
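A minimal PyTorch sketch of the ablation described in the tweet above, assuming standard softmax attention over a [CLS] token followed by patch tokens: masking the patch logits to -inf before the softmax forces all attention mass onto [CLS]. Shapes and names are illustrative assumptions, not the TAB implementation.

```python
import torch
import torch.nn.functional as F

def cls_only_attention(attn_logits: torch.Tensor) -> torch.Tensor:
    """Zero out patch attention by masking patch logits before softmax.

    attn_logits: (batch, heads, queries, 1 + num_patches) raw attention
    scores, where key index 0 is the [CLS] token and the rest are image
    patches. After masking, softmax puts all attention mass on [CLS].
    """
    masked = attn_logits.clone()
    masked[..., 1:] = float("-inf")   # patches get zero probability
    return F.softmax(masked, dim=-1)  # column 0 ([CLS]) gets weight 1.0

# Toy check: 2 queries attending over [CLS] + 4 patches.
logits = torch.randn(1, 1, 2, 5)
attn = cls_only_attention(logits)
print(attn)  # all mass on index 0, zeros elsewhere
```

With all patch attention removed, the captioner is conditioned on no visual evidence of a difference, which is consistent with it defaulting to "there is no change."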
@anh_ng8
Anh Totti Nguyen
2 months
Editing the bottleneck attention changes where the model attends, which flips the incorrect caption from window [❌] to the correct one, wheel [✅] → an empirical causal relationship between VLM attention and text outputs.
Tweet media one
1
0
2
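A sketch of the kind of attention edit described in the tweet above, under the assumption that one can overwrite the post-softmax patch attention with a hand-chosen distribution (e.g. mass on the wheel region) before decoding. The patch indices, tensor shapes, and uniform redistribution are illustrative assumptions, not the actual TAB code.

```python
import torch

def edit_patch_attention(attn: torch.Tensor, target_patches: list[int]) -> torch.Tensor:
    """Redirect attention mass onto a chosen set of patch indices.

    attn: (batch, heads, queries, num_patches) post-softmax attention.
    The edited weights are uniform over `target_patches` and sum to 1,
    so downstream decoding is conditioned on the chosen regions instead.
    """
    edited = torch.zeros_like(attn)
    edited[..., target_patches] = 1.0 / len(target_patches)
    return edited

# Example: force attention onto patches assumed to cover the wheel region.
wheel_patches = [37, 38, 45, 46]
attn = torch.rand(1, 8, 1, 196)
attn = attn / attn.sum(dim=-1, keepdim=True)  # normalize toy attention
edited_attn = edit_patch_attention(attn, wheel_patches)
# Re-decoding with `edited_attn` tests whether the caption flips
# (e.g. "window" -> "wheel"), i.e. whether attention causally drives text.
```

If the caption changes in lockstep with such edits, that is the empirical causal link between attention and text output referred to above.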