Quang Minh Nguyen

@ngqm_

Followers: 10 · Following: 231 · Media: 7 · Statuses: 14

MS Data Science @ KAIST | NLP, Reasoning

Joined September 2024
@ngqm_
Quang Minh Nguyen
26 days
❓ Can external information from Wikipedia or web search enhance LLMs’ performance in stance detection? Our Findings paper, to be presented at ACL 2025 next week, answers this question through an evaluation of 8 popular LLMs on 3 datasets containing 12 targets. 🧵 (1/n) #acl2025
1
2
5
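The thread describes an evaluation of 8 LLMs on 3 stance detection datasets, with and without external information from Wikipedia or web search. The paper's actual prompts, datasets, and pipeline are not shown in the thread; the Python sketch below only illustrates the kind of setup being described, with hypothetical field names and a placeholder query_llm function.

```python
# Hypothetical sketch of the evaluation described in the thread: stance
# detection with and without external context. Prompt wording, field names,
# and query_llm are placeholders, not the paper's actual implementation.

STANCES = ["favor", "against", "neutral"]

def build_prompt(text: str, target: str, external_info: str | None = None) -> str:
    """Compose a stance detection prompt, optionally with retrieved context."""
    context = f"Background information:\n{external_info}\n\n" if external_info else ""
    return (
        f"{context}"
        f"Tweet: {text}\n"
        f"Target: {target}\n"
        f"What is the tweet's stance toward the target? "
        f"Answer with one of: {', '.join(STANCES)}."
    )

def query_llm(prompt: str) -> str:
    """Placeholder for a call to any chat-completion API."""
    raise NotImplementedError

def evaluate(examples: list[dict], use_external: bool) -> float:
    """Accuracy over examples with keys: text, target, external_info, label."""
    correct = 0
    for ex in examples:
        info = ex["external_info"] if use_external else None
        pred = query_llm(build_prompt(ex["text"], ex["target"], info)).strip().lower()
        correct += pred == ex["label"]
    return correct / len(examples)
```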
@ngqm_
Quang Minh Nguyen
20 days
🚀 Presenting today (July 28th) at Hall 4/5 from 18:00 to 19:30 @aclmeeting. Let's connect and chat about bias, uncertainty, and interaction in LLM reasoning! #ACL2025 #ACL2025NLP
Tweet media one
0
1
5
@ngqm_
Quang Minh Nguyen
25 days
RT @LanceYing42: A hallmark of human intelligence is the capacity for rapid adaptation, solving new problems quickly under novel and unfami….
0
109
0
@ngqm_
Quang Minh Nguyen
26 days
This was my first publication, and I am grateful for my advisor @TaegyoonK's valuable feedback during the project. I am actively looking for collaborators to work on LLM reasoning with bias, uncertainty, and interaction. Chat with me at ACL or online if you're interested!
0
1
1
@ngqm_
Quang Minh Nguyen
26 days
Take-home message: There should be more consideration of information biases in LLM reasoning! Join us for our poster presentation in Session 5 (Monday 18:00 — 19:30) at Hall 4/5. Paper: Code:
1
0
2
@ngqm_
Quang Minh Nguyen
26 days
Fine-tuning makes models more robust to information biases to some extent but does not fully resolve the problem. (6/n)
Tweet media one
1
0
2
@ngqm_
Quang Minh Nguyen
26 days
This performance degradation persists despite chain-of-thought prompting that specifically instructs models NOT to uncritically adopt the stance and sentiment of the external information. (5/n)
Tweet media one
Tweet media two
1
0
2
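The (5/n) tweet mentions a chain-of-thought prompt that tells the model not to uncritically adopt the stance or sentiment of the external information. The exact instruction wording used in the paper is not quoted in the thread; the snippet below is only a plausible illustration of such a prompt, reusing the hypothetical names from the earlier sketch.

```python
# Hypothetical chain-of-thought variant of build_prompt above; the actual
# instruction text used in the paper is not shown in the thread.

def build_cot_prompt(text: str, target: str, external_info: str) -> str:
    return (
        f"Background information (may be biased or irrelevant):\n{external_info}\n\n"
        f"Tweet: {text}\n"
        f"Target: {target}\n"
        "Think step by step about the stance the TWEET expresses toward the target. "
        "Do NOT uncritically adopt the stance or sentiment of the background "
        "information; use it only as context. "
        "End your answer with one of: favor, against, neutral."
    )
```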
@ngqm_
Quang Minh Nguyen
26 days
But how? Our further inspection determined that LLMs frequently adopt the stance and sentiment of the external information, and such adoptions lead to more incorrect than correct predictions. (4/n)
Tweet media one
Tweet media two
Tweet media three
1
0
2
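The (4/n) tweet describes measuring how often model predictions adopt the stance carried by the external information, and whether those adoptions are more often wrong than right. The sketch below shows one simple way such a breakdown could be computed; the field names and the exact analysis are assumptions, not the paper's code.

```python
# Hypothetical sketch of the adoption analysis in (4/n): count how often the
# model's prediction matches the stance of the external information, split by
# whether the prediction is correct. Record keys are illustrative.

from collections import Counter

def adoption_breakdown(records: list[dict]) -> Counter:
    """records have keys: pred, label, info_stance (stance of the external info)."""
    counts = Counter()
    for r in records:
        if r["pred"] == r["info_stance"]:  # model adopted the info's stance
            key = "adopted_correct" if r["pred"] == r["label"] else "adopted_incorrect"
        else:
            key = "not_adopted"
        counts[key] += 1
    return counts

# The finding reported in the thread would correspond to
# counts["adopted_incorrect"] > counts["adopted_correct"].
```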
@ngqm_
Quang Minh Nguyen
26 days
Results are negative: we found many cases of performance degradation, which becomes more severe when synthetic biases are introduced into the external information. This behavior contrasts with BERT models, whose performance often stays stable or only slightly decreases. (3/n)
Tweet media one
1
0
2
@ngqm_
Quang Minh Nguyen
26 days
Previous literature suggested that external information could enhance stance detection with BERT-based models. Given the wide adoption of LLMs in reasoning tasks, including stance detection itself, we ask whether such information can also help LLM stance detection. (2/n)
Tweet media one
1
0
2
@ngqm_
Quang Minh Nguyen
3 months
RT @svlevine: Goal-conditioned RL (GCRL) is great - unsupervised, can use data (in offline mode), flexibility to define tasks at test time.….
0
20
0
@ngqm_
Quang Minh Nguyen
3 months
RT @guyd33: New preprint alert! We often prompt ICL tasks using either demonstrations or instructions. How much does the form of the prompt….
0
36
0
@ngqm_
Quang Minh Nguyen
7 months
A cat ("kucing") in Kampung Baru, KL
Tweet media one
Tweet media two
0
0
1
@ngqm_
Quang Minh Nguyen
8 months
RT @Francis_YAO_: Don’t race. Don’t catch up. Don’t play the game. Instead, do rigorous science. Do controlled experiments. Formulate clear….
0
151
0