Minwoo (Josh) Kang

@joshminwookang

Followers 26 · Following 37 · Media 4 · Statuses 15

CS PhD Student @UCBerkeley @berkeley_ai | @WilliamsCollege '20

California, USA
Joined February 2024
@kayo_yin
Kayo Yin
2 months
I’m in Montréal this week for #COLM2025! 🇨🇦 Lately I’m thinking about how AI profiles and manipulates humans. Broadly interested in AI societal impacts, gradual disempowerment, sign language, NLP x cogsci / psychology, poutine, pastries, ice wine. Hmu! And come to our workshop!
5
5
117
@joshminwookang
Minwoo (Josh) Kang
5 months
✨ Key takeaways
• Detailed, longer backstories condition models into faithful in-group personas, beyond mere caricatures
• Enables rapid, lower-cost pilot studies on polarization & conflict
• 40K backstories + code on GitHub (stay tuned!) 👉
github.com
Contribute to CannyLab/alterity development by creating an account on GitHub.
0
0
0
@joshminwookang
Minwoo (Josh) Kang
5 months
🧪 What drives deeper binding?
• More backstories: scaling the virtual population from 10K → 41K
• Longer backstories: 500 → 2500 tokens
• Consistency control: LLM-as-a-critic to ensure narrative quality
Rich (longer ✕ coherent) narratives → more human-like virtual subjects.
1
0
0
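The "LLM-as-a-critic" consistency control above can be pictured with a minimal sketch: score each generated backstory with a critic model and keep only the coherent ones. The client, model name, prompt, and threshold here are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch of LLM-as-a-critic consistency filtering (illustrative only).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; critic model is an assumption

CRITIC_PROMPT = """You are a critic. Rate the following interview backstory
for internal consistency on a scale of 1-10. Reply with a single integer.

Backstory:
{backstory}"""

def consistency_score(backstory: str) -> int:
    """Ask a critic model to score a backstory's narrative consistency."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of critic model
        messages=[{"role": "user", "content": CRITIC_PROMPT.format(backstory=backstory)}],
    )
    return int(resp.choices[0].message.content.strip())

def filter_backstories(backstories: list[str], threshold: int = 7) -> list[str]:
    """Keep only backstories the critic judges sufficiently coherent."""
    return [b for b in backstories if consistency_score(b) >= threshold]
```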
@joshminwookang
Minwoo (Josh) Kang
5 months
📊 Evaluation | On classic partisan-misperception items, our deep LLM personas:
• cut the Wasserstein distance to the human response distribution vs. baselines
• reproduce key findings of human studies (e.g., notable misperception of out-group support for political violence)
1
0
0
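For readers unfamiliar with the metric: the 1-D Wasserstein distance between two response distributions can be computed directly with SciPy. A minimal sketch on a made-up Likert item follows; all counts below are hypothetical, not the paper's data.

```python
# Sketch: comparing persona vs. human response distributions with the 1-D
# Wasserstein distance (survey item and counts are made up for illustration).
import numpy as np
from scipy.stats import wasserstein_distance

likert = np.array([1, 2, 3, 4, 5])              # ordinal response options
human_counts = np.array([10, 25, 30, 25, 10])   # hypothetical human responses
persona_counts = np.array([8, 22, 35, 24, 11])  # hypothetical LLM-persona responses

w = wasserstein_distance(
    likert, likert,
    u_weights=human_counts, v_weights=persona_counts,
)
print(f"Wasserstein distance to human distribution: {w:.3f}")
```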
@joshminwookang
Minwoo (Josh) Kang
5 months
@SuhongMoon @JosephJSSuh @_dmchan 🛠️ Our approach | We don’t just slap on “You’re a Democrat.” Instead, we generate multi-turn, interview-style backstories covering rich details of individuals (upbringing, values, etc.). Each backstory is then used as a prefix to condition the LM into a single, coherent persona.
1
0
0
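A minimal sketch of this prefix-conditioning idea, assuming a HuggingFace causal LM; the model choice and prompt template are assumptions, and the paper's actual interview format may differ.

```python
# Sketch of backstory-as-prefix conditioning (model name and prompt format
# are assumptions; the paper's actual templates may differ).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical choice
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

backstory = (
    "Interviewer: Where did you grow up?\n"
    "Respondent: A small farming town in Ohio...\n"
    # ...many more interview turns establishing upbringing, values, etc.
)
question = "Do you think most members of the other party support political violence?"

# The multi-turn backstory is simply prepended so the LM answers in persona.
prompt = f"{backstory}\nInterviewer: {question}\nRespondent:"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```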
@joshminwookang
Minwoo (Josh) Kang
5 months
💡 While prior work has focused on asking LLMs to report their own opinions, we argue that reproducing social perceptions (nuanced, even biased ones) is critical for models to serve as virtual subjects in studies of today's key societal problems: polarization, democratic backsliding, etc.
1
0
0
@joshminwookang
Minwoo (Josh) Kang
5 months
🤔 Do LLMs exhibit in-group↔out-group perceptions like us? ❓ Can they serve as faithful virtual subjects of human political partisans? Excited to share our paper on taking LLM virtual personas to the *next level* of depth! 🔗 https://t.co/LzeDAMtrEV 🧵
2
9
16
@AnneLe222
Heekyung Lee
6 months
🔍 Just dropped: “Puzzled by Puzzles: When Vision-Language Models Can’t Take a Hint” 👉 https://t.co/CZxI6c6FA4 Puns + pictures + positioning = a nightmare for today’s AI. These models just don’t get it (yet).😵‍💫 Check out the 🧵 to see our findings (1/4) #AI #Multimodal #VLM
arxiv.org
Rebus puzzles, visual riddles that encode language through imagery, spatial arrangement, and symbolic substitution, pose a unique challenge to current vision-language models (VLMs). Unlike...
2
4
18
@kayo_yin
Kayo Yin
6 months
Happy to announce the first workshop on Pragmatic Reasoning in Language Models — PragLM @ COLM 2025! 🧠🎉 How do LLMs engage in pragmatic reasoning, and what core pragmatic capacities remain beyond their reach? 🌐 https://t.co/LMWcqtOSDG 📅 Submit by June 23rd
6
22
93
@JosephJSSuh
Joseph Jeesung Suh
9 months
Can LLMs assist public opinion survey design by predicting responses? We fine-tune LLMs on our new large-scale survey response dataset, SubPOP, which reduces the distributional gap between human responses and LLM predictions by up to 46% 📊 A 🧵 on our findings: 👇
2
10
33
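One plausible way to format such survey items for fine-tuning is sketched below; the field names and JSON target format are assumptions for illustration, not SubPOP's actual schema.

```python
# Sketch: formatting survey items for fine-tuning an LLM to predict response
# distributions (field names are assumptions, not SubPOP's real schema).
import json

def to_finetune_example(item: dict) -> dict:
    """Turn a survey item + observed answer shares into a prompt/target pair."""
    options = ", ".join(item["options"])
    prompt = (
        f"Survey question: {item['question']}\n"
        f"Options: {options}\n"
        f"Subpopulation: {item['subpopulation']}\n"
        "Predicted distribution:"
    )
    target = json.dumps(item["distribution"])  # e.g. {"Agree": 0.42, ...}
    return {"prompt": prompt, "completion": " " + target}

example = to_finetune_example({
    "question": "Should the government do more to regulate AI?",
    "options": ["Agree", "Disagree", "Unsure"],
    "subpopulation": "Adults 18-29",
    "distribution": {"Agree": 0.55, "Disagree": 0.25, "Unsure": 0.20},
})
print(example["prompt"])
```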
@aypan_17
Alex Pan
1 year
LLMs have behaviors, beliefs, and reasoning hidden in their activations. What if we could decode them into natural language? We introduce LatentQA: a new way to interact with the inner workings of AI systems. 🧵
4
25
146
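The basic ingredient here, reading out hidden activations so a separate decoder can interpret them, can be sketched with a forward hook. This is only an illustration of activation capture, not the LatentQA implementation; the model and layer index are arbitrary stand-ins.

```python
# Sketch: capturing hidden activations from an LM with a forward hook, the
# general ingredient behind decoding internal states (NOT the LatentQA code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in model for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

captured = {}

def hook(module, inputs, output):
    # output[0]: hidden states of shape (batch, seq_len, hidden_dim)
    captured["acts"] = output[0].detach()

# Register on a middle transformer block (layer index chosen arbitrarily).
handle = model.transformer.h[6].register_forward_hook(hook)

inputs = tok("The secret plan is to", return_tensors="pt")
with torch.no_grad():
    model(**inputs)
handle.remove()

print(captured["acts"].shape)  # these activations would be fed to a decoder LM
```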
@SuhongMoon
Suhong Moon
1 year
(1/n) 🧵 Can Large Language Models simulate different individuals' beliefs and opinions? Check out our paper on conditioning LLMs to virtual personas for approximating individual human samples at #EMNLP2024! Paper: https://t.co/xT7PD1dxmq Code: https://t.co/i6WBoBO4IK
1
18
42