Haeun Yu Profile
Haeun Yu

@hayu204

Followers
231
Following
91
Media
7
Statuses
143

PhD student @CopeNLU

Seoul, South Korea
Joined February 2022
@hayu204
Haeun Yu
1 year
It was a pleasure to collaborate with you, Sara🤟🙌 Our paper and dataset on knowledge conflict are here! See you in Miami🌴
@saraveramarjano
Sara Vera Marjanović
1 year
🚨New Benchmark Alert🚨 Our paper accepted to Findings of EMNLP 2024🌴 introduces a new dataset, DynamicQA! DynamicQA contains inherently conflicting data (both disputable🤷‍♀️ & temporal🕰️) crucial to studying LM’s internal memory conflict. Work with @hayu204 🥳 #EMNLP2024 #NLProc
0
2
17
@luke_ch_song
Chan Hee (Luke) Song | On the Job Market
1 month
🤖 If you are at @corl_conf, I highly recommend a Korean 🇰🇷 delicacy you cannot get anywhere else in the world: Marinated raw crab 🦀 Conveniently my favorite restaurant for that is near the venue: https://t.co/pBAu8RS0SH Trust me, it is not fishy at all 😍 #CoRL2025
0
1
11
@IAugenstein
Isabelle Augenstein
2 months
- Fully funded PhD fellowship on Explainable NLU: apply by 31 October 2025, start in Spring 2026: https://t.co/zCcfJSus5W - Open-topic PhD positions: express your interest through ELLIS by 31 October 2025, start in Autumn 2026: https://t.co/f63KBIZWY3 #NLProc #XAI
0
5
13
@IAugenstein
Isabelle Augenstein
2 months
📣 Looking for PhD opportunities in Natural Language Processing? Our group has several openings for a start in Spring or Autumn 2026 -- apply by 31 October: https://t.co/rxkZVEP6fs #NLProc #XAI @atanasovapepa @CopeNLU @AiCentreDK @DIKU_Institut
2
30
179
@hayu204
Haeun Yu
3 months
This work was done in collaboration between @CopeNLU and UILab. Cannot thank the co-authors, @sgjeong_evelyn, @whoSiddheshp, Jisu Shin, @jin__jiho, @JunhoMyung_ , and supervisors @IAugenstein, @aliceoh enough!
1
0
4
@hayu204
Haeun Yu
3 months
🌐 What we found 1️⃣ LLMs encode Western-dominance bias and cultural flattening in their internals. 2️⃣ Internalized cultural biases do not necessarily align with extrinsic biases. 3️⃣ Low-resource cultures are less affected by these biases, likely due to limited training data.
1
0
4
@hayu204
Haeun Yu
3 months
✨ What we did - Propose Culturescope, an MI-based method that probes the internal cultural knowledge space of LLMs - Create multiple-choice questions with cultural hard-negatives to test how internalized cultural biases affect downstream tasks
1
0
4
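The probing idea in the tweet above follows a generic interpretability recipe: freeze the model, collect hidden states, and train a small linear probe to check whether a property is linearly decodable from them. Below is a minimal self-contained sketch of that general recipe using synthetic "hidden states" — the toy labels, dimensions, and training setup are illustrative assumptions, not the actual CultureScope implementation:

```python
# Hedged sketch of linear probing on frozen representations
# (illustrative only; NOT the CultureScope code).
import math
import random

random.seed(0)

def make_state(label, dim=8):
    """Toy stand-in for an LLM hidden state: dimension 0 weakly
    encodes the label, the remaining dimensions are noise."""
    v = [random.gauss(0.0, 1.0) for _ in range(dim)]
    v[0] += 2.0 if label == 1 else -2.0
    return v

data = [(make_state(y), y) for y in [0, 1] * 50]

# Logistic-regression probe trained with plain gradient descent.
w, b, lr = [0.0] * 8, 0.0, 0.1
for _ in range(200):
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-z))          # sigmoid
        g = p - y                                # gradient of log-loss
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

# High accuracy here means the label is linearly decodable
# from the (toy) hidden states.
acc = sum((sum(wi * xi for wi, xi in zip(w, x)) + b > 0) == (y == 1)
          for x, y in data) / len(data)
print(acc)
```

If the probe performs well above chance, the property is encoded in the representations — the same logic, applied to real hidden states and cultural labels, underlies this kind of mechanistic analysis.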
@hayu204
Haeun Yu
3 months
🙋 How do Large Language Models internally process cultural knowledge? 🌐 Happy to share our new preprint "Entangled in Representations: Mechanistic Investigation of Cultural Biases in Large Language Models" 📃 Paper: https://t.co/Gt14mjMiQg
3
28
201
@gretawarren_
Greta Warren
3 months
Presenting this today at #ACL2025NLP! Come to Board 26 in Hall 5X from 11am to talk about the relationship between community notes and fact-checking and the future of content moderation 🔎 🤔 #ACL2025
@gretawarren_
Greta Warren
8 months
📢New preprint! : https://t.co/ZMtcmVvW1h Fact-checkers play a vital role in combating misinformation on social media, but have come under intense scrutiny in the current political climate.
0
1
6
@IAugenstein
Isabelle Augenstein
3 months
Thanks to everyone who joined us @AiCentreDK for the Pre-ACL 2025 workshop! We had 7 inspiring keynotes, 26 poster presentations, and there was time for informal mingling as well. See some of you in Vienna for #acl2025nlp! #NLProc https://t.co/YI3Z0dSZm8
2
3
24
@atanasovapepa
Pepa Atanasova
5 months
⏰ TODAY is the poster submission deadline for the pre-@aclmeeting Workshop 🇩🇰! 📅 We've also finalised our schedule and speakers: @thamar_solorio, Smaranda Muresan, @david__jurgens, Nanyun Peng, @danish037, @kaiwei_chang, and @mishumausam https://t.co/mgcK4k8yUF @AiCentreDK
@IAugenstein
Isabelle Augenstein
5 months
Join us in Copenhagen for the Pre-@aclmeeting Workshop! 🇩🇰 We’re excited to welcome researchers and practitioners in #NLProc, Generative AI & Language Technology to a 1-day workshop on 26 July – just ahead of ACL 2025 in Vienna. Learn more: https://t.co/YI3Z0dSZm8 @AiCentreDK
0
4
8
@z_eunie
Jieun Han
6 months
💃Our paper DREsS finally got accepted to ACL main! This was the very first research I started in grad school.. It's finally coming to life after 2 years, but it was totally worth it!👵 #ACL2025NLP #ACL2025 @aclmeeting
5
5
161
@IAugenstein
Isabelle Augenstein
7 months
I'm so grateful to @bcs_irsg @TechAtBloomberg for honouring me with the Karen Spärck Jones Award 🙏 I gave the award lecture on LLMs’ Utilisation of Parametric & Contextual Knowledge at #ECIR2025 today (slides: https://t.co/ThwR9hhfQd) https://t.co/Ab4eqs3o3I #NLProc @CopeNLU
3
7
60
@IAugenstein
Isabelle Augenstein
7 months
Massive congrats to @_kire_kara_ for having successfully defended his PhD thesis on mitigating reasoning inconsistencies! 🎉 🍾 👏 👏👏 https://t.co/Ms7DQLUenb Thanks to @delliott, @barbara_plank & @iatitov for serving on the committee. @CopeNLU @DIKU_Institut @AiCentreDK
1
3
18
@_kire_kara_
Erik Arakelyan
7 months
I defended my PhD at the University of Copenhagen ☺️ What a journey it was! I want to give massive thanks to my amazing supervisors, @IAugenstein and @PMinervini, who were there with me throughout the whole process. Thesis on: https://t.co/Qz2a2QlTSH Arxiv version coming soon!
osoblanco.github.io
@saraveramarjano
Sara Vera Marjanović
7 months
Models like DeepSeek-R1 🐋 mark a fundamental shift in how LLMs approach complex problems. In our preprint on R1 Thoughtology, we study R1’s reasoning chains across a variety of tasks; investigating its capabilities, limitations, and behaviour. 🔗: https://t.co/Cyy18kYQ45
3
63
229
@_kire_kara_
Erik Arakelyan
7 months
Hey 👋 I will be having my PhD defense on “Reasoning Inconsistencies and How to Mitigate Them in Deep Learning” 🧠 tomorrow at 10AM CET. For anyone interested, please join in person or online. Details: https://t.co/qzYdzCTQwg Online link: us06web.zoom.us
0
1
6
@gretawarren_
Greta Warren
9 months
How can explainable AI empower fact-checkers to tackle misinformation? 📰🤖 We interviewed fact-checkers & identified explanation needs & design implications for #NLP, #HCI, and #XAI. Excited to present this work with @IAugenstein @quillis at #chi2025! https://t.co/RQJzeEhlH2
1
6
10
@jiyeonkimd
jiyeon kim
9 months
🎉 Excited to share that Knowledge Entropy has been accepted to #ICLR2025 as an oral presentation! Check out if you are interested in why LLMs lose their ability to acquire new knowledge during pretraining. See you in Singapore!
@jiyeonkimd
jiyeon kim
1 year
❓Do LLMs maintain the capability for knowledge acquisition throughout pretraining? If not, what is the driving force behind it? ❗Our findings reveal that decreasing knowledge entropy hinders knowledge acquisition and retention as pretraining progresses. 📄 https://t.co/t4mW2VmObw
1
9
74
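The intuition behind the entropy measure above can be illustrated with plain Shannon entropy: when a model concentrates its coefficient mass on only a few memory slots, entropy is low; when mass is spread broadly, entropy is high. A toy sketch (the weight vectors and "slot" framing here are illustrative assumptions, not the paper's actual knowledge-entropy computation):

```python
# Toy illustration of an entropy-over-coefficients measure
# (NOT the paper's implementation).
import math

def entropy(weights):
    """Shannon entropy (in nats) of a non-negative weight vector,
    normalised to a probability distribution; zero weights are skipped."""
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]
    return -sum(p * math.log(p) for p in probs)

broad  = [1.0, 1.0, 1.0, 1.0]   # mass spread over all slots
narrow = [9.7, 0.1, 0.1, 0.1]   # mass concentrated in one slot
print(entropy(broad) > entropy(narrow))  # True: broader use = higher entropy
```

Under this framing, the finding is that such an entropy measure decreases over the course of pretraining, and that the narrowing correlates with weaker acquisition of new knowledge.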
@jinulee_v
Jinu Lee
9 months
I am happy to announce that my first-author paper is accepted to NAACL 2025 Main! Existing backward chaining (top-down reasoning) methods are incomplete, leading to suboptimal performance. We build SymBa, a complete neuro-symbolic backward chaining method using SLD-Resolution.
2
8
70
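Backward chaining, as referenced above, proves a goal top-down: match the goal against rule heads, then recursively prove every literal in the matching rule's body. A minimal propositional sketch of that control flow (the facts and rules are toy examples; SymBa's actual neuro-symbolic method, built on SLD-resolution with an LLM proposing rules, is far richer):

```python
# Minimal backward chaining (top-down reasoning) over propositional
# Horn rules -- an illustrative toy, NOT the SymBa system.

# Each head maps to a list of alternative bodies; an empty body = a fact.
RULES = {
    "mortal(socrates)": [["human(socrates)"]],
    "human(socrates)": [[]],
    "flies(tweety)": [["bird(tweety)", "not_penguin(tweety)"]],
}

def prove(goal, seen=frozenset()):
    """Return True if `goal` is derivable: try each rule whose head
    matches the goal and recursively prove every body literal."""
    if goal in seen:                  # avoid infinite loops on cyclic rules
        return False
    for body in RULES.get(goal, []):
        if all(prove(sub, seen | {goal}) for sub in body):
            return True
    return False

print(prove("mortal(socrates)"))   # True: human(socrates) is a fact
print(prove("flies(tweety)"))      # False: bird(tweety) is not provable
```

Completeness issues in this style of prover arise when some derivable goal is never reached (e.g. a needed rule is never generated); full SLD-resolution over Horn clauses, which the tweet's method builds on, is refutation-complete.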