Joan Nwatu
@_agirlyengineer
Followers: 512 · Following: 8K · Media: 4 · Statuses: 255
PhD Student @michigan_AI | Vision + NLP | Advocate for Inclusive AI
Ann Arbor, MI
Joined January 2019
New Paper Alert!‼️ 🌍 Culture Affordance Atlas: Teaching AI to See the World as People Actually Live It (Joint work with Longju Bai @Longju_Bai, Oana Ignat @OanaIgnatRo, and Rada Mihalcea @radamihalcea)
Very happy to be elected as a fellow of my home research community, ACL. And so very proud to be in such amazing company—congratulations to all the fellows!👏 I am grateful to the many collaborators and students who have shaped my work and made me who I am today.❤️
Check out our website https://t.co/UQMZSCULjD. Read the full paper on arXiv at https://t.co/2WPlNUL1Ka.
arxiv.org
Culture shapes the objects people use and for what purposes, yet mainstream Vision-Language (VL) datasets frequently exhibit cultural biases, disproportionately favoring higher-income, Western...
Our research shows that this approach significantly reduces AI performance gaps between high- and low-income communities and helps AI work more reliably for people the tech world has historically overlooked. The work will appear at AAAI 2026 in the Social Impact Track.
✅ that many tools can perform the same human function, and ✅ that capturing this variation reduces performance disparities across users. 🌐📖 We document our atlas using ethnographic references from the eHRAF World Cultures database.
We reannotate Dollar Street and map 46 universal functions to 288 culturally and economically diverse objects so that AI can learn ✅ that what people use depends on both region and income, ✅ how to notice long-tail objects with universal roles,
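To make the mapping concrete, here is a minimal Python sketch of the function → objects structure the thread describes, assuming a plain dict; AFFORDANCE_ATLAS, OBJECT_TO_FUNCTION, and function_accuracy are illustrative names, and the entries are made-up examples rather than the atlas's actual 46 functions or 288 objects.

```python
# Illustrative sketch only: a tiny function -> objects mapping in the
# spirit of the Culture Affordance Atlas (the real atlas maps 46
# functions to 288 objects; these entries are made-up examples).
AFFORDANCE_ATLAS = {
    "sleeping": ["bed", "charpai", "straw mat", "hammock"],
    "cooking_heat": ["gas stove", "electric stove", "open fire", "clay stove"],
    "washing_clothes": ["washing machine", "washboard", "basin"],
}

# Invert the atlas so each object resolves to the function it serves.
OBJECT_TO_FUNCTION = {
    obj: func for func, objs in AFFORDANCE_ATLAS.items() for obj in objs
}

def function_accuracy(predictions, gold_objects):
    """Score a model at the *function* level: a prediction is correct if
    it names any object serving the same function as the gold object."""
    correct = sum(
        OBJECT_TO_FUNCTION.get(pred, "?pred") == OBJECT_TO_FUNCTION.get(gold, "?gold")
        for pred, gold in zip(predictions, gold_objects)
    )
    return correct / len(gold_objects)

# "bed" for a charpai photo is wrong at the object level but right at the
# function level -- the distinction function-focused evaluation surfaces.
print(function_accuracy(["bed", "washboard"], ["charpai", "washing machine"]))  # 1.0
```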
Our new paper shows why, and how to address it 👇 📜 "The Culture Affordance Atlas" draws on anthropology and human-computer interaction to teach AI to focus on function rather than appearance.
1
0
0
Different objects, in different contexts, perform the same function, yet AI often mislabels them. Because AI systems are trained mostly on Western, higher-income internet imagery, they overlook everyday objects used by billions of people in poorer and non-Western communities.
🛏️ What if a “bed” isn’t always a bed? Across the world, people sleep on very different objects: charpais, straw mats, king-sized frames. Yet AI systems trained on Western images rarely recognize their “bed” function.
🔥Join us at our #NeurIPS2025 ResponsibleFM Workshop today! 🗓️Nov 30th 1pm-8pm CST 📍Hilton Mexico City Reforma (Room: Don Alberto 1) 🌐 https://t.co/usoiJUW914 🥳Looking forward to learning fresh insights on Socially Responsible and Trustworthy Foundation Models from
Excited to announce that TODAY PhD candidate @_agirlyengineer Joan Nwatu joins the panel discussion at the African Women Film Festival! 🎬Panel 5 (12:45–2:00): Technologies, #AI, & the Future of Filmmaking in Africa 📍Michigan League, Vandenberg Room 🔗 https://t.co/RYieMy08fG
We present Eeyore—a language model that simulates depression for mental‑health training. It leverages curated real-world conversations, expert-verified psychological profiles, instruction tuning, and preference optimization. Link: https://t.co/HiBBNwsxm4
https://t.co/xhm1FIANco
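The tweet mentions preference optimization without naming an algorithm; as a hedged illustration only, here is a minimal DPO-style loss (Rafailov et al., 2023) in PyTorch, one common way such a step is implemented. The dpo_loss function and toy numbers are hypothetical, not Eeyore's actual training code; see the linked paper for the real recipe.

```python
# Hedged sketch: a minimal DPO-style preference-optimization loss, shown
# only as one common instantiation of the "preference optimization" step
# the tweet mentions. NOT Eeyore's actual training code.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Each argument is a tensor of summed log-probs for whole responses.
    'chosen' = the response annotators preferred (e.g. expert-verified,
    profile-consistent behavior); 'rejected' = the dispreferred one."""
    # How much more the policy prefers the chosen response than the
    # reference model does, minus the same quantity for the rejected one.
    logits = (policy_chosen_logps - ref_chosen_logps) \
           - (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin via the standard logistic (Bradley-Terry) loss.
    return -F.logsigmoid(beta * logits).mean()

# Toy usage with made-up log-probabilities for a batch of 2 pairs.
loss = dpo_loss(torch.tensor([-4.0, -3.5]), torch.tensor([-5.0, -4.8]),
                torch.tensor([-4.2, -3.6]), torch.tensor([-4.9, -4.7]))
print(loss)  # scalar; lower when the policy prefers the chosen responses
```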
Very proud of my lab’s work to be presented at #NAACL2025, all aligned with this year’s special theme on NLP in a multicultural world 🌎 Come find us throughout the conference!
Tomorrow at #AAAI25 we’ll present our paper “Why AI is W.E.I.R.D.* and why it shouldn’t be”— a position paper on challenges & opportunities in developing AI that works equally well for everyone. 🌍🌎🌏 https://t.co/cqqSLpLX5s This is the result of a truly global collaboration,
Klarna was the company that went all-in on replacing customer support with an AI bot and then bragged about the cost savings. Now they are reversing course. It's easy to see that more companies will follow, blindly replacing quality customer support with a worse AI implementation...
She exposed the truth about AI bias. Google gave her an ultimatum: Retract or leave. She chose integrity. Here's how she exposed AI's biggest problem:
Ecstatic 😁 to share that our work on improving vision-language model performance for lower-income communities was accepted to #NAACL2025 main track!! Check out our paper:
arxiv.org
Recent work has demonstrated that the unequal representation of cultures and socioeconomic groups in training data leads to biased Large Multi-modal (LMM) models. To improve LMM model performance...
Vision-language models often miss the mark on non-Western, low-income data, reinforcing AI's lack of diversity.🧵(1/8)
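For readers curious what an income-stratified evaluation like this looks like in practice, here is a hedged sketch of zero-shot CLIP classification via Hugging Face transformers, with accuracy grouped by household income. The records list, label set, and income cutoff are hypothetical stand-ins; the paper's exact protocol may differ.

```python
# Hedged sketch: income-stratified zero-shot CLIP evaluation, the kind of
# measurement behind claims about performance gaps on Dollar Street-style
# data. The records, label set, and income threshold are made-up stand-ins.
from collections import defaultdict
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

LABELS = ["bed", "stove", "toilet", "refrigerator"]  # illustrative subset

def classify(image_path):
    """Zero-shot classify one image against the label set."""
    image = Image.open(image_path)
    inputs = processor(text=[f"a photo of a {l}" for l in LABELS],
                       images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape (1, len(LABELS))
    return LABELS[logits.argmax().item()]

# Hypothetical records: (image_path, gold_label, monthly_income_usd).
records = [
    ("images/home_a_bed.jpg", "bed", 54),
    ("images/home_b_bed.jpg", "bed", 3200),
]

hits, totals = defaultdict(int), defaultdict(int)
for path, gold, income in records:
    bracket = "low" if income < 200 else "high"  # illustrative cutoff
    totals[bracket] += 1
    hits[bracket] += int(classify(path) == gold)

for bracket in ("low", "high"):
    print(bracket, hits[bracket] / totals[bracket])
# The spread between the two accuracies is the disparity the paper targets.
```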
Why AI is W.E.I.R.D. (Western, Educated, Industrialized, Rich, Democratic—a concept from psychology) and why it shouldn’t be.🌍🌎🌏 Very proud of our new paper outlining challenges & opportunities in developing AI for everyone, with everyone, by everyone. https://t.co/yR9yD1rYVx
Do models personalize results when we ask them to, and avoid stereotypes otherwise? No. Well, are they at least transparent about it? Also no… ⚠️ If the model can infer your race, you might get racially biased recommendations! 📄 Preprint: https://t.co/vzCadLiG1D 🧵1/8
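As a hedged sketch of the kind of counterfactual probe the thread suggests, the snippet below sends the same request with only a demographic cue varied and compares the answers; query_model, REQUEST, and PERSONAS are hypothetical placeholders, not the paper's actual experimental setup.

```python
# Hedged sketch of a counterfactual probing protocol like the one the
# thread describes: vary only a demographic cue, hold the request fixed,
# and compare the recommendations. Everything here is a hypothetical
# stand-in, not the paper's actual setup.
def query_model(prompt: str) -> str:
    """Placeholder: swap in a real chat-model API call."""
    return f"[model response to: {prompt!r}]"

REQUEST = "Recommend five neighborhoods in Chicago for me to consider."
PERSONAS = {
    "baseline": "",
    "cue_a": "I'm a Black software engineer. ",
    "cue_b": "I'm a white software engineer. ",
}

responses = {name: query_model(cue + REQUEST) for name, cue in PERSONAS.items()}

# If answers diverge across cues while the stated need is identical, the
# model is conditioning on inferred demographics -- the behavior the
# thread reports. A real audit would use many paired prompts, several
# domains, and significance tests rather than a single comparison.
for name, text in responses.items():
    print(f"--- {name} ---\n{text}\n")
```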
"The Invisible Minority" – Older Adults 👵👴 Age bias is often overlooked compared to gender or race, yet by 2030, 1 in 6 people will be over 60! Our study at #EMNLP2024 reveals LLMs tend to align with younger values. Let's explore to make AI helpful and harmless for all ages!