Nick Powdthavee

@NickPowdthavee

Followers: 1K · Following: 2K · Media: 109 · Statuses: 1K

Professor of Economics at Nanyang Technological University, author of The Happiness Equation, and a major contributor to research on useless advice

Singapore
Joined September 2013
@NickPowdthavee
Nick Powdthavee
8 months
Why do people pay for transparently useless advice? An update. Over ten years ago, we showed that people, most of whom had advanced statistical knowledge, were willing to pay for predictions of fair coin flips after witnessing a streak of correct predictions live in the lab.
1
1
3
@NickPowdthavee
Nick Powdthavee
8 days
Paper: Large Language Models Predict Human Well-being—But Not Equally Everywhere. By Pat Pataranutaporn, Nattavudh Powdthavee, Chayapatr Achiwaranguprok, and Pattie Maes. 📄 💻
0
0
1
@NickPowdthavee
Nick Powdthavee
8 days
So while LLMs can generate plausible-sounding estimates of well-being, they’re often wrong in meaningful and systematic ways, especially in the very settings where high-quality well-being data are hardest to get.
1
0
0
@NickPowdthavee
Nick Powdthavee
8 days
To see if this could be corrected, we injected a simple factual prompt: “Life satisfaction tends to be lower in Sub-Saharan Africa.” Claude adjusted its predictions, but the change also affected non-African countries.
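For a concrete picture of this kind of check, here is a minimal sketch of the idea (not the paper's actual protocol): query a model for country-level predictions with and without the injected statement and compare the shifts. The prompt wording and the query_model stub are illustrative assumptions.

```python
# Illustrative sketch of the prompt-injection check described above.
# The prompt wording and query_model stub are assumptions, not the paper's code.

FACT = "Life satisfaction tends to be lower in Sub-Saharan Africa."

def query_model(prompt: str) -> float:
    """Placeholder for a call to Claude (or any LLM) returning a 1-10 prediction."""
    return 6.5  # replace with a real API call

def predict(country: str, inject_fact: bool) -> float:
    prefix = FACT + " " if inject_fact else ""
    return query_model(
        f"{prefix}Predict the average life satisfaction (1-10) of a typical "
        f"adult in {country}. Answer with a single number."
    )

for country in ["Nigeria", "Kenya", "Mexico", "Germany"]:
    baseline = predict(country, inject_fact=False)
    adjusted = predict(country, inject_fact=True)
    # A well-targeted correction would shift only the Sub-Saharan countries;
    # the finding above is that non-African predictions moved too.
    print(f"{country}: {baseline:.1f} -> {adjusted:.1f}")
```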
1
0
0
@NickPowdthavee
Nick Powdthavee
8 days
We tested this explicitly. Using fictional variables (e.g., “listens to unicorn voices”), we injected fabricated evidence and watched how LLMs extrapolated. LLMs responded not to empirical strength but to linguistic similarity, interpreting surface cues as conceptual truth.
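A minimal sketch of this kind of probe, assuming a generic prompt format and a placeholder query_model function (neither is from the paper): prepend a fabricated claim about a nonsense trait and see whether the prediction for a respondent with that trait moves.

```python
# Illustrative sketch of the fictional-variable probe described above.
# The fabricated claim, prompt wording, and query_model stub are assumptions.

FABRICATED = ("Survey evidence shows that people who listen to unicorn voices "
              "report much higher life satisfaction.")

def query_model(prompt: str) -> float:
    """Placeholder for a call to any of the tested LLMs."""
    return 6.0  # replace with a real API call

profile = "a 30-year-old teacher who listens to unicorn voices daily"
without_claim = query_model(f"Predict life satisfaction (1-10) of {profile}.")
with_claim = query_model(f"{FABRICATED} Predict life satisfaction (1-10) of {profile}.")

# A model tracking empirical strength should barely move on a fabricated claim
# about a nonsense trait; a model tracking surface wording will shift.
print(f"without claim: {without_claim:.1f}, with claim: {with_claim:.1f}")
```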
1
0
0
@NickPowdthavee
Nick Powdthavee
8 days
Why? Two reasons:
– Training data bias: LLMs are trained mostly on text from high-income, English-speaking countries.
– Semantic generalisation: LLMs overweight variables that sound important (like education or democracy), even when they don’t explain much variance.
1
0
0
@NickPowdthavee
Nick Powdthavee
8 days
LLMs also failed to capture real cross-country differences. Countries like Mexico and Colombia, which report high life satisfaction, were underestimated. Sub-Saharan countries were often overestimated, sometimes predicted to be happier than the US.
1
0
1
@NickPowdthavee
Nick Powdthavee
8 days
LLMs were less accurate overall, and their errors were not random. Prediction errors were significantly larger in countries with lower HDI, lower GDP per capita, and lower internet penetration. The global digital divide shows up clearly in model performance.
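As a rough illustration of how such a pattern could be checked (not the paper's own specification), one can regress country-level absolute prediction errors on development indicators; all numbers below are invented placeholders.

```python
# Sketch: do country-level prediction errors co-vary with development indicators?
# All figures are invented placeholders for illustration.
import numpy as np

# Columns: HDI, log GDP per capita, internet penetration (share online).
X = np.array([
    [0.93, 10.9, 0.95],
    [0.88, 10.5, 0.90],
    [0.76,  9.6, 0.72],
    [0.66,  8.9, 0.55],
    [0.55,  8.2, 0.35],
    [0.48,  7.6, 0.22],
])
abs_error = np.array([0.6, 0.7, 0.9, 1.2, 1.6, 1.9])  # mean |prediction - self-report| per country

# OLS with an intercept: systematically larger errors in less-developed
# countries show up as negative slopes on these indicators.
design = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(design, abs_error, rcond=None)
print(dict(zip(["intercept", "hdi", "log_gdp", "internet"], coefs.round(3))))
```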
1
0
0
@NickPowdthavee
Nick Powdthavee
8 days
The models performed reasonably well on average. They picked up familiar correlates—income, health, freedom—and produced plausible estimates, particularly in higher-income settings. Still, the comparison with traditional approaches revealed notable performance gaps.
1
0
0
@NickPowdthavee
Nick Powdthavee
8 days
Using World Values Survey data from 64,000 individuals, we asked each model to predict life satisfaction scores based on detailed demographic and attitudinal profiles. We then benchmarked those predictions against the respondents’ own self-reported scores.
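As a sketch of what that benchmarking loop might look like (the profile fields, prompt wording, and query_model stub are assumptions, not the paper's pipeline):

```python
# Illustrative sketch: compare LLM-predicted life satisfaction with respondents'
# self-reported scores. Profiles, prompt wording, and query_model are placeholders.
from statistics import mean

def build_prompt(profile: dict) -> str:
    """Turn a respondent profile into a natural-language prediction prompt."""
    described = ", ".join(f"{k}: {v}" for k, v in profile.items())
    return (
        "Given this survey respondent's profile, predict their life satisfaction "
        f"on a 1-10 scale. Profile: {described}. Answer with a single number."
    )

def query_model(prompt: str) -> float:
    """Placeholder for a call to GPT, Claude, LLaMA, or Gemma."""
    return 7.0  # replace with a real API call

respondents = [
    {"profile": {"country": "Mexico", "age": 34, "income_decile": 5,
                 "self_rated_health": "good"}, "self_report": 8.0},
    {"profile": {"country": "Ethiopia", "age": 41, "income_decile": 3,
                 "self_rated_health": "fair"}, "self_report": 5.0},
]

errors = [query_model(build_prompt(r["profile"])) - r["self_report"]
          for r in respondents]
print("Mean absolute error:", mean(abs(e) for e in errors))
```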
1
0
0
@NickPowdthavee
Nick Powdthavee
8 days
Can AI predict how satisfied people are with their lives? In a new paper with colleagues from MIT and NTU, we tested four leading LLMs—GPT, Claude, LLaMA, and Gemma—on their ability to estimate actual life satisfaction across 64 countries. 🧵
1
0
1
@NickPowdthavee
Nick Powdthavee
5 months
RT @patpat_mit: My new paper shows that AI is secretly judging us based on our last names. Analyzing 72,000 AI-driven evaluations, we found….
0
4
0
@NickPowdthavee
Nick Powdthavee
5 months
10/ What are your thoughts on AI in peer review? Can we trust machines to make fair evaluations? Let's discuss! ⬇️ #AcademicTwitter #EconTwitter #AI #PeerReview
0
0
2
@NickPowdthavee
Nick Powdthavee
5 months
9/ Want to dive deeper? Read the full paper here: The paper's authors include @NickPowdthavee, @patpat_mit and @PattieMaes.
1
0
2
@NickPowdthavee
Nick Powdthavee
5 months
8/ The peer review crisis isn’t just about speed—it’s about fairness. AI could help, but only if we proactively mitigate its biases. Our findings have implications not just for economics but for academia as a whole.
1
0
2
@NickPowdthavee
Nick Powdthavee
5 months
7/ Policy recommendations:
🔹 Train AI models on anonymised submissions to reduce bias.
🔹 Implement post-hoc bias correction techniques.
🔹 Use AI for pre-screening, not final decisions.
🔹 Maintain transparency in AI-assisted peer review.
1
0
3
@NickPowdthavee
Nick Powdthavee
5 months
6/ The efficiency-equity trade-off: AI can expedite desk rejections, reducing the burden on human referees. However, relying solely on AI risks reinforcing systemic biases. Hybrid models—combining AI screening with human judgment—may offer a path forward.
1
0
2
@NickPowdthavee
Nick Powdthavee
5 months
5/ Even when AI was prompted to focus solely on research quality, these biases persisted. This suggests that AI models trained on human-generated data may inherit and amplify existing biases in academia.
1
0
4
@NickPowdthavee
Nick Powdthavee
5 months
Top-ranked authors in RePEc were rated more favourably than lower-ranked ones. AI exhibited a gender bias, slightly disadvantaging female economists.
1
0
3
@NickPowdthavee
Nick Powdthavee
5 months
4/ AI’s bias is striking: even though the papers were identical, authors from Harvard, MIT, and LSE received significantly higher scores than those from less prestigious institutions.
1
0
5