Scott Hale (@computermacgyve)
Followers: 2K · Following: 2K · Media: 63 · Statuses: 2K
Associate Prof @oiioxford, Director of Research @meedan, Fellow @turinginst ・ widening access to quality info ・ multilingualism ・ mobilization ・ NLP ・ agenda setting
Joined February 2009
I'm humbled to have been selected as the next director of @oiioxford. I look forward to working with colleagues and friends to shape the next chapter in our department's story
Exciting news for the OII! We’re delighted to announce that @computermacgyve will be our new Director and @KathrynEccles Deputy Director from January 2026. https://t.co/OyCASKzeY3 1/3
Tip for #travel on @British_Airways @HeathrowAirport Terminal 5 today. All is moving, but my home-printed boarding pass didn't work. I had to reprint it at the airport. FYI @BBCNews > The BBC understands that British Airways has continued to operate as normal using a back-up system
Social media platforms operate globally, but do they allocate human moderation equitably across languages? Answer: no! -Millions of users post in languages with 0 mods -Where mods exist, mod count relative to content volume varies widely across langs https://t.co/VPfgRnKraM
Great opportunity for impactful #AI and #datasci research with a talented team led by a soon-to-be @oiioxford alumna star ⭐ @AISecurityInst. #jobs #hiring
Listen up all talented early-stage researchers! 👂🤖 We're hiring for a 6-month residency in my team at @AISecurityInst to assist cutting-edge research on how frontier AI influences humans! It's an exciting & well-paid role for MSc/PhD students in ML/AI/Psych/CogSci/CompSci 🧵
Congratulations to @ManuelTonneau and co-authors on our paper receiving an Outstanding Paper Award @aclmeeting. Read it here:
Using #LLMs to improve claim matching and #misinformation response. Come see our poster at #ACL2025NLP now (#430) or read the paper. https://t.co/JGF3mRnyfT
So excited our paper is out in the wild! Reward models are where the rubber meets the road of human values in modern AI, but until now, they've faced little scrutiny. We uncover unusual patterns and concerning biases that are misaligned with external data on what humans value ⬇️
Reward models (RMs) are the moral compass of LLMs – but no one has x-rayed them at scale. We just ran the first exhaustive analysis of 10 leading RMs, and the results were...eye-opening. Wild disagreement, base-model imprint, identity-term bias, mere-exposure quirks & more: 🧵
Why do human–AI relationships need socioaffective alignment? As AI evolves from tools to companions, we must seek systems that enhance rather than exploit our nature as social & emotional beings. Published today in @Nature Humanities & Social Sciences! https://t.co/y92riRuvDF
📈Out today in @PNASNews!📈 In a large pre-registered experiment (n=25,982), we find evidence that scaling the size of LLMs yields sharply diminishing persuasive returns for static political messages. 🧵:
Are LLMs biased when they write about political issues? We just released IssueBench – the largest, most realistic benchmark of its kind – to answer this question more robustly than ever before. Long 🧵with spicy results 👇
Humans are social creatures and AI agents are increasingly capable of relationship-building behaviors. What does this mean for safe and aligned AI? 🧐 We ask and answer this question in our new preprint w/ @IasonGabriel @summerfieldlab, @bertievidgen, @computermacgyve FAQs in 🧵
Amazing work @hannahrosekirk. It's been a joy working together on this, and to see you grow as a leader and scholar. Congratulations 🎉 cc: @oiioxford
A real honour and career dream that PRISM has won a @NeurIPSConf best paper award! 🌈 One year ago I was sat in a 13,000+ person audience at NeurIPS '23 having just finished data collection. Safe to say I've gone from feeling #stressed to very #blessed 😁
Check out my blog post on @meedan's recent work on using LLMs to run a multilingual classification service called ClassyCat 🐈‍⬛ details below 🧵
💻 At Meedan, we’re focused on leveraging emerging technology, such as generative AI, to promote efficient, ethical, and culturally informed decision-making. Find out how we’re putting it to work with the development of our new internal tool ClassyCat. 🐈⬛ https://t.co/8s075PaskT
My twitter feed this weekend is full of people praising Roberta and recommending this internship opportunity 😂 I agree so I’m adding one more tweet to this list!
I’m looking for a PhD intern for next year to work at the intersection of LLM-based agents and open-ended learning, part of the Llama Research Team in London. If interested please send me an email with a short paragraph with some research ideas and apply at the link below.
Two great fully funded #PhD opportunities (#studentship): one on #elections and #misinformation, and one on technology-facilitated gender-based violence and #hate
🎓 We’re excited to announce that we’re funding two doctoral awards in partnership with @MyBCU. 🧑🎓 Learn more about our call for applications and submit your expression of interest by Sept. 30! https://t.co/ywESsI8IWA
‼️New preprint: Scaling laws for political persuasion with LLMs‼️ In a large pre-registered experiment (n=25,982), we find evidence that scaling the size of language models yields sharply diminishing persuasive returns: https://t.co/Re2Z8bKUFo 1/n
🌎 Introducing LINGOLY, our new reasoning benchmark that stumps even top LLMs (best models only reach ~35% accuracy) 🥴 In a collab between @UniofOxford, @Stanford, and UK Linguistics Olympiad puzzle authors, we stress test LLMs on over 90 low-resource and extinct languages...
🚨ANNOUNCEMENT🚨 Meedan is launching the 2024 #InvestigativeJournalismFellowship today! We’re working with @agenciapublica, @ThePublicSource, and NIRIJ to promote #ElectionIntegrity through investigative reporting. Read about it: https://t.co/RtKdRGk0cd
#JournalismForDemocracy
meedan.org
Select partner organizations will work with Meedan to recruit fellows who will report on election topics during a pivotal year for global democracies.
I'm excited to share that our Demo paper, "SynDy: Synthetic Dynamic Dataset Generation Framework for Misinformation Tasks," has been accepted to #SIGIR #SIGIR2024! With @mshlis @ashkankazemi @computermacgyve: Preprint is now available. 🧵 https://t.co/JwzRRydkyS
arxiv.org
Diaspora communities are disproportionately impacted by off-the-radar misinformation and often neglected by mainstream fact-checking efforts, creating a critical need to scale-up efforts of...