summerfieldlab @summerfieldlab.bsky.social
@summerfieldlab
Followers
8K
Following
1K
Media
95
Statuses
786
Investigating the mechanisms that underpin human learning, perception and cognition, headed by Chris Summerfield
Oxford University
Joined January 2013
it's nearly 2 years since I downed pen on this book, but the main predictions - that the main impacts of AI will be from its increasingly anthropomorphic and agentic features - are proving correct.
0
0
0
Chinese translation of These Strange New Minds has a very funky cover, but I quite like it! The duck and the parrot represent the deflationist (stochastic parrot) vs. functionalist (if it walks like a duck...) perspectives on LLMs that are discussed in the book.
1
2
12
AI grows up. My review in today’s @WSJ @WSJBooks of “These Strange New Minds” by @summerfieldlab and “The Scaling Era” by @dwarkesh_sp. Link is in reply.
1
2
9
New preprint! Growing numbers of people turn to AI chatbots for information, sparking debates about their potential to mislead voters and shape public discourse. But what's the real impact of AI on political beliefs? Our latest research dives into this critical question 👇
1
2
10
Pleased to share our new piece @Nature titled: "We Need a New Ethics for a World of AI Agents". AI systems are undergoing an ‘agentic turn’, shifting from passive tools to active participants in our world. This moment demands a new ethical framework.
32
160
548
📢 Introducing the Alignment Project: A new fund for research on urgent challenges in AI alignment and control, backed by over £15 million.
▶️ Up to £1 million per project
▶️ Compute access, venture capital investment, and expert support
Learn more and apply ⬇️
7
64
191
it was so fun to work with this great team to understand what makes AI persuasive and why!
Today (w/ @UniofOxford @Stanford @MIT @LSEnews) we’re sharing the results of the largest AI persuasion experiments to date: 76k participants, 19 LLMs, 707 political issues. We examine “levers” of AI persuasion: model scale, post-training, prompting, personalization, & more 🧵
0
2
12
please apply for this: https://t.co/HxNRYFn7Se it's a strategy and delivery manager role (non-technical) in our "AI and Human Influence" team at AISI. Would suit someone who cares about AI policy, and wants to work in a fast-paced environment in the shadow of Big Ben
0
2
5
We're hiring a Senior Researcher for the Science of Evaluation team! We are an internal red-team, stress-testing the methods and evidence behind AISI’s evaluations. If you're sharp, methodologically rigorous, and want to shape research and policy, this role might be for you! 🧵
1
4
10
This means designing studies that are theoretically well-motivated, include appropriate controls, and avoid excessive reliance on anecdote. With many great people from @AISecurityInst. Paper:
arxiv.org
We examine recent research that asks whether current AI systems may be developing a capacity for "scheming" (covertly and strategically pursuing misaligned goals). We compare current research...
0
0
11
Researchers should take risks from misaligned AI seriously. But to understand how those risks might play out – both now and in the future – we need rigorous research approaches.
1
0
7
We examine the methods in AI ‘scheming’ papers, and show how they often rely on anecdotes, fail to rule out alternative explanations, lack control conditions, or rely on vignettes that sound superficially worrying but in fact test for expected behaviours.
1
1
12
This reliance on anecdote – coupled with poor experimental design and a lack of construct validity – led many researchers to falsely believe that apes could learn language. In our paper, we argue that many of the same problems plague research into AI ‘scheming’ today.
1
0
6
We draw on a historical parallel. In the 1970s, researchers asked whether apes were capable of learning sign language. They did this by carefully monitoring signing, noting down exceptional behaviours, and writing academic papers about their findings.
1
0
4
Misaligned AI is a serious and credible threat, both now and in the future. Recent reports of AI ‘scheming’ have been cited as evidence of an existential risk to humanity. But is the evidence sufficiently rigorous to support the claims?
1
0
5
In a new paper, we examine recent claims that AI systems have been observed ‘scheming’, or making strategic attempts to mislead humans. We argue that to test these claims properly, more rigorous methods are needed.
4
25
84
a mere 13 months after it was first submitted, this review article led by Andrea Tachetti and others has finally seen the light of day! I have no recollection of what it says https://t.co/nMeS9w5uAw
pnas.org
Human society is coordinated by mechanisms that control how prices are agreed, taxes are set, and electoral votes are tallied. The design of robust...
0
4
35
Amid rising AI companionship, this work w/ @iasongabriel @summerfieldlab @bertievidgen @computermacgyve (@UniofOxford @oiioxford @AISecurityInst @GoogleDeepMind) explores how human-AI relationships create feedback loops that reshape preferences - and why it matters for AI safety
1
1
13
AISI has published its research agenda... https://t.co/6tmzvlrXCl There is so much amazing work going on at AISI. Expect much more in the way of published outputs over the coming months!
aisi.gov.uk
View AISI grants. The AI Security Institute is a directorate of the Department for Science, Innovation and Technology that facilitates rigorous research to enable advanced AI governance.
0
3
10
THESE STRANGE NEW MINDS by @summerfieldlab is one of @lithub's Ten Nonfiction Books to Check Out in March!
lithub.com
Each month, we here at Lit Hub pore over literally hundreds of nonfiction titles—here are ten coming out in March that are worth your time. (Sign up to our weekly nonfiction newsletter for evidence…
0
2
12