summerfieldlab @summerfieldlab.bsky.social

8K Followers · 1K Following · 95 Media · 786 Statuses

Investigating the mechanisms that underpin human learning, perception and cognition. Headed by Chris Summerfield.

Oxford University
Joined January 2013
summerfieldlab @summerfieldlab.bsky.social
13 days
it's nearly 2 years since I put down my pen on this book, but its main prediction - that the biggest impacts of AI will come from its increasingly anthropomorphic and agentic features - is proving correct.
0 replies · 0 reposts · 0 likes
summerfieldlab @summerfieldlab.bsky.social
13 days
Chinese translation of These Strange New Minds has a very funky cover, but I quite like it! The duck and the parrot represent the deflationist (stochastic parrot) vs. functionalist (if it walks like a duck...) perspectives on LLMs that are discussed in the book.
1 reply · 2 reposts · 12 likes
Matthew Hutson @SilverJacket
1 month
AI grows up. My review in today’s @WSJ @WSJBooks of “These Strange New Minds” by @summerfieldlab and “The Scaling Era” by @dwarkesh_sp. Link is in reply.
1 reply · 2 reposts · 9 likes
Lennart Luettgau @LLuettgau
2 months
New preprint! Growing numbers of people turn to AI chatbots for information, sparking debates about their potential to mislead voters and shape public discourse. But what's the real impact of AI on political beliefs? Our latest research dives into this critical question 👇
1 reply · 2 reposts · 10 likes
Iason Gabriel @IasonGabriel
3 months
Pleased to share our new piece @Nature titled: "We Need a New Ethics for a World of AI Agents". AI systems are undergoing an ‘agentic turn’, shifting from passive tools to active participants in our world. This moment demands a new ethical framework.
32 replies · 160 reposts · 548 likes
AI Security Institute @AISecurityInst
3 months
📢 Introducing the Alignment Project: a new fund for research on urgent challenges in AI alignment and control, backed by over £15 million.
▶️ Up to £1 million per project
▶️ Compute access, venture capital investment, and expert support
Learn more and apply ⬇️
7 replies · 64 reposts · 191 likes
summerfieldlab @summerfieldlab.bsky.social
3 months
it was so fun to work with this great team to understand what makes AI persuasive and why!
Kobi Hackenburg @KobiHackenburg
3 months
Today (w/ @UniofOxford @Stanford @MIT @LSEnews) we’re sharing the results of the largest AI persuasion experiments to date: 76k participants, 19 LLMs, 707 political issues. We examine “levers” of AI persuasion: model scale, post-training, prompting, personalization, & more 🧵
0 replies · 2 reposts · 12 likes
summerfieldlab @summerfieldlab.bsky.social
3 months
please apply for this: https://t.co/HxNRYFn7Se It's a strategy and delivery manager role (non-technical) in our "AI and Human Influence" team at AISI. It would suit someone who cares about AI policy and wants to work in a fast-paced environment in the shadow of Big Ben.
0 replies · 2 reposts · 5 likes
Cozmin Ududec @CUdudec
3 months
We're hiring a Senior Researcher for the Science of Evaluation team! We are an internal red team, stress-testing the methods and evidence behind AISI’s evaluations. If you're sharp, methodologically rigorous, and want to shape research and policy, this role might be for you! 🧵
1 reply · 4 reposts · 10 likes
summerfieldlab @summerfieldlab.bsky.social
4 months
This means designing studies that are theoretically well-motivated, include appropriate controls, and avoid excessive reliance on anecdote. With many great people from @AISecurityInst. Paper:
arxiv.org
We examine recent research that asks whether current AI systems may be developing a capacity for "scheming" (covertly and strategically pursuing misaligned goals). We compare current research...
0 replies · 0 reposts · 11 likes
summerfieldlab @summerfieldlab.bsky.social
4 months
Researchers should take risks from misaligned AI seriously. But to understand how those risks might play out – both now and in the future – we need rigorous research approaches.
1 reply · 0 reposts · 7 likes
summerfieldlab @summerfieldlab.bsky.social
4 months
We examine the methods in AI ‘scheming’ papers, and show how they often rely on anecdotes, fail to rule out alternative explanations, lack control conditions, or rely on vignettes that sound superficially worrying but in fact test for expected behaviours.
1 reply · 1 repost · 12 likes
summerfieldlab @summerfieldlab.bsky.social
4 months
This reliance on anecdote – coupled with poor experimental design and a lack of construct validity – led many researchers to falsely believe that apes could learn language. In our paper, we argue that many of the same problems plague research into AI ‘scheming’ today.
1 reply · 0 reposts · 6 likes
summerfieldlab @summerfieldlab.bsky.social
4 months
We draw on a historical parallel. In the 1970s, researchers asked whether apes were capable of learning sign language. They did this by carefully monitoring signing, noting down exceptional behaviours, and writing academic papers about their findings.
1 reply · 0 reposts · 4 likes
summerfieldlab @summerfieldlab.bsky.social
4 months
Misaligned AI is a serious and credible threat, both now and in the future. Recent reports of AI ‘scheming’ have been cited as an existential risk to humanity. But is the evidence sufficiently rigorous to support the claims?
1 reply · 0 reposts · 5 likes
summerfieldlab @summerfieldlab.bsky.social
4 months
In a new paper, we examine recent claims that AI systems have been observed ‘scheming’, or making strategic attempts to mislead humans. We argue that to test these claims properly, more rigorous methods are needed.
4 replies · 25 reposts · 84 likes
summerfieldlab @summerfieldlab.bsky.social
4 months
a mere 13 months after it was first submitted, this review article led by Andrea Tacchetti and others has finally seen the light of day! I have no recollection of what it says https://t.co/nMeS9w5uAw
pnas.org
Human society is coordinated by mechanisms that control how prices are agreed, taxes are set, and electoral votes are tallied. The design of robust...
0 replies · 4 reposts · 35 likes
Hannah Rose Kirk @hannahrosekirk
5 months
Amid rising AI companionship, this work w/ @iasongabriel @summerfieldlab @bertievidgen @computermacgyve (@UniofOxford @oiioxford @AISecurityInst @GoogleDeepMind) explores how human-AI relationships create feedback loops that reshape preferences - and why it matters for AI safety
1 reply · 1 repost · 13 likes
summerfieldlab @summerfieldlab.bsky.social
6 months
AISI has published its research agenda... https://t.co/6tmzvlrXCl
There is so much amazing work going on at AISI. Expect much more in the way of published outputs over the coming months!
aisi.gov.uk
View AISI grants. The AI Security Institute is a directorate of the Department for Science, Innovation and Technology that facilitates rigorous research to enable advanced AI governance.
0 replies · 3 reposts · 10 likes