Canfer Akbulut

@canfer_akbulut

Followers
315
Following
175
Media
1
Statuses
21

sociotechnical AI research @googledeepmind

Joined November 2021
@canfer_akbulut
Canfer Akbulut
10 months
super exciting job opportunity! 🥳
@IasonGabriel
Iason Gabriel
10 months
Are you interested in exploring questions at the ethical frontier of AI research? If so, then take a look at this new opening in the humanity, ethics and alignment research team. HEART conducts interdisciplinary research to advance safe & beneficial AI.
0
0
3
@canfer_akbulut
Canfer Akbulut
10 months
I'm presenting our work on the Gaps in the Safety Evaluation of Generative AI today at @AIESConf! We survey the state of safety evaluations and find 3 gaps: the modality gap 📊, the coverage gap 📸, and the context gap 🌐. Find out more in the paper:
0
9
36
@canfer_akbulut
Canfer Akbulut
10 months
RT @mhtessler: I am really beyond words to be able to share with the world what @bakkermichiel, @summerfieldlab, and I and a truly world-cl…
0
47
0
@canfer_akbulut
Canfer Akbulut
10 months
RT @IasonGabriel: What does it mean for AI to be "too human"? How could such a situation arise? And why does this matter? Check out this e…
0
58
0
@canfer_akbulut
Canfer Akbulut
10 months
How do we anticipate and prepare for the impacts of anthropomorphic AI on users and society? We explore this question in our paper, now out in @AIESConf #aies2024:
@canfer_akbulut
Canfer Akbulut
1 year
Have you been thinking about the implications of anthropomorphic AI quite a bit this week? 🤔 We explore the risks of anthropomorphic AI systems in our Ethics of Advanced AI Assistants report. Key insights in thread 💡
0
69
26
@canfer_akbulut
Canfer Akbulut
1 year
RT @Arianna_Manzini: In a world where users rely on advanced AI assistants for a range of tasks across various domains, when would user tru…
0
7
0
@canfer_akbulut
Canfer Akbulut
1 year
RT @lujainmibrahim: Most real-world AI applications involve human-model interaction, yet most current safety evaluations do not. In a new p…
0
22
0
@canfer_akbulut
Canfer Akbulut
1 year
[6] There are clear ways for model developers to avoid inadvertently increasing the risk of harm to users, including transparency and disclosure of an AI's status and impact-centred research on user-AI interaction. 🔧 Find out more about paths to mitigations in the paper!
0
0
5
@canfer_akbulut
Canfer Akbulut
1 year
[5] The more anthropomorphic cues we build into these systems, the more salient misperceptions of human-likeness become. The capacity to shape this design creates a responsibility. 🙋‍♀️
1
0
4
@canfer_akbulut
Canfer Akbulut
1 year
[4] These risks have become a lot more salient with recent releases of conversational AI systems. ⚠️ Luckily, AI designers have influence over the anthropomorphic cues in a system that increase perceptions of human-likeness.
1
0
2
@canfer_akbulut
Canfer Akbulut
1 year
[3] We’ve known for a long time that seeing human-likeness in AI may lead users to over-trust these systems, or form inappropriate emotional attachments to them, paving the way to harms that impact a user's privacy, safety, and emotional well-being. ❤️‍🩹
1
1
2
@canfer_akbulut
Canfer Akbulut
1 year
[2] But the dynamic and highly interactive nature of conversational AI is quickly changing our expectations of how likely it is that people will anthropomorphise these systems (treat them as human-like). 🤯
1
0
2
@canfer_akbulut
Canfer Akbulut
1 year
[1] Seeing human-likeness in technology is nothing new – we observed this effect in user interactions with simple dialogue systems decades ago (the “Eliza effect”). More recently, people have shown a tendency to "see human" in social robots and digital assistants. 🤖
1
0
2
@canfer_akbulut
Canfer Akbulut
1 year
Have you been thinking about the implications of anthropomorphic AI quite a bit this week? 🤔 We explore the risks of anthropomorphic AI systems in our Ethics of Advanced AI Assistants report. Key insights in thread 💡
deepmind.google
Exploring the promise and risks of a future with more capable AI
2
9
34
@canfer_akbulut
Canfer Akbulut
1 year
RT @IasonGabriel: More great work from a research team led by our model methodologist and evaluator in chief @weidingerlaura 👏. Here's what…
0
4
0
@canfer_akbulut
Canfer Akbulut
1 year
paper's out! 🥳 honored to have contributed to this thorough analysis of the mechanisms that enable persuasive generative AI – hats off to seliem and sasha for their spectacular leadership!
@ZacKenton1
Zac Kenton
1 year
Our new paper on AI persuasion, exploring definitions, harms and mechanisms. Happy to have contributed towards the section on mitigations to avoid harmful persuasion. Some highlights in 🧵
0
1
16
@canfer_akbulut
Canfer Akbulut
1 year
A truly monumental effort led by @IasonGabriel, @Arianna_Manzini, and Geoff Keeling. I led the section on anthropomorphism – many thanks to my lovely co-authors Iason, Arianna, @verena_rieser, and @weidingerlaura and to @merrierm and @mhtessler for their thoughtful edits!
@IasonGabriel
Iason Gabriel
1 year
1. What are the ethical and societal implications of advanced AI assistants? What might change in a world with more agentic AI? Our new paper explores these questions. It’s the result of a one-year research collaboration involving 50+ researchers… a 🧵
0
4
23
@canfer_akbulut
Canfer Akbulut
2 years
RT @GoogleDeepMind: We’re excited to announce 𝗚𝗲𝗺𝗶𝗻𝗶: @Google’s largest and most capable AI model. Built to be natively multimodal, it can…
0
2K
0
@canfer_akbulut
Canfer Akbulut
2 years
RT @GoogleDeepMind: From assisting in healthcare to creating art, generative AI is changing how we live and work. We developed a framework…
0
58
0