
Canfer Akbulut (@canfer_akbulut)
sociotechnical AI research @googledeepmind
Joined November 2021
315 Followers · 175 Following · 1 Media · 21 Statuses
I'm presenting our work on the Gaps in the Safety Evaluation of Generative AI today at @AIESConf! We survey the state of safety evaluations and find 3 gaps: the modality gap 📊, the coverage gap 📸, and the context gap 🌐. Find out more in the paper:
0 · 9 · 36
RT @mhtessler: I am really beyond words to be able to share with the world what @bakkermichiel, @summerfieldlab, and I and a truly world-cl…
0 · 47 · 0
RT @IasonGabriel: What does it mean for AI to be "too human"? How could such a situation arise? And why does this matter? Check out this e…
0 · 58 · 0
How do we anticipate and prepare for the impacts of anthropomorphic AI on users and society? We explore this question in our paper, now out at @AIESConf #aies2024:
Have you been thinking about the implications of anthropomorphic AI quite a bit this week? 🤔 We explore the risks of anthropomorphic AI systems in our Ethics of Advanced AI Assistants report. Key insights in thread 💡
0 · 69 · 26
RT @Arianna_Manzini: In a world where users rely on advanced AI assistants for a range of tasks across various domains, when would user tru…
0 · 7 · 0
RT @lujainmibrahim: Most real-world AI applications involve human-model interaction, yet most current safety evaluations do not. In a new p…
0 · 22 · 0
Have you been thinking about the implications of anthropomorphic AI quite a bit this week? 🤔 We explore the risks of anthropomorphic AI systems in our Ethics of Advanced AI Assistants report. Key insights in thread 💡
deepmind.google
Exploring the promise and risks of a future with more capable AI
2 · 9 · 34
RT @IasonGabriel: More great work from a research team led by our model methodologist and evaluator in chief @weidingerlaura 👏. Here's what…
0 · 4 · 0
paper's out! 🥳 honored to have contributed to this thorough analysis of the mechanisms that enable persuasive generative AI – hats off to seliem and sasha for their spectacular leadership!
Our new paper on AI persuasion, exploring definitions, harms, and mechanisms. Happy to have contributed to the section on mitigations to avoid harmful persuasion. Some highlights in 🧵
0 · 1 · 16
A truly monumental effort led by @IasonGabriel, @Arianna_Manzini, and Geoff Keeling. I led the section on anthropomorphism – many thanks to my lovely co-authors Iason, Arianna, @verena_rieser, and @weidingerlaura, and to @merrierm and @mhtessler for their thoughtful edits!
1. What are the ethical and societal implications of advanced AI assistants? What might change in a world with more agentic AI? Our new paper explores these questions. It's the result of a one-year research collaboration involving 50+ researchers… a 🧵
0 · 4 · 23
RT @GoogleDeepMind: We're excited to announce 𝗚𝗲𝗺𝗶𝗻𝗶: @Google's largest and most capable AI model. Built to be natively multimodal, it can…
0 · 2K · 0
RT @GoogleDeepMind: From assisting in healthcare to creating art, generative AI is changing how we live and work. We developed a framework…
0 · 58 · 0