Carina Prunkl
@carinaprunkl
Followers: 1K · Following: 413 · Media: 12 · Statuses: 140
Assistant Professor for Ethics of Technology | Utrecht University | AI Ethics and Policy | Philosophy and Physics
Joined March 2019
#AIEthics guidelines often emphasise the "protection of human autonomy". But what does this mean? Here's my brief account in @NatMachIntell of what autonomy is, how AI could interfere with it, and why we need to adapt policy solutions accordingly. Link https://t.co/X8AjrVBkej
Thank you and congrats to the lead writers of this year's International AI Safety Report, @stephenclare_ and @carinaprunkl, who co-authored this op-ed with me in @ReadTransformer on the first Key Update to the Report, published earlier today.
Very exciting to have an op-ed in @ReadTransformer today from "godfather of AI" @Yoshua_Bengio (co-authored with @stephenclare_ and @carinaprunkl). It's about the astonishing speed of AI development, and the need for institutions to keep up.
Pleased to share that I will participate in the drafting of the EU's new Code of Practice for General-purpose AI. It will lay out the rules for GPAI providers to comply with the #AIAct. Exciting times!
And finally, a must-read Comment from ethicist @carinaprunkl stresses the need for AI education and scientific community governance, in order to mitigate the risks of advanced AI approaches. https://t.co/uwIunryPLy
nature.com
Nature Methods - Risks from AI in basic biology research can be addressed with a dual mitigation strategy that comprises basic education in AI ethics and community governance measures that are...
Delighted to talk to the UNDP about autonomy, agency and AI in the context of human development. To read some of my work on autonomy and AI, click here: https://t.co/wPI7Rrt2lu Short version, here:
link.springer.com
Minds and Machines - Autonomy is a core value that is deeply entrenched in the moral, legal, and political practices of many societies. The development and deployment of artificial...
We thank @carinaprunkl @EthicsInAI for a riveting discussion on the potential opportunities and challenges that #AI could pose to human autonomy and agency, as well as policy implications on how to mitigate negative impacts.
New paper alert! @carinaprunkl and I investigate how algorithmic profiling can be a source of hermeneutical injustice: https://t.co/wpBThiujwQ A short thread, 1/6
What are the effects of online personalisation on our epistemic agency? Find out! Delighted to share my new article with @SilviaMilano1 on algorithmic profiling and epistemic injustice, forthcoming in Philosophical Studies: https://t.co/uglv26Vhbc
Huge congratulations to our 3 postdocs on their big job successes. Dr Carina Prunkl will be Assistant Professor at the Ethics Institute at Utrecht University from December 1/4
Our lunchtime research seminar with Milo Phillips-Brown and @carinaprunkl arguing - against @CassSunstein & colleagues - that algorithmic systems can be noisy.
The Veil of Ignorance is a foundational thought experiment in political philosophy used to identify principles of justice for a society. In our new PNAS paper w. @weidingerlaura, @empiricallykev, @saffronhuang, and others, we explore how it applies to AI: https://t.co/myLvo566WO
What does it mean for conversation with an AI system to be good or even ideal? And what does it mean for speech to be false, biased or problematic? This new paper w. @Dr_Atoosa explores these questions through the lens of pragmatics and philosophy: https://t.co/gD635xPo5O
How can conversational agents be aligned with human values? New research from @Dr_Atoosa and @IasonGabriel explores this question using philosophy and linguistics: https://t.co/8rNaheosiP
I spoke to @AndrewMarr9 this evening about AI risks and what kind of regulation is needed. This video cuts off just before I spoke about the need for greater government oversight of how these technologies are being developed - especially very large, general purpose models
"There's this feeling that it's happening anyway - so maybe should get there first and we can do it safer than everyone else." Following the government's regulation plans, AI Policy expert @jesswhittles speaks to @AndrewMarr9 about the 'rapid and scary' advances in technology.
Ready and motivated for today's amazing schedule! Starting with a keynote by @carinaprunkl
LUCID: Exposing Algorithmic Bias through Inverse Design https://t.co/jNIF2e5jn6 by @CarmenMazijn et al. including @carinaprunkl, @AndresAlgaba23, @VincentGinis
#ComputerScience #Learning
deepai.org
08/26/22 - AI systems can create, propagate, support, and automate bias in decision-making processes. To mitigate biased decisions, we both n...
Governing AI is gonna be hard. So we're growing @GovAI_, looking for many more people to join the field. We just opened applications for 4 opportunities:
What does responsible #AI in #LawEnforcement look like? UNICRI & @EthicsInAI convened a workshop to explore this and many more challenges and opportunities of AI at the historic @UniofOxford. #ResponsibleAI #ResponsiblePolicing #AI4SC
@INTERPOL_IC @EU_Commission @moiuae
Discussing the @AISaferChildren initiative at the @UniofOxford, including its ethical and legal process! @UNICRI @carinaprunkl A multi-stakeholder discussion on the ethical use and development of AI tools is the theme of the day!
It's a pleasure to kick off our joint @EthicsInAI workshop with @UNICRI on #AI and law enforcement. What are the main challenges? How can we ensure democratic legitimacy? How can we ensure there's no disparate impact? When is the use of #AI appropriate?
I am really delighted that Prof Josiah Ober (Stanford) will deliver the @EthicsInAI inaugural annual lecture on June 16th on the topic "Ethics in AI with Aristotle". There will also be a panel discussion on the next day. Register for the lecture here https://t.co/im6s1G9Src
eventbrite.co.uk
Inaugural Annual Lecture | Ethics in AI with Aristotle
#TransatlanticAIRegulation @carinaprunkl is asking whether risk-based and fundamental rights frameworks are compatible, also wrt the #AIA. FWIW, I have a recent (mostly positive) critique of the act here: https://t.co/B0eVOfTQjL But "risk-based" should be proportional, not levels.