Chris Russell

@c_russl

Followers
689
Following
1K
Media
6
Statuses
225

Associate Professor of AI, Government, and Policy at the Oxford Internet Institute. ELLIS Fellow. Formerly AWS and the Alan Turing Institute.

Joined September 2018
@c_russl
Chris Russell
6 years
One important part of this paper is that we show a common test for indirect discrimination in the EU is biased against minorities. This is particularly bad for smaller groups, Roma, LGBTQ+, various religions, and many races. 1/
@SandraWachter5
Sandra Wachter [email protected]
6 years
My work Why fairness cannot be automated: Bridging the gap between EU non-discrimination law & AI https://t.co/0RpAfKXRWI on compatibility of fairness metrics used by the ECJ & CS. We show which parts of AI fairness can & cannot (& should not) be automated +ideas 4 bias audits
4
10
26
@WillHawkins3
Will Hawkins
1 year
🚀 Paper news! Excited that our paper, with @b_mittelstadt & @c_russl was accepted at the NeurIPS Safe GenAI workshop! In this work we explore how fine-tuning can impact toxicity rates in language models... 🧵
1
6
21
Another example of what I @b_mittelstadt @c_russl termed careless speech. Subtle hallucinations are dangerous & developers are not (yet) liable for them. We argue they should be. See paper: Do LLMs have a legal duty to tell the truth? https://t.co/oVM3etcaJZ
apnews.com
Whisper is a popular transcription tool powered by artificial intelligence, but it has a major flaw. It makes things up that were never said.
2
12
27
@LuizaJarovsky
Luiza Jarovsky, PhD
1 year
🚨 [AI REGULATION] The paper "Do Large Language Models Have a Legal Duty to Tell the Truth?" by @SandraWachter5, @b_mittelstadt & @c_russl is a MUST-READ for everyone in AI governance. Quotes: "Free and unthinking use of LLMs undermine science, education and public discourse in
1
17
52
Such an honour to be featured in this @Nature @metricausa article w/@b_mittelstadt @c_russl on our work on GenAI, truth & hallucinations
@metricausa
Vivien Marx🐘📬⛅️
1 year
And @SandraWachter5 talks about LLMs and truthfinding. (Photo credit: P. Hind)
1
4
23
@b_mittelstadt
Brent Mittelstadt @bmittelstadt.bsky.social
1 year
New open access paper on hallucinations in LLMs out now in Royal Society Open Science: 'Do large language models have a legal duty to tell the truth?' w/ @SandraWachter5 and @c_russl https://t.co/FJljkbIYCw @royalsociety @oiioxford @UniofOxford @OxGovTech
royalsocietypublishing.org
Abstract. Careless speech is a new type of harm created by large language models (LLM) that poses cumulative, long-term risks to science, education and sha
3
25
58
@EoinDelaney_
Eoin Delaney
1 year
🚨New paper and fairness toolkit alert🚨 Announcing OxonFair: A Flexible Toolkit for Algorithmic Fairness w/@fuzihaofzh, @SandraWachter5, @b_mittelstadt and @c_russl toolkit - https://t.co/jSE0hhsYAw paper -
github.com
Fairness toolkit for pytorch, scikit learn and autogluon - oxfordinternetinstitute/oxonfair
1
13
27
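The tweet announces OxonFair but does not show its interface. As an illustration of the kind of group-fairness measure such toolkits report, here is a minimal demographic-parity sketch in plain Python; all function names and data are hypothetical and this is not the OxonFair API:

```python
# Minimal sketch of a group-fairness measure of the kind fairness
# toolkits report. Names and data here are illustrative only; this is
# NOT the OxonFair interface.

def selection_rate(predictions, groups, group):
    """Fraction of positive predictions within one demographic group."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members) if members else 0.0

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rate between any two groups."""
    rates = [selection_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy example: binary decisions for two groups.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, grps))  # 0.75 - 0.25 = 0.5
```

A real toolkit would compute such gaps per protected attribute and let you enforce constraints during training or post-processing; this sketch only shows the measurement step.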
Congrats Algorithm Audit for this important work & for uncovering systemic discrimination in access to education. I am thrilled that my paper "Why Fairness Cannot Be Automated" https://t.co/inItgL10WK w/@b_mittelstadt @c_russl was useful for the study!
0
21
64
@SandraWachter5
Sandra Wachter [email protected]
2 years
My new paper w/@b_mittelstadt @c_russl "Do LLMs have a legal duty to tell the truth?" We explore if developers need to reduce hallucinations, inaccurate & harmful outputs or what we term "careless speech"? We show who is liable for GenAI outputs.
6
32
106
@cvssp_research
CVSSP Research
2 years
📢 We're calling upon all #Monocular #Depth enthusiasts to join the #challenge and partake in our #CVPR workshop. Dive deeper into the details of the workshop and challenge on our website:  https://t.co/73B3EUVhUa #computervision #ai #mde #mdec #monodepth #cvpr24
0
3
4
@pierrepinna
Pinna Pierre
2 years
#AI #AIEthics A must-read research paper, by Sandra Wachter [@SandraWachter5], Brent Mittelstadt [@b_mittelstadt] & Chris Russell [@c_russl], on how to make Large Language Models (LLMs) more Truthful and Counter #misinformation 👇 To Protect Science, We Must Use LLMs as
4
38
44
@c_russl
Chris Russell
2 years
This is one of my favourite baselines. Instead of augmenting your dataset with synthetic images, you can use the prompt to search the dataset used to train the diffusion model. It's cheap, and there are significant improvements in performance.
0
0
9
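The tweet describes the idea only. A toy sketch of the retrieval step, assuming the training set comes with captions: match the prompt against captions and return the best-matching items. Token-overlap similarity stands in for whatever matcher is used in practice, and all names and data are illustrative:

```python
# Toy sketch of the retrieval baseline: instead of generating synthetic
# images from a prompt, search the (captioned) training set for images
# whose captions best match the prompt. Jaccard token overlap is a
# stand-in for a real text matcher; data and names are illustrative.

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    """Similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve(prompt, captions, k=2):
    """Return the k caption indices that best match the prompt."""
    p = tokens(prompt)
    order = sorted(range(len(captions)),
                   key=lambda i: jaccard(p, tokens(captions[i])),
                   reverse=True)
    return order[:k]

captions = [
    "a photo of a dog on the beach",
    "an oil painting of mountains",
    "a dog playing with a ball",
]
print(retrieve("a dog on the beach", captions, k=2))  # [0, 2]
```

The appeal of the baseline is exactly this cheapness: no diffusion sampling, just a lookup over data the generative model was trained on anyway.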
@jamiespencer06
Jaime Spencer
2 years
The 3rd edition of MDEC will be starting in just two days! See 👇🏻 for details on submitting to the challenge and look forward to some great keynotes at #CVPR24! @CVPR @cvssp_research
@DepthChallenge
Monocular Depth Estimation Challenge
2 years
We’re very excited to announce that the 3rd edition of MDEC will be taking place at @CVPR in Seattle! The challenge, accepting both supervised and self-supervised approaches, will be running from 1st Feb - 25th Mar on CodaLab ( https://t.co/yV3kq33vWH) 1/2
0
5
9
@oiioxford
Oxford Internet Institute
2 years
Great to see work by Profs @SandraWachter5, @b_mittelstadt and Chris Russell, all @oiioxford, referenced as a case study for cross-disciplinary impact in this new @AcadSocSciences report.
@AcadSocSciences
Academy of Social Sciences
2 years
Published today our new report emphasises the vitally important yet poorly-understood role of the #socialsciences to the UK’s current research, development & innovation system. Hear from @jameswilsdon below for more. Read the report ➡️ https://t.co/PES4cFvmrd @Sage_Publishing
0
7
11
@SandraWachter5
Sandra Wachter [email protected]
2 years
Excited for my keynote @NeurIPSConf tmr 16.12 at 9:30am CST "Regulating Code: What the EU has in stock for the governance of AI, foundation models, & generative AI" incl my new paper in @NatureHumBehav https://t.co/K01orMUyRF w/@b_mittelstadt @c_russl
2
1
4
@SandraWachter5
Sandra Wachter [email protected]
2 years
Can't wait for my keynote @NeurIPSConf 16.12 at 9:30am CST "Regulating Code: What the EU has in stock for the governance of AI, foundation models, & generative AI" incl my new paper in @NatureHumBehav https://t.co/K01orMUyRF w/@b_mittelstadt @c_russl
regulatableml.github.io
Towards Bridging the Gaps between Machine Learning Research and Regulations
0
4
20
@SandraWachter5
Sandra Wachter [email protected]
2 years
SO incredibly excited to give a keynote @NeurIPSConf at the Regulatable ML Workshop on my new @NatureHumBehav paper https://t.co/K01orMUyRF w/@b_mittelstadt @c_russl alongside so many outstanding speakers! See you 16 Dec 9.30am CST! https://t.co/IZs19PONhp @oiioxford @BKCHarvard
regulatableml.github.io
Towards Bridging the Gaps between Machine Learning Research and Regulations
0
5
15
@b_mittelstadt
Brent Mittelstadt @bmittelstadt.bsky.social
2 years
How can we use LLMs like ChatGPT safely in science, research & education? In our new @NatureHumBehav paper we advocate for prompting AI with true information using zero-shot translation to avoid hallucinations. Paper: https://t.co/D8rqsUyx0N @oiioxford @SandraWachter5 @c_russl
0
2
19
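As I read the tweet, the "zero-shot translation" idea is to supply verified facts in the prompt and ask the model only to rephrase them, rather than to recall facts itself. A minimal prompt-construction sketch; the template wording is my own, not taken from the paper:

```python
# Sketch of the prompting pattern described above: give the model
# verified source material and ask it only to rewrite, not to recall.
# Template wording is illustrative, not quoted from the paper.

def zero_shot_translation_prompt(facts, audience="a general reader"):
    """Build a prompt that restricts the model to rephrasing given facts."""
    numbered = "\n".join(f"{i + 1}. {f}" for i, f in enumerate(facts))
    return (
        "Rewrite the following verified statements as a short paragraph "
        f"for {audience}. Use only the information given; do not add "
        "facts, figures or citations of your own.\n\n"
        f"Statements:\n{numbered}"
    )

prompt = zero_shot_translation_prompt(
    ["Water boils at 100 degrees Celsius at sea level.",
     "Boiling point decreases with altitude."],
    audience="secondary-school students",
)
print(prompt)
```

The point of the pattern is to use the LLM as a stylistic translator of trusted input, so any factual error traces back to the input rather than to model hallucination.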
@SandraWachter5
Sandra Wachter [email protected]
2 years
Fresh off the press my new paper @Nature @NatureHumBehav w/@b_mittelstadt @c_russl "To protect science, we must use LLMs as zero-shot translators" where we show how GenAI poses huge risks to science & society & what can be done to stop it https://t.co/PTEnQyLyqB @oiioxford
nature.com
Nature Human Behaviour - Large language models (LLMs) do not distinguish between fact and fiction. They will return an answer to almost any prompt, yet factually incorrect responses are...
2
19
56