
Chris Russell
@c_russl
Followers: 696 · Following: 1K · Media: 6 · Statuses: 225
Associate Professor of AI, Government, and Policy at the Oxford Internet Institute. ELLIS Fellow. Formerly AWS and the Alan Turing Institute.
Joined September 2018
One important part of this paper is that we show a common test for indirect discrimination in the EU is biased against minorities. This is particularly bad for smaller groups: Roma, LGBTQ+ people, various religious groups, and many races. 1/
My work 'Why fairness cannot be automated: Bridging the gap between EU non-discrimination law & AI' on the compatibility of fairness metrics used by the ECJ & CS. We show which parts of AI fairness can & cannot (& should not) be automated, + ideas for bias audits.
4 · 10 · 27
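As a toy illustration of why a fixed disparity test can misfire for small groups (my own sketch of the sampling-noise intuition, not code or analysis from the paper): even when a large group and a small group are selected at exactly the same underlying rate, the measured selection-rate gap for the small group swings widely, so a threshold-based test flags or misses them largely at random.

```python
# Sketch: sampling noise in selection-rate gaps, small vs. large groups.
# Both groups share the SAME true selection rate; any measured gap is noise.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.5
majority_size = 10_000

for minority_size in (10_000, 50):  # large comparison group vs. small group
    gaps = []
    for _ in range(1_000):
        majority = rng.random(majority_size) < true_rate
        minority = rng.random(minority_size) < true_rate
        gaps.append(abs(majority.mean() - minority.mean()))
    print(f"minority size {minority_size:>6}: "
          f"mean observed selection-rate gap = {np.mean(gaps):.3f}")
```

The mean absolute gap comes out roughly ten times larger for the 50-person group than for the 10,000-person group, even though neither group is treated differently.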
RT @WillHawkins3: Paper news! Excited that our paper, with @b_mittelstadt & @c_russl, was accepted at the NeurIPS Safe GenAI workshop!…
0 · 6 · 0
RT @SandraWachter5: Another example of what I @b_mittelstadt @c_russl termed careless speech. Subtle hallucinations are dangerous & develop…
apnews.com
Whisper is a popular transcription tool powered by artificial intelligence, but it has a major flaw. It makes things up that were never said.
0 · 12 · 0
RT @LuizaJarovsky: [AI REGULATION] The paper "Do Large Language Models Have a Legal Duty to Tell the Truth?" by @SandraWachter5, @b_mitte…
0 · 17 · 0
RT @SandraWachter5: Such an honour to be featured in this @Nature @metricausa article w/@b_mittelstadt @c_russl on our work on GenAI, trut…
0 · 4 · 0
RT @b_mittelstadt: Delighted to see coverage of our new paper on truth and LLMs in @newscientist! @oiioxford @Un…
www.newscientist.com
To address the problem of AIs generating inaccurate information, a team of ethicists says there should be legal obligations for companies to reduce the risk of errors, but there are doubts about...
0 · 15 · 0
RT @b_mittelstadt: New open access paper on hallucinations in LLMs out now in Royal Society Open Science: 'Do large language models have a…
royalsocietypublishing.org
Careless speech is a new type of harm created by large language models (LLMs) that poses cumulative, long-term risks to science, education and shared social truth in democratic societies. LLMs produce...
0 · 26 · 0
RT @EoinDelaney_: New paper and fairness toolkit alert! Announcing OxonFair: A Flexible Toolkit for Algorithmic Fairness w/@fuzihaofzh, …
github.com
Fairness toolkit for PyTorch, scikit-learn and AutoGluon - oxfordinternetinstitute/oxonfair
0 · 13 · 0
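For readers curious how toolkits in this space typically work, below is a hedged sketch of one standard post-processing technique: enforcing approximate demographic parity by tuning per-group decision thresholds on held-out scores, in the spirit of threshold post-processing methods such as Hardt et al. (2016). This is my own illustration of the general idea, not OxonFair's actual API; see the linked repo for real usage.

```python
# Sketch: post-processing a classifier with per-group thresholds so that
# both groups are selected at (roughly) the same rate. Toy data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4_000
group = rng.integers(0, 2, n)                       # binary protected attribute
x = rng.normal(size=(n, 3)) + group[:, None] * 0.5  # features correlate with group
y = (x.sum(axis=1) + rng.normal(size=n) > 1).astype(int)

clf = LogisticRegression().fit(x, y)
scores = clf.predict_proba(x)[:, 1]

def selection_rates(pred, grp):
    return [float(pred[grp == g].mean()) for g in (0, 1)]

# A single 0.5 threshold selects the two groups at very different rates.
print("before:", selection_rates(scores > 0.5, group))

# Choose a threshold per group so each is selected at the overall rate,
# giving approximate demographic parity.
target = (scores > 0.5).mean()
thresholds = [np.quantile(scores[group == g], 1 - target) for g in (0, 1)]
adjusted = np.array([s > thresholds[g] for s, g in zip(scores, group)])
print("after: ", selection_rates(adjusted, group))
```

A full toolkit layers much more on top of this (a choice of fairness metrics, fitting on validation data, support for different model backends), but per-group thresholding illustrates the core accuracy-fairness trade-off being tuned.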
RT @SandraWachter5: Congrats Algorithm Audit for this important work & for uncovering systemic discrimination in access to education. I am…
0 · 22 · 0
RT @SandraWachter5: My new paper w/@b_mittelstadt @c_russl "Do LLMs have a legal duty to tell the truth?" We explore if developers need to…
0 · 33 · 0
RT @cvssp_research: We're calling upon all #Monocular #Depth enthusiasts to join the #challenge and partake in our #CVPR workshop. Dive d…
0 · 3 · 0
RT @pierrepinna: #AI #AIEthics. A must-read research paper by Sandra Wachter [@SandraWachter5], Brent Mittelstadt [@b_mittelstadt] & Chr…
0 · 38 · 0
RT @jamiespencer06: The 3rd edition of MDEC will be starting in just two days! See below for details on submitting to the challenge and look f…
0 · 5 · 0
RT @oiioxford: Great to see work by Profs @SandraWachter5, @b_mittelstadt and Chris Russell, all @oiioxford, referenced as a case study for…
0 · 7 · 0
RT @SandraWachter5: Excited for my keynote @NeurIPSConf tmr 16.12 at 9:30am CST "Regulating Code: What the EU has in stock for the governan…
0 · 1 · 0
RT @SandraWachter5: Can't wait for my keynote @NeurIPSConf 16.12 at 9:30am CST "Regulating Code: What the EU has in stock for the governance…
regulatableml.github.io
Towards Bridging the Gaps between Machine Learning Research and Regulations
0 · 4 · 0
RT @SandraWachter5: SO incredibly excited to give a keynote @NeurIPSConf at the Regulatable ML Workshop on my new @NatureHumBehav paper htt…
regulatableml.github.io
Towards Bridging the Gaps between Machine Learning Research and Regulations
0 · 5 · 0
RT @b_mittelstadt: How can we use LLMs like ChatGPT safely in science, research & education? In our new @NatureHumBehav paper we advocate…
0 · 3 · 0
RT @SandraWachter5: Fresh off the press my new paper @Nature @NatureHumBehav w/@b_mittelstadt @c_russl "To protect science, we must use LLM…
www.nature.com
Nature Human Behaviour - Large language models (LLMs) do not distinguish between fact and fiction. They will return an answer to almost any prompt, yet factually incorrect responses are...
0 · 19 · 0
RT @oiioxford: News release alert! Large Language Models pose risk to science with false answers, say Oxford AI experts @b_mittelstadt @Sa…
www.oii.ox.ac.uk
Large Language Models (LLMs) pose a direct threat to science because of so-called "hallucinations" and should be restricted to protect scientific truth, says a new paper from leading AI researchers...
0 · 10 · 0