Andreas Theodorou @[email protected]
@RecklessCoding
Followers: 2K · Following: 13K · Media: 126 · Statuses: 4K
Research Fellow (permanent/faculty) in Responsible AI @UmeaUniversity | AI Ethics, Governance, Systems AI, and a bit of HCI/HRI
Umeå, Sverige
Joined February 2014
Our workshop (with @vdignum, @RecklessCoding, among others) @IJCAIconf on Contesting AI. Details on the CFP here: https://t.co/rV7NLJ43FW Looking forward to super interesting discussions
sites.google.com
Accountability is a necessary property of Artificial Intelligence research, development, procurement, deployment and use. When a citizen directly interacts with an artificial intelligence agent, and...
I am deeply honored to receive the 2024 AAAI award for Artificial Intelligence for the Benefit of Humanity. Extremely grateful to my students & postdocs, & the @AAMASconf community. Excited to discuss our latest work in #AIforSocialImpact at #AAAI2024. https://t.co/3o2adb58cc
For reasons, it might be useful to be explicit about the highly desirable job skills a PhD student acquires during their PhD, in addition to their field of expertise. Given in no particular order; feel free to add what I missed. 1/9
Yes, I am being negative, but I think it is crucial to mainstream and further formalize the existing efforts that the community spent decades setting up. Reinventing the wheel serves no one; it only devalues and delays that work.
This sounds awfully like the OECD’s GPAI (which the UK is a member of), the multiple AI observatories, the EU discussions for AI agencies, etc. As for the CERN of AI idea, well, the EU has been funding networks for excellence in AI for some time - see @humaneainet @vision_claire
"Among the ideas under consideration in Downing Street is setting up a global AI authority in London, modelled on the International Atomic Energy Agency (IAEA)."
Come and join @umeauniversitet to work on a very interesting topic!
Great article, but I hate the title. ChatGPT did not “take” anything. Some executives made the decision to switch to automation. Stop anthropomorphising tools; it pushes the wrong narrative on who is responsible.
AI "lacks personal voice and style, and it often churns out wrong, nonsensical or biased answers. But for many companies, the cost-cutting is worth a drop in quality." ChatGPT took their jobs. Now they walk dogs and fix air conditioners. (gift article) https://t.co/QEp6VL9iyy
The 'leaked' Google doc is fascinating, if suspect (drops the day competition investigations into GPAI are announced, looks like from a policy team). But it does raise some interesting questions for policymakers around GPAI, supply chains, and open source https://t.co/3t7dJQyARN
Announcing the NeurIPS Code of Ethics - a multi-year effort to help guide our community towards higher standards of ethical conduct for submissions to NeurIPS. Please read our blog post below: https://t.co/rjSII7i2CT
Re the "slowing AI" chain email going around – LOOK AT THE LEADING SIGNATORIES. This is more BS libertarianism. We don't need AI to be arbitrarily slowed, we need AI products to be safe. That involves following and documenting good practice, which requires regulation and audits.
My PhD student (external supervisor), Jack McKinlay, from @UniofBath @ARTAIBath giving a talk on his #XAI research at the @ResponsibleAIU1 research retreat! Jack’s research looks at the different metrics stakeholders may have when it comes to XAI and how to validate system compliance.
Yup. Yet in some humanist/social science forums this rings as gatekeeping when one points out that raising the alarm is not enough; we need to specify the problems precisely. (Very nice 🧵 btw)
Fifth - I’m tired of sensationalism without solutions. The field of responsible AI has an obligation to not only show up with problems but show up with ideas on solutions. In turn, these multi-billion-dollar companies have an obligation to engage.
Sometimes I think a lot of the breathless enthusiasm for AGI is misplaced religious impulses from people brought up in a secular culture.
A reasonable policy by Nature about the use of LLMs for paper writing. - LLMs are tools without accountability and cannot be listed as coauthors. - LLM usage should be acknowledged, because transparency of methods is necessary in science. https://t.co/51f6w6JXyH
nature.com
Nature - As researchers dive into the brave new world of advanced AI chatbots, publishers need to acknowledge their legitimate uses and lay down clear guidelines to avoid abuse.
📢After a year and a half of work, our "Handbook of Computational Social Science for Policy" is finally out! A joint effort of over 40 prominent scholars available as open access book published by @SpringerNature. Download your copy 👉 https://t.co/oZkUVNMg5c
#CSS4P @EU_ScienceHub
#AIEthics Let it RAIN for Social Good https://t.co/8gFcZ2zleS by @vdignum @RecklessCoding @sesdun v/ @UmeaUniversity Pdf 👇 https://t.co/bXzQ4i3YbY
#AI #Coding #100DaysOfCode Cc @MiaD @Lavina_rr @naeema_pasha @psb_dc @DeepLearn007 @Ym78200 @jblefevre60 @LaurentAlaus @ahier
I asked ChatGPT to rewrite Bohemian Rhapsody to be about the life of a postdoc, and the output was flawless:
So close and yet so far... BEFORE dumping lots of resources into building some tech, we should be asking questions like: What are the failure modes? Who will be hurt if the system malfunctions? Who will be hurt if it functions as intended? As for your use case:
@Abebab Who has Galactica hurt? Will you be upset if it gains wide adoption once deployed? What if it actually helps scientists write papers more efficiently and more correctly, particularly scientists whose main language is not English, or who don't work in a major research institution?