
Stevie Bergman
@tvr2c
Followers 513 · Following 5K · Media 37 · Statuses 1K
Tech in the wild. Visiting Assistant Professor at Brown. Past: DeepMind, Meta, Princeton, TVR2C at WPRB, Peace Corps, USDOJ
New York, USA
Joined November 2013
I made this podcast miniseries on the intersection of AI & human rights framed by a conference by @PrincetonCITP and the @UN in 2019. Aimed at a general audience, it will walk you through the salient topics in AI and in human rights + 3 real case studies
It’s a common misconception that PhD students pay tuition. PhD students actually generally have their tuition covered by the university, which in exchange takes everything from them.
📢We're hiring! We are looking for a UK Public Policy Lead to deliver our work influencing law and policy, as part of Ada’s mission to ensure data and AI work for people and society. Apply by 4 September 2024: https://t.co/EHYLldGTVk
LLM evaluation is a confusing mess. Standardized eval frameworks have helped, but there's a long way to go. I think we should promote evaluation as a specialized activity carried out by impartial third parties who are not also developers. Of course developers will evaluate
Violent anti-Syrian pogroms have broken out in several cities in Türkiye as crowds stormed onto the streets, damaging Syrian-owned property, including vehicles, shops, and houses, and hunting down Syrian residents throughout the night. At least 474 people were detained so far,
So very excited to see our work out in the world! Congratulations John Mellor, @weidingerlaura and team!
New paper out! Very excited that we’re able to share STAR: SocioTechnical Approach to Red Teaming Language Models. We've made some methodological advancements focusing on human red teaming for ethical and social harms. 🧵Check out https://t.co/TOTqh6HDPH
@BobbyAllyn man watched a Spike Jonze movie and said "I'm gonna completely miss the lesson from this film and make it real."
Time to spotlight our amazing speakers! First, Stevie Bergman (@tvr2c) from Google DeepMind will be speaking at 3pm CET. We are so excited to have her join us! Check out our website for the full schedule.
📢 Keynotes for the Safety in Conversational AI workshop @LrecColing! 📢 Upcoming talks from Laura Weidinger, Stevie Bergman, and Maurice Jakesch! 🎉🎉 Stay tuned for details 👏 See you in Torino🥪! @weidingerlaura @tvr2c @maurice_jks @GoogleDeepMind @Bauhaus_Uni
Our chapter on Access got a nice shoutout by the great @jdickerson and @EricaRBrown on CBS news - as well as @nahema_marchal 's great piece on misinformation
cbsnews.com
Artificial intelligence assistants may soon be able to do much more than play your favorite music or call your mom, but some Google researchers warn about possible ethical dilemmas. CBS News reporter...
Check out our new paper on Ethics of Advanced AI Assistants, led by @IasonGabriel @Arianna_Manzini and Geoff Keeling. Chapter 15, Opportunity & Access was cowritten by myself and the magical @renee_m_shelby.
1. What are the ethical and societal implications of advanced AI assistants? What might change in a world with more agentic AI? Our new paper explores these questions: https://t.co/Z0jlSMBLxq It’s the result of a one year research collaboration involving 50+ researchers… a🧵
"One small note on the page says that it “may occasionally produce incorrect, harmful or biased content,” but there’s no way for an average user to know whether what they’re reading is false."
This is a bananas NYC story courtesy of our friends @themarkup: An AI chatbot touted by City Hall keeps telling biz people to break the law. Can you take workers' tips? Can you discriminate? See what the chatbot says 👇 https://t.co/1TNTQM9UPI
It seems like there are just endless bad ideas about how to use "AI". Here are some new ones courtesy of the UK government. ... and a short thread because there is so much awfulness in this one article. /1 https://t.co/k4AWPlZJtA
🚨New paper: The Illusion of Artificial Inclusion (CHI'24) Many are using LLMs to replace humans (researchers and participants) in research and development. We study this practice and present three arguments against LLM use in human subjects work: 1/10🧵 https://t.co/UZg4KOGq0s
gen ai people--what are the positive uses? what are the positive uses that justify the massive investments, disruptions, and harms? I'm genuinely curious
With a voice overwhelmed by exhaustion from hunger, Anas Al-Sharif posts a heart-wrenching video about the famine in the northern Gaza Strip. @AnasAlSharif0
If you are building technology and too insecure to acknowledge the reality of limits, guardrails & criticism, then you are likely not actually interested in creating any real value. Beware of those hyper-sensitive to scrutiny - they are likely peddling sandcastles & vaporware.
🚨PAPER'S OUT! 🚨Very excited that today we’re releasing a new holistic framework for evaluating the safety of generative AI systems. Big evaluation gaps remain + we suggest steps to close these. Paper: https://t.co/fPBFUjCnXi, blog: https://t.co/cPA8oV4z94 (1/n)
arxiv.org
Generative AI systems produce a range of risks. To ensure the safety of generative AI systems, these risks must be evaluated. In this paper, we make two main contributions toward establishing such...