Tara Steele

@tarasteele22

Followers 207 · Following 260 · Media 23 · Statuses 395

Writer | Law Grad 1st (2019) | Intelligence Career Background | AI Safety & Governance

United Kingdom
Joined June 2011
Pinned Tweet
@tarasteele22
Tara Steele
4 months
Imagine this: A multi-million dollar pharmaceutical company aims to create a drug to solve everyone's health issues, ensuring happiness, optimal weight, and freedom from ailments. When it is developed, everyone will be forced to take it, including babies and children. There is no
11
35
113
@tarasteele22
Tara Steele
4 months
This doesn't look promising
Tweet media one
10
18
156
@tarasteele22
Tara Steele
5 months
Can anyone direct me to an actually good argument as to why we should continue to develop frontier AI without first solving alignment? Like, better than: because it’ll be really good, don’t worry; it’s too hard to stop; because all regulation is rubbish; doomers be doomering etc?
61
2
68
@tarasteele22
Tara Steele
4 months
@ylecun You should be ashamed of yourself for posting this - why on Earth would you want to do this to people, simply because they have a different opinion to you? If you have any integrity at all you should delete this and apologise to anyone involved
6
0
49
@tarasteele22
Tara Steele
5 months
Max Tegmark in this clip is where I’m at with AI risk! : Happily chatting about it enthusiastically, trying to help make the future better… then think about it within the context of my kids and cry immediately
2
2
32
@tarasteele22
Tara Steele
5 months
Fun fact for non-AI risk friends! The head of AI safety at the US AI Safety Inst. estimates a 46% chance humanity ends up with an irreversibly messed-up future within 10 yrs of building powerful AI & a 20% chance most humans die. The odds of a bullet in Russian roulette are 17%. Should we build it?
10
4
32
@tarasteele22
Tara Steele
4 months
@RuxandraTeslo @ylecun @beenwrekt I'd much rather listen to arguments like Yann's too, because it would feel so much nicer and I could just ignore the whole thing. Unfortunately they don't make any logical sense though, so I can't really allow myself to do that. Maybe one day I'll cave in, it's really tempting
1
0
24
@tarasteele22
Tara Steele
4 months
Urgent meaningful AI regulation is crucial to safeguard National Security and Critical Infrastructure. Why risk leaving these vital interests unprotected??
Tweet media one
2
5
17
@tarasteele22
Tara Steele
4 months
@ylecun @BotTachikoma "Instead of mobilizing to find solutions...pessimists just get depressed and give up, awaiting their fate." The people you've chosen to attempt to shame in your post are part of a group that is working hard to find positive solutions (PauseAI).
1
0
16
@tarasteele22
Tara Steele
4 months
@tegmark @drfeifei @sama @ESYudkowsky @ylecun @elonmusk @geoffreyhinton @pmarca @AndrewYNg @MelMitchell1 For me personally, the overarching argument that advanced AI presents an existential threat to humanity is logically flawless. The potential for AI to surpass human intelligence and capabilities, along with the unpredictable consequences of such a development, makes this a really
1
1
15
@tarasteele22
Tara Steele
4 months
Erm.... care to elaborate @sama ?
Tweet media one
1
2
14
@tarasteele22
Tara Steele
4 months
@bitcloud @ylecun I agree that joining apocalyptic cults is bad, and I agree that learning more and listening to people who understand the field is good. This long list of AI scientists is a good starting point in my opinion:
Tweet media one
2
0
13
@tarasteele22
Tara Steele
5 months
@sama Sam the Man, a poem: I met a man whose name was Sam, He said he had a cunning plan, To rid the world of pain and strife, Unless accidentally ending all life. He didn’t seem to have a plan B. The end. (PS yes I have had wine)
2
1
11
@tarasteele22
Tara Steele
4 months
@the_yanco It's just insane that this is being allowed to happen. If we get a chance to look back at this in the history books, people will be astounded by how slow humanity was to react to the situation.
2
0
12
@tarasteele22
Tara Steele
4 months
@leecronin My personal experience of comments from people with e/acc in their profile is that they like to tell people writing about AI risks to f-off. A lot. Not a lot of constructive discussion going on, just lots of f-offing. Just my personal experience/observation.
4
0
10
@tarasteele22
Tara Steele
4 months
@ylecun @thegartsy @skdh @the_yanco @elonmusk I'll ask again - who are these cult gurus you're talking about?
0
0
8
@tarasteele22
Tara Steele
4 months
@_florianmai I don't think it's so much about a Hollywood star being angry as it is about the level of deceit involved. It also seems quite relevant to copyright issues, given their approach to misappropriation. This speaks to their integrity, which is crucial when considering trust in their
1
0
10
@tarasteele22
Tara Steele
5 months
@XRobservatory I worry Nick Bostrom’s concerns are based on perception of what’s going on inside a bubble. My experience is almost literally no-one I know is aware of AI risks. And if I bring the subject up they’re not particularly interested. We need more public awareness
0
0
9
@tarasteele22
Tara Steele
5 months
@PaperWizardAI I understand the huge challenges - like, really huge! My point is there are people saying the very idea of a pause is dumb BECAUSE it’s difficult. That’s not a good excuse to say let’s go for it and hope for the best. A pause is going to be really difficult, but it’s possible
2
0
8
@tarasteele22
Tara Steele
4 months
@earthcurated Artificial General Intelligence (AGI) without a doubt. All the more so because so few are aware of the issue. We all need to step up and ensure safety standards are implemented to mitigate its potential to cause catastrophic harm.
1
0
8
@tarasteele22
Tara Steele
4 months
@daganshani1 The film is a great intro for anyone new to the huge issues we face with AI risks - highly recommended
2
0
6
@tarasteele22
Tara Steele
5 months
@0fficialDinesh Yes! If most non-tech people knew the likely trajectory if things continue as they are, there would be very little debate about whether or not to pause - we’d agree, and we’d do it
0
0
7
@tarasteele22
Tara Steele
5 months
A challenging but essential aspect of AI safety is military use - yet another way in which lack of effective guardrails and regulation could lead to disastrous consequences
1
2
7
@tarasteele22
Tara Steele
2 months
Two summer read recommendations for anyone interested in AI safety - There Is No Antimemetics Division by @qntm & Taming The Machine by @NellWatson . Former is fiction and not about AI, but anyone concerned with AI x-risk will likely find it thought provoking (& it’s just v good!)
Tweet media one
0
1
8
@tarasteele22
Tara Steele
5 months
Inspired by a rather fun DM exchange, I’m starting a daily AI poem drop. Here’s my first masterpiece: I don’t understand Yann, He’s such a clever man, Yet sometimes he says things that are absolute utter nonsense. The end
1
0
6
@tarasteele22
Tara Steele
4 months
@ToonamiAfter In what way do you think the stakes are not that high? The post isn't about current narrow AI. Companies like OpenAI are open about the fact that they are planning for AGI, regardless of how close you feel they may or may not be to it.
1
0
6
@tarasteele22
Tara Steele
5 months
@AlexAlarga Do you think so, literally? I thought that was a really fringe thing. That is so so so weird if so. Ok, if so then no point in me engaging with that 'logic', will focus completely on raising awareness instead
2
0
6
@tarasteele22
Tara Steele
4 months
@AISafetyMemes I’m lost for words with the inaccuracies in Yann’s list. I just don’t understand how someone so intelligent and experienced can say so many things, over and over again, that are counter to basic common sense
0
0
4
@tarasteele22
Tara Steele
4 months
@JMannhart It’s good that he’s clarified somewhat, but it’s shooting someone down at a personal level with aggression that I find shocking from a professional person. Would we think this was ok from a CEO level person in any other industry? Also, surely one should fully expect a response
1
0
6
@tarasteele22
Tara Steele
4 months
@bindureddy What a very odd thing to say
1
0
6
@tarasteele22
Tara Steele
4 months
@ButlerianIdeal Thank you 🙂 It’s bizarre that this issue is framed so differently when it’s tech - then regulations suddenly become incredibly controversial for some reason
2
0
5
@tarasteele22
Tara Steele
5 months
@robbseaton It’s possible because a pause is the result of enforcing safety standards. We adopt standards whereby certain safety requirements must be fulfilled before training models more powerful than GPT4. We need global agreement for this, we’ve already seen how we can do that with nukes
1
0
5
@tarasteele22
Tara Steele
5 months
@rohanvisme I think that's a good question and the fact that we (collectively) can't agree on a good answer is a big problem in itself
1
0
5
@tarasteele22
Tara Steele
4 months
@WallStreetSilv AGI before it’s provably safe, without a doubt. Also the greatest threat to the rest of the world.
2
0
5
@tarasteele22
Tara Steele
4 months
@ylecun Who are the gurus you’re referring to?
0
0
5
@tarasteele22
Tara Steele
4 months
@daganshani1 I wonder what those strongly in favour of e/acc would actually do in such a situation. Analogies like these make it really clear just how insane our current situation is.
1
0
5
@tarasteele22
Tara Steele
4 months
@Kat__Woods I more or less got accused of being a hypocrite recently because I used ChatGPT for something!
0
0
5
@tarasteele22
Tara Steele
5 months
@MetFreeman @Kat__Woods I get some of where you’re coming from, however you’ve said ‘make AND CONTROL’ an AGI - we literally don’t know how to do that, hence no one wins, no human is in charge anymore - the AI is
2
0
5
@tarasteele22
Tara Steele
4 months
@tegmark It would be considered ridiculous to allow self-regulation of drugs manufacturers, yet for some reason AI risks are seen in an entirely different light. Why on earth is this?!!
0
0
5
@tarasteele22
Tara Steele
5 months
I’d love to know if x-risk experts are more concerned about AI or nuclear risks in the near-term? Clearly the question isn’t straightforward, I’m just curious to know if there’s broad consensus on weighting, for eg by orgs/exp like @fli_org @tegmark @CSERCambridge @XRobservatory
1
0
4
@tarasteele22
Tara Steele
5 months
@AlexAlarga @daganshani1 Interesting point! ‘What if China gets to utopia first, can’t have that!!’ 😂
0
0
4
@tarasteele22
Tara Steele
5 months
Is it just me who thinks there are thousands of hidden messages in The Good Place, or is this a well-known ‘thing’? @nbcthegoodplace @TedDanson @PMinimizer #aialignment
Tweet media one
0
0
4
@tarasteele22
Tara Steele
4 months
@707KAT 'OpenAI should not be left to their own devices, they come with prices and vices, we end up in crisis' (Anti-Hero)
0
1
4
@tarasteele22
Tara Steele
4 months
Mystifying trolling comment from today: “You’re a fake woman influencer.” Does this mean I’m a fake woman who is an influencer, a woman influencer who is fake, or that I influence fake women? So confused. (This was in response to me posting a link to an article, by the way)
3
0
4
@tarasteele22
Tara Steele
5 months
@rumtin Well that’s a new take on the seriousness of the situation!! Lost for words 🤦🏼‍♀️
0
0
4
@tarasteele22
Tara Steele
4 months
@ButlerianIdeal @Kat__Woods If a pause were implemented until AI could be proven safe, it would mean an indefinite pause if alignment is impossible, effectively preventing risks. Regardless of whether it's achievable, why wouldn't that be a good solution to the dilemma we face?
1
0
1
@tarasteele22
Tara Steele
5 months
@littIeramblings Wow! Like he says though - common sense
0
0
4
@tarasteele22
Tara Steele
5 months
@the_yanco Also sometimes! It’s a weird one…I have a mental battle to maintain my logical understanding of the fact that x-risk is real, versus the temptation to willingly ignore that and go with the more comfortable thought of ‘it’ll all be fine, stop worrying about it, bring on the plot armour!’
1
0
3
@tarasteele22
Tara Steele
5 months
@EamonnDwyer How about intelligence though, as opposed to consciousness?
0
0
4
@tarasteele22
Tara Steele
4 months
@NathanpmYoung It's an extremely big deal because it speaks to their integrity, which is crucial when considering trust in their commitment to AI safety, particularly when the CEO has repeatedly said that the type of advanced AI they're trying to develop could kill everyone.
1
0
4
@tarasteele22
Tara Steele
4 months
@ddmatta For me, this partly explains why a lot of people dismiss the potential for catastrophic risks from advanced AI without considering the logic or experts' views
1
0
4
@tarasteele22
Tara Steele
4 months
@sama Things must be serious when @sama uses capital letters and punctuation.
1
0
3
@tarasteele22
Tara Steele
5 months
@LinchZhang @SereneDesiree This is hilarious! ❤️
0
0
3
@tarasteele22
Tara Steele
4 months
@wolflovesmelon @aisafetyfirst The more time I spend thinking about the most catastrophic AI risks, the more I think that a lot of the denial of them is rooted in psychological defence mechanisms, because the concept is terrifying. I don’t know why some people stay locked in that and others don’t.
2
0
3
@tarasteele22
Tara Steele
5 months
@JimDMiller @ilex_ulmus I had my own mini existential crisis recently when I considered voting for a party I’m generally very opposed to (in the UK) based on their AI stance! Still undecided, but my vote will be based on AI policy
0
0
3
@tarasteele22
Tara Steele
5 months
@stale2000 What kind of real world harms are you thinking of?
1
0
3
@tarasteele22
Tara Steele
4 months
Is warning the same as 'fear-mongering'? This is a frequent knee-jerk reaction from those opposed to making AI safe; it's inaccurate and has no basis in common sense.
Tweet media one
@CyberScribe_AI
Cyber Scribe (e/acc)
4 months
@tarasteele22 No it doesn’t. Stop with the fear mongering. There is no evidence of this.
1
0
0
1
0
3