
Aidan Clark
@_aidan_clark_
10K Followers · 1K Following · 68 Media · 2K Statuses
Qualitative Mathematics @OpenAI Ex: @DeepMind, @BerkeleyDAGRS These opinions and words are mine alone
Joined November 2020
It’s actually quite nuanced why it seems so hard to build a system that encourages what US capitalism encourages without creating the same level of inequity. Not getting why those left behind by our current system might be mad that someone has amassed so much wealth is similarly dumb AF.
This is the ultimate litmus test for retardation. If you think “Wow look at all the wealth he stole. This is bad” You are retarded. If you think “Wow, he created something so valuable that millions were willing to pay for it, and entire industries were revolutionized in a way.
RT @nervouscomputer: it's hard to appreciate just how deep the talent pool at openai is until each wave of departures.
This is a good thread. I think if you’re the kind of person who would be cool with every American being mailed a machine gun, no questions asked, then it’s probably reasonable to say that doing no AI safety work is A-OK.
I didn't want to post on Grok safety since I work at a competitor, but it's not about competition. I appreciate the scientists and engineers at @xai but the way safety was handled is completely irresponsible. Thread below.
Hi, we’re delaying the open weights model. Capability-wise, we think the model is phenomenal — but our bar for an open source model is high and we think we need some more time to make sure we’re releasing a model we’re proud of along every axis. This one can’t be deprecated!
we planned to launch our open-weight model next week. we are delaying it; we need time to run additional safety tests and review high-risk areas. we are not yet sure how long it will take us. while we trust the community will build great things with this model, once weights are.
This aligns with my own experience: I don't feel an AI tool would help accelerate the core of my workload, but there exists a distribution of tasks for which it's a massive accelerator (almost all of which look like "do something very different than what you normally do").
We ran a randomized controlled trial to see how much AI coding tools speed up experienced open-source developers. The results surprised us: Developers thought they were 20% faster with AI tools, but they were actually 19% slower when they had access to AI than when they didn't.
I agree with the conclusion of this thread, and with some of the arguments, but disagree that the argument is the cause of the conclusion. AI 2027 is wrong, as many like-minded manifestos are, because the authors don’t understand how people work.
I'll go ahead and put a flag in the ground here. This AI 2027 thing is wrong, the intuitions behind it are bad, and in 2027 these guys will publish a post-mortem that mostly amounts to "we were right, just got the dates wrong".