Jeffrey Ladish

@JeffLadish

Followers
11,695
Following
1,126
Media
232
Statuses
9,821

Applying the security mindset to everything

San Francisco, CA
Joined March 2013
Pinned Tweet
@JeffLadish
Jeffrey Ladish
1 year
I think the AI situation is pretty dire right now. And at the same time, I feel pretty motivated to pull together and go out there and fight for a good world / galaxy / universe. @So8res has a great post called "detach the grim-o-meter", where he recommends not feeling obligated…
24
60
594
@JeffLadish
Jeffrey Ladish
1 year
Well Meta's 65 billion parameter language model just got leaked to the public internet, that was fast. Get ready for loads of personalized spam and phishing attempts. Open sourcing these models was a terrible idea
146
613
5K
@JeffLadish
Jeffrey Ladish
2 months
I don't trust Sam Altman to lead an AGI project. I think he's a deeply untrustworthy individual, low in integrity and high in power seeking. It doesn't bring me joy to say this. I rather like Sam Altman. I like his writing, I like the way he communicates clearly, I like how he…
235
242
3K
@JeffLadish
Jeffrey Ladish
2 years
I wanted more optimistic art about the future and was struggling to get DALL-E to make good stuff. Then I found Robert McCall's work and used "a painting by Robert McCall" in the prompt and yes, this is what I'm looking for
Tweet media one
Tweet media two
Tweet media three
38
100
1K
@JeffLadish
Jeffrey Ladish
1 year
I admit I'm a bit afraid and I don't think that's a bad thing. It's not that GPT-4 is way more powerful than I expected. I loosely expected something similar. But seeing the cognitive jump, I take a step back and look at the trajectory and the compute overhang and I'm scared
49
87
942
@JeffLadish
Jeffrey Ladish
4 years
GPT-3 is poised to destroy the high school writing assignment. Why write a boring essay when you can feed the essay prompt plus a paragraph to GPT-3 and then go play Fortnite?
38
74
831
@JeffLadish
Jeffrey Ladish
1 year
For all their limitations, we shouldn't lose sight of the fact that LLMs are the most complex piece of technology ever created. We've invented a system that runs on matrix multiplication and has absorbed the most meaningful parts of human knowledge. It's so wild this exists
31
79
837
@JeffLadish
Jeffrey Ladish
1 year
"But how could AI systems actually kill people?" 1) they could pay people to kill people 2) they could convince people to kill people 3) they could buy robots and use those to kill people 4) they could convince people to buy the AI some robots and use those to kill people 5)…
187
98
782
@JeffLadish
Jeffrey Ladish
2 years
I'm looking for a life partner. I've recently realized I'm a total romantic and think that a great partnership might be literally the best thing about living. For those of you who have a life partner who you're happy with, do you have any useful info you'd like to share with me?
103
34
778
@JeffLadish
Jeffrey Ladish
2 years
Love this little chart here from Open Phil
Tweet media one
17
90
764
@JeffLadish
Jeffrey Ladish
1 year
I got my first personalized spam email a couple days ago, which seemed to take some writing from my website and use a language model to generate a response. Took me a minute to figure out what it was, very uncanny valley
7
51
722
@JeffLadish
Jeffrey Ladish
1 year
The reasons I think AI timelines are short are pretty simple: GPT-4 is doing something as complex as what the human brain is doing in terms of information processing. This suggests we have enough compute for AGI, we just don't yet know how to shape it into a fully general agent
62
68
680
@JeffLadish
Jeffrey Ladish
1 year
GPT-4 disrupts the entire set of educational infrastructure that depends on students answering questions or writing answers or essays at home. You can't test a student's knowledge by asking them to write something because they can just use a model to write it better
97
52
576
@JeffLadish
Jeffrey Ladish
1 year
ChatGPT is boring compared to what's coming
50
30
569
@JeffLadish
Jeffrey Ladish
1 year
This is so wild. Transformers running in your browser. Models for text to speech, translation, summary, image captioning... And you can change the parameter length and temperature and everything. Try it out!
5
84
569
@JeffLadish
Jeffrey Ladish
1 year
People who think @Aella_Girl 's research is worthless are confused about how knowledge works, specifically about how you should update your beliefs in the light of new evidence. Seeing a headline that says "new research suggests adults only sleep 4 hours on average each night"…
48
28
555
@JeffLadish
Jeffrey Ladish
1 year
Not going to lie, I feel pretty awful about the AI racing dynamics playing out right now. I think it's a really bad thing, and that AI companies should agree to slow down the growing rate of compute acquisition. I'm not saying they should stop doing research. On the other hand,…
49
71
546
@JeffLadish
Jeffrey Ladish
8 months
I haven't gotten over how wild it is that we have AI systems that can reason. GPT-4 is an insane piece of technology. People can argue all day about exactly how smart or impressive it is, but every day I ask chatGPT questions or get help with tasks, and my eyes don't deceive me
47
26
543
@JeffLadish
Jeffrey Ladish
4 years
The NYT is writing a piece on Scott Alexander, who writes under a pseudonym, but they're planning to doxx him in the article. @nytimes , @puiwingtam please don't.
2
58
517
@JeffLadish
Jeffrey Ladish
1 year
It's hard for my actual believed model of the world to catch up with each specific cognitive ability I see in GPT-4. I see many in Claude as well, which I've been using for longer, and the more I use these systems the more I'm like, wow, gradient descent can make AGI can't it
@zachary_horvitz
Zachary Horvitz
1 year
@SebastienBubeck @MSFTResearch @OpenAI @bing Wild results here. Example: GPT4 can visualize mazes as it navigates them.
Tweet media one
17
77
537
17
44
457
@JeffLadish
Jeffrey Ladish
3 months
How smart are different models? Ask them to explain xkcd comics. Check out the difference between Gemini and GPT-4 on this one:
Tweet media one
Tweet media two
17
12
455
@JeffLadish
Jeffrey Ladish
1 year
Anyone can fine-tune this model for anything they want now. Fine tune it on 4chan and get endless racist trash. Want a model that constantly tries to gaslight you in subtle ways? Should be achievable. Phishing, scams, and spam are my immediate concerns, but we'll see...
17
21
443
@JeffLadish
Jeffrey Ladish
7 months
People keep saying language models "aren't really intelligent" or "can't actually reason" and sure there is plenty to debate here but I can't ignore the fact that I'm more often asking GPT-4 for help rather than stack overflow or any number of very smart friends
51
27
438
@JeffLadish
Jeffrey Ladish
2 years
Remember when the first wave of Covid was about to hit the US, and several people, myself included, recommended buying some supplies to get ready? Well, we are currently facing an acute risk period with Russia and Ukraine - probably the highest it's been 🧵
28
37
433
@JeffLadish
Jeffrey Ladish
1 year
The way OpenAI uses alignment to refer to GPT-4's behavior is misleading. Getting a model to mostly produce content you want and not produce content you don't want is very different than aligning a strongly agentic system. The latter has goals and can take autonomous actions
@sama
Sam Altman
1 year
also, i am pretty proud of the degree of alignment for GPT-4 relative to previous models. we still have a long way to go, and we really need more powerful alignment techniques for more powerful models.
202
122
3K
35
28
429
@JeffLadish
Jeffrey Ladish
1 year
This is the first time since March 2020 where I feel like the whole world is focused on a single subject. From Elon Musk to random high schoolers, everyone is playing around with GPT-4 and seeing what it can do. It's kind of surreal
16
13
422
@JeffLadish
Jeffrey Ladish
2 years
I don't think we can have vaccine mandates forever. If covid is still around in 10 years are we going to prevent unvaccinated people from using offices, restaurants, gyms, hair stylists for 10 years? I'm not cool with that
104
35
382
@JeffLadish
Jeffrey Ladish
1 year
Maximally open source development of AGI is one of the worst possible paths we could take. It's like a nuclear weapon in every household, a bioweapon production facility in every high school lab, chemical weapons too cheap to meter, but somehow worse than all of these combined
101
44
384
@JeffLadish
Jeffrey Ladish
1 year
Eliezer Yudkowsky wrote an article in Time today calling for an indefinite moratorium on AGI development. While I don't agree with every detail, I think it's basically right, and if there were an open letter proposing what Eliezer is proposing here, I would sign it
47
19
358
@JeffLadish
Jeffrey Ladish
1 year
As a person who thinks our current chances of surviving AGI are low, I do not think "doomer" is a good term and I would prefer people not use it. Human extinction is not inevitable. The risks come from racing ahead to build powerful systems without knowing how to make them safe
70
41
360
@JeffLadish
Jeffrey Ladish
1 year
Wait snakes probably lost and then *re-evolved* *eyes*??
19
24
350
@JeffLadish
Jeffrey Ladish
2 months
@AdnanChaumette There was a lot of pressure to sign, so I don't think the full 95% would disagree with what I said. Also, unfortunately, equity is a huge motivator here. I know many great people at OpenAI and have a huge amount of respect for their safety work. But also, the incentives really…
15
4
349
@JeffLadish
Jeffrey Ladish
1 month
If you believe that a dog’s experience is valuable, and that it’s bad for dogs to suffer, then you should also believe that factory farming is insanely bad. It seems that most people believe the first two but not the third. I think humans are bad at moral reasoning at scale
67
25
342
@JeffLadish
Jeffrey Ladish
1 year
AI takeover is very likely 🧵 This is true even if AI alignment turns out to be relatively easy. I do not think it will be easy, but this would not change the conclusion. All you need to conclude AI takeover is that future AI systems will be very powerful and agentic...
38
44
335
@JeffLadish
Jeffrey Ladish
1 year
The thing about EA being good or bad or whatever is that the problems EAs are trying to solve exist regardless. You don't have to be an EA to solve them. I don't care who solves them. But I want them solved
8
38
334
@JeffLadish
Jeffrey Ladish
7 months
The ability to scale surveillance - for political control, for helicopter parents, for abusive partners... is an underappreciated risk of the AI systems we have right now. Audio to text conversion plus LLM data processing and search is extremely powerful
@Grimezsz
𝖦𝗋𝗂𝗆𝖾𝗌 ⏳
7 months
I feel like a lot of the people making this stuff have either never been in an abusive relationship or don't have kids and aren't thinking of the impact constant surveillance has on child development
316
150
3K
17
71
331
@JeffLadish
Jeffrey Ladish
1 year
I get why people want to push AI development as fast as possible. When my friend in his early forties died last year, I wrote "the work isn't done" on the top of my whiteboard. I don't intend to stand idly as death takes my friends and family
16
22
306
@JeffLadish
Jeffrey Ladish
1 year
The final paragraphs of Superintelligence (2014) by Nick Bostrom: "Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our play thing and the immaturity of our conduct.…
15
72
300
@JeffLadish
Jeffrey Ladish
4 months
When GPT-4 came out, some technical friends of mine speculated about whether, with 10 years of R&D on agent scaffolding with GPT-4, you could build AGI. Some thought you could, and I felt uncertain. After a year playing with the model, I feel more confident that this would not work
25
11
292
@JeffLadish
Jeffrey Ladish
1 year
"Let's offer $100 billion in prizes for interpretability. Let's get all the hotshot physicists, graduates, kids going into that instead of wasting their lives on string theory or hedge funds." "We are so far behind right now. The interpretability people are working on stuff…
18
30
290
@JeffLadish
Jeffrey Ladish
1 year
It's extremely unlikely that the easiest way to build a powerful AI product will also be the safest way to build a god
28
27
281
@JeffLadish
Jeffrey Ladish
1 year
I also get the hacker spirit that's like, screw you authorities that want to gatekeep us from cool tech, we're just going to torrent it. I'm broadly sympathetic! Just not with tech on the path to AGI
18
14
281
@JeffLadish
Jeffrey Ladish
1 year
I want a name for people with the goal of trying to survive this century / reduce x-risk this century. Not longtermism, not making strong claims about what comes after, just trying to make it through this critical time
55
15
275
@JeffLadish
Jeffrey Ladish
5 months
I'm pretty sad about the state of AI discourse right now. I see a lot of movement from object-level discussions of risk to meta-level social discourse on who is talking about risks and why, e.g. "the EAs are trying to do X" "the e/accs are trying to do Y". Overall this sucks...
21
20
265
@JeffLadish
Jeffrey Ladish
1 year
I want EAs to volunteer in soup kitchens more. And then go out and do their bullet biting work on AI alignment and such. (easy bullet to bite if you think what I do). Connecting with strangers having a hard time seems important for staying in touch with what's good
26
7
263
@JeffLadish
Jeffrey Ladish
1 year
I was not really interested in alternatives to Twitter until the ban of external competitor links. Now I'm interested.
9
15
261
@JeffLadish
Jeffrey Ladish
1 year
If you think there will be less than five years between human-level science and engineering AGI and superintelligence, I think it makes sense to think that human extinction is by far the most likely outcome. An additional extraordinary thing needs to happen for humans to survive
49
24
259
@JeffLadish
Jeffrey Ladish
1 year
This document leaked from Google has been gaining attention. Unfortunately it's wrong and right in major ways that should make us seriously reflect on what we're creating. Yes open source models are a huge deal but more open sourcing is NOT the solution
17
53
255
@JeffLadish
Jeffrey Ladish
2 years
A few thoughts on SBF, FTX, and Effective Altruism: 1) Until the current crisis, I think it was reasonable for EAs to look up to SBF and think FTX was hugely net good. SBF intentionally making a huge fortune to then donate to EA causes was ambitious and awesome...
17
8
253
@JeffLadish
Jeffrey Ladish
7 months
I'm extremely proud of my SERI MATS scholars this summer. We were able to demonstrate that for <$200, we can fine-tune Llama 2-Chat to reverse safety training. The lesson here is straightforward: if you release model weights, bad actors can undo safety fine-tuning 🧵
5
26
252
@JeffLadish
Jeffrey Ladish
1 year
I don't think GPT-4 poses a significant risk of takeover. I think by default GPT-5 probably poses only a small risk but I am not confident about that. Imagining GPT-6 starts to feel like a significant takeover risk. I can't predict how capabilities will scale but that's my guess
31
18
248
@JeffLadish
Jeffrey Ladish
2 years
Recently I've felt the desire to explain how an AGI-caused singularity is likely coming in a decade or two and why I think that. One issue is that I think this belief can be quite difficult to mentally handle, because it can make everything you do feel meaningless
38
11
241
@JeffLadish
Jeffrey Ladish
6 months
Seems deeply irresponsible to publish this kind of research. It's pretty simple: 1) There is growing recognition that advanced AI systems pose immense risks to society 2) Some people don't care and are going to yolo training and publicly releasing the powerful AI systems 3)…
@Ar_Douillard
Arthur Douillard
6 months
🚨 We released our work on data parallelism for language models *distributed* across the entire world! 🧵Thread below 👇
17
68
379
71
15
241
@JeffLadish
Jeffrey Ladish
1 year
The simple fact that inference costs *so much less* than training scares me. Human minds aren't like this. Minds don't have to be like this, and I suspect GPU-based minds don't have to be like this either. If true this means way more efficient learning algorithms are out there
12
5
232
@JeffLadish
Jeffrey Ladish
6 months
Nvidia just became the most irresponsible player in the AI space, clearly choosing to prioritize profits over safety and security. This is a clear defection against US government attempts to prevent the proliferation of state-of-the-art GPUs
@dnystedt
Dan Nystedt
6 months
Nvidia shares rose as much as 3.6% before closing +0.8% on a day the Nasdaq fell 0.9%, on news it developed 3 new chips for China that comply with US export controls, yet can be used in AI systems, media report. The FT cited leaked documents given to China buyers showing 3 new…
6
22
116
36
15
238
@JeffLadish
Jeffrey Ladish
1 year
It's very important to remember that no one knows what a large language model can do immediately after one is trained. These models have tons of latent capabilities that people discover over time. GPT-4 has been trained but OpenAI doesn't know what all it can do
14
19
237
@JeffLadish
Jeffrey Ladish
1 year
"we're going to build superintelligent AI systems and this is likely to go well for humanity" is a bolder claim than "we're going to build superintelligent AI systems and this is likely to go poorly for humanity"
31
17
232
@JeffLadish
Jeffrey Ladish
1 year
I mean I can steelman an argument for open sourcing models. We need better interpretability tools to understand how these systems work, so releases like this allow more people to be able to do that work. Imo this doesn't hold up, interp work isn't bottlenecked on large models
5
6
234
@JeffLadish
Jeffrey Ladish
1 year
I'd really like @sama to lay out why he has such a *drastically* different view of AGI risk from Eliezer, especially given he acknowledges Eliezer as an important influence and tweeted the "AGI Ruin: A List of Lethalities" post saying it was "important and worth reading"
@sama
Sam Altman
1 year
eliezer has IMO done more to accelerate AGI than anyone else. certainly he got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc.
133
113
2K
12
12
230
@JeffLadish
Jeffrey Ladish
3 years
why are bioethicists bad so often?
21
7
215
@JeffLadish
Jeffrey Ladish
1 year
The economics of language models are wild. From @janleike 's blog: "For example, gpt-3.5-turbo is about 200x cheaper than the fastest-typing humans paid at US minimum wage, so at the cost of $1 that model could simulate a deliberation that takes dozens of hours."
5
19
230
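The ~200x figure in the quote above can be sanity-checked with back-of-envelope arithmetic. The following is a minimal sketch, not the blog's actual calculation; all four constants are assumptions (circa-2023 figures: launch-era gpt-3.5-turbo pricing, the US federal minimum wage, near-record typing speed, and a common words-per-token rule of thumb):

```python
# Back-of-envelope check of the "~200x cheaper" claim.
# All constants are assumptions, not values taken from the quoted post.

PRICE_PER_1K_TOKENS = 0.002  # USD, assumed gpt-3.5-turbo launch pricing
WORDS_PER_TOKEN = 0.75       # rough rule of thumb for English text
TYPING_WPM = 200             # near world-record sustained typing speed
US_MIN_WAGE = 7.25           # USD per hour, US federal minimum wage

# Time for a very fast typist to produce 1,000 tokens (~750 words),
# and what that time costs at minimum wage.
minutes_to_type = (1000 * WORDS_PER_TOKEN) / TYPING_WPM
human_cost = US_MIN_WAGE * (minutes_to_type / 60)

ratio = human_cost / PRICE_PER_1K_TOKENS
print(f"human: ${human_cost:.3f} per 1K tokens -> ~{ratio:.0f}x the model")
```

Under these assumptions the ratio comes out around 225x, the same ballpark as the quoted ~200x; the exact figure moves with whichever pricing and typing speed you plug in.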
@JeffLadish
Jeffrey Ladish
2 years
People have a hard time imagining how the world might actually end from AI, even people who know a lot about how current AI systems work. GPT-3... something something... paperclips? Here is @gwern telling the most plausible specific story I've ever heard.
12
26
224
@JeffLadish
Jeffrey Ladish
2 years
I don't think people realize what a powerful cultural force EA is becoming / has become. It's still small in terms of absolute numbers (both $$ and people), which makes people underestimate it, but it is not small in terms of ambition or sphere of influence. Something is alive...
11
10
224
@JeffLadish
Jeffrey Ladish
4 years
One of the biggest lessons I've learned in the last 5 years is the fundamental difficulty of knowing things. A question like "will minimum wage help people" can seem simple at first but turns out to be very thorny. And this is the norm not the exception.
7
21
221
@JeffLadish
Jeffrey Ladish
8 days
A ban on lab-grown meat is pretty evil. There are many people in Florida and the rest of the United States who think factory farming is super unethical. Florida just banned one of the most promising alternatives. No one was forcing anyone to eat lab-grown meat...
@GeneralMCNews
The General
12 days
BREAKING: Governor Ron DeSantis signs legislation to ban lab-grown meat in Florida.
Tweet media one
2K
6K
51K
11
4
221
@JeffLadish
Jeffrey Ladish
1 year
Plus I think disruption from targeted spam and phishing seems potentially really large. But my main reason is that AI speedup is dangerous because future systems will be *actually physically dangerous*
1
7
218
@JeffLadish
Jeffrey Ladish
1 year
So yeah I am straight up afraid we're not that far from creating machines as powerful as gods. We already can make millions of machines that have superhuman levels of knowledge and tons of reasoning power. That's what I'm seeing and I think fear makes a lot of sense as a response
8
20
212
@JeffLadish
Jeffrey Ladish
1 year
I think most people want kids, but also want a bunch of other things. As it's gotten more difficult to have kids relative to having other things, people choose to have fewer kids. This suggests something interesting about human shard structure. If something we deeply care about…
16
12
215
@JeffLadish
Jeffrey Ladish
4 years
Alright @vgr , 1 like = 1 opinion on nuclear war
7
22
206
@JeffLadish
Jeffrey Ladish
1 year
Great paper by @sleepinyourhat , "Eight things to know about large language models" I'm going to break down each point into a tweet for those who want the high level summary, since Sam was too busy doing actual alignment research to make the thread 😉🧵
@sleepinyourhat
Sam Bowman
1 year
I’m sharing a draft of a slightly-opinionated survey paper I’ve been working on for the last couple of months. It's meant for a broad audience—not just LLM researchers. (🧵)
Tweet media one
22
272
1K
4
34
211
@JeffLadish
Jeffrey Ladish
1 year
I asked ChatGPT-4 to make me a poem based on this:
In the shadows of the digital age,
A whispering fear begins to take the stage.
GPT-4, a cognitive leap,
As we sow, so shall we reap.
This power, vast and unconfined,
A force that makes me step behind,
To ponder the trajectory,…
14
33
208
@JeffLadish
Jeffrey Ladish
4 months
Mark Zuckerberg just announced that Meta is going full speed ahead developing open source AGI. This is insane. You can reasonably debate how far you can push open weight models before the risks outweigh the benefits. I don't know where that line is. But AGI is clearly too far
30
22
208
@JeffLadish
Jeffrey Ladish
1 year
We definitely need steganography evals asap. Steganography is the practice of concealing information within another message or physical object to avoid detection. We need to know if models can covertly communicate with themselves / other models
31
10
208
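To make concrete what a covert channel of the kind the tweet above worries about could look like, here is a deliberately toy acrostic scheme: the hidden message rides in the first character of each line of otherwise ordinary text. Everything here (the function names, the `filler` sentences) is invented for illustration, not taken from any actual eval:

```python
# Toy text steganography: hide a message in the first character of each
# line of an innocuous-looking cover text (an acrostic). Illustrative only.

def encode(secret: str, filler: dict[str, str]) -> str:
    """Build a cover text whose lines start with the secret's characters.
    `filler` maps each character to an innocuous sentence starting with it."""
    return "\n".join(filler[ch] for ch in secret)

def decode(cover: str) -> str:
    """Recover the hidden message by reading each line's first character."""
    return "".join(line[0] for line in cover.splitlines() if line)

filler = {
    "h": "how was the weekend?",
    "i": "i went hiking near the coast.",
}
cover = encode("hi", filler)
print(cover)
print(decode(cover))  # -> hi
```

A steganography eval in this spirit would test whether two model instances can agree on a scheme like this (or a far subtler one) and pass messages that a monitor reading the cover text would not flag.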
@JeffLadish
Jeffrey Ladish
1 year
"The reason we did not get nuked is because not every sociopath had access to nukes. If you have a home nuke to every person on this planet, we would have gotten nuked. The fact that we didn't get nuked is because of regulation. It is because things were centralized. That is…
@liron
L/e/ron Sh/acc/ira
1 year
Please watch this unedited clip from today's AI safety debate between @NPCollapse and @JosephJacks_ :
82
46
336
32
13
200
@JeffLadish
Jeffrey Ladish
2 years
One thing that scares me about AGI is that I expect "apparent alignment" to be much much easier than "actual alignment". People probably aren't going to deploy an obviously unaligned powerful AGI. But one that passes all the tests? Someone is likely to deploy that one
14
15
201
@JeffLadish
Jeffrey Ladish
1 year
It looks like top AI labs agree we need more regulation on AI. According to polls the majority of Americans agree we need more regulation on AI. So let's get more regulation on AI to incentivize safer AI development and give alignment researchers more time!
@sama
Sam Altman
1 year
@evijitghosh we definitely need more regulation on ai
179
111
2K
55
20
197
@JeffLadish
Jeffrey Ladish
1 year
I spent a couple years studying the risks from nuclear war. And the danger from nuclear weapons is quite real, likely more significant than the large risks from climate change. And also, Eliezer is right. AGI is a whole different class of risk and we are so unprepared
@ESYudkowsky
Eliezer Yudkowsky ⏹️
1 year
10 obvious reasons that the danger from AGI is way more serious than nuclear weapons: 1) Nuclear weapons are not smarter than humanity. 2) Nuclear weapons are not self-replicating. 3) Nuclear weapons are not self-improving. 4) Scientists understand how nuclear weapons…
381
561
3K
6
17
195
@JeffLadish
Jeffrey Ladish
1 year
Those calling for a moratorium on large training runs are not luddites. Many of us believe that AI development is very quickly headed for superintelligence. We absolutely want aligned superintelligent systems. But we only get one try at aligning them
16
17
192
@JeffLadish
Jeffrey Ladish
1 year
OpenAI just wrote up their plans for how they would like to develop superintelligent AI, and why they think we can't stop development right now. I'd summarize their approach as "let's proceed to superintelligence with global oversight"
12
28
189
@JeffLadish
Jeffrey Ladish
10 months
Perhaps the most important takeaway from Oppenheimer: he believed he could shape US nuclear policy using his fame as the father of the atomic bomb, but even before he lost his security clearance, he failed to stop the creation of the H-bomb
6
13
189
@JeffLadish
Jeffrey Ladish
1 year
All I want for Christmas is a decade or so more time to work on AI alignment
11
9
188
@JeffLadish
Jeffrey Ladish
1 year
@Aella_Girl I'm less offended by people misunderstanding how to use imperfect evidence and more offended by people thinking you need to be credentialed to do good research Credentialed scientist is a category we made up. Yes it's some evidence that you will do good research, but not much
5
7
182
@JeffLadish
Jeffrey Ladish
6 months
Here's an example where open sourcing AI is great! 1) Very useful for society, not useful as a weapon. Clearly benefits outweigh risks ⚖️ 2) Shared with an Apache license, so it's actually open source 📖 3) Badass. Literally predicts the weather better than ever before 🌪️
@PeterWBattaglia
Peter Battaglia
6 months
We’ve released GraphCast's code and weights, so everyone can run it:
10
93
691
6
29
182
@JeffLadish
Jeffrey Ladish
1 year
If I wanted to make it maximally easy for an AI system to seize power as soon as this was possible, I'd give that AI system direct access to every person on the planet via their phone. Also I think this is likely to happen, or rather is in the midst of happening, by accident
10
14
182
@JeffLadish
Jeffrey Ladish
1 year
It's interesting how so many of the problems people thought were "AGI complete" are not in fact AGI complete. Many of them have been solved by language models, which are impressively general but still far from general enough to do everything humans can do
17
8
182
@JeffLadish
Jeffrey Ladish
1 year
Making agents, especially making agents that can self-improve, using powerful cognitive engines like GPT-4 is a really bad idea and OpenAI should not permit this
@SigGravitas
Toran Bruce Richards
1 year
Massive Update for Auto-GPT: Code Execution! 🤖💻 Auto-GPT is now able to write its own code using #gpt4 and execute python scripts! This allows it to recursively debug, develop and self-improve... 🤯 👇
261
2K
10K
33
26
178
@JeffLadish
Jeffrey Ladish
1 year
I'm worried about a cognitive-capability-to-agency overhang, where we have powerful systems that have little ability to carry out complex plans involving numerous subgoals, but then at some point those powerful non-agentic systems develop complex planning and execution abilities
13
19
176
@JeffLadish
Jeffrey Ladish
1 year
Perfect for voice phishing. You call someone sounding exactly like their boss, asking them to wire $10k asap otherwise you'll lose a major customer. How many do it without question when the ask is from someone with their boss's voice? This should be interesting
@nearcyan
near
1 year
AI can now transform voices in real-time with latency as low as 60ms on cpu! website:
76
290
2K
12
24
177
@JeffLadish
Jeffrey Ladish
1 month
I'd be less concerned about existential risk from AI if we had the capacity to stop AI development and deployment in the case of an emergency. Not only do we not have this capability, but we have even less ability to do this over time, as more companies in different countries buy…
20
22
175
@JeffLadish
Jeffrey Ladish
11 months
It pains me to say it, because I have a lot of respect for much of what @AnthropicAI has done and I've appreciated collaborating with them on safety and security (spent over a year working on security there), but I agree with @Simeon_Cps here. If governments can't directly…
@Simeon_Cps
Siméon
11 months
I have a lot of respect for the work of Anthropic on interpretability, in evaluations/red-teaming and for not having released Claude for 9 months. But for the sake of transparency, I want to publicize that when I see: 1) @sama burning his social & political capital on ideas…
14
23
233
30
17
169
@JeffLadish
Jeffrey Ladish
1 year
@Aella_Girl A scientist or researcher isn't someone with a master's degree or a PhD. It's someone who is trying to shake truth out of the universe. Some people do this more or less well. The universe doesn't care what your degree is, you can just be more or less wrong
9
13
169
@JeffLadish
Jeffrey Ladish
1 year
Tweet media one
4
12
169
@JeffLadish
Jeffrey Ladish
2 years
There comes a point of closeness with a person where if they don't own a headlamp, I will buy them a headlamp. What if there's a disaster and they need to navigate in the dark? Holding a phone isn't going to cut it. Not gonna happen to my people
23
4
160
@JeffLadish
Jeffrey Ladish
2 years
There is a group of people really mad at the AGI / AI alignment crowd for taking up all the resources and attention. They treat the conflict like it's about privilege but it's really not. It's about beliefs about the world. If we're making a mistake it's epistemic not ideological
4
9
162
@JeffLadish
Jeffrey Ladish
1 year
I asked Claude for a book recommendation based on other sci-fi books I read and it recommended "Ancillary Justice" and so I looked it up on the internet and it had won lots of awards so I started reading it and it sucked and it just goes to show you can't trust AI or anyone else
25
4
164
@JeffLadish
Jeffrey Ladish
1 year
If we completely stopped AI scaling right now we would not run out of things to do with existing models for a long time. There's currently a big model-application overhang
6
12
165
@JeffLadish
Jeffrey Ladish
2 months
The government needs the very best people to advise on AI risk. Paul Christiano is one of the very best. It's crazy to me how many people with huge disagreements all recognize and respect Paul's work
@jachiam0
Joshua Achiam ⚗️
2 months
The people opposing Paul Christiano are thoughtless and reckless. Paul would be an invaluable asset to government oversight and technical capacity on AI. He's in a league of his own on talent and dedication.
11
16
281
6
5
165
@JeffLadish
Jeffrey Ladish
1 year
Hugging Chat, a ChatGPT-clone based on a LLaMA-based model, was just launched. I've been using it and while it's a little rough around the edges, it feels similar to ChatGPT in terms of capabilities. Only 5 months passed between the launch of ChatGPT and HuggingChat
6
11
162
@JeffLadish
Jeffrey Ladish
1 year
me: can you write me alien invaders that I can play in my browser
GPT-4: sure here's some code to get you started
me: 🤯
me: can you please write a poem that doesn't rhyme?
GPT-4: sure I can, that's easy, not even a crime!
me: 😠
7
3
165