
Jeffrey Ladish
@JeffLadish
Followers 14K · Following 25K · Media 311 · Statuses 12K
Applying the security mindset to everything @PalisadeAI
San Francisco, CA
Joined March 2013
I think the AI situation is pretty dire right now. And at the same time, I feel pretty motivated to pull together and go out there and fight for a good world / galaxy / universe. @So8res has a great post called "detach the grim-o-meter", where he recommends not feeling obligated.
32
61
623
I agree with this take. I don’t think it will be sufficient, but 1) these models are being deployed to a billion+ people, so the direct impact is huge, and 2) we will learn stuff in the process of trying to train them to be good people.
"Just train the AI models to be good people" might not be sufficient when it comes to more powerful models, but it sure is a dumb step to skip.
4
1
47
If you use a password manager, keep your system and browser up to date, and haven't run any malware or malicious plugins, you probably don't need to change your passwords. This isn't a breach of any of these companies; it's a leak from scammers who stole passwords via malware.
BREAKING: 16 billion Apple, $AAPL, Facebook, $META, Google, $GOOGL, and other passwords leaked, per Forbes.
4
1
64
A lot more people are starting to understand that superintelligence is on the horizon and that it poses a serious risk of human extinction. This gives me hope that coordination is possible!
My favorite reaction I’ve gotten when sharing some of the blurbs we’ve recently received for Eliezer and Nate’s forthcoming book: If Anyone Builds It, Everyone Dies. From someone who works on AI policy in DC:
6
3
73
RT @thlarsen: Lots of people in AI, and especially AI policy, seem to think that aligning superintelligence is the most important issue of…
0
63
0
RT @hitRECordJoe: Debates over AI would be more productive if we could stop over-simplifying. AI is not all bad, and it’s not all good. Jus…
0
56
0
RT @Grimezsz: Long story short I recommend the new book by Nate and Eliezer. I feel like the main thing I ever get cancelled / in trouble…
0
88
0
Great post by Lawrence Chan on the Illusion of Thinking paper!
@JeffLadish @GaryMarcus It wasn't out tomorrow, but it's out now!
1
0
7
This is why @METR_Evals' results are so interesting. They do head-to-head comparisons of models vs. experts. This is also what we do with our hacking competitions:
1
0
9
There's a relatively easy solution to all these problems: give the same tests to human experts! Give the models and the experts access to the same tools. If a human can't do it without a calculator or Python, why is it interesting that a model can't either?
This paper doesn't show fundamental limitations of LLMs:
- The "higher complexity" problems require more reasoning than fits in the context length (humans would also take too long).
- Humans would also make errors in the cases where the problem is doable in the context length.
- …
6
4
88