I’m scared of AGI. It's confusing how people can be so dismissive of the risks.
I’m an investor in two AGI companies and friends with dozens of researchers working at DeepMind, OpenAI, Anthropic, and Google Brain. Almost all of them are worried.
🧵
Yearly building permits:
SF Bay: 5k
NYC: 20k
LA: 24k
Tokyo: 142k 🔥
Tokyo housing prices have *decreased* since 2006.
The solution? Strip local control. “If you want to build a mock-Gothic castle faced in pink seashells, that is your business.”
1/ Sorry New York Times, I decline to be interviewed.
The harassment of @bariweiss and Scott Alexander are only two of the more recent reasons I no longer trust your paper. Helping elect Donald Trump is another. I encourage others in tech to #BoycottTheNYT
Recreating 3D scenes using the reflections in your eyes:
A single photo is enough, but it becomes much more accurate with multiple reflections from multiple angles, like a video capturing moving eyes.
2/ The data makes it plain that the NYT has abandoned its commitment to nonpartisan reporting. When the internet threatened their business they made a devil’s bargain to amplify outrage and us-vs-them psychology. Racism wasn't a new problem in 2014 but their stock being down was.
5/ How does the NYT compete with YouTube? Identity = virality was the Buzzfeed formula, with articles like ‘X Things Only a Y Would Understand’. Klein: ‘outrage is deeply connected to identity’.
The New York Times is now functionally Buzzfeed with a cultured facade.
Everyone thinks the media is ok until they interact with them. They don’t give a shit about anything except driving eyeballs. The first few times reporters wrote about me were incredibly eye-opening.
Playing with GPT-3 feels like seeing the future. I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It's shockingly good.
11/ In short, The New York Times and Trump harness the same forces. It's working: trust is down, but the stock is up. The paper of record is now an engine for turning out-group hatred into profit. We don’t have to help them.
#BoycottTheNYT
Palantir is moving to Colorado.
One of the biggest cracks I've seen in San Francisco's status as the center of our industry. CEO Alex Karp said he was against the “increasing intolerance and monoculture of Silicon Valley”.
Of people working in AI safety, a poll of 44 people gave an average probability of about 30% for something terrible happening, with some going well over 50%.
Remember, Russian roulette is 17%.
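A quick sanity check on the comparison above, as a minimal Python snippet. The only inputs are the two figures quoted in these tweets: the ~30% poll average and the 1-in-6 odds of a single pull of Russian roulette.

```python
# Comparing the two probabilities quoted above. The only inputs are the
# figures from the tweets: a ~30% average from the 44-person poll, and
# the 1-in-6 chance of a single pull of Russian roulette.
roulette = 1 / 6        # one loaded chamber out of six
poll_average = 0.30     # average "terrible outcome" probability from the poll

print(f"Russian roulette: {roulette:.1%}")            # 16.7%
print(f"AI-safety poll average: {poll_average:.0%}")  # 30%
print(f"Ratio: {poll_average / roulette:.1f}x the roulette odds")  # 1.8x
```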
3/ These are real problems, but do the graphs reflect a secular trend in the world or a change in business tactics? The suddenness of the spikes tells us it’s the latter, as does checking Google Trends where we don’t see corresponding spikes.
The acc / decel dichotomy is really dumb.
Like maybe we can figure out the right policies by flattening everything into tribes and yelling loudly. Just dumb. Debate specific policies and risks.
Imagine building a new type of nuclear reactor that will make free power.
People are excited, but half of nuclear engineers think there’s at least a 10% chance of an ‘extremely bad’ catastrophe, with safety engineers putting it over 30%.
What happens when we try to give laws or describe values to alien minds that are vastly smarter than us? It can’t be emphasized enough that we have no idea how to do this. This isn’t just an engineering problem. We have no idea how to do this even in theory.
4/ Ezra Klein does a great job explaining what happened: the internet means news has to compete with more entertainment options, and no longer has to appeal to any particular geographic region containing people with diverse values.
The most uncertain part has been when AGI would happen, but most timelines have accelerated. Geoffrey Hinton, one of the founders of ML, recently said he can’t rule out AGI in the next 5 years, and that AI wiping out humanity is not inconceivable.
This is an absolutely incredible video.
Hinton: “That's an issue, right. We have to think hard about how to control that.”
Reporter: “Can we?”
Hinton: “We don't know. We haven't been there yet. But we can try!”
Reporter: “That seems kind of concerning?”
Hinton: “Uh, yes!”
No one knows how to describe human values. When we write laws we do our best, but in the end the only reason they work at all is because their meaning is interpreted by other humans with 99.9% identical genes implementing the same basic emotions and cognitive architecture.
I'm trying not to get hyperbolic but I feel like I've just been shown electricity for the first time. I'm not sure what we're going to do with it, but damn if it doesn't feel like it could change everything.
Imagine if the hackers had started with Trump announcing we were under attack and he was launching nukes. It's sobering to realize how much damage Twitter's shit security can cause.
9/ Quoting @bariweiss's now-famous resignation letter, these are lessons "about the importance of understanding other Americans, the necessity of resisting tribalism, and the centrality of the free exchange of ideas to a democratic society."
The Generative Age: AI will do for content production what the internet did for distribution.
What happens when the cost to produce a movie like The Avengers goes from $350M to 30 cents?
10/ There are outlets across the political spectrum that claim to be fair that we know aren't. The difference is that the NYT used to have a genuine claim to fairness and not everyone yet realizes that they no longer do.
@torkander @paulg Yes, I am holding the NYT to a higher standard than Fox News, given its history and its reputation. But that increasingly seems like wishful thinking.
6/ The producers of politicized media are themselves the most voracious consumers of politicized media. The resulting distorted worldview is what led the NYT to assure us Trump had no chance in 2016.
8/ After mishandling the election the NYT vowed to “rededicate ourselves to the fundamental mission of Times journalism. That is to report America and the world honestly, without fear or favor.” Sadly it's clear the lessons of 2016 have not been learned.
7/ The NYT was *so* confident Trump had no chance that "In just six days, The New York Times ran as many cover stories about Hillary Clinton’s emails as they did about all policy issues combined in the 69 days leading up to the election."
Here’s what others have said on the risk:
On the risk of AGI killing everyone: “So first of all, I will say, I think that there's some chance of that. And it's really important to acknowledge it.” - Sam Altman
From where I'm sitting GPT-4 looks like it's two paperclips and a ball of yarn away from being AGI. I don’t think anyone would have predicted a few years ago that a model like GPT-4, trained to predict TEXT, would with enough compute be able to do half the things it does.
My trust in the large AI labs has decreased over time. AFAICT they're starting to engage in exactly the kind of dangerous arms race dynamics they explicitly warned us against from the start.
It seems clear to me that we will see superintelligence in our lifetimes, and not at all clear that we have any reason to be confident that it will go well.
I'm generally the last person to advocate for government intervention, but I think it could be warranted.
How are things going in San Francisco? Well, the Planning Commission is hearing a Discretionary Review filed by an ice cream shop to prevent a different ice cream shop from opening across the street.
Slowly more and more credible people began to admit it was a problem. Eventually Elon read Superintelligence and wrote the tweet that launched AI risk into the mainstream:
”without AI alignment, AI systems are reasonably likely to cause an irreversible catastrophe like human extinction.” - Paul Christiano (widely considered one of the top alignment researchers)
Meanwhile we are rapidly rushing toward AGI. Microsoft Research released a paper a few days ago titled “Sparks of Artificial General Intelligence: Early experiments with GPT-4”.
San Francisco could have been the most important city in the world.
Look at the Fortune 10. Tech is the most important industry and San Francisco was the center of it, but it has done everything in its power to destroy one of America’s most important strategic assets.
I'm told I should use #ghostnyt. Interesting mix of left and right people retweeting. If anyone wants to learn more, the report I link to is a great place to start:
When I first started reading about AI risk it was a weird niche concern of a small group living in the Bay Area. 10 or 15 years ago I remember telling people I was worried about AI and getting the distinct impression they thought I was a nut.
“We demonstrate that [...] GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level.”
Wanted: A service that chronologically collects every interview, essay, and podcast with a given person and allows you to subscribe to them collectively. I'd use this to follow people I consistently find interesting like Peter Thiel, Brian Eno, Marc Andreessen, etc.
Gall's Law: "A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system."
Since people seem unable to restrain themselves from the obvious dunk, I should add that the labs I invested in (and most of my friends) are explicitly focused on safety.
You don’t realize until you see it happen how much latitude there is in turning an event or conversation into a narrative - even without lying. Though I’ve also seen journalists explicitly lie multiple times.
@LauraDeming WW2 wage freezes. Maybe not too insignificant-seeming then, but completely insignificant compared to the fact that they’re responsible for how fucked our entire medical system is 80 years later.
@twitbranwell I’m not sure. I think part of it is how sci-fi it sounds. As for researchers, I think another part is a reaction to feeling your field is being attacked.
Just made my 30th startup investment. Prices are amazingly high and it's easy to call it a bubble, but given the number of incredibly successful companies I've seen form I wonder if this is actually what market efficiency looks like.
Five days into Covid and it’s rough, despite having been fully vaccinated (J&J). I keep recovering and relapsing. Stay safe out there folks. Might be time to break out the masks again and look into the research on double vaccination.
The smartest people are easy to understand.
"the 100 companies whose officers used the most complex language averaged a return of 9.45% per year. The companies in the simplest language decile returned 15.4% per year."
Unexpected hazard of studying zoning:
Watched Kiki's Delivery Service last night and kept noticing how their quaint little city would be illegal in SF. Running a business out of your attic? Illegal. Living out of the back of a bakery? Illegal. Charming narrow streets? Illegal.
Spent a good chunk of last week writing a program to generate the pattern from the ceiling of the Sheikh Lotfollah Mosque. Here's what I've got so far, engraved in brass.
The Biden administration just announced a plan for 30 MILLION student loan borrowers to receive debt relief.
This would allow so many people with student debt to buy houses, start a family, go back to school, and so much more.
My friends who leave San Francisco always come back. Time to make peace with it: there’s no escaping SF.
I’ve avoided getting involved in the knife-fight-in-clown-car that is SF politics for a long time, but there is no other option.
We have to fix this city.
I've heard through the grapevine and seen on social media that 4 founders who moved to NYC have come back to San Francisco recently.
I bet there are many more...
Seems there is a material trend of folks quietly boomeranging back to San Francisco from NYC, Miami, and other markets.
NYC…
I wrote 25 pages on the CA housing crisis after hundreds of hours of research and never finished it because 1) it made me sad, and 2) I expected it would piss off almost everyone.
GPT-3 wrote this?! Even if Arram took best-of-several, that's still scary. I spent a whole minute laughing at machine comedy. I was scared because I knew what it meant and I was still laughing.
There's a sense in which stories can never be true that I think most people don't really appreciate. To quote Philip Pullman: “The story, of course, didn’t happen: the events happened. The story happened later, when I picked out certain of the events and told them.”
The feeling when AI-generated text makes you skeptical of human writing, because you realize how easy it is to see meaning where there isn't any.
Especially true of abstract or flowery language.
What are the best things you've done to become a better programmer? (Looking more for intentional learning techniques than things like getting a degree or a full time job.)
My friend @joshalbrecht just announced that his company raised $200M. I’ve long considered Josh the most talented person nobody knows about. He’s also the most reasonable, conscientious person I know. There’s a crazy story I tell to explain to people what kind of person Josh is... 🧵
I'm excited to share our $200M Series B at a $1B+ valuation to develop AI systems that reason!
We believe reasoning is the main blocker to effective AI agents. We train large models tailor-made for reasoning on our ~10K GPU cluster. On top, we prototype agents we use every day.
@thelawofaverage Yes, that's what happens when you artificially restrict supply from responding to demand. But by all means, let's blame people for living in homes.
This is a damning resignation letter for The New York Times (but unsurprising to anyone paying attention). ‘Showing up for work as a centrist at an American newspaper should not require bravery.’