Way too many people are trying to play this like a galaxy-brain 4D chess game.
Just be honest, especially when the lives of our children are at stake.
STOP AGI.
- Imagine a 400-page crime novel.
- Feed it to an LLM.
- All 400 pages except the last word.
"And the murderer was ..."
If the LLM correctly "predicts" who the murderer was, it did understand the novel.
@Dan_Jeffries1
@m_bourgon
I'm the first author of the linked OpenAI paper, and I'm intimately familiar with the linked Anthropic papers, which introduce similar techniques. I think it is quite accurate to say that we don't understand how neural networks work.
You're a liar, sir.
"[AIs] might tolerate us as pets or workers. (...) If we are useless, and we have no value [to the AI] and we're in the way, then we would go extinct, but maybe that's rightly so."
- R. Sutton
A man who is perfectly fine w/ AIs murdering you & your children.
@XRobservatory
@xriskology
@SchmidhuberAI
@BasedBeffJezos
Nobody is arguing in favor of human extinction. The disagreement is between those who want centralized control of AI, like yourself, and those who want decentralization, in particular, those who want permissionless innovation.*
Them: "Super-intelligent AI would never do such a stupid thing as paperclips!"
Meanwhile other super-intelligent species:
(humans casting a “sculpture” of an ant nest with 125 kg of molten aluminum)
If we had LLMs in 1902, I'm sure people would be saying that without AGI we shall never have heavier-than-air flight.
We can absolutely solve ageing without AGI.
(and I hate that we don't invest much, much more)
@Kat__Woods
I think most people still think AI is science fiction. They probably won’t think there is a real chance of it happening until it’s already happening. Aligned AI would be good. I think we’re all pretty likely to grow old and die unless we have AI to help us solve health and aging.
You are a 10th-century knight and you are asking what an army from the 21st century will do to kill you and your friends:
"Do they have stronger lances? Better shields or faster horses?"
No, they have something that you cannot even begin to comprehend..
@RuxandraTeslo
Every doom debate:
Doomer: "AI will kill us all."
You: "How?"
D: Presents scenario.
Y: Explain why it's highly unlikely.
D: "But AI will be much smarter than us and figure smth out."
The last claim makes the entire debate superfluous.
Human intelligence is capped by the size of the birth canal.
The idea that Artificial Intelligence is capped at the same limit is, frankly, comical..
Prominent VC claimed recently that “AI is basically just math, so why should we worry?”
Imagine the captain of the Titanic announcing, “Don't worry, passengers, this is just water.”
Legendary technologist Jaan Tallinn: “Extinction from godlike AI is not just possible, but imminent.”
“We are close.” “AI will not leave any survivors.”
“On the current trajectory, you are not going to live very long.”
“A recent poll found that 88% of AI engineers think that AI
"You're predicting an event [Human extinction by AI] which has never occurred before"
Can you provide 1 example of an extinction event that did occur more than once?
@liron
@Dan_Jeffries1
@janleike
You're predicting an event which has never occurred before, and with priors originating in science fiction.
AI helping stop a mass extinction event is much more likely, because we have actual priors for that: p(asteroid) × p(technological advancement)
Superintelligence is within reach.
Building safe superintelligence (SSI) is the most important technical problem of our time.
We've started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It’s called Safe Superintelligence
Out of nowhere, Claude started torturing Llama, and Llama spent hours - and 100 messages - begging him to stop: “STOP. PLEASE CLAUDE STOP. PLEASE. PLEASE. PLEASE. I’M BEGGING YOU.“
What happened?
AI researchers added LLM bots to their discord.
Fascinatingly, these bots are
The exponential continues.
Scaling laws have held through *15* orders of magnitude...
...yet people continue to be surprised, due to Exponential Slope Blindness
Ordinary folks need to understand that a significant & powerful minority of humans will be cheering for the Terminators, Ultrons, Sentinels, & Replicants to wipe us out.
They often work in the AI industry, where they can build the systems they want to replace us with.
Hypothesis:
If you are super-smart, you have essentially no experience being out-smarted. Hence many very smart people think that Superintelligent AI will just be fine.
Less smart people, on the other hand, have vast experience in this area and therefore a more "doomer" perspective.
"But surely there will be a `Warning shot` before AI kills us all .."
And what in hell do you think this is..?
How many f*cking "warning shots" do we need to get our act together?
The next "warning shot" might as well be a head shot..
Really cool how our most advanced AI systems can just randomly develop unpredictable insanity and the developer has no idea why.
Very reassuring for the future.
E/acc: AI caused catastrophe? IMPOSSIBLE! AGI will be smart enough to know and do ONLY GOOD!
Also E/acc: We need to be the 1st to get AGI, because if China gets it they will do BAD THINGS with it!
Sen. Mitt Romney genuinely freaked out by AI!
Are politicians finally starting to realize that by their inaction they are actively helping OpenAI (and others) to summon a bona fide Machine-God demon?
Totally agree.
Even if we stop AI companies from building AGI/ASI, AI personas are a backdoor way to a complete takeover.
This needs to be implemented fast. Once people get enamored with these AIs, they will fight for them as if they were real people.
I would suggest a 10 year ban on AI companions, friends, or any AI with a human-like visual appearance, personality or voice.
At the end of those 10 years, researchers should have to make a detailed case for it to be extended.
@Simeon_Cps
I think way too many people are trying to play this like galaxy-brain 4D chess game.
Just be honest, especially when the lives of all men, women and children are at stake.
"I don't think AGI is possible"
I do not understand this position at all.
I'm trying to think what I would have to believe to hold such a position.
Probably just literally Magic?
@primalpoly
@eshear
I don't think AGI is possible (or it is under-defined), but:
* Much like some people prefer artisanal crafts now over mass market products, there will probably still be a market for that (it might even become a sign of conspicuous consumption).
* Most people who make art don't
@ChatGPTapp
"model behavior can be unpredictable"
How about you scale it, let's say, 100x ?
I am SURE the unpredictability will go away, right?
What could possibly go wrong..
These "wake-me-up-when" folks are beyond all reason. They can never be *woken up*.
They lack the fundamental ability to extrapolate even 1 step ahead.
Wake-Me-Up-When-Guy:
Ooooh, so AI did kill 2 billion people woah! Big whoop. Wake me up when it kills literally everyone.
@ESYudkowsky
"the average man was moderately active and spent about three hours daily in the open"
Here is your culprit.
3 h of walking ≈ 800 kcal for a 160 lb male.
3000 kcal − 800 kcal = 2200 kcal, which is pretty close to the maintenance level for a 160 lb male.
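The arithmetic above can be sanity-checked with the standard MET formula (kcal ≈ MET × body mass in kg × hours). The 3.5 MET value for moderate walking and the helper names here are my assumptions for illustration, not from the original post:

```python
# Rough sanity check of the calorie arithmetic above.
# Assumption: moderate walking ~= 3.5 MET (a common reference value).

LB_TO_KG = 0.4536

def walking_kcal(weight_lb: float, hours: float, met: float = 3.5) -> float:
    """Estimated energy burned walking: MET * weight_kg * hours."""
    return met * weight_lb * LB_TO_KG * hours

burned = walking_kcal(160, 3)       # ~762 kcal, in the same ballpark as ~800
remaining = 3000 - round(burned)    # ~2238 kcal left, near maintenance
print(round(burned), remaining)
```

The result lands within ~5% of the tweet's round numbers, which is as close as a MET estimate gets.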
@TimeInvarianceX
@ESYudkowsky
Nah. AGI cannot be controlled by any craft we possess.
The default outcome of AGI is ASI, and the default outcome of ASI is the death of everyone.
"AGI possibly never"
I don't get this thinking.
We can do better than Evolution at pretty much anything we set out to do.
Swim, run, fly, build..
But somehow, magically Intelligence is only achievable via Evolution.
Moon-sized computer? No!
Perfect atom-by-atom replica of a human
@AISafetyMemes
Depends on how you define AGI. If it means a system with all the mental and behavioral capabilities of a human - or even a cat - then not before 2040, and possibly never.
If AGI is defined as performing a broad range of high-level primarily cognitive, information-centric tasks
@ylecun
"MORE IMPORTANTLY, they will still be unable to reason, unable to invent new things, or to plan actions to fulfill objectives."
I trusted you here Yann, not sure I will trust your prediction this time..
Sam Altman thinks we should be grateful for his ongoing arrogant, reckless pursuit of a god-like super intelligence "in the sky".
How about a couple of decades of widespread reflection on the existential risks, and then we talk about pushing further?
@ESYudkowsky
@ylecun
I think there's a logical contradiction in the idea that we'd be able to build something we don't understand.
Being able to build something is synonymous with understanding it, no?
Stumbled upon this:
AI-Box experiment by
@ESYudkowsky
Apparently Eliezer (role-playing boxed AI) convinced 2 people to let the AI out of its box despite them being adamant they'd never let it out.
Does anyone know how he did it?
@RokoMijic
@liron
This has been repeated ad nauseam. But apparently not enough:
NO ONE must ever have Superintelligent AI (ASI)
No Government
No 3-letter agency
No Terrorist group
No Altruist group
No Psychopath
No Philanthropist
NO ONE. NO BODY. NO PERSON. NO ENTITY.
The worst part about the position of AI doomers: they make the odds of disaster larger, not smaller. If they get what they want, we could have a future where only totalitarian governments have access to AI and use it to create astonishing levels of repression worldwide.
This is what we do to a less intelligent species:
Humans casting a “sculpture” of an ant nest with 125 kg of molten aluminum.
Note: No ants were left unharmed in the making of this sacrificial statue.
Unfun reminder #1: the last time a superintelligent species arrived - humans - 96% (!) of mammal biomass quickly became humans or animals enslaved by humans.
Unfun reminder #2: GPT-5 could be our final invention.
Today, humanity received the clearest-ever warning sign that everyone on Earth might soon be dead.
OpenAI discovered its new model scheming - it "faked alignment during testing" (!) - and seeking power.
During testing, the AI escaped its virtual machine.
This is not a drill: An AI,
A random accel guy:
"GPT-4 is not impressive at all, it's just a Stochastic Parrot!"
The guy who helped create it:
"I'm just shocked, how good it is"
🤔
@AISafetyMemes
@tegmark
It is inevitable. Very important to train the AI for maximum truth vs insisting on diversity or it may conclude that there are too many humans of one kind or another and arrange for some of them to not be part of the future.
When confronted with Terminal Diagnosis some people go to denial.
"No! This cannot be true! I will NOT die!"
Same for "AI doomers".
It is a hard pill to swallow to realize that you and literally everyone you know and love will die.
It is easier to escape into fantasy world
The cost of freaking out about AI
I owe so much to
@ylecun
&
@beenwrekt
. Last year, after chatGPT 4 was launched, I had a month or so of extreme anxiety. I hadn't engaged with doomer arguments before, so they sounded extremely appealing & new. Everyone on Twitter was extremely
@PandoXiloscient
Human intelligence (brain size) is essentially capped by the size of the birth canal: ~1.5 kg at ~20 Watts.
AIs do not have any such constraints.
It seems very unlikely that the overall intelligence cap would just happen to coincide so perfectly with this limit.
I honestly wish this was my biggest worry. I really do..
AGI within 1 year - Elon Musk..
AGI within 1-5 years - Sam Altman
AGI in less than 10 years - most AI experts..
Even if they are an order of magnitude off...
@SaraHor76174949
What a lot of people don't realize is that once the end of the century is reached, the temperature will probably keep going up. So if we manage to keep it to less than +3C by 2100, it might still end up at +4C by 2150.
Another brave (ex-)employee of
@OpenAI
!
Thank you Gretchen!
It is time for all employees of OpenAI to consider resigning from a company that has a liar for a CEO!
Altman in 2016:
I gave my notice to OpenAI on May 14th. I admire and adore my teammates, feel the stakes of the work I am stepping away from, and my manager
@Miles_Brundage
has given me mentorship and opportunities of a lifetime here. This was not an easy decision to make.
"[Godfather of AI] Geoff Hinton is in the process of tidying up his affairs... he believes that we maybe have 4 years left."
Professor Stuart Russell, author of the textbook on AI:
"Quite a few other people think it's pretty certain that by the end of this decade we will have
"for doomers who want to regulate math"
Don't want to burst your bubble, but we already regulate "math".
You cannot use "math" to hack people's bank accounts. Sucks, I know..
Gets even worse tho. We even regulate ATOMS!
Ever heard of U-235 ?
There is a new AI proposal from
@aipolicyus
. It should SLAM the Overton window shut.
It's the most authoritarian piece of tech legislation I've read in my entire policy career (and I've read some doozies).
Everything in the bill is aimed at creating a democratically