If, as mentioned, Sama and others truly believe the public should have board oversight, we deserve the information that caused contention of this severity.
If super intelligence has been achieved, this is a human-kind level tech shift. As inspiring as it is, I refrain from 100%…
Whilst we all root for Sam, it *is* a bit scary that all voices here seem so unified.
To take the steelman position here, what did Ilya see? What are the risks specifically & how real are they?
We can make fun of “doomers”, but we would be unwise to believe in zero risk.
1/3
Whenever it seems everyone is on one side, I deeply believe in entertaining the counter-position. This is not against Sama whatsoever; it's about taking seriously the worries of some very smart people.
Ilya knew this was a huge risk. To say it’s purely politics seems naive.
2/3
@zachtratar
Don't you think we'll find out more about that either way, and--taking the proposition seriously--that there would be time to take further action before there is actual risk?
@zachtratar
General intelligence was achieved at AlphaZero. What OpenAI has achieved is building a version that is more useful to humans, but there is nothing special about GPT-5 as compared to 3.5 or AlphaZero, except that humans find it much more useful.
Ilya wants to keep this realization…
@zachtratar
Super-intelligence has not been achieved. Perhaps Altman did something secretive and irresponsible, but super-intelligence? No. They don't have enough compute.
But it would be nice at this point for the issue to be publicly aired. Maybe Altman built a self-directed learning…
@zachtratar
Doubt it. Clips of Ilya talking about the supremacy of BIG closed models, compared to opening up the process of all this, really seem to indicate ego > safety. We'll see though.
Question is where does Ilya go after stepping down 🤔
@zachtratar
It is all very well to point out that people "should be scared", but what about the many benefits it will bring? Not least its ability to help cure many diseases.
@zachtratar
All the drama aside, "Intelligence, in my view, is the final frontier in the computing tech tree and potentially *all other tech trees*."
IMHO, it is not even potentially all other tech trees. It IS all other tech trees. When AGI is achieved, it will very VERY quickly catch up…
@zachtratar
If there is something, here are the clues:
Nov 1: SamA says LLMs are not enough. We need a big breakthrough for AGI.
Nov 6: SamA “everything today will seem quaint next year”
Nov 16: SamA “4 times pushed the frontier of discovery, last one within last couple of weeks”
Nov 17:
@zachtratar
Imo it’s actually simpler than that. It was likely a do-it-now-before-it’s-too-late attempt to fix what Elon and others disliked about OpenAI: keep it open and non-profit, and wind down the last 18 months of product and business build-up, because he believes in a…
@zachtratar
Polarization is due to unclear reasons.
To anyone watching, it seemed like a coup.
But what if Ilya was just after power, fame, and money?
We don't know if he actually saw something or not.
@zachtratar
Everyone is silent about the risks Ilya saw, so I guess in this climate of war it was top-secret military contracts on autonomous weapons that had been hidden from the Board.
@zachtratar
If you’re concerned about ethics, having it created within the walls of the world’s 2nd-largest company, with everything to lose, is almost certainly the best place for it.