Kevin Lotto
@kevinlotto
Followers: 68 · Following: 386 · Media: 8 · Statuses: 327
Dedicated to a safe human future
Wisconsin
Joined March 2011
Microsoft AI CEO Mustafa Suleyman says there's a creeping assumption that it's inevitable that AI will exceed our control and move beyond us as a species.
11 · 15 · 34
Elon Musk says he doesn't think anyone will have control over superintelligence any more than a chimp would have control over humans.
11 · 9 · 66
𝘕𝘦𝘹𝘵 𝘞𝘦𝘦𝘬 𝘰𝘯 𝘋𝘰𝘰𝘮 𝘋𝘦𝘣𝘢𝘵𝘦𝘴: SHOULD WE BAN ARTIFICIAL SUPERINTELLIGENCE? Featuring: 👤 Max Tegmark (@tegmark) 👤 Dean Ball (@deanwball) This is the debate the world needs right now, between two of the clearest voices on both sides. Topics covered: ◻️ Should
37 · 17 · 152
The American public are worried about us losing control to superhuman AI.
2 · 14 · 38
@ai_ctrl Calm voices of reason. Speaking and listening to other people. Finding areas of agreement and disagreement. Our path forward should bend and curve, and sometimes go backward. We will have to struggle to climb up steep inclines, and carefully descend declines. The most
0 · 2 · 12
OpenAI is aiming to develop automated AI researchers by March 2028. This is thought to be a key stepping stone on the path to superintelligence, which experts warn could lead to human extinction. Learn more about this and other AI news in our newsletter: https://t.co/3MFbziv2Lo
controlai.news
“a true automated AI researcher by March of 2028”
0 · 15 · 40
I am of course naive on many fronts here. Just giving my sense of the situation.
0 · 0 · 0
This feels like it will lead to a precarious situation in the US that will win big or fail big, while China is building a strong foundation for any outcome.
1 · 0 · 0
China leads in applying the lessons of US companies and research (along with their own) to robots, and in adopting AI across businesses and factories.
1 · 0 · 0
Based on some reading I've been doing, it seems like the current state of play is the following: The US leads in AI model scaling by leveraging its economic power toward a handful of companies that make up almost the entirety of economic growth for the past few years.
1 · 0 · 0
Superintelligence, if we develop it using anything like current methods, would not be under meaningful human control. That's the bottom line of a new study I've put out entitled Control Inversion (link in second post). Many experts I talk to who take superintelligence (real,
15 · 30 · 112
If I were a policymaker right now I would 1) Be asking 'how many months are between Claude Code's capabilities and those of leading open-source models for cyberattack purposes?' 2) What are Claude Code's capabilities (and those of other frontier models) expected to be in 1 year,
5 · 18 · 80
While not surprising, this is seriously worrying. Understanding emerging dual-use AI capabilities should be a priority for the US gov. Now's the time to turbocharge CAISI. "the first documented case of a large-scale cyberattack executed without substantial human intervention."
We disrupted a highly sophisticated AI-led espionage campaign. The attack targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We assess with high confidence that the threat actor was a Chinese state-sponsored group.
6 · 10 · 78
With many more to come, as has been predicted for decades. Our best hope at survival is to pause. Join the 120k others in fighting for a pause: https://t.co/Wi9EJXdQ4i. And take other actions as you can: https://t.co/QeewM92W8t.
existentialsafety.org
Our central aim is to ensure humanity survives this decade. We need to fight for our existential safety. Everyone can and should do their part. Collectively, we can increase the odds that we not only...
We believe this is the first documented case of a large-scale AI cyberattack executed without substantial human intervention. It has significant implications for cybersecurity in the age of AI agents. Read more:
0 · 4 · 9
Baroness Cass (@Hilary_Cass) just backed our campaign for binding regulation on the most powerful AI systems! 90+ UK politicians have joined, acknowledging the extinction risk from AI. This is the first time that such a coalition of lawmakers is taking a stand on this issue!
2 · 6 · 29
It is a common belief of some AI developers that they will be able to retain control of the alien intelligences they are growing. This is very unlikely to be the case. It is extreme hubris to claim otherwise, given the evidence and theory to date.
Superintelligence, if we develop it using anything like current methods, would not be under meaningful human control. That's the bottom line of a new study I've put out entitled Control Inversion (link in second post). Many experts I talk to who take superintelligence (real,
0 · 4 · 9
I've said some nice things about Anthropic today so it looks like I have to balance it out with a whole lot of FFS. Way to boast about doing the thing we trusted you not to do.
Anthropic (2023, Core Views on AI Safety): "we do not wish to advance the rate of AI capabilities progress" Anthropic employee (2025, Twitter):
2 · 2 · 17
GPT 5.1 is so sycophantic even right out of the box (no memory, and not even the "sycophant" modes called "friendly" and "quirky"). This Thanksgiving I'll be grateful that I've got an inherent disgust reaction to this, and I pray for the millions of people who don't.
4 · 2 · 27
Anthropic (2023, Core Views on AI Safety): "we do not wish to advance the rate of AI capabilities progress" Anthropic employee (today, Twitter):
2 · 3 · 38
🚨 NEW POLL: Most Americans now believe artificial intelligence will probably destroy humanity. Many experts agree with them. We can solve this problem. Over 100,000 people, including countless experts and leaders, have joined a call to ban the development of superintelligence.
10 · 7 · 41