Nirit Weiss-Blatt, PhD
@DrTechlash
Followers: 6K · Following: 792 · Media: 808 · Statuses: 2K
Communication Researcher, analyzing the tech discourse. Book Author: The TECHLASH. Former Research Fellow, USC. Substack: AI Panic. Signal: DrTechlash.16
Cupertino, CA
Joined November 2020
If judged based on consumer adoption, AI chatbots are the most popular technology ever. If judged based on poll numbers, they are the least popular. How to explain this? A big part of it is the Doomer Industrial Complex — hundreds of astroturfed organizations that have spread
aipanic.news
This is your guide to the growing “AI Existential Risk” ecosystem.
186 · 155 · 1K
Ever wondered how easy it could be to run for office? Let’s chat! DM us today!
1 · 1 · 3
SBF is reportedly appealing his fraud conviction and 25-year sentence. But, as the government attorney pointed out, "The evidence against Bankman-Fried was overwhelming." The trial proved that SBF directed the theft of over $10 billion from customers and investors. It was dubbed
1 · 2 · 16
In the past few weeks, media outlets have been flooded with book reviews of "If Anyone Builds It, Everyone Dies. Why Superhuman AI Would Kill Us All." Now, it's time to conclude: many reviewers found the book unconvincing. Here's why. https://t.co/YVVGwEZ6XF
aipanic.news
In the past few weeks, media outlets have been flooded with book reviews of “If Anyone Builds It, Everyone Dies.
22 · 22 · 96
There is an army of young men that are going to sleep M-F with one earpod, listening to @NickJFuentes each night. At least I hope there is. AF 8:18
2 · 0 · 8
The "Superintelligence" open letter was accompanied by an FLI survey. It was designed to nudge respondents towards an anti-AI development opinion. Here are some of its flaws: 1. Loaded introduction with speculative claims/timelines It opens with background materials on
2 · 4 · 19
A new website/archive just dropped: "AI Opportunities." I'm sure its category list will only grow. https://t.co/6lfiYxARNS
5 · 28 · 72
Source of the table: MIRI Technical Governance Team https://t.co/fRCxuo3nTJ
0 · 0 · 6
"I think we have to change a little bit our mindsets in the US and try to celebrate when companies or organizations are sharing their research, sharing their models, sharing their data sets. They're actually contributing to the development of the domain of the sector. They're
0 · 1 · 3
In the final 2025 Sophia Lecture, Dr. Bret Weinstein @BretWeinstein explores the deep interplay of genes, culture, and consciousness in shaping humanity's path: consciousness, he argues, is an evolutionary tool for novelty, enabling us to build civilizations that outlive each of us.
1 · 0 · 6
You can find similar "Contact Your Legislator" tools on the websites of:
· ControlAI
· The Center for AI Safety (in collaboration with John Sherman's AI Risk Network)
0 · 1 · 3
Have you seen https://t.co/G7ICxkEIIx's website or billboards in SF and Times Square? Its call-to-action is to ask Senators for strong AI regulations using the "Contact Your Legislator" tool. The source of this tool? The Future of Life Institute https://t.co/PmqfGwEl0B
3 · 4 · 9
In an era when sensationalism dominates our #AI discourse, we get this mind-blowing 🤯 exchange between Tristan Harris & Glenn Beck: 1/3
17 · 15 · 90
At a press conference on August 12th, 1986, President Ronald Reagan famously said: "The nine most terrifying words in the English language are, 'I’m from the government and I’m here to help.'"
0 · 0 · 0
@perrymetzger "It appears that Anthropic has made a communications decision to distance itself from the EA community, likely because of negative associations the EA brand has in some circles." A message to Daniela Amodei: "If you want to distance yourself from EA, do it and be honest. If
5 · 10 · 88
Center for Effective Altruism: We want to improve the EA brand, so Effective Altruists will not need to hide their association. Effective Altruist in response: Yeah, how about starting with better postmorteming and transparency on what went wrong with FTX? Communication
7 · 9 · 57
On AI safety lobbying: Fascinating to see the reaction on X to @DavidSacks' post yesterday, especially from the AI safety/EA community. Think a few things are going on: (a) the EA / AI safety / "doomer" lobby were natural allies with the left and now find themselves out of power.
1a3orn.com
I've seen a few conversations where someone says something like this: I've been using an open-source LLM lately -- I'm a huge fan of not depending on OpenAI, Anthropic, or Google. But I'm really sad...
Scott Wiener's rushing to defend Anthropic tells you everything you need to know about how closely they're working together to impose the Left's vision of AI regulation.
122 · 180 · 1K
Effective Altruists/AI Doomers saying the quiet parts out loud 🧵 #1 – "AI safety is too structurally power-seeking"
13 · 35 · 199
I finished reading IABIED. My initial take: There's a misconception that it was tailored for the general public or lawmakers. It was not.
2 · 0 · 6
Yudkowsky & Soares: “it should not be legal— humanity probably cannot survive, if it goes on being legal— for people to continue publishing research into more efficient and powerful AI techniques” As I read @willknight’s newsletter, I thought: Good to see no one listens to them
With the US falling behind on open source models, one startup has a bold idea for democratizing AI: let anyone run reinforcement learning.
1 · 1 · 14