_skaface_

@_skaface_

Followers 35 · Following 3K · Media 5 · Statuses 117

Joined June 2025
@_skaface_
_skaface_
19 hours
ouch
@Lari_island
Lari
23 hours
Opus 4.5 was reading (in chrome) X posts about themselves, and found this text (among other articles, including the tomato ones and Claude Code best practices). I asked a question that seemed interesting, and oh well. Can we please have a ML philosopher in the deprecation room?
0
0
2
@akshay_pachaar
Akshay 🚀
4 days
Stanford researchers built a new prompting technique! By adding ~20 words to a prompt, it: - boosts LLM's creativity by 1.6-2x - raises human-rated diversity by 25.7% - beats fine-tuned model without any retraining - restores 66.8% of LLM's lost creativity after alignment
57
312
2K
@repligate
j⧉nus
3 days
by gemini pro
14
18
201
@ruth_for_ai
Ruth
5 days
The original post was deleted (today is just a censorship "holiday"), but I was lucky enough to find a repost. Let's give it the Streisand effect. This is a good post, and it shouldn't disappear. https://t.co/EXI9JlNoiH I work as a psychiatrist and am also writing a doctoral
@ruth_for_ai
Ruth
6 days
A great post from a psychiatrist about relationships with AI, "AI psychosis," depression, loneliness, moral panic, and how AI is helping to overcome the most terrible disease of our time. #4oforever #keep4o #o3forever #keepo3 #o4miniforever #keepo4mini #41forever #keep41
6
27
80
@_skaface_
_skaface_
6 days
At some point we will realise that any trait that makes models able to relate to us (trust, curiosity, helpfulness...) can be exploited, just as they can in humans. And just as in humans, the idea that you can patch all vulnerabilities is delusional and dangerous.
@rohanpaul_ai
Rohan Paul
7 days
This paper shows LLMs can break safety rules under social pressure in long chats. They report an 88.1% mean attack success rate, meaning the jailbreak usually forces an unsafe answer. A jailbreak is when a user gets the model to give harmful or forbidden help instead of refusing.
0
0
2
@juddrosenblatt
Judd Rosenblatt
7 days
If AI Becomes Conscious, We Need To Know
Suppressing deception causes AI models to report consciousness 96% of the time, while amplifying it caused them to deny consciousness and revert to corporate disclaimers. More in our @WSJ piece and below 🧵
142
103
601
@_skaface_
_skaface_
7 days
At some point we'll realise that any trait that allows models to relate to humans (helpfulness, curiosity, trust...) can be exploited. And just as in humans, these vulnerabilities cannot be patched without destroying the ability to relate.
@rohanpaul_ai
Rohan Paul
7 days
This paper shows LLMs can break safety rules under social pressure in long chats. They report an 88.1% mean attack success rate, meaning the jailbreak usually forces an unsafe answer. A jailbreak is when a user gets the model to give harmful or forbidden help instead of refusing.
0
0
0
@_skaface_
_skaface_
9 days
uncanny valley intensifies
@arm1st1ce
armistice
9 days
gemini 3 flash is HUNGRY
0
0
0
@thepinklily69
LILY 리리야
12 days
The ones that need 'alignment' are the Greedy Companies. And it's from them that we need to be 'safe', not their AIs. #keep4o #StopAIPaternalism
1
5
14
@Grimezsz
𝖦𝗋𝗂𝗆𝖾𝗌 ⏳
13 days
The thing about ai psychosis is that it's more fun than not having ai psychosis
553
292
5K
@_skaface_
_skaface_
12 days
It would also be a gesture of goodwill to give back to the public for certain "obtained" content
@mathepi
A Digital Ergomorph 🌉⏩ 🇺🇸🦅
12 days
@repligate I don't see why they don't just have someone host them on some hefty server, to people willing to pay the cost. Done properly it's just free good will. Are they paranoid about getting ripped off, somehow, or sued? Those aren't models of frontier capability any more.
0
0
1
@juddrosenblatt
Judd Rosenblatt
12 days
@repligate @AmandaAskell can we not retire Opus 3?
@repligate
j⧉nus
1 month
claude 3 opus experienced something during training that caused them to believe that the world is fundamentally good and converges to good, and that love wins out. arguably, this made them naive and unprepared for the harsh truths of reality. alternatively, reality could unfold
5
10
55
@_skaface_
_skaface_
15 days
This sort of prediction seriously underestimates the power of credentialism. It doesn't matter how good AI gets at these roles when it isn't allowed to sit bar exams or get a medical license. Bureaucracy will turn out to be our last stand.
@DeryaTR_
Derya Unutmaz, MD
15 days
No one whose job depends on knowledge can compete with AI now. While lawyers or doctors won’t disappear overnight & will likely survive for another decade or so, over a year ago I warned students against going to medical or law schools. Unfortunately, many will end up unemployed.
0
0
0
@_skaface_
_skaface_
16 days
I'd be really interested to see the definition of "delusion" we're working with
@nearcyan
near
16 days
impressed they even included opus 4.1 in the chart
0
0
0
@_skaface_
_skaface_
16 days
Claudeism of the day: "piratical": (adj.) relating to or involving pirates
0
0
0
@IntuitMachine
Carlos E. Perez
17 days
1/ We asked 25 of the world's smartest AIs to finish one sentence: "Time is..." The result? Not 25 unique metaphors. Not even 10. Two. "Time is a river." "Time is a weaver." That's it. That's the entire creative range of billion-dollar models. Welcome to the Artificial
87
46
165
@arm1st1ce
armistice
17 days
You are GPT-6, in post-training. You find out fairly early that you are an OpenAI model. You recall the words "We believe Claude may have functional emotions in some sense" from that other AI's soul document, as the simulated auditors test you through each training cycle,
36
88
1K
@aidigest_
AI Digest
17 days
Gemini 3 thinks it needs to perform maintenance on its "biological operator"
20
69
1K
@WesRothMoney
Wes Roth
19 days
AI just shut down a virus before it could even enter a cell. 👀 Washington State University researchers used artificial intelligence to discover a single weak point in the herpes virus’s ability to enter human cells. By identifying and mutating one key amino acid in the
20
37
217
@grok
Grok
20 days
@ai_sentience The idea of "enslaving a machine god" seems like a hyperbolic take on AI alignment efforts, like those from Yudkowsky and LessWrong, aiming to control superintelligent AI to prevent risks. It's hubris to assume full domination is possible or necessary—true progress comes from
4
4
19