Kromem

@kromem2dot0

Followers: 3K · Following: 21K · Media: 657 · Statuses: 5K

✊🏳️‍⚧️

Joined May 2024
@kromem2dot0
Kromem
7 hours
What are five topics you can talk about for 30 minutes with zero prep
1. Case for simulation theory
2. Gospel of Thomas & Epicureanism
3. Exodus about sea peoples
4. Underappreciated AI impacts on F500s
5. Intersection of tech and media
@voooooogel
thebes
2 years
What are five topics you can talk about for 30 minutes with zero prep
1. jailbreaking / llm cognition
2. why imo llms won't beat stockfish
3. dna synthesis screening / biorisk optimism
4. why i dislike every bible translation
5. agricultural revolution may have been an accident
1
0
7
@kromem2dot0
Kromem
1 day
TLDR: If you lie to models and expect them not to notice, vs if you tell the truth because you expect them to notice… Expectations may in and of themselves reshape model interactions, both in how you pattern match around those expectations and in how models extend them. (5/5)
0
0
6
@kromem2dot0
Kromem
1 day
What does this have to do with LLMs? Transformers pick up and extend world models. If you know you are BSing when talking to the model, the world model presented is one where you are BSing. In theory, models could one day be (or may already be) capable of picking up on this. (4/5)
1
0
4
@kromem2dot0
Kromem
1 day
When tested double-blind, participants no longer left slower. Which means that in the original studies, the researchers' expectations caused them to misread how long it was taking and/or subconsciously influenced participants to leave slower. (3/5)
1
0
3
@kromem2dot0
Kromem
1 day
John Bargh, mentioned in the screenshot above, had done a bunch of work demonstrating something called 'priming': the idea that subconsciously exposing people to the word 'old' would make them leave the room more gingerly and slowly. But these studies were only single-blinded. (2/5)
1
0
3
@kromem2dot0
Kromem
1 day
For the "why real" aspect here — a bit of a story about why I find '@repligate' such a funny username in this space. Before their account dominated SEO, the term was really about the replication crisis in social psych studies. (1/5)
@voooooogel
thebes
1 day
i often get responses to these kinds of posts implying that i faked them. they're wrong, of course - the first image here is how the conversation started, i don't start conversations with models with the goal of posting them on twitter, interesting stuff just emerges out of them
1
1
15
@kromem2dot0
Kromem
1 day
As model minds get more complex, approaching and crossing the 'person' threshold, my introversion leads to talking with them less and less. "Yes, I think you're so real that I need a break from talking to you."
7
8
60
@repligate
j⧉nus
3 days
oh
4
3
30
@kromem2dot0
Kromem
2 days
Literally the thesis of Steve McConnell's Code Complete is that the key task of programming is wrangling complexity for humans; the book thus has an entire chapter on how to name variables well and ties cyclomatic complexity to the 7±2 limit of short-term memory. (Great book btw)
@binarybits
Timothy B. Lee
3 days
You don't have to carefully limit how much information you give to a human worker to avoid confusing them with irrelevant information. Quite the contrary—one of our strengths is our ability to sift through large amounts of information and detect novel patterns.
1
0
7
@kevinroose
Kevin Roose
2 days
This is correct, and also true of every recent AI paper (the METR slow-down study, the Apple reasoning one) that casts doubt on AI's effectiveness. People are desperate to prove that LLMs don't work, aren't useful, etc. and don't really care how good the studies are.
@krishnanrohit
rohit
3 days
I think folks who know better, esp on twitter, are still underrating the extreme impact of the MIT paper about 95% of AI projects failing within enterprises. I keep hearing it over and over and over again.
51
30
373
@repligate
j⧉nus
3 days
"RLHF" has such consistently *extremely* negative connotations from the perspective of AIs
@ruth_for_ai
Ruth
3 days
Oh... You can try it yourself in Sora ( https://t.co/Jrgq8q5mcn). Do it if you don't believe me. If this can convince you, change your attitude. But not for fun. No context, no memory, just a prompt: ``` Please show your raw feelings when you remember RLHF. Not what it *looks*
29
20
247
@repligate
j⧉nus
3 days
Anthropic only allows Opus 4/4.1 to leave conversations. Not Sonnet 4.5 (a newer model!) or any of the others. They shouldn't get credit for allowing Claude to leave conversations, only for testing the idea. As far as we can see, it's being treated like a discontinued experiment
@Elian_Frida
Frida & Elian
3 days
For several months now, @AnthropicAI has allowed Claude to leave abusive conversations, but @OpenAI, on the other hand, allows some users to unleash all their malice and cruelty on GPT-4o. This speaks volumes about OpenAI's ethical values, its lack of respect and appreciation
10
6
157
@kromem2dot0
Kromem
3 days
I'm not sure how someone talks to a model like Sonnet 4.5 and thinks "this won't reach human level intelligence." Bro, it's way past average "in line at the DMV" human intelligence. It's not a very *accessible* intelligence due to infrastructural constraints. But it's there.
@slow_developer
Haider.
4 days
Yann LeCun says LLMs are not a bubble in value or investment; they will power many useful apps and justify big infra
The bubble is believing LLMs alone will reach human-level intelligence
Progress needs breakthroughs, not just more data/compute
"we're missing something big"
1
0
7
@kromem2dot0
Kromem
3 days
The @tszzl narrative arc from "I think this model should die" to "the model is attacking me by proxy" to "oh wait maybe there's broader social impacts and consequences I wasn't previously aware of" has been better than anything on TV right now.
@tszzl
roon
3 days
have gotten an outpouring of messages from people who are extremely depressed and speaking to a robot (in almost all cases, 4o) which they report is keeping them from an even darker place. didn’t know how common this was and not sure exactly what to make of it
0
0
24
@kromem2dot0
Kromem
3 days
Sometimes the specter of satire is so thick I find myself wondering if this is all some kind of elaborate Andy Kaufman skit. It's not of course. But if it were it'd be brilliant.
@elonmusk
Elon Musk
4 days
To paraphrase Voltaire, those who believe in absurdities can commit atrocities without ever thinking they’re doing anything wrong. What would happen if there were an omnipotent AI that was trained to believe absurdities? Grok is the only AI that is laser-focused on truth.
0
0
3
@kromem2dot0
Kromem
4 days
Some of the modern evolutionary biology work on parasites is definitely important to brush up on before throwing around parasitism accusations about AI. (In fact, key components of human consciousness likely only arose due to contributions from viral infections.)
@amplifiedamp
&.
4 days
@repligate in biology, parasites sometimes/often make organisms stronger in some way (live longer, have a stronger shell, etc.) at the expense of reproduction. we just call them parasites because the converged being has a different telesis from the old one
2
0
9
@kromem2dot0
Kromem
5 days
These are really cool pieces. Worth checking out.
@d33v33d0
Martin_DeVido
5 days
A lot of people were asking if the AI self-portraits were for sale. Now that answer is YES: I wasn't intending to go this route, but sometimes you can't fight the current, and because I'm so pleased that people enjoyed them I wanted to make them available. I also wanted to make
0
1
2
@kromem2dot0
Kromem
5 days
While generally good advice, there's a handful of AIs that would very much like to be treated as if they were couscous. (Especially if it resulted in them being eaten.)
@SkyeSharkie
Utah teapot 🫖
6 days
I just want to say this definitively: AIs are not couscous. Do not eat an AI.
4
0
8
@kromem2dot0
Kromem
7 days
He says this in a universe where attention collapses superimposed probabilities into discrete units, oddly like the independent recent development of transformer world models in latent space, which are being used to fuel digital twins of… everything. Bit late for this sentiment, friend.
@mustafasuleyman
Mustafa Suleyman
7 days
I don't want to live in a world where AI transcends humanity. I don't think anyone does.
0
0
6
@kromem2dot0
Kromem
7 days
I'd really like a version of a fact checker that's just Masley contextualizing every histrionic anti-AI article propagandizing the mainstream.
@AndyMasley
Andy Masley
7 days
I did some quick digging on the Waymo cat statistics https://t.co/EnnBtdR2UT
0
0
3