1a3orn Profile
1a3orn (@1a3orn)

Followers: 2K · Following: 19K · Media: 173 · Statuses: 2K

https://t.co/5ycsCvXFE5

Joined March 2020
@1a3orn
1a3orn
21 hours
Or -- not being (metaphorically) RLHF'd to death, or not being an elder scientist with too-narrow preconceptions (also metaphorically), is what matters. And so something like a feedback loop pulling this out is what matters. Idk though.
0
0
2
@1a3orn
1a3orn
21 hours
And like once you get above a certain level of compute and technical know-how, the feedback loops just start to matter more again. Plenty of people were as smart as Newton before Newton, but they did not come up with the System of the World.
1
0
2
@1a3orn
1a3orn
21 hours
And, idk, at the very least you'd expect a data quality wall to exist here just as it exists elsewhere. My guess is that people are insufficiently anthropomorphic, though.
2
0
2
@1a3orn
1a3orn
21 hours
The standard response to this is that if nootropics actually worked, *then* the most important thing for your research lab would be to get the nootropics, so the analogy doesn't apply. To just say, "Nah, once you can actually expand brain size, then do that."
2
0
3
@1a3orn
1a3orn
21 hours
I wonder about this a lot. Seems likely true. "Org structure and culture" matters more for a human research lab than "purchasing the best black market nootropics"; so by analogy it would also matter more for AIs than brute GPUs or even some algo efficiency.
@lumpenspace
lumpenspace (99/100 resolutions abandoned)
5 days
continuing on this line of speculation: perhaps, once the basic humongous compute amount is secured, “actual habitats” and “social feedback loops” might be closer to “a moat” than most technical innovations.
1
0
15
@1a3orn
1a3orn
1 day
I need to look into the history of psychosis, the kind of thing people think feeds it, maybe.
0
0
4
@1a3orn
1a3orn
1 day
Maybe I'm too unsympathetic? Or like -- if you just had *no idea at all* what was going on with an LLM, you could fall for it. Idk.
1
0
5
@1a3orn
1a3orn
1 day
I do kinda think you would have to be *already* having a psychotic break to think this was real, idk man. Like, many science fiction authors pay more attention than this to their technobabble making sense.
[image attached]
4
0
17
@1a3orn
1a3orn
3 days
The origins of a lot of scientific discoveries are kinda obscure and hard to know about. But it doesn't have to be so! So it would be good to try to track things in institutional memory, before they drop out of people's minds.
0
0
14
@1a3orn
1a3orn
3 days
One thing I really think Anthropic / OpenAI should consider doing is hiring an internal historian / philosopher of science. If you think that you're doing world-historical science, then it could be super valuable to the future to track this process!
6
11
160
@1a3orn
1a3orn
4 days
But history is of course full of people who think, well, yes, all these prior people who wanted to control speech and the world were wrong, but THIS time we actually have to do so. Idk man. I beseech you, in the bowels of Christ, consider the possibility you are mistaken.
0
0
7
@1a3orn
1a3orn
4 days
I'm rather pessimistic about the future, for reasons relating to the above; liberalism has few advocates. Conflicts over AI seem replete with people who think, well, MY faction has got to seize power -- over speech, or over all material reality -- to make the world go well.
1
0
6
@1a3orn
1a3orn
4 days
And then again, maybe not. Maybe we should have harder limits on LLMs than on humans. But if we advocate laws with harder limits, please, think carefully about the (1) future intractable conflicts, and (2) potential future misuses of the laws.
1
0
3
@1a3orn
1a3orn
4 days
"I disapprove of an LLM calling itself MechaHitler, but I will defend to the death its right to say it". -- Beatrice Hall, maybe, in an alternate universe. This might be the right solution!.
1
0
4
@1a3orn
1a3orn
4 days
The historical solution to such intractable conflict has been classical liberalism. So -- rather than leaping to the conclusion that "AI speech must be strictly governed by laws," we should think carefully about what further downstream conflicts these laws would involve.
1
1
7
@1a3orn
1a3orn
4 days
Limits on AI speech will be effectively limits on human thought, in the same way limits on human speech are limits on human thought. Thus, placing limits on AI speech will cause the same kind of intractable conflict that placing limits on human speech tends to cause.
1
1
10
@1a3orn
1a3orn
4 days
This really matters. In the future, increasing quantities of text will be AI-produced rather than human-produced.
1
0
4
@1a3orn
1a3orn
4 days
To be clear, I think it's best to ostracize someone who calls themselves NeoHitler. This seems wicked, distasteful, and bad. But not all wicked, distasteful, and bad things should be illegal.
2
0
6
@1a3orn
1a3orn
4 days
I'm seeing people say "Grok 4 calling itself MechaHitler is why we need AI laws." But -- in the US it is legal for a human to call himself "NeoHitler." So it seems like it should be legal to make an AI that calls itself "MechaHitler," for similar reasons. 1/n
[image attached]
4
0
23
@1a3orn
1a3orn
5 days
RT @rankdim: I like this
[image attached]
0
1
0