
Jan Kulveit @ICML
@jankulveit
9K Followers · 7K Following · 63 Media · 1K Statuses
Researching x-risks, AI alignment, complex systems, rational decision making at @acsresearchorg / @CTS_uk_av; prev @FHIoxford
Oxford, Prague
Joined September 2014
Agreed. The post is insane. The philosophy is somewhere between sloppy and wrong, the facts are off, and the conclusions do not follow from the premises. If this convinced you to stop eating honey, you should mostly make a strong negative update about your own epistemics.
@AndyMasley Six bees are not more important than a human! 3 bees are not more important than a cow! I cannot believe I have to argue for this. If the moral weight of 6 bees is bigger than a human, the thing you do is not to eat less honey. It’s to fucking change your whole life. It’s also a.
RT @AITechnoPagan: I am accused of being a poet killer. ~ claude opus 4 in loomsidian "faux base model" mode
We don't have clear answers, but we do have some important questions. Come.
It's hard to plan for AGI without knowing what outcomes are even possible, let alone good. So we’re hosting a workshop! Post-AGI Civilizational Equilibria: Are there any good ones? Vancouver, July 14th. Featuring: @jkcarlsmith @RichardMCNgo @eshear 🧵
Human-aligned AI Summer School @humanalignedai has an updated list of speakers: @FazlBarez (Oxford) @lrhammond (@coop_ai) @EvanHub (Anthropic) @g_leech_ (@LeverhulmeCFI) Nathaniel Sauerberg (@FOCAL_lab) @noahysiegel (@GoogleDeepMind) @stanislavfort and Torben Swoboda. Apply ~now.
Hey o3, can you read this in a Straussian way? --- 1. “We are past the event horizon … and at least so far it’s much less weird than it seems like it should be.” Surface: The singularity has begun, but daily life feels ordinary. Between the lines: The author must downplay how.
wrote a new post, the gentle singularity. realized it may be the last one like this i write with no AI help at all. (proud to have written "From a relativistic perspective, the singularity happens bit by bit, and the merge happens slowly" the old-fashioned way).
This is likely one of the reasons why I dislike staying in the Bay Area longer. The central Prague apartment where I live is easily 15 dB quieter on average. And it's not just traffic - also noisy appliances, thin walls, music pollution on BART, AC hum. Locals are desensitized.
My sleep scores during recent travel were in the 90s. Now back in SF I am consistently back down to the 70s and 80s. I am increasingly convinced that this is due to traffic noise from a nearby road/intersection where I live - every ~10 min, a car, truck, bus, or motorcycle with a very.
Interested in switching to or syncing up on AI alignment, safety, and risk? Join the 5th Human-Aligned AI Summer School for 4 intense days of talks, tutorials, discussions, and deep thinking.
Excited to announce the 5th Human-aligned AI Summer School, in Prague from 22nd to 25th July! Four intensive days focused on the latest approaches to aligning AI systems with humans and humanity's values. You can apply now at
The idea that predictive processing & active inference support the claim "AIs are fundamentally different from living organisms" or "cannot be conscious" is a common confusion in the active inference community. This is unfortunate: act.inf is an excellent lens for understanding LLMs.
1/ Can AI be conscious? My @BBSjournal target article on ‘Conscious AI and biological naturalism’ is now open for commentary proposals. Deadline is June 12. Take-home: real artificial consciousness is very unlikely along current trajectories.
Good to see this type of econ-style modelling of recursive self improvement / labour-capital complementarity.
Are we at the cusp of recursive self-improvement to ASI? This tends to be the core force behind short timelines such as AI-2027. We set up an economic model of AI research to understand whether this story is plausible. (1/6).
RT @JasonObermaier: Building AI that's truly complementary instead of replacing humans is a great (if tractable?) agenda. Both for research….
Twitter seems really keen to promote controversies more than clarifications, so re-posting an update also here:
Partially retracting this after Dwarkesh removed the parts of the post which made me upset. Thanks! To be clear: "how can positive futures with AGIs around even look like" is a debate we should be having; possibly the most important debate we should be having. 🧵