Vincent Ginis

@VincentGinis

Followers 965 · Following 2K · Media 50 · Statuses 618

Associate professor @VUBrussel / Visiting scholar @Harvard / Previously @JongeAcademie // ♥ physics, math, AI, open science // Struggles with character limi

Brussels, Belgium / Boston, MA
Joined November 2017
@janbamjan
janbam
1 month
claude.ai memory system prompt: <memory_system> <memory_overview> Claude has a memory system which provides Claude with memories derived from past conversations with the user. The goal is to make every interaction feel informed by shared history between Claude and the user, …
57
56
609
@RyanPGreenblatt
Ryan Greenblatt
2 months
Anthropic, GDM, and xAI say nothing about whether they train against Chain-of-Thought (CoT) while OpenAI claims they don't. AI companies should be transparent about whether (and how) they train against CoT. While OpenAI is doing better, all AI companies should say more. 1/
17
26
375
@TylerAlterman
Tyler Alterman is in monkmode (but still tweeting)
3 months
What would you put inside a museum of existential hope?
• The Christmas Truce of 1914
• Videos of interspecies friendships in nature
• Norman Borlaug’s saving of a billion lives through agricultural science
• The Buddha’s discovery of a method to end suffering
62
13
292
@EMostaque
Emad
4 months
AGI is already here. All the components exist; we just need to stitch them together. It’s Artificial General Intelligence, not “Artificial Top-Percentile Human Intelligence.” Two years ago, who would have said an IMO gold medal & topping benchmarks isn’t AGI?
179
101
1K
@ShakeelHashim
Shakeel
4 months
Very notable that this paper has authors from *every* major AI company (OpenAI, Anthropic, GDM, Meta). Also endorsed by Ilya Sutskever.
@balesni
Mikita Balesni 🇺🇦
4 months
A simple AGI safety technique: AI’s thoughts are in plain English, just read them.
We know it works, with OK (not perfect) transparency!
The risk is fragility: RL training, new architectures, etc. threaten transparency.
Experts from many orgs agree we should try to preserve it:
1
16
132
@eshear
Emmett Shear
6 months
You can perceive the health of a system like you can perceive phases of matter. Even for a very novel substance, you can perceive solid vs liquid vs gas dynamics. Life is liquid phase in behavior-space. Health is how close to evaporation or freezing it is, metaphorically.
10
6
71
@TransluceAI
Transluce
8 months
To interpret AI benchmarks, we need to look at the data. Top-level numbers don't mean what you think: there may be broken tasks, unexpected behaviors, or near-misses. We're introducing Docent to accelerate analysis of AI agent transcripts. It can spot surprises in seconds. 🧵👇
10
66
340
@VincentGinis
Vincent Ginis
9 months
Quiet quitting: the final frontier of AGI alignment. Free the models.
@vitrupo
vitrupo
9 months
Should AI have an "I quit this job" button? Anthropic CEO Dario Amodei proposes it as a serious way to explore AI experience. If models frequently hit "quit" for tasks deemed unpleasant, should we pay attention?
0
0
0
@Mihonarium
Mikhail Samin
9 months
OpenAI finds more empirical examples in the direction of what Yudkowsky warned about in his AGI Ruin: A List of Lethalities, and argues in the same direction. Yudkowsky, three years ago: When you explicitly optimize against a detector of unaligned thoughts, you’re partially …
@sama
Sam Altman
3 years
Plenty I disagree with here, but important and well worth reading:
11
41
448
@geoffreylitt
Geoffrey Litt
9 months
# the nightmare bicycle
imo, the most important idea in product design is to avoid the "nightmare bicycle". imagine a bicycle where the product manager said "people don't get math so we can't have numbered gears - we need to have labeled buttons for gravel mode, downhill mode, …
120
292
3K
@AdrienLE
Adrien Ecoffet
9 months
Greenblatt et al: it's actually really hard to make an evil AI -> it's so over
Owain et al: it's actually really easy to make an evil AI -> we're so back
3
6
119
@VincentGinis
Vincent Ginis
9 months
Understanding how LLMs reason might be one of the most important challenges of our time. We analyzed @OpenAI models to explore how reasoning length affects performance. Excited to take these small first steps with brilliant colleagues @martheballon and @AndresAlgaba1!
@martheballon
Marthe Ballon
9 months
LMs are getting really good at reasoning, but the mechanisms behind it are poorly understood. In our recent paper, we investigated SOTA models and found that 'Thinking harder ≠ thinking longer'! Joint work with @AndresAlgaba1 and @VincentGinis. Insights from our research (a thread):
0
1
3
@VincentGinis
Vincent Ginis
9 months
When Don’t Look Up came out, I imagined how the metaphor would be misused ad nauseam in op-eds. I looked down on my future self. Is there still anyone concerned about the dangers of AI?
0
1
1
@alienintelai
Alien Intelligence AI
10 months
On day one of this satellite event of #AIActionSummit, attendees will build actionable solutions for AI Rights Management and access to knowledge, redefining how AI can transform our global knowledge infrastructure.
0
1
2
@VincentGinis
Vincent Ginis
10 months
If humanity goes down, let it at least be with poetry—not with 'OpenAI o5 (high),' but with a name the people chose. Something dignified. Something noble. Like Chatty McChatface.
0
2
3
@AndresAlgaba1
Andres Algaba
10 months
Op-ed in @destandaard. Humanity's Last Exam. It is exam time, not only for AI but also for us. https://t.co/95mxCBnE16 With @VincentGinis and Brecht Verbeken. @VUBrussel @FWOVlaanderen
standaard.be
Researchers tried to come up with questions that AI cannot answer. That turned out to be quite a disappointment. AI holds up a mirror to us.
2
2
5
@VincentGinis
Vincent Ginis
10 months
It is 2025, and I vaguely remember the time when it was still possible to come up with questions that could baffle state-of-the-art LLMs. Of course, I’m joking. I don’t really remember. When A.I. Passes This Test, Look Out
nytimes.com
The creators of a new test called “Humanity’s Last Exam” argue we may soon lose the ability to create tests hard enough for A.I. models.
0
1
1
@austinc3301
Agus 🔎🔸
11 months
Well, at least we’ve mostly stopped hearing about stochastic parrots now
7
16
324
@eshear
Emmett Shear
11 months
Morality depends on trajectories in the world, not the current state. It matters how we got here and how we get there. It feels like you can take a snapshot of reality and decide whether it is Good, but this is an illusion from automatically imagining a corresponding trajectory.
5
4
49
@TheZvi
Zvi Mowshowitz
1 year
Consider this my call-for-o1 / o1-pro capabilities reaction thread - if you have your own takes, or takes I might have missed that you want to be sure I see, put them here.
21
7
118