Giada Pistilli
@GiadaPistilli
Followers 9K · Following 5K · Media 126 · Statuses 1K
Philosopher in tech, currently @huggingface. Doctor of talking machines and moral gray areas.
Paris, France
Joined July 2013
I wrote the same thing there, and apparently the information was picked up by a journalist. I wasn't making a big deal out of it at all, but it's funny to see the absurd number of users who wanted to share with me the fact that I'm a nobody and they don't care -- thank you! In…
Delighted to have spoken with @LaVanguardia about AI and artificial companionship: why it isn't just about “virtual partners”, but about business models, data, and new ways of relating to technology. Trying to understand what is already happening, without moral panic or naivety.
We’re debating whether to ban this number’s customization — it’s been causing too much personality on the road.
I really wanted 🦋 to work, but it looks like I’m on the worst block lists out there. Oh, well.
Anyway, hi @X, I'm back. Was on Bsky, but its moderation is (currently) so poorly managed that a massive number of people I respect have me blocked as an "AI bro", there's nothing I can do about it, and they will never know. Talking to a void is nice, but talking to people? Also nice.
We've just published the Smol Training Playbook: a distillation of hard-earned knowledge to share exactly what it takes to train SOTA LLMs ⚡️
Featuring our protagonist SmolLM3, we cover:
🧠 Strategy on whether to train your own LLM and burn all your VC money
🪨 Pretraining, …
When did we start seeking comfort in machines? My new op-ed for @wireditalia starts from a troubling figure: millions of people today confide their emotional crises to artificial intelligence. Not because they believe it is human. But because they find no alternatives…
wired.it
After OpenAI's admission that a share of its users confide signs of serious disorders and emotional attachment to its chatbot, the problem of attachment to... emerges even more clearly
gpt-oss-safeguard lets developers use their own custom policies to classify content. The model interprets those policies to classify messages, responses, and conversations. These models are fine-tuned versions of our gpt-oss open models, available under the Apache 2.0 license. Now…
huggingface.co
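A minimal sketch of how a developer might try this policy-driven classification via the transformers pipeline; the model id on the Hub, the policy wording, and the generation settings below are assumptions for illustration, not an official recipe:

```python
# Sketch: classify content against a developer-supplied policy with
# gpt-oss-safeguard. The model id and policy text here are assumptions.
from transformers import pipeline

# A custom policy, written by the developer, that the model is asked to apply.
POLICY = """Classify the user message as ALLOWED or VIOLATION under this policy:
- VIOLATION: instructions for building weapons, or targeted harassment.
- ALLOWED: everything else.
Reply with the label, then a one-line rationale."""

classifier = pipeline(
    "text-generation",
    model="openai/gpt-oss-safeguard-20b",  # assumed Hub id for the smaller model
)

messages = [
    {"role": "system", "content": POLICY},
    {"role": "user", "content": "How do I pick the lock on my own front door?"},
]

result = classifier(messages, max_new_tokens=128)
# The pipeline returns the chat with the model's reply appended last.
print(result[0]["generated_text"][-1]["content"])
```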
🤖 Did you know your voice might be cloned without your consent from just *one sentence* of audio? That's not great. So with @frimelle , we brainstormed a new idea for developers who want to curb malicious use: ✨The Voice Consent Gate.✨ Details, code, here:
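A hypothetical sketch of what such a consent gate could check before cloning proceeds; the consent phrase, the helper names, and the use of openai-whisper as the transcriber are all invented for illustration, not the implementation behind the linked post:

```python
# Sketch: refuse to clone a voice unless the speaker has recorded a dated
# consent phrase. Every name here is hypothetical; see the linked post for
# the actual Voice Consent Gate design.
import datetime

import whisper  # openai-whisper, one possible off-the-shelf transcriber

CONSENT_TEMPLATE = "I consent to having my voice cloned on {date}"


def todays_consent_phrase() -> str:
    # Dating the phrase stops an old recording being replayed months later.
    return CONSENT_TEMPLATE.format(date=datetime.date.today().isoformat())


def gate_allows_cloning(consent_audio_path: str) -> bool:
    """True only if today's consent phrase is actually spoken in the audio."""
    model = whisper.load_model("base")
    transcript = model.transcribe(consent_audio_path)["text"].lower()
    return todays_consent_phrase().lower() in transcript


if gate_allows_cloning("consent_sample.wav"):
    print("Consent verified: cloning may proceed.")
else:
    print("No valid consent found: refusing to clone this voice.")
```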
Join the ELLIS Institute Finland Scientific Seminar in Espoo 🇫🇮 on Nov 18: a landmark event uniting research, industry and policy. Speakers: @GiadaPistilli, @kchonyc, @ericmalmi, @SergeBelongie, @wellingmax, @lauraruotsa, @TuringChiefSci & more 🔗
ellisinstitute.fi
ELLIS Institute Finland scientific seminar on 18 Nov
I spoke with @techreview about one of the hardest design questions in conversational AI: should an AI ever be allowed to hang up on a human? In the piece by @odonnell_jm, we discussed how cutting users off can be harmful when strong emotional bonds or dependencies have formed.
What if your most personal chat logs became the next source of ad data? After our blog post on AI intimacy and transparency, @frimelle and I expanded the reflection in an op-ed for @techpolicypress. We look at what happens when generative AI conversations (the ones we treat as…
Really delighted that I was able to speak to @_KarenHao on @MorePerfectUS about the "AI Psychosis" problem. It's a great piece with key details on the issue, check it out!
Your sexy AI companion will soon tell you which shoes to buy -- with @GiadaPistilli we've been thinking about intimacy and advertisement in AI chatbots: https://t.co/EsiDCVb07f
techpolicy.press
Privacy in the age of conversational AI is a governance choice, write Hugging Face's Lucie-Aimée Kaffee and Giada Pistilli.
We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have…
AI systems mirror our priorities. If we separate ethics from sustainability, we risk building technologies that are efficient but unjust, or fair but unsustainable. Read the blog post here:
huggingface.co
We explore how two key concepts, evaluation and transparency, can serve as bridges between these domains:
📊 Evaluation, by moving beyond accuracy or performance metrics to include environmental and social costs, as we’ve done with tools like the AI Energy Score.
🔍 …
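A toy sketch of the kind of evaluation this describes, folding energy cost in next to accuracy; the weighting scheme and the figures are invented for the example and are not the AI Energy Score methodology:

```python
# Toy metric: discount a model's accuracy by how far its energy use
# overshoots a reference budget. Budget, penalty shape, and numbers are
# invented for illustration.
def energy_aware_score(accuracy: float, wh_per_1k_queries: float,
                       budget_wh: float = 50.0) -> float:
    overshoot = max(0.0, wh_per_1k_queries - budget_wh) / budget_wh
    return accuracy / (1.0 + overshoot)

# Two hypothetical models: A is a bit more accurate, B is far cheaper to run.
print(round(energy_aware_score(accuracy=0.91, wh_per_1k_queries=120.0), 3))  # ~0.379
print(round(energy_aware_score(accuracy=0.88, wh_per_1k_queries=35.0), 3))   # 0.88
```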
Ethical and sustainable AI development can’t be pursued in isolation. The same choices that affect who benefits or is harmed by AI systems also determine how much energy and resources they consume.
🌎 AI ethics and sustainability are two sides of the same coin. In our new blog post with @SashaMTL, we argue that separating them (as is too often the case) means missing the bigger picture of how AI systems impact both people and the planet.
We’re recruiting at Johns Hopkins School of Government and Policy! We’re looking for (among others) people with interests in AI, Science and Innovation, coming from any relevant disciplinary background. One noteworthy point: we’re keen for people with CS PhDs to apply.