Greg Serapio-García
@gregserapio
245 Followers · 3K Following · 2 Media · 131 Statuses
computational social psych + responsible AI + LLM research @Gates_Cambridge @Cambridge_Uni | prev. @GoogleDeepMind @GoogleAI | he/him/they 🏳️🌈
Cambridge, England
Joined October 2016
Proud to see @GoogleAI’s latest blog post feature responsible AI + rater diversity research I’ve worked on w/ @laroyo, @AliciaVParrish, @blahtino, @justaturbo, @vinodkpg, @alxndrt, and others! #GoogleResearch
The Building Responsible AI Data & Solutions (BRAIDS) team within #GoogleResearch aims to simplify adoption of RAI practices through scalable tools, high-quality data, streamlined processes, and novel research. Learn about their work ↓
0 replies · 1 retweet · 9 likes
My students and I are running a creative writing competition with a twist. You'll write three short pieces, but sometimes you'll work by yourself and sometimes with an AI. Winner gets £500 and 5 runners up get £100 each. Takes 40 mins, have a go & pls RT! https://t.co/6zDvm9lCJf
2 replies · 6 retweets · 14 likes
How do we probe for particular moral stances mimicked by LLMs? What are the potential risks and downstream consequences of prompting models with moral stances? Check out this cool project I worked on at @GoogleAI with @marwaabdulhai + @natashajaques!
What moral stance do large language models inherit from their training data? We use Moral Foundation Theory to analyze LLMs, showing they exhibit moral foundations most similar to politically conservative humans. https://t.co/wxQlmwOfAm
0 replies · 0 retweets · 5 likes
A new APA report outlines the many obstacles faced by psychology faculty of color when pursuing tenure or promotion—including financial constraints, workplace complexities, and "inequity taxes" related to their engagement in social justice issues.
apa.org
College and university faculty of color in psychology face myriad obstacles in their pursuit of promotion and tenure.
1 reply · 42 retweets · 57 likes
So excited to have this paper at the @NeurIPSConf D&B track. This is the flagship paper (a few more to follow) from a multi-year cross-org collaboration led by the amazing @laroyo. It adds to the recent line of papers interrogating disagreement in data annotations.
Very excited that our DICES - #Diversity in #ConversationalAI #Safety #Evaluation paper https://t.co/KInGCcVkPq got accepted at the @NeurIPSConf Datasets and Benchmarks Track with @vinodkpg @AliciaVParrish @alxndrt Mark Diaz, Christopher Homan, Greg Serapio-Garcia, Ding Wang
2 replies · 4 retweets · 37 likes
Very excited that our DICES - #Diversity in #ConversationalAI #Safety #Evaluation paper https://t.co/KInGCcVkPq got accepted at the @NeurIPSConf Datasets and Benchmarks Track with @vinodkpg @AliciaVParrish @alxndrt Mark Diaz, Christopher Homan, Greg Serapio-Garcia, Ding Wang
arxiv.org
Machine learning approaches often require training and evaluation datasets with a clear separation between positive and negative examples. This risks simplifying and even obscuring the inherent...
2 replies · 13 retweets · 54 likes
I’m excited to share this brilliant documentary from the @bbcworldservice, where I was interviewed by @ellielhouse on AI recommendation algorithms and their impact on LGBTQ people :) Out today!
bbc.co.uk
What do big tech recommendations mean for LGBTQ people around the world?
A sneak preview of my @bbcworldservice documentary, investigating how recommendation algorithms work, and what that means for LGBTQ people around the world 🏳️🌈 https://t.co/HALEEM1T4N
0 replies · 1 retweet · 7 likes
Google is hosting the first "Machine Unlearning" challenge. Yes you heard it right - it's the art of forgetting, an emergent research field. GPT-4 lobotomy is a type of machine unlearning. OpenAI tried for months to remove abilities it deems unethical or harmful, sometimes
95 replies · 527 retweets · 2K likes
There are so many issues with anti-Asian discrimination and monolithic stereotyping. But using this as a bludgeon to strike down education access and opportunity equity for other Black and Brown students is insidious. And yet it has been a tactic used in America for decades.
0 replies · 1 retweet · 3 likes
One of the (many) troubling things about the SCOTUS decision is that when Blum et al tried this exact same thing with Abigail Fisher it failed. But pit the model minority myth against realities of education inequity and race disparity and they get further. Divide us and conquer.
1 reply · 1 retweet · 8 likes
So proud of my wonderful friend and the most brilliant person I know, @BailWeatherbee, for winning the Bill Gates Sr. Award!!
Big congratulations to Rumbidzai Dube & @BailWeatherbee who have won this year's Bill Gates Sr Award in recognition of their academic excellence and social leadership - https://t.co/N8yy6KeOkL
@LucyCavColl @Dept_of_POLIS @PDN_Cambridge @JesusCollegeCam @GatesAlumni
0 replies · 1 retweet · 5 likes
Are Emergent Abilities of Large Language Models a Mirage? Presents an alternative explanation for emergent abilities: one can choose a metric which leads to the inference of an emergent ability or another metric which does not. https://t.co/CZzh2th2xo
25 replies · 178 retweets · 929 likes
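The metric-choice argument in the paper above can be illustrated with a toy sketch (the numbers and the accuracy curve here are invented for illustration, not taken from the paper): if per-token accuracy improves smoothly with scale, a nonlinear metric like exact match over a long sequence still produces an apparently sudden jump.

```python
import math

def per_token_accuracy(log_params: float) -> float:
    # Hypothetical smooth improvement with scale (not fitted to any real model).
    return 1 - 0.9 * math.exp(-0.5 * log_params)

SEQ_LEN = 20  # exact match requires all 20 tokens to be correct

for log_n in [1, 2, 3, 4, 5, 6]:
    p = per_token_accuracy(log_n)
    exact = p ** SEQ_LEN  # nonlinear metric: near zero, then a sharp "emergence"
    print(f"scale=1e{log_n}: per-token={p:.3f}, exact-match={exact:.3f}")
```

Under the linear (per-token) metric the curve is gradual; under exact match the same underlying improvement looks like an emergent ability, which is the paper's core point.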
Our team at Google Brain in Paris is looking for a PhD student to work with us at the interface of differentiable programming and NLP. This is a 14-week position onsite. Contact me by email if interested!
2 replies · 23 retweets · 85 likes
"crowd of psychometricians celebrating the demise of the MBTI personality test"
0 replies · 3 retweets · 26 likes
Even twisting the narrative, you don't have a leg to stand on @DHSCgovuk. The inflated £30/hr figure you quote is for doctors with >10 yrs' experience, working excessive out-of-hours shifts as the most senior medic in the building at 3am. Let's compare to elsewhere 1/3
The BMA has demanded a 35% pay rise for junior doctors. @FullFact found average earnings for junior doctors are likely between £20 and £30 an hour. https://t.co/ZvlT6Rnbz9
1 reply · 1 retweet · 4 likes
Looks like a public waitlist for Google Bard launched today 🚨👀
1 reply · 0 retweets · 1 like
Apparently neither @OpenAI's new Chat API nor @AnthropicAI's API for Claude allows users to request the log probabilities assigned to each token 🫤 This means they can only be used to generate text, not to evaluate the probability of text under the model. (h/t @alexanderklew)
35 replies · 66 retweets · 559 likes
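The reason logprob access matters: scoring text under a model, as opposed to generating it, means summing per-token log probabilities. A minimal sketch of that computation, using made-up logprob values standing in for what a model API would return if it exposed them:

```python
import math

# Hypothetical per-token log probabilities for a three-token text,
# standing in for values an API would return if it exposed logprobs.
token_logprobs = {
    "the": -1.2,
    "cat": -3.5,
    "sat": -2.1,
}

# The log-probability of the whole sequence is the sum of per-token
# logprobs; perplexity is exp of the negative mean logprob.
seq_logprob = sum(token_logprobs.values())
perplexity = math.exp(-seq_logprob / len(token_logprobs))

print(f"sequence logprob: {seq_logprob:.1f}")  # -6.8
print(f"perplexity: {perplexity:.2f}")
```

Without per-token logprobs, none of this can be computed from an API, which rules out uses like perplexity evaluation, reranking candidate outputs, or calibrated multiple-choice scoring.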