Oskar van der Wal
@oskarvanderwal
Followers: 320 · Following: 1K · Media: 26 · Statuses: 123
Technology specialist at EU AI Office / AI Safety / Prev: @AmsterdamNLP @AiEleuther Thoughts & opinions are my own and do not necessarily represent my employer
Brussels, Belgium
Joined April 2022
I am happy to announce that our position paper "You Reap What You Sow: On the Challenges of Bias Evaluation Under Multi-Lingual Settings" has been accepted for presentation at the @BigscienceW #acl2022 ☘️ workshop! 🧵⬇️ https://t.co/3lWsNTuzVC
I’ll be on the job market in early 2026, looking for research scientist or academic roles in NLP/Speech. I’ll be at #ACL2025 & giving a tutorial on #interpretability at #Interspeech2025; I’d love to chat & connect if there are any opportunities!🤗 Website: https://t.co/L0BRGz21W0 🧵
Like Pythia, but 1234 isn't your favorite random seed? We retrained Pythia 9x using different random seeds to explore how stable analyses of learning dynamics are to randomness. Meet @pietro_lesci @blancheminerva Fri 1500-1730 Hall 3 + Hall 2B #259
✈️ Headed to @iclr_conf — whether you’ll be there in person or tuning in remotely, I’d love to connect! We’ll be presenting our paper on pre-training stability in language models and the PolyPythias 🧵 🔗 ArXiv: https://t.co/B8DBtDRj4Y 🤗 PolyPythias: https://t.co/jqVUFZJyZo
🚨 PhD position alert! 🚨 I'm hiring a fully funded PhD student to work on mechanistic interpretability at @UvA_Amsterdam. If you're interested in reverse engineering modern deep learning architectures, please apply:
💬Panel discussion with Sally Haslanger and Marjolein Lanzing: A philosophical perspective on algorithmic discrimination Is discrimination the right way to frame the issues of lang tech? Or should we answer deeper-rooted questions? And how does tech fit into systems of oppression?
We also presented our own work on testing the validity and reliability of LM bias measures: 📄Undesirable Biases in NLP: Addressing Challenges of Measurement https://t.co/KqYH9Sj0mc
🔑Keynote @ZeerakTalat: On the promise of equitable machine learning technologies Can we create equitable ML technologies? Can statistical models faithfully express human language? Or are tokenizers "tokenizing" people—creating a Frankenstein monster of lived experiences?
📄A Capabilities Approach to Studying Bias and Harm in Language Technologies @HellinaNigatu introduced us to the Capabilities Approach and how it can help us better understand the social impact of language technologies—with case studies of failing tech in the Majority World.
📄Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution @florplaza22 discussed the importance of studying emotional stereotypes in LLMs, and how collaborating with philosophers greatly benefits work on bias evaluation.
🔑Keynote by John Lalor: Should Fairness be a Metric or a Model? While fairness is often viewed as a metric, using integrated models instead can help with explaining upstream bias, predicting downstream fairness, and capturing intersectional bias. https://t.co/bFiZFPAOnv
📄A Decade of Gender Bias in Machine Translation @Evanmassenhove: how has research on gender bias in MT developed over the years? Important issues, like non-binary gender bias, now get more attention. Yet, fundamental problems (that initially seemed trivial) remain unsolved.
📄MBBQ: A Dataset for Cross-Lingual Comparison of Stereotypes in Generative LLMs @VeraNeplenbroek presented a multilingual extension of the BBQ bias benchmark to study bias across English, Dutch, Spanish, and Turkish. "Multilingual LLMs are not necessarily multicultural!"
🔑Keynote @dongng: When LLMs meet language variation: Taking stock and looking forward Non-standard language is often seen as noisy/incorrect data, but this ignores the reality of language. Variation should play a larger role in LLM developments and sociolinguistics can help!
Last week, we organized the workshop "New Perspectives on Bias and Discrimination in Language Technology" @UvA_Amsterdam @AmsterdamNLP. We're looking back at two inspiring days of talks, posters, and discussions—thanks to everyone who participated! https://t.co/RcX15Am9Xy
This is a friendly reminder that there are 7 days left for submitting your extended abstract to this workshop! (Since the workshop is non-archival, previously published work is welcome too. So consider submitting previous/future work to join the discussion in Amsterdam!)
Working on #bias & #discrimination in #NLP? Passionate about integrating insights from other disciplines? Want to discuss current limitations of #LLM bias mitigation? 👋Join the workshop New Perspectives on Bias and Discrimination in Language Technology; 4&5 Nov in #Amsterdam!
#CallforPapers for the workshop New Perspectives on Bias and Discrimination in Language Technology, which will discuss the state of the art on bias measurement and mitigation in language technology and explore new avenues of approach. Deadline: 15 Sept!
This workshop is organized by @AmsterdamNLP @UvA_Amsterdam researchers Katrin Schulz, Leendert van Maanen, @wzuidema, Dominik Bachmann, and myself. More information on the workshop can be found on the website, which will be updated regularly. https://t.co/RcX15Am9Xy
🌟The goal of this workshop is to bring together researchers from different fields to discuss the state of the art on bias measurement and mitigation in language technology and to explore new avenues of approach.
One of the central issues discussed in the context of the societal impact of language technology is that ML systems can contribute to discrimination. Despite efforts to address these issues, we are far from solving them.
We're super excited to host @dongng, John Lalor, @ZeerakTalat, and @az_jacobs as invited speakers at this workshop! Submit an extended abstract to join the discussions; either in a 20min talk or a poster session. 📝Deadline Call for Abstracts: 15 Sep, 2024 https://t.co/RcX15Am9Xy