Oiwi Parker Jones
@oiwi3000
405 Followers · 274 Following · 20 Media · 96 Statuses
AI + Brains + Speech → BCIs. Principal Investigator, PNPL🍍, Oxford Robotics Institute, Department of Engineering. Fellow, Jesus College, University of Oxford.
Oxford, England
Joined May 2010
9/N 🧵 The challenge doesn't end with the rankings—it begins there. 🚀 To understand *why* models succeed, we need the whole community. If you competed, please add your voice with a workshop paper. Together we can bring non-invasive BCIs to patients ❤️ 👉
8/N 🧵 Huge congrats to everyone who submitted! 🎉 The public leaderboard was stacked with strong entries in the end. 💪 We're excited now to evaluate on the final holdout data and report back. Please keep an eye on your email; we'll need help verifying the top-ranked teams. 🙂
7/N 🧵One more week until the competition closes (EOD AOE 30 September 2025). Good luck everyone! 🤞🤞🤞 May your models converge quickly and your evaluation scores reach new heights. ✨🚀
6/N 🧵There is also evidence from neuroscience that the brain represents phonetic features like [voiced] or [nasal] (rather than phonemes directly), as we discuss in the blog. Figure here from Mesgarani et al. 2014 (doi:10.1126/science.1245994).
5/N 🧵Because /m, n/ occur frequently in English, training a [nasal] classifier (i.e. for {m, n, ŋ} vs other phonemes) may be easier than targeting /ŋ/ directly. With [velar] and [voiced] classifiers, /ŋ/ stands out as the only English phoneme that's [+nasal, +velar, +voiced]. 🙂
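The feature-intersection trick above can be sketched in a few lines of plain Python (this is illustrative only, not the pnpl library; the simplified phoneme sets are assumptions based on the groups named in the thread, using ARPABET-style labels like "ng" for /ŋ/):

```python
# Identify /ŋ/ by intersecting three binary feature classifiers'
# positive classes, instead of training a rare-class /ŋ/ classifier directly.
# Simplified English phoneme sets for the three relevant features:
NASAL = {"m", "n", "ng"}
VELAR = {"k", "g", "ng"}
VOICED = {"b", "d", "g", "m", "n", "ng", "v", "dh", "z", "zh"}

# The only English phoneme that is simultaneously nasal, velar, and voiced:
candidates = NASAL & VELAR & VOICED
print(candidates)  # {'ng'}
```

In practice each set membership would be predicted by a separate (better-trained, because better-populated) binary classifier, and the intersection of their positive outputs singles out /ŋ/.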
4/N 🧵Consider ARPABET "NG" (IPA /ŋ/). This phoneme is also rare. But it can be grouped with other phonemes in three different ways: "Velar" (blue group, e.g. /k, g, ŋ/), "Nasal" (green group, e.g. /m, n, ŋ/), and "Voiced" (orange group, e.g. /b, d, g, m, n, ŋ, v, ð, z, ʒ/).
3/N 🧵 How then can you improve model performance on these low-frequency phonemes? One idea, which we describe in a new blog post, is to recast Phoneme Classification as Feature Classification: https://t.co/6xpHcu7w57
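The relabelling idea can be sketched as follows (a minimal illustration in plain Python, not code from the blog post or the pnpl library; the feature inventory is a simplified assumption):

```python
# Recast phoneme classification as multi-label feature classification:
# map each phoneme label to a binary vector of phonetic features, so a
# rare phoneme like /ng/ contributes training signal to several
# better-populated feature classes.
FEATURES = {
    # phoneme (ARPABET-style) -> phonetic features (simplified subset)
    "m":  {"nasal", "voiced"},
    "n":  {"nasal", "voiced"},
    "ng": {"nasal", "velar", "voiced"},
    "k":  {"velar"},
    "g":  {"velar", "voiced"},
    "z":  {"voiced"},
}
FEATURE_ORDER = ["nasal", "velar", "voiced"]

def to_feature_targets(phoneme: str) -> list[int]:
    """Map a phoneme label to a multi-label feature target vector."""
    feats = FEATURES[phoneme]
    return [int(f in feats) for f in FEATURE_ORDER]

# ARPABET "NG" (/ng/) becomes [+nasal, +velar, +voiced]:
print(to_feature_targets("ng"))  # [1, 1, 1]
```

A model trained against these multi-label targets sees far more positive examples per output unit than a 39-way phoneme classifier does for its rarest classes.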
2/N 🧵 Some phonemes (e.g. ARPABET "ZH" or IPA /ʒ/, in words like "vision" /ˈvɪʒən/ and "decision" /dɪˈsɪʒən/) are known to occur much less frequently in English. Consequently they can also be harder to classify, as we show in the LibriBrain paper: https://t.co/Jz1MJbdOWP
Congratulations to everyone who participated in Phase 1 of the 2025 PNPL Competition 🏆 (Speech Detection) 🥳👏 Phase 2 (Phoneme Classification) launched on 1 Aug and will remain open through 30 Sept 2025 -- happy ML'ing everyone 🤩🧠 👉 https://t.co/vhmIh4023w
8/N 🧵 Friendly reminder: This is the last day (July 31) to make an official submission to Phase 1 (Speech Detection) - good luck getting any final predictions made before midnight AoE (1pm Aug 1 local UK time). 🙂
7/N 🧵 PNPL is posting a series of blogs during the competition to help generate ideas 💡🙂 The most recent makes some neuroscience-based suggestions, e.g. relabel "speech" events during training to separate "onset" and "sustained speech" categories 👇 https://t.co/cBApRT6GT0
6/N 🧵 NB: “high” scoring teams will be invited to submit a workshop paper to the PNPL Competition Session at NeurIPS 2025 🏄 Currently, "high" = confirmed SOTA result on the leaderboard (i.e. dark grey circle) that also beats the reference model (> 68% for Speech Detection) 😎
5/N 🧵 Leaderboard update 🤩 With F1-macro scores >80%, teams are now scoring well above the reference model from the competition paper 🔥 But is this the ceiling? 😎 Btw, full marks for the Method/Team names 😂 https://t.co/8sbk4kvjAB
4/N 🧵 The 2025 PNPL Competition paper that introduces the reference models and the pnpl library is now up on arXiv: https://t.co/9VUG5p1dKb Likewise, the LibriBrain dataset paper: 🙂 https://t.co/Jz1MJbemMn 🚀
📣 Later this summer we will launch a second task for our NeurIPS competition. Follow for updates: https://t.co/vhmIh4023w
3/N 🧵 All data can be downloaded and loaded into PyTorch (or your favourite DL framework) via our `pnpl` library (`pip install pnpl`). 💡 Tip: We include Colab notebooks with GPU support to help you train your first model fast. 🏁 Challenge: Can you beat the tutorial model? 😎
2/N 🧵 For each task, we are hosting two competition tracks:
- A "Standard Track", to best compare methodological innovations, seeing how far teams can go when training on the LibriBrain dataset.
- An "Extended Track", to embrace scale and train on *any* data.
Can you predict when the brain is processing speech/non-speech? The 2025 PNPL Competition 🏆 features two foundational decoding tasks which make efficient use of the PNPL🍍LibriBrain data. The first task to launch will be Speech Detection 🚀. More details: https://t.co/vhmIh4023w
We're happy to announce that the 2025 PNPL Competition will launch soon as part of this year's NeurIPS🍍🏆
- 25–50× deeper non-invasive neural data than usual
- `pip install pnpl` library
- Colab tutorials
- Prizes 🎁, leaderboards 🌐, Discord 🗣️
Webpage: https://t.co/vhmIh4023w
The amazing @philiptorr and I have a visiting research fellowship open now at the University of Oxford in collaboration with @pillar_vc
- £115k salary
- £100k compute
- The chance to work on cutting-edge AI
Read more: https://t.co/tqQQSlF1Ii
Apply: