Taha Binhuraib 🦉 Profile
Taha Binhuraib 🦉

@NeuroTaha

Followers: 506
Following: 3K
Media: 26
Statuses: 841

Language processing in Brains vs Machines PhD student @georgiatech

Atlanta
Joined October 2014
@NeuroTaha
Taha Binhuraib 🦉
3 years
Using LLMs to build an LLM.
0
0
8
@neuranna
Anna Ivanova
21 days
A fun collaborative project! We leverage TunedLens (~linear decoding of tokens) to explore how LLMs' internal representations change from layer to layer. 1/
@akshatgupta57
Akshat Gupta
21 days
🧠 New preprint: How Do LLMs Use Their Depth? We uncover a “Guess-then-Refine” mechanism across layers: early layers predict high-frequency tokens as guesses; later layers refine them as context builds. Paper: https://t.co/5PitHjmJJZ @neuranna @GopalaSpeech @berkeley_ai
1
6
29
@neuranna
Anna Ivanova
24 days
It's been more than a year, but the EWoK (Elements of World Knowledge) paper is finally out in TACL! tl;dr: language models learn basic social concepts way easier than physical and spatial concepts. https://t.co/NW78qjEx51
direct.mit.edu
Abstract. The ability to build and reason about models of the world is essential for situated language understanding. But evaluating world modeling capabilities in modern AI systems—especially those...
@neuranna
Anna Ivanova
1 year
💡New work! Do LLMs learn foundational concepts required to build world models? We address this question with 🌐🐨EWoK (Elements of World Knowledge)🐨🌐, a flexible cognition-inspired framework to test knowledge across physical and social domains https://t.co/F0WOt0uEMv 🧵👇
2
8
37
@EyasAyesh
Ziso
23 days
Startups I’m bullish on 🐂
Open Asteroid Impact ☄️
Center for the alignment of alignment centers ⚖️
Replacement AI 🤖
Living in the future is awesome 👊
0
1
1
@bkhmsi
Badr AlKhamissi
1 month
Excited to be part of this cool work led by @melikahnd_1! We show that by selectively targeting VLM units that mirror the brain’s visual word form area, models develop dyslexic-like reading impairments, while leaving other abilities intact!! 🧠🤖 Details in the 🧵👇
@melikahnd_1
Melika Honarmand
1 month
🦾🧠 New Preprint!! What happens if we induce dyslexia in vision–language models? By ablating VWFA-analogous units, we show that models reproduce selective reading impairments similar to human dyslexia. 📄
4
12
71
@neuranna
Anna Ivanova
1 month
As our lab started to build encoding 🧠 models, we were trying to figure out best practices in the field. So @NeuroTaha built a library to easily compare design choices & model features across datasets! We hope it will be useful to the community & plan to keep expanding it! 1/
@NeuroTaha
Taha Binhuraib 🦉
1 month
🚨 Paper alert: To appear in the DBM NeurIPS Workshop
LITcoder: A General-Purpose Library for Building and Comparing Encoding Models
📄 arxiv: https://t.co/jXoYcIkpsC
🔗 project: https://t.co/UHtzfGGriY
1
6
37
@NeuroTaha
Taha Binhuraib 🦉
1 month
Fun! 🎉 Don’t forget to try our interactive widget on the project website. Test some of the encoding models in the paper and visualize brain predictivity right in your browser 🤗🧠
0
0
4
@NeuroTaha
Taha Binhuraib 🦉
1 month
This project wouldn’t have happened without Ruimin Gao (@Ruimin_G) and Anya Ivanova (@neuranna). A special thank you to Anya, my advisor, mentor, and constant source of encouragement. Your support means the world to me, and I’m so grateful to be learning from you.
1
0
5
@NeuroTaha
Taha Binhuraib 🦉
1 month
✨ Takeaway: LITcoder lowers barriers to reproducible, comparable encoding models and provides infrastructure for methodological rigor.
1
0
4
@NeuroTaha
Taha Binhuraib 🦉
1 month
We also highlight pitfalls & controls:
🚩 Shuffled folds inflate scores due to autocorrelation
✅ Contiguous + trimmed folds give realistic benchmarks
⚠️ Head motion reliably reduces predictivity
1
0
4
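The "contiguous + trimmed" control above can be sketched as a simple index-based splitter. The function name, its arguments, and the trim value are hypothetical illustrations, not LITcoder's actual API.

```python
import numpy as np

def contiguous_folds(n_trs, n_folds=5, trim=5):
    """Split a time series of n_trs samples into contiguous CV folds.

    Unlike shuffled folds, contiguous splits keep temporally adjacent
    (autocorrelated) fMRI samples together; `trim` additionally drops
    samples at each train/test boundary to limit leakage.
    """
    edges = np.linspace(0, n_trs, n_folds + 1, dtype=int)
    folds = []
    for i in range(n_folds):
        test_idx = np.arange(edges[i], edges[i + 1])
        train_idx = np.concatenate([
            np.arange(0, max(edges[i] - trim, 0)),
            np.arange(min(edges[i + 1] + trim, n_trs), n_trs),
        ])
        folds.append((train_idx, test_idx))
    return folds

folds = contiguous_folds(100, n_folds=5, trim=5)
train_idx, test_idx = folds[0]
```

With shuffled folds, neighboring (highly correlated) TRs land in both train and test sets, which is what inflates scores; keeping folds contiguous and trimming the boundaries removes that shortcut.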
@NeuroTaha
Taha Binhuraib 🦉
1 month
📊 Replicating past results
1️⃣ Language models outperform baselines, embeddings, and speech models in predicting the language network
2️⃣ Larger models yield higher predictivity
3️⃣ Downsampling and FIR choices substantially shape results
1
0
4
@NeuroTaha
Taha Binhuraib 🦉
1 month
We showcase LITcoder on 3 story-listening fMRI datasets:
1️⃣ Narratives
2️⃣ Little Prince
3️⃣ LeBel
We compare features, regions, and temporal modeling strategies.
🛑 Currently, we support language stimuli, but the framework is extensible to other modalities (video coming soon!)
1
0
4
@NeuroTaha
Taha Binhuraib 🦉
1 month
The library is composed of four main modules:
1️⃣ AssemblyGenerator
2️⃣ FeatureExtractor
3️⃣ Downsampler
4️⃣ Mapping
1
0
4
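As a rough sketch of how four such modules could fit together — the class names follow the list above, but every signature and default here is an invented placeholder, not the real LITcoder API:

```python
import numpy as np

class AssemblyGenerator:
    """Bundle a stimulus transcript with (here: synthetic) brain data."""
    def __init__(self, words, n_voxels=3, words_per_tr=2, seed=1):
        self.words = words
        n_trs = len(words) // words_per_tr
        self.brain = np.random.default_rng(seed).standard_normal((n_trs, n_voxels))

class FeatureExtractor:
    """Turn stimulus words into feature vectors (here: random embeddings)."""
    def __init__(self, dim=8, seed=0):
        self.dim = dim
        self.rng = np.random.default_rng(seed)
    def extract(self, words):
        return self.rng.standard_normal((len(words), self.dim))

class Downsampler:
    """Average word-rate features within each TR to match brain sampling."""
    def downsample(self, feats, words_per_tr=2):
        n_trs = len(feats) // words_per_tr
        return feats[: n_trs * words_per_tr].reshape(n_trs, words_per_tr, -1).mean(axis=1)

class Mapping:
    """Ridge regression from downsampled features to voxel responses."""
    def fit(self, X, Y, alpha=1.0):
        self.W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
        return self
    def predict(self, X):
        return X @ self.W

words = ["the", "little", "prince", "lived", "on", "a", "tiny", "planet"]
assembly = AssemblyGenerator(words)
X = Downsampler().downsample(FeatureExtractor().extract(assembly.words))
pred = Mapping().fit(X, assembly.brain).predict(X)
```

The point of the modular split is that each stage (dataset assembly, feature choice, temporal downsampling, regression) can be swapped independently when comparing design choices.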
@NeuroTaha
Taha Binhuraib 🦉
1 month
Why this matters: Encoding models link AI representations to brain activity, but…
1. Pipelines are often ad hoc
2. Methodological choices vary
3. Results are hard to compare & reproduce
LITcoder fixes this with a general-purpose, modular backend.
1
0
4
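The core idea of an encoding model — regress brain activity on model features, then score held-out predictivity per voxel — can be illustrated on synthetic data. This is a generic sketch, not LITcoder code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trs, n_feats, n_vox = 200, 10, 5

X = rng.standard_normal((n_trs, n_feats))                    # features per TR
W_true = rng.standard_normal((n_feats, n_vox))               # ground-truth map
Y = X @ W_true + 0.5 * rng.standard_normal((n_trs, n_vox))   # noisy "voxels"

# Contiguous train/test split: no shuffling across time.
Xtr, Xte, Ytr, Yte = X[:150], X[150:], Y[:150], Y[150:]

# Ridge regression via the normal equations.
alpha = 1.0
W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(n_feats), Xtr.T @ Ytr)
pred = Xte @ W

def pearson_per_voxel(a, b):
    """Pearson r between matching columns of a and b."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / (
        np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0)
    )

r = pearson_per_voxel(pred, Yte)  # "brain predictivity" per voxel
```

Every methodological choice the thread mentions (feature source, downsampling, fold construction, regularization) changes some line of this recipe, which is why ad hoc pipelines are hard to compare.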
@NeuroTaha
Taha Binhuraib 🦉
1 month
🚨 Paper alert: To appear in the DBM NeurIPS Workshop
LITcoder: A General-Purpose Library for Building and Comparing Encoding Models
📄 arxiv: https://t.co/jXoYcIkpsC
🔗 project: https://t.co/UHtzfGGriY
3
13
24
@neuranna
Anna Ivanova
3 months
Many thanks to the volunteer organizers and presenters for making the CCN watch party at Georgia Tech a success! And thanks to attendees for coming & engaging in discussions! @V1o6ynne @NeuroTaha @alishdipani @jwilldecker @EyasAyesh @Ruimin_G Elliot Huang @PsychBoyH Eslam Abdelaleem
@neuranna
Anna Ivanova
4 months
Atlanta community: we are organizing a @CogCompNeuro watch party! Watch the talks, engage in structured discussions, and (optionally) present your own work. Register: https://t.co/vCWgF65qg7 Schedule: https://t.co/d6lsPWfKe9
1
3
32
@nmblauch
Nick Blauch
3 months
Very excited to be in Amsterdam for #CCN2025! See below for my two presentations -- a talk today and a poster Friday. Come say hi!
0
9
46
@bkhmsi
Badr AlKhamissi
3 months
For more details, check out the original paper thread: https://t.co/fuWpfumPEo Huge thanks to my incredible collaborators: @GretaTuckute, @yingtian80536, @NeuroTaha and my advisors @ABosselut and @martin_schrimpf See you at CCN! 🧠
@bkhmsi
Badr AlKhamissi
8 months
🚨 New Preprint!! LLMs trained on next-word prediction (NWP) show high alignment with brain recordings. But what drives this alignment—linguistic structure or world knowledge? And how does this alignment evolve during training? Our new paper explores these questions. 👇🧵
0
6
15
@GretaTuckute
Greta Tuckute
3 months
0
14
78
@neuranna
Anna Ivanova
4 months
Looking forward to #cogsci2025 ! Find us throughout the conference
3
7
66
@NeuroTaha
Taha Binhuraib 🦉
4 months
The hype around this year’s Wimbledon final feels like the World Cup. We’re immensely lucky to witness another chapter of sporting history unfold.
@TheTennisLetter
The Tennis Letter
4 months
Wimbledon 2019 - Nadal and Federer Wimbledon 2025 - Sinner and Alcaraz
0
0
1