Nanda H Krishna

@nandahkrishna

Followers: 458 · Following: 139 · Media: 7 · Statuses: 33

PhD student at @Mila_Quebec & @UMontreal, @MacHomebrew maintainer.

Chennai & Montréal
Joined May 2016
@nandahkrishna
Nanda H Krishna
1 month
New preprint! 🧠🤖
How do we build neural decoders that are:
⚡️ fast enough for real-time use
🎯 accurate across diverse tasks
🌍 generalizable to new sessions, subjects, and species?
We present POSSM, a hybrid SSM architecture that optimizes for all three of these axes! 🧵1/7
Tweet media one
4 replies · 25 retweets · 55 likes
@nandahkrishna
Nanda H Krishna
7 days
RT @thevineetjain: How to align your diffusion model with unseen objectives at inference time? Presenting Diffusion Tree Sampling/Search (D….
0 replies · 25 retweets · 0 likes
@nandahkrishna
Nanda H Krishna
1 month
Stay tuned for the project page and code, coming soon!
Link:
A big thank you to my co-authors: @averyryoo*, @XimengMao*, @mehdiazabou, @evadyer, @mattperich, and @g_lajoie_! 🧵7/7
1 reply · 2 retweets · 4 likes
@nandahkrishna
Nanda H Krishna
1 month
Finally, we show POSSM's performance on speech decoding – a long-context task that can grow expensive for Transformers. In the unidirectional setting, POSSM beats the GRU baseline, achieving a phoneme error rate of 27.3 while being more robust to variation in preprocessing. 🧵6/7
Tweet media one
1 reply · 0 retweets · 2 likes
@nandahkrishna
Nanda H Krishna
1 month
Cross-species transfer! 🐵➡️🧑
We find that POSSM pretrained solely on NHP reaching data achieves SOTA when decoding imagined handwriting in human subjects! This shows the potential of leveraging NHP data to bootstrap human BCI decoding in low-data clinical settings. 🧵5/7
Tweet media one
2 replies · 0 retweets · 2 likes
@nandahkrishna
Nanda H Krishna
1 month
By pretraining on 140 monkey reaching sessions, POSSM effectively transfers to new subjects and tasks, matching or outperforming several baselines (e.g., GRU, POYO, Mamba) across sessions.
✅ High R² across the board
✅ 9× faster inference than Transformers
✅ <5ms latency
🧵4/7
Tweet media one
Tweet media two
1 reply · 0 retweets · 1 like
@nandahkrishna
Nanda H Krishna
1 month
POSSM combines the real-time inference of an RNN with the tokenization, pretraining, and finetuning abilities of a Transformer!
Using POYO-style tokenization, we encode spikes in 50ms windows and stream them to a recurrent model (e.g., Mamba, GRU) for rapid predictions. 🧵3/7
Tweet media one
1 reply · 0 retweets · 1 like
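To make the streaming pipeline described in the post above concrete, here is a minimal PyTorch sketch: POYO-style spike tokens from each 50ms window are pooled into a few latent vectors by cross-attention, and the sequence of window latents is fed to a recurrent backbone (a GRU here) for per-window predictions. Class names, shapes, hyperparameters, and the 2-D velocity readout are all illustrative assumptions, not the authors' released code.

# Hypothetical POSSM-style decoder sketch (assumed names/shapes, not the official implementation).
import torch
import torch.nn as nn

class SpikeWindowTokenizer(nn.Module):
    """Embed individual spike events (unit id + spike time within a 50ms window)."""
    def __init__(self, n_units, d_model):
        super().__init__()
        self.unit_emb = nn.Embedding(n_units, d_model)
        self.time_proj = nn.Linear(1, d_model)

    def forward(self, unit_ids, spike_times):
        # unit_ids: (batch, n_spikes); spike_times: (batch, n_spikes, 1), seconds within the window
        return self.unit_emb(unit_ids) + self.time_proj(spike_times)

class POSSMSketch(nn.Module):
    """Cross-attend spike tokens into a few latents per window, then run a GRU over windows."""
    def __init__(self, n_units, d_model=128, n_latents=8):
        super().__init__()
        self.tokenizer = SpikeWindowTokenizer(n_units, d_model)
        self.latents = nn.Parameter(torch.randn(n_latents, d_model))
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.rnn = nn.GRU(n_latents * d_model, d_model, batch_first=True)
        self.readout = nn.Linear(d_model, 2)  # e.g. 2-D cursor velocity (assumed output)

    def forward(self, windows):
        # windows: list of (unit_ids, spike_times) tuples, one per 50ms window
        latent_seq = []
        for unit_ids, spike_times in windows:
            tokens = self.tokenizer(unit_ids, spike_times)           # (B, S, D)
            queries = self.latents.unsqueeze(0).expand(tokens.size(0), -1, -1)
            pooled, _ = self.cross_attn(queries, tokens, tokens)     # (B, n_latents, D)
            latent_seq.append(pooled.flatten(1))                     # (B, n_latents * D)
        hidden, _ = self.rnn(torch.stack(latent_seq, dim=1))         # (B, T, D)
        return self.readout(hidden)                                  # one prediction per window

# Toy usage: two 50ms windows of 40 spikes each from 96 units (purely illustrative).
window = (torch.randint(0, 96, (1, 40)), torch.rand(1, 40, 1) * 0.05)
model = POSSMSketch(n_units=96)
print(model([window, window]).shape)  # torch.Size([1, 2, 2])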
@nandahkrishna
Nanda H Krishna
1 month
The problem with existing decoders?
😔 RNNs are efficient, but rely on rigid, binned input formats – limiting generalization to new neurons or sessions.
😔 Transformers enable generalization via tokenization, but have high computational costs due to the attention mechanism.
🧵2/7
Tweet media one
2 replies · 1 retweet · 3 likes
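To illustrate the contrast drawn in the post above, here is a small NumPy sketch (made-up shapes and variable names, purely illustrative): the binned format is a fixed (time bins × neurons) matrix whose columns are tied to one session's unit ordering, while the tokenized format represents each spike as a (unit id, spike time) event, so recordings with different units simply yield different token sets.

# Hypothetical comparison of the two input formats (assumed shapes, not from the paper).
import numpy as np

n_neurons, bin_size, duration = 96, 0.02, 1.0
spike_units = np.random.randint(0, n_neurons, size=500)           # which unit fired
spike_times = np.sort(np.random.uniform(0, duration, size=500))   # when it fired (seconds)

# Binned format (typical RNN input): columns are tied to this session's neurons,
# so a new session with different units needs a new input layer or some re-mapping.
binned = np.zeros((int(duration / bin_size), n_neurons))
np.add.at(binned, (np.floor(spike_times / bin_size).astype(int), spike_units), 1)

# Tokenized format (POYO-style Transformer input): one (unit id, time) token per spike,
# a variable-length set where new units are just new tokens, not new input dimensions.
tokens = list(zip(spike_units.tolist(), spike_times.tolist()))

print(binned.shape, len(tokens))  # (50, 96) vs 500 event tokens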
@nandahkrishna
Nanda H Krishna
1 month
RT @dvnxmvl_hdf5: Preprint Alert 🚀 Multi-agent reinforcement learning (MARL) often assumes that agents know when other agents cooperate wi….
0 replies · 12 retweets · 0 likes
@nandahkrishna
Nanda H Krishna
3 months
RT @mehdiazabou: The recordings from the 🌐🧠 Neuro Foundation Model workshop are up on the workshop website! Thanks again to our speakers,….
0 replies · 11 retweets · 0 likes
@nandahkrishna
Nanda H Krishna
3 months
Really enjoyed TAing for this!
@memming
Il Memming Park
3 months
#COSYNE2025 tutorial by Eva Dyer
Foundations of Transformers in Neuroscience
Materials:
0 replies · 0 retweets · 3 likes
@nandahkrishna
Nanda H Krishna
4 months
RT @averyryoo: Just a couple days until Cosyne - stop by [3-083] this Saturday and say hi! @nandahkrishna @XimengMao
Tweet media one
0 replies · 1 retweet · 0 likes
@nandahkrishna
Nanda H Krishna
4 months
RT @mehdiazabou: How can large-scale models + datasets revolutionize neuroscience 🧠🤖🌐? We are excited to announce our workshop: “Building a….
0 replies · 20 retweets · 0 likes
@nandahkrishna
Nanda H Krishna
5 months
RT @averyryoo: Looking for an undergrad volunteer who's interested in SSMs + transformers for neural decoding/BCIs at Mila! Strong coding +….
0 replies · 6 retweets · 0 likes
@nandahkrishna
Nanda H Krishna
7 months
Two abstracts accepted to COSYNE – real-time BCI decoding (w/ @averyryoo, @XimengMao, @mattperich, @g_lajoie_) and motor learning (w/ @OlivierCodol, @g_lajoie_, @mattperich). Thanks to the organisers and reviewers for the great Xmas gift! 🎄
0 replies · 0 retweets · 15 likes
@nandahkrishna
Nanda H Krishna
7 months
I’ll be attending #NeurIPS2024 in Vancouver this week. Excited to meet new people and chat about comp neuro, NeuroAI, and foundation models for neuroscience. Also keen to attend the NeuroAI, @unireps and @neur_reps workshops!
0 replies · 4 retweets · 28 likes
@nandahkrishna
Nanda H Krishna
9 months
RT @OlivierCodol: Here’s our latest work at @g_lajoie_ and @mattperich's labs! Excited to see this out. We used a combination of neural re….
0 replies · 13 retweets · 0 likes
@nandahkrishna
Nanda H Krishna
1 year
I’ll be presenting this today at #ICLR2024! If you’re interested in hippocampal replay and how it could be linked to task optimisation, check out Poster 53 during Poster Session 3 at 10:45 am.
@nandahkrishna
Nanda H Krishna
1 year
Heading to Lisbon for @CosyneMeeting #COSYNE2024! Check out our poster (3-101) on Saturday, March 2 to find out how reactivation in the brain could be an emergent consequence of task optimisation. Joint work w/ Colin Bredenberg @dlevenstein @tyrell_turing @g_lajoie_ @Mila_Quebec
Tweet media one
0 replies · 6 retweets · 25 likes
@nandahkrishna
Nanda H Krishna
1 year
RT @ai_unique: 📢UNIQUE Student Symposium 2024: Limited places! If you are interested in Neuro-AI, join us on May 8-10 @ CERVO Brain resear….
0 replies · 15 retweets · 0 likes
@nandahkrishna
Nanda H Krishna
1 year
Heading to Lisbon for @CosyneMeeting #COSYNE2024! Check out our poster (3-101) on Saturday, March 2 to find out how reactivation in the brain could be an emergent consequence of task optimisation. Joint work w/ Colin Bredenberg @dlevenstein @tyrell_turing @g_lajoie_ @Mila_Quebec
Tweet media one
0 replies · 7 retweets · 39 likes