Avery Ryoo
@averyryoo
261 Followers · 595 Following · 9 Media · 151 Statuses
phd @Mila_Quebec, prev. @UWaterloo i like generative models and toronto sports teams (not necessarily in that order) 🇨🇦🇰🇷
Montréal, Canada
Joined November 2016
Super stoked to share my first first-author paper, which introduces a hybrid architecture for real-time neural decoding. It's been a lot of work, but happy to showcase some very cool results!
New preprint! 🧠🤖 How do we build neural decoders that are: ⚡️ fast enough for real-time use 🎯 accurate across diverse tasks 🌍 generalizable to new sessions, subjects, and species? We present POSSM, a hybrid SSM architecture that optimizes for all three of these axes! 🧵1/7
turns out everything in my life can be fixed by the blue jays #WANTITALL
Correct. #WANTITALL
Please consider applying! @veds_12 promises to be your friend 🫂
Mila's annual supervision request process is now open to receive MSc and PhD applications for Fall 2026 admission! For more information, visit https://t.co/r01eLcY1P4
I forgot that people in Montreal do this insane thing where they'll say "go west" and it's actually south
NO verifiers. NO Tools. Qwen3-4B-Instruct can match DeepSeek-R1 and o3-mini (high) with ONLY test-time scaling. Presenting Recursive Self-Aggregation (RSA) — the strongest test-time scaling method I know of! Then we use aggregation-aware RL to push further!! 📈📈 🧵below!
Two exciting updates 🚀 1️⃣ POSSM has been accepted to NeurIPS 2025! We'll see you in San Diego 🏖️! 2️⃣ I've officially started my PhD! Very grateful to stay at Mila, and excited to continue working on advancing both deep learning + science! 🧪🧬🧠
🚨Reasoning LLMs are e̵f̵f̵e̵c̵t̵i̵v̵e̵ ̵y̵e̵t̵ inefficient! Large language models (LLMs) now solve multi-step problems by emitting extended chains of thought. During the process, they often re-derive the same intermediate steps across problems, inflating token usage and
@giffmana @tenderizzation They are lying, this is what waterloo looks like (i know cause i applied there)
Cool workshop organized by some close friends at Mila 🔥
🚨Announcing the World Modeling Workshop 2026 🚨 📅 When: Feb 4–6, 2026 📍Where: Mila (Montréal) + Online (free) 💡 What: Keynotes, Methods Deep Dive, and Tutorials 🌐 https://t.co/WukFtNON3o ✉️ worldmodel.mila@gmail.com 🧵 Details below:
🧵 Everyone is chasing new diffusion models—but what about the representations they model from? We introduce Discrete Latent Codes (DLCs): - Discrete representation for diffusion models - Uncond. gen. SOTA FID (1.59 on ImageNet) - Compositional generation - Integrates with LLM 🧱
As #ICML2025 kicks off in Vancouver, our AI talent is being quietly pushed out. 🇨🇦 We've been waiting 28 months for permanent residency, but @CitImmCanada won’t budge. Please read and share our story https://t.co/NkiH483OIh
https://t.co/kM2BpfxUyh
#IRCC #AI #Immigration
“do not disturb” is a double negative “turb” is much more elegant
Explicit latents or implicit marginalization? at #ICML2025 📌 Tue, 11 am 📍East Exhibition Hall A-B (E-1603) Come check out surprising results on whether explicitly incentivizing learning of correct latents improves generalization over implicitly marginalizing it!
Step 1: Understand how scaling improves LLMs. Step 2: Directly target underlying mechanism. Step 3: Improve LLMs independent of scale. Profit. In our ACL 2025 paper we look at Step 1 in terms of training dynamics. Project: https://t.co/4mkBALoilL Paper: https://t.co/CxBxbuZqgC
Excited to announce the Foundation Models for the Brain and Body workshop at #NeurIPS2025!🧠 We invite short papers or interactive demos on AI for neural, physiological or behavioral data. Submit by Aug 22 👉 https://t.co/t77lrS2by5