Danilo Bzdok

@danilobzdok

Followers: 7K
Following: 12K
Media: 1K
Statuses: 9K

Research director | @McGillU @Mila_Quebec @IVADO_Qc | My team designs machine learning frameworks to understand biological systems from new angles of attack

Montreal
Joined February 2014
@danilobzdok
Danilo Bzdok
2 years
How can #LLMs make a difference in #neuroscience and #biomedicine? Language has more human information per bit than potentially any other form of data. https://t.co/SawHesdJFY Great collabo with @sivareddyg + @MindstateDesign. @Mila_Quebec @TheNeuro_MNI @mcgillu
3
73
203
@gdb
Greg Brockman
17 hours
Taste of what LLM-driven scientific discovery will be like: with 12 minutes of thinking, GPT-5 Pro suggested repurposing a known drug to treat an untreatable food allergy. Same exact result was found by an (at the time unpublished) peer-reviewed study. And models still improving.
@DeryaTR_
Derya Unutmaz, MD
3 days
Here is the story of a remarkable, independent treatment suggestion by GPT-5 Pro: repurposing a known drug for a patient with food protein–induced enterocolitis syndrome (FPIES). First, how we came to test this. My close friend, physician-scientist Dr. Oral Alpan, treated the
136
171
2K
@John_H_Ingle
John Ingle
7 days
The book is live today on Amazon!
5
4
72
@leafs_s
CLaE
4 days
Consciousness science: where are we, where are we going, and what if we get there? https://t.co/OuZbs4p3dQ
11
20
109
@iScienceLuvr
Tanishq Mathew Abraham, Ph.D.
14 days
DeepSeek released an OCR model today. Their motivation is really interesting: they want to use visual modality as an efficient compression medium for textual information, and use this to solve long-context challenges in LLMs. Of course, they are using it to get more training
52
156
1K
@satyanadella
Satya Nadella
20 days
This is super useful! With new Formula Completions in Excel, just type "=" and Copilot proactively suggests a formula, based on the context of your sheet. Here's a great example.
126
344
2K
@Mila_Quebec
Mila - Institut québécois d'IA
26 days
How do we stop LLMs from making things up? Read Praneet Suresh's groundbreaking method to detect and eliminate AI hallucinations from within the model: https://t.co/jEX3dMz1o0 @danilobzdok @jackhtstanley
1
4
17
@nntaleb
Nassim Nicholas Taleb
1 month
The introduction to my paper on data hacking, particularly p-hacking. For comments.
25
93
1K
@demishassabis
Demis Hassabis
3 months
Official results are in - Gemini achieved gold-medal level in the International Mathematical Olympiad! 🏆 An advanced version was able to solve 5 out of 6 problems. Incredible progress - huge congrats to @lmthang and the team!
deepmind.google
Our advanced model officially achieved a gold-medal level performance on problems from the International Mathematical Olympiad (IMO), the world’s most prestigious competition for young...
202
760
6K
@nrehiew_
wh
1 month
The biggest news of the day: John Schulman has dropped a new blog post.
@thinkymachines
Thinking Machines
1 month
LoRA makes fine-tuning more accessible, but it's unclear how it compares to full fine-tuning. We find that the performance often matches closely---more often than you might expect. In our latest Connectionism post, we share our experimental results and recommendations for LoRA.
1
20
419
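The LoRA-versus-full-fine-tuning comparison above can be made concrete with the basic LoRA update, W_eff = W + (alpha / r) * B @ A, where only the two small factors A and B are trained. A minimal pure-Python sketch (the dimensions, helper names, and toy values below are illustrative, not from the Thinking Machines post):

```python
# Minimal LoRA sketch: instead of updating a full d_out x d_in weight
# matrix, train two small factors B (d_out x r) and A (r x d_in) and
# apply W_eff = W + (alpha / r) * (B @ A), keeping W frozen.

def matmul(X, Y):
    """Multiply two matrices represented as lists of lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha, r):
    """Frozen base weight W plus the scaled low-rank update B @ A."""
    BA = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

def trainable_params(d_out, d_in, r):
    """Trainable-parameter counts: full fine-tuning vs. LoRA."""
    return d_out * d_in, r * (d_in + d_out)

# Toy dimensions: a 4096x4096 layer with rank-8 adapters.
full, lora = trainable_params(4096, 4096, 8)
print(full, lora)  # LoRA trains 256x fewer parameters in this layer
```

The accessibility win is the parameter count; the empirical question the post addresses is whether training only B @ A matches the quality of updating all of W, which this sketch does not settle.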
@rohanpaul_ai
Rohan Paul
1 month
🚨 BAD news for Medical AI models. MASSIVE revelations from this @Microsoft paper. 🤯 Current medical AI models may look good on standard medical benchmarks but those scores do not mean the models can handle real medical reasoning. The key point is that many models pass tests
176
846
4K
@DillanDiNardo
Dillan DiNardo
1 month
1/ It feels surreal to announce completion of the first human trial in the development of our neurotech platform for designing mental states, from the molecular level. Human experience is now programmable. A 🧵 on the sequel to psychedelics & the first new "emotion in a bottle."
175
421
3K
@danilobzdok
Danilo Bzdok
2 months
Now accepted at #NeurIPS 2025: Our team makes three key contributions to #large #language #models + #security: 1) Pre-trained transformer models impose semantic structure on inputs, tying them into learned conceptual webs, even if the model inputs are ambiguous or lack any
1
21
170
@jackhtstanley
Jack Stanley
2 months
Why do LLMs hallucinate? We may have the answer! We show that LLMs imagine MORE concepts in inputs with LESS semantic structure. Very surprising! Read the thread below for more details 👇 @praneet_suresh_ @soniajoseph_ @ScimecaLuca @danilobzdok
@praneet_suresh_
Praneet
2 months
We're thrilled to finally share what we've been working on. Our new paper gives a first-ever glimpse into the "mind" of an LLM, and we discovered something that shocked us: AIs see ghosts in the machine. 👻 What do LLMs see in the dark?
0
2
7
@danilobzdok
Danilo Bzdok
2 months
Why do #language #models hallucinate? -> We have experimental data on that. We precisely locate where #hallucinations arise in LLMs + show by targeted #intervention that these internal bias features do cause bad model outputs. The trick: we get LLMs drunk and they tell us the
@adamfungi
Adam Tauman Kalai
2 months
New research explains why LLMs hallucinate, through a connection between supervised and self-supervised learning. We also describe a key obstacle that can be removed to reduce them. 🧵 https://t.co/6Lb6xlg0SZ
4
22
90
@kareem_carr
🔥 Dr Kareem Carr 🔥
6 months
If you think about how statistics works it’s extremely obvious why a model built on purely statistical patterns would “hallucinate”. Explanation in next tweet.
140
938
10K
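The point in the tweet above can be illustrated with a toy model: a generator trained on purely statistical word-to-word patterns chains locally plausible transitions into a "fact" that appears nowhere in its training data. A minimal bigram sketch (the corpus, duplication trick, and greedy decoding rule are illustrative assumptions, not Carr's explanation):

```python
# Toy illustration of hallucination from purely statistical patterns:
# a bigram model learns word-transition counts, then greedily chains
# individually well-attested transitions into a sentence no one wrote.
from collections import defaultdict

corpus = [
    "berlin is the capital of germany",
    "berlin is the capital of germany",  # duplicated to bias the counts
    "paris is the capital of france",
]

# Count bigram transitions across the corpus.
transitions = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        transitions[a][b] += 1

def generate(start, max_len=6):
    """Greedily follow the most frequent next word from each position."""
    out = [start]
    while len(out) < max_len:
        nxt = transitions.get(out[-1])
        if not nxt:
            break
        out.append(max(nxt, key=nxt.get))
    return " ".join(out)

# Every bigram in the output was seen in training, yet the sentence as a
# whole never was, and it is false.
print(generate("paris"))  # -> "paris is the capital of germany"
```

Each step is statistically justified ("of" is followed by "germany" twice but "france" only once), so the model confidently emits a falsehood without any notion of truth entering the computation.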
@danilobzdok
Danilo Bzdok
3 months
📢 job opening: software developer at Mila Quebec AI Institute & McGill As of now, my team is looking to hire a new technical staff member with a background in a STEM area, such as computer science, engineering or physics, to support core lab activities around data analysis,
1
2
6
@Jack_W_Lindsey
Jack Lindsey
3 months
We're launching an "AI psychiatry" team as part of interpretability efforts at Anthropic!  We'll be researching phenomena like model personas, motivations, and situational awareness, and how they lead to spooky/unhinged behaviors. We're hiring - join us!
job-boards.greenhouse.io
San Francisco, CA
183
207
2K
@danilobzdok
Danilo Bzdok
4 months
Great workshop - thanks again for inviting me to Banff.
@BIRS_Math
BIRS
4 months
This week at BIRS: Novel Statistical Approaches for Studying Multi-omics Data https://t.co/MfkX36mlY1
0
0
3