Danilo Bzdok
@danilobzdok
Followers 7K · Following 12K · Media 1K · Statuses 9K
Research director | @McGillU @Mila_Quebec @IVADO_Qc | My team designs machine learning frameworks to understand biological systems from new angles of attack
Montreal
Joined February 2014
How can #LLMs make a difference in #neuroscience and #biomedicine? Language carries more human information per bit than perhaps any other form of data. https://t.co/SawHesdJFY Great collab with @sivareddyg + @MindstateDesign. @Mila_Quebec @TheNeuro_MNI @mcgillu
3
73
203
A taste of what LLM-driven scientific discovery will be like: with 12 minutes of thinking, GPT-5 Pro suggested repurposing a known drug to treat an untreatable food allergy. The exact same result was found by a (at the time unpublished) peer-reviewed study. And models are still improving.
Here is the story of a remarkable, independent treatment suggestion by GPT-5 Pro: repurposing a known drug for a patient with food protein–induced enterocolitis syndrome (FPIES). First, how we came to test this. My close friend, physician-scientist Dr. Oral Alpan, treated the
136
171
2K
Consciousness science: where are we, where are we going, and what if we get there? https://t.co/OuZbs4p3dQ
11
20
109
DeepSeek released an OCR model today. Their motivation is really interesting: they want to use visual modality as an efficient compression medium for textual information, and use this to solve long-context challenges in LLMs. Of course, they are using it to get more training
52
156
1K
It’s not the thought that counts: Allostasis at the core of brain function https://t.co/ZimeMoTYnl
cell.com
The authors review evidence that the primary function of the brain, supported by distributed neural systems, is the predictive regulation of physiology (i.e., allostasis). An example from Alzheimer’s...
5
16
128
This is super useful! With new Formula Completions in Excel, just type "=" and Copilot proactively suggests a formula, based on the context of your sheet. Here's a great example.
126
344
2K
How do we stop LLMs from making things up? Read Praneet Suresh's groundbreaking method to detect and eliminate AI hallucinations from within the model: https://t.co/jEX3dMz1o0
@danilobzdok @jackhtstanley
1
4
17
The introduction to my paper on data hacking, particularly p-hacking. For comments.
25
93
1K
Official results are in - Gemini achieved gold-medal level in the International Mathematical Olympiad! 🏆 An advanced version was able to solve 5 out of 6 problems. Incredible progress - huge congrats to @lmthang and the team!
deepmind.google
Our advanced model officially achieved a gold-medal level performance on problems from the International Mathematical Olympiad (IMO), the world’s most prestigious competition for young...
202
760
6K
The biggest news of the day: John Schulman has dropped a new blog post.
LoRA makes fine-tuning more accessible, but it's unclear how it compares to full fine-tuning. We find that the performance often matches closely, more often than you might expect. In our latest Connectionism post, we share our experimental results and recommendations for LoRA.
1
20
419
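The LoRA-vs-full-fine-tuning comparison above rests on the low-rank parameterization itself. A minimal numpy sketch of that idea (my own generic illustration, not code from the Connectionism post; all dimensions, the scaling `alpha/r`, and the zero-initialization convention are assumptions based on the standard LoRA recipe): instead of updating the full weight `W`, one trains a small pair `(A, B)` so the effective weight is `W + (alpha/r) * B @ A`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: rank r is much smaller than the weight dimensions.
d_out, d_in, r, alpha = 64, 64, 8, 16

W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero-init

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A,
    # but we never materialize the full d_out x d_in update.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))

# With B zero-initialized, the adapter starts as an exact no-op:
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: r * (d_in + d_out) instead of d_in * d_out.
print(r * (d_in + d_out), "vs", d_in * d_out)
```

The accessibility gain is visible in the last line: the adapter trains far fewer parameters than the full weight, which is why rank and scaling choices dominate how closely LoRA tracks full fine-tuning.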
🚨 BAD news for Medical AI models. MASSIVE revelations from this @Microsoft paper. 🤯 Current medical AI models may look good on standard medical benchmarks but those scores do not mean the models can handle real medical reasoning. The key point is that many models pass tests
176
846
4K
1/ It feels surreal to announce completion of the first human trial in the development of our neurotech platform for designing mental states, from the molecular level. Human experience is now programmable. A 🧵 on the sequel to psychedelics & the first new "emotion in a bottle."
175
421
3K
Why do LLMs hallucinate? We may have the answer! We show that LLMs imagine MORE concepts in inputs with LESS semantic structure. Very surprising! Read the thread below for more details 👇 @praneet_suresh_ @soniajoseph_ @ScimecaLuca @danilobzdok
We're thrilled to finally share what we've been working on. Our new paper gives a first-ever glimpse into the "mind" of an LLM, and we discovered something that shocked us: AIs see ghosts in the machine. 👻 What do LLMs see in the dark?
0
2
7
Why do #language #models hallucinate? -> We have experimental data on that. We precisely locate where #hallucinations arise in LLMs + show by targeted #intervention that these internal bias features do cause bad model outputs. The trick: we get LLMs drunk and they tell us the
New research explains why LLMs hallucinate, through a connection between supervised and self-supervised learning. We also describe a key obstacle that can be removed to reduce them. 🧵 https://t.co/6Lb6xlg0SZ
4
22
90
If you think about how statistics works, it's extremely obvious why a model built on purely statistical patterns would "hallucinate". Explanation in next tweet.
140
938
10K
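The promised explanation isn't in this feed, but the statistical point can be made concrete with a toy sketch (my own illustration, not the tweet author's argument): a bigram model trained only on true sentences, when asked to continue from "berlin", greedily chains the locally most frequent transitions and emits a fluent falsehood. The corpus and greedy decoding rule here are assumptions chosen to make the effect deterministic.

```python
from collections import Counter, defaultdict

# Tiny corpus: every individual sentence is true.
corpus = [
    "paris is the capital of france",
    "lyon is the second city of france",
    "berlin is the capital of germany",
]

# Count bigram transitions word -> next word.
nxt = defaultdict(Counter)
for line in corpus:
    toks = line.split()
    for a, b in zip(toks, toks[1:]):
        nxt[a][b] += 1

def generate(word, max_len=10):
    # Greedy decoding: always pick the most frequent continuation.
    out = [word]
    while word in nxt and len(out) < max_len:
        word = nxt[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("berlin"))
# -> "berlin is the capital of france"
# Each transition is statistically typical of the corpus, yet the
# chained result is false: fluency without grounding in facts.
```

The model never stores "which capital belongs to which country"; it stores co-occurrence frequencies, so the most probable continuation and the true continuation can diverge, which is the statistical intuition behind hallucination.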
📢 job opening: software developer at Mila Quebec AI Institute & McGill. As of now, my team is looking to hire a new technical staff member with a background in a STEM area, such as computer science, engineering, or physics, to support core lab activities around data analysis,
1
2
6
We're launching an "AI psychiatry" team as part of interpretability efforts at Anthropic! We'll be researching phenomena like model personas, motivations, and situational awareness, and how they lead to spooky/unhinged behaviors. We're hiring - join us!
job-boards.greenhouse.io
San Francisco, CA
183
207
2K
Great workshop - thanks again for inviting me to Banff.
This week at BIRS: Novel Statistical Approaches for Studying Multi-omics Data https://t.co/MfkX36mlY1
0
0
3