Bradley Love

@ProfData

Followers: 6K · Following: 2K · Media: 156 · Statuses: 1K

Senior research scientist at @LosAlamosNatLab. Former prof at @ucl and @UTAustin. CogSci, AI, Comp Neuro, AI for scientific discovery. Also @profdata on Bluesky.

London, UK
Joined September 2014
@ProfData
Bradley Love
7 months
"Large language models surpass human experts in predicting neuroscience results" w @ken_lxl and LLMs integrate a noisy yet interrelated scientific literature to forecast outcomes. 1/8
[Image attached]
15 replies · 167 reposts · 673 likes
@ProfData
Bradley Love
27 days
New blog w @ken_lxl, “Giving LLMs too much RoPE: A limit on Sutton’s Bitter Lesson”. The field has shifted from flexible, data-driven position representations to fixed approaches that follow human intuitions. Here’s why, and what it means for model performance.
0 replies · 2 reposts · 9 likes
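For readers who want the mechanics behind "fixed approaches following human intuitions": a minimal numpy sketch of a rotary position scheme in the spirit of RoPE. The function and shapes are illustrative, not taken from the blog; the point is that the position code is a fixed formula rather than learned from data.

```python
import numpy as np

def rotary_position(x, pos, base=10000.0):
    """Rotate consecutive feature pairs of x by position-dependent angles.

    x: 1-D embedding of even length; pos: integer token position.
    The frequency schedule is a fixed function of the dimension index;
    nothing is fit to data, which is the human-intuition-driven design
    the blog contrasts with flexible, learned position representations.
    """
    d = x.shape[-1]
    theta = base ** (-np.arange(0, d, 2) / d)  # fixed frequency schedule
    ang = pos * theta
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * np.cos(ang) - x2 * np.sin(ang)
    out[1::2] = x1 * np.sin(ang) + x2 * np.cos(ang)
    return out

q = rotary_position(np.random.randn(8), pos=5)  # same vector, rotated by position
```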
@ProfData
Bradley Love
1 month
New blog, "Backwards Compatible: The Strange Math Behind Word Order in AI" w @ken_lxl. It turns out the language learning problem is the same for any word order, but is that true in practice for large language models? paper: BLOG:
[Image attached]
0 replies · 2 reposts · 4 likes
@ProfData
Bradley Love
2 months
Bonus: I found it counterintuitive that (in theory) the learning problem is the same for any word ordering. Aligning proof and simulation was key. Now new avenues open up for addressing positional biases, improving training, and knowing when to trust LLMs. w @ken_lxl, @ramscar1, @XinyiXu6.
0 replies · 0 reposts · 3 likes
@ProfData
Bradley Love
2 months
When LLMs diverge from one another because of word order (data factorization), it indicates their probability distributions are inconsistent, which is a red flag (not trustworthy). We trace the deviations to positional and locality biases in self-attention. 2/2
1 reply · 0 reposts · 3 likes
@ProfData
Bradley Love
2 months
"Probability Consistency in Large Language Models: Theoretical Foundations Meet Empirical Discrepancies".Oddly, we prove LLMs should be equivalent for any word ordering: forward, backward, scrambled. In practice, LLMs diverge from one another. Why? 1/2
1 reply · 2 reposts · 13 likes
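A toy illustration (not from the paper) of the theoretical side of this claim: for an exact joint distribution, the chain rule gives the same sequence probability under any factorization order, so a model that matched the true distribution would score forward and backward readings identically.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
vocab, length = 3, 4
# An arbitrary exact joint distribution over all length-4 sequences.
probs = rng.random(vocab ** length)
probs /= probs.sum()
p = dict(zip(itertools.product(range(vocab), repeat=length), probs))

def chain_logprob(seq, order):
    """Sum log P(seq[i] | positions earlier in `order`), via the chain rule."""
    total = 0.0
    for k, i in enumerate(order):
        given = order[:k]
        num = sum(pr for s, pr in p.items()
                  if s[i] == seq[i] and all(s[j] == seq[j] for j in given))
        den = sum(pr for s, pr in p.items()
                  if all(s[j] == seq[j] for j in given))
        total += np.log(num / den)
    return total

seq = (0, 2, 1, 0)
forward = chain_logprob(seq, order=[0, 1, 2, 3])
backward = chain_logprob(seq, order=[3, 2, 1, 0])
assert np.isclose(forward, backward)  # equal in theory; trained LLMs only approximate this
```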
@ProfData
Bradley Love
4 months
RT @touchNEUROLOGY: 🧠 We speak with Prof. Bradley Love about BrainGPT—an AI model helping researchers process neuroscientific data faster t….
0 replies · 1 repost · 0 likes
@ProfData
Bradley Love
5 months
with @ken_lxl, @Rob_Mok, and @BDRoads.
0 replies · 0 reposts · 0 likes
@ProfData
Bradley Love
5 months
"Coordinating multiple mental faculties during learning" There's lots of good work in object recognition and learning, but how do we integrate the two? Here's a proposal and model that is more interactive than perception provides the inputs to cognition.
1 reply · 19 reposts · 67 likes
@ProfData
Bradley Love
6 months
RT @ermgrant: Last year, we funded 250 authors and other contributors to attend #ICLR2024 in Vienna as part of this program. If you or your….
0 replies · 13 reposts · 0 likes
@ProfData
Bradley Love
6 months
RT @sucholutsky: 🚨Call for Papers🚨 .The Re-Align Workshop is coming back to #ICLR2025. Our CfP is finally up! Come share your representati….
0 replies · 16 reposts · 0 likes
@ProfData
Bradley Love
7 months
Thanks @skdh for covering our recent paper. Also, I want to spotlight this excellent podcast (19 minutes) with Nicky Cartridge covering how AI will impact science and healthcare in the coming years.
@skdh
Sabine Hossenfelder
7 months
A new AI study foreshadows a lot of the changes we will see in science in the next few years. Large Language Models turn out to be really good at predicting results of studies they've never seen before.
1 reply · 1 repost · 5 likes
@ProfData
Bradley Love
7 months
While BrainBench focused on neuroscience, our approach is science-general, so others can adopt our template. Everything is open weight and open source. Thanks to the entire team and the expert participants. Sign up for news at 8/8.
1 reply · 0 reposts · 11 likes
@ProfData
Bradley Love
7 months
Finally, LLMs can be augmented with neuroscience knowledge for better performance. We tuned Mistral on 20 years of the neuroscience literature using LoRA. The tuned model, which we refer to as BrainGPT, performed better on BrainBench. 7/8
[Image attached]
1 reply · 0 reposts · 10 likes
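For readers unfamiliar with LoRA: a sketch of how such tuning is typically set up with Hugging Face's peft library. The thread doesn't state hyperparameters, so the rank, alpha, and target modules below are illustrative defaults, not BrainGPT's actual settings.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
config = LoraConfig(
    r=16,                                 # illustrative adapter rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections; a common choice
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)      # base weights frozen, adapters trainable
model.print_trainable_parameters()        # a small fraction of the 7B parameters
# ...then train as usual on the domain corpus (here, the neuroscience literature).
```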
@ProfData
Bradley Love
7 months
Indeed, follow-up work on teaming finds that joint LLM and human teams outperform either alone, because LLMs and humans make different types of errors. We offer a simple method to combine confidence-weighted judgements. w @yanezlang 6/8.
1 reply · 0 reposts · 8 likes
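A minimal sketch of one way such confidence weighting can work on a two-choice item (the paper's exact aggregation rule may differ): each judge casts a signed vote scaled by confidence, and the sign of the sum picks the team answer.

```python
def team_answer(human_choice, human_conf, llm_choice, llm_conf):
    """Two-alternative item: choices are 0/1, confidences in (0.5, 1].

    When the judges disagree, the more confident one wins. This helps
    precisely because, per the thread, humans and LLMs make different
    kinds of errors, so disagreements carry signal.
    """
    def signed(choice, conf):
        return (1 if choice == 1 else -1) * conf
    total = signed(human_choice, human_conf) + signed(llm_choice, llm_conf)
    return 1 if total > 0 else 0

team_answer(human_choice=0, human_conf=0.6, llm_choice=1, llm_conf=0.9)  # -> 1
```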
@ProfData
Bradley Love
7 months
In the Nature HB paper, both human experts and LLMs were well calibrated: when they were more certain of their decisions, they were more likely to be correct. Calibration is beneficial for human-machine teaming. 5/8
[Image attached]
1 reply · 0 reposts · 7 likes
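Calibration here means stated confidence tracks accuracy. A quick numpy sketch of the standard check, run on synthetic data rather than the paper's: bin decisions by confidence and compare each bin's accuracy to its mean confidence.

```python
import numpy as np

rng = np.random.default_rng(1)
conf = rng.uniform(0.5, 1.0, size=2000)   # stated confidence per decision
correct = rng.random(2000) < conf         # synthetic, perfectly calibrated judge

edges = np.linspace(0.5, 1.0, 6)
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (conf >= lo) & (conf < hi)
    if m.any():
        # For a well-calibrated judge, accuracy per bin tracks its confidence.
        print(f"confidence {lo:.1f}-{hi:.1f}: accuracy {correct[m].mean():.2f}")
```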
@ProfData
Bradley Love
7 months
There were no signs of leakage from the training set to the test set. We performed standard checks. In follow-up work, we trained an LLM from scratch to rule out leakage; even this smaller model was superhuman on BrainBench. 4/8
1 reply · 0 reposts · 7 likes
@ProfData
Bradley Love
7 months
All 15 LLMs we considered crushed human experts at BrainBench's predictive task. LLMs correctly predicted neuroscience results (across all subareas) dramatically better than human experts, including those with decades of experience. 3/8
[Image attached]
3 replies · 1 repost · 15 likes
@ProfData
Bradley Love
7 months
To test this, we created BrainBench, a forward-looking benchmark that stresses prediction over retrieval of facts, avoiding LLMs' "hallucination" issue. The task was to predict which version of a Journal of Neuroscience abstract gave the actual result. 2/8
[Image attached]
1 reply · 1 repost · 13 likes
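The measurement behind this kind of two-version task is typically likelihood comparison: score each candidate abstract by its total token log-probability under the model and pick the likelier one. A sketch with a small stand-in model; the paper's actual evaluation harness may differ in detail.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")          # stand-in for the LLMs tested
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def total_logprob(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss           # mean NLL over predicted tokens
    return -loss.item() * (ids.shape[1] - 1)         # summed log-probability

def pick_real(version_a, version_b):
    # Predict which abstract states the actual result: the likelier one.
    return "A" if total_logprob(version_a) > total_logprob(version_b) else "B"
```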
@ProfData
Bradley Love
8 months
Instead of viewing LLMs as models of humans or stochastic parrots, we view them as general and powerful pattern learners that can master a superset of what people can. 2/2.
0 replies · 0 reposts · 12 likes