Jacob Bamberger
@jacobbamb
251 Followers · 560 Following · 3 Media · 81 Statuses
Looking for geometry where it shouldn’t be. PhD student @UniofOxford. Interested in Geometric Deep Learning
Joined June 2020
We figured out flow matching over states that change dimension. With "Branching Flows", the model decides how big things must be! This works wherever flow matching works, with discrete, continuous, and manifold states. We think this will unlock some genuinely new capabilities.
Cool news: our extended Riemannian Gaussian VFM paper is out! 🔮 We define and study a variational objective for probability flows 🌀 on manifolds with closed-form geodesics. @FEijkelboom @a_ppln @CongLiu202212 @wellingmax @jwvdm @erikjbekkers 🔥 📜 https://t.co/PE6I6YcoTn
Introducing Generalised Flow Maps 🎉 A stable, few-step generative model on Riemannian manifolds 🪩 📚 Read it at: https://t.co/iCTHedwCxf 💾 Code: https://t.co/MeukcthFN2
@msalbergo @nmboffi @mmbronstein @bose_joey
#AITHYRA, Vienna's new Biomedical AI institute, is hiring Postdocs! Come work with us. Openings in: 🔹 Generative AI 🔹 Multimodal ML 🔹 Virology 🔹 Enzyme Function Apply by Nov 20: https://t.co/8jNpkhdw1x
#PostDoc #AI #ML #Vienna #ScienceJobs
🚨 How do attention sinks relate to information flow in LLMs? We show how massive activations create attention sinks and compression valleys, revealing a three-stage theory of information flow in LLMs. 🧵 w/ Enrique* @fedzbar @epomqo @mmbronstein @ylecun @ziv_ravid
Thanks @kwangmoo_yi! Thread coming soon 😁
Bamberger and Jones et al., "Carré du champ flow matching: better quality-generalisation tradeoff in generative models" Geometric regularization of the flow manifold. Boils down to adding anisotropic Gaussian noise to flow matching training. Neat idea, enhances generalization.
Time to give ChebNet another life? 🤔🧐 Interesting work! Congrats @haririAli95 @arroyo_alvr 🎉
⭐️Return of ChebNet is a Spotlight at NeurIPS 2025! • Revives ChebNet for long-range graph tasks • Identifies instability in high-order polynomial filters ⚡ • Introduces Stable-ChebNet, a non-dissipative system for controlled, stable info flow! 📄
Interested in Long-Range Interactions? Come speak with us now (4:30pm-7pm) at our poster E-2802 @ #ICML2025
@benpgutteridge
@jacobbamberger
@mmbronstein
@epomqo
Come check out SBG happening now! W-115 11-1:30 with @charliebtan
@bose_joey Chen Lin @leonklein26
@mmbronstein
5. On Measuring Long-Range Interactions in Graph Neural Networks East Exhibition Hall A-B #E-2802 Wed 16 Jul 4:30 p.m. PDT @jacobbamberger
@benpgutteridge
@leRoux_Scott
@epomqo
We’re thrilled to share that the first in-person LoG conference is officially happening December 10–12, 2025 at Arizona State University https://t.co/Js9FSm6p3N Important Deadlines: Abstract: Aug 22 Submission: Aug 29 Reviews: Sept 3–27 Rebuttal: Oct 1–15 Notifications: Oct 20
🚨 ICML 2025 Paper 🚨 "On Measuring Long-Range Interactions in Graph Neural Networks" We formalize the long-range problem in GNNs: 💡Derive a principled range measure 🔧 Tools to assess models & benchmarks 🔬Critically assess LRGB 🧵 Thread below 👇 #ICML2025
@benpgutteridge @leRoux_Scott @mmbronstein @epomqo Presenting at poster session 4 east. 📅Wednesday, July 16th 🕓4:30-7:00 PM 📈#E-2802
Read more here: 📄paper: https://t.co/yvraKeZfUg 💻 code: https://t.co/eBcDnYrDaR 🙌With @benpgutteridge @leRoux_Scott @mmbronstein @epomqo
#ICML2025 #GNN #AI
🔑 Takeaways: ✅ Long-range can be formalized & measured ✅ Reveals new insights into models & datasets 🚀 Time to rethink evaluation: not just accuracy, but how models solve tasks
Why does this matter? "Long-range" is often just a dataset intuition or model label. We offer a measurable way to: 💡Understand models 🧪Test benchmarks 🦮Guide model design 🚀Go beyond performance gaps
We reassess LRGB, the go-to long-range benchmark, by checking whether model range correlates with performance, as expected for truly long-range tasks. Surprisingly: ❌ Peptides-func: negative correlation, suggesting it is not truly long-range ✅ VOC: positive correlation, suggesting it is long-range
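A rough sketch of that correlation check, with made-up numbers purely for illustration (none of these values come from the paper):

```python
from scipy.stats import spearmanr

# Hypothetical (range, score) pairs for a few models on one benchmark;
# the values below are invented for illustration only.
measured_range = [1.8, 2.4, 3.1, 4.0, 5.2]
test_score     = [0.61, 0.64, 0.63, 0.59, 0.55]

rho, p = spearmanr(measured_range, test_score)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
# A negative rho (as with these made-up numbers) would suggest the task
# does not reward longer-range models, echoing the Peptides-func finding.
```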
We validate our framework in three steps: 👷Construct synthetic tasks with analytically-known range 💯Show trained GNNs can approximate the true task range 🔬Use range as a proxy to evaluate real benchmarks
Our measure uses the model's Jacobian (for node tasks) and Hessian (for graph tasks) to quantify input-output influence, works with any distance metric, and supports analysis at all granularities—node, graph, and dataset.
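The thread doesn't include code, but here is a minimal sketch of what a Jacobian-based node-level influence score could look like. Everything in it is an assumption for illustration: the distance weighting, the row normalization, and the reduction to a single scalar are guesses at the flavor of the idea, not the paper's exact measure (see the paper and code links above for that).

```python
import torch

# Illustrative sketch only: a Jacobian-based range estimate for a
# node-level task; not the paper's exact definition.

def estimate_range(model, X, dist):
    """
    model: callable mapping node features X of shape (n, d) to node
           outputs of shape (n, k); assumed to close over the graph
           structure, e.g. lambda X: gnn(X, edge_index).
    X:     node feature tensor, shape (n, d).
    dist:  pairwise graph distances (e.g. shortest-path hops), shape (n, n).
    """
    # Full input-output Jacobian, shape (n, k, n, d):
    # J[v, :, u, :] is d out_v / d x_u.
    J = torch.autograd.functional.jacobian(model, X)
    # Influence of input node u on output node v: Frobenius norm of that block.
    influence = J.pow(2).sum(dim=(1, 3)).sqrt()              # (n, n)
    # Treat each row as a distribution over input nodes.
    P = influence / influence.sum(dim=1, keepdim=True).clamp_min(1e-12)
    # Range of node v = expected graph distance under its influence weights.
    per_node_range = (P * dist).sum(dim=1)                   # (n,)
    return per_node_range.mean()
```

For graph-level tasks the thread mentions using the Hessian instead (mixed second derivatives with respect to pairs of node features); that case isn't covered by this sketch.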
We propose a formal range measure for any graph operator, derived from natural axioms (like locality, additivity, homogeneity) — and show it’s the unique measure satisfying these. This measure applies to both node- and graph-level tasks, and across architectures.
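For intuition, here is one plausible way such axioms might be formalized. These statements are guesses at the flavor of the axioms only, not the paper's actual definitions:

```latex
% Illustrative guesses at axioms for a range measure $\rho$ on graph
% operators $F$; not the paper's definitions.
\begin{itemize}
  \item \textbf{Locality:} if $F(x)_v$ depends only on features in the
        $r$-hop neighborhood of $v$, then $\rho(F) \le r$.
  \item \textbf{Homogeneity:} $\rho(cF) = \rho(F)$ for any scalar
        $c \neq 0$ (rescaling outputs does not change range).
  \item \textbf{Additivity:} the range of a combination of operators
        decomposes as a weighted combination of their ranges.
\end{itemize}
```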