Jacob Bamberger Profile
Jacob Bamberger

@jacobbamb

Followers
251
Following
560
Media
3
Statuses
81

Looking for geometry where it shouldn’t be. PhD student @UniofOxford. Interested in Geometric Deep Learning

Joined June 2020
@jacobbamb
Jacob Bamberger
4 months
🚨 ICML 2025 Paper 🚨 "On Measuring Long-Range Interactions in Graph Neural Networks" We formalize the long-range problem in GNNs: 💡Derive a principled range measure 🔧 Tools to assess models & benchmarks 🔬Critically assess LRGB 🧵 Thread below 👇 #ICML2025
3
16
52
@BenjMurrell
Ben Murrell
1 day
We figured out flow matching over states that change dimension. With "Branching Flows", the model decides how big things must be! This works wherever flow matching works, with discrete, continuous, and manifold states. We think this will unlock some genuinely new capabilities.
1
10
23
@olgazaghen
Olga Zaghen
8 days
Cool news: our extended Riemannian Gaussian VFM paper is out! 🔮 We define and study a variational objective for probability flows 🌀 on manifolds with closed-form geodesics. @FEijkelboom @a_ppln @CongLiu202212 @wellingmax @jwvdm @erikjbekkers 🔥 📜 https://t.co/PE6I6YcoTn
2
25
50
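Closed-form geodesics are what make manifold flow objectives like this tractable. As a minimal illustration (my own sketch, not the paper's code), spherical linear interpolation gives the geodesic path on the unit sphere that such probability flows can be built along:

```python
import numpy as np

def slerp(x0, x1, t):
    """Geodesic interpolation on the unit sphere (a closed-form geodesic),
    the kind of path manifold flow-matching objectives interpolate along."""
    omega = np.arccos(np.clip(x0 @ x1, -1.0, 1.0))   # angle between endpoints
    return (np.sin((1 - t) * omega) * x0 + np.sin(t * omega) * x1) / np.sin(omega)

x0 = np.array([1.0, 0.0, 0.0])
x1 = np.array([0.0, 1.0, 0.0])
xt = slerp(x0, x1, 0.5)
assert np.isclose(np.linalg.norm(xt), 1.0)           # the path stays on the sphere
```

Unlike the straight-line interpolant used in Euclidean flow matching, every point on this path remains on the manifold, so no projection step is needed.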
@osclsd
Oscar Davis
12 days
Introducing Generalised Flow Maps 🎉 A stable, few-step generative model on Riemannian manifolds 🪩 📚 Read it at: https://t.co/iCTHedwCxf 💾 Code: https://t.co/MeukcthFN2 @msalbergo @nmboffi @mmbronstein @bose_joey
3
22
112
@AlexanderTong7
Alex Tong
22 days
#AITHYRA, Vienna's new Biomedical AI institute, is hiring Postdocs! Come work with us. Openings in: 🔹 Generative AI 🔹 Multimodal ML 🔹 Virology 🔹 Enzyme Function Apply by Nov 20: https://t.co/8jNpkhdw1x #PostDoc #AI #ML #Vienna #ScienceJobs
2
15
58
@arroyo_alvr
Alvaro Arroyo
1 month
🚨 How do attention sinks relate to information flow in LLMs? We show how massive activations create attention sinks and compression valleys, revealing a three-stage theory of information flow in LLMs. 🧵 w/ Enrique* @fedzbar @epomqo @mmbronstein @ylecun @ziv_ravid
6
34
162
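The claimed mechanism can be illustrated in a toy single-head attention (an assumed simplification, not the authors' setup): give one token a massive key and every query piles its attention mass onto that token, i.e. it becomes a sink.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy attention: token 0 gets a "massive" key activation.
rng = np.random.default_rng(1)
q = np.abs(rng.standard_normal((5, 8)))   # non-negative queries for the demo
k = rng.standard_normal((5, 8))
k[0] = 5.0                                # massive activation at token 0
attn = softmax(q @ k.T)
print(attn[:, 0])                         # column 0 soaks up most of the mass
```

Here the sink emerges purely from score magnitudes; the paper's three-stage account of where such activations arise in real LLMs is beyond this sketch.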
@jacobbamb
Jacob Bamberger
1 month
Thanks @kwangmoo_yi! Thread coming soon 😁
@kwangmoo_yi
Kwang Moo Yi
1 month
Bamberger and Jones et al., "Carré du champ flow matching: better quality-generalisation tradeoff in generative models" Geometric regularization of the flow manifold. Boils down to adding anisotropic Gaussian Noise to flow matching training. Neat idea, enhances generalization.
0
2
49
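A rough sketch of that recipe, assuming a standard conditional flow matching interpolant (the paper's exact Carré du champ regularizer may differ): "anisotropic Gaussian noise" here means perturbing the interpolant with a per-dimension noise scale before regressing the velocity target.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_training_pair(x0, x1, t, noise_scale):
    """One conditional-flow-matching training example with anisotropic
    noise on the interpolant (illustrative, not the paper's exact scheme).
    noise_scale: per-dimension std devs -> anisotropic perturbation."""
    xt = (1 - t) * x0 + t * x1                          # linear interpolant
    xt = xt + noise_scale * rng.standard_normal(x0.shape)
    target = x1 - x0                                    # CFM regression target
    return xt, target

x0 = rng.standard_normal(3)                 # noise-side sample
x1 = np.array([1.0, 2.0, 3.0])              # data-side sample
scale = np.array([0.1, 0.0, 0.5])           # anisotropic: differs per axis
xt, v = cfm_training_pair(x0, x1, 0.5, scale)
assert v.shape == (3,) and np.allclose(v, x1 - x0)
```

With `noise_scale` set to zero this reduces to plain conditional flow matching; a direction-dependent scale is what makes the perturbation anisotropic.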
@jacobbamb
Jacob Bamberger
2 months
Time to give ChebNet another life? 🤔🧐 Interesting work! Congrats @haririAli95 @arroyo_alvr 🎉
@haririAli95
Ali Hariri
2 months
⭐️Return of ChebNet is a Spotlight at NeurIPS 2025! • Revives ChebNet for long-range graph tasks • Identifies instability in high-order polynomial filters ⚡ • Introduces Stable-ChebNet, a non-dissipative system for controlled, stable info flow! 📄
0
0
9
@leRoux_Scott
Scott le Roux
4 months
Interested in Long-Range Interactions? Come speak with us now (4:30pm-7pm) at our poster E-2802 @ #ICML2025 @benpgutteridge @jacobbamberger @mmbronstein @epomqo
0
10
22
@AlexanderTong7
Alex Tong
4 months
Come check out SBG happening now! W-115 11-1:30 with @charliebtan @bose_joey Chen Lin @leonklein26 @mmbronstein
0
19
91
@mmbronstein
Michael Bronstein
4 months
5. On Measuring Long-Range Interactions in Graph Neural Networks East Exhibition Hall A-B #E-2802 Wed 16 Jul 4:30 p.m. PDT @jacobbamberger @benpgutteridge @leRoux_Scott @epomqo
1
2
10
@LogConference
Learning on Graphs Conference 2025
4 months
We’re thrilled to share that the first in-person LoG conference is officially happening December 10–12, 2025 at Arizona State University https://t.co/Js9FSm6p3N Important Deadlines: Abstract: Aug 22 Submission: Aug 29 Reviews: Sept 3–27 Rebuttal: Oct 1–15 Notifications: Oct 20
logconference.org
Learning on Graphs Conference
2
29
82
@jacobbamb
Jacob Bamberger
4 months
@benpgutteridge @leRoux_Scott @mmbronstein @epomqo Presenting at poster session 4 east. 📅Wednesday, July 16th 🕓4:30-7:00 PM 📈#E-2802
0
1
4
@jacobbamb
Jacob Bamberger
4 months
🔑 Takeaways: ✅ Long-range can be formalized & measured ✅ Reveals new insights into models & datasets 🚀 Time to rethink evaluation: not just accuracy, but how models solve tasks
1
0
2
@jacobbamb
Jacob Bamberger
4 months
Why does this matter? "Long-range" is often just a dataset intuition or model label. We offer a measurable way to: 💡Understand models 🧪Test benchmarks 🦮Guide model design 🚀Go beyond performance gaps
1
0
2
@jacobbamb
Jacob Bamberger
4 months
We reassess LRGB, the go-to long-range benchmark, by checking if model range correlates with performance—expected for truly long-range tasks. Surprisingly: ❌ Peptides-func: negative correlation, suggests not long-range ✅ VOC: positive correlation, suggests long-range
2
0
2
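The correlation check described above can be reproduced in a few lines; the numbers below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    x, y = x - x.mean(), y - y.mean()
    return float((x @ y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# Hypothetical measurements: each pair is (measured model range, test
# score) for one model on a single benchmark.
model_range = [1.2, 1.8, 2.5, 3.1]
test_score  = [0.64, 0.61, 0.58, 0.55]
r = pearson(model_range, test_score)
print(r)  # negative here: longer range does not help on this toy data
```

A positive correlation would support calling the task long-range; a negative one, as in the Peptides-func finding, suggests the benchmark rewards something other than range.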
@jacobbamb
Jacob Bamberger
4 months
We validate our framework in three steps: 👷Construct synthetic tasks with analytically-known range 💯Show trained GNNs can approximate the true task range 🔬Use range as a proxy to evaluate real benchmarks
1
0
2
@jacobbamb
Jacob Bamberger
4 months
Our measure uses the model's Jacobian (for node tasks) and Hessian (for graph tasks) to quantify input-output influence, works with any distance metric, and supports analysis at all granularities—node, graph, and dataset.
1
0
3
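A minimal sketch of a Jacobian-based range measure for a node-level operator, using the ingredients named above (an illustrative definition built as a distance-weighted average of influence, not the paper's exact formula):

```python
import numpy as np

def range_measure(jacobian, dist):
    """Range of a node-level operator: average graph distance weighted by
    normalized Jacobian magnitudes (illustrative, assumed definition).
    jacobian: (n, n) array of |d out_i / d in_j|; dist: (n, n) distances."""
    w = np.abs(jacobian)
    w = w / w.sum()                  # normalize influence weights
    return float((w * dist).sum())   # expected interaction distance

# Toy path graph on 4 nodes: distance |i - j|.
n = 4
dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])

# A purely local operator (identity Jacobian) has range 0.
assert range_measure(np.eye(n), dist) == 0.0

# A one-hop averaging operator has a small positive range.
A = np.eye(n)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
print(range_measure(A / A.sum(1, keepdims=True), dist))
```

Because the measure only needs the Jacobian and a distance matrix, the same recipe works with any distance metric, and averaging over nodes or graphs gives the coarser granularities.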
@jacobbamb
Jacob Bamberger
4 months
We propose a formal range measure for any graph operator, derived from natural axioms (like locality, additivity, homogeneity) — and show it’s the unique measure satisfying these. This measure applies to both node- and graph-level tasks, and across architectures.
1
0
4
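Reading the axiom names off the post, one plausible formalization for a range measure $\rho$ on graph operators is sketched below. These forms are inferred from the names alone and are not taken from the paper; the exact statements there may differ.

```latex
% Hedged sketch: axiom forms guessed from their names in the post,
% not verified against the paper.
\begin{align*}
\text{Locality:}    &\quad \rho(F) = 0 \ \text{if each output of } F
                      \text{ depends only on its own node's input,}\\
\text{Homogeneity:} &\quad \rho(\lambda F) = \rho(F)
                      \quad \text{for all } \lambda \neq 0,\\
\text{Additivity:}  &\quad \rho(F + G) \ \text{is a weighted combination of }
                      \rho(F) \text{ and } \rho(G).
\end{align*}
```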