Filippo Guerranti
@guerrantif
Followers 137 · Following 176 · Media 12 · Statuses 54
PhD student @TU_Muenchen | ML for Graphs and Relational Data
Munich, Bavaria
Joined October 2012
Our #NeurIPS2025 paper, NicheFlow, is out. In his thread, @k_sakalyan walks through how we model the evolution of cellular microenvironments in spatial transcriptomics. Worth reading if you want the details. Thanks @ale__palmaa @fabian_theis @guennemann.
Our paper “Modeling Microenvironment Trajectories on Spatial Transcriptomics with NicheFlow” has been accepted at #NeurIPS2025! @ale__palmaa @guerrantif @fabian_theis @guennemann (@TU_Muenchen × @HelmholtzMunich × @MunichCenterML × @MdsiTum ) 🧵
0 replies · 0 reposts · 4 likes
Can we actually tell when LLMs know or don’t know? For questions with a single correct answer, current uncertainty estimates work. But once ambiguity enters and several answers are correct, current methods collapse, conflating model uncertainty with data uncertainty. w/ @dfuchsgruber @TomWollschlager @guennemann [1/4]🧵
2 replies · 11 reposts · 18 likes
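For context, a minimal sketch of the standard entropy-based decomposition that separates data (aleatoric) from model (epistemic) uncertainty given several sampled answer distributions. This is the textbook BALD-style mutual-information split, not necessarily the estimator the thread proposes; all names and shapes are illustrative.

```python
# Sketch: split total predictive uncertainty into aleatoric + epistemic
# parts from m sampled categorical distributions over c candidate answers.
import numpy as np

def decompose_uncertainty(p: np.ndarray, eps: float = 1e-12):
    """p: (m, c) array of sampled answer distributions (rows sum to 1)."""
    mean_p = p.mean(axis=0)
    total = -np.sum(mean_p * np.log(mean_p + eps))            # entropy of the mean
    aleatoric = -np.sum(p * np.log(p + eps), axis=1).mean()   # mean per-sample entropy
    epistemic = total - aleatoric                             # mutual information (>= 0)
    return total, aleatoric, epistemic

# Ambiguous question, samples that agree: high aleatoric, ~zero epistemic.
p = np.array([[0.5, 0.5], [0.5, 0.5]])
print(decompose_uncertainty(p))
```

When the sampled distributions agree, the epistemic term vanishes even if each distribution is itself spread out, which is exactly the model-vs-data distinction the tweet refers to.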
The hunt for increased WL expressivity has led to many new GNNs but limited real-world success. So what are we missing? Can we find a better objective? We answer these questions in our new paper: https://t.co/jrNtzM9mdY Joint work /w @TomWollschlager @guennemann 🧵 (1/6)
arxiv.org
Expressivity theory, characterizing which graphs a GNN can distinguish, has become the predominant framework for analyzing GNNs, with new models striving for higher expressivity. However, we argue...
1 reply · 10 reposts · 19 likes
Quick clarification on the first figure:
- Linearization does not preserve structure, but does not require model changes.
- Graph encoders preserve structure, but typically require model changes.
0 replies · 0 reposts · 0 likes
5/5 If you’re working on LLMs for structured data (graphs, trees, AMRs, and so on), stop by! We’ll be at the KDD Workshop on Structure Knowledge for LLM #SKnowLLM 📍Toronto 🇨🇦 · MTCC · Room 717 🕐 August 4th, 1–5pm SAFT: Structure-Aware Fine-Tuning for LLMs
0 replies · 1 repost · 2 likes
4/5 🏆 Results? SAFT improves across models and dataset scales:
· +3.5 BLEU vs prior SOTA (no extra data)
· Improves generalization on long + complex graphs
· Compatible with LLaMA, Qwen, Gemma, etc.
1 reply · 0 reposts · 1 like
3/5 🧃 SAFT = Structure-Aware Fine-Tuning
1. Convert AMRs into semantic-preserving graphs (SPGs)
2. Use the magnetic Laplacian to extract directional graph PEs (see the sketch below)
3. Inject PEs during LLM fine-tuning
✅ Graph-aware token representations, with no architecture changes.
1 reply · 0 reposts · 1 like
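A rough sketch of step 2: directional positional encodings from the magnetic Laplacian of a directed graph. This follows common conventions (charge parameter q, degrees of the symmetrized graph) and is not the SAFT reference implementation; all names are illustrative.

```python
# Sketch: magnetic-Laplacian positional encodings for a directed graph.
# Direction enters through complex phases, so the PEs can tell i->j from j->i.
import numpy as np

def magnetic_laplacian_pes(A: np.ndarray, q: float = 0.25, k: int = 8) -> np.ndarray:
    """A: dense directed adjacency (n x n), A[i, j] = 1 for edge i -> j.
    q: charge parameter controlling how strongly direction is encoded.
    k: number of low-frequency eigenvectors kept as PEs."""
    A_sym = ((A + A.T) > 0).astype(float)     # symmetrized edge support
    theta = 2.0 * np.pi * q * (A - A.T)       # antisymmetric phase matrix
    H = A_sym * np.exp(1j * theta)            # Hermitian "magnetic" adjacency
    D = np.diag(A_sym.sum(axis=1))            # degrees of the symmetrized graph
    L = D - H                                 # magnetic Laplacian (Hermitian, PSD)
    eigvals, eigvecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    pes = eigvecs[:, :k]
    # Stack real and imaginary parts so the PEs are real-valued features.
    return np.concatenate([pes.real, pes.imag], axis=1)

# Tiny directed path 0 -> 1 -> 2: direction shows up in the phases.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
print(magnetic_laplacian_pes(A, k=2).shape)  # (3, 4)
```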
2/5 Why does this matter? LLMs process sequences well, but they generally ignore structure. AMRs encode sentence semantics as graphs, but typical LLM pipelines flatten them into sequences. Can we inject graph structure during fine-tuning without changing the LLM architecture?
1 reply · 0 reposts · 1 like
Injecting structure into LLMs with no changes to the architecture? SAFT🧃 Structure-aware LLM fine-tuning for AMR-to-text. New SOTA + no model changes! 📍#SKnowLLM, KDD, Toronto 🇨🇦 🗓️ Aug 4, 1–5pm · Room 717 📄 https://t.co/sgBzmEHzDa w/ R. Kamel, @geisler_si, @guennemann 1/5
2 replies · 10 reposts · 18 likes
Do you think your LLM is robust? ⚠️ With current adversarial attacks it is hard to find out, since they optimize the wrong thing! We fix this with our adaptive, semantic, and distributional objective. By @guennemann's lab & @GoogleAI, w/ @ai_risks support. Here's how we did it. 🧵
2 replies · 14 reposts · 20 likes
Monday with @n_gao96 in the reading group: "Learning Equivariant Non-Local Electron Density Functionals" https://t.co/RfDtZSmO03 Join us on Zoom at 9am PT / 12pm ET / 6pm CET: https://t.co/R8d1EHxLCx
5 replies · 17 reposts · 99 likes
Deep learning with differential privacy can protect individuals' sensitive information. But what about groups of multiple users? We answer this question in our #NeurIPS2024 paper https://t.co/PemQWF3PAq Joint work w/ @mihail_sto @ArthurK48147 @guennemann. #NeurIPS (1/7)
1 reply · 12 reposts · 21 likes
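For background, the classical group-privacy bound that motivates the question: a standard DP fact, not the paper's contribution. Pure ε-DP degrades linearly in the group size k.

```latex
% If M is \varepsilon-DP for datasets differing in one record, then for
% datasets D, D' differing in at most k records (a "group" of k users),
% chaining the definition across intermediate datasets
% D = D_0, D_1, \dots, D_k = D' (each pair differing in one record) gives
\[
  \Pr[M(D) \in S] \;\le\; e^{k\varepsilon}\,\Pr[M(D') \in S]
  \quad \text{for all measurable } S,
\]
% i.e., M is k\varepsilon-DP with respect to groups of k users.
```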
Excited to share that my master's thesis on "molecular graph generation in latent Euclidean space" was accepted at @GRaM_workshop and selected for an oral presentation. If you are at @icmlconf on Saturday, make sure to stop by.
3 replies · 11 reposts · 36 likes
It could be pivotal for foundation models to support more than sequences (and grids), e.g., for reasoning over discrete structures. We propose Spatio-Spectral Graph Neural Networks (S²GNNs) that generalize key properties of State Space Models (SSMs) to directed graphs. #ICML 1/4
1 reply · 9 reposts · 16 likes
We introduce Spatio-Spectral GNNs (S²GNNs), an effective modeling paradigm built on the synergy of spatially and spectrally parametrized graph convolutions. S²GNNs generalize the spatial + FFT convolutions of State Space Models like H3/Hyena. Joint work w/ @ArthurK48147 @dan1elherbst @guennemann
3 replies · 28 reposts · 108 likes
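A toy sketch of the spatio-spectral idea: combine one hop of local (spatial) aggregation with a global filter applied in a truncated Laplacian eigenbasis. Purely illustrative under simple assumptions (mean aggregation, per-eigenvalue gains); not the official S²GNN code.

```python
# Sketch: one spatio-spectral layer = local message passing + global
# spectral filtering in the space of the k lowest Laplacian eigenvectors.
import numpy as np

def spatio_spectral_layer(X, A, eigvecs, W_sp, gains):
    """X: (n, d) node features; A: (n, n) adjacency;
    eigvecs: (n, k) Laplacian eigenvectors; W_sp: (d, d) spatial weights;
    gains: (k,) spectral gains (one per kept frequency)."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1.0)
    spatial = (A @ X) / deg @ W_sp                           # one hop, mean aggregation
    spectral = eigvecs @ (gains[:, None] * (eigvecs.T @ X))  # global eigenbasis filter
    return np.tanh(spatial + spectral)

# Tiny undirected triangle graph as a smoke test.
A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A               # combinatorial Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
k = 2                                        # keep the two lowest frequencies
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = spatio_spectral_layer(X, A, eigvecs[:, :k], W_sp=np.eye(4), gains=np.ones(k))
print(out.shape)  # (3, 4)
```

The spectral term gives every node access to global, low-frequency structure in one shot, which is what the analogy to SSM long convolutions is about.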
Check out our 25+ new models for faster, smaller, greener text-to-image generation! Smashed versions of the top-tier pretrained diffusion models. Run them in seconds from Hugging Face 🤗 https://t.co/i1niJTTRlW
2 replies · 8 reposts · 19 likes
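A hypothetical usage sketch with the diffusers library. The repo id below is a placeholder, since the tweet doesn't name a specific model.

```python
# Sketch: load a compressed text-to-image model from the Hugging Face Hub
# and sample an image. Requires a CUDA GPU; the repo id is a placeholder.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "org-name/smashed-model-id",   # placeholder, not a real model name
    torch_dtype=torch.float16,
)
pipe.to("cuda")
image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```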
Wrapping up an amazing time in NOLA after #NeurIPS2023 with (part of) the DAML team (@ludke_david, @ale__palmaa, @geisler_si, @YanScholten). Loved the atmosphere at the conference and the vibes of the city! Missing faces: @SchuchardtJan, @MarcelKollovieh, @Bertrand_Charp.
0 replies · 0 reposts · 19 likes
To the @NeurIPSConf Folks! Can path-aggregation increase the expressive power of GNNs? Come visit our poster at #GLFrontiers to find out (and to appreciate the creativity of @geneticpizza)! 11:30 in Hall C2 ;) #NeurIPS2023 #GLFrontiers
0 replies · 5 reposts · 14 likes
I will soon present our work on the adversarial robustness of graph contrastive learning methods at the #GLFrontiers workshop @ #NeurIPS2023 (Hall C2). Drop by our poster if you want to learn more!
Does graph contrastive learning truly improve adversarial robustness? We answer this question in our work at #GLFrontiers @ #NeurIPS2023. Paper: https://t.co/cB1JsrgiLK Poster: 16 Dec 4:30pm Hall C2 Joint work with @ZinuoYi, @AnnaStrvt, @RafikiMazen, @geisler_si, @guennemann
0 replies · 0 reposts · 14 likes
We present generative diffusion for TPPs that adds and deletes events, reasoning about each event individually. Our forecasting is also excellent! Find all the details at poster #602, Tue 10:45am-12:45pm! #NeurIPS23
https://t.co/5uhNDFBc30
arxiv.org
Autoregressive neural networks within the temporal point process (TPP) framework have become the standard for modeling continuous-time event data. Even though these models can expressively capture...
I am thrilled to present Add and Thin: Diffusion for Temporal Point Processes next week at #NeurIPS2023. Joint work with my fantastic collaborators Marin Bilos, @shchur_ @martenlienen @guennemann! Paper and code are available here: https://t.co/yZO5RxgR5m More in the 🧵.
0 replies · 1 repost · 8 likes
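A minimal sketch of the forward "add and thin" noising idea behind the Add and Thin tweet above: independently thin existing events and superpose noise events from a homogeneous Poisson process. The specifics (uniform noise times, a fixed keep probability) are illustrative assumptions, not the paper's exact noise schedule.

```python
# Sketch: one forward noising step for an event sequence on [0, T].
import numpy as np

rng = np.random.default_rng(0)

def add_thin_step(events: np.ndarray, keep_prob: float,
                  noise_rate: float, T: float) -> np.ndarray:
    kept = events[rng.random(events.shape[0]) < keep_prob]  # thin each event independently
    n_noise = rng.poisson(noise_rate * T)                   # number of inserted noise events
    noise = rng.uniform(0.0, T, size=n_noise)               # uniform times = homogeneous PP
    return np.sort(np.concatenate([kept, noise]))

events = np.sort(rng.uniform(0, 10, size=20))  # toy event sequence on [0, 10]
print(add_thin_step(events, keep_prob=0.8, noise_rate=0.5, T=10.0))
```

Because each event is kept, dropped, or inserted individually, the reverse model can reason about single events rather than whole sequences, which matches the "reasoning about each event individually" framing in the tweet.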