Steve Azzolin

@steveazzolin

Followers: 259 · Following: 755 · Media: 5 · Statuses: 98

ELLIS PhD student @ UNITN/UniCambridge || Prev. Visiting Research Student at UniCambridge || Prev. Research intern at SISLab

Joined January 2014
@steveazzolin
Steve Azzolin
3 days
RT @LogConference: 📢 Call for Tutorials – LoG 2025. Learning on Graphs Conference, Dec 10. 🗓 Key dates (AoE): • Se…
logconference.org
0 · 4 · 0
@steveazzolin
Steve Azzolin
4 days
In case you missed it, we're still taking self-nominations for reviewers at LoG 2025✍️.
@LogConference
Learning on Graphs Conference 2025
1 month
🚨 Reviewer Call — LoG 2025 📷 Passionate about graph ML or GNNs? Help shape the future of learning on graphs by reviewing for the LoG 2025 conference! 📷📷 RT & share! #GraphML #GNN #ML #AI #CallForReviewers
0 · 0 · 3
@steveazzolin
Steve Azzolin
22 days
RT @tgl_workshop: Join the Temporal Graph Learning Workshop at KDD 2025; we have an amazing program with great speakers and papers waiting…
sites.google.com
Key dates
0 · 4 · 0
@steveazzolin
Steve Azzolin
1 month
RT @unireps: Ready to present your latest work? The Call for Papers for #UniReps2025 @NeurIPSConf is open! 👉 Check the CFP:
0 · 11 · 0
@steveazzolin
Steve Azzolin
1 month
RT @LogConference: 🚨 Calling all ML & AI companies! The LOG 2025 sponsor page is now live: LOG is the go-to venue f…
logconference.org
0 · 9 · 0
@steveazzolin
Steve Azzolin
1 month
What can we do to make self-explanations less ambiguous? -> We propose automatically adapting explanations to the task by stitching SE-GNNs together with white-box models and combining their explanations.
1 · 0 · 2
@steveazzolin
Steve Azzolin
1 month
- Self-explanations can be "unfaithful" by design.
1 · 0 · 1
@steveazzolin
Steve Azzolin
1 month
- Models encoding different tasks can produce the same self-explanations, limiting the usefulness of explanations.
1 · 0 · 2
@steveazzolin
Steve Azzolin
1 month
Studying some popular models, we found that:
- The information that self-explanations convey can radically change based on the underlying task to be explained, which is, however, generally unknown.
1 · 0 · 2
@steveazzolin
Steve Azzolin
1 month
🧐 What are the properties of self-explanations in GNNs? What can we expect from them? We investigate this in our #ICML25 paper. Come have a chat at poster session 5, Thu 17, 11 am. w/ Sagar Malhotra @andrea_whatever @looselycorrect
2 · 4 · 21
@steveazzolin
Steve Azzolin
1 month
Big news: The first in-person event is coming 👀.
@LogConference
Learning on Graphs Conference 2025
2 months
We’re thrilled to share that the first in-person LoG conference is officially happening December 10–12, 2025 at Arizona State University.
Important deadlines:
- Abstract: Aug 22
- Submission: Aug 29
- Reviews: Sept 3–27
- Rebuttal: Oct 1–15
- Notifications: Oct 20
0 · 0 · 4
@steveazzolin
Steve Azzolin
2 months
This is an issue on multiple levels, and authors using those "shortcuts"👀 are equally responsible for this unethical behaviour.
@PMinervini
Pasquale Minervini
2 months
To clarify -- I didn't mean to shame the authors of these papers; the real issue is AI reviewers. What we see here is just the authors trying to defend against that in some way (the proper way would be identifying poor reviews and asking the AC or meta-reviewer to discard them).
1 · 2 · 6
@steveazzolin
Steve Azzolin
4 months
Happening tomorrow! Saturday, 10–12:30 am. Poster #508.
@looselycorrect
Stefano Teso
4 months
📣 New paper on #XAI for #GNNs:
👉 Can you trust your GNN explanations?
👉 How can you measure #faithfulness properly?
👉 Are all estimators the same?
👉 What's the link with #OOD generalization?
We look at these questions and more in Steve's latest #ICLR paper! Have a look!
0 · 3 · 7
@steveazzolin
Steve Azzolin
4 months
RT @diegocalanzone: In LoCo-LMs, we propose a neuro-symbolic loss function to fine-tune an LM to acquire logically consistent knowledge fro…
arxiv.org
Large language models (LLMs) are a promising venue for natural language understanding and generation. However, current LLMs are far from reliable: they are prone to generating non-factual...
0 · 4 · 0
@steveazzolin
Steve Azzolin
4 months
3. ITS ROLE IN OOD GENERALISATION
Domain-Invariant GNNs make predictions over a domain-invariant subgraph to achieve OOD generalisation. We show that unless this subgraph is also *sufficient*, DIGNNs are not domain-invariant. 5/5
0 · 0 · 0
@steveazzolin
Steve Azzolin
4 months
2. HOW GNNs AIM TO ACHIEVE IT
We highlight several architectural design choices of Self-Explainable GNNs favoring information leakage from nodes outside the explanation, and propose mitigations. 4/5
1 · 0 · 0
@steveazzolin
Steve Azzolin
4 months
We propose rethinking faithfulness from three essential angles:
1. HOW TO COMPUTE IT
Many ways to compute faithfulness exist, but we show:
- they are not interchangeable
- some of them do not have the desired semantics
3/5
1 · 0 · 0
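As background for "how to compute it": below is a minimal sketch of two common fidelity-style faithfulness estimators for a GNN explanation given as an edge mask, written in plain PyTorch. The interface is hypothetical (`model(x, edge_index)` returning class logits, `edge_mask` a boolean mask over edges, `fidelity_scores` an illustrative name); it shows the general family of estimators the tweet refers to, not the specific definitions analysed in the paper.

```python
import torch

def fidelity_scores(model, x, edge_index, edge_mask):
    """Sketch: two common fidelity-style faithfulness estimators for a GNN
    explanation expressed as a boolean mask over edges (hypothetical interface)."""
    model.eval()
    with torch.no_grad():
        logits_full = model(x, edge_index)                 # prediction on the full graph
        logits_expl = model(x, edge_index[:, edge_mask])   # explanation subgraph only
        logits_rest = model(x, edge_index[:, ~edge_mask])  # explanation removed

    pred = logits_full.argmax(dim=-1, keepdim=True)
    p_full = logits_full.softmax(-1).gather(-1, pred)
    p_expl = logits_expl.softmax(-1).gather(-1, pred)
    p_rest = logits_rest.softmax(-1).gather(-1, pred)

    # Sufficiency-style score: probability drop when keeping only the explanation
    # (small drop = the explanation alone suffices for the prediction).
    fid_minus = (p_full - p_expl).mean().item()
    # Necessity-style score: probability drop when removing the explanation
    # (large drop = the explanation was needed for the prediction).
    fid_plus = (p_full - p_rest).mean().item()
    return fid_minus, fid_plus
```

The thread's point is precisely that estimators like these two are not interchangeable: keeping only the explanation and removing it probe different notions of faithfulness and can disagree.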
@steveazzolin
Steve Azzolin
4 months
Paper: "Reconsidering Faithfulness in Regular, Self-Explainable and Domain Invariant GNNs".Link: Poster session: 26 April 10am. 2/5.
1 · 0 · 1
@steveazzolin
Steve Azzolin
4 months
Faithfulness of GNN explanations isn’t one-size-fits-all 🧢
Our latest @iclr_conf paper breaks it down across:
1. Evaluation metrics
2. Model implementations
3. OOD generalisation
w/ @AntonioLonga94 @looselycorrect @andrea_whatever
1/5
1 · 6 · 17