
Steve Azzolin
@steveazzolin
Followers: 259 · Following: 755 · Media: 5 · Statuses: 98
ELLIS PhD student @ UNITN/UniCambridge || Prev. Visiting Research Student at UniCambridge || Prev. Research intern at SISLab
Joined January 2014
RT @LogConference: 📢 Call for Tutorials – LoG 2025. Learning on Graphs Conference, Dec 10. 🗓 Key dates (AoE): • Se…
logconference.org
In case you missed it, we're still taking self-nominations for reviewers at LoG 2025 ✍️
🚨 Reviewer Call — LoG 2025. Passionate about graph ML or GNNs? Help shape the future of learning on graphs by reviewing for the LoG 2025 conference! RT & share! #GraphML #GNN #ML #AI #CallForReviewers
RT @tgl_workshop: Join the Temporal Graph Learning Workshop at KDD 2025; we have an amazing program with great speakers and papers waiting…
sites.google.com
RT @unireps: Ready to present your latest work? The Call for Papers for #UniReps2025 @NeurIPSConf is open! 👉 Check the CFP:
RT @LogConference: 🚨 Calling all ML & AI companies! The LOG 2025 sponsor page is now live: LOG is the go-to venue f…
logconference.org
🧐 What are the properties of self-explanations in GNNs? What can we expect from them? We investigate this in our #ICML25 paper. Come have a chat at poster session 5, Thu 17, 11 am. w/ Sagar Malhotra @andrea_whatever @looselycorrect
Big news: the first in-person event is coming 👀
We're thrilled to share that the first in-person LoG conference is officially happening December 10–12, 2025 at Arizona State University. Important deadlines:
• Abstract: Aug 22
• Submission: Aug 29
• Reviews: Sept 3–27
• Rebuttal: Oct 1–15
• Notifications: Oct 20
This is an issue on multiple levels, and authors using those "shortcuts"👀 are equally responsible for this unethical behaviour.
To clarify -- I didn't mean to shame the authors of these papers; the real issue is AI reviewers. What we see here is just authors trying to defend against that in some way (the proper way would be identifying poor reviews and asking the AC or meta-reviewer to discard them).
Happening tomorrow! Saturday 10–12:30 am. Poster #508.
📣 New paper on #XAI for #GNNs:
👉 Can you trust your GNN explanations?
👉 How can you measure #faithfulness properly?
👉 Are all estimators the same?
👉 What's the link with #OOD generalization?
We look at these questions and more in Steve's latest #ICLR paper! Have a look!
RT @diegocalanzone: In LoCo-LMs, we propose a neuro-symbolic loss function to fine-tune a LM to acquire logically consistent knowledge fro…
arxiv.org
Large language models (LLMs) are a promising venue for natural language understanding and generation. However, current LLMs are far from reliable: they are prone to generating non-factual...
Faithfulness of GNN explanations isn't one-size-fits-all 🧢 Our latest @iclr_conf paper breaks it down across:
1. Evaluation metrics
2. Model implementations
3. OOD generalisation
w/ @AntonioLonga94 @looselycorrect @andrea_whatever
1/5