Guy Bar-Shalom

@GuyBarSh

Followers: 73 · Following: 14 · Media: 6 · Statuses: 21

ML PhD student @TechnionLive - Learning under symmetries | Ex-research intern @Verily | Working with @HaggaiMaron

Israel
Joined February 2023
@GuyBarSh
Guy Bar-Shalom
4 months
📢 Introducing: "Learning on LLM Output Signatures for Gray-box LLM Behavior Analysis" [link]. A joint work with @ffabffrasca (co-first author) and our amazing collaborators: @dereklim_lzh @yoav_gelberg @YftahZ @el_yaniv @GalChechik @HaggaiMaron. 🧵 Thread
1
6
28
@GuyBarSh
Guy Bar-Shalom
28 days
RT @ffabffrasca: “Balancing Efficiency and Expressiveness: Subgraph GNNs with Walk-Based Centrality” is at #ICML2025! Drop by our poster o…
0
13
0
@GuyBarSh
Guy Bar-Shalom
4 months
We believe LOS-Net is a step toward transparent and scalable (gray-box) LLM analysis. Inspired by works by @WeijiaShi2, @anirudhajith42, @LukeZettlemoyer, @OrgadHadas, @boknilev, and more. 🔗 Paper: [link] 💻 Code: [link] #AI #LLMs
0
0
5
@GuyBarSh
Guy Bar-Shalom
4 months
📈 Results highlights:
• SOTA performance on hallucination & data contamination detection (DCD)
• Strong cross-LLM and cross-dataset generalization
See the figure for zero-shot cross-LLM generalization on the BookMIA DCD benchmark (ROC-AUC):
[image]
1
0
2
@GuyBarSh
Guy Bar-Shalom
4 months
LOSs are structured objects, which we process with a tailored, lightweight transformer dubbed LOS-Net:
- Learns behavior from output logits/probs
- No reference models
- Fast + generalizable
- Can implement existing methods
1
0
3
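A minimal sketch of what such a tailored transformer could look like, assuming each sequence position is featurized by a sorted top-k slice of its TDS plus its ATP; `TinyLOSNet` and all hyperparameters here are illustrative assumptions, not the paper's LOS-Net architecture.

```python
import torch
import torch.nn as nn

class TinyLOSNet(nn.Module):
    """Toy transformer over an LOS; illustrative, not the paper's model."""
    def __init__(self, top_k=50, d_model=64, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(top_k + 1, d_model)   # top-k TDS slice + ATP
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)            # e.g., hallucination score

    def forward(self, tds, atp):
        # tds: (B, T, top_k) top-k probs per position; atp: (B, T)
        x = self.embed(torch.cat([tds, atp.unsqueeze(-1)], dim=-1))
        h = self.encoder(x).mean(dim=1)              # pool over positions
        return self.head(h).squeeze(-1)
```

No reference model appears anywhere: the network sees only the output probabilities, which is what makes the approach gray-box.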
@GuyBarSh
Guy Bar-Shalom
4 months
Why LOS matters: Two tokens might have the same ATP but wildly different contexts, which can be captured via the TDS (see figure). Example:
- ATP = 0.5 with the rest flat = high certainty
- ATP = 0.5 with one strong competitor = high uncertainty
Same number, different story.
[image]
1
0
3
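A toy numeric illustration of this point, assuming a 101-token vocabulary; the margin to the strongest competitor used below is just one simple statistic a TDS-aware model could exploit, not the paper's method.

```python
import torch

# Case 1: ATP = 0.5, remaining mass spread flat over 100 alternatives.
flat = torch.cat([torch.tensor([0.5]), torch.full((100,), 0.005)])
# Case 2: ATP = 0.5 with one strong competitor at 0.45.
rival = torch.cat([torch.tensor([0.5, 0.45]), torch.full((99,), 0.05 / 99)])

for name, p in [("flat rest", flat), ("strong competitor", rival)]:
    margin = (p[0] - p[1:].max()).item()
    print(f"{name}: ATP={p[0].item():.2f}, margin={margin:.3f}")
# flat rest:         ATP=0.50, margin=0.495 -> sampled token dominates
# strong competitor: ATP=0.50, margin=0.050 -> near coin flip
```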
@GuyBarSh
Guy Bar-Shalom
4 months
Our key innovation: we formalize and process the LLM Output Signature (LOS), i.e. the pairing of:
• Token Distribution Sequence (TDS): next-token prob. distributions
• Actual Token Probabilities (ATP): the probs of the actually sampled tokens
[image]
1
0
4
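A minimal sketch of how one could extract such a signature, assuming a HuggingFace-style causal LM whose forward pass returns `logits`; `extract_los` and `top_k` are illustrative names, not the paper's API.

```python
import torch

@torch.no_grad()
def extract_los(model, input_ids, top_k=50):
    """Return (TDS, ATP) for a token sequence of shape (1, T)."""
    logits = model(input_ids).logits            # (1, T, vocab)
    probs = logits.softmax(dim=-1)              # next-token distributions
    # The distribution at position t predicts token t+1, so align them.
    pred = probs[0, :-1, :]                     # (T-1, vocab)
    actual = input_ids[0, 1:]                   # tokens actually produced
    # ATP: probability assigned to each actually sampled token.
    atp = pred.gather(1, actual.unsqueeze(1)).squeeze(1)   # (T-1,)
    # TDS: truncate each distribution to its top-k entries to stay light.
    tds = pred.topk(top_k, dim=-1).values                  # (T-1, top_k)
    return tds, atp
```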
@GuyBarSh
Guy Bar-Shalom
4 months
LLMs are powerful — but unreliable. They hallucinate, leak training data, and often behave unpredictably. We present a novel, learnable gray-box method to analyze LLMs using only their output probabilities — no access to internals.
1
0
3
@GuyBarSh
Guy Bar-Shalom
4 months
🎤 Spotlight paper @ #ICLR2025 Workshop -- "Quantify Uncertainty and Hallucination in Foundation Models: The Next Frontier in Reliable AI". 🗓️ Oral presentation: Today, April 27 @ 13:30, Topaz Concourse Room. Come by!
1
0
4
@GuyBarSh
Guy Bar-Shalom
8 months
Proud to see our work honored with the @neur_reps best paper award! This was joint work with @ytn_ym, in collaboration with @ffabffrasca @HaggaiMaron.
@neur_reps
Symmetry and Geometry in Neural Representations
8 months
The best paper award for the Topology and Graphs track goes to "Efficient Subgraph GNNs via Graph Products and Coarsening," presented by @GuyBarSh.
[image]
1
1
11
@GuyBarSh
Guy Bar-Shalom
8 months
RT @neur_reps: Our last contributed talk is from @GuyBarSh on "Efficient Subgraph GNNs via Graph Products and Coarsening".
0
1
0
@GuyBarSh
Guy Bar-Shalom
10 months
0
0
7
@GuyBarSh
Guy Bar-Shalom
10 months
We analyze the symmetry properties of our product graph and introduce a new node-marking technique. Additionally, we enhance the message-passing mechanism with novel equivariant updates, leading to notable performance improvements, demonstrated in our experimental results. [7/8]
[image]
1
0
5
@GuyBarSh
Guy Bar-Shalom
10 months
Our method reduces the complexity of Subgraph GNNs using a coarsened version of the original graph. This smaller, transformed graph is combined with the original graph via the graph Cartesian product, simplifying subgraph selection and controlling the size of the bag. [6/8]
[image]
1
0
7
@GuyBarSh
Guy Bar-Shalom
10 months
Building upon the connection between graph products and Subgraph GNNs explored in Subgraphormer (ICML 2024), we develop a more efficient Subgraph GNN. [5/8].
1
0
5
@GuyBarSh
Guy Bar-Shalom
10 months
In this paper, we devise a Subgraph GNN architecture that can flexibly generate and process variable-sized bags, and deliver strong experimental results while sidestepping intricate and lengthy training protocols. [4/8].
1
0
6
@GuyBarSh
Guy Bar-Shalom
10 months
Recent papers focus on scaling Subgraph GNNs, starting with basic random subgraph selection and extending to techniques that learn which subgraphs to select. However, these approaches often involve discrete sampling during training, making them harder to train. [3/8]
1
0
6
@GuyBarSh
Guy Bar-Shalom
10 months
Subgraph GNNs provide a more expressive architecture than MPNNs by representing graphs as collections (bags) of subgraphs. However, this expressiveness comes at the cost of higher computational complexity compared to MPNNs. [2/8]
1
0
5
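For context, a quick sketch of the bag-of-subgraphs view under the common node-deletion policy (one of several standard selection policies); a full bag costs one MPNN run per subgraph, which is the O(n) overhead the thread's method targets.

```python
import networkx as nx

G = nx.cycle_graph(5)
# Node-deletion bag: one subgraph per node, with that node removed.
bag = [G.subgraph(set(G) - {v}).copy() for v in G]
print(len(bag), [H.number_of_edges() for H in bag])  # 5 subgraphs
```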
@GuyBarSh
Guy Bar-Shalom
10 months
🎉 "A Flexible, Equivariant Framework for Subgraph GNNs via Graph Products and Graph Coarsening" is accepted to #NeurIPS2024! 🎉 ➡️ This is a joint effort with @ytn_ym and was made possible by amazing collaborators: @ffabffrasca @HaggaiMaron [1/8]
[image]
4
15
72