Shenghao Yang Profile
Shenghao Yang

@shenghao_yang

Followers: 172
Following: 275
Media: 6
Statuses: 48

PhD student @UWCheritonCS. Machine learning and optimization over graphs. https://t.co/zUfRiSeMCZ

Waterloo, Ontario
Joined February 2021
@shenghao_yang
Shenghao Yang
2 months
RT @kfountou: Positional Attention is accepted at ICML 2025! Thanks to all co-authors for the hard work (64 pages). If you’d like to read t….
0
10
0
@shenghao_yang
Shenghao Yang
7 months
RT @aseemrb: My PhD thesis is now available on UWspace: Thanks to my advisors @kfountou and Aukosh Jagannath for t….
0
2
0
@shenghao_yang
Shenghao Yang
9 months
RT @PetarV_93: "Energy continuously flows from being concentrated, to becoming dispersed, spread out, wasted and useless." ⚡➡️🌬️. Sharing o….
0
79
0
@shenghao_yang
Shenghao Yang
9 months
RT @kfountou: Positional Attention: Out-of-Distribution Generalization and Expressivity for Neural Algorithmic Reasoning. We propose calcul….
0
61
0
@shenghao_yang
Shenghao Yang
1 year
RT @kfountou: I wrote a blog @Medium on "Random Data and Graph Neural Networks". Link: . I cover a range of topics….
0
26
0
@shenghao_yang
Shenghao Yang
1 year
RT @backdeluca: For those participating in the Complex Networks in Banking and Finance Workshop, I’ll be presenting our work on Local Graph….
0
4
0
@shenghao_yang
Shenghao Yang
1 year
RT @kfountou: Paper: Analysis of Corrected Graph Convolutions. We study the performance of a vanilla graph convolution from which we remove….
0
4
0
@shenghao_yang
Shenghao Yang
1 year
RT @zdeborova: Emergence in LLMs is a mystery. Emergence in physics is linked to phase transitions. We identify a phase transition between….
0
260
0
@shenghao_yang
Shenghao Yang
2 years
RT @kfountou: Alright, I have some important news (at least for me). Now there exists an accelerated personalized PageRank method which is….
0
16
0
@shenghao_yang
Shenghao Yang
2 years
RT @siam_acda: SIAM Conference on Applied and Computational Discrete Algorithms (ACDA23). May 31 – June 2, 2023. Seattle, Washington, U.S. N….
0
4
0
@shenghao_yang
Shenghao Yang
3 years
RT @siam_acda: SIAM Conference on Applied and Computational Discrete Algorithms (ACDA23), May 31 – June 2, 2023.
0
6
0
@shenghao_yang
Shenghao Yang
3 years
RT @kfountou: Open problem: accelerated methods for l1-regularized PageRank.
Tweet media one
0
3
0
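As context for the open problem in the retweet above, here is a minimal, non-accelerated sketch of proximal gradient descent (ISTA) on a generic l1-regularized quadratic. The matrix Q, vector b, and weight lam are made-up placeholders standing in for the general structure of such objectives, not the actual l1-regularized PageRank formulation; whether a satisfactory accelerated method exists for the PageRank case is exactly the open question in the tweet.

# Generic ISTA sketch for  min_x 0.5 * x^T Q x - b^T x + lam * ||x||_1
# (placeholder Q, b, lam; not the PageRank objective itself).
import numpy as np

def soft_threshold(z, tau):
    # proximal operator of tau * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(Q, b, lam, steps=500):
    L = np.linalg.norm(Q, 2)          # Lipschitz constant of the gradient
    x = np.zeros(len(b))
    for _ in range(steps):
        grad = Q @ x - b              # gradient of the smooth quadratic part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# toy usage with a random symmetric positive-definite Q
rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5))
Q = M @ M.T + np.eye(5)
b = rng.normal(size=5)
print(ista(Q, b, lam=0.5))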
@shenghao_yang
Shenghao Yang
3 years
RT @kfountou: Does it matter where you place the graph convolutions (GCs) in a deep network? How much better is a deep GCN vs an MLP? When….
0
12
0
@shenghao_yang
Shenghao Yang
3 years
RT @HannesStaerk: New video with Prof. @kfountou explaining his paper "Graph Attention Retrospective" is now available!
0
7
0
@shenghao_yang
Shenghao Yang
3 years
RT @HannesStaerk: Tomorrow's GraphML discussion will be with Prof @kfountou about his paper "Graph Attention Retrospective": .
0
11
0
@shenghao_yang
Shenghao Yang
3 years
RT @kfountou: New paper "Graph Attention Retrospective". One of the most popular types of models is graph attention networks. These models….
0
43
0
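For readers unfamiliar with the model family the retweeted paper studies, the following is a minimal single-head graph-attention layer in the style of GAT, written from scratch in NumPy as an illustration. The shapes, weights, and toy graph are invented for the example and are not taken from the paper.

# Minimal single-head graph-attention (GAT-style) layer, NumPy only.
import numpy as np

def gat_layer(X, A, W, a):
    """X: (n, d) node features, A: (n, n) 0/1 adjacency with self-loops,
    W: (d, h) feature transform, a: (2*h,) attention vector."""
    H = X @ W                                   # transformed features, (n, h)
    n = H.shape[0]
    # pairwise attention logits e_ij = LeakyReLU(a^T [h_i || h_j])
    logits = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            logits[i, j] = a @ np.concatenate([H[i], H[j]])
    logits = np.where(logits > 0, logits, 0.2 * logits)   # LeakyReLU
    logits = np.where(A > 0, logits, -np.inf)             # keep only neighbours
    att = np.exp(logits - logits.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)                 # row-wise softmax
    return att @ H                                         # attention-weighted aggregation

# toy usage on a 4-node path-like graph
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
A = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]], float)
out = gat_layer(X, A, rng.normal(size=(3, 2)), rng.normal(size=(4,)))
print(out)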
@shenghao_yang
Shenghao Yang
4 years
How to do local clustering on complex hypergraphs without losing rich higher-order relations? Visit us at #Neurips2021 and check it out! When: Thu Dec 9, 11:30 AM - 1:00 PM (EST). Where: Slot D0. Joint work with @kfountou @PanLi90769257
Tweet media one
0
3
5
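A toy illustration of the "losing rich higher-order relations" issue the tweet above refers to (a generic example, not the method from the NeurIPS paper): reducing a hypergraph to pairwise edges via clique expansion can map two genuinely different hypergraphs to the same graph.

# Clique expansion flattens higher-order structure: the two hypergraphs
# below are different, yet they produce the same pairwise graph.
from itertools import combinations

def clique_expansion(hyperedges):
    """Replace each hyperedge by a clique on its nodes; return the edge set."""
    edges = set()
    for e in hyperedges:
        edges.update(frozenset(p) for p in combinations(sorted(e), 2))
    return edges

H1 = [{1, 2, 3}]                      # one 3-node hyperedge
H2 = [{1, 2}, {2, 3}, {1, 3}]         # three pairwise edges
print(clique_expansion(H1) == clique_expansion(H2))  # True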
@shenghao_yang
Shenghao Yang
4 years
RT @konstmish: Newton’s method is a great heuristic but it doesn’t work globally (and line search doesn’t fix it). Cubic Newton works globa….
0
43
0
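A self-contained 1-D sketch of the contrast drawn in the retweet above, using my own toy example rather than anything from the original thread: on f(x) = sqrt(1 + x^2), the plain Newton iteration reduces to x <- -x^3 and diverges from any |x0| > 1, while a cubic-regularized Newton step (minimizing the local second-order model plus a cubic penalty) stays controlled. The constant M below is a hand-picked upper bound on the Lipschitz constant of f''.

# Vanilla Newton vs cubic-regularized Newton on f(x) = sqrt(1 + x^2).
import numpy as np

f   = lambda x: np.sqrt(1.0 + x * x)
fp  = lambda x: x / np.sqrt(1.0 + x * x)          # f'
fpp = lambda x: (1.0 + x * x) ** (-1.5)           # f''

def cubic_step(g, h, M):
    """Globally minimize m(t) = g*t + 0.5*h*t^2 + (M/6)*|t|^3 over t."""
    best_t, best_v = 0.0, 0.0
    for sign in (1.0, -1.0):
        # stationarity on the half-line sign*t >= 0: g + h*t + sign*(M/2)*t^2 = 0
        for t in np.roots([sign * M / 2.0, h, g]):
            if abs(t.imag) < 1e-12 and sign * t.real >= 0.0:
                t = t.real
                v = g * t + 0.5 * h * t * t + (M / 6.0) * abs(t) ** 3
                if v < best_v:
                    best_t, best_v = t, v
    return best_t

x_newton, x_cubic, M = 2.0, 2.0, 1.0
for _ in range(6):
    x_newton = x_newton - fp(x_newton) / fpp(x_newton)    # equals -x^3 here
for _ in range(15):
    x_cubic = x_cubic + cubic_step(fp(x_cubic), fpp(x_cubic), M)

print("vanilla Newton after 6 steps:", x_newton)   # astronomically large
print("cubic Newton after 15 steps:", x_cubic)     # close to the minimizer 0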
@shenghao_yang
Shenghao Yang
4 years
… does not look like message-passing anymore. It remains very unclear to me how general SCMs (e.g., anything beyond linear SCMs) could be converted to standard message-passing GNNs. I would be very interested to find out.
0
0
1
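To spell out the gap this reply is pointing at (my own paraphrase, using generic SCM and message-passing notation rather than the notation of the paper under discussion): a linear SCM decomposes into per-parent messages, but a general structural assignment need not.

\[
\text{linear SCM:}\quad x_i=\sum_{j\in\mathrm{pa}(i)} B_{ij}\,x_j+\varepsilon_i
\;\;\Longrightarrow\;\;
m_{j\to i}=B_{ij}\,x_j,\qquad x_i=\varepsilon_i+\sum_{j\in\mathrm{pa}(i)} m_{j\to i},
\]
\[
\text{general SCM:}\quad x_i=f_i\!\bigl(x_{\mathrm{pa}(i)},\varepsilon_i\bigr),
\]
where $f_i$ may couple all of its parents jointly, so there is no obvious way to rewrite it as edgewise messages combined by a fixed permutation-invariant aggregator.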
@shenghao_yang
Shenghao Yang
4 years
… because Theorem 1 in the paper considers a sum of only n, instead of 2^n, univariate functions. Although the additional “noise” term may be used to account for terms that are left out by the linearly many univariate functions, the resulting computational layer …
Tweet media one
1
0
0
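The "n versus 2^n" count in this reply, written out explicitly (my paraphrase with a generic subset decomposition, not the paper's notation):

\[
\underbrace{\sum_{i=1}^{n} f_i(x_i)}_{n\ \text{univariate terms}}
\qquad\text{vs.}\qquad
\underbrace{\sum_{\emptyset\neq S\subseteq[n]} f_S(x_S)}_{2^{n}-1\ \text{interaction terms}},
\]
so a layer built from only $n$ univariate functions can at best absorb the omitted interaction terms into a single "noise" term, and that term then has to depend on the inputs itself.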