Aseem Baranwal
@aseemrb
Followers: 328 · Following: 835 · Media: 11 · Statuses: 481
PhD, ML on graphs @UWCheritonCS. Prev research intern @MSFTResearch, @GoogleAI.
Minkowski spacetime
Joined March 2010
I am hiring one PhD student. Subject: Reasoning and AI, with a focus on computational learning for long reasoning processes such as automated theorem proving and the learnability of algorithmic tasks. Preferred background: A mathematics student interested in transitioning to
13 replies · 97 reposts · 526 likes
On the Statistical Query Complexity of Learning Semiautomata: a Random Walk Approach Work with @ggiapitz, @EshaanNichani and @jasondeanlee. We prove the first SQ hardness result for learning semiautomata under the uniform distribution over input words and initial states,
1 reply · 11 reposts · 43 likes
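Background, since the tweet above is cut off: a semiautomaton is a finite set of states with a transition function over an input alphabet (a DFA without accepting states), and the learning task is to predict the final state reached from a random initial state on a random input word. A minimal sketch with a made-up 3-state machine, purely illustrative and not from the paper:

```python
import random

# Hypothetical toy semiautomaton (not from the paper): 3 states over {a, b}.
Q = [0, 1, 2]
Sigma = ["a", "b"]
delta = {
    (0, "a"): 1, (0, "b"): 0,
    (1, "a"): 2, (1, "b"): 0,
    (2, "a"): 2, (2, "b"): 1,
}

def run(state, word):
    """Process the word symbol by symbol, following the transition function."""
    for symbol in word:
        state = delta[(state, symbol)]
    return state

# The setting in the abstract: uniform distribution over input words and
# initial states; the label to predict is the final state.
word = [random.choice(Sigma) for _ in range(10)]
start = random.choice(Q)
print(start, "".join(word), "->", run(start, word))
```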
Waterloo Computational Learning Lab it is: https://t.co/HzPiVnnL6m! I rebranded our lab after six years to better reflect the work we do and will continue to do in the future.
0 replies · 1 repost · 8 likes
Presenting "Optimal Fair Learning Robust to Adversarial Distribution Shift" at #ICML2025 ( https://t.co/LhPEbnjNS8). East Exhibition Hall A-B #E-1001, 16th July, 4:30-7PM. Please have a look, and do stop by if it sounds interesting to you! RTs appreciated. Summary to follow
1 reply · 9 reposts · 17 likes
My former PhD student, Aseem Baranwal, won the PhD Dissertation Award from the Department of Computer Science at the University of Waterloo for his thesis, "Statistical Foundations for Learning on Graphs." Aseem is the first PhD student I graduated, and I couldn't be happier for
.@aseemrb (co-supervised with A. Jagannath) passed his PhD defence yesterday. Aseem is the first PhD student to graduate from our group. I am very happy for Aseem and the work that he has done. I would also like to thank the members of the committee, @xbresson, @thegautamkamath,
8 replies · 14 reposts · 229 likes
@kfountou This classifier is implementable using a message-passing GNN and achieves the best of both worlds (matching an MLP when the graph is noisy and a GCN when the graph is informative) across the range of edge/feature SNR on synthetic data. Work is pending to make it scalable for use on real data.
0 replies · 0 reposts · 1 like
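The tweet does not spell out the classifier, so the following is only a hypothetical sketch of the "best of both worlds" idea: a learned gate interpolating between a feature-only (MLP) branch and a graph-convolution (GCN) branch, so the model can fall back to the MLP when the graph is uninformative. The class name, the scalar gate, and the dense normalized adjacency are all illustrative assumptions, not the paper's construction.

```python
import torch
import torch.nn as nn

class GatedMLPGCN(nn.Module):
    """Hypothetical interpolation between an MLP and a GCN: a learned scalar
    gate mixes a feature-only branch with a neighborhood-averaged branch."""

    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.mlp = nn.Linear(in_dim, num_classes)  # feature-only branch
        self.gcn = nn.Linear(in_dim, num_classes)  # convolution branch
        self.gate = nn.Parameter(torch.zeros(1))   # learned mixing weight

    def forward(self, x, adj_norm):
        # x: (num_nodes, in_dim); adj_norm: dense normalized adjacency
        h_mlp = self.mlp(x)
        h_gcn = self.gcn(adj_norm @ x)
        alpha = torch.sigmoid(self.gate)           # 0 -> pure MLP, 1 -> pure GCN
        return (1 - alpha) * h_mlp + alpha * h_gcn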
@kfountou We analyze GNNs from this statistical perspective. We isolate the convolutions from the layers in GCN architectures to understand their variance-reduction effects on the data. For GAT, we identify regimes of node-feature SNR where attention does and does not help.
0 replies · 0 reposts · 0 likes
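A toy numpy illustration of the variance-reduction effect mentioned above, under illustrative assumptions (a one-dimensional two-class Gaussian mixture and a perfectly informative graph; this is not the paper's exact model): averaging over `deg` same-class neighbors keeps the class means in place while shrinking the within-class variance by roughly a factor of `deg`.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, deg = 1000, 1.0, 20            # nodes per class, class mean, degree
labels = np.repeat([0, 1], n)
x = np.where(labels == 0, -mu, mu) + rng.normal(size=2 * n)

# Perfectly informative graph: each node links to `deg` random same-class
# nodes (sampled with replacement, for simplicity).
neighbors = np.array([
    rng.choice(np.flatnonzero(labels == labels[i]), size=deg)
    for i in range(2 * n)
])

# One round of mean aggregation: the convolution, isolated from any layers.
x_conv = x[neighbors].mean(axis=1)

print("within-class variance before:", x[labels == 0].var())       # ~1
print("within-class variance after: ", x_conv[labels == 0].var())  # ~1/deg
```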
My PhD thesis is now available on UWspace: https://t.co/YrdI3Nupjq. Thanks to my advisors @kfountou and Aukosh Jagannath for their support throughout my PhD. We introduce a statistical perspective for node classification problems. Brief details are below.
3 replies · 2 reposts · 8 likes
Positional Attention: Out-of-Distribution Generalization and Expressivity for Neural Algorithmic Reasoning We propose calculating the attention weights in Transformers using only fixed positional encodings (referred to as positional attention). These positional encodings remain
10 replies · 61 reposts · 308 likes
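A minimal PyTorch sketch of the positional-attention idea as described in the tweet above, under the assumption that attention scores are computed from the fixed positional encodings through learned projections while the values still come from the input tokens; the module and parameter names are illustrative, and the random encodings stand in for whatever fixed encodings the paper uses.

```python
import torch
import torch.nn as nn

class PositionalAttention(nn.Module):
    """Attention whose weights depend only on fixed positional encodings,
    never on the input tokens; only the values are computed from the input."""

    def __init__(self, seq_len, pos_dim, model_dim):
        super().__init__()
        # Fixed encodings (a buffer, not a parameter, so they are not trained).
        self.register_buffer("pos", torch.randn(seq_len, pos_dim))
        self.wq = nn.Linear(pos_dim, pos_dim, bias=False)
        self.wk = nn.Linear(pos_dim, pos_dim, bias=False)
        self.wv = nn.Linear(model_dim, model_dim, bias=False)

    def forward(self, x):
        # x: (batch, seq_len, model_dim)
        q, k = self.wq(self.pos), self.wk(self.pos)
        scores = q @ k.T / (q.shape[-1] ** 0.5)   # input-independent scores
        attn = torch.softmax(scores, dim=-1)      # (seq_len, seq_len)
        return attn @ self.wv(x)                  # values come from the input
```

Note that the attention pattern is the same for every input of a given length, which is presumably the property the out-of-distribution analysis in the paper leans on.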
This paper was just accepted at NeurIPS. I am particularly happy about this because it originated as a course project by Robert in our Graph Neural Networks course.
Paper: Analysis of Corrected Graph Convolutions We study the performance of a vanilla graph convolution from which we remove the principal eigenvector to avoid oversmoothing. 1) We perform a spectral analysis for k rounds of corrected graph convolutions, and we provide results
1 reply · 3 reposts · 33 likes
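One natural reading of the construction in the tweet above, sketched in numpy under the assumption that "removing the principal eigenvector" means subtracting the top rank-one spectral component of the symmetrically normalized adjacency before convolving; the paper's exact normalization may differ.

```python
import numpy as np

def corrected_conv(adj, x, k=1):
    """k rounds of graph convolution with the top spectral component of the
    normalized adjacency removed, to mitigate oversmoothing."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1)))
    a_norm = d_inv_sqrt @ adj @ d_inv_sqrt

    # Principal eigenpair of the (symmetric) normalized adjacency.
    eigvals, eigvecs = np.linalg.eigh(a_norm)  # eigenvalues in ascending order
    v = eigvecs[:, -1:]                        # top eigenvector, shape (n, 1)

    a_corr = a_norm - eigvals[-1] * (v @ v.T)  # remove the rank-one component
    for _ in range(k):
        x = a_corr @ x
    return x
```

The intuition: without the correction, repeated multiplication by the normalized adjacency drives every node's features toward the principal eigenvector, which is exactly the oversmoothing the correction avoids.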
If you are at #ICML2024 and interested in the theory of graph neural networks, come by our poster 'Graph Attention Retrospective.' conference link: https://t.co/ZOURgL7Jmz paper: https://t.co/sB7nHPIWZd relevant blog: https://t.co/4rq6vkQFm0
1 reply · 8 reposts · 40 likes
I guess that as of today I can also announce that I have been promoted to the rank of Associate Professor. I am mostly making this post to publicly thank all the people who have supported me during my career, especially my first two PhD students, @aseemrb and @shenghao_yang (in
6 replies · 3 reposts · 42 likes
For those participating in the Complex Networks in Banking and Finance Workshop, I'll be presenting our work on Local Graph Clustering with Noisy Labels tomorrow at 9:20 AM EDT at the Fields Institute. Hope to see you there :) https://t.co/hzXIlTyKWt
arxiv.org: "The growing interest in machine learning problems over graphs with additional node information such as texts, images, or labels has popularized methods that require the costly operation of..."
0 replies · 4 reposts · 4 likes
Paper: Simulation of Graph Algorithms with Looped Transformers (revised version @icmlconf) + Multi-tasking (Remark 6.5) + Discussion on the role of ill-conditioning for the ability of Transformers to simulate algorithms. Link:
arxiv.org: "The execution of graph algorithms using neural networks has recently attracted significant interest due to promising empirical progress. This motivates further understanding of how neural networks..."
Paper: Simulation of Graph Algorithms with Looped Transformers Current empirical results illustrate promising scale generalization in executing classical graph algorithms. The predominant approach in these studies is to train a neural network to execute a step of a target
1 reply · 4 reposts · 23 likes
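For context, the generic looped-transformer pattern the thread refers to, as a minimal PyTorch sketch: one shared layer applied repeatedly, with each loop intended to correspond to one step of the target graph algorithm. This is the general pattern only, not the paper's construction; the layer choice and the loop-count rule are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoopedTransformer(nn.Module):
    """One shared transformer layer applied for a variable number of loops,
    so each loop can play the role of one step of the simulated algorithm."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)

    def forward(self, x, steps):
        # `steps` is set by the target algorithm's runtime, so the same
        # weights can generalize to larger inputs by looping longer.
        for _ in range(steps):
            x = self.layer(x)
        return x
```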
Paper: Analysis of Corrected Graph Convolutions We study the performance of a vanilla graph convolution from which we remove the principal eigenvector to avoid oversmoothing. 1) We perform a spectral analysis for k rounds of corrected graph convolutions, and we provide results
0 replies · 4 reposts · 22 likes
.@backdeluca is at ICLR and he will present his joint work with @shenghao_yang on "Local Graph Clustering with Noisy Labels". Date: Friday 10th of May. Time: 4:30pm - 6:30pm CEST. Place: Halle B #175.
0 replies · 3 reposts · 16 likes
.@backdeluca's paper got accepted at ICML. The final version will include a proof for multi-tasking and comments about training + other clarifications. I will post the updated version soon. I would like to thank the reviewers for their valuable feedback. The above extensions
Paper: Simulation of Graph Algorithms with Looped Transformers Current empirical results illustrate promising scale generalization in executing classical graph algorithms. The predominant approach in these studies is to train a neural network to execute a step of a target
1 reply · 3 reposts · 31 likes
If your model is weak, your paper might end up getting more citations because people are always happy to include your model as a baseline.
48 replies · 149 reposts · 2K likes