@satailor96
Shyam Tailor
3 years
I am delighted to announce our new work "Adaptive Filters and Aggregator Fusion for Efficient Graph Convolutions" w/ @FelixOpo @pl219_cambridge @niclane7. We provide code + pretrained models; find our blog post here:
@satailor96
Shyam Tailor
3 years
Perhaps surprisingly, we find that our method beats SOTA GNN architectures, while reducing memory consumption from O(E) to O(V). This was evaluated on large datasets taken from the Benchmarking GNNs work and OGB.
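To make the O(E) → O(V) reduction concrete, here is a rough back-of-envelope sketch. The graph size, head count and basis count below are hypothetical, chosen only for illustration; they are not numbers from the paper.

```python
# Hypothetical sizes, for illustration only (not from the paper).
num_nodes = 1_000_000                 # |V|
avg_degree = 50
num_edges = num_nodes * avg_degree    # |E| = 50M directed edges
bytes_per_float = 4

# An attention-style layer (e.g. GAT) materialises one score per edge
# per head during message passing: O(E) intermediate state.
heads = 8
per_edge_mb = num_edges * heads * bytes_per_float / 1e6      # ~1600 MB

# A layer that only stores per-node weighting coefficients: O(V) state.
num_bases = 8
per_node_mb = num_nodes * num_bases * bytes_per_float / 1e6  # ~32 MB

print(f"O(E) state: {per_edge_mb:.0f} MB vs O(V) state: {per_node_mb:.0f} MB")
```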
@satailor96
Shyam Tailor
3 years
Our approach is inspired by my experience working on signal processing for embedded devices. It has a convenient interpretation: each vertex effectively gets its own weight matrix. Alternative interpretations are given in the paper.
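A minimal PyTorch sketch of that interpretation: B shared basis weight matrices are combined per node by learned coefficients, so each vertex effectively applies its own filter while the extra state stays O(V). Class and hyperparameter names here are illustrative, and this is a simplified reading of the idea rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class AdaptiveFilterConv(nn.Module):
    """Sketch: B shared basis weight matrices, mixed per node by
    learned coefficients. Only the [N, B] coefficient tensor is extra
    state, so memory stays O(V) rather than O(E)."""

    def __init__(self, in_dim, out_dim, num_bases=4):
        super().__init__()
        self.num_bases, self.out_dim = num_bases, out_dim
        self.bases = nn.Linear(in_dim, num_bases * out_dim, bias=False)
        self.coeffs = nn.Linear(in_dim, num_bases)  # per-node mixing weights

    def forward(self, x, edge_index):
        n = x.size(0)
        # Transform once with all B bases: [N, B, out_dim].
        h = self.bases(x).view(n, self.num_bases, self.out_dim)
        # Mean-aggregate transformed neighbour features onto each node.
        src, dst = edge_index
        agg = torch.zeros_like(h).index_add_(0, dst, h[src])
        deg = torch.zeros(n, device=x.device).index_add_(
            0, dst, torch.ones(dst.numel(), device=x.device))
        agg = agg / deg.clamp(min=1).view(n, 1, 1)
        # Per-node coefficients pick each vertex's effective filter.
        w = self.coeffs(x)                       # [N, B]
        return torch.einsum("nb,nbo->no", w, agg)

# Toy usage: 5 nodes, 4 directed edges.
x = torch.randn(5, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
out = AdaptiveFilterConv(16, 32)(x, edge_index)  # -> [5, 32]
```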
@satailor96
Shyam Tailor
3 years
This approach alone seems to give *better* results than GAT, despite its resource efficiency. That is interesting in itself, as the asymptotic improvement in memory consumption is non-trivial.
@satailor96
Shyam Tailor
3 years
But that's not all! Using the ideas proposed by PNA, we can extend our model to use multiple aggregators. With little tuning, this gives SOTA results. This raises questions about what is actually important in GNN architecture design.
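A minimal sketch of that multi-aggregator extension in the same setting: compute several aggregations of the same basis-transformed messages, then let per-node coefficients weight every (aggregator, basis) pair. The sum/mean/max choice and all names are illustrative; see PNA and the paper for the principled combination.

```python
import torch

def multi_aggregate(h, edge_index, num_nodes):
    """Aggregate the same messages h[src] with several aggregators
    (sum, mean, max). A per-node coefficient tensor of shape
    [N, A * B] can then weight every (aggregator, basis) pair, so the
    extra state remains O(V). Tensor.index_reduce_ needs PyTorch >= 1.12."""
    src, dst = edge_index
    msgs = h[src]
    deg = torch.zeros(num_nodes, device=h.device).index_add_(
        0, dst, torch.ones(dst.numel(), device=h.device))
    deg = deg.clamp(min=1).view(-1, *([1] * (h.dim() - 1)))
    out_sum = torch.zeros_like(h).index_add_(0, dst, msgs)
    out_mean = out_sum / deg                     # mean reuses the sum
    out_max = torch.zeros_like(h).index_reduce_(
        0, dst, msgs, reduce="amax", include_self=False)
    return torch.stack([out_sum, out_mean, out_max], dim=1)  # [N, A=3, ...]
```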
@satailor96
Shyam Tailor
3 years
We also propose "aggregator fusion", which lets you use these extra aggregators with little latency penalty. Since unstructured sparse operations are memory-bound, we can do computation "for free" while the compute units would otherwise sit idle waiting on memory.
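A pedagogical sketch of why fusion is nearly free: in a single pass over a CSR graph, each neighbour row is loaded from memory once and fed to every aggregator, so extra aggregators only add arithmetic on data that is already loaded. This NumPy loop just illustrates the access pattern; the actual benefit comes from a fused kernel, and none of these names are from the paper's code.

```python
import numpy as np

def fused_csr_aggregate(indptr, indices, x):
    """One pass over CSR neighbour lists computing sum, mean and max
    together. Each x[j] is read once; since unstructured sparse
    aggregation is memory-bound, the extra ALU work is close to free."""
    n, d = len(indptr) - 1, x.shape[1]
    out_sum = np.zeros((n, d), dtype=x.dtype)
    # Zero is used as the max identity here, assuming non-negative
    # features (e.g. post-ReLU); a real kernel would use -inf.
    out_max = np.zeros((n, d), dtype=x.dtype)
    for i in range(n):
        for j in indices[indptr[i]:indptr[i + 1]]:
            row = x[j]                                    # single load
            out_sum[i] += row                             # aggregator 1
            np.maximum(out_max[i], row, out=out_max[i])   # aggregator 2
    deg = np.maximum(np.diff(indptr), 1).reshape(-1, 1)
    out_mean = out_sum / deg         # mean derived from sum, no extra pass
    return out_sum, out_mean, out_max
```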
@satailor96
Shyam Tailor
3 years
We designed our architecture with current hardware in mind. At present, GNN accelerators focus on sparse matrix multiplication (SpMM), and can't accelerate SOTA models.
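To see why concretely: with the per-node coefficients factored out, the expensive aggregation step is a plain sparse-dense matrix multiplication, the primitive these accelerators already target. A toy PyTorch illustration (the shapes and uniform edge weights are made up):

```python
import torch

# Toy sizes and uniform edge weights, for illustration only.
N, E, d_in, d_out = 1000, 5000, 64, 32
idx = torch.randint(0, N, (2, E))
A_hat = torch.sparse_coo_tensor(idx, torch.full((E,), 1.0 / 50), (N, N))
X = torch.randn(N, d_in)
W_b = torch.randn(d_in, d_out)        # one basis weight matrix

# Aggregation for basis b is plain SpMM: accelerator-friendly.
H_b = torch.sparse.mm(A_hat, X @ W_b)  # [N, d_out]

# The adaptive part then mixes the per-basis outputs with dense,
# node-local coefficients, which is cheap and equally hardware-friendly.
```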
@satailor96
Shyam Tailor
3 years
EGC can be accelerated by these designs, and is therefore likely to be very useful in practice!
@satailor96
Shyam Tailor
3 years
I'd like to thank @itsmebenday @jafermarq @chaitjo among others for their comments that helped to improve this work 🙂