@maurice_weiler
Maurice Weiler
7 months
All empirical evidence rather suggests that equivariance unlocks significant gains, at least when using the right type of architecture; with the wrong choices, the NNs end up over-constrained. Regular GCNNs essentially always work better (even though they are computationally expensive).
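To make the "regular GCNN" idea concrete, here is a minimal numpy sketch of a regular group convolution for the discrete rotation group C4 (90° rotations), not the continuous SE(3) setting discussed in the thread. The function names and the toy setup are mine, purely for illustration: the lifting layer correlates the input with all four rotated copies of one kernel, producing a feature map with an extra group axis.

```python
import numpy as np

def correlate2d_valid(img, ker):
    # naive "valid"-mode cross-correlation (no padding)
    H, W = img.shape
    h, w = ker.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * ker)
    return out

def lift_conv_c4(image, kernel):
    # lifting correlation for C4: correlate with each of the 4 rotated
    # kernel copies (np.rot90 rotates counter-clockwise) and stack the
    # results along a new group axis of size 4
    return np.stack([correlate2d_valid(image, np.rot90(kernel, k))
                     for k in range(4)])
```

Rotating the input by 90° then rotates each output map spatially and cyclically shifts the group axis, which is exactly the equivariance constraint a regular GCNN satisfies by construction.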

Replies

@maurice_weiler
Maurice Weiler
7 months
I might be missing something, but I don't see direct empirical evidence for this claim. Their approach works well, but is orthogonal to equivariance. Combining molecular conformer fields with equivariance would likely improve performance metrics further.
@tkipf
Thomas Kipf
7 months
"We [...] empirically show that explicitly enforcing roto-translation equivariance is not a strong requirement for generalization." "Furthermore, we also show that approaches that do not explicitly enforce roto-translation equivariance (like ours) can match or outperform…
@Parskatt
Johan Edstedt
7 months
@maurice_weiler What's the right arch for exact (non-discretized) SE(3) equivariance?
@maurice_weiler
Maurice Weiler
7 months
@Parskatt If the high feature dimensionality is no issue, I would still use regular SE(3) group convolutions. @m_finzi showed how you can Monte Carlo approximate the integrals such that continuous equivariance holds in expectation.
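The Monte Carlo idea mentioned above can be sketched in a toy setting: a group convolution over SO(2) of scalar signals on the circle, estimated by averaging over uniformly sampled group elements. This is my own illustrative reduction of the approach (in the spirit of Finzi et al.'s LieConv), not the actual SE(3) implementation; all names here are hypothetical.

```python
import numpy as np

def mc_group_conv(f, psi, thetas_out, samples):
    # Monte Carlo estimate of the SO(2) group convolution
    #   (f * psi)(theta) = integral over phi of f(phi) * psi(theta - phi)
    # using uniform samples phi_i on [0, 2*pi); f and psi are vectorized
    # callables. The estimator is unbiased, so equivariance holds in
    # expectation even though any single estimate is noisy.
    return np.array([2 * np.pi * np.mean(f(samples) * psi(th - samples))
                     for th in thetas_out])
```

One way to see the "equivariance in expectation" claim: rotating the signal and rotating the sample set by the same angle gives an estimator that is exactly equivariant, and the rotated samples have the same uniform distribution as the originals.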