All the empirical evidence rather suggests that equivariance unlocks significant gains.
At least when the right type of architecture is used; with the wrong choices, the NNs end up over-constrained. Regular GCNNs essentially always work better (even though they are computationally expensive) — see the sketch below.
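To make "regular GCNN" concrete, here is a minimal sketch of a regular group convolution (a lifting layer) for the discrete rotation group C4. This is illustrative only, not the exact architecture anyone in this thread is using; the class and variable names are my own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class C4LiftingConv(nn.Module):
    """Lifting layer of a regular GCNN for the cyclic group C4:
    convolve the input with all four 90-degree rotations of each
    filter, giving a feature map with an extra group axis."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)

    def forward(self, x):                                   # x: (B, in_ch, H, W)
        # One filter bank per rotation: 0, 90, 180, 270 degrees.
        ws = torch.cat([torch.rot90(self.weight, r, dims=(-2, -1))
                        for r in range(4)], dim=0)          # (4*out_ch, in_ch, k, k)
        y = F.conv2d(x, ws, padding=self.weight.shape[-1] // 2)
        B, _, H, W = y.shape
        return y.view(B, 4, -1, H, W)                       # (B, |C4|, out_ch, H, W)
```

Rotating the input by 90 degrees rotates the output spatially and cyclically shifts the group axis — that regular-representation equivariance is what the "regular GCNN" label refers to.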
I might be missing something, but I don't see direct empirical evidence for this claim. Their approach works well, but it is orthogonal to equivariance. Combining Molecular Conformer Fields with equivariance would likely improve the performance metrics further.
"We [...] empirically show that explicitly enforcing roto-translation equivariance is not a strong requirement for generalization."
"Furthermore, we also show that approaches that do not explicitly enforce roto-translation equivariance (like ours) can match or outperform…
@Parskatt
If the high feature dimension is no issue, I would still use regular SE(3) group convolutions.
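For context, the regular group convolution being referred to is usually written as (a standard textbook formulation, not tied to any specific paper in this thread; $f$ is the feature field, $\psi$ the filter):

$$(f \star \psi)(g) \;=\; \int_{SE(3)} f(h)\,\psi(h^{-1} g)\,\mathrm{d}h, \qquad g \in SE(3)$$

In a regular (as opposed to steerable) implementation, the rotation part $SO(3)$ is discretized into $N$ elements, so every spatial location carries $N$ copies of each channel — that $N$-fold blow-up is the "high feature dimension" in question.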
@m_finzi showed how you can Monte Carlo-approximate the integrals so that continuous equivariance holds in expectation.
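A minimal sketch of that Monte Carlo idea, in the spirit of LieConv (Finzi et al., 2020) but simplified to the 2D translation group for brevity; the class name and MLP shapes here are hypothetical:

```python
import torch
import torch.nn as nn

class MCGroupConv(nn.Module):
    """Monte Carlo group convolution: the integral over the group is
    replaced by an average over sampled group elements, so equivariance
    holds in expectation rather than exactly. The group here is 2D
    translations; `psi_mlp` is the continuous filter evaluated at
    relative group elements h_i^{-1} g_j."""

    def __init__(self, c_in, c_out):
        super().__init__()
        self.c_in, self.c_out = c_in, c_out
        self.psi_mlp = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                                     nn.Linear(32, c_in * c_out))

    def forward(self, coords, feats):
        # coords: (N, 2) sampled group elements, feats: (N, c_in)
        rel = coords[:, None, :] - coords[None, :, :]       # rel[j, i] = h_i^{-1} g_j
        psi = self.psi_mlp(rel).view(len(coords), len(coords),
                                     self.c_out, self.c_in)
        # (f * psi)(g_j) ~= 1/N * sum_i psi(h_i^{-1} g_j) f(h_i)
        return torch.einsum('jioc,ic->jo', psi, feats) / len(coords)

layer = MCGroupConv(c_in=8, c_out=16)
out = layer(torch.randn(128, 2), torch.randn(128, 8))       # (128, 16)
```

Because the filter only sees relative group elements, translating every sample leaves the output unchanged; with random sampling of group elements, the continuous-group version of this construction is equivariant in expectation rather than exactly.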