This paper "Optimal control of PDEs using physics informed neural networks" looks really nice:
Finally, an assessment of PINNs that includes a runtime comparison to classical adjoint methods!
The paper shows roughly similar performance numbers for PINNs vs. the adjoint-based solver, but note that the PINN runs on a V100 GPU while the adjoint solver runs on a single CPU core. The adjoint-based method also uses a sub-optimal optimizer (fixed-step gradient descent rather than L-BFGS).
I'm sure there are sub-optimal things in the PINN method, too, but my guess is that if you fixed these issues, the adjoint method would be 100x faster.
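To see why the optimizer choice matters so much: even on a toy ill-conditioned quadratic (my own sketch, nothing from the paper), fixed-step gradient descent needs orders of magnitude more iterations than L-BFGS, because its step size is capped by the stiffest direction. A minimal NumPy/SciPy illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Toy ill-conditioned quadratic: f(x) = 0.5 * (x0^2 + 100 * x1^2).
# PDE-constrained objectives are often similarly stiff.
scales = np.array([1.0, 100.0])

def f(x):
    return 0.5 * np.sum(scales * x**2)

def grad(x):
    return scales * x

x_init = np.array([1.0, 1.0])

# Fixed-step gradient descent: the step must satisfy step < 2/100
# for stability, so progress along the flat direction is very slow.
x = x_init.copy()
step = 0.01
gd_iters = 0
while f(x) > 1e-10 and gd_iters < 10_000:
    x = x - step * grad(x)
    gd_iters += 1

# L-BFGS builds a curvature estimate and rescales each direction.
res = minimize(f, x_init, jac=grad, method="L-BFGS-B", tol=1e-12)

print(f"gradient descent: {gd_iters} iterations")
print(f"L-BFGS:           {res.nit} iterations")
```

The gap only widens in higher dimensions, which is why fixed-step gradient descent is a weak baseline for the adjoint solver.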
Anyways, big kudos to the authors (Saviz Mowlavi and Saleh Nabi, who do not appear to be on Twitter) for taking a first crack at establishing some very important baselines for these new neural network methods for solving PDEs.
@shoyer
Cool paper, thanks for sharing! My first thought was that neural PDEs bridge these two methods, exactly satisfying the PDE but with no need for manually derived adjoints - what do you think?
@GJ_Both
Are you referring specifically to the Julia package when you say "neural PDEs"? I'm not sure what that means from a conceptual perspective.
For writing (discrete) adjoint codes, for sure auto-diff tools like Julia/PyTorch/JAX make that much easier.
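To make that concrete: below is the kind of discrete-adjoint code one would otherwise derive and write by hand, for a toy 1-D heat equation with periodic boundaries (my own sketch, not from the paper). An autodiff tool like JAX generates this reverse pass automatically from the forward solver.

```python
import numpy as np

def laplacian(u, dx):
    # Periodic second-difference stencil (a symmetric operator).
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

def solve(u0, nsteps, dt, dx, diffusivity):
    # Explicit-Euler heat equation: u <- u + dt * D * Lap(u).
    u = u0
    for _ in range(nsteps):
        u = u + dt * diffusivity * laplacian(u, dx)
    return u

def loss(u0, target, nsteps, dt, dx, diffusivity):
    # J(u0) = 0.5 * || u(T) - target ||^2
    r = solve(u0, nsteps, dt, dx, diffusivity) - target
    return 0.5 * np.dot(r, r)

def adjoint_gradient(u0, target, nsteps, dt, dx, diffusivity):
    # Hand-written discrete adjoint: propagate the final residual
    # backward through the transpose of each time step. Each step
    # applies A = I + dt * D * Lap; the periodic Laplacian is
    # symmetric, so A^T = A and the backward sweep reuses the
    # same stencil as the forward one.
    lam = solve(u0, nsteps, dt, dx, diffusivity) - target
    for _ in range(nsteps):
        lam = lam + dt * diffusivity * laplacian(lam, dx)
    return lam  # dJ/du0
```

A finite-difference check confirms `adjoint_gradient` matches dJ/du0; with JAX, the entire function would collapse to `jax.grad(loss)`, with no hand-derived transpose needed.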
@shoyer
On a similar topic (including a comparison to adjoint methods, showing that Physics-informed DeepONets are faster in this case), you may like this paper by Sifan Wang, Mohamed Aziz Bhouri &
@ParisPerdikaris
@unsorsodicorda
@ParisPerdikaris
Thanks for sharing. Definitely much less surprising to see a method using a pretrained emulator be effective for PDE-constrained optimization. Not sure it's entirely fair to call this approach faster, though, given that the comparison excludes the time spent training the model.