Abstract
In this paper we investigate the solution of inverse problems using neural network ansatz functions with generalized decision functions. The relevant observation for this work is that such functions can approximate typical test cases, such as the Shepp-Logan phantom, better than standard neural networks. Moreover, we show that the convergence analysis of numerical methods for solving inverse problems with shallow generalized neural network functions leads to more intuitive convergence conditions than for deep affine linear neural networks.
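To make the comparison concrete (an illustrative sketch in our own notation; the symbols $\sigma$, $\alpha_i$, $w_i$, $\theta_i$, $A_i$ and the quadratic choice of $\rho_i$ are not fixed by the abstract), a shallow network with generalized decision functions can be written as
\[
  f(x) \;=\; \sum_{i=1}^{N} \alpha_i \, \sigma\bigl(\rho_i(x)\bigr),
  \qquad
  \rho_i(x) \;=\; x^{\mathsf{T}} A_i x + w_i^{\mathsf{T}} x + \theta_i,
\]
where $\sigma$ is an activation function and $\alpha_i$, $A_i$, $w_i$, $\theta_i$ are trainable parameters. Setting $A_i = 0$ recovers the standard affine-linear case $\sigma(w_i^{\mathsf{T}} x + \theta_i)$, while a nonzero $A_i$ produces elliptical level sets, one plausible reason such ansatz functions suit the Shepp-Logan phantom, which is a superposition of ellipses.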
Acknowledgments
The authors would like to thank the referees for their valuable suggestions and their patience.
Notes
1. For a detailed exposition on generalized inverses, see [Citation29].
2. In the following, $\nabla$ and $\nabla^2$ (without subscripts) always denote derivatives with respect to an $n$-dimensional variable such as $x$, and $'$ and $''$ denote derivatives of a one-dimensional function.
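As a small worked example of this convention (illustrative only; the neuron $g$, activation $\sigma$, weights $w$, and offset $\theta$ are generic symbols, not taken from the text): for $g(x) = \sigma(w^{\mathsf{T}} x + \theta)$ with $x \in \mathbb{R}^n$, the chain rule couples the two notations,
\[
  \nabla g(x) \;=\; \sigma'\bigl(w^{\mathsf{T}} x + \theta\bigr)\, w,
  \qquad
  \nabla^2 g(x) \;=\; \sigma''\bigl(w^{\mathsf{T}} x + \theta\bigr)\, w w^{\mathsf{T}},
\]
so $\nabla$ and $\nabla^2$ act on the $n$-dimensional variable $x$, while $'$ and $''$ act on the one-dimensional activation $\sigma$.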