Optimal Convergence Rates Results for Linear Inverse Problems in Hilbert Spaces
In this article, we prove optimal convergence rates results for regularization methods for solving linear ill-posed operator equations in Hilbert spaces. The results generalize existing optimal convergence rates results to general source conditions, such as logarithmic source conditions. Moreover, we provide optimality results under variational source conditions and show the connection to approximate source conditions.
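For orientation, a general source condition can be sketched as follows (the notation $T$, $x^\dagger$, $\varphi$ is generic and not taken from the article):

```latex
% Source condition for the minimum-norm solution $x^\dagger$ of $Tx = y$,
% governed by an index function $\varphi$:
x^\dagger = \varphi(T^{*}T)\, w, \qquad \|w\| \le \rho .
% The H\"older-type case corresponds to $\varphi(\lambda) = \lambda^{\nu}$,
% whereas logarithmic source conditions, typical for severely ill-posed
% problems, use $\varphi(\lambda) = \bigl(\log(1/\lambda)\bigr)^{-p}$, $p > 0$.
```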
A Range Condition for Polyconvex Variational Regularization
In the context of variational regularization, it is a known result that, under suitable differentiability assumptions, source conditions in the form of variational inequalities imply range conditions, while the converse implication only holds under an additional restriction on the operator. In this article, we prove the analogous result for polyconvex variational regularization. More precisely, we show that the variational inequality derived by the authors in 2017 implies that the derivative of the regularization functional must lie in the range of the dual-adjoint of the derivative of the operator. In addition, we show how to adapt the restriction on the operator in order to obtain the converse implication.
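To fix ideas, in the classical smooth setting the two conditions take roughly the following form (generic notation $F$, $\mathcal{R}$, $u^\dagger$, assumed here for illustration; the polyconvex case requires the adaptations described above):

```latex
% Variational inequality (source condition in variational form), with
% Bregman distance $D_{\mathcal{R}}$ and constants $\beta_1 \in [0,1)$, $\beta_2 \ge 0$:
\langle \mathcal{R}'(u^\dagger),\, u^\dagger - u \rangle
  \le \beta_1\, D_{\mathcal{R}}(u, u^\dagger)
    + \beta_2\, \| F(u) - F(u^\dagger) \| .
% Range condition: there exists $\omega$ such that
\mathcal{R}'(u^\dagger) = F'(u^\dagger)^{*}\, \omega ,
% i.e.\ $\mathcal{R}'(u^\dagger) \in \operatorname{ran}\, F'(u^\dagger)^{*}$.
```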
Iteratively Refined Image Reconstruction with Learned Attentive Regularizers
We propose a regularization scheme for image reconstruction that leverages the power of deep learning while hinging on classic sparsity-promoting models. Many deep-learning-based models are hard to interpret and cumbersome to analyze theoretically. In contrast, our scheme is interpretable because it corresponds to the minimization of a series of convex problems. For each problem in the series, a mask is generated based on the previous solution to refine the regularization strength spatially. In this way, the model becomes progressively attentive to the image structure. For the underlying update operator, we prove the existence of a fixed point. As a special case, we investigate a mask generator for which the fixed-point iterations converge to a critical point of an explicit energy functional. In our experiments, we match the performance of state-of-the-art learned variational models for the solution of inverse problems. Additionally, we offer a promising balance between interpretability, theoretical guarantees, reliability, and performance.
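The alternation between convex subproblems and mask updates can be illustrated on a toy sparse-denoising example. This is a minimal sketch under our own assumptions: the mask rule `1/(1 + |x|/c)`, all parameter values, and the function names are illustrative and do not reproduce the article's learned mask generator.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t*|.| (elementwise); solves the weighted-l1 denoising subproblem."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def attentive_denoise(y, lam=0.5, c=0.1, n_iter=20):
    """Sketch of mask-refined regularization: each outer step solves a convex
    weighted-l1 problem exactly, then updates the spatial mask from the solution."""
    mask = np.ones_like(y)      # start from uniform regularization strength
    x = np.zeros_like(y)
    for _ in range(n_iter):
        # convex subproblem: argmin_x 0.5*||x - y||^2 + lam * sum(mask * |x|)
        x = soft_threshold(y, lam * mask)
        # attention step: relax regularization where structure was recovered
        mask = 1.0 / (1.0 + np.abs(x) / c)
    return x, mask

rng = np.random.default_rng(0)
signal = np.zeros(100)
signal[::10] = 2.0              # sparse spikes
y = signal + 0.1 * rng.standard_normal(100)
x, mask = attentive_denoise(y)
```

After a few iterations the mask is small at the spike locations, so the spikes are barely penalized and recovered almost without the bias of plain soft-thresholding, while flat regions keep full regularization strength.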
Frames for the Solution of Operator Equations in Hilbert Spaces with Fixed Dual Pairing
For the solution of operator equations, Stevenson introduced a definition of frames in which a Hilbert space and its dual are not identified. This means that the Riesz isomorphism is not used as an identification, which, for example, does not make sense for the Sobolev spaces H^1_0 and H^{-1}. In this article, we revisit the concept of Stevenson frames and introduce it for Banach spaces. This is equivalent to ℓ²-Banach frames. It is known that, if such a system exists, then by defining a new inner product and using the Riesz isomorphism, the Banach space is isomorphic to a Hilbert space. We then deal with the contrasting setting, where the Hilbert space and its dual are not identified and equivalent norms are distinguished, and show that in this setting the investigation of ℓ²-Banach frames makes sense.
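For context, the classical frame inequality and its fixed-dual-pairing variant can be written as follows (generic notation, not taken from the article):

```latex
% Classical frame condition in a Hilbert space $H$ with frame bounds $0 < A \le B$:
A \|f\|_{H}^{2}
  \le \sum_{\lambda \in \Lambda} |\langle f, \psi_\lambda \rangle_{H}|^{2}
  \le B \|f\|_{H}^{2} \qquad \text{for all } f \in H .
% In the Stevenson setting the inner product is replaced by the fixed dual
% pairing between $H$ and $H'$, with $\psi_\lambda \in H'$:
A \|f\|_{H}^{2}
  \le \sum_{\lambda \in \Lambda} |\langle f, \psi_\lambda \rangle_{H \times H'}|^{2}
  \le B \|f\|_{H}^{2},
% so no Riesz identification of $H$ with $H'$ is required.
```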
Continuous Generative Neural Networks: A Wavelet-Based Architecture in Function Spaces
In this work, we present and study Continuous Generative Neural Networks (CGNNs), namely, generative models in the continuous setting: the output of a CGNN belongs to an infinite-dimensional function space. The architecture is inspired by DCGAN, with one fully connected layer, several convolutional layers and nonlinear activation functions. In the continuous setting, the dimensions of the spaces of each layer are replaced by the scales of a multiresolution analysis of a compactly supported wavelet. We present conditions on the convolutional filters and on the nonlinearity that guarantee that a CGNN is injective. This theory finds applications in inverse problems and allows us to derive Lipschitz stability estimates for (possibly nonlinear) infinite-dimensional inverse problems with unknowns belonging to the manifold generated by a CGNN. Several numerical simulations, including signal deblurring, illustrate and validate this approach.
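The layer structure described above can be caricatured in a few lines of NumPy: a fully connected layer maps the latent code to coarse-scale coefficients, and each subsequent layer moves one wavelet scale finer via zero-insertion upsampling and filtering, followed by a strictly increasing nonlinearity. This is our own toy sketch, not the paper's architecture; the filter, layer sizes, and random weights are illustrative assumptions, and the paper's injectivity conditions on the filters are not checked here.

```python
import numpy as np

def upsample_conv(c, h):
    """One synthesis step of a single-filter multiresolution cascade:
    zero-insertion upsampling followed by filtering, doubling the resolution."""
    up = np.zeros(2 * len(c))
    up[::2] = c
    return np.convolve(up, h, mode="same")

def cgnn_sketch(z, n_layers=3, seed=0):
    """Toy CGNN-style generator: one fully connected layer, then convolutional
    layers across scales, with a strictly increasing (hence elementwise
    injective) leaky-ReLU nonlinearity."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((16, len(z)))   # fully connected: latent -> coarse scale
    c = W @ z
    h = np.array([0.25, 0.5, 0.25])         # lowpass-like synthesis filter (assumed)
    for _ in range(n_layers):
        c = upsample_conv(c, h)
        c = np.where(c > 0, c, 0.1 * c)     # leaky ReLU
    return c

x = cgnn_sketch(np.array([1.0, -0.5, 0.3, 0.2]))   # output at the finest scale
```

With three layers the 16 coarse coefficients are refined to 128 fine-scale samples, mimicking how a CGNN climbs the scales of a multiresolution analysis.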