Convergence Rates for Learning Linear Operators from Noisy Data
This paper studies the learning of linear operators between infinite-dimensional Hilbert spaces. The training data comprises pairs of random input vectors in a Hilbert space and their noisy images under an unknown self-adjoint linear operator. Assuming that the operator is diagonalizable in a known basis, this work solves the equivalent …
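To make the setting concrete, the following is a minimal sketch of the data model the abstract describes, in assumed notation (the symbols $T$, $x_n$, $\xi_n$, $\{e_j\}$, and $\lambda_j$ are illustrative, not necessarily the paper's own):

\[
  y_n = T x_n + \xi_n, \qquad n = 1, \dots, N,
\]

where $T$ is the unknown self-adjoint linear operator, the $x_n$ are random input vectors, and the $\xi_n$ represent observational noise. If $T$ is diagonalizable in a known orthonormal basis $\{e_j\}$, i.e., $T e_j = \lambda_j e_j$, then projecting each data pair onto $e_j$ yields the scalar observations

\[
  \langle y_n, e_j \rangle = \lambda_j \langle x_n, e_j \rangle + \langle \xi_n, e_j \rangle,
\]

so learning the operator reduces to estimating the eigenvalue sequence $(\lambda_j)$ from a family of noisy scalar regressions, one per basis mode.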