Abstract: The computation of the trace of the inverse of a large matrix, A, appears
in many statistics, machine learning, and quantum mechanics applications.
Our driving application comes from Lattice Quantum Chromodynamics (LQCD).
When the size of A does not allow the use of direct methods, most
applications rely on the Hutchinson method, a Monte Carlo variant that
requires the solution of a linear system with A per step.
The variance of the estimator determines the accuracy and therefore the
computational cost of the method. For Hutchinson, the variance equals the
squared Frobenius norm of the matrix inverse, excluding its diagonal.
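As a concrete illustration (our own minimal sketch, not the authors' production code), here is the Hutchinson estimator in NumPy with Rademacher probe vectors; a dense direct solve stands in for the iterative solvers required at LQCD scales, and all names are ours:

```python
import numpy as np

def hutchinson_trace_inv(A, n_samples, rng=None):
    """Monte Carlo estimate of trace(A^{-1}) with Rademacher probes.

    Each step costs one linear solve with A; the estimator's variance is
    governed by the off-diagonal mass of A^{-1}."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        x = np.linalg.solve(A, z)            # x = A^{-1} z, one solve per step
        total += z @ x                       # sample of z^T A^{-1} z
    return total / n_samples

# Small, well-conditioned SPD test matrix (purely illustrative)
rng = np.random.default_rng(0)
n = 50
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
est = hutchinson_trace_inv(A, n_samples=500, rng=1)
exact = np.trace(np.linalg.inv(A))
```

Because the diagonal of this test matrix's inverse dominates its off-diagonal, a few hundred samples already give a small relative error; matrices with heavier off-diagonal mass converge far more slowly, which is what motivates the variance reduction techniques below.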
In this talk, we present some variance reduction techniques we developed
over the last few years that speed up the Hutchinson method. The goal is
to approximate the off-diagonal elements of the inverse of A based either
on structural or on spectral information.
For LQCD, where the discretization space is a 4D regular lattice torus,
our Hierarchical Probing method produces a sequence of Hadamard vectors
that, when used in the Hutchinson method, hierarchically annihilate elements
of the inverse of A whose corresponding lattice sites lie increasingly farther
from each other. Owing to the decay of the Green's function, this approach
has yielded significant variance reduction.
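The flavor of the construction can be seen in a toy version (our own sketch, not the Hierarchical Probing algorithm itself): with a Sylvester-ordered Hadamard matrix, averaging over the first m = 2^l columns exactly cancels every off-diagonal pair (i, j) whose indices differ in any of their low l bits, and using all n columns recovers the trace exactly:

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_probe_trace_inv(A, m):
    """Deterministic trace(A^{-1}) estimate from the first m columns of a
    Sylvester Hadamard matrix (n must be a power of 2).

    For m = 2^l, off-diagonal pairs (i, j) differing in any of the low l
    bits of their indices are annihilated; m = n gives the exact trace."""
    n = A.shape[0]
    H = hadamard(n).astype(float)
    total = 0.0
    for k in range(m):
        z = H[:, k]
        total += z @ np.linalg.solve(A, z)   # z^T A^{-1} z
    return total / m

# Illustrative SPD test matrix of power-of-two size
rng = np.random.default_rng(0)
n = 32
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
exact = np.trace(np.linalg.inv(A))
```

Hierarchical Probing orders the probing vectors so that the annihilated pairs correspond to lattice sites at increasing distance on the 4D torus, rather than to the bit patterns of arbitrary indices as in this toy version.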
Our second approach is to deflate the smallest singular triplets of A.
The hope is that the variance of the deflated operator is (much) smaller. Contrary
to low rank matrix approximations, the above deflation may actually
increase variance. We provide an analysis based on standard random unitary
matrices, and derive criteria on when to expect improvement.
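A sketch of the deflated estimator under our own naming (a dense SVD stands in for the iterative computation of the triplets at scale): since trace(A^{-1}) = sum_i u_i^T v_i / sigma_i over the singular triplets, the k smallest triplets contribute exactly, and Hutchinson is run on the rank-k-deflated remainder:

```python
import numpy as np

def deflated_hutchinson(A, k, n_samples, rng=None):
    """Estimate trace(A^{-1}) as the exact contribution of the k smallest
    singular triplets plus Hutchinson on the deflated remainder."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    U, s, Vt = np.linalg.svd(A)                 # dense SVD; iterative at scale
    Uk, sk, Vk = U[:, n - k:], s[n - k:], Vt[n - k:, :].T  # k smallest triplets
    exact_part = np.sum(np.einsum('ij,ij->j', Uk, Vk) / sk)  # sum u_i.v_i/s_i
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)     # Rademacher probe
        x = np.linalg.solve(A, z)               # A^{-1} z
        x -= Vk @ ((Uk.T @ z) / sk)             # remove the rank-k deflated part
        total += z @ x
    return exact_part + total / n_samples

# Illustrative SPD test matrix
rng = np.random.default_rng(0)
n = 32
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
exact = np.trace(np.linalg.inv(A))
```

With k = n the stochastic term vanishes and the estimate is exact; for small k, whether the variance of the remainder actually drops is precisely the question the analysis above addresses.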
Finally, we touch upon the daunting computational task of computing
the smallest singular triplets of A, and the recent progress our group
has made in this direction, which is included in the software PRIMME.