Exploring and exploiting new structured classes of covariance and inverse covariance matrices

Speaker: Heather Battey (Department of Mathematics, Imperial College)

Abstract

Estimation of covariance and inverse covariance (precision) matrices is an essential ingredient of virtually every modern statistical procedure. When the dimension, p, of the covariance matrix is large relative to the sample size, the sample covariance matrix is inconsistent in non-trivial matrix norms, and its non-invertibility renders many techniques in multivariate analysis impossible. Structural assumptions are necessary in order to control the estimation error, even though this may come at the expense of some approximation error when the structural assumptions fail to hold. I will introduce new structured model classes for estimation of large covariance and precision matrices. These model classes result from imposing sparsity in the domain of the matrix logarithm. After studying the structure induced in the original and inverse domains, I will introduce estimators of both the covariance and precision matrix that exploit this structure. I derive the convergence rates of these estimators and show that they achieve a new minimax lower bound over classes of covariance and precision matrices whose matrix logarithm is sparse. The implication of this result is that the estimators are efficient and the minimax lower bound is sharp.
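To make the central idea concrete, the sketch below illustrates sparsification in the matrix-logarithm domain in Python. The hard-thresholding step and the threshold value are illustrative assumptions, not the estimators analysed in the talk; it is a minimal sketch of the general recipe (take the matrix logarithm of the sample covariance, impose sparsity there, and map back with the matrix exponential).

```python
# Minimal sketch: sparsify the matrix logarithm of the sample covariance,
# then map back to obtain covariance and precision estimates.
# The hard-thresholding rule here is a hypothetical placeholder.
import numpy as np
from scipy.linalg import logm, expm

def log_domain_estimate(X, threshold=0.1):
    """X is an n x p data matrix; returns (covariance, precision) estimates."""
    S = np.cov(X, rowvar=False)              # p x p sample covariance
    A = np.real(logm(S))                     # matrix logarithm (real part for numerical safety)
    # Illustrative structure-exploiting step: zero out small off-diagonal entries.
    A_sparse = np.where(np.abs(A) >= threshold, A, 0.0)
    np.fill_diagonal(A_sparse, np.diag(A))   # keep the diagonal intact
    Sigma_hat = expm(A_sparse)               # covariance estimate: exp of the sparsified logarithm
    Omega_hat = expm(-A_sparse)              # precision estimate: exp of the negated logarithm
    return Sigma_hat, Omega_hat

# Example with n = 200 observations of a p = 50 dimensional vector.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
Sigma_hat, Omega_hat = log_domain_estimate(X)
```

Note that inverting the sign of the sparse logarithm gives the precision estimate directly, since exp(-A) is the inverse of exp(A); this is one reason sparsity in the logarithmic domain treats the covariance and precision matrices symmetrically.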

Seminar organiser: Dr Alex Lewin