Abstract
For q-dimensional data, penalized versions of the sample covariance matrix are important when the sample size is small or modest relative to q. Since the negative log-likelihood under multivariate normal sampling is convex in Σ⁻¹, the inverse of the covariance matrix, it is common to consider additive penalties which are also convex in Σ⁻¹. More recently, Deng and Tsui and Yu et al. have proposed penalties which are strictly functions of the roots of Σ and are convex in log Σ, but not in Σ⁻¹. The resulting penalized optimization problems, though, are neither convex in Σ nor in Σ⁻¹. In this article, however, we show these penalized optimization problems to be geodesically convex in Σ. This allows us to establish the existence and uniqueness of the corresponding penalized covariance matrices. More generally, we show that geodesic convexity in Σ is equivalent to convexity in log Σ for penalties which are functions of the roots of Σ. In addition, when using such penalties, the resulting penalized optimization problem reduces to a q-dimensional convex optimization problem on the logs of the roots of Σ, which can then be readily solved via Newton's algorithm. Supplementary materials for this article are available online.
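The reduction described above can be sketched numerically. The snippet below is a minimal illustration, not the logconvx implementation: it assumes a ridge-type penalty λ‖log Σ‖²_F on the log-roots (in the spirit of Deng and Tsui) and assumes the penalized estimate shares the eigenvectors of the sample covariance, so that the problem collapses to a separable q-dimensional convex problem in x_i = log of the i-th root, with per-coordinate objective x_i + ℓ_i e^{-x_i} + λ x_i², solved by Newton's algorithm. The function name `penalized_cov` is hypothetical.

```python
import numpy as np

def penalized_cov(S, lam, tol=1e-12, max_iter=100):
    """Newton's method on the reduced problem in x = log roots of Sigma.

    Illustrative penalty (an assumption, in the spirit of Deng and Tsui):
    lam * ||log Sigma||_F^2.  Per coordinate the reduced objective is
    f(x_i) = x_i + ell_i * exp(-x_i) + lam * x_i**2, which is convex,
    so Newton's iterations converge to the unique minimizer."""
    ell, V = np.linalg.eigh(S)        # roots and vectors of the sample covariance
    ell = np.clip(ell, 0.0, None)     # guard against tiny negative roots
    x = np.zeros_like(ell)            # start the log-roots at 0 (i.e., Sigma = I)
    for _ in range(max_iter):
        g = 1.0 - ell * np.exp(-x) + 2.0 * lam * x   # gradient of f
        h = ell * np.exp(-x) + 2.0 * lam             # second derivative (> 0)
        step = g / h
        x -= step
        if np.max(np.abs(step)) < tol:
            break
    # rebuild Sigma from the penalized roots and the sample eigenvectors
    return V @ (np.exp(x)[:, None] * V.T)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 20))        # n = 5 observations of q = 20 variables
S = np.cov(X, rowvar=False, bias=True)  # singular sample covariance (q > n)
Sigma_hat = penalized_cov(S, lam=0.5)   # positive definite despite singular S
```

Note that even though S is singular here, every penalized root solves 1 − ℓ_i e^{-x_i} + 2λx_i = 0 and is therefore strictly positive, which is the practical payoff of the convex reduction.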
Supplementary Materials
The supplementary materials include the following: a supplementary manuscript reporting further simulation results, an R package logconvx for computing the proposed penalized covariance matrices, an RData workspace Sim.Rdata for reproducing the simulations reported in the manuscript, and an RData workspace Sonar.Rdata for reproducing the results for the example given in Section 7.