# Minimax risks for sparse regression: a review

Speaker(s):
Nicolas Verzelen (INRA Montpellier, France)
Date:
Wednesday, June 13, 2012 - 10:00am
Location:
Mohrenstrasse 39, Erhard-Schmidt-Hörsaal

Consider the standard Gaussian linear regression model ${\bf Y}={\bf X}\theta_0+\boldsymbol{\epsilon}$, where ${\bf Y}\in\mathbb{R}^n$ is a response vector and ${\bf X}\in\mathbb{R}^{n\times p}$ is a design matrix. Numerous works have been devoted to building efficient estimators of $\theta_0$ when $p$ is much larger than $n$. In such a situation, a classical approach amounts to assuming that $\theta_0$ is approximately sparse. In this talk, we will review recent minimax bounds over classes of $k$-sparse vectors $\theta_0$. Such bounds shed light on the limitations due to high dimensionality. Interestingly, an elbow effect occurs when the quantity $k\log(p/k)$ becomes large compared to the sample size $n$. Practical implications of this phenomenon will also be discussed.
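The boundary between the two regimes in the abstract is driven by the comparison of $k\log(p/k)$ with $n$. The following minimal Python sketch (with made-up values of $n$, $p$, and $k$, chosen purely for illustration) evaluates this quantity and labels which side of the elbow each sparsity level falls on:

```python
import math

def sparsity_complexity(k: int, p: int) -> float:
    """Illustrative quantity k * log(p/k) appearing in minimax bounds
    for k-sparse regression (values here are for illustration only)."""
    return k * math.log(p / k)

# Hypothetical setting: p = 10_000 candidate variables, n = 100 observations.
n, p = 100, 10_000
for k in (5, 20, 80):
    c = sparsity_complexity(k, p)
    # The "elbow" occurs roughly when k*log(p/k) exceeds n.
    regime = "ultra-high-dimensional" if c > n else "moderate"
    print(f"k={k:3d}: k*log(p/k) = {c:7.1f} vs n = {n} -> {regime}")
```

For small $k$ the complexity term stays below $n$ and estimation at the usual rate is possible; once $k\log(p/k)$ exceeds $n$, the minimax risk deteriorates sharply, which is the elbow effect the talk discusses.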