Fréjus
Meeting in Mathematical Statistics 2015

Recent advances in nonparametric and high-dimensional inference

December 14-18, 2015







Welcome to the 2015 edition of the Meeting in Mathematical Statistics. All previous editions of this workshop took place at CIRM (Luminy). It is our pleasure to announce that this year the workshop will take place in Fréjus, in the south of France, at the Villa Clythia. The goal of the workshop is to bring together leading and upcoming researchers in mathematical statistics.



Tutorials

  • Reiss, Markus (Humboldt-Universität zu Berlin, Germany)
    Optimal adaptation for early stopping rules: inverse problems and beyond

    Abstract:

    Most parametric statistical methods are based on the minimisation of an objective functional. Nowadays the parameter \(\theta\) is usually high-dimensional (or functional), and early stopping of an iterative minimisation algorithm prevents overfitting, e.g. a limited number of gradient-descent steps for the least squares method. After \(m\) iterations this yields a sequence of estimators \(\hat\theta_0, \hat\theta_1,\ldots,\hat\theta_m\). Often the estimators turn out to be ordered such that their stochastic error (variance) grows while their approximation error (bias) decreases. We ask for a stopping rule \(\hat m\), depending only on the already calculated estimators, such that \(\hat\theta_{\hat m}\) balances both error types. So far, adaptive methods based on model selection criteria or on Lepski's approach have required fixing the number \(m\) of iterations first and then choosing some \(\hat m\in\{0,1,\ldots,m\}\) adaptively. Often \(\hat m\) is much smaller than \(m\), so that computational effort is wasted, or \(m\) is chosen too small due to time/storage constraints.

    For prototypical inverse problems with observations \(Y=A\theta+\epsilon\), where \(A\) is a self-adjoint matrix and \(\epsilon\) is Gaussian white noise, we consider specific procedures such as spectral cut-off, Tikhonov and Landweber methods and develop a unifying theory. A sequential lower bound for spectral cut-off shows that no adaptive stopping is possible if only the estimators themselves are known. If we add the residuals (empirical risks) of the estimators, which are easy to calculate, then optimal adaptation, as obtained by classical adaptive methods without the stopping-time restriction, is indeed possible, but only up to a certain maximal smoothness. For a simple stopping rule a remarkably clear theory can be developed, with precise oracle inequalities, and we shall draw parallels with the discrepancy principle in deterministic inverse problems. Finally, the potential for other statistical problems is discussed. (A toy numerical sketch of such a residual-based stopping rule appears after the tutorial list below.)

    (joint work with Gilles Blanchard and Marc Hoffmann)

  • Rigollet, Philippe (MIT, USA)
    Statistical and computational tradeoffs in sparse PCA

    Abstract:

    High-dimensional statistics comes not only with statistical challenges but also with computational ones. In examples such as sparse linear regression, computationally efficient methods (e.g. the Lasso) achieve near-optimal performance guarantees under appropriate conditions. In the context of sparse PCA (principal component analysis) the story is somewhat different: there is a statistical price to pay for computational efficiency. To shed light on this phenomenon, we will
       (i) establish minimax rates for sparse PCA,
      (ii) provide efficient methods for sparse PCA using convex optimization, and
     (iii) prove computational lower bounds using polynomial-time reductions.
    No prior knowledge of computational complexity is required. (A small convex-relaxation sketch for sparse PCA also follows below.)
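
A minimal numerical sketch, in Python, of the residual-based early stopping discussed in the first tutorial. It only illustrates a discrepancy-principle-type rule for the Landweber iteration in the model \(Y=A\theta+\epsilon\); the function name landweber_early_stop, the threshold \(\tau\delta\) and all parameter choices below are illustrative assumptions, not the procedure analysed in the talk.

    import numpy as np

    def landweber_early_stop(A, Y, delta, tau=1.0, max_iter=1000):
        # Landweber iteration theta <- theta + step * A^T (Y - A theta),
        # stopped at the first m with ||Y - A theta_m|| <= tau * delta,
        # where delta is a bound on the noise level ||epsilon||.
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # step < 2 / ||A||^2 keeps the iteration convergent
        theta = np.zeros(A.shape[1])
        for m in range(max_iter):
            residual = Y - A @ theta
            if np.linalg.norm(residual) <= tau * delta:   # residual-based stopping rule
                return theta, m
            theta = theta + step * (A.T @ residual)       # one gradient step on ||Y - A theta||^2 / 2
        return theta, max_iter

    # Toy mildly ill-posed diagonal problem with Gaussian noise
    rng = np.random.default_rng(0)
    n = 200
    A = np.diag(np.arange(1, n + 1, dtype=float) ** -0.5)   # decaying singular values
    theta_true = np.arange(1, n + 1, dtype=float) ** -1.0   # smooth truth
    sigma = 0.01
    Y = A @ theta_true + sigma * rng.standard_normal(n)
    theta_hat, m_stop = landweber_early_stop(A, Y, delta=sigma * np.sqrt(n))
    print(m_stop, np.linalg.norm(theta_hat - theta_true))

Stopping once the residual falls to the expected noise level \(\sigma\sqrt{n}\) means the later iterates, whose variance would keep growing, are never computed.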
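
A similar sketch for the sparse PCA tutorial: a convex (semidefinite) relaxation in the spirit of d'Aspremont et al., maximising \(\langle S,Z\rangle-\lambda\sum_{ij}|Z_{ij}|\) over unit-trace positive semidefinite \(Z\). It assumes the cvxpy package; the function name sparse_pca_sdp, the penalty level lam and the spiked-covariance toy data are illustrative assumptions, not the methods of the talk.

    import numpy as np
    import cvxpy as cp

    def sparse_pca_sdp(S, lam):
        # SDP relaxation of sparse PCA:
        # maximize <S, Z> - lam * sum_ij |Z_ij|  subject to  Z psd, trace(Z) = 1.
        p = S.shape[0]
        Z = cp.Variable((p, p), symmetric=True)
        objective = cp.Maximize(cp.trace(S @ Z) - lam * cp.sum(cp.abs(Z)))
        cp.Problem(objective, [Z >> 0, cp.trace(Z) == 1]).solve()
        # the leading eigenvector of the relaxed solution estimates the sparse PC
        eigvals, eigvecs = np.linalg.eigh(Z.value)
        return eigvecs[:, -1]

    # Toy spiked covariance with a k-sparse leading component
    rng = np.random.default_rng(1)
    p, n, k = 30, 200, 5
    v = np.zeros(p)
    v[:k] = 1 / np.sqrt(k)                        # sparse spike
    Sigma = np.eye(p) + 2.0 * np.outer(v, v)
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    S = X.T @ X / n                               # sample covariance
    v_hat = sparse_pca_sdp(S, lam=np.sqrt(np.log(p) / n))
    print(abs(v @ v_hat))                         # overlap with the truth, up to sign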



Organizing Committee



Sponsors

