## Asymptotic Optimality of Conditioned Stochastic Gradient Descent (preprint)

With F. Portier. (PDF)

*Statistics and Computing*, 31. With F. Portier and J. Segers. (arXiv), (PDF), (code)

With F. Portier. (PDF)

- July 07, 2020: Poster, Machine Learning Summer School (MLSS) 2020, Tübingen (Germany), virtual. Slides
- August 25, 2020: Talk, Bernoulli-IMS One World Symposium 2020, virtual
- November 22, 2019: Workshop on Probabilistic methods in computational statistics, Télécom SudParis, Evry, France

- July 22, 2021: Poster session at International Conference on Machine Learning 2021, virtual
- July 01, 2021: Poster session for Best Student Paper competition at Extreme Value Analysis, virtual. Slides

- July 06, 2022: Talk, Seminar SIERRA, INRIA Paris, France. Slides
- April 15, 2021: Talk, Seminar for 2nd-year PhD candidates of the EDMH doctoral school, Télécom Paris, Saclay, France

graduate course, *Télécom Paris*

Many statistical learning problems (computing an estimator, a classifier, etc.) boil down to minimizing a functional, typically an empirical risk. Optimization methods are therefore central to the “practical” side of statistical learning. In this module, students will discover not only theoretical foundations that extend the optimization course taken in the first semester, but also techniques designed specifically for massive data.
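A minimal sketch of the setting described above: minimizing an empirical least-squares risk by stochastic gradient descent on synthetic data. All constants and the step-size schedule here are illustrative choices, not taken from the course.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data: y = X w* + noise (illustrative only).
n, d = 1000, 5
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

def sgd_least_squares(X, y, steps=5000, lr0=0.1):
    """Minimize the empirical risk (1/n) sum_i (x_i^T w - y_i)^2 by SGD."""
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, steps + 1):
        i = rng.integers(n)                    # draw one data point at random
        grad = 2.0 * (X[i] @ w - y[i]) * X[i]  # unbiased stochastic gradient
        w -= (lr0 / np.sqrt(t)) * grad         # decreasing step size
    return w

w_hat = sgd_least_squares(X, y)
print(np.linalg.norm(w_hat - w_true))
```

Each iteration touches a single data point, which is what makes such methods attractive when n is massive.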

graduate course, *Télécom Paris*

In many situations, the data available to the statistician are so complex that, at least at first analysis, they resist any parametric modelling. The objective of this course is to present less rigid statistical techniques, together with the theoretical issues inherent in their use: the price of the increased flexibility of non-parametric approaches is the risk of “over-fitting” the model to the data. Through examples, the course will cover the “minimax” perspective on non-parametric estimation, the “bias/variance” trade-off as a function of model “complexity”, and the statistical learning paradigm of “empirical risk minimization”.

graduate course, *Télécom Paris*

In this course, we will first discuss the simple linear (least squares) model before presenting the general framework, which includes logistic regression. We will then consider the estimation and testing problems in these models. Finally, we will present the problem of variable selection in such a context, relying mainly on L1 (Lasso) regularization/penalization and on greedy selection methods.
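As a hedged illustration of L1-penalized variable selection, here is a short proximal-gradient (ISTA) solver for the Lasso on synthetic sparse data; the penalty level and problem sizes are arbitrary choices for the sketch, not course material.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse ground truth: only the first 3 of 20 coefficients are nonzero.
n, d = 200, 20
w_true = np.zeros(d)
w_true[:3] = [3.0, -2.0, 1.5]
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def lasso_ista(X, y, lam=0.1, iters=500):
    """Minimize (1/2n) ||Xw - y||^2 + lam ||w||_1 by proximal gradient."""
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the smooth part
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - grad / L, lam / L)
    return w

w_hat = lasso_ista(X, y)
print(np.nonzero(np.abs(w_hat) > 1e-8)[0])
```

The soft-thresholding step sets small coefficients exactly to zero, which is how the L1 penalty performs variable selection.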

graduate course, *Télécom Paris*

The convergence modes for random sequences will be reviewed and further developed. Convergence in law will be particularly studied. Applications in asymptotic statistics will be discussed. The theory of discrete-time martingales is an essential tool for stochastic calculus. This theory will be introduced with applications to Markov chains.
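A consequence of the martingale theory mentioned above can be checked numerically: the sketch below simulates a simple symmetric random walk (a martingale) with a bounded stopping time and verifies E[S_T] ≈ 0 by Monte Carlo, as the optional stopping theorem predicts. The barrier and horizon are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Symmetric +/-1 random walk S_n started at 0; stopping time
# T = min(first hitting time of |S| = barrier, horizon) is bounded,
# so the optional stopping theorem gives E[S_T] = S_0 = 0.
n_paths, horizon, barrier = 20_000, 100, 3

steps = 2 * rng.integers(0, 2, size=(n_paths, horizon)) - 1
paths = steps.cumsum(axis=1)

hit = np.abs(paths) >= barrier
# Index of the first hit, or horizon - 1 when the barrier is never reached.
first = np.where(hit.any(axis=1), hit.argmax(axis=1), horizon - 1)
stopped = paths[np.arange(n_paths), first]

print(stopped.mean())
```

The empirical mean of the stopped walk is close to 0, up to Monte Carlo error of order 1/sqrt(n_paths).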

graduate course, *Télécom Paris*

The objective of this course is to give students the tools needed to practice statistics and data analysis, by explaining the fundamental concepts: models, estimators, point estimation, interval estimation, construction principles (moments, likelihood), quality criteria (bias/variance, the Cramér-Rao bound), decision procedures, and hypothesis testing. Lectures will be illustrated with practical examples from statistical data analysis.
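As a small worked example of point and interval estimation (the distribution and sample size are illustrative): for an i.i.d. Exp(rate) sample, the maximum-likelihood and method-of-moments estimators of the rate coincide at 1/mean, and the delta method (sqrt(n) (rate_hat - rate) → N(0, rate²)) yields an asymptotic confidence interval.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample from Exp(rate = 2); both the MLE and the method-of-moments
# estimator of the rate equal 1 / sample mean.
rate = 2.0
x = rng.exponential(scale=1.0 / rate, size=5000)

rate_hat = 1.0 / x.mean()          # point estimate

# Asymptotic 95% confidence interval via the delta method:
# the asymptotic variance is rate^2, so se ~ rate_hat / sqrt(n).
se = rate_hat / np.sqrt(len(x))
ci = (rate_hat - 1.96 * se, rate_hat + 1.96 * se)
print(rate_hat, ci)
```

The same two construction principles (likelihood, moments) happen to agree here; for other models they give different estimators, whose bias and variance can then be compared.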