Volume 23, 2019
Page(s): 841 - 873
Published online: 24 December 2019
Lp and almost sure rates of convergence of averaged stochastic gradient algorithms: locally strongly convex objective
Institut de Mathématiques de Toulouse, Université Paul Sabatier,
* Corresponding author: email@example.com
Accepted: 14 June 2019
A common problem in statistics is estimating the minimizer of a convex function. When dealing with large samples taking values in high-dimensional spaces, stochastic gradient algorithms and their averaged versions are efficient candidates. Indeed, (1) they do not require much computational effort, (2) they do not need to store all the data, which is crucial when dealing with big data, and (3) they allow the estimates to be updated simply, which is important when data arrive sequentially. The aim of this work is to give asymptotic and non-asymptotic rates of convergence of stochastic gradient estimates, as well as of their averaged versions, when the function to be minimized is only locally strongly convex.
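As an illustration of the kind of procedure studied here (a sketch, not the paper's own implementation), the following estimates the geometric median of a sample, a classic locally strongly convex problem in robust statistics, with a stochastic gradient algorithm using step sizes γₙ = c·n^(−α), α ∈ (1/2, 1), together with its Polyak-Ruppert averaged version. The constants `c` and `alpha` and the synthetic Gaussian data are illustrative choices, not values from the paper.

```python
import numpy as np

# Synthetic sample: the geometric median of a spherical Gaussian
# equals its mean, here (2, 2, 2), so we can check the estimate.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=(5000, 3))

theta = np.zeros(3)       # stochastic gradient iterate
theta_bar = np.zeros(3)   # averaged iterate (running mean of the theta's)
c, alpha = 1.0, 0.66      # step sizes gamma_n = c * n**(-alpha), alpha in (1/2, 1)

for n, x in enumerate(data, start=1):
    diff = x - theta
    norm = np.linalg.norm(diff)
    if norm > 0:
        # A stochastic gradient of theta -> E||X - theta|| is -(X - theta)/||X - theta||,
        # so the descent step moves theta toward the current observation.
        theta = theta + c * n ** (-alpha) * diff / norm
    # Online update of the average: theta_bar_n = theta_bar_{n-1} + (theta_n - theta_bar_{n-1})/n
    theta_bar += (theta - theta_bar) / n

print(theta_bar)  # should lie close to (2, 2, 2)
```

Note that each update uses a single observation and constant memory, matching points (1)-(3) above: the data never need to be stored, and a new observation only triggers one cheap update of `theta` and `theta_bar`.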
Mathematics Subject Classification: 62L12 / 62G35 / 62L20
Key words: Stochastic optimization / Stochastic gradient algorithm / averaging / Robust statistics
© The authors. Published by EDP Sciences, SMAI 2019
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.