Abstract
Online averaged stochastic gradient algorithms are attracting increasing attention since (i) they can quickly handle large samples taking values in high-dimensional spaces, (ii) they allow the data to be processed sequentially, and (iii) they are known to be asymptotically efficient. In this paper, we focus on giving explicit bounds on the quadratic mean error of the estimates, without assuming that the function to be minimized is strongly convex or has a bounded gradient.
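To fix ideas, the following is a minimal sketch of an online averaged stochastic gradient recursion (Polyak-Ruppert averaging). The linear model, noise level, step-size exponent, and iteration count are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
theta_star = np.array([1.0, -2.0, 0.5])  # hypothetical target parameter

theta = np.zeros(d)      # current Robbins-Monro iterate
theta_bar = np.zeros(d)  # online average of the iterates

n_iter = 20000
for n in range(1, n_iter + 1):
    # one streaming observation from an assumed linear model y = <x, theta*> + noise
    x = rng.standard_normal(d)
    y = x @ theta_star + 0.1 * rng.standard_normal()
    grad = (x @ theta - y) * x     # stochastic gradient of the squared loss
    gamma = 1.0 / n**0.66          # step size gamma_n = n^{-alpha}, alpha in (1/2, 1)
    theta = theta - gamma * grad   # stochastic gradient step
    theta_bar = theta_bar + (theta - theta_bar) / n  # running average, updated online
```

Each observation is used once and discarded, which is what makes the procedure sequential, and the averaged iterate `theta_bar` is the estimate whose quadratic mean error the paper bounds.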
Acknowledgements
The author would like to thank Pierre Tarrago for the many fruitful discussions that enabled him to substantially improve this work.
Disclosure statement
No potential conflict of interest was reported by the author(s).