Abstract
Online decision making aims to learn the optimal decision rule by making personalized decisions and updating the decision rule recursively. It has become easier than before with the help of big data, but new challenges also arise. Since the decision rule must be updated at each step, an offline update that uses all historical data is inefficient in both computation and storage. To this end, we propose a completely online algorithm that makes decisions and updates the decision rule via stochastic gradient descent. It is not only efficient but also applicable to general parametric reward models. Focusing on the statistical inference of online decision making, we establish the asymptotic normality of the parameter estimator produced by our algorithm and of the online inverse probability weighted value estimator used to estimate the optimal value. Online plug-in estimators for the variances of the parameter and value estimators are also provided and shown to be consistent, so that interval estimation and hypothesis testing are possible with our method. The proposed algorithm and theoretical results are validated through simulations and a real data application to news article recommendation.
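The abstract's main ingredients can be illustrated with a minimal sketch: an epsilon-greedy rule over a linear reward model (assumed here purely for illustration; the paper covers general parametric models), a one-sample stochastic gradient descent update of the parameters at each step, and a running inverse probability weighted (IPW) estimate of the value of the current greedy rule. All names, dimensions, and tuning constants below are hypothetical choices, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, eps, lr = 3, 5000, 0.1, 0.05   # context dim, horizon, exploration, step size
beta = np.zeros((2, d))              # working parameter estimates, one row per action
beta_true = np.array([[1.0, -0.5, 0.3],   # hypothetical true reward parameters
                      [0.2, 0.8, -0.4]])
ipw_sum = 0.0                        # running numerator of the IPW value estimator

for t in range(1, T + 1):
    x = rng.normal(size=d)                      # observe a context
    greedy = int(x @ beta[1] > x @ beta[0])     # current greedy decision
    # epsilon-greedy randomization keeps propensities bounded away from zero,
    # which the IPW estimator needs
    probs = np.full(2, eps / 2)
    probs[greedy] += 1 - eps
    a = rng.choice(2, p=probs)                  # make the (randomized) decision
    r = x @ beta_true[a] + rng.normal(scale=0.5)  # observe the reward
    # one SGD step on the squared-error loss for the chosen arm only
    beta[a] += lr * (r - x @ beta[a]) * x
    # online IPW update: reweight rewards earned while following the greedy rule
    ipw_sum += (a == greedy) * r / probs[a]
    value_hat = ipw_sum / t                     # current estimate of the optimal value
```

Because each step touches only the current observation, both the parameter update and the value estimate run in O(d) time and storage per step, which is the efficiency motivation stated in the abstract.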
Correction statement
This article was originally published with errors, which have now been corrected in the online version. Please see Correction https://doi.org/10.1080/01621459.2020.1915023.
Notes
1 For example, is not used in . The same rule applies to the true parameter β0 and the estimators , and that are introduced below.
2 To distinguish between the iid setting and the online decision making setting, we use the tilde symbol to mark the data, the conditional mean response model, and the loss functions from the iid setting, and we use b to denote the parameters.
3 Code for the numerical studies is at https://github.com/ideechy/Online-Decision-Making.