
Factorial Designs for Online Experiments

Pages 1–12 | Received 26 Apr 2018, Accepted 25 Nov 2019, Published online: 23 Jan 2020

