92 Views · 43 CrossRef citations to date · 0 Altmetric
Theory and Method

A One-Armed Bandit Problem with a Concomitant Variable

Michael Woodroofe

Pages 799-806 | Received 01 Oct 1977, Published online: 05 Apr 2012

Read on this site (6)

Wei Qian, Ching-Kang Ing & Ji Liu. (2023) Adaptive Algorithm for Multi-Armed Bandit Problem with High-Dimensional Covariates. Journal of the American Statistical Association, advance online publication, pages 1-13.
Haoyu Chen, Wenbin Lu & Rui Song. (2021) Statistical Inference for Online Decision Making via Stochastic Gradient Descent. Journal of the American Statistical Association 116:534, pages 708-719.
Haoyu Chen, Wenbin Lu & Rui Song. (2021) Statistical Inference for Online Decision Making: In a Contextual Bandit Setting. Journal of the American Statistical Association 116:533, pages 240-255.
Adam L. Smith & Sofía S. Villar. (2018) Bayesian adaptive bandit-based designs using the Gittins index for multi-armed trials with normally distributed endpoints. Journal of Applied Statistics 45:6, pages 1052-1076.
William F. Rosenberger, A. N. Vidyashankar & Deepak K. Agarwal. (2001) Covariate-Adjusted Response-Adaptive Designs for Binary Response. Journal of Biopharmaceutical Statistics 11:4, pages 227-236.
Murray K. Clayton. (1989) Covariate models for Bernoulli bandits. Sequential Analysis 8:4, pages 405-426.

Articles from other publishers (37)

Changxiao Cai, T. Tony Cai & Hongzhe Li. (2024) Transfer learning for contextual multi-armed bandits. The Annals of Statistics 52:1.
Amaury Gouverneur, Borja Rodríguez-Gálvez, Tobias J. Oechtering & Mikael Skoglund. (2023) Thompson Sampling Regret Bounds for Contextual Bandits with sub-Gaussian Rewards.
Thomas N. Sherratt & Erica O'Neill. (2023) Signal detection models as contextual bandits. Royal Society Open Science 10:6.
Anders Bredahl Kock, David Preinerstorfer & Bezirgen Veliyev. (2023) Functional Sequential Treatment Allocation with Covariates. Econometric Theory, pages 1-42.
Mashfiqui Rabbi, Predrag Klasnja, Tanzeem Choudhury, Ambuj Tewari & Susan Murphy. (2023) Digital Phenotyping and Mobile Sensing, pages 365-378.
Thomas N Sherratt & James Voll. (2022) On the strategic learning of signal associations. Behavioral Ecology 33:6, pages 1058-1069.
Yonatan Gur, Ahmadreza Momeni & Stefan Wager. (2022) Smoothness-Adaptive Contextual Bandits. Operations Research 70:6, pages 3198-3216.
Yonatan Gur & Ahmadreza Momeni. (2022) Adaptive Sequential Experiments with Unknown Information Arrival Processes. Manufacturing & Service Operations Management 24:5, pages 2666-2684.
Tze Leung Lai, Michael Benjamin Sklar & Huanzhong Xu. (2022) Bandit and covariate processes, with finite or non-denumerable set of arms. Stochastic Processes and their Applications 150, pages 1222-1237.
Preston Biro & Stephen G. Walker. (2022) A reinforcement learning based approach to play calling in football. Journal of Quantitative Analysis in Sports 18:2, pages 97-112.
Alexander Semenov, Maciej Rysz, Gaurav Pandey & Guanglin Xu. (2022) Diversity in news recommendations using contextual bandits. Expert Systems with Applications 195, pages 116478.
Sakshi Arya & Yuhong Yang. (2021) To update or not to update? Delayed nonparametric bandits with randomized allocation. Stat 10:1.
Milad Malekipirbazari & Ozlem Cavus. (2021) Risk-Averse Allocation Indices for Multiarmed Bandit Problem. IEEE Transactions on Automatic Control 66:11, pages 5522-5529.
Ying Zhong, L. Jeff Hong & Guangwu Liu. (2021) Earning and Learning with Varying Cost. Production and Operations Management 30:8, pages 2379-2394.
Hamsa Bastani, Mohsen Bayati & Khashayar Khosravi. (2021) Mostly Exploration-Free Algorithms for Contextual Bandits. Management Science 67:3, pages 1329-1349.
Mingyao Ai, Yimin Huang & Jun Yu. (2021) A non-parametric solution to the multi-armed bandit problem with covariates. Journal of Statistical Planning and Inference 211, pages 402-413.
Nikhil Bhat, Vivek F. Farias, Ciamac C. Moallemi & Deeksha Sinha. (2020) Near-Optimal A-B Testing. Management Science 66:10, pages 4477-4495.
Sakshi Arya & Yuhong Yang. (2020) Randomized allocation with nonparametric estimation for contextual multi-armed bandits with delayed rewards. Statistics & Probability Letters 164, pages 108818.
Yishay Mansour, Aleksandrs Slivkins & Vasilis Syrgkanis. (2020) Bayesian Incentive-Compatible Bandit Exploration. Operations Research 68:4, pages 1132-1161.
Erik Miehling, Roy Dong, Cedric Langbort & Tamer Basar. (2019) Strategic Inference with a Single Private Sample.
Mashfiqui Rabbi, Predrag Klasnja, Tanzeem Choudhury, Ambuj Tewari & Susan Murphy. (2019) Digital Phenotyping and Mobile Sensing, pages 277-291.
Sofía S. Villar & William F. Rosenberger. (2018) Covariate-adjusted Response-adaptive Randomization for Multi-arm Clinical Trials Using a Modified Forward Looking Gittins Index Rule. Biometrics 74:1, pages 49-57.
Ambuj Tewari & Susan A. Murphy. (2017) Mobile Health, pages 495-517.
Wei Qian & Yuhong Yang. (2016) Randomized allocation with arm elimination in a bandit problem with covariates. Electronic Journal of Statistics 10:1.
You Liang, Xikui Wang & Yanqing Yi. (2013) One-armed bandit process with a covariate. Annals of the Institute of Statistical Mathematics 65:5, pages 993-1006.
Alexander Goldenshluger & Assaf Zeevi. (2013) A Linear Response Bandit Problem. Stochastic Systems 3:1, pages 230-261.
Vianney Perchet & Philippe Rigollet. (2013) The multi-armed bandit problem with covariates. The Annals of Statistics 41:2.
Alexander Goldenshluger & Assaf Zeevi. (2011) A Note on Performance Limitations in Bandit Problems With Side Information. IEEE Transactions on Information Theory 57:3, pages 1707-1713.
Lihong Li, Wei Chu, John Langford & Xuanhui Wang. (2011) Unbiased offline evaluation of contextual-bandit-based news article recommendation algorithms.
Adam M. Sykulski, Niall M. Adams & Nicholas R. Jennings. (2010) On-Line Adaptation of Exploration in the One-Armed Bandit with Covariates Problem.
Alina Beygelzimer & John Langford. (2009) The offset tree for learning with partial labels.
Chih-Chun Wang, Sanjeev R. Kulkarni & H. Vincent Poor. (2005) Arbitrary side observations in bandit problems. Advances in Applied Mathematics 34:4, pages 903-938.
Chih-Chun Wang, S.R. Kulkarni & H.V. Poor. (2005) Bandit problems with side observations. IEEE Transactions on Automatic Control 50:3, pages 338-355.
Chih-Chun Wang, S.R. Kulkarni & H.V. Poor. (2003) Bandit problems with arbitrary side observations.
Chih-Chun Wang, S.R. Kulkarni & H.V. Poor. (2002) Bandit problems with side observations.
Yonatan Gur & Ahmadreza Momeni. (2021) Adaptive Sequential Experiments with Unknown Information Arrival Processes. SSRN Electronic Journal.
Sheng Qiang & Mohsen Bayati. (2016) Dynamic Pricing with Demand Covariates. SSRN Electronic Journal.
