Theory and Methods

Federated Offline Reinforcement Learning

Received 11 Jun 2022, Accepted 19 Jan 2024, Published online: 01 Apr 2024

Abstract

Evidence-based, data-driven dynamic treatment regimes are essential for personalized medicine, and they can benefit from offline reinforcement learning (RL). Although massive healthcare data are available across medical institutions, privacy constraints prohibit these data from being shared, and the data are heterogeneous across sites. Federated offline RL algorithms are therefore necessary and promising for addressing these problems. In this article, we propose a multi-site Markov decision process model that allows for both homogeneous and heterogeneous effects across sites and makes analysis of site-level features possible. We design the first federated policy optimization algorithm for offline RL with a sample complexity guarantee. The proposed algorithm is communication-efficient: it requires only a single round of communication, in which summary statistics are exchanged. We give a theoretical guarantee for the proposed algorithm, showing that the suboptimality of the learned policies is comparable to the rate achievable if the data were not distributed. Extensive simulations demonstrate the effectiveness of the proposed algorithm, and the method is applied to a multi-site sepsis dataset to illustrate its use in clinical settings. Supplementary materials for this article are available online, including a standardized description of the materials needed to reproduce the work.
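The algorithm itself appears in the full paper, not on this page, but the single-round, summary-statistics style of communication the abstract describes can be illustrated with a minimal sketch. The sketch below assumes linear value-function approximation with LSTD-style estimation; the function names (local_summary, server_solve), the discount factor, and the synthetic features are all illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
GAMMA = 0.99  # discount factor (illustrative choice, not from the paper)

def local_summary(phi, phi_next, rewards, gamma=GAMMA):
    """Summary statistics one site computes from its offline trajectories.

    phi      : (n, d) features of observed state-action pairs
    phi_next : (n, d) features of the next state under the target policy
    rewards  : (n,)   observed rewards
    Only the d x d matrix A_k and d-vector b_k leave the site, not raw data.
    """
    A_k = phi.T @ (phi - gamma * phi_next)
    b_k = phi.T @ rewards
    return A_k, b_k

def server_solve(summaries, ridge=1e-3):
    """Single-round aggregation: pool site statistics, solve for weights."""
    A = sum(A_k for A_k, _ in summaries)
    b = sum(b_k for _, b_k in summaries)
    d = A.shape[0]
    # A small ridge term keeps the pooled linear system well conditioned.
    return np.linalg.solve(A + ridge * np.eye(d), b)

# Toy run: three sites with synthetic offline data and d = 4 features.
d = 4
summaries = []
for _ in range(3):
    n = 200
    phi = rng.normal(size=(n, d))
    phi_next = rng.normal(size=(n, d))
    rewards = rng.normal(size=n)
    summaries.append(local_summary(phi, phi_next, rewards))

w = server_solve(summaries)  # weights of the pooled linear value estimate
print("pooled value weights:", w)
```

Because each site transmits only A_k and b_k (on the order of d^2 numbers), the communication cost is independent of the local sample size and no patient-level records ever leave a site, which is the property the abstract emphasizes.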

Supplementary Materials

The supplementary materials contain additional supporting material and the code for this article.

Acknowledgments

The authors thank the editor, the associate editor, and four referees for their constructive comments and suggestions.

Disclosure Statement

The authors report there are no competing interests to declare.

Funding

Tianxi Cai acknowledges the support of NIH (R01LM013614, R01HL089778). Junwei Lu acknowledges the support of NIH (R35CA220523, R01ES32418, U01CA209414). Zhaoran Wang acknowledges the support of NSF (Awards 2048075, 2008827, 2015568, 1934931), Simons Institute (Theory of Reinforcement Learning), Amazon, J.P. Morgan, and Two Sigma.
