Research Papers

The reinforcement learning Kelly strategy

Pages 1445-1464 | Received 25 Apr 2021, Accepted 25 Feb 2022, Published online: 24 Mar 2022

