
Stochastic switching for partially observable dynamics and optimal asset allocation

Pages 553-565 | Received 15 Sep 2015, Accepted 30 Apr 2016, Published online: 15 Jun 2016

References

  • Andersen, L., & Broadie, M. (2004). A primal-dual simulation algorithm for pricing multidimensional American options. Management Science, 50, 1222–1234.
  • Bäuerle, N., & Rieder, U. (2011). Markov decision processes with applications to finance. Heidelberg: Springer.
  • Bertsekas, D.P. (2005). Dynamic programming and optimal control. Belmont, MA: Athena Scientific.
  • Bertsekas, D.P., & Tsitsiklis, J.N. (1996). Neuro-dynamic programming. Belmont, MA: Athena Scientific.
  • Brown, D.B., & Smith, J.E. (2011). Dynamic portfolio optimization with transaction costs: Heuristics and dual bounds. Management Science, 57(10), 1752–1769.
  • Brown, D.B., Smith, J.E., & Sun, P. (2010). Information relaxations and duality in stochastic dynamic programs. Operations Research, 58(4), 785–801.
  • Elliott, R., & Hinz, J. (2002). Portfolio optimization, hidden Markov models, and technical analysis of P&F-charts. International Journal of Theoretical and Applied Finance, 5(4), 1–15.
  • Feinberg, E.A., & Schwartz, A. (2002). Handbook of Markov decision processes. Dordrecht: Kluwer Academic.
  • Haugh, M., & Kogan, L. (2004). Pricing American options: A duality approach. Operations Research, 52, 258–270.
  • Haugh, M.B., & Lim, A.E.B. (2012). Linear–quadratic control and information relaxations. Operations Research Letters, 40(6), 521–528.
  • Haugh, M.B., & Wang, C. (2014). Dynamic portfolio execution and information relaxations. SIAM Journal on Financial Mathematics, 7, 316–359.
  • Hauskrecht, M. (2000). Value-function approximations for partially observable Markov decision processes. Journal of Artificial Intelligence Research, 13, 33–94.
  • Hernández-Lerma, O. (1989). Adaptive Markov control processes. New York, NY: Springer.
  • Hinz, J. (2014). Optimal stochastic switching under convexity assumptions. SIAM Journal on Control and Optimization, 52(1), 164–188.
  • Hinz, J., & Yap, N. (2015). Algorithms for optimal control of stochastic switching systems. Theory of Probability & Its Applications, 60(4), 770–800.
  • Hinz, J. (2015). Using convex switching techniques for partially observable decision processes. IEEE Transactions on Automatic Control. doi:10.1109/TAC.2015.2505403
  • Hinz, J., & Yee, J. (2016a). Algorithmic solutions for optimal switching problems. In Proceedings of the Second International Symposium on Stochastic Models in Reliability Engineering, Life Science and Operations Management (pp. 586–590). Beer-Sheva, Israel: IEEE.
  • Hinz, J., & Yee, J. (2016b). Solving control problems with linear state dynamics – a practical user guide. In Proceedings of the Second International Symposium on Stochastic Models in Reliability Engineering, Life Science and Operations Management (pp. 591–596). Beer-Sheva, Israel: IEEE.
  • Kasyanov, P., Feinberg, E., & Zgurovsky, M. (2014). Partially observable total-cost Markov decision process with weakly continuous transition probabilities. arXiv:1401.2168.
  • Monahan, G.E. (1982). A survey of partially observable Markov decision processes: Theory, models, and algorithms. Management Science, 28, 1–16.
  • Powell, W.B. (2007). Approximate dynamic programming: Solving the curses of dimensionality. Hoboken, NJ: Wiley.
  • Puterman, M.L. (1994). Markov decision processes: Discrete stochastic dynamic programming. New York, NY: Wiley.
  • Rhenius, D. (1974). Incomplete information in Markovian decision models. Annals of Statistics, 2, 1327–1334.
  • Rogers, L.C.G. (2002). Monte Carlo valuation of American options. Mathematical Finance, 12, 271–286.
  • Rogers, L.C.G. (2007). Pathwise stochastic optimal control. SIAM Journal on Control and Optimization, 46, 1116–1132.
  • Ye, F., & Zhou, E. (2013). Optimal stopping of partially observable Markov processes: A filtering-based duality approach. IEEE Transactions on Automatic Control, 58, 2698–2704.
  • Yushkevich, A. (1976). Reduction of a controlled Markov model with incomplete data to a problem with complete information in the case of Borel state and control spaces. Theory of Probability and Its Applications, 21, 153–158.
