Operations Engineering & Analytics

Worst-case analysis for a leader–follower partially observable stochastic game

Pages 376-389 | Received 04 Oct 2020, Accepted 06 Jul 2021, Published online: 23 Aug 2021

