Research Article

Detecting Physiological Needs Using Deep Inverse Reinforcement Learning

Article: 2022340 | Received 12 May 2021, Accepted 09 Dec 2021, Published online: 07 Jan 2022

References

  • Abbeel, P., A. Coates, M. Quigley, and A. Y. Ng. 2007. An application of reinforcement learning to aerobatic helicopter flight. Advances in Neural Information Processing Systems 19. https://proceedings.neurips.cc/paper/2006/file/98c39996bf1543e974747a2549b3107c-Paper.pdf.
  • Abbeel, P., A. Coates, and A. Y. Ng. 2010. Autonomous helicopter aerobatics through apprenticeship learning. The International Journal of Robotics Research 29 (13):1608–39. doi:10.1177/0278364910371999.
  • Arora, S., and P. Doshi. 2018. “A survey of inverse reinforcement learning: Challenges, methods and progress.” arXiv preprint arXiv:1806.06877. https://arxiv.org/abs/1806.06877.
  • Arora, S., and P. Doshi. 2021. A survey of inverse reinforcement learning: Challenges, methods and progress. Artificial Intelligence 297:103500. doi:10.1016/j.artint.2021.103500.
  • Bauer, G., F. Gerstenbrand, and E. Rumpl. 1979. Varieties of the locked-in syndrome. Journal of Neurology 221 (2):77–91. doi:10.1007/bf00313105.
  • Bellman, R. 1957. A Markovian decision process. Indiana University Mathematics Journal 6:679–684. doi:10.1512/iumj.1957.6.56038.
  • Chareonsuk, W., S. Kanhaun, K. Khawkam, and D. Wongsawang. 2016. “Face and eyes mouse for ALS patients.” In 2016 Fifth ICT International Student Project Conference (ICT-ISPC), May 27–28, 2016, Nakhonpathom, Thailand, 77–80. IEEE. doi:10.1109/ICT-ISPC.2016.7519240.
  • Chen, X.-L., L. Cao, Z.-X. Xu, J. Lai, and C.-X. Li. 2019. A study of continuous maximum entropy deep inverse reinforcement learning. Mathematical Problems in Engineering 2019:1–8. doi:10.1155/2019/4834516.
  • Coronato, A., M. Naeem, G. De Pietro, and G. Paragliola. 2020. Reinforcement learning for intelligent healthcare applications: A survey. Artificial Intelligence in Medicine 109:101964. doi:10.1016/j.artmed.2020.101964.
  • Ivanov, S., and A. D’yakonov. 2019. “Modern deep reinforcement learning algorithms.” arXiv preprint arXiv:1906.10025.
  • Kartakis, S., V. Sakkalis, P. Tourlakis, G. Zacharioudakis, and C. Stephanidis. 2012. Enhancing health care delivery through ambient intelligence applications. Sensors 12 (9):11435–50. doi:10.3390/s120911435.
  • Light, J., D. McNaughton, D. R. Beukelman, S. Fager, M. Fried-Oken, T. Jakobs, and E. Jakobs. 2019. Challenges and opportunities in augmentative and alternative communication: Research and technology development to enhance communication and participation for individuals with complex communication needs. Augmentative and Alternative Communication 35 (1):1–12. doi:10.1080/07434618.2018.1556732.
  • Loja, L. F. B., R. de Sousa Gomide, F. Freitas Mendes, R. Antonio Gonçalves Teixeira, R. Pinto Lemos, and E. Lúcia Flôres. 2015. “A concept-environment for computer-based augmentative and alternative communication founded on a systematic review.” September. https://scielo.figshare.com/articles/A_concept-environment_for_computer-based_augmentative_and_alternative_communication_founded_on_a_systematic_review/7518137/1.
  • Markov, A. A. 1954. Theory of Algorithms. TT 60-51085. Academy of Sciences of the USSR. https://books.google.tn/books?id=mKm1swEACAAJ.
  • Maslow, A. H. 1943. A theory of human motivation. Psychological Review 50 (4):370–96. doi:10.1037/h0054346.
  • Maslow, A. H. 1958. A dynamic theory of human motivation. In Understanding human motivation, 26–47. Howard Allen Publishers. doi:10.1037/11305-004.
  • Maslow, A. H. 1981. Motivation and personality. Prabhat Prakashan.
  • Newell, A., S. Langer, and M. Hickey. 1998. The rôle of natural language processing in alternative and augmentative communication. Natural Language Engineering 4 (1):1–16. doi:10.1017/S135132499800182X.
  • Ng, A. Y., and S. Russell. 2000. “Algorithms for inverse reinforcement learning.” In Proceedings of the Seventeenth International Conference on Machine Learning, June 29 – July 2, 2000, 663–70. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
  • Justia Patents. 2020. “Patents assigned to TOBII AB.” https://patents.justia.com/assignee/tobii-ab (last checked May 11, 2021).
  • Abbeel, P., and A. Y. Ng. 2010. Inverse reinforcement learning. In Encyclopedia of machine learning, 554–58. Boston, MA: Springer US. doi:10.1007/978-0-387-30164-8_417.
  • Russell, S. 1998. “Learning agents for uncertain environments.” In Proceedings of the eleventh annual conference on Computational learning theory July 24 - 26, 1998 New York, NY, United States, 101–03 https://people.eecs.berkeley.edu/~russell/papers/colt98-uncertainty.pdf.
  • Scobee, D. R. R. 2019. Approaches to safety in inverse reinforcement learning. Berkeley: University of California. https://escholarship.org/uc/item/6j34r5tp.
  • Smith, E., and M. Delargy. 2005. Locked-in syndrome. BMJ 330 (7488):406–09. doi:10.1136/bmj.330.7488.406.
  • Sutton, R. S., and A. G. Barto. 1998. Introduction to reinforcement learning. 1st ed. Vol. 135. Cambridge, MA, USA: MIT Press. https://dl.acm.org/doi/book/10.5555/551283.
  • Sutton, R. S., and A. G. Barto. 2018. Reinforcement learning: An introduction. Cambridge, MA, USA: A Bradford Book.
  • Tatler, B. W., D. Witzner Hansen, and J. B. Pelz. 2019. Eye movement recordings in natural settings. In Eye movement research, 549–92. Springer International Publishing. doi:10.1007/978-3-030-20085-5_13.
  • Tay, L., and E. Diener. 2011. Needs and subjective well-being around the world. Journal of Personality and Social Psychology 101 (2):354–65. doi:10.1037/a0023779.
  • Wahba, M. A., and L. G. Bridwell. 1976. Maslow reconsidered: A review of research on the need hierarchy theory. Organizational Behavior and Human Performance 15 (2):212–40. doi:10.1016/0030-5073(76)90038-6.
  • You, C., J. Lu, D. Filev, and P. Tsiotras. 2019. Advanced planning for autonomous vehicles using reinforcement learning and deep inverse reinforcement learning. Robotics and Autonomous Systems 114:1–18. doi:10.1016/j.robot.2019.01.003.
  • Li, Y. 2017. Deep reinforcement learning: An overview. CoRR abs/1701.07274. http://arxiv.org/abs/1701.07274.
  • Li, Y. 2018. Deep reinforcement learning. CoRR abs/1810.06339. http://arxiv.org/abs/1810.06339.
  • Zhifei, S., and E. Meng Joo. 2012. “A review of inverse reinforcement learning theory and recent advances.” In 2012 IEEE Congress on Evolutionary Computation, June 10–15, 2012, Brisbane, QLD, Australia, 1–8. IEEE. doi:10.1109/CEC.2012.6256507.
  • Ziebart, B. D., A. L. Maas, A. K. Dey, and J. Andrew Bagnell. 2008b. “Navigate like a cabbie: Probabilistic reasoning from observed context-aware behavior.” In Proceedings of the 10th International Conference on Ubiquitous Computing, September 21–24, 2008, Seoul, Korea, 322–31. doi:10.1145/1409635.1409678.
  • Ziebart, B. D., A. L. Maas, J. Andrew Bagnell, and A. K. Dey. 2008a. “Maximum entropy inverse reinforcement learning.” In AAAI Conference on Artificial Intelligence, July 13–17, 2008, Chicago, IL, USA, 1433–38. AAAI Press. https://dl.acm.org/doi/10.5555/1620270.1620297.