Research Article

The risks associated with Artificial General Intelligence: A systematic review

Pages 649–663 | Received 20 Jan 2021, Accepted 28 Jul 2021, Published online: 13 Aug 2021

References

  • Alberts, B., Hanson, B., & Kelner, K. L. (2008). Reviewing peer review. Science, 321(5885), 15. https://doi.org/10.1126/science.1162115
  • Armstrong, S., Sandberg, A., & Bostrom, N. (2012). Thinking inside the box: Controlling and using an oracle AI. Minds and Machines, 22(4), 299–324. https://doi.org/10.1007/s11023-012-9282-2
  • Barrett, A. M., & Baum, S. D. (2017). A model of pathways to artificial superintelligence catastrophe for risk and decision analysis. Journal of Experimental and Theoretical Artificial Intelligence, 29(2), 397–414. https://doi.org/10.1080/0952813X.2016.1186228
  • Baum, S. (2017). A survey of artificial general intelligence projects for ethics, risk, and policy (Global Catastrophic Risk Institute Working Paper 17–11). Global Catastrophic Risk Institute.
  • Baum, S. D., Barrett, A. M., & Yampolskiy, R. V. (2017). Modeling and interpreting expert disagreement about artificial superintelligence. Informatica, 41(4), 419–427. http://www.informatica.si/index.php/informatica/article/view/1812
  • Baum, S. D., Goertzel, B., & Goertzel, T. G. (2011). How long until human-level AI? Results from an expert assessment. Technological Forecasting and Social Change, 78(1), 185–195. https://doi.org/10.1016/j.techfore.2010.09.006
  • Bentley, P. (2018). The three laws of artificial intelligence: Dispelling common myths. European Parliamentary Research Service.
  • Bornmann, L. (2011). Scientific peer review. Annual Review of Information Science and Technology, 45(1), 197–245. https://doi.org/10.1002/aris.2011.1440450112
  • Bostrom, N. (2002). Existential risks: Analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology, 9. https://ora.ox.ac.uk/objects/uuid:827452c3-fcba-41b8-86b0-407293e6617c
  • Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
  • Boyles, R. J. M. (2018). A case for machine ethics in modeling human-level intelligent agents. Kritike, 12(1), 182–200. https://doi.org/10.25138/12.1.a9
  • Bradley, P. (2020). Risk management standards and the active management of malicious intent in artificial superintelligence [Article]. AI & Society, 35(2), 319–328. https://doi.org/10.1007/s00146-019-00890-2
  • Bringsjord, S., Bringsjord, A., & Bello, P. (2012). Belief in the singularity is fideistic. In Singularity hypotheses (pp. 395–412). Springer.
  • Brundage, M. (2014). Limitations and risks of machine ethics. Journal of Experimental and Theoretical Artificial Intelligence, 26(3), 355–372. https://doi.org/10.1080/0952813X.2014.895108
  • Chalmers, D. (2009). The singularity: A philosophical analysis. In Science fiction and philosophy: From time travel to superintelligence (pp. 171–224). Wiley. https://www.wiley.com/en-au/Science+Fiction+and+Philosophy%3A+From+Time+Travel+to+Superintelligence-p-9781444327908
  • Chen, S. Y., & Lee, C. (2019). Perceptions of the impact of high-level-machine-intelligence from university students in Taiwan: The case for human professions, autonomous vehicles, and smart homes. Sustainability, 11(21), Article 6133. https://doi.org/10.3390/su11216133
  • Cronin, B. (2005). The hand of science: Academic writing and its rewards. Scarecrow Press.
  • Dallat, C., Salmon, P. M., & Goode, N. (2018). Identifying risks and emergent risks across sociotechnical systems: The NETworked hazard analysis and risk management system (NET-HARMS). Theoretical Issues in Ergonomics Science, 19(4), 456–482. https://doi.org/10.1080/1463922X.2017.1381197
  • Dallat, C., Salmon, P. M., & Goode, N. (2019). Risky systems versus risky people: To what extent do risk assessment methods consider the systems approach to accident causation? A review of the literature. Safety Science, 119, 266–279.
  • Firt, E. (2020). The missing G. AI & Society, 1–13.
  • Garis, H. D. (2005). The artilect war: Cosmists vs Terrans. ETC Publications.
  • Goertzel, B. (2006). The hidden pattern. Brown Walker Press.
  • Goertzel, B. (2014). Artificial general intelligence: Concept, state of the art, and future prospects. Journal of Artificial General Intelligence, 5(1), 1–48. https://doi.org/10.2478/jagi-2014-0001
  • Goertzel, B., & Pennachin, C. (2007). Artificial general intelligence (Vol. 2). Springer.
  • Goertzel, B., & Pitt, J. (2014). Nine ways to bias open-source artificial general intelligence toward friendliness. In Russell Blackford, & Damien Broderick (Eds.), Intelligence unbound (pp. 61–89). Wiley.
  • Holmes, D., Murray, S. J., Perron, A., & Rail, G. (2006). Deconstructing the evidence-based discourse in health sciences: Truth, power and fascism. International Journal of Evidence-Based Healthcare, 4(3), 180–186. https://doi.org/10.1111/j.1479-6988.2006.00041.x
  • Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25. https://doi.org/10.1016/j.bushor.2018.08.004
  • Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Penguin.
  • LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539
  • Legg, S., & Hutter, M. (2006). A formal measure of machine intelligence. arXiv preprint cs/0605024.
  • Leveson, N. G. (2011). Applying systems thinking to analyze and learn from events. Safety Science, 49(1), 55–64.
  • Linstone, H. A., & Turoff, M. (1975). The Delphi method. Addison-Wesley.
  • Miller, J. D. (2019). When two existential risks are better than one. Foresight, 21(1), 130–137. https://doi.org/10.1108/FS-04-2018-0038
  • Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & The PRISMA Group. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6(7), e1000097. https://doi.org/10.1371/journal.pmed.1000097
  • Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer.
  • Narain, K., Swami, A., Srivastava, A., & Swami, S. (2019). Evolution and control of artificial superintelligence (ASI): A management perspective. Journal of Advances in Management Research, 16(5), 698–714. https://doi.org/10.1108/JAMR-01-2019-0006
  • Naudé, W., & Dimitri, N. (2020). The race for an artificial general intelligence: Implications for public policy. AI & Society, 35(2), 367–379. https://doi.org/10.1007/s00146-019-00887-x
  • Nindler, R. (2019). The United Nation’s capability to manage existential risks with a focus on artificial intelligence. International Community Law Review, 21(1), 5–34. https://doi.org/10.1163/18719732-12341388
  • Pueyo, S. (2018). Growth, degrowth, and the challenge of artificial superintelligence. Journal of Cleaner Production, 197, 1731–1736. https://doi.org/10.1016/j.jclepro.2016.12.138
  • Salmon, P. M., Carden, T., & Hancock, P. (2021). Putting the humanity into inhuman systems: How human factors and ergonomics can be used to manage the risks associated with artificial general intelligence. Human Factors and Ergonomics in Manufacturing & Service Industries, 31(2), 223–236.
  • Salmon, P. M., Hulme, A., Walker, G. H., Waterson, P., Berber, E., & Stanton, N. A. (2020). The big picture on accident causation: A review, synthesis and meta-analysis of AcciMap studies. Safety Science, 126, 104650. https://doi.org/10.1016/j.ssci.2020.104650
  • Salmon, P. M., Read, G. J., Thompson, J., McLean, S., & McClure, R. (2020). Computational modelling and systems ergonomics: A system dynamics model of drink driving-related trauma prevention. Ergonomics, 63(8), 965–980. https://doi.org/10.1080/00140139.2020.1745268
  • Sotala, K., & Gloor, L. (2017). Superintelligence as a cause or cure for risks of astronomical suffering. Informatica, 41(4). http://www.informatica.si/index.php/informatica/article/view/1877/1098
  • Sotala, K., & Yampolskiy, R. V. (2015). Responses to catastrophic AGI risk: A survey. Physica Scripta, 90(1), Article 018001. https://doi.org/10.1088/0031-8949/90/1/018001
  • Stanton, N., Salmon, P. M., & Rafferty, L. A. (2013). Human factors methods: A practical guide for engineering and design. Ashgate.
  • Stanton, N. A., Eriksson, A., Banks, V. A., & Hancock, P. A. (2020). Turing in the driver’s seat: Can people distinguish between automated and manually driven vehicles? Human Factors and Ergonomics in Manufacturing & Service Industries, 30(6), 418–425. https://doi.org/10.1002/hfm.20864
  • Stanton, N. A., & Harvey, C. (2017). Beyond human error taxonomies in assessment of risk in sociotechnical systems: A new paradigm with the EAST ‘broken-links’ approach. Ergonomics, 60(2), 221–233.
  • Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Vintage Books.
  • Torres, P. (2019). The possibility and risks of artificial general intelligence. Bulletin of the Atomic Scientists, 75(3), 105–108. https://doi.org/10.1080/00963402.2019.1604873
  • Yampolskiy, R. V. (2012). Leakproofing the singularity: Artificial intelligence confinement problem. Journal of Consciousness Studies, 19(1–2), 194–214. https://www.ingentaconnect.com/contentone/imp/jcs/2012/00000019/f0020001/art00014
  • Yu, K.-H., Beam, A. L., & Kohane, I. S. (2018). Artificial intelligence in healthcare. Nature Biomedical Engineering, 2(10), 719–731. https://doi.org/10.1038/s41551-018-0305-z