
A model of pathways to artificial superintelligence catastrophe for risk and decision analysis

Anthony M. Barrett & Seth D. Baum
Pages 397-414 | Received 28 Aug 2015, Accepted 01 May 2016, Published online: 23 May 2016

References

  • Alstott, J. (2014). Will we hit a wall? Forecasting bottlenecks to whole brain emulation development. Journal of Artificial General Intelligence, 4, 397–414.
  • Armstrong, S., Sandberg, A., & Bostrom, N. (2012). Thinking inside the box: Using and controlling an Oracle AI. Minds and Machines, 22, 299–324. doi:10.1007/s11023-012-9282-2
  • Barrett, A. M., Baum, S. D., & Hostetler, K. R. (2013). Analyzing and reducing the risks of inadvertent nuclear war between the United States and Russia. Science & Global Security, 21, 106–133.
  • Beckstead, N. (2013). On the overwhelming importance of shaping the far future. New Brunswick, NJ: Department of Philosophy, Rutgers University.
  • Bedford, T. J., & Cooke, R. M. (2001). Probabilistic risk analysis: Foundations and methods. New York, NY: Cambridge University Press. doi:10.1017/CBO9780511813597
  • Bostrom, N. (2013). Existential risk prevention as a global priority. Global Policy, 4, 15–31. doi:10.1111/1758-5899.12002
  • Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.
  • Bringsjord, S., Bringsjord, A., & Bello, P. (2012). Belief in the singularity is fideistic. In A. H. Eden, J. H. Moor, J. H. Soraker, & E. Steinhart (Eds.), Singularity hypotheses: A scientific and philosophical assessment (pp. 395–412). New York, NY: Springer. doi:10.1007/978-3-642-32560-1
  • Brundage, M. (2014). Limitations and risks of machine ethics. Journal of Experimental & Theoretical Artificial Intelligence, 26, 355–372.
  • Cheesebrough, A. J. (2010). Risk assessment process for informed decision-making (RAPID): A progress report. Fourth annual conference on security analysis and risk management, Arlington, Virginia.
  • Ćirković, M. M. (2012). Small theories and large risks – Is risk analysis relevant for epistemology? Risk Analysis, 32, 1994–2004.
  • Clemen, R. T., & Reilly, T. (2001). Making hard decisions (2nd ed.). Pacific Grove, CA: Duxbury.
  • Davis, P. K. (1989). Studying first-strike stability with knowledge-based models of human decision making. Santa Monica, CA: RAND.
  • Eden, A. H., Moor, J. H., Soraker, J. H., & Steinhart, E. (Eds.). (2012). Singularity hypotheses: A scientific and philosophical assessment. New York, NY: Springer.
  • Edwards, W., Miles, R. F., & von Winterfeldt, D. (Eds.). (2007). Advances in decision analysis: From foundations to applications. New York, NY: Cambridge University Press.
  • Goertzel, B., & Pennachin, C. (Eds.). (2007). Artificial general intelligence. New York, NY: Springer.
  • Goertzel, B., & Pitt, J. (2012). Nine ways to bias open-source AGI towards friendliness. Journal of Evolution and Technology, 22, 116–131.
  • Good, I. J. (1965). Speculations concerning the first ultraintelligent machine. In F. L. Alt & M. Rubinoff (Eds.), Advances in Computers (pp. 31–88). New York, NY: Academic Press.
  • Hall, J. S. (2011). Ethics for self-improving machines. In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 512–523). New York, NY: Cambridge University Press. doi:10.1017/CBO9780511978036
  • Kumamoto, H., & Henley, E. J. (1996). Probabilistic risk assessment and management for engineers and scientists (2nd ed.). New York, NY: IEEE Press.
  • Matheny, J. G. (2007). Reducing the risk of human extinction. Risk Analysis, 27, 1335–1344. doi:10.1111/risk.2007.27.issue-5
  • Muehlhauser, L. (2013). Mathematical proofs improve but don’t guarantee security, safety, and friendliness. Retrieved from http://intelligence.org/2013/10/03/proofs/
  • Muehlhauser, L., & Bostrom, N. (2014). Why we need friendly AI. Think: Philosophy for Everyone, 13, 41–47. doi:10.1017/S1477175613000316
  • OBA. (2012). United States government policy for oversight of life sciences dual use research of concern. Washington, DC: Office of Biotechnology Activities, United States Department of Health and Human Services.
  • Omohundro, S. (2008). The basic AI drives. In P. Wang, B. Goertzel, & S. Franklin (Eds.), Proceedings of the first AGI conference, Frontiers in Artificial Intelligence and Applications (Vol. 171). Fairfax, VA: IOS Press.
  • Omohundro, S. (2012). Rational artificial intelligence for the greater good. In A. H. Eden, J. H. Moor, J. H. Soraker, & E. Steinhart (Eds.), Singularity hypotheses: A scientific and philosophical assessment (pp. 161–179). New York, NY: Springer. doi:10.1007/978-3-642-32560-1
  • Omohundro, S. (2014). Autonomous technology and the greater human good. Journal of Experimental & Theoretical Artificial Intelligence, 26, 303–315.
  • Patterson, A. P., Tabak, L. A., Fauci, A. S., Collins, F. S., & Howard, S. (2013). A framework for decisions about research with HPAI H5N1 viruses. Science, 339, 1036–1037. doi:10.1126/science.1236194
  • Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible inference. San Mateo, CA: Morgan Kaufmann.
  • Posner, R. A. (2004). Catastrophe: Risk and response. New York, NY: Oxford University Press.
  • Rasmussen, N. C. (1981). The application of probabilistic risk assessment techniques to energy technologies. Annual Review of Energy, 6, 123–138. doi:10.1146/annurev.eg.06.110181.001011
  • Russell, S., Dewey, D., & Tegmark, M. (2015). Research priorities for robust and beneficial artificial intelligence. Retrieved from http://futureoflife.org/static/data/documents/research_priorities.pdf
  • Sandberg, A., & Bostrom, N. (2008). Whole brain emulation: A roadmap (Technical Report #2008-3). Oxford: Future of Humanity Institute, Oxford University.
  • Shulman, C., & Sandberg, A. (2010). Implications of a software-limited singularity. In K. Mainzer (Ed.), ECAP10: VIII European conference on computing and philosophy. Munich: Verlag Dr. Hut.
  • Soares, N. (2015). Corrigibility. In AAAI-15 workshop on AI and ethics, Austin, TX.
  • Soares, N., & Fallenstein, B. (2014). Aligning superintelligence with human interests: A technical research Agenda (Technical Report 2014–8). Berkeley, CA: Machine Intelligence Research Institute.
  • Sotala, K. (2012). Advantages of artificial intelligences, uploads, and digital minds. International Journal of Machine Consciousness, 4, 275–291. doi:10.1142/S1793843012400161
  • Sotala, K., & Yampolskiy, R. V. (2015). Responses to catastrophic AGI risk: A survey. Physica Scripta, 90(1), 018001.
  • Sunstein, C. R. (2009). Worst-case scenarios. Cambridge, MA: Harvard University Press.
  • Waser, M. R. (2008). Discovering the foundations of a universal system of ethics as a road to safe artificial intelligence. Biologically inspired cognitive architectures: Papers from the AAAI fall symposium (pp. 195–200). Menlo Park, CA: AAAI Press.
  • Wilson, G. (2013). Minimizing global catastrophic and existential risks from emerging technologies through international law. Virginia Environmental Law Journal, 31, 307–364.
  • Yampolskiy, R. V., & Fox, J. (2013). Safety engineering for artificial general intelligence. Topoi, 32, 217–226.
  • Yudkowsky, E. (2001). Creating friendly AI 1.0: The analysis and design of benevolent goal architectures. Retrieved from http://intelligence.org/files/CFAI.pdf
  • Yudkowsky, E. (2004). Coherent extrapolated volition. Retrieved from https://intelligence.org/files/CEV.pdf
  • Yudkowsky, E. (2008). Artificial intelligence as a positive and negative factor in global risk. In N. Bostrom & M. M. Cirkovic (Eds.), Global catastrophic risks (pp. 308–345). Oxford: Oxford University Press.
  • Yudkowsky, E. (2012). Friendly artificial intelligence. In A. H. Eden, J. H. Moor, J. H. Soraker, & E. Steinhart (Eds.), Singularity hypotheses: A scientific and philosophical assessment (pp. 181–195). New York, NY: Springer. doi:10.1007/978-3-642-32560-1
