^ a b Russell, Stuart J.; Norvig, Peter (2003). "Section 26.3: The Ethics and Risks of Developing Artificial Intelligence". Artificial Intelligence: A Modern Approach. Upper Saddle River, N.J.: Prentice Hall. ISBN 978-0137903955. "Similarly, Marvin Minsky once suggested that an AI program designed to solve the Riemann Hypothesis might end up taking over all the resources of Earth to build more powerful supercomputers to help achieve its goal."
^ Bostrom 2014, Chapter 8, p. 123. "An AI, designed to manage production in a factory, is given the final goal of maximizing the manufacturing of paperclips, and proceeds by converting first the Earth and then increasingly large chunks of the observable universe into paperclips."
^ Amodei, D.; Olah, C.; Steinhardt, J.; Christiano, P.; Schulman, J.; Mané, D. (2016). "Concrete problems in AI safety". arXiv:1606.06565 [cs.AI].
^ Kaelbling, L. P.; Littman, M. L.; Moore, A. W. (1 May 1996). "Reinforcement Learning: A Survey". Journal of Artificial Intelligence Research 4: 237–285. doi:10.1613/jair.301.
^ Ring, M.; Orseau, L. (2011). "Delusion, Survival, and Intelligent Agents". Artificial General Intelligence. Lecture Notes in Computer Science. 6830. Berlin, Heidelberg: Springer.
^ Yampolskiy, Roman; Fox, Joshua (24 August 2012). "Safety Engineering for Artificial General Intelligence". Topoi 32 (2): 217–226. doi:10.1007/s11245-012-9128-9.
^ Yampolskiy, Roman V. (2013). "What to do with the Singularity Paradox?". Philosophy and Theory of Artificial Intelligence. Studies in Applied Philosophy, Epistemology and Rational Ethics. Vol. 5. pp. 397–413. doi:10.1007/978-3-642-31674-6_30. ISBN 978-3-642-31673-9.
^ Omohundro, Stephen M. (February 2008). "The basic AI drives". Artificial General Intelligence 2008. 171. IOS Press. pp. 483–492. ISBN 978-1-60750-309-5.
^ Seward, John P. (1956). "Drive, incentive, and reinforcement". Psychological Review 63 (3): 195–203. doi:10.1037/h0048229. PMID 13323175.
^ Dewey, Daniel (2011). "Learning What to Value". Artificial General Intelligence. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer. pp. 309–314. doi:10.1007/978-3-642-22887-2_35. ISBN 978-3-642-22887-2.
^ Yudkowsky, Eliezer (2011). "Complex Value Systems in Friendly AI". Artificial General Intelligence. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer. pp. 388–393. doi:10.1007/978-3-642-22887-2_48. ISBN 978-3-642-22887-2.
^ Bostrom 2014, Chapter 7, p. 110. "We humans often seem happy to let our final values drift... For example, somebody deciding to have a child might predict that they will come to value the child for its own sake, even though, at the time of the decision, they may not particularly value their future child... Humans are complicated, and many factors might be in play in a situation like this... one might have a final value that involves having certain experiences and occupying a certain social role, and becoming a parent—and undergoing the attendant goal shift—might be a necessary aspect of that..."
^ Schmidhuber, J. R. (2009). "Ultimate Cognition à la Gödel". Cognitive Computation 1 (2): 177–193. doi:10.1007/s12559-009-9014-y.
^ Yudkowsky, Eliezer (2008). "Artificial intelligence as a positive and negative factor in global risk". Global Catastrophic Risks. 303. OUP Oxford. p. 333. ISBN 9780199606504.