
Journal of Artificial General Intelligence

The Journal of the Artificial General Intelligence Society

3 Issues per year

Open Access

Is Brain Emulation Dangerous?

Peter Eckersley 1 / Anders Sandberg 2

1 Electronic Frontier Foundation

2 Future of Humanity Institute, Oxford University, Suite 1, Littlegate House, 16/17 St. Ebbe's Street, Oxford OX1 1PT, UK

© by Peter Eckersley. This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. (CC BY-NC-ND 3.0)

Citation Information: Journal of Artificial General Intelligence. Volume 4, Issue 3, Pages 170–194, ISSN (Online) 1946-0163, DOI: 10.2478/jagi-2013-0011, April 2014

Publication History

Received: 2013-07-31
Accepted: 2013-12-31
Published Online: 2014-04-25

Abstract

Brain emulation is a hypothetical but extremely transformative technology which has a non-zero chance of appearing during the next century. This paper investigates whether such a technology would also have any predictable characteristics that give it a chance of being catastrophically dangerous, and whether there are any policy levers which might be used to make it safer. We conclude that the riskiness of brain emulation probably depends on the order of the preceding research trajectory. Broadly speaking, it appears safer for brain emulation to happen sooner, because slower CPUs would make the technology's impact more gradual. It may also be safer if brains are scanned before they are fully understood from a neuroscience perspective, thereby increasing the initial population of emulations, although this prediction is weaker and more scenario-dependent. The risks posed by brain emulation also seem strongly connected to questions about the balance of power between attackers and defenders in computer security contests. If economic property rights in CPU cycles are essentially enforceable, emulation appears to be comparatively safe; if CPU cycles are ultimately easy to steal, the appearance of brain emulation is more likely to be a destabilizing development for human geopolitics. Furthermore, if the computers used to run emulations can be kept secure, then it appears that making brain emulation technologies "open" would make them safer. If, however, computer insecurity is deep and unavoidable, openness may actually be more dangerous. We point to some arguments that suggest the former may be true, tentatively implying that it would be good policy to work towards brain emulation using open scientific methodology and free/open source software codebases.

Keywords: brain emulation; existential risk; software security; open source; geopolitics; technological development

