GenEth: a general ethical dilemma analyzer

Michael Anderson 1 and Susan Leigh Anderson 2
  • 1 University of Hartford, West Hartford, Connecticut, USA
  • 2 University of Connecticut, Storrs, Connecticut, USA

Abstract

We argue that ethically significant behavior of autonomous systems should be guided by explicit ethical principles determined through a consensus of ethicists. Such a consensus is likely to emerge in many areas in which intelligent autonomous systems are apt to be deployed and for the actions they are liable to undertake, as we are more likely to agree on how machines ought to treat us than on how human beings ought to treat one another. Given such a consensus, particular cases of ethical dilemmas where ethicists agree on the ethically relevant features and the right course of action can be used to help discover the principles needed for ethical guidance of the behavior of autonomous systems. Such principles help ensure the ethical behavior of complex and dynamic systems and further serve as a basis for justification of this behavior. To provide assistance in discovering ethical principles, we have developed GenEth, a general ethical dilemma analyzer that, through a dialog with ethicists, uses inductive logic programming to codify ethical principles in any given domain. GenEth has been used to codify principles in a number of domains pertinent to the behavior of autonomous systems, and these principles have been verified using an Ethical Turing Test, a test devised to compare the judgments of codified principles with those of ethicists.
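To make the induction idea concrete, the sketch below illustrates one way a principle could be learned from expert-labeled dilemma cases. It is not the GenEth implementation; the duty names, the candidate bound values, and the greedy covering strategy are all illustrative assumptions. Each case is represented as the vector of differentials between the duty satisfactions of the ethically preferred action and those of the dispreferred action, and a principle is a disjunction of clauses, each clause a conjunction of per-duty lower bounds.

```python
# Illustrative sketch (not GenEth's code): learn a principle, as a
# disjunction of lower-bound clauses over duty differentials, that covers
# every preferred-over-dispreferred case and no reversed case.
from itertools import product

DUTIES = ["minimize harm", "respect autonomy", "do good"]  # hypothetical duties
VALUES = (-1, 0, 1)                                        # candidate lower bounds


def covers(clause, delta):
    """A clause covers a case when every differential meets its lower bound."""
    return all(d >= b for d, b in zip(delta, clause))


def learn_principle(positives, negatives):
    """Greedy cover: repeatedly pick the most general clause (fewest
    non-trivial bounds) that covers some uncovered positive case and no
    negative case, until all positives are covered."""
    candidates = sorted(product(VALUES, repeat=len(DUTIES)),
                        key=lambda c: sum(b > min(VALUES) for b in c))
    principle, uncovered = [], list(positives)
    while uncovered:
        for clause in candidates:
            if (any(covers(clause, p) for p in uncovered)
                    and not any(covers(clause, n) for n in negatives)):
                principle.append(clause)
                uncovered = [p for p in uncovered if not covers(clause, p)]
                break
        else:
            raise ValueError("no consistent clause; widen VALUES or revisit cases")
    return principle
```

For instance, with the hypothetical cases `positives = [(1, -1, 0), (0, 1, 1)]` and their reversals as negatives, the learner returns a small set of clauses each of which admits only the ethically preferred choices. Real ILP systems search a richer hypothesis space, but the case-driven, generalize-while-excluding-counterexamples structure is the same.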


OPEN ACCESS

Paladyn. Journal of Behavioral Robotics is a fully peer-reviewed, open access journal that publishes original, high-quality research works and review articles on topics broadly related to neuronally and psychologically inspired robots and other behaving autonomous systems. The journal is indexed in SCOPUS.
