Digital me ontology and ethics

This paper addresses the ontology and ethics of an AI agent called the digital me. We define the digital me as an autonomous, decision-making, and learning agent that represents an individual and has a practically immortal life of its own. The digital me is assumed to be equipped with the Big Five personality model, so that it models some aspects of strong AI: consciousness, free will, and intentionality. Because computer-based personality judgments are more accurate than those made by humans, a digital me can judge the personality of the individual it represents, the personalities of other individuals, and those of other digital me-s. We describe seven ontological qualities of the digital me: a) the double-layer status of Digital Being versus digital me, b) digital me versus real me, c) mind-digital me and body-digital me, d) digital me versus doppelganger (shadow digital me), e) a non-human concept of time, f) social quality, and g) practical immortality. We argue that, with the advancement of AI sciences and technologies, there exist two digital me thresholds. The first threshold defines a digital me having some rudimentary form of consciousness, free will, and intentionality. The second threshold assumes that the digital me is equipped with moral learning capabilities, implying that, in principle, digital me-s could develop their own ethics, which may differ significantly from the human understanding of ethics. Finally, we discuss the implications for digital me metaethics, normative ethics, and applied ethics, as well as the implementation of the Golden Rule in digital me-s, and we suggest two sets of normative principles for the digital me: consequentialist and duty-based digital me principles.
