MACHINE LEARNING IN CYBER-PHYSICAL SYSTEMS AND MANUFACTURING SINGULARITY – IT DOES NOT MEAN TOTAL AUTOMATION, HUMAN IS STILL IN THE CENTRE: Part I – MANUFACTURING SINGULARITY AND AN INTELLIGENT MACHINE ARCHITECTURE

In many popular as well as scientific discourses it is suggested that the “massive” use of Artificial Intelligence, including Machine Learning (ML), and the attainment of the point of “singularity” through so-called Artificial General Intelligence (AGI) and Artificial Super-Intelligence (ASI), will completely exclude humans from decision making, resulting in total dominance of machines over the human race. In terms of manufacturing systems, this would mean that intelligent and total automation will be achieved (once humans are excluded). The hypothesis presented in this paper is that there is a limit to AI/ML autonomy capacity and, more concretely, that ML algorithms will not be able to become totally autonomous and that, consequently, the human role will be indispensable. In this context, the authors introduce the notion of the manufacturing singularity, together with an intelligent machine architecture towards the manufacturing singularity, arguing that the intelligent machine will always be human-dependent and that, concerning manufacturing, the human will remain at the centre of Cyber-Physical Systems (CPS) and Industry 4.0 (I4.0). The methodology supporting this argument is inductive, similar to the methodology applied in a number of texts found in the literature, and is based on the computational requirements of inductive-inference-based machine learning. The argumentation is supported by several experiments that demonstrate the role of the human within the machine learning process. Based on these considerations, a generic architecture of intelligent CPS, with embedded ML functional modules in multiple learning loops, is presented in order to evaluate the ways ML functionality can be used in the context of CPPS/CPS.
Similarly to other papers found in the literature, and because the (informal) inductive methodology applied does not provide absolute proof in favour of, or against, the hypothesis defined, the paper represents a kind of position paper. The paper is divided into two parts. In the first part, a review of argumentation from the literature, both in favour of and against the thesis on the human role in the future, is presented. In this part the concept of the manufacturing singularity is introduced, as well as an intelligent machine architecture towards the manufacturing singularity.
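The computational-requirements argument invoked above can be made concrete with Valiant’s PAC (“probably approximately correct”) learning framework. As a minimal sketch, the classical sample-complexity bound for a consistent learner over a finite hypothesis class H states that m ≥ (1/ε)(ln|H| + ln(1/δ)) labelled examples suffice to reach error at most ε with probability at least 1 − δ. The hypothesis class (Boolean conjunctions over n literals) and the parameter values below are illustrative choices, not taken from the paper:

```python
import math

def pac_sample_complexity(h_size: int, epsilon: float, delta: float) -> int:
    """Number of labelled examples sufficient for a consistent learner
    over a finite hypothesis class H to be probably approximately
    correct: m >= (1/epsilon) * (ln|H| + ln(1/delta))."""
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / epsilon)

# Boolean conjunctions over n = 10 variables: each variable appears
# positively, negatively, or not at all, so |H| = 3**10.
m = pac_sample_complexity(3**10, epsilon=0.05, delta=0.01)
print(m)  # 312
```

The bound grows only logarithmically in |H| but linearly in 1/ε: every tightening of the accuracy requirement multiplies the number of labelled examples needed. Since labels and hypothesis-class choices originate outside the learner, this is one formal sense in which inductive-inference-based ML remains dependent on externally supplied (human-provided) input.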
