where the reference to the programming domain is explicit: “Analogous to the search function in modern word processors, Cas9 can be guided to specific locations within complex genomes by a short RNA search string. Using this system, DNA sequences within the endogenous genome and their functional outputs are now easily edited or modulated in virtually any organism of choice” [34] (emphasis added). The word-processor analogy is used to stress positively the wide potential use of Cas9. It is this same analogy, however, that reveals the risks of the Cas9 technique. One acknowledged risk is the possibility of mosaicism (populations of cells with different genomes within an individual), but this is an “internal,” domain-specific risk. If we consider the parallel with AI and computer algorithms, an external, more general risk is connected to the uncertainty of outcomes, that is, to the prediction of Cas9 processes. Outcomes are not under control in the long run because too many factors are involved in gene expression and cell reproduction. Here the metaphor of information can cast some light on the general perspective of genome editing. If genome editing becomes, or is meant as, a science of programming, it can inherit some warnings related to prediction from AI and computation theory. Moreover, my claim is that the whole of SB, of which genome engineering is a subfield, has the same long-run problem of prediction as AI, for two reasons: (a) SB aims at producing autonomous systems; and (b) SB products interact with an open-ended context.

4.2 Prediction and Complex Systems

A second problem of prediction concerns the complex systems involved in the origins of artificial life. At the Hixon Symposium of 1948, von Neumann tackled the problem of self-replicative systems.
He tried to establish a logical theory of self-replication by addressing the issue of evolution through errors in replication [36]. After his work, the general theory of cellular automata (spaces in which cells change state according to specific rules) was developed in the subsequent years. In the logical simulation of self-replication, one may distinguish two senses of self-replication: (1) that of a single entity, a cell; and (2) that of systems made of cells replicating themselves. The second is the replication and self-replication of complex systems, which reproduce themselves at some emergent level. John Conway’s Game of Life is a well-known example of a cellular automaton, and Langton’s Ant is one of the most famous applications of cellular automata to artificial life [37]. Both are examples of the second sense of self-replication, and both are important as complex systems with emergent, unpredictable behavior.

258 F. Bianchini Complex Systems, 27 © 2018

The study of evolutionary laws and principles that von Neumann began established a bridge between the collective behavior of microentities, from a logical and mathematical point of view, and emergent phenomena in the biological domain. This bridge connects AI and biology from a bottom-up standpoint, in the perspective of the emergence of complex system behavior from the interaction of the system’s parts. It led to evolutionary computation and evolutionary programming, which are relevant techniques in AI, artificial life and general computer science. In particular, genetic algorithms are the basis of the complex adaptive systems developed since the 1970s [38]. The emergence of new entities in space or time, which is typical of evolution, is an unpredictable process, even though some attempts to go beyond this problem have recently been made [21]. In evolutionary computation, such unpredictability is not only accepted but is also the strength of these techniques. So where is the problem?
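Before answering, the emergent behavior of cellular automata described above can be made concrete with a minimal sketch of Conway’s Game of Life (an illustrative implementation, not drawn from the references): the update rule mentions only a cell and its eight neighbors, yet a coherent “entity,” the glider, emerges and travels across the grid.

```python
from collections import Counter

def life_step(alive):
    """One generation of Conway's Game of Life on an unbounded grid.
    `alive` is the set of (row, col) cells that are currently on."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in alive
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# The glider: five cells whose purely local interactions produce a
# pattern that translates diagonally one cell every four generations,
# a behavior nowhere stated in the update rule itself.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
# After four steps the same shape reappears, shifted down-right by (1, 1).
assert state == {(r + 1, c + 1) for (r, c) in glider}
```

The glider is emergent in exactly the sense discussed here: nothing in the two-line rule predicts, without simulation, that such a self-propagating structure exists.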
The problem is that the strength afforded by unpredictability comes from the system being outside our control. In an artificial simulation, this is not a problem, but what about in an actual synthetic biological system? SB has complex systems as outcomes or targets, for example, synthetic multicellular systems [39, 40]. Evolutionary techniques in SB engineering are useful for in vitro experiments, but in general, SB does not welcome overwhelmingly emergent phenomena [41], because SB products are hard to control if their properties, development and behavior are emergent, especially when those products have to be inserted into the real world. To study emergent entities, behaviors and products of evolutionary processes, AI uses computer simulations and models, especially within the framework of the synthetic method [15]. The test bench of synthetic method outcomes, however, is often (e.g., in robotics) the interaction with the real world. Negative, namely nonadaptive, autonomous entities or behaviors are changed or deleted. In SB systems with emergent properties, the consequences, the behavior and the new emergent entities are as unpredictable as in AI. SB may use in silico and in vitro analysis and models (see, for example, [42]). But what about in vivo SB entities? Is it possible to use them to predict consequences and emergent properties? Is it possible to eliminate or change negative in vivo products, perhaps inside living organisms, in the same way as with AI models and artifacts? Do, or could, AI and SB share the same sort of simulation method and entity building? The answers to these questions are related to the possibility of making predictions with techniques whose outcomes are in principle unpredictable. Unpredictability is constitutive of these methods, but in the real world of living systems it may become a problem if there are no means to control the outcomes of unpredictable processes.
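The double-edged character of evolutionary techniques can be illustrated with a minimal genetic algorithm on the classic OneMax task (a sketch of my own; the population size, mutation rate and fitness function are illustrative assumptions, not taken from the references). With elitism the best fitness never decreases, so the process is useful; yet the particular trajectory the population follows depends on the random seed, which is precisely the accepted unpredictability discussed above.

```python
import random

def one_max(genome):
    """Fitness: number of 1s; the 'environment' rewards all-ones genomes."""
    return sum(genome)

def evolve(length=32, pop_size=20, generations=60, mut_rate=0.02, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    history = []  # best fitness seen at the start of each generation
    for _ in range(generations):
        pop.sort(key=one_max, reverse=True)
        history.append(one_max(pop[0]))
        # Elitism: carry the best individual over unchanged.
        next_pop = [pop[0][:]]
        while len(next_pop) < pop_size:
            # Tournament selection of two parents.
            p1 = max(rng.sample(pop, 3), key=one_max)
            p2 = max(rng.sample(pop, 3), key=one_max)
            # One-point crossover.
            cut = rng.randrange(1, length)
            child = p1[:cut] + p2[cut:]
            # Point mutation: each bit may flip with small probability.
            child = [b ^ 1 if rng.random() < mut_rate else b for b in child]
            next_pop.append(child)
        pop = next_pop
    pop.sort(key=one_max, reverse=True)
    return one_max(pop[0]), history

best, history = evolve(seed=0)
# Elitism makes the best-so-far fitness monotone nondecreasing,
# but the path taken (and the final genome) varies with the seed.
assert all(a <= b for a, b in zip(history, history[1:]))
assert best >= history[-1]
```

In silico, a bad outcome is simply discarded and the run repeated with another seed; the text’s point is that no such reset exists for an in vivo product of the same kind of process.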
The possibility of control seems to be a minimal requirement in every aspect of this problem, the unpredictability of emergence, in SB.

The Problem of Prediction in Artificial Intelligence and Synthetic Biology 259 https://doi.org/10.25088/ComplexSystems.27.3.249

4.3 Prediction and General Artificial Intelligence and Synthetic Biology Aims

In his 1950 paper, Turing made a second claim concerning prediction about computing machinery and intelligence: “I believe that in about fifty years’ time it will be possible to programme computers [...] to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. [...] I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs. The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any unproved conjecture, is quite mistaken. Provided it is made clear which are proved facts and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research” [22, p. 449]. The issue of AI’s future is hotly debated today, and Turing was one of the first thinkers to deal with it. His prediction about machines passing the test (the imitation game) has proved wrong, but his prediction that we would speak of machines thinking by the end of the twentieth century was rather exact. The “importance of the conjectures,” underlined by Turing, is significant in the present-day debate on the future of AI.
In particular, many authors discuss the possibility of an artificial general intelligence and of a (biological or artificial) superintelligence [43], that is, an intelligence that exceeds human intelligence. Even though the future is largely unpredictable, this is another relevant problem for prediction in AI, as it is not clear whether we already have, or will soon have, the technological means to create artificial or biological entities that are more intelligent than human beings. The problem is not trivial, as there are at least two cases: (1) a superintelligence we can recognize by its power to do things we want done but are unable to do ourselves; and (2) a superintelligence we will not recognize, as its powers, goals, motivations and methods lie beyond human understanding. The former case is more predictable than the latter; moreover, in the former case we are most likely to remain in control of the superintelligent entity, whereas in the latter we will not, precisely because we do not understand it. Prediction is crucial, but it is hard. This is why many institutes and research centers now deal with this problem (for example, the Future of Humanity Institute, the Future of Life Institute and the Leverhulme Centre for the Future of Intelligence, among others). SB is part of this scenario as well. For example, in a technologically improved scenario, genetic manipulation, selection and engineering, including genome editing techniques, could lead to biological superintelligence through understanding the biological mating patterns behind intelligence, or perhaps in some other, differently controlled evolutionary way. Consider, as an example of methodology, implantation in embryos and embryo selection over many generations.
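The selection-over-generations scenario can be sketched with a toy quantitative-genetics simulation (my own illustrative model; the heritability value, selection fraction, and trait distribution are arbitrary assumptions, not figures from the literature): repeatedly choosing the top fraction of each generation as parents shifts the population mean of a heritable trait upward, generation after generation.

```python
import random

def iterated_selection(pop_size=200, generations=10, top_fraction=0.1,
                       heritability=0.5, seed=0):
    """Toy model of truncation selection on a polygenic trait.
    Each child inherits a heritable share of its parents' advantage
    over the population mean, plus random environmental noise."""
    rng = random.Random(seed)
    population = [rng.gauss(100.0, 15.0) for _ in range(pop_size)]
    means = [sum(population) / pop_size]
    n_parents = max(2, int(pop_size * top_fraction))
    for _ in range(generations):
        # Truncation selection: only the top fraction reproduces.
        parents = sorted(population, reverse=True)[:n_parents]
        pop_mean = means[-1]
        population = []
        for _ in range(pop_size):
            mid_parent = (rng.choice(parents) + rng.choice(parents)) / 2
            # Breeder's-equation flavor: only part of the parental
            # advantage is inherited; the rest is environmental noise.
            child = (pop_mean
                     + heritability * (mid_parent - pop_mean)
                     + rng.gauss(0.0, 15.0 * (1 - heritability) ** 0.5))
            population.append(child)
        means.append(sum(population) / pop_size)
    return means

means = iterated_selection()
# Selecting the top 10% each generation steadily raises the mean trait.
assert means[-1] > means[0]
```

The toy model makes the mechanism of “accelerating the evolutionary process” explicit while leaving untouched the text’s real question: whether the long-run consequences of applying such a process to intelligence are predictable at all.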
This may lead to a weak, biological form of superintelligence, which could produce smarter and smarter human beings by accelerating the evolutionary process. Within the framework of “transhumanism” [25, 44], the point is that a great number of increasingly intelligent humans will be able to produce artificial superintelligences. SB techniques and methods can provide control over transhuman entities [25]. Further, in this way, SB could help future AI by solving part of the problem of
References

[1] G. Longo et al., "The Biological Consequences of the Computational World: Mathematical Reflections on Cancer Biology," 2017, arXiv:1701.08085.
[2] N. Rescher, The Limits of Science, 1999.
[3] J. H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, 1992.
[4] J. Monod, Le hasard et la nécessité (Chance and Necessity), 1970.
[5] E. Regis, Great Mambo Chicken and the Transhuman Condition: Science Slightly over the Edge, 1990.
[6] J.-P. Delahaye et al., "Unpredictability and Computational Irreducibility," 2011, arXiv.
[7] E. Schrödinger, What Is Life? The Physical Aspect of the Living Cell, 1944.
[8] N. Rescher, Predicting the Future: An Introduction to the Theory of Forecasting, 1998.
[9] R. Lupacchini, "Turing (1936), On Computable Numbers, with an Application to the Entscheidungsproblem," 2016.
[10] E. Lander et al., "Development and Applications of CRISPR-Cas9 for Genome Engineering," 2015.
[11] M. Schroder, Mind Children: The Future of Robot and Human Intelligence, 2016.
[12] A. Schmidt et al., "Causality, Information and Biological Computation: An Algorithmic Software Approach to Life, Disease and the Immune System," 2015, arXiv:1508.06538.
[13] P. M. B. Vitányi et al., "The Miraculous Universal Distribution," 1997.
[14] G. Church et al., "CRISPR-Cas Encoding of a Digital Movie into the Genomes of a Population of Living Bacteria," Nature, 2017.
[15] H. V. Westerhoff et al., "Emergence and Its Place in Nature: A Case Study of Biochemical Networks," Synthese, 2005.
[16] E. Lander et al., "Development and Applications of CRISPR-Cas9 for Genome Engineering," Cell, 2014.
[17] C. G. Langton, "Studying Artificial Life with Cellular Automata," 1986.
[18] A. Church, "An Unsolvable Problem of Elementary Number Theory," 1936.
[19] S. Wolfram, A New Kind of Science, 2003, Artificial Life.
[20] S. Sarkar, "Information in Genetics and Developmental Biology: Comments on Maynard Smith," Philosophy of Science, 2000.
[21] M. Li et al., An Introduction to Kolmogorov Complexity and Its Applications, Texts in Computer Science, 1997.
[22] H. Zwirn, "Computational Irreducibility and Computational Analogy," Complex Systems, 2015.
[23] G. M. Church et al., Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves, 2012.
[24] W. J. Freeman et al., "Alan Turing: The Chemical Basis of Morphogenesis," 1986.
[25] N. Bostrom, Superintelligence: Paths, Dangers, Strategies, 2014.
[26] D. Noble, "A Theory of Biological Relativity: No Privileged Level of Causation," Interface Focus, 2012.
[27] G. Boniolo, "Biology without Information," History and Philosophy of the Life Sciences, 2003.
[28] M. di Bernardo et al., "In-Silico Analysis and Implementation of a Multicellular Feedback Control Strategy in a Synthetic Bacterial Consortium," ACS Synthetic Biology, 2017.
[29] R. Cordeschi, "Steps toward the Synthetic Method: Symbolic Information Processing and Self-Organizing Systems in Early Artificial Intelligence Modeling," 2008.
[30] H. Zenil et al., "Empirical Encounters with Computational Irreducibility and Unpredictability," Minds and Machines, 2011.
[31] D. Endy, "Foundations for Engineering Biology," Nature, 2005.
[32] K. A. Markus, Making Things Happen: A Theory of Causal Explanation, 2007.
[33] Y. Herz et al., The Structure of Science: Problems in the Logic of Scientific Explanation, 2016.
[34] M. Maharbiz, "Synthetic Multicellularity," Trends in Cell Biology, 2012.
[35] M. B. Elowitz et al., "Synthetic Biology of Multicellular Systems: New Platforms and Applications for Animal Cells and Organisms," ACS Synthetic Biology, 2014.
[36] M. Resnik et al., "Aspects of Scientific Explanation," 1966.