Automated Design of Complex Dynamic Systems

Several fields of study are concerned with uniting the concept of computation with the design of physical systems. For example, a recent trend in robotics is to design robots in such a way that they require minimal control effort. Another example is found in the domain of photonics, where recent efforts aim to benefit directly from complex nonlinear dynamics to achieve more efficient signal processing. The underlying goal of these and similar research efforts is to internalize a large part of the necessary computation within the physical system itself by exploiting its inherent nonlinear dynamics. This, however, often requires the optimization of a large number of system parameters, related to both the system's structure and its material properties. In addition, many of these parameters are subject to fabrication variability or to variation over time. In this paper we apply a machine learning algorithm to optimize physical dynamical systems. We show that such algorithms, which are normally applied to abstract computational entities, can be extended to differential equations and used to optimize the associated set of parameters that determines their behavior. We show that machine learning training methodologies are highly useful in designing robust systems, and we provide a set of both simple and complex examples using models of physical dynamical systems. Interestingly, the derived optimization method is intimately related to direct collocation, a method well known in the field of optimal control. Our work suggests that the application domains of machine learning and optimal control share a largely unexplored overlap that encompasses a novel design methodology for smart and highly complex physical systems.
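The core idea lends itself to a compact illustration: treat a simulated physical system like a trainable model, define a task-dependent cost on its trajectory, and adjust the physical parameters by gradient descent. The sketch below is a minimal, hypothetical example and not the implementation described in the paper: it tunes the stiffness and damping of a simulated damped oscillator so that its trajectory matches a target, using finite-difference gradients for brevity where a full treatment would differentiate through the solver itself (as in backpropagation through time or direct collocation). The system, parameter names, and loss are illustrative assumptions.

```python
# Minimal sketch (assumed example, not the authors' code): gradient-based tuning
# of the parameters of a simulated dynamical system, treated like a trainable model.
import numpy as np

def simulate(k, c, T=200, dt=0.05):
    """Semi-implicit Euler integration of a damped oscillator x'' = -k*x - c*x', x(0)=1."""
    x, v = 1.0, 0.0
    traj = np.empty(T)
    for t in range(T):
        a = -k * x - c * v          # dynamics with tunable stiffness k and damping c
        v += dt * a
        x += dt * v
        traj[t] = x
    return traj

def loss(params, target):
    """Mean squared deviation between the simulated and the desired trajectory."""
    k, c = params
    return np.mean((simulate(k, c) - target) ** 2)

def finite_diff_grad(f, params, eps=1e-4):
    """Numerical gradient; a full implementation would backpropagate through the solver."""
    g = np.zeros_like(params)
    for i in range(len(params)):
        p_plus, p_minus = params.copy(), params.copy()
        p_plus[i] += eps
        p_minus[i] -= eps
        g[i] = (f(p_plus) - f(p_minus)) / (2 * eps)
    return g

# Target behaviour: the trajectory produced by a "reference" parameter setting.
target = simulate(k=2.0, c=0.3)

params = np.array([1.0, 0.1])        # initial guess for (k, c)
lr = 0.05                            # gradient-descent step size
for step in range(500):
    params -= lr * finite_diff_grad(lambda p: loss(p, target), params)

print("recovered parameters:", params)   # should move toward (2.0, 0.3)
```

In practice the finite-difference gradient would be replaced by an adjoint or automatic-differentiation pass through the simulator; it is at that point that the connection between training recurrent neural networks and direct collocation in optimal control becomes explicit.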
