A cellular automata FPGA architecture that can be trained with neural networks

A simple FPGA architecture can be derived from 1-D cellular automata structures in which a 2-D spatial feedforward network is formed. By permitting each site to take on any possible function of its inputs (through LUT substitution), a novel Boolean network architecture is produced. It can be viewed as an FPGA, and it can be refined in a number of ways to accommodate configuration circuitry and registration structures. Notable features of this FPGA include its low descriptive complexity and high regularity, low interconnect demand, interchangeability of logic and routing resources, and defect tolerance. By exploiting a connection between the Vapnik-Chervonenkis dimension of (at least) low-order LUTs and that of perceptron neural networks, these Boolean networks can be modeled by equivalent artificial neural networks, which can then be trained using traditional approaches such as the backpropagation algorithm. This paper reviews the derivation of this architecture and demonstrates examples of evolved circuit designs.
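The two central ideas above — a 1-D cellular automaton unrolled into a 2-D feedforward array of LUT sites, and a perceptron as a trainable stand-in for a low-order LUT — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 2-input (left-neighbor, self) neighborhood, the wraparound boundary, and all function names are assumptions introduced here.

```python
# Illustrative sketch only: the 2-input neighborhood, learning rate, and
# names below are assumptions, not details taken from the paper.

def lut_eval(table, a, b):
    """Evaluate a 2-input LUT stored as a 4-entry truth table."""
    return table[(a << 1) | b]

def ca_row(row, table):
    """One layer of the unrolled CA: site i reads its left neighbor and
    itself (with wraparound), so successive steps form a 2-D feedforward
    network of LUT sites."""
    return [lut_eval(table, row[i - 1], row[i]) for i in range(len(row))]

def unroll(inputs, table, depth):
    """Feed the input row through `depth` LUT layers."""
    row = list(inputs)
    for _ in range(depth):
        row = ca_row(row, table)
    return row

def perceptron(w1, w2, bias, a, b):
    """Hard-threshold neuron: a trainable model of one LUT site."""
    return 1 if w1 * a + w2 * b + bias > 0 else 0

def fit_lut(table, lr=0.5, epochs=50):
    """Fit a perceptron to a linearly separable LUT truth table using
    the classical perceptron learning rule (a single-neuron stand-in
    for the gradient-based training the abstract describes)."""
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for a in (0, 1):
            for b in (0, 1):
                err = lut_eval(table, a, b) - perceptron(w1, w2, bias, a, b)
                w1, w2, bias = w1 + lr * err * a, w2 + lr * err * b, bias + lr * err
    return w1, w2, bias

AND = [0, 0, 0, 1]  # truth table indexed by (a << 1) | b
w = fit_lut(AND)
ok = all(perceptron(*w, a, b) == lut_eval(AND, a, b)
         for a in (0, 1) for b in (0, 1))
```

Note that a single hard-threshold neuron captures only the linearly separable truth tables (a 2-input XOR LUT has none), which is presumably why the abstract invokes full neural networks trained by backpropagation rather than isolated perceptrons.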