As highly parallel, heterogeneous computers become commonplace, automatic parallelization of software is an increasingly critical unsolved problem. Continued progress on this problem will require large quantities of information about the runtime structure of sequential programs to be stored and reasoned about. Manually formalizing all of this information through traditional approaches, which rely on semantic analysis at the language or instruction level, has historically proved challenging. We take a lower-level approach, eschewing semantic analysis and instead modeling von Neumann computation as a dynamical system, i.e., a state space and an evolution rule. This view gives a natural way to use probabilistic inference to automatically learn powerful representations of this information. The model enables a promising new approach to automatic parallelization, in which probability distributions learned empirically over the state space are used to guide speculative solvers. We describe a prototype virtual machine that uses this model of computation to automatically achieve linear speedups for an important class of deterministic, sequential Intel binary programs through statistical machine learning and a speculative, generalized form of memoization.
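To make the two central ideas concrete, the sketch below (in Python; the paper itself targets Intel binaries, and every name here is hypothetical) models a program as a dynamical system whose state evolves as s(t+1) = f(s(t)), and uses a learned predictor over the state space to drive a speculative, generalized form of memoization: a worker jumps ahead to a predicted future state and executes from there, and if the main trace later reaches that state, the precomputed suffix is reused; otherwise it is discarded. This is a minimal sketch under those assumptions, not the authors' implementation.

# Minimal sketch: the evolution rule `f` is one machine step, `predict`
# is an empirically learned guess of the state `horizon` steps ahead,
# and a speculative worker precomputes from that guess.
# All names are hypothetical; this is not the paper's actual VM.

from typing import Callable, Hashable

State = Hashable  # e.g., (program counter, registers, memory digest)

def run(f: Callable[[State], State],
        s: State,
        halted: Callable[[State], bool],
        predict: Callable[[State], State],
        horizon: int) -> State:
    while not halted(s):
        # Speculation: execute `horizon` steps from the predicted state.
        spec_start = predict(s)
        spec_end = spec_start
        for _ in range(horizon):
            if halted(spec_end):
                break
            spec_end = f(spec_end)
        # Main trace: step forward; if it reaches the predicted state,
        # reuse the speculative suffix (a generalized memoization hit).
        for _ in range(horizon):
            if halted(s):
                return s
            s = f(s)
            if s == spec_start:
                s = spec_end  # stitch in the precomputed work
                break
        # On a miss, the speculative work is simply discarded.
    return s

# Toy usage: a countdown "program" whose state is a single integer and
# whose (hypothetical) predictor guesses four steps ahead.
step = lambda s: s - 1
final = run(step, s=10, halted=lambda s: s == 0,
            predict=lambda s: max(s - 4, 0), horizon=4)
print(final)  # -> 0

In the real system the main trace and the speculative worker would run concurrently on separate cores and validation would compare actual machine states; the sequential loop above only illustrates the control flow, not the source of the speedup.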