We describe a class of growth algorithms for finding low-energy states of heteropolymers. These polymers are toy models for proteins, and the hope is that similar methods will ultimately be useful for finding native states of real proteins from heuristic or a priori determined force fields. Like standard Markov chain Monte Carlo methods, these algorithms generate Gibbs-Boltzmann distributions, but they do not obtain this distribution as the stationary state of a suitably constructed Markov chain. Rather, they grow the polymer by successively adding individual particles, guiding the growth towards configurations of lower energy, and using "population control" to eliminate bad configurations and multiply good ones. This is done not via a breadth-first implementation, as in genetic algorithms, but depth-first, via recursive backtracking. As various benchmark tests show, the resulting algorithms are extremely efficient for lattice models and remain competitive with other methods for simple off-lattice models.
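To illustrate the idea, the following is a minimal sketch of such a chain-growth algorithm with population control (in the spirit of pruned-enriched Rosenbluth sampling) for a homopolymer on a 2D square lattice. The contact energy, temperature, threshold constants, and the specific pruning/enrichment rules are illustrative assumptions, not the exact scheme of the paper; the point is only to show depth-first growth with recursive backtracking, enrichment of high-weight partial chains, and stochastic pruning of low-weight ones.

```python
import random
import math

EPS   = -1.0           # assumed attractive contact energy between non-bonded neighbours
BETA  = 1.0            # inverse temperature (illustrative value)
N_MAX = 20             # target chain length
W_LO, W_HI = 0.1, 10.0 # pruning / enrichment thresholds relative to the running average

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

n_started  = 0                       # number of growth attempts ("tours") started
avg_weight = [0.0] * (N_MAX + 1)     # accumulated weight at each chain length
best = {"energy": 0.0, "conf": None} # lowest-energy configuration found so far


def contacts(site, chain_set, prev):
    """Count occupied, non-bonded nearest neighbours of `site`."""
    return sum(1 for dx, dy in STEPS
               if (site[0] + dx, site[1] + dy) in chain_set
               and (site[0] + dx, site[1] + dy) != prev)


def grow(chain, chain_set, weight, energy):
    """Depth-first recursive growth with population control."""
    n = len(chain)
    avg_weight[n] += weight
    if n == N_MAX:
        if energy < best["energy"]:
            best["energy"], best["conf"] = energy, list(chain)
        return

    head = chain[-1]
    free = [(head[0] + dx, head[1] + dy) for dx, dy in STEPS
            if (head[0] + dx, head[1] + dy) not in chain_set]
    if not free:
        return                       # attrition: the walk is trapped

    # Rosenbluth-style step: choose one free site with Boltzmann-biased probability
    boltz = [math.exp(-BETA * EPS * contacts(s, chain_set, head)) for s in free]
    total = sum(boltz)
    site = random.choices(free, weights=boltz)[0]
    d_energy = EPS * contacts(site, chain_set, head)
    new_weight = weight * total / len(STEPS)

    # population control relative to the running average weight at this length
    ref = avg_weight[n] / n_started
    chain.append(site)
    chain_set.add(site)
    if new_weight > W_HI * ref:
        # enrichment: two independent continuations, each with half the weight
        grow(chain, chain_set, new_weight / 2, energy + d_energy)
        grow(chain, chain_set, new_weight / 2, energy + d_energy)
    elif new_weight < W_LO * ref:
        # pruning: kill with probability 1/2, otherwise continue with doubled weight
        if random.random() < 0.5:
            grow(chain, chain_set, 2 * new_weight, energy + d_energy)
    else:
        grow(chain, chain_set, new_weight, energy + d_energy)
    chain_set.remove(site)
    chain.pop()                      # backtrack before returning to the caller


for _ in range(2000):
    n_started += 1
    grow([(0, 0)], {(0, 0)}, 1.0, 0.0)

print("lowest energy found:", best["energy"])
```

Because enrichment and pruning are handled by the depth of the recursion rather than by an explicit population of configurations, only a single chain has to be stored at any time; the backtracking at the end of `grow` restores the partial chain before each alternative continuation is explored.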