Microscopic Equations and Stability Conditions in Optimal Neural Networks
Using the cavity method I derive the microscopic equations and their stability condition for information learning in neural networks, optimized with arbitrary performance functions in terms of the aligning fields of the examples. In the thermodynamic limit the aligning fields are well-defined functions of the cavity fields. Iterating the microscopic equations provides a general algorithm for network learning, supported by simulations in the maximally stable perceptron and the committee tree. Macroscopic results agree with the replica theory and the Almeida-Thouless stability condition.
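A minimal sketch of what iterating such microscopic equations can look like for the maximally stable perceptron, under assumptions not taken from the paper: the aligning-field function is taken to be lam(t) = max(t, kappa), the cavity field of each example is approximated by its current field, and the update rule and parameters (N, P, kappa) are illustrative choices rather than the author's algorithm.

```python
import numpy as np

# Sketch only: iterate microscopic-style equations for a perceptron with margin kappa.
# Assumed simplifications: lam(t) = max(t, kappa) as the aligning-field function,
# and the current field used in place of the true cavity field.

rng = np.random.default_rng(0)
N, P = 200, 100                              # input dimension, number of examples
kappa = 0.5                                  # target stability (margin)

xi = rng.choice([-1.0, 1.0], size=(P, N))    # random binary example patterns
sigma = rng.choice([-1.0, 1.0], size=P)      # random target outputs

J = rng.normal(size=N)
J /= np.linalg.norm(J) / np.sqrt(N)          # spherical normalisation |J|^2 = N

for sweep in range(200):
    # fields of all examples under the current weights
    h = (xi @ J) * sigma / np.sqrt(N)
    # desired aligning fields: push sub-margin examples up to kappa
    lam = np.maximum(h, kappa)
    # each example contributes (lam - h) along its pattern direction
    J += (sigma * (lam - h)) @ xi / np.sqrt(N)
    J /= np.linalg.norm(J) / np.sqrt(N)      # restore normalisation

stabilities = (xi @ J) * sigma / np.sqrt(N)
print(f"minimum stability after training: {stabilities.min():.3f}")
```

With a feasible margin (here alpha = P/N = 0.5 and kappa = 0.5), the minimum stability should rise toward kappa over the sweeps, illustrating how a fixed point of the microscopic equations corresponds to a trained network.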