Logical inference in symmetric connectionist networks

This work delineates the relationship between logic and symmetric neural networks. The motivation is twofold: (1) to study the capabilities and limitations of connectionist networks with respect to knowledge representation, and (2) to develop a new kind of inference engine that is expressive, massively parallel, capable of coping with nonmonotonic or noisy knowledge, and capable of learning. The thesis shows that propositional logic can be implemented efficiently in networks whose hidden units allow the representation of arbitrary constraints. An inference engine is constructed that can obtain its knowledge either by compiling symbolic rules or by learning them inductively from examples. The approach is then extended to support unrestricted first-order logic and nonmonotonic reasoning. Experiments with large-scale, randomly generated satisfiability problems indicate that, when executed in parallel, the method outperforms other existing algorithms. Finally, the thesis studies the feasibility of finding sound solutions: a new distributed algorithm is presented that efficiently finds a global solution for certain network topologies, and negative results are proved for networks with less restricted topologies.
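
To make the propositional encoding concrete, the following is a minimal sketch, not the thesis's exact construction: each 3-CNF clause contributes a penalty that is zero exactly when the clause is satisfied, and one hidden unit per clause reduces the cubic cross-term to quadratic form, so the whole energy is realizable by a symmetric (pairwise-connected) network whose global minima, at energy 0, are precisely the satisfying assignments. The update schedule below is plain simulated annealing, and all names (lit_val, clause_energy, settle) are illustrative.

    import math
    import random

    def lit_val(l, s):
        """0/1 value of a DIMACS-style literal l under assignment s."""
        return s[abs(l)] if l > 0 else 1 - s[abs(l)]

    def clause_energy(clause, w, s):
        # Quadratized violation penalty for a 3-literal clause. With literal
        # values a, b, c, the cubic penalty (1-a)(1-b)(1-c) expands to
        # 1 - a - b - c + ab + ac + bc - abc, and the hidden unit w removes
        # the cubic term via  -abc = min over w in {0,1} of w*(2 - a - b - c).
        a, b, c = (lit_val(l, s) for l in clause)
        return 1 - a - b - c + a*b + a*c + b*c + w * (2 - a - b - c)

    def energy(clauses, s, ws):
        return sum(clause_energy(c, ws[i], s) for i, c in enumerate(clauses))

    def settle(clauses, n_vars, steps=50_000, t0=2.0, seed=0):
        """Asynchronous stochastic unit flips (simulated annealing)."""
        rng = random.Random(seed)
        s = {v: rng.randint(0, 1) for v in range(1, n_vars + 1)}
        ws = [rng.randint(0, 1) for _ in clauses]       # hidden units
        e = energy(clauses, s, ws)
        units = [('v', v) for v in s] + [('h', i) for i in range(len(clauses))]
        for step in range(steps):
            if e == 0:                   # global minimum: formula satisfied
                return s
            t = max(t0 * (1 - step / steps), 0.05)
            kind, i = rng.choice(units)  # pick one unit, propose a flip
            if kind == 'v':
                s[i] ^= 1
            else:
                ws[i] ^= 1
            e_new = energy(clauses, s, ws)   # a real network would compute
            de = e_new - e                   # the local energy gap from
            if de <= 0 or rng.random() < math.exp(-de / t):  # incident weights
                e = e_new                    # accept the flip
            elif kind == 'v':
                s[i] ^= 1                    # reject: undo
            else:
                ws[i] ^= 1
        return s if e == 0 else None

    # (x1 v x2 v ~x3) & (~x1 v x3 v x2) & (~x2 v ~x3 v x1) is satisfiable.
    print(settle([(1, 2, -3), (-1, 3, 2), (-2, -3, 1)], 3))

Since each clause penalty is nonnegative and zero only when the clause holds, reaching total energy 0 certifies a model; annealing finds it only with high probability, which is one way to see why the soundness question of reaching a global rather than local minimum matters.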
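On that soundness question, the abstract does not name the "certain network topologies" for which a global solution is found efficiently. Purely as an illustration of why restricted topologies help, the sketch below assumes a tree-structured (acyclic) connection graph, on which a standard min-sum dynamic program minimizes the Hopfield-style energy E(s) = -sum w_ij s_i s_j - sum theta_i s_i over 0/1 units exactly, whereas local single-flip descent can stall in a local minimum. The function name global_min_tree and the rooted message-passing scheme are illustrative, not the thesis's algorithm.

    import math
    from collections import defaultdict

    def global_min_tree(n, edges, theta, root=0):
        """edges: {(i, j): w_ij} forming a tree on nodes 0..n-1."""
        adj = defaultdict(dict)
        for (i, j), w in edges.items():
            adj[i][j] = w
            adj[j][i] = w
        # Root the tree; DFS order lists every parent before its children.
        parent, order, stack = {root: None}, [], [root]
        while stack:
            u = stack.pop()
            order.append(u)
            for v in adj[u]:
                if v != parent[u]:
                    parent[v] = u
                    stack.append(v)
        children = defaultdict(list)
        for v, p in parent.items():
            if p is not None:
                children[p].append(v)
        # Bottom-up pass: msg[u][sp] = min energy of u's subtree (including
        # the u-parent edge) given parent state sp; pick[u][sp] = argmin.
        msg, pick = {}, {}
        for u in reversed(order):
            msg[u], pick[u] = [0.0, 0.0], [0, 0]
            for sp in (0, 1):
                best = math.inf
                for su in (0, 1):
                    c = -theta[u] * su + sum(msg[v][su] for v in children[u])
                    if parent[u] is not None:
                        c -= adj[u][parent[u]] * su * sp
                    if c < best:
                        best, pick[u][sp] = c, su
                msg[u][sp] = best
        # Top-down pass: fix the root, then propagate the recorded argmins.
        s = {root: pick[root][0]}       # the root's "parent state" is a dummy
        for u in order:
            for v in children[u]:
                s[v] = pick[v][s[u]]
        return s

    # Example: a 3-node chain 0-1-2 whose couplings penalize agreement.
    print(global_min_tree(3, {(0, 1): -1.0, (1, 2): -1.0}, [0.5, 0.5, 0.5]))

The example chain is frustrated (both couplings prefer disagreeing neighbors while the biases prefer active units), a case where greedy descent can lock into a suboptimal state; the dynamic program still returns the exact global minimizer, s = (1, 0, 1) with energy -1.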