Mapping Boolean functions with neural networks having binary weights and zero thresholds

In this paper, the ability of a binary neural network comprising only neurons with zero thresholds and binary weights to map given samples of a Boolean function is studied. A mathematical model describing a network with these restrictions is developed and shown to be quite amenable to algebraic manipulation. A key feature of the model is that it replaces the input and output variables with a single "normalized" variable. The model is then used to derive a priori criteria, stated in terms of this new variable, that a given Boolean function must satisfy in order to be mapped by a network having one or two layers. These criteria provide necessary and, in the case of a one-layer network, sufficient conditions for samples of a Boolean function to be mapped by a binary neural network with zero thresholds. It is also shown that the necessary conditions imposed by the two-layer network are, in some sense, minimal.
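To make the setting concrete, the sketch below illustrates one plausible reading of the restricted model: a single neuron with binary weights in {-1, +1} and zero threshold acting on bipolar inputs, together with a brute-force check of whether some such neuron maps a given set of samples. The ±1 encoding, the sign activation, and the tie-breaking rule are assumptions made for illustration; they are not taken from the paper, whose construction and "normalized" variable are not reproduced here.

```python
# Minimal sketch (assumptions, not the paper's construction): a neuron with
# binary weights w_i in {-1, +1} and zero threshold computes sign(sum w_i x_i)
# on inputs x_i in {-1, +1}. The search below checks whether some such neuron
# maps every given sample of a Boolean function.
from itertools import product

def sign(s):
    # Assumed tie-breaking convention: a zero net input is treated as +1.
    return 1 if s >= 0 else -1

def one_layer_maps(samples, n):
    """Return a binary weight vector that maps every (x, y) sample, or None.

    samples: iterable of (x, y) with x a tuple in {-1,+1}^n and y in {-1,+1}.
    """
    for w in product((-1, 1), repeat=n):
        if all(sign(sum(wi * xi for wi, xi in zip(w, x))) == y
               for x, y in samples):
            return w
    return None

# Example: samples of the 3-input majority function, realized by a
# zero-threshold binary-weight neuron with w = (1, 1, 1).
samples = [((x1, x2, x3), sign(x1 + x2 + x3))
           for x1, x2, x3 in product((-1, 1), repeat=3)]
print(one_layer_maps(samples, 3))   # -> (1, 1, 1)
```

By contrast, a function such as 3-input parity has no consistent weight vector under this single-neuron model, so the search returns None; this is the kind of separation the paper's one- and two-layer criteria are meant to capture.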