Classification capabilities of architecture-specific recurrent networks

The classification capabilities of Elman and Jordan architecture-specific recurrent threshold networks are analyzed in terms of the number and possible types of cells the networks are capable of forming in the input and hidden activation spaces. For Elman networks the number of cells is always 2^h, there are no closed or imaginary cells, and they are therefore not capable of forming disconnected decision regions. For Jordan networks this is only the case when the number of hidden units is less than or equal to the sum of the numbers of input and state units. We have interpreted the equations obtained, compared the results with those for feedforward threshold networks, and illustrated them with an example.
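
Restated in symbols (these symbols are assumptions made here for readability and are not fixed by the abstract): with h the number of hidden units, n the number of input units, and s the number of state units, the counting claims above read

\[
  N_{\mathrm{cells}}^{\mathrm{Elman}} = 2^{h}
  \quad\text{(no closed or imaginary cells)},
  \qquad
  N_{\mathrm{cells}}^{\mathrm{Jordan}} = 2^{h}
  \ \text{only if}\ h \le n + s .
\]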