The delta rule and learning for min-max neural networks

There have been many works discussing (∨, ∧)-neural networks. However, because of the difficulty of the mathematical analysis of (∨, ∧)-functions, most previous works choose bounded addition (+) and multiplication (*) as the operations for ∨ and ∧. The (∨, ∧) neural network with the operators (+, *) is much easier to handle than a (∨, ∧) neural network with other operators, e.g. the min-max operators, because it differs little from a backpropagation neural network. In this paper, the authors choose min and max as the operations for ∨ and ∧. Because functions built from min and max operations are difficult to analyze, (∨, ∧) neural networks with the operators (min, max) are hard to deal with. In Section 1 of the paper, the authors first discuss the differentiation of (∨, ∧)-functions and show that "if f₁(x), f₂(x), ..., fₙ(x) are continuously differentiable on the real line ℝ, then any function h(x) generated from f₁(x), f₂(x), ..., fₙ(x) through finitely many (∨, ∧) operations is continuously differentiable almost everywhere in ℝ". This statement guarantees that the delta rule given in Section 2 is well founded and effective. In Section 3 the authors implement a simple example to show that the delta rule given in Section 2 is capable of training (∨, ∧) neural networks.
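As a rough illustration only (not the authors' exact formulation), the following Python sketch applies a delta-rule update to a single max-min neuron y = max_j min(w_j, x_j). It relies on the fact, paralleling the Section 1 result, that min and max are differentiable almost everywhere, with the derivative passing through the argument that attains the extremum. The function names, learning rate, and toy target are hypothetical.

import numpy as np

def forward(w, x):
    """Max-min composition: y = max_j min(w_j, x_j), i.e. ∨_j (w_j ∧ x_j)."""
    return np.max(np.minimum(w, x))

def delta_rule_step(w, x, target, lr=0.1):
    """One delta-rule update: w <- w - lr * (y - target) * dy/dw."""
    inner = np.minimum(w, x)      # w_j ∧ x_j for every j
    y = inner.max()
    j = inner.argmax()            # index attaining the outer max (∨)
    grad = np.zeros_like(w)
    if w[j] <= x[j]:              # w_j attains the inner min (∧), so
        grad[j] = 1.0             # dy/dw_j = 1 almost everywhere
    return w - lr * (y - target) * grad, y

# Toy usage: drive the neuron's output toward a fixed target for one input.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.uniform(0, 1, size=4)
    x = np.array([0.2, 0.9, 0.5, 0.7])
    for _ in range(200):
        w, y = delta_rule_step(w, x, target=0.6)
    print("trained output:", forward(w, x))

The single-neuron case is only meant to show where the almost-everywhere derivative enters; in a layered (∨, ∧) network the same selection of the attaining argument is applied at every min and max node along the chain rule.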