Prediction of Dough Rheological Properties Using Neural Networks

Cereal Chem. 72(3):308-311

A neural network was designed to predict the rheological properties of dough from the torque developed during mixing. Dough rheological properties were determined using traditional equipment such as the farinograph and extensigraph. The back-propagation neural network was designed and trained with the acquired mixer torque curve (input) and the measured rheological properties (output). The trained neural network accurately predicted the rheological properties (>94%) based on the mixer torque curve. The ability to measure the rheology of every batch of dough enables online process control by modifying subsequent process conditions. This development has significant potential to improve product quality and reduce cost by minimizing process variability during dough mixing.

Dough rheological properties are important for both product quality and process efficiency. Dough rheological properties, indicated by parameters such as the farinograph peak, extensibility, and maximum resistance, can be related to product specific volume and textural attributes. These parameters subsequently determine consumer acceptance. Therefore, accurate prediction of dough rheology could realize many benefits for the baking industry. However, measuring the rheology of every batch is impractical, while predicting these rheological properties has historically proved to be complex. Therefore, most plant operations measure the rheological properties of only a few batches of dough per production shift. This makes online and in-time process adjustment impossible.

Neural networks are new information processing techniques offering solutions to problems that have not been explicitly formulated. Much of the excitement surrounding neural networks stems from their unique ability to learn by experience. In the past few years, neural networks have shown increased power over many other statistical methods when solving nonlinear prediction problems (Bochereau et al 1992).
Neural network technology has been inspired by biological models. The building blocks of neural networks are neurons, or processing elements. In biological systems, neurons operate by receiving input from individual dendrites. This input is weighted according to the synapses, and the resulting quantities are summed. If the sum is greater than the neuron threshold, the neuron executes a transfer function on the weighted sum and passes the value on to the next neuron. Figure 1 illustrates a processing element (PE), the artificial analog of a neuron. The transfer function maps a PE's possibly unbounded summation of inputs to a predefined range, the output. The operation of a processing element parallels its biological equivalent, with synapses replaced by connection weights. In artificial neural networks, PEs are combined into layers.

The parallel structure of neural networks distinguishes them from traditional serial-processing computers and results in some of the fundamental properties of neural networks. Neural networks can solve problems that are traditionally difficult or impossible using alternative computing techniques. These problems can be characterized as involving complex, nonlinear processes and noisy or incomplete data. The capability of neural networks to solve such problems suggests that neural networks can become valuable tools for the food and agricultural industries, since complex, nonlinear processes and noisy data are commonplace, and most food and agricultural processing involves estimation, prediction, and control. Furthermore, the structure of neural networks provides not only structural parallelism but also processing parallelism. This enables very fast decisions to be made in real time.

The learning or training phase of a neural network typically requires paired input-output data. The input is fed into the network, transferred through the network layers, and ultimately yields a predicted output.
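The processing-element behavior described above can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation; the sigmoid transfer function and the zero threshold are assumptions, since the paper does not specify either.

```python
import math

def processing_element(inputs, weights, threshold=0.0):
    """One PE: weight each input, sum the results, and (if the sum
    exceeds the threshold) map it into a predefined range with a
    transfer function."""
    total = sum(x * w for x, w in zip(inputs, weights))
    if total <= threshold:
        return 0.0  # below threshold: the PE does not fire
    # Sigmoid transfer function maps the unbounded sum into (0, 1).
    return 1.0 / (1.0 + math.exp(-total))
```

For example, `processing_element([1.0, 2.0], [0.5, 0.5])` sums the weighted inputs to 1.5 and maps that through the sigmoid to about 0.82; layering many such PEs, with each layer's outputs feeding the next layer's inputs, gives the network structure described above.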
This predicted output is subsequently compared with the actual output, and the connection weights between the PEs are modified to minimize the deviation between the predicted and actual outputs. This process continues until a defined accuracy has been reached. This is the concept of back-propagation. During this training phase, many factors of the network structure, such as the number of hidden nodes and the number of layers, are varied by a trial-and-error approach to obtain the optimum network. At this point, the network can be fed input data alone, and the model will accurately calculate the predicted output.

Two of the key neural network variables studied in this research were learning rate and momentum. Learning rate controls the degree to which connection weights are modified during the training phase. The larger the learning rate, the larger the weight changes, and the faster the learning will proceed. However, if the learning rate is set too high, the neural network will not converge to its true optimum. Momentum weights the contribution of the previous iteration's weight change to the next connection-weight modification.

Application of neural networks in food, agricultural, and bio
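The roles of learning rate and momentum in the weight updates described above can be sketched as a single update rule. This is a generic gradient-descent-with-momentum sketch under assumed numeric values, not the network or settings used in this study.

```python
def update_weight(w, grad, prev_delta, learning_rate, momentum):
    """One back-propagation step for a single connection weight:
    the learning rate scales the current error gradient, and the
    momentum term carries over a fraction of the previous change."""
    delta = -learning_rate * grad + momentum * prev_delta
    return w + delta, delta

# Toy illustration: fit one weight so that w * 2.0 approximates 1.0
# by repeatedly descending the squared-error surface (w*2 - 1)**2.
w, prev_delta = 0.0, 0.0
for _ in range(50):
    grad = 2.0 * (w * 2.0 - 1.0) * 2.0  # derivative of (w*2 - 1)**2
    w, prev_delta = update_weight(w, grad, prev_delta,
                                  learning_rate=0.05, momentum=0.5)
# w converges toward 0.5
```

A larger learning rate scales `delta` up and speeds training but can overshoot the optimum, as the text notes; the momentum term smooths successive updates by reusing part of `prev_delta`.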