Interactive multiple objective programming procedures via adaptive random search and feed-forward artificial neural networks
This dissertation is devoted to the development, implementation, and computational testing of two interactive multiple objective programming (MOP) procedures: (1) the Rolling Ball method, an adaptive random search approach, and (2) the FFANN method, an approach based on feed-forward artificial neural networks (FFANNs). In addition, a quad tree data structure is implemented and new operations are developed to keep track of nondominated solutions, new learning algorithms are developed to train FFANNs, and the Parameter Space Investigation method is examined computationally.
The Rolling Ball method focuses on complicated nonlinear MOP problems, such as those often encountered in engineering design, to which currently available solution procedures based on traditional optimization techniques are not applicable and for which sampling methods cannot generate satisfactory solutions. The procedure employs an adaptive random search algorithm to search for approximately nondominated solutions and uses a quad tree to manage the solutions not dominated by any others found so far. Computational results are reported.
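To make the search scheme concrete, the following is a minimal sketch of an adaptive random search of this kind, assuming a box-constrained bi-objective minimization problem. The step-size rules, the test function, and the linear-list archive are illustrative stand-ins, not the Rolling Ball method's exact mechanics.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def adaptive_random_search(f, lo, hi, iters=2000, step=0.5, shrink=0.95, grow=1.5):
    """Sample a candidate near the current point; widen the search radius
    after a success, narrow it after a failure, and keep an archive of
    (point, objectives) pairs not dominated by any other archived pair."""
    x = [random.uniform(l, h) for l, h in zip(lo, hi)]
    archive = [(tuple(x), f(x))]
    for _ in range(iters):
        cand = [min(max(xi + random.uniform(-step, step), l), h)
                for xi, l, h in zip(x, lo, hi)]
        fc = f(cand)
        if not any(dominates(fa, fc) for _, fa in archive):
            # Success: drop newly dominated entries, store the candidate,
            # recentre the search there, and widen the step.
            archive = [(p, fa) for p, fa in archive if not dominates(fc, fa)]
            archive.append((tuple(cand), fc))
            x, step = cand, step * grow
        else:
            step *= shrink  # failure: narrow the search
    return archive

# Illustrative bi-objective test problem (both objectives minimized).
f = lambda x: (x[0] ** 2 + x[1] ** 2, (x[0] - 1) ** 2 + x[1] ** 2)
front = adaptive_random_search(f, lo=[-2.0, -2.0], hi=[2.0, 2.0])
```

The linear-list archive in this sketch is exactly what the quad tree discussed next is meant to replace when the set of nondominated solutions becomes large.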
Test results show that the quad tree data structure is more effective than the linear list approaches currently in use for identifying nondominated solutions in large problems. The data structure can be incorporated into many MOP procedures: it not only enables many discrete alternative methods to attack larger problems, but also supports procedures that solve highly nonlinear MOP problems by discrete representation.
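Below is a minimal Python sketch of the successor-indexing idea behind such a quad tree, assuming minimization. It checks dominance only along the insertion path; a complete implementation must also scan other subtrees for dominance and handle deletion and reinsertion of nodes that a new vector dominates.

```python
class QuadTreeNode:
    """Node of a quad tree over k-dimensional objective vectors (minimization).
    Children are indexed by the k-bit 'successor' pattern obtained by
    comparing a candidate with the node's vector component by component."""

    def __init__(self, point):
        self.point = point
        self.children = {}  # successor index -> QuadTreeNode

    def insert(self, cand):
        """Store cand unless a vector on its insertion path weakly dominates it."""
        k = len(cand)
        s = sum((cand[i] >= self.point[i]) << i for i in range(k))
        if s == (1 << k) - 1:
            return False  # this node weakly dominates cand: reject
        # s == 0 would mean cand dominates this node; the full method then
        # removes the node and reinserts its subtrees (omitted in this sketch).
        if s not in self.children:
            self.children[s] = QuadTreeNode(cand)
            return True
        return self.children[s].insert(cand)

root = QuadTreeNode((0.8, 0.3))
root.insert((0.5, 0.6))   # incomparable with the root: stored in subtree 0b10
root.insert((0.9, 0.4))   # weakly dominated by the root: rejected
```

The advantage over a linear list is that the successor index prunes the comparisons: a candidate is tested only against vectors on relevant paths rather than against every stored solution.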
Computational results, together with theoretical arguments, show that the Parameter Space Investigation method is not very effective on its own. The method can, however, be used to generate starting solutions for more sophisticated search procedures.
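For illustration, here is a sketch of how such a space-probing pass can seed a subsequent search, assuming a box-constrained feasible region and minimization. The Parameter Space Investigation method proper probes the space with low-discrepancy (LP-tau) sequences; plain pseudorandom sampling is substituted here for brevity.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def psi_style_starting_points(f, lo, hi, n_trials=1024, n_keep=10):
    """Probe the feasible box with trial points, evaluate the objectives,
    and keep up to n_keep nondominated trials as starting solutions.
    (The O(n^2) dominance filter is fine at this scale.)"""
    trials = [tuple(random.uniform(l, h) for l, h in zip(lo, hi))
              for _ in range(n_trials)]
    evaluated = [(x, f(x)) for x in trials]
    nondom = [(x, fx) for x, fx in evaluated
              if not any(dominates(fy, fx) for _, fy in evaluated)]
    return nondom[:n_keep]

f = lambda x: (x[0] ** 2 + x[1] ** 2, (x[0] - 1) ** 2 + x[1] ** 2)
starts = psi_style_starting_points(f, lo=[-2.0, -2.0], hi=[2.0, 2.0])
```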
The FFANN method focuses on representing and utilizing the decision maker's preference structure with an FFANN. In this approach, preference information is captured by training an FFANN, and the trained FFANN is then incorporated into traditional optimization techniques to search for improved solutions. The motivation for using an FFANN stems from the desire to utilize preference information effectively and thereby reduce the burden on the decision maker. The effectiveness and solution behavior of the procedure are tested with different hypothetical value functions.
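To make the idea concrete, here is a minimal sketch, assuming a hypothetical linear value function standing in for the decision maker, a one-hidden-layer FFANN, and plain batch gradient descent (not the dissertation's learning algorithms, one of which is sketched after the next paragraph). The trained network then scores candidate solutions in place of the decision maker.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical value function standing in for the decision maker's
# (unknown) preferences over two objectives to be minimized.
value = lambda z: -(2.0 * z[..., 0] + 1.0 * z[..., 1])

# Training pairs: objective vectors the decision maker has scored.
Z = rng.uniform(0.0, 1.0, size=(200, 2))
y = value(Z)

# One-hidden-layer FFANN: z -> tanh(z W1 + b1) W2 + b2.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(Z):
    H = np.tanh(Z @ W1 + b1)
    return (H @ W2 + b2).ravel(), H

# Plain batch gradient descent on squared error (backpropagation by hand).
lr = 0.05
for _ in range(3000):
    pred, H = forward(Z)
    err = pred - y
    gW2 = H.T @ err[:, None] / len(Z)
    gb2 = err.mean(keepdims=True)
    dH = (err[:, None] * W2.T) * (1 - H ** 2)   # error pushed through tanh
    gW1 = Z.T @ dH / len(Z); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# The trained net now ranks candidate solutions on the decision maker's behalf.
cands = rng.uniform(0.0, 1.0, size=(50, 2))
scores, _ = forward(cands)
best = cands[scores.argmax()]
```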
A common property of the new learning algorithms for training FFANNs is their application of traditional nonlinear unconstrained optimization techniques. Beyond the MOP setting, these learning algorithms can also be used to train FFANNs for other applications.
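A minimal sketch of that idea, assuming a one-hidden-layer FFANN and a sum-of-squared-errors criterion: the weights are flattened into a single vector and handed to a standard quasi-Newton routine (SciPy's BFGS here, as a stand-in; the dissertation's own algorithms are not reproduced).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (100, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]      # illustrative training targets

n_in, n_hid = 2, 6

def unpack(w):
    """Split the flat weight vector into the network's parameter arrays."""
    i = n_in * n_hid
    W1 = w[:i].reshape(n_in, n_hid)
    b1 = w[i:i + n_hid]
    W2 = w[i + n_hid:i + 2 * n_hid]
    b2 = w[-1]
    return W1, b1, W2, b2

def sse(w):
    """Sum-of-squared-errors of the FFANN, viewed as an unconstrained
    nonlinear function of the flattened weight vector w."""
    W1, b1, W2, b2 = unpack(w)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return 0.5 * np.sum((pred - y) ** 2)

w0 = rng.normal(0.0, 0.5, n_in * n_hid + 2 * n_hid + 1)
# BFGS, a standard quasi-Newton method for unconstrained minimization;
# the gradient is approximated by finite differences for brevity.
result = minimize(sse, w0, method="BFGS")
```

Because training is posed as an ordinary unconstrained minimization over the weight vector, any standard nonlinear optimizer can be substituted for the routine used here.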