Reducing the size of the nondominated set: Pruning by clustering

The multicriterion simplex methods of Evans and Steuer [6] and Yu and Zeleny [13] have encouraged model builders to consider matrix criteria. When conflicting objectives are considered simultaneously, there is no such thing as an optimum solution. Rather, the result is a preferred class of basic feasible solutions called the nondominated set. Since this set can be extremely large, some means must be found to prune it. Steuer [8] has proposed a filtering method. This paper presents another mechanistic aid to the decision maker (DM), based on cluster analysis. The idea is to portray the nondominated set N by a representative subset. Cluster analysis partitions N into groups of relatively homogeneous elements. In this research I added a very general evaluative criterion: minimum redundancy. Since there is a threshold of resolution beyond which the DM cannot perceive the difference between two very similar solution vectors, there is little point in making him waste time processing all of N in the search for a final solution. Two forms of cluster analysis are tested: direct clustering and hierarchical clustering. Within the group of hierarchical methods there are eight algorithms. In the present application the two worst things that can happen are clusters that "chain" and outlying vectors (the residue set) that are obscured. Taking account of these two undesirable outcomes, three algorithms worked best on the particular data used: Ward's Method, the Group Average Method and the Centroid Method. The hierarchical methods are recommended over direct clustering. (However, some similarity between direct and hierarchical clustering is discovered.) Hierarchical clustering serves to minimize redundancy and thereby reduces the chance that the selection of a final solution will stress the decision maker beyond his information endurance. The concepts stressed in this paper are very similar to those expressed in Törn [9].
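The nondominated set N consists of the solutions not dominated by any other feasible solution. As a minimal sketch of the underlying concept (a naive pairwise filter over a finite list of criterion vectors, not the vector-maximum simplex method of [6] or [13]; function names are illustrative only, assuming maximization of every criterion):

```python
def dominates(a, b):
    """True if criterion vector a dominates b (maximization):
    a is at least as good in every criterion and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def nondominated(vectors):
    """Return the nondominated subset N of a finite list of criterion vectors."""
    return [v for v in vectors
            if not any(dominates(u, v) for u in vectors if u is not v)]

points = [(3, 1), (1, 3), (2, 2), (1, 1), (2, 1)]
print(nondominated(points))  # -> [(3, 1), (1, 3), (2, 2)]
```

For the linear programs considered in the paper, N can be far too large for such enumeration; that is precisely what motivates pruning it to a representative subset.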
This article presents computational experience with the cluster analysis, which was developed independently of Törn's work [9]; his approach and mine are combined under an algorithmic strategy called Two-Stage Pruning (TSP). TSP first reduces the nondominated set to a representative subset. This subset, in turn, is interactively manipulated until a decision evolves.
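The first stage of TSP can be illustrated with a small sketch, assuming Euclidean distances between criterion vectors and the Group Average Method (one of the three algorithms recommended above); the data, function names, and the centroid-nearest rule for choosing each cluster's representative are illustrative assumptions, not the paper's exact procedure:

```python
import math

def group_average(c1, c2):
    """Average pairwise Euclidean distance between two clusters
    (the Group Average merge criterion)."""
    return sum(math.dist(a, b) for a in c1 for b in c2) / (len(c1) * len(c2))

def prune(vectors, k):
    """Agglomerate the vectors hierarchically down to k clusters, then keep
    one representative per cluster: the member closest to the cluster centroid."""
    clusters = [[v] for v in vectors]
    while len(clusters) > k:
        # merge the two clusters that are closest under the group-average criterion
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: group_average(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    reps = []
    for c in clusters:
        centroid = tuple(sum(xs) / len(c) for xs in zip(*c))
        reps.append(min(c, key=lambda v: math.dist(v, centroid)))
    return reps

# a toy nondominated set with two tight pairs and one outlier
N = [(1.0, 9.0), (1.2, 8.8), (5.0, 5.0), (5.1, 4.9), (9.0, 1.0)]
print(prune(N, 3))  # one representative vector from each of the three clusters
```

Note how the outlier (9.0, 1.0) survives as its own cluster rather than being absorbed; obscuring such residue vectors is one of the two failure modes the paper warns against.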

[1] J. Hartigan, Direct Clustering of a Data Matrix, 1972.

[2] J. Carmichael et al., Finding Natural Clusters, 1968.

[3] R. E. Steuer, A Five Phase Procedure for Implementing a Vector-Maximum Algorithm for Multiple Objective Linear Programming Problems, 1976.

[4] M. Zeleny, Linear Multiobjective Programming, 1974.

[5] J. N. Morse et al., A Theory of Naive Weights, 1978.

[6] R. E. Steuer et al., A Revised Simplex Method for Linear Multiple Objective Programs, Math. Program., 1973.

[7] E. J. Bijnen, Cluster Analysis: Survey and Evaluation of Techniques, 1973.

[8] R. E. Steuer et al., Intra-Set Point Generation and Filtering in Decision and Criterion Space, Comput. Oper. Res., 1980.

[9] A. A. Törn, A Sampling-Search-Clustering Approach for Exploring the Feasible/Efficient Solutions of MCDM Problems, Comput. Oper. Res., 1980.

[10] R. Hogarth et al., Unit Weighting Schemes for Decision Making, 1975.

[11] J. A. Hartigan, Clustering Algorithms, 1975.

[12] R. E. Steuer et al., Linear Multiple-Objective Programming: Theory and Computational Experience, 1973.

[13] P. Yu et al., The Set of All Nondominated Solutions in Linear Cases and a Multicriteria Simplex Method, 1975.

[14] D. W. Bunn, Multiple Criteria Problem Solving, 1979.

[15] R. Sokal et al., Principles of Numerical Taxonomy, 1965.

[16] M. R. Anderberg, Cluster Analysis for Applications, 1973.

[17] M. Rao, Cluster Analysis and Mathematical Programming, 1971.

[18] P. H. A. Sneath et al., Numerical Taxonomy: The Principles and Practice of Numerical Classification, 1973.