This paper describes a new method for solving a general linear programming problem. The method permits movement through the feasible and/or the infeasible region of the given problem in the search for the optimal solution. The initial point may thus be either feasible or infeasible, and is therefore invariably available from the problem itself; this obviates the need to start with a feasible point, as is prerequisite with some methods, in particular the simplex method of linear programming. The concepts of Euclidean distance and angle are used to derive the distances of all the bounding hyperplanes from the origin in the increasing direction of the normal to the objective (z) hyperplane, with a view to selecting as the pivotal row the particular bounding hyperplane (discussed later) that is nearest to the origin. Once the pivotal row has been chosen, it is a trivial matter to select the appropriate pivotal column (the axis) in each iteration. The pivotal element so obtained is used to perform the usual Gaussian elimination transformations. The method is iterative, each iteration performing the sequence of steps detailed in the algorithm given in Section 5. Three examples are worked out in Section 6 to explain, as well as illustrate, the power of the method.
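A minimal sketch of the pivot-selection idea described above, under stated assumptions: constraints are taken in the form a_i · x ≤ b_i, and "nearest to the origin in the increasing direction of the normal to the objective (z) hyperplane" is interpreted as the smallest positive step t_i = b_i / (a_i · c/‖c‖) along the ray from the origin in the direction of the objective normal c. The function names and the tableau layout are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def nearest_hyperplane_row(A, b, c):
    """Pick the pivotal row: the bounding hyperplane a_i . x = b_i that the
    ray from the origin along the objective normal c reaches first.
    (Assumed interpretation of the abstract's distance criterion.)"""
    d = c / np.linalg.norm(c)        # unit normal to the objective (z) hyperplane
    proj = A @ d                     # a_i . d for each bounding hyperplane
    t = np.full(len(b), np.inf)
    reachable = proj > 1e-12         # rows the ray can actually meet
    t[reachable] = b[reachable] / proj[reachable]
    t[t <= 0] = np.inf               # keep only hyperplanes in the increasing direction
    return int(np.argmin(t))

def gauss_pivot(T, r, s):
    """Usual Gaussian-elimination transformation of tableau T on pivot T[r, s]:
    normalize the pivotal row, then eliminate column s from every other row."""
    T = T.astype(float).copy()
    T[r] /= T[r, s]
    for i in range(T.shape[0]):
        if i != r:
            T[i] -= T[i, s] * T[r]
    return T

# Hypothetical two-variable instance: maximize x1 + x2 subject to
# x1 + 2*x2 <= 4 and 3*x1 + x2 <= 6.
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0])
r = nearest_hyperplane_row(A, b, c)   # row 0: its hyperplane lies nearer the origin
T = gauss_pivot(np.hstack([A, b[:, None]]), r, 0)
```

Once the pivotal row r and a pivotal column (axis) are fixed, each iteration reduces to the single `gauss_pivot` call, which is what keeps the per-iteration work elementary regardless of whether the current point is feasible or infeasible.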