An algorithm is a method for solving a class of problems on a computer. The complexity of an algorithm
is the cost, measured in running time, or storage, or whatever units are relevant, of using the algorithm to
solve one of those problems.
This book is about algorithms and complexity, and so it is about methods for solving problems on
computers and the costs (usually the running time) of using those methods.
Computing takes time. Some problems take a very long time; others can be done quickly. Some problems
seem to take a long time, and then someone discovers a faster way to do them (a ‘faster algorithm’). The
study of the amount of computational effort that is needed in order to perform certain kinds of computations
is the study of computational complexity.
Naturally, we would expect that a computing problem for which millions of bits of input data are
required would probably take longer than another problem that needs only a few items of input. So the time
complexity of a calculation is measured by expressing the running time of the calculation as a function of
some measure of the amount of data that is needed to describe the problem to the computer.
For instance, think about this statement: ‘I just bought a matrix inversion program, and it can invert
an n × n matrix in just 1.2n^3 minutes.’ We see here a typical description of the complexity of a certain
algorithm. The running time of the program is being given as a function of the size of the input matrix.
A faster program for the same job might run in 0.8n^3 minutes for an n × n matrix. If someone were
to make a really important discovery (see section 2.4), then maybe we could actually lower the exponent,
instead of merely shaving the multiplicative constant. Thus, a program that would invert an n × n matrix
in only 7n^2.8 minutes would represent a striking improvement of the state of the art.
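As a rough sanity check on why such an exponent reduction matters, here is a small sketch (a hypothetical comparison using the constants 1.2 and 7 from the discussion above) that finds the matrix size beyond which the n^2.8 program overtakes the n^3 one:

```python
# Hypothetical running times from the discussion above, in minutes.
def cubic(n):
    return 1.2 * n**3

def lower_exponent(n):
    return 7.0 * n**2.8

# Find the smallest n at which the lower-exponent program is faster.
n = 1
while lower_exponent(n) >= cubic(n):
    n += 1
print(n)  # the crossover size: several thousand
```

Even with a smaller exponent, the larger constant means the improvement only pays off once n is in the thousands; for large enough n, however, the lower exponent always wins.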
For the purposes of this book, a computation that is guaranteed to take at most cn^3 time for input of
size n will be thought of as an ‘easy’ computation. One that needs at most n^10 time is also easy. If a certain
calculation on an n × n matrix were to require 2^n minutes, then that would be a ‘hard’ problem. Naturally
some of the computations that we are calling ‘easy’ may take a very long time to run, but still, from our
present point of view the important distinction to maintain will be the polynomial time guarantee or lack of
it.
The general rule is that if the running time is at most a polynomial function of the amount of input
data, then the calculation is an easy one, otherwise it’s hard.
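To make that dividing line concrete, the following sketch (our own illustration, not from the text) finds the point past which the exponential 2^n permanently overtakes even the steep polynomial n^10:

```python
# For small n the polynomial n**10 dominates 2**n, but the
# exponential eventually overtakes it and stays ahead forever.
n = 2  # skip n = 1, where 2**n happens to exceed n**10 already
while 2**n <= n**10:
    n += 1
print(n)  # prints 59: from here on, 2**n > n**10
```

So a running time of n^10, although enormous, is still eventually dwarfed by 2^n; that is the sense in which every polynomial bound is ‘easy’ and every exponential one is ‘hard.’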
Many problems in computer science are known to be easy. To convince someone that a problem is easy,
it is enough to describe a fast method for solving that problem. To convince someone that a problem is
hard is hard, because you will have to prove to them that it is impossible to find a fast way of doing the
calculation. It will not be enough to point to a particular algorithm and to lament its slowness. After all,
that algorithm may be slow, but maybe there’s a faster way.
Matrix inversion is easy. The familiar Gaussian elimination method can invert an n × n matrix in time
at most cn^3.
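The algorithm itself is not presented in this chapter, but to make that bound concrete, here is a minimal Gauss-Jordan elimination sketch in Python (the function name and the 1e-12 singularity tolerance are our own choices); the three nested levels of iteration over n are exactly what produce the cn^3 operation count:

```python
def invert(a):
    """Invert an n x n matrix by Gauss-Jordan elimination on [A | I]."""
    n = len(a)
    # Augment A with the identity matrix to form [A | I].
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):                      # n pivot steps...
        # Partial pivoting: bring up the row with the largest entry.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:      # tolerance chosen for illustration
            raise ValueError("matrix is singular")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):                    # ...each touching n rows...
            if r != col:
                f = aug[r][col]
                # ...of 2n entries each: hence ~c * n**3 operations in all.
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    # The right half of the augmented matrix is now the inverse of A.
    return [row[n:] for row in aug]

inv = invert([[2.0, 1.0], [1.0, 1.0]])  # inverse is [[1, -1], [-1, 2]]
```

In practice one would call a library routine (and usually solve a linear system rather than form the inverse explicitly), but the triple loop above is the entire source of the n^3 running time.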
To give an example of a hard computational problem we have to go far afield. One interesting one is
called the ‘tiling problem.’ Suppose we are given infinitely many identical floor tiles, each shaped like a
regular hexagon. Then we can tile the whole plane with them, i.e., we can cover the plane with no empty
spaces left over. This can also be done if the tiles are identical rectangles, but not if they are regular
pentagons.
In Fig. 0.1 we show a tiling of the plane by identical rectangles, and in Fig. 0.2 is a tiling by regular
hexagons.
That raises a number of theoretical and computational questions. One computational question is this.
Suppose we are given a certain polygon, not necessarily regular and not necessarily convex, and suppose we
have infinitely many identical tiles in that shape. Can we or can we not succeed in tiling the whole plane?
That elegant question has been proved to be computationally unsolvable. In other words, not only do
we not know of any fast way to solve that problem on a computer, it has been proved that there isn’t any.