Old and new concentration inequalities

In the study of random graphs, or of any randomly chosen objects, the "tools of the trade" mainly concern various concentration inequalities and martingale inequalities. Suppose we wish to predict the outcome of a problem of interest. One reasonable guess is the expected value of the object. However, how can we tell how close the expected value is to the actual outcome? Such a prediction is far more useful when it is accompanied by a guarantee of its accuracy (within a certain error estimate, for example). This is exactly the role that concentration inequalities play. In fact, an analysis can easily go astray without the rigorous control that concentration inequalities provide.

In our study of random power law graphs, the usual concentration inequalities are simply not enough. The reasons are multi-fold: because of the uneven degree distribution, the error bounds for the very large degrees offset the delicate analysis needed in the sparse part of the graph. Furthermore, our graph is dynamically evolving, and therefore the probability space changes at each tick of the clock. The problems arising in the analysis of random power law graphs provide impetus for improving our technical tools. Indeed, in the course of our study of general random graphs, we need several strengthened versions of concentration inequalities and martingale inequalities. They are interesting in their own right and are useful for many other problems as well. In the next several sections, we state and prove a number of variations and generalizations of concentration inequalities and martingale inequalities. Many of these will be used in later chapters.

2.1. The binomial distribution and its asymptotic behavior

Bernoulli trials, named after James Bernoulli, can be thought of as a sequence of coin flips. For some fixed value p, where 0 ≤ p ≤ 1, each toss has probability p of turning up "heads". Let S_n denote the number of heads after n tosses.
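As a concrete illustration of the setup, the coin-flipping process can be simulated directly; the sketch below (the function name and fixed seed are our own choices, not part of the text) draws n independent Bernoulli(p) outcomes and counts the heads, so that repeated runs show S_n clustering near its mean np — the phenomenon the concentration inequalities of this chapter quantify.

```python
import random

def bernoulli_trials(n, p, seed=0):
    """Simulate n independent coin flips, each with head probability p.

    Returns S_n, the total number of heads observed.
    """
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if rng.random() < p)

n, p = 10_000, 0.3
s_n = bernoulli_trials(n, p)
# The mean of S_n is np = 3000 and its standard deviation is
# sqrt(np(1-p)) ≈ 45.8, so a typical run lands within a few
# multiples of 45.8 of 3000.
print(s_n)
```

Running this for several seeds gives values of S_n that rarely stray more than a few standard deviations from np, which is exactly the qualitative behavior that the inequalities below make precise.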
We can write S_n as a sum of independent random variables X_i as follows: