In All Likelihood: Statistical Modelling and Inference Using Likelihood

As the title indicates, this book discusses using the likelihood function for both modeling and inference. It is written as a textbook with a fair number of examples, and the author conveniently provides code in the statistical package R for all relevant examples on his website. He assumes prerequisites that would typically be covered in the first year of a master's degree in statistics (or possibly in a solid undergraduate program in statistics): a good background in probability and theory of statistics, familiarity with applied statistics (such as tests of hypotheses, confidence intervals, least squares, and p-values), and calculus.

The author presents interesting philosophical discussions in Chapters 1 and 7. In Chapter 1 he explains the differences between the Bayesian and frequentist approaches to statistical inference. He states that the likelihood approach is a compromise between the two and could be called a Fisherian approach, arguing that it is non-Bayesian yet has Bayesian aspects, and that it has frequentist features but also some nonfrequentist aspects. He references Fisher throughout the book. In Chapter 7 he discusses the controversial informal likelihood principle, "two datasets (regardless of experimental source) with the same likelihood should lead to the same conclusions." It is hard to be convinced that how data were collected does not affect conclusions (a short illustration of the principle is sketched at the end of this review).

Chapters 2 and 3 provide definitions and properties of likelihood functions. Some advanced technical topics are addressed in Chapters 8, 9, and 12, including the score function, Fisher information, minimum variance unbiased estimation, consistency of maximum likelihood estimators, goodness-of-fit tests, and the EM algorithm.

Six chapters deal with modeling. Chapter 4 presents the basic models, binomial and Poisson, with some applications. Chapter 6 focuses on regression models, including normal linear, logistic, Poisson, nonnormal, and exponential-family models, and deals with the related issues of deviance, iteratively weighted least squares, and the Box–Cox transformations. Chapter 11 covers models with complex data structure, including models for time series data, models for survival data, and some specialized Poisson models. Chapter 14 examines quasi-likelihood models, Chapter 17 covers random and mixed effects models, and Chapter 18 introduces nonparametric smoothing.

The remaining chapters put more emphasis on inference. Chapter 5 deals with frequentist properties, including bias of point estimates, p-values, confidence intervals, confidence intervals via bootstrapping, and exact inference for the binomial and Poisson models. Chapter 10 handles nuisance parameters using marginal and conditional likelihood, modified profile likelihood, and estimated likelihood methods. Chapter 13 covers the robustness of a specified likelihood. Chapter 15 introduces empirical likelihood concepts, and Chapter 16 addresses random parameters.

This book works fine as a textbook, providing a nice introduction to a variety of topics. For engineers, it can also serve as a good initial exposure to possibly new concepts without overwhelming them with details. But when applying a specific topic covered in this book to real problems, a more specialized book with greater depth or more practical examples may be desired.
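To make the informal likelihood principle concrete, here is a minimal sketch in R (the package the book uses, though this particular example is not taken from the book) of the classic binomial versus negative binomial comparison. With 9 successes and 3 failures, the likelihood for the success probability p is proportional to p^9 (1 - p)^3 whether the number of trials was fixed at 12 (binomial sampling) or sampling continued until the third failure (negative binomial sampling), so purely likelihood-based conclusions coincide even though frequentist p-values for a hypothesis such as p = 0.5 can differ between the two designs.

# Classic likelihood-principle illustration: 9 successes, 3 failures.
# Binomial sampling: the number of trials is fixed at 12.
# Negative binomial sampling: trials continue until the 3rd failure;
# dnbinom() counts failures before the size-th "success", so here the
# experiment's failures play the role of dnbinom's successes and the
# prob argument is 1 - p.
p <- seq(0.01, 0.99, by = 0.01)
lik_binom  <- dbinom(9, size = 12, prob = p)      # proportional to p^9 (1 - p)^3
lik_negbin <- dnbinom(9, size = 3, prob = 1 - p)  # also proportional to p^9 (1 - p)^3

# Rescaled to a maximum of 1, the two likelihood curves coincide,
# both maximized at p = 9/12 = 0.75.
plot(p, lik_binom / max(lik_binom), type = "l",
     xlab = "p", ylab = "relative likelihood")
lines(p, lik_negbin / max(lik_negbin), lty = 2)

After rescaling, the two curves overlap exactly; whether that equivalence should carry over to all inferential conclusions is precisely what the principle asserts and what the preceding paragraph questions.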
