Multiple Regression and the Analysis of Variance and Covariance

First Person: "I need some new shelves in my study." Second Person: "I'll send a plumber along." That piece of conversation would be regarded as merely comic, but in all seriousness people in need of a book on statistics seem to think they can go to a psychologist, as with this book (or to a physicist, or a geographer). Why is this? Why do scientists decline the statistician's expertise: is it just a craze for DIY? It can hardly be that: statisticians do not write on psychology (or botany, or medicine), preferring to read what the experts say. The reason usually given is that statisticians do not write in the way psychologists (or chemists, or economists) appreciate. This would be understandable if the resulting statistics book had a discussion of the use of statistical methods in the field, but in the one under review there are no concessions to psychology except for occasional references to matters like "scores on a mid-term examination", though most of the time it is just "treatments and levels". This is entirely a book on mathematical statistics. There is no talk of the value of statistics in helping psychological understanding.

A second reason often given is that statisticians make the mathematics too hard. This is nonsense, at least for the better books like Snedecor & Cochran. The present book has plenty of mathematics, and pretty awful stuff it is. One of the things that annoys me about psychologists (or biologists, or sociologists) is that their mathematics is usually so clumsy. Edwards trundles along with his laboured equations, missing all beauty and simplicity. Thus he wants to show that $\sum(Y - c)^2$ has a minimum at $c = \bar{Y}$. (He omits suffixes for replicates.) This is relegated to a footnote because it needs forbidden calculus. But $\sum(Y - c)^2 = \sum(Y - \bar{Y} + \bar{Y} - c)^2 = \sum(Y - \bar{Y})^2 + n(\bar{Y} - c)^2$, since $\sum(Y - \bar{Y}) = 0$, and the minimum is at $c = \bar{Y}$. This would also give him the first glimpse of orthogonality and the loss of a degree of freedom.

This book is based on the following idea. First introduce the student to multiple regression of $Y$ on $X_1, X_2, \ldots, X_m$; then pass to analysis of variance using dummy variables as the $X$'s. Thus for two groups, the analysis of variance of $Y$ can be interpreted as multiple regression with $X_1 = 0$ and $1$ in the two groups. This is surely a clumsy method in comparison with a study of the linear model, with its special cases of regression and treatment comparisons. Psychologists love correlations, and there is a plethora of these, including semipartials. All that are needed are the sums of squares.

The writing is not good. Orthogonal variables are defined (p 15) but not orthogonal components (p 17). On p 67, "assign n subjects at random to each of two treatments" should have "either" in place of "each". What does $\alpha\beta_{ij}$ mean (p 146)? Not, apparently, the product of $\alpha_i$ and $\beta_j$.

There is one merit in the book. The many examples are simple and the arithmetic is well explained. Students are encouraged to calculate before they pass to the computer, which mysteriously produces F or t. This is sound, because thereby the student will appreciate the reasoning. But otherwise there is little joy to be found. Know of a good carpenter?
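
As an aside on the dummy-variable idea described above, the following is a minimal numerical sketch of the equivalence: for two groups, regressing Y on a 0/1 dummy variable reproduces the one-way analysis-of-variance F. The data, group size, and variable names are invented for illustration and are not taken from the book; only NumPy is assumed.

```python
import numpy as np

# Hypothetical data: Y scores for two treatment groups of size n each.
rng = np.random.default_rng(0)
n = 10
y1 = rng.normal(50.0, 5.0, n)   # group 1 scores
y2 = rng.normal(55.0, 5.0, n)   # group 2 scores
y = np.concatenate([y1, y2])

# One-way ANOVA for two groups: F = MS_between / MS_within,
# with 1 and 2n - 2 degrees of freedom.
grand = y.mean()
ss_between = n * ((y1.mean() - grand) ** 2 + (y2.mean() - grand) ** 2)
ss_within = ((y1 - y1.mean()) ** 2).sum() + ((y2 - y2.mean()) ** 2).sum()
f_anova = (ss_between / 1) / (ss_within / (2 * n - 2))

# The same analysis as a regression of Y on a dummy variable X,
# coded 0 for group 1 and 1 for group 2.
x = np.concatenate([np.zeros(n), np.ones(n)])
X = np.column_stack([np.ones(2 * n), x])       # intercept plus dummy
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit
resid = y - X @ beta
ss_res = (resid ** 2).sum()
ss_total = ((y - grand) ** 2).sum()
ss_reg = ss_total - ss_res
f_reg = (ss_reg / 1) / (ss_res / (2 * n - 2))

print(f_anova, f_reg)   # the two F statistics agree
```

For this coding the regression sum of squares equals the between-groups sum of squares and the residual sum of squares equals the within-groups sum of squares, which is just the linear-model view of treatment comparisons that the reviewer prefers.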