
points, decisions are made based on results for the study endpoints. In clinical trials, the decision is usually whether to stop the trial because the efficacy and safety of the treatment can already be confirmed, because the safety risks are too great, or because the treatment is very unlikely to achieve its therapeutic goal (called stopping for futility). Rules for stopping the trial are made prior to collecting any data. Such rules, called stopping rules, are typically formally defined in a protocol that is completed and approved before the start of the trial. Adaptive procedures add the following possible decisions at the interim analyses: (1) the addition or deletion of trial arms in a multiple-armed clinical trial, (2) an increase or decrease in the total sample size at the end of the study (based on interim estimates of variability and/or other assumed parameters, e.g., effect size), and (3) other changes to the design (such as changes to the inclusion/exclusion criteria for the study subjects).

Statisticians in the pharmaceutical and medical device industries, as well as at the National Institutes of Health (NIH) and other medical research institutes, will find this book invaluable. Given that most readers of Technometrics are statisticians and practitioners in the physical, chemical, or engineering sciences, they may not find it as immediately applicable as a biostatistician would.

Prior to the development of group sequential procedures there were sequential procedures. Sequential procedures are just like group sequential procedures except that an interim analysis occurs after each newly observed data point. These sequential methods were developed (both theory and applications) by Abraham Wald in the United States and George Barnard in the United Kingdom in the 1940s as part of the war effort during World War II. The motivating application was reliability testing of military products such as ammunition.
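To make Wald's idea concrete, here is a minimal sketch (my own illustration, not from the book under review) of his sequential probability ratio test applied to a stream of pass/fail reliability outcomes, as in the ammunition-testing setting. The hypothesized defect rates p0 and p1 and the error rates alpha and beta are illustrative choices.

```python
import math

def sprt_bernoulli(observations, p0=0.05, p1=0.15, alpha=0.05, beta=0.10):
    """Wald's SPRT for H0: p = p0 vs. H1: p = p1 on a stream of 0/1
    outcomes (1 = defective). Returns the decision and the number of
    observations actually used."""
    upper = math.log((1 - beta) / alpha)   # crossing above -> accept H1
    lower = math.log(beta / (1 - alpha))   # crossing below -> accept H0
    llr = 0.0                              # accumulated log-likelihood ratio
    for n, x in enumerate(observations, start=1):
        # per-observation contribution log f1(x)/f0(x)
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(observations)
```

The appeal for destructive testing is visible in the early-stopping behavior: a run of conforming items crosses the lower boundary and ends the test long before a fixed-sample plan would, so fewer units are destroyed.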
There was a desire to determine that the ammunition was safe and reliable without wasting a lot of ammunition in testing. The same reasoning applies to any product that requires destructive testing to determine its reliability and is expensive or time consuming to produce. After the war, practical application was hindered by the difficulty of making a real-time decision after every sample. Group sequential methods made the whole idea of sequential testing or monitoring much more useful. The reliability applications may be of interest to the general Technometrics reader, but this book and the text by Jennison and Turnbull (2000) include only clinical trial applications.

The authors of the text under review are among the top researchers in the field, and this text by Proschan, Lan, and Wittes is very well written and provides thorough and nearly complete coverage of the latest developments in group sequential methods. It also contains a chapter on adaptive sample size methods (Chap. 11). These methods are a subset of the adaptive procedures and include Stein's method and others for constructing two-stage designs to deal with nuisance parameters. Among other sample size adjustment methods, the authors include adjusting the sample size based on an interim assessment of the effect size. The few topics in group sequential methods that are not covered in detail are outlined in Chapter 12 (titled "Topics Not Covered").

The text by Jennison and Turnbull (2000) was the first major text on group sequential methods and is considered by many to be a classic on the subject. Both Jennison and Turnbull are well-known statisticians who have published widely in the statistics and biostatistics literature. The two texts cover mostly the same topics, are both very current, and both give examples from clinical trials.
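The mention of Stein's method for handling a nuisance parameter can also be illustrated. The sketch below assumes the classical two-stage formulation (Stein, 1945): a first-stage sample estimates the unknown variance, and the total sample size is then chosen so that a t-interval for the mean attains a prespecified half-width. The function name and the hardcoded t critical value are my own illustrative choices, not the book's notation.

```python
import math
import statistics

def stein_total_n(stage1, d, t_crit):
    """Stein's two-stage design for estimating a normal mean to within
    half-width d, with variance unknown. `stage1` is the first-stage
    sample; `t_crit` is the t critical value on len(stage1) - 1 degrees
    of freedom (e.g., 2.262 for 9 df, two-sided 95%). Returns the total
    sample size over both stages."""
    n1 = len(stage1)
    s2 = statistics.variance(stage1)              # stage-1 sample variance
    n_needed = math.ceil(t_crit ** 2 * s2 / d ** 2)
    return max(n1 + 1, n_needed)                  # take at least one more obs
```

The key point, which carries over to the adaptive designs discussed in Chapter 11, is that the final sample size is data-driven: a noisy first stage (large s2) or a tight precision target (small d) inflates the second stage, while a clean first stage lets the study stop near its minimum size.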
So a reader like me, who already owns a copy of Jennison and Turnbull, might ask: what is the added value of purchasing Proschan, Lan, and Wittes? I would give the following reasons: