[What to do if statistical power is low? A practical strategy for pre-post designs].

This article deals with the issue of statistical validity when evaluating interventions. The most common study design, with two groups and two points of measurement, is discussed. In clinical research settings, statistical validity is often unsatisfactory because sample sizes are small. To address this problem, a strategy based on an approach by Hager is proposed that systematically takes both significance testing and effect sizes into account. Using an example from clinical research practice, the problem of low statistical power is introduced and methods to increase the power of tests are discussed. Within this framework, Erdfelder's compromise power analysis (computing the alpha level from a predetermined beta/alpha error ratio) is central, as are reducing the number of tests through data reduction and improving the detection of potential effects through methods that reduce error variance. The results show that significance tests should not be used when both sample sizes and effect sizes are small; in these cases, alternative approaches should be used.
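
The core of compromise power analysis can be illustrated with a short sketch. The following Python code (a minimal illustration, not the authors' implementation: the helper name compromise_alpha, the choice of a one-tailed two-sample t-test, and the scipy-based root search are assumptions) finds the critical value at which the implied error probabilities satisfy a preset beta/alpha ratio q, then reports the resulting alpha and beta:

```python
import numpy as np
from scipy import stats, optimize

def compromise_alpha(d, n1, n2, q=1.0):
    """Compromise power analysis (in the spirit of Erdfelder) for a
    one-tailed two-sample t-test: find the critical value whose implied
    error probabilities satisfy beta/alpha == q; return (alpha, beta).
    """
    df = n1 + n2 - 2
    nc = d * np.sqrt(n1 * n2 / (n1 + n2))  # noncentrality under H1

    def ratio_gap(crit):
        alpha = stats.t.sf(crit, df)        # P(reject | H0 true)
        beta = stats.nct.cdf(crit, df, nc)  # P(retain | H1 true)
        return beta - q * alpha             # zero when beta/alpha == q

    crit = optimize.brentq(ratio_gap, 0.0, 10.0)
    return stats.t.sf(crit, df), stats.nct.cdf(crit, df, nc)

# Hypothetical example: medium effect (d = 0.5), 15 patients per group,
# and beta/alpha = 1, i.e. both error types weighted equally.
alpha, beta = compromise_alpha(d=0.5, n1=15, n2=15, q=1.0)
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
```

With q = 1, the two error probabilities are balanced rather than fixing alpha at a conventional level; with small samples this typically yields an alpha well above .05, making explicit the error trade-off that a conventional test would hide.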