Accuracy of Long Momentary Time-Sampling Intervals: Effects of Errors in the Timing of Observations

The effects of random variation in the timing of observation samples on the accuracy of momentary time-sampling (MTS) estimates were assessed. Simulated continuous interval recording records served as standards against which to assess percent occurrence estimates derived from accurately timed (Fixed Interval) and inaccurately timed (Random Interval) simulated observation samples, using MTS interval lengths of 30 seconds and 5, 10, and 20 minutes. The standards were constructed to simulate several levels of behavior occurrence (i.e., occurrence during 20%, 40%, 60%, or 80% of the observation intervals). Each Random Interval MTS percent occurrence estimate was compared to the corresponding Fixed Interval estimate. The data revealed that random variance (i.e., human error) in the timing of MTS observations is unlikely to substantially reduce the accuracy of MTS estimates.
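The simulation procedure described above can be sketched in code. The sketch below is a minimal illustration, not the authors' actual implementation: the bout-based behavior generator, the 1-second record resolution, and the ±30-second timing error are all assumptions made for the example. It builds a continuous record as the standard, then compares a Fixed Interval MTS estimate (samples at exact interval endpoints) with a Random Interval estimate (samples displaced by a uniform random timing error).

```python
import random

def simulate_behavior(duration_s=3600, target_occurrence=0.40, bout_len_s=60):
    """Build a 1-second-resolution continuous record in which the behavior
    occurs in bouts covering roughly `target_occurrence` of the session.
    (Hypothetical generator; the paper's construction details differ.)"""
    record = [False] * duration_s
    t = 0
    while t < duration_s:
        on = random.random() < target_occurrence  # each bout is on or off
        end = min(t + bout_len_s, duration_s)
        for i in range(t, end):
            record[i] = on
        t = end
    return record

def mts_estimate(record, interval_s, jitter_s=0):
    """Percent-occurrence estimate from momentary samples taken every
    `interval_s` seconds, each displaced by a uniform random error of up
    to +/- `jitter_s` seconds. jitter_s=0 gives the Fixed Interval
    condition; jitter_s > 0 gives the Random Interval condition."""
    samples = []
    t = interval_s
    while t <= len(record):
        obs = t - 1 + random.randint(-jitter_s, jitter_s)
        obs = max(0, min(len(record) - 1, obs))  # clamp to session bounds
        samples.append(record[obs])
        t += interval_s
    return 100.0 * sum(samples) / len(samples)

random.seed(1)
rec = simulate_behavior()
fixed = mts_estimate(rec, interval_s=300)                   # 5-min MTS, accurate timing
jittered = mts_estimate(rec, interval_s=300, jitter_s=30)   # 5-min MTS, +/-30 s error
```

Comparing `fixed` and `jittered` across many simulated sessions and occurrence levels mirrors the paper's design: if timing error matters little, the two estimates should track each other closely.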