If error rate is such a simple concept, why don't I have one for my forensic tool yet?
The Daubert decision motivates attempts to establish error rates for digital forensic tools. Many scientific procedures have been devised to answer simple questions. For example, does a soil sample contain component X? A procedure can be followed that gives an answer with known rates of error. Usually the error rate of a process that tries to detect something is associated with a random component of some measurement. Typically there are two types of error: type I, also called a false positive (detecting something that is not really there), and type II, also called a false negative (missing something that really is there). At first glance, an error rate for a forensic acquisition tool or a write-blocking tool seems a simple concept. An obvious candidate for the error rate of an acquisition is k/n, where n is the total number of bits acquired and k is the number of incorrectly acquired bits. However, the kinds of errors in the soil test and in a digital acquisition are fundamentally different. The errors in the soil test can be modeled with a random distribution and treated statistically, whereas the errors that occur in a digital acquisition are systematic and triggered by specific conditions. The purpose of this paper is not to define error rates for forensic tools, but to identify some of the basic issues and thereby stimulate discussion and further work on the topic.
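As a minimal sketch of the k/n measure discussed above, the following Python fragment compares an acquired image against a known-good reference and counts differing bits. The file names and the byte-wise XOR/popcount approach are illustrative assumptions, not a procedure from the paper.

```python
# Illustrative sketch: naive bit-level "error rate" k/n for an acquisition,
# where n is the total number of bits in the reference data and k is the
# number of bits that differ in the acquired copy.
# File names below are hypothetical.

def bit_error_rate(reference: bytes, acquired: bytes) -> float:
    """Return k/n: differing bits over total bits in the reference."""
    if len(reference) != len(acquired):
        raise ValueError("acquired data length differs from reference length")
    n = len(reference) * 8                       # total bits acquired
    k = sum(bin(r ^ a).count("1")                # differing bits per byte pair
            for r, a in zip(reference, acquired))
    return k / n

if __name__ == "__main__":
    with open("reference.dd", "rb") as f:        # hypothetical known-good image
        ref = f.read()
    with open("acquired.dd", "rb") as f:         # hypothetical tool output
        acq = f.read()
    print(f"bit error rate k/n = {bit_error_rate(ref, acq):.3e}")
```

Note that, per the argument in the abstract, a k/n value measured on one test drive does not generalize the way a statistically modeled error rate does: acquisition errors are systematic, so the figure reflects the specific conditions (e.g., faulty sectors, interface behavior) present in that test rather than a random error process.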