
Limit of Detection

This applet provides an interactive demonstration of some of the principles involved in the concept of a limit of detection. This is a complicated subject, with different interpretations in different application fields; the purpose of this graphic is to permit the illustration and exploration of some basic ideas in a simple detection problem.

Consider a measurement, subject to measurement uncertainty ("error"), that is made in the presence of an irreducible "background" (or "noise"), which is also random in nature. If the background were constant, we could just subtract it, but a major part of the detection problem is that the background fluctuates around some mean level; this mean level (average) is assumed to be constant. As usual, we assume that both the background and the measurement (signal plus background) are Normally distributed. This is just for mathematical convenience, since these PDFs could in principle have any shape (nuclear counting data, for example, are Poisson distributed).

One way to define the LLD (lower limit of detection) is to say that it is the smallest amount of a "signal" that can be reliably detected. Detection is an example of a hypothesis test, and there are two kinds of error we can make. A Type I error (false detection, or false positive) is claiming that a signal is present when in fact it is not. A Type II error (failure to detect, or false negative) is claiming that a signal is not present when in fact it is. The keyword "reliably" means keeping these error rates at reasonable levels; what "reasonable" means will vary with the specific application. By convention the Type I error rate (denoted by alpha) is usually set at 0.05, the complement of which is the famous "95% level". In some definitions of the LLD, the Type II rate (beta) is also set to 0.05. There is no particular justification for this number; it has simply become accepted over the years.
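The two error rates can be sketched numerically as tail areas of the two Normal PDFs. The sigma values and threshold below are illustrative assumptions for this sketch, not values taken from the applet:

```python
import math

def normal_cdf(x, mu, sigma):
    """Cumulative distribution function of a Normal(mu, sigma)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Background fluctuates around a mean of 10 units (as in the applet);
# the sigma values are arbitrary choices for illustration.
bg_mean, bg_sigma = 10.0, 1.0

# Type I error (false positive): background alone exceeds the threshold.
threshold = 11.645              # roughly bg_mean + 1.645 * bg_sigma
alpha = 1.0 - normal_cdf(threshold, bg_mean, bg_sigma)

# Type II error (false negative): signal+background falls below the threshold.
sig_mean, sig_sigma = 13.3, 1.0
beta = normal_cdf(threshold, sig_mean, sig_sigma)

print(round(alpha, 3), round(beta, 3))   # both near the conventional 0.05
```

Raising the threshold trades one error for the other: alpha falls while beta rises, which is why both rates must be set deliberately rather than by accident.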
These error rates (and the "operating characteristic" curve, which shows the probability of detection as a function of signal strength) should, for any form of hypothesis test, be chosen with careful consideration of the consequences of making these errors. Often this is a policy question, not a scientific or mathematical one.

What happens in practice is that we characterize the background and then define a "threshold" level based on that background. Once a measurement has been made (the "a posteriori" situation), there are four possibilities. The measurement (of signal plus background) might be above the threshold, so we declare the signal to be present; in truth it might be present, or it might not, since an above-threshold measurement could just be an unusually large sample from the background distribution. Or the measurement could fall below the threshold, so we declare the signal to be absent; again, in fact it could be present or not, since the measurement might be an unusually small sample from the signal-plus-background distribution.

Here is the procedure for setting up an LLD problem in this applet:
(1) The background mean is fixed at 10 units, for graphing convenience; adjust the width (sigma) of the background PDF as desired.
(2) Adjust the "threshold" slider at the upper right so that the alpha probability is 0.05 (use the arrow keys for finer control).
(3) Adjust the signal distribution's width (sigma).
(4) Adjust the signal distribution's mean until the indicated beta level is also (nearly) 0.05.

When the alpha and beta levels are equal, read off the mean of the signal PDF; this is the minimum detectable level for the signal measurement. A display of the LLD is activated only when the alpha and beta levels are very nearly equal, since that is the conventional way of defining an LLD. (Note that the alpha and beta levels do not have to be 0.05, although this value is very commonly used.)
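The slider procedure above can also be solved in closed form for the Normal case: the threshold sits z standard deviations above the background mean, and the minimum detectable signal mean sits z signal-sigmas above the threshold, where z is the 1 - alpha quantile. A minimal sketch, with illustrative (assumed) sigma values:

```python
from statistics import NormalDist  # Python 3.8+

# Applet-style setup: background mean fixed at 10; alpha = beta = 0.05.
bg_mean = 10.0
bg_sigma = 1.0      # step (1): background width, illustrative value
sig_sigma = 1.2     # step (3): signal width, illustrative value
alpha = beta = 0.05

z = NormalDist().inv_cdf(1.0 - alpha)   # ~1.645 for alpha = 0.05

# Step (2): threshold leaves probability alpha in the background's upper tail.
threshold = bg_mean + z * bg_sigma

# Step (4): signal mean for which the lower tail below the threshold is beta.
signal_mean = threshold + z * sig_sigma

print(round(threshold, 3), round(signal_mean, 3))
```

With alpha = beta, the signal mean must therefore exceed the background mean by z * (bg_sigma + sig_sigma), which is what the applet's LLD display reports when the two tail areas match.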
Note that (1 - beta) is the probability of correctly concluding that a signal is present when in fact it is present; in statistical practice this is called the "power" of the test. Much, much more can be said about this problem; see any statistics, signal processing, or quality control text. This applet should help students to see the basic ideas of the detection / hypothesis-test problem.
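The power idea can be traced out numerically as the operating characteristic mentioned earlier: holding the threshold fixed, power rises from alpha (when no signal is present) toward 1 as the signal strengthens. The sigma values here are again illustrative assumptions:

```python
from statistics import NormalDist  # Python 3.8+

# Illustrative (assumed) values: background mean 10, unit sigmas.
bg_mean, bg_sigma, sig_sigma = 10.0, 1.0, 1.0
threshold = bg_mean + NormalDist().inv_cdf(0.95) * bg_sigma  # alpha = 0.05

# Power (1 - beta) as a function of signal strength: the operating
# characteristic rises from alpha (no signal) toward 1 (strong signal).
powers = {}
for sig_mean in (10.0, 11.0, 12.0, 13.0, 14.0):
    powers[sig_mean] = 1.0 - NormalDist(sig_mean, sig_sigma).cdf(threshold)
    print(sig_mean, round(powers[sig_mean], 3))
```

At a signal mean equal to the background mean the "power" is just alpha itself, since any detection there is a false positive.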