In a classic error model, the exposure metric used in an epidemiologic study is measured with error and is an imperfect surrogate for the true exposure. If misclassification of exposure is nondifferential with respect to the health outcome, the effect is generally to bias risk estimates toward the null, potentially masking true associations (Copeland et al. 1977; Flegal et al. 1986). To evaluate the degree of misclassification that may occur in an epidemiologic study, it is important to consider the sensitivity and specificity of the exposure metric employed. Sensitivity is the ability of an exposure metric to correctly classify as exposed those who are truly exposed; specificity is the ability of the metric to correctly classify as unexposed those who are truly unexposed. Most epidemiologists do not formally assess the validity of their exposure metric before a study is launched; however, even small reductions in the sensitivity and/or specificity of the exposure metric can have substantial effects on the estimates of risk. When the true prevalence of exposure is low (e.g., less than 10%), small reductions in specificity cause substantial attenuation of the risk estimates, whereas reductions in sensitivity have smaller effects. When exposure is common in the study population, the sensitivity of the exposure metric becomes more important (Stewart and Correa-Villasenor 1991).
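The arithmetic behind these prevalence effects can be made concrete. The sketch below is a minimal illustration, not from the source: it assumes nondifferential misclassification, so a single sensitivity (Se) and specificity (Sp) apply to cases and controls alike, and it uses the standard relation that the expected observed exposure prevalence in a group with true prevalence p is Se * p + (1 - Sp) * (1 - p). All function names and numerical values are hypothetical.

```python
# Minimal sketch (not from the source): expected attenuation of an odds
# ratio under nondifferential exposure misclassification. Parameter values
# below are illustrative only.

def observed_prevalence(true_prev, sensitivity, specificity):
    """Expected proportion classified as exposed, given true prevalence."""
    return sensitivity * true_prev + (1.0 - specificity) * (1.0 - true_prev)

def observed_odds_ratio(true_or, control_prev, sensitivity, specificity):
    """Expected odds ratio after nondifferential misclassification.

    Nondifferential means the same sensitivity and specificity apply
    to cases and controls.
    """
    # True exposure prevalence among cases implied by the true odds ratio.
    case_odds = true_or * control_prev / (1.0 - control_prev)
    case_prev = case_odds / (1.0 + case_odds)

    # Expected observed (misclassified) prevalences in each group.
    p_case = observed_prevalence(case_prev, sensitivity, specificity)
    p_ctrl = observed_prevalence(control_prev, sensitivity, specificity)

    return (p_case / (1.0 - p_case)) / (p_ctrl / (1.0 - p_ctrl))

if __name__ == "__main__":
    # Rare exposure: 5% of controls truly exposed, true OR = 2.0.
    base = dict(true_or=2.0, control_prev=0.05)
    print(observed_odds_ratio(sensitivity=1.0, specificity=1.0, **base))  # 2.00
    print(observed_odds_ratio(sensitivity=0.9, specificity=1.0, **base))  # ~1.99
    print(observed_odds_ratio(sensitivity=1.0, specificity=0.9, **base))  # ~1.34
```

Under these assumed values, dropping specificity from 1.00 to 0.90 pulls the expected observed odds ratio from 2.00 down to about 1.34, whereas the same drop in sensitivity leaves it near 1.99, consistent with the pattern described above for rare exposures.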