Summary. The local false discovery rate of Efron et al. (2001) enjoys major conceptual and decision-theoretic advantages over the false discovery rate as an error criterion in multiple testing, but it is well-defined only in Bayesian models where the truth status of each null hypothesis is random. We define a frequentist counterpart to the local false discovery rate based on the relative frequency of nulls at each point in the sample space. The frequentist local false discovery rate is defined without reference to any prior, but preserves several important properties of the Bayesian version. For continuous test statistics, evaluating the local false discovery rate at a test statistic value $t$ gives the probability, conditional on observing $t$, that the corresponding null hypothesis is true. Evaluating the local false discovery rate at an individual test statistic also yields a calibrated forecast of whether its null hypothesis is true. Finally, thresholding the local false discovery rate at $1/(1+\lambda)$ gives the best separable rejection rule under the weighted classification loss where Type I errors are $\lambda$ times as costly as Type II errors. Importantly, these properties hold without requiring the test statistics to be independent. The local false discovery rate can be estimated efficiently using parametric or nonparametric methods, and a closely related error criterion can be provably controlled in finite samples under an independence assumption. Whereas the false discovery rate measures the average quality of all discoveries in a given rejection region, our local false discovery rate measures how the quality of discoveries varies across the rejection region, allowing a more fine-grained analysis without requiring the introduction of a prior.
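The thresholding rule described above can be illustrated with a small simulation. The sketch below assumes a simple two-groups model (null test statistics $N(0,1)$, non-null $N(3,1)$, null proportion $\pi_0 = 0.9$); all of these settings, and the use of the oracle mixture densities in place of an estimate, are illustrative assumptions, not the paper's estimation procedure. With $\lambda = 19$, the rejection threshold is $1/(1+\lambda) = 0.05$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
m, pi0, mu1 = 10_000, 0.9, 3.0            # hypothetical simulation settings
null = rng.random(m) < pi0                # truth status of each hypothesis
t = np.where(null, rng.normal(0.0, 1.0, m), rng.normal(mu1, 1.0, m))

# Oracle local fdr under the two-groups model: pi0 * f0(t) / f(t),
# where f is the marginal mixture density of the test statistics.
f0 = norm.pdf(t)
f = pi0 * f0 + (1 - pi0) * norm.pdf(t, loc=mu1)
lfdr = pi0 * f0 / f

# Reject where lfdr <= 1/(1 + lambda): Type I errors cost lambda
# times as much as Type II errors under weighted classification loss.
lam = 19.0
reject = lfdr <= 1.0 / (1.0 + lam)

print(f"rejections: {reject.sum()}, false rejections: {(reject & null).sum()}")
```

In practice the mixture density $f$ and null proportion $\pi_0$ would be estimated from the data (parametrically or nonparametrically, as the summary notes) rather than taken as known.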