Difference-in-differences is a popular method for observational health policy evaluation. It relies on a causal assumption that, in the absence of intervention, treatment groups' outcomes would have evolved in parallel with those of comparison groups. Researchers frequently look for parallel trends in the pre-intervention period to bolster confidence in this assumption. The popular "parallel trends test" evaluates a null hypothesis of parallel trends and, failing to find evidence against the null, concludes that the assumption holds. This tightly controls the probability of falsely concluding that trends are not parallel, but may have low power to detect non-parallel trends. When used as a screening step, it can also introduce bias in treatment effect estimates. We propose a non-inferiority/equivalence approach that instead tightly controls the probability of missing large violations of parallel trends, measured on the scale of the treatment effect. Our framework nests several common use cases, including linear trend tests and event studies. We show that our approach may induce little or no bias when used as a screening step under commonly assumed error structures and, absent violations, can offer a higher-power alternative to testing treatment effects in more flexible models. We illustrate our ideas by reconsidering a study of the impact of the Affordable Care Act's dependent coverage provision.
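To make the contrast concrete, here is a minimal sketch of the equivalence idea in the linear-trend special case: rather than testing the zero null H0: beta = 0 on the differential pre-trend beta (where non-rejection is taken as support for parallel trends), a two one-sided tests (TOST) procedure tests H0: |beta| >= delta against H1: |beta| < delta, so that rejection affirmatively bounds the violation by a tolerance delta chosen on the scale of the treatment effect. The simulated data, the value of delta, and all variable names below are illustrative assumptions, not the paper's notation or implementation; the paper's framework is more general than this linear-trend case.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical pre-period panel: outcome y, a time-invariant treatment-group
# indicator, and calendar time t. No differential trend is built in.
rng = np.random.default_rng(0)
n_units, n_periods = 200, 4
df = pd.DataFrame({
    "unit": np.repeat(np.arange(n_units), n_periods),
    "t": np.tile(np.arange(n_periods), n_units),
    "treated": np.repeat(rng.integers(0, 2, n_units), n_periods),
})
df["y"] = 1.0 + 0.5 * df["t"] + 0.2 * df["treated"] + rng.normal(0, 1, len(df))

# Linear pre-trend model: the coefficient on t:treated is the differential
# trend between groups. Cluster standard errors by unit.
fit = smf.ols("y ~ t * treated", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["unit"]}
)
beta = fit.params["t:treated"]
se = fit.bse["t:treated"]

# Classic parallel trends test: H0 beta = 0. A large p-value is often read
# as support for parallel trends, but it may simply reflect low power.
print("classic zero-null p-value:", fit.pvalues["t:treated"])

# Equivalence (TOST) test with a normal approximation:
# H0: |beta| >= delta vs H1: |beta| < delta, where delta is a violation
# judged material relative to the treatment effect (illustrative here).
delta = 0.3
p_lower = 1 - stats.norm.cdf((beta + delta) / se)  # one-sided H0: beta <= -delta
p_upper = stats.norm.cdf((beta - delta) / se)      # one-sided H0: beta >= +delta
p_tost = max(p_lower, p_upper)
print("equivalence p-value:", p_tost)  # small p => violation bounded by delta
```

Under this sketch, a small equivalence p-value directly controls the probability of missing a violation larger than delta, reversing the burden of proof relative to the classic test.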