In this work, we demonstrate that estimates of differential program impact from comparison group designs that evaluate differences in outcomes between subgroups of individuals are less prone to selection bias than estimates of average impact. First, we argue for the importance of routinely evaluating moderated impacts. Second, using formal and graphical arguments, we show that under specific conditions, cross-site comparisons of performance gradients across subgroups lead to cancellation of bias from site-specific third-variable confounds. This means we can expect a reduction in the standard selection bias that results from differences between the study site and the inference site in average performance; however, cross-site comparisons can introduce bias due to cross-site differences in the subgroup performance gradient. To examine this tradeoff in biases, we apply Within Study Comparison methods to obtain estimates of Root Mean Squared Bias from six studies and empirically evaluate the level of each form of bias. We conclude that estimates of subgroup differences in impact from comparison group studies are less prone to bias overall. By yielding results with limited bias, routine analysis of moderated impacts using quasi-experiments can help broaden our understanding of the conditions under which programs are more effective.
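The cancellation argument above can be illustrated with a minimal numeric sketch. All numbers below are hypothetical: an additive site-specific confound shifts both subgroups' outcomes at the comparison site equally, so it drops out when we difference across subgroups, while any cross-site difference in the subgroup gradient itself survives as a new source of bias.

```python
# Hypothetical subgroup means (no treatment effect, for clarity).
# Inference site: subgroup gradient B - A = 4.0
inference_site = {"A": 10.0, "B": 14.0}

# Comparison (study) site: same gradient, plus an additive
# site-specific confound of +3 affecting both subgroups equally.
confound = 3.0
study_site = {"A": 10.0 + confound, "B": 14.0 + confound}

# Naive comparison of site averages absorbs the full confound.
avg_bias = (study_site["A"] + study_site["B"]) / 2 \
         - (inference_site["A"] + inference_site["B"]) / 2
print(avg_bias)       # 3.0 -- the entire confound appears as bias

# Cross-site comparison of subgroup gradients: the additive
# confound cancels in the differencing.
gradient_bias = (study_site["B"] - study_site["A"]) \
              - (inference_site["B"] - inference_site["A"])
print(gradient_bias)  # 0.0

# But if the gradient itself differs across sites (here B - A = 5
# at the study site), that difference remains as bias.
study_site2 = {"A": 10.0 + confound, "B": 15.0 + confound}
gradient_bias2 = (study_site2["B"] - study_site2["A"]) \
               - (inference_site["B"] - inference_site["A"])
print(gradient_bias2)  # 1.0
```

The sketch shows the tradeoff the abstract describes: differencing across subgroups removes site-level additive confounding but substitutes sensitivity to cross-site differences in the subgroup gradient.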
Reference: Jaciw, A. P. (2020). Are Estimates of Differential Impact from Quasi-Experiments Less Prone to Selection Bias than Average Impact Quantities? (Working Paper No. Empirical_AJE-WP1-2020-O.1). San Mateo, CA: Empirical Education Inc. Retrieved from https://www.empiricaleducation.com/selection-bias