Building on a recently developed methodology for sensitivity analysis that parametrizes omitted variable bias in terms of partial R-squared measures, I propose a simple statistic to capture the severity of omitted variable bias in any observational study: the probability that omitted variable bias overturns the reported result. The central element of my proposal is formal covariate benchmarking, whereby researchers choose an observed regressor (or a group of observed regressors) to benchmark the relative strength of association of the omitted regressor with the outcome variable and with the treatment variable. These relative strengths of association serve as the two sensitivity parameters of the analysis. By allowing these sensitivity parameters to take all permissible values, we obtain the most conservative estimate of the probability that omitted variable bias can overturn the reported results. By imposing absolute and relative limits on the maximum values of the sensitivity parameters, based on institutional knowledge or other details of the particular study, a researcher can generate less conservative estimates of that probability. For empirical studies with a relatively large number of regressors and a large sample size, I suggest bounds for the sensitivity parameters based on simulation studies. I illustrate the methodology with an empirical example that studies the effect of exposure to violence on attitudes towards peace.
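The partial R-squared parametrization of omitted variable bias that the abstract builds on can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the function name and the example numbers are hypothetical, and the bias bound shown is the standard formula from this strand of the sensitivity-analysis literature, expressed in terms of the two partial R-squared sensitivity parameters described above.

```python
import math

def ovb_bound(se, dof, r2_yz, r2_dz):
    """Bound on omitted variable bias for an OLS treatment coefficient.

    se:    standard error of the estimated treatment coefficient
    dof:   residual degrees of freedom of the regression
    r2_yz: partial R-squared of the omitted regressor with the outcome
           (given treatment and observed covariates)
    r2_dz: partial R-squared of the omitted regressor with the treatment
           (given observed covariates)
    """
    return se * math.sqrt(r2_yz * r2_dz / (1.0 - r2_dz)) * math.sqrt(dof)

# Hypothetical numbers: a confounder explaining 10% of the residual
# variance of both outcome and treatment, in a regression with
# se = 0.1 and 100 residual degrees of freedom.
bias = ovb_bound(se=0.1, dof=100, r2_yz=0.10, r2_dz=0.10)

# A reported estimate of, say, 0.5 would be overturned only if the
# bias bound exceeds the estimate itself.
overturned = bias >= 0.5
```

Covariate benchmarking then amounts to expressing `r2_yz` and `r2_dz` as multiples of the corresponding partial R-squared values of a chosen observed regressor; sweeping those multiples over their permissible range yields the probability statistic the abstract proposes.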
Basu, Deepankar, "How likely is it that omitted variable bias will overturn your results?" (2024). Economics Department Working Paper Series. 354.
Retrieved from https://scholarworks.umass.edu/econ_workingpaper/354