A standard method for evaluating social programs uses the outcomes of nonparticipants to estimate what participants would have experienced had they not participated. The difference between participant and nonparticipant outcomes is the estimated gross impact of a program reported in many evaluations.
The outcomes of nonparticipants may differ systematically from what the outcomes of participants would have been without the program, producing selection bias in estimated impacts. A variety of nonexperimental estimators adjust for this selection bias under different assumptions. Under certain conditions, randomized social experiments eliminate this bias.
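The comparison described above can be made precise in standard potential-outcome notation. The symbols below (Y_1, Y_0, D) are introduced here for illustration and do not appear in the original text:

```latex
% Let $Y_1$ denote a person's outcome if they participate, $Y_0$ the
% outcome if they do not, and $D = 1$ indicate actual participation.
\begin{align}
\underbrace{E[Y_1 \mid D=1] - E[Y_0 \mid D=0]}_{\text{participant--nonparticipant difference}}
  ={}& \underbrace{E[Y_1 - Y_0 \mid D=1]}_{\text{impact on participants}} \notag \\
     &+ \underbrace{E[Y_0 \mid D=1] - E[Y_0 \mid D=0]}_{\text{selection bias}} .
\end{align}
```

Randomization makes participation independent of outcomes, so that E[Y_0 | D=1] = E[Y_0 | D=0] and the bias term vanishes; nonexperimental estimators instead impose assumptions under which the bias term can be removed or adjusted away.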
Social experiments are costly, and the identifying assumptions required to justify them are not always satisfied. Nonetheless, it is widely held that there is no valid alternative to experimentation as a method for evaluating social programs (see, e.g., Burtless, 1995). In an influential paper, LaLonde (1986) combines data from a social experiment with data from nonexperimental comparison groups to evaluate the performance of many commonly used nonexperimental estimators. For the particular group of parametric estimators he investigates, and for his particular choices of regressors, he finds that the estimators chosen by econometric model selection criteria produce a range of impact estimates that is unacceptably large.
This paper combines data from a social experiment on a prototypical social program with data on comparison groups of persons who chose not to participate in the program evaluated by the experiment. As documented by Heckman, LaLonde and Smith (1998), many programs in place around the world are very similar to the program we analyze in this paper.