Abadie, Alberto and Guido Imbens (2006) “Large Sample Properties of Matching Estimators for Average Treatment Effects,” Econometrica, 74(1):235–267.

Allcott, Hunt (2012) *Site Selection Bias in Program Evaluation*. NBER Working Paper No. 18373. Cambridge, MA: National Bureau of Economic Research, September 2012. http://www.nber.org/papers/w18373.

Angrist, Joshua, Guido Imbens and Donald Rubin (1996) “Identification of Causal Effects Using Instrumental Variables,” Journal of the American Statistical Association, 91(434):444–455.

Angrist, Joshua, Eric Bettinger, Erik Bloom, Elizabeth King and Michael Kremer (2002) “Vouchers for Private Schooling in Colombia: Evidence from a Randomized Natural Experiment,” American Economic Review, 92(5):1535–1558.

Barnard, John, Constantine E. Frangakis, Jennifer L. Hill and Donald B. Rubin (2003) “Principal Stratification Approach to Broken Randomized Experiments: A Case Study of School Choice Vouchers in New York City,” Journal of the American Statistical Association, 98(462):299–323.

Behrman, Jere, Yingmei Cheng and Petra Todd (2004) “Evaluating Preschool Programs When Length of Exposure to the Program Varies: A Non-Parametric Approach,” Review of Economics and Statistics, 86(1):108–132.

Buddelmeyer, Hielke and Emmanuel Skoufias (2004) *An Evaluation of the Performance of Regression Discontinuity Design on PROGRESA*. World Bank Policy Research Working Paper No. 3386; IZA Discussion Paper No. 827.

Busso, Matias, John DiNardo and Justin McCrary (2009) *New Evidence on the Finite Sample Properties of Propensity Score Matching and Reweighting Estimators*. IZA Discussion Paper No. 3998; forthcoming in Review of Economics and Statistics.

Camacho, Adriana and Emily Conover (2011) “Manipulation of Social Program Eligibility,” American Economic Journal: Economic Policy, 3(2):41–65.

Card, David and Alan Krueger (1994) “Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania,” American Economic Review, 84(4):772–793.

Chalmers, T., J. Block and S. Lee (1970) “Controlled Studies in Clinical Cancer Research,” New England Journal of Medicine, 287:75–78.

Deaton, Angus (1989) “Rice Prices and Income Distribution in Thailand: A Non-Parametric Analysis,” Economic Journal, 99:1–37.

Dehejia, Rajeev (2003) “Was There a Riverside Miracle? A Hierarchical Framework for Evaluating Programs with Grouped Data,” Journal of Business and Economic Statistics, 21(1):1–11.

Dehejia, Rajeev and Sadek Wahba (1997) “Causal Effects in Non-Experimental Studies: Re-Evaluating the Evaluation of Training Programs.” In: (R. Dehejia, ed.) *Econometric Methods for Program Evaluation*, Chapter 1, PhD Dissertation: Harvard University.

Dehejia, Rajeev and Sadek Wahba (1999) “Causal Effects in Non-Experimental Studies: Re-Evaluating the Evaluation of Training Programs,” Journal of the American Statistical Association, 94(448):1053–1062.

Dehejia, Rajeev and Sadek Wahba (2002) “Propensity Score Matching Methods for Non-Experimental Causal Studies,” Review of Economics and Statistics, 84:151–161.

DeLong, J. Bradford and Kevin Lang (1992) “Are All Economic Hypotheses False?” Journal of Political Economy, 100(6):1257–1272.

Diaz, Juan Jose and Sudhanshu Handa (2006) “An Assessment of Propensity Score Matching as a Nonexperimental Impact Estimator: Evidence from Mexico’s PROGRESA Program,” Journal of Human Resources, 41(2).

Doucouliagos, Hristos and Martin Paldam (2008) “Aid Effectiveness on Growth: A Meta Study,” European Journal of Political Economy, 24:1–24.

Duflo, Esther (2001) “Schooling and Labor Market Consequences of School Construction in Indonesia: Evidence from an Unusual Policy Experiment,” American Economic Review, 91(4):795–813.

Fink, Günther, Margaret McConnell and Sebastian Vollmer (2011) *Testing for Heterogeneous Treatment Effects in Experimental Data: False Discovery Risks and Correction Procedures*. Unpublished manuscript.

Flores, Carlos and Oscar Mitnik (2013) “Comparing Treatments Across Labor Markets: An Assessment of Nonexperimental Multiple-Treatment Strategies,” Review of Economics and Statistics, 95(5):1691–1707.

Fraker, T. and R. Maynard (1987) “The Adequacy of Comparison Group Designs for Evaluations of Employment-Related Programs,” Journal of Human Resources, 22:194–227.

Frölich, Markus (2004) “Finite-Sample Properties of Propensity-Score Matching and Weighting Estimators,” Review of Economics and Statistics, 86(1):77–90.

Gelman, Andrew, Xiao-Li Meng and Bruce Sacerdote (2005) “Fixing Broken Experiments using the Propensity Score.” In: (A. Gelman and X.-L. Meng, eds.) *Applied Bayesian Modeling and Causal Inference from Incomplete-Data Perspectives: An Essential Journey with Donald Rubin’s Statistical Family*, Chapter 6, New York, NY: Wiley.

Glewwe, Paul, Michael Kremer and Sylvia Moulin (2009) “Many Children Left Behind: Textbooks and Test Scores in Kenya,” American Economic Journal: Applied Economics, 1(1):112–135.

Godtland, Erin, Elizabeth Sadoulet, Alain De Janvry, Rinku Murgai and Oscar Ortiz (2004) “The Impact of Farmer Field Schools on Knowledge and Productivity: A Study of Potato Farmers in the Peruvian Andes,” Economic Development and Cultural Change, 53(1):63–93.

Hahn, J. (1998) “On the Role of the Propensity Score in Efficient Semiparametric Estimation of Average Treatment Effects,” Econometrica, 66(2):315–331.

Hamermesh, Daniel (2007) “Replication in Economics,” Canadian Journal of Economics, 40(5):715–733.

Heckman, James J. (2001) “Micro Data, Heterogeneity, and the Evaluation of Public Policy: Nobel Lecture,” Journal of Political Economy, 109(4):673–748.

Heckman, James and Joseph Hotz (1989) “Choosing Among Alternative Non-experimental Methods for Estimating the Impact of Social Programs: The Case of Manpower Training,” Journal of the American Statistical Association, 84:862–874.

Heckman, James and Richard Robb (1985) “Alternative Methods for Evaluating the Impact of Interventions.” In: (J. Heckman and B. Singer, eds.) *Longitudinal Analysis of Labor Market Data*. Econometric Society Monograph No. 10, Cambridge: Cambridge University Press.

Heckman, James J., Hidehiko Ichimura, Jeffrey A. Smith and Petra E. Todd (1998) “Characterizing Selection Bias Using Experimental Data,” Econometrica, 66:1017–1098.

Heckman, James, Robert LaLonde and Jeffrey Smith (1999) “The Economics of Active Labor Market Programs.” In: (O. Ashenfelter and D. Card, eds.) *Handbook of Labor Economics, 3*. Amsterdam: Elsevier Science.

Hirano, Keisuke, Guido Imbens and Geert Ridder (2003) “Efficient Estimation of Average Treatment Effects using the Estimated Propensity Score,” Econometrica, 71:1161–1189.

Huber, Martin (2011) “Testing for Covariate Balance Using Quantile Regression and Resampling Methods,” Journal of Applied Statistics, 38(12):2881–2899.

Imbens, Guido and Joshua Angrist (1994) “Identification and Estimation of Local Average Treatment Effects,” Econometrica, 62(2):467–475.

Imbens, Guido and Thomas Lemieux (2008) “Regression Discontinuity: A Guide To Practice,” Journal of Econometrics, 142(2):615–635.

Imbens, Guido and Jeffrey Wooldridge (2009) “Recent Developments in the Econometrics of Program Evaluation,” Journal of Economic Literature, 47(1):5–86.

Jalan, Jyotsna and Martin Ravallion (2003) “Estimating the Benefit Incidence for an Anti-Poverty Program Using Propensity Score Matching,” Journal of Business and Economic Statistics, 21(1):19–30.

LaLonde, Robert (1986) “Evaluating the Econometric Evaluations of Training Programs with Experimental Data,” American Economic Review, 76:604–620.

Li, Qi, Jeffrey Racine and Jeffrey Wooldridge (2009) “Efficient Estimation of Average Treatment Effects with Mixed Categorical and Continuous Data,” Journal of Business and Economic Statistics, 27(2):206–223.

Lucas, Robert (1976) “Econometric Policy Evaluation: A Critique.” In: (K. Brunner and A. Meltzer, eds.) *The Phillips Curve and Labor Markets, Carnegie-Rochester Conference Series on Public Policy*, Vol. 1, New York: Elsevier, pp. 19–48.

Martin, Bronwen, Sunggoan Ji, Stuart Maudsley and Mark Mattson (2010) “‘Control’ Laboratory Rodents Are Metabolically Morbid: Why It Matters,” Proceedings of the National Academy of Sciences, 107(14):6127–6133.

McCrary, Justin (2008) “Manipulation of the Running Variable in the Regression Discontinuity Design: A Density Test,” Journal of Econometrics, 142(2):698–714.

Meier, P. (1972) “The Biggest Public Health Experiment Ever: The 1954 Field Trial of the Salk Poliomyelitis Vaccine.” In: (J. Tanur, ed.) *Statistics: A Guide to the Unknown*. San Francisco: Holden Day, pp. 2–13.

Mekasha, Tseday and Finn Tarp (2011) *Aid and Growth: What Meta-Analysis Reveals*. UNU-WIDER, World Institute for Development Economics Research, Working Paper No. 2011/22.

Miller, Grant, Diana Pinto and Marcos Vera-Hernández (2009) *High-Powered Incentives in Developing Country Health Insurance*. National Bureau of Economic Research Working Paper No. 15456.

Muralidharan, Karthik and Venkatesh Sundararaman (2012) “Teacher Performance Pay: Experimental Evidence from India,” Journal of Political Economy, forthcoming.

Ozier, Owen (2011) *The Impact of Secondary School in Kenya: A Regression Discontinuity Analysis*. Job market paper, Department of Economics, University of California at Berkeley, January 2011.

Pritchett, Lant and Justin Sandefur (2013) *Context Matters for Size: Why External Validity Claims and Development Practice Don’t Mix*. Center for Global Development, Working Paper No. 336.

Ravallion, Martin (2009) “Evaluation in the Practice of Development,” World Bank Research Observer, 24(1):29–53.

Rosenbaum, Paul (1995) *Observational Studies*. New York: Springer-Verlag.

Rosenbaum, Paul and Donald Rubin (1983) “The Central Role of the Propensity Score in Observational Studies for Causal Effects,” Biometrika, 70(1):41–55.

Rubin, Donald (2005) “The Design versus the Analysis of Observational Studies for Causal Effects: Parallels with the Designs of Randomized Trials,” Statistics in Medicine, 26(1):20–36.

Rubin, Donald (2008) “For Objective Causal Inference, Design Trumps Analysis,” The Annals of Applied Statistics, 2(4):808–840.

Shaikh, Azeem M., Marianne Simonsen, Edward Vytlacil and Nese Yildiz (2009) “A Specification Test for the Propensity Score Using Its Distribution Conditional on Participation,” Journal of Econometrics, 151(1):33–46.

Smith, Jeffrey and Petra Todd (2005) “Does Matching Overcome LaLonde’s Critique of Nonexperimental Estimators?” Journal of Econometrics, 125(1–2):305–353.

Subramanian, Shanker and Angus Deaton (1996) “The Demand for Food and Calories,” Journal of Political Economy, 104(1):133–162.

Thistlethwaite, D. and D. Campbell (1960) “Regression-Discontinuity Analysis: An Alternative to the Ex-Post Facto Experiment,” Journal of Educational Psychology, 51:309–317.

VandenBosch, Terry (2011) “International Human Subjects Research Risks,” Office of Human Research Compliance Review, University of Michigan.

van der Klaauw, W. (2002) “Estimating the Effect of Financial Aid Offers on College Enrollment: A Regression Discontinuity Approach,” International Economic Review, 43:1249–1287.

van der Klaauw, Wilbert (2008) “Regression-Discontinuity Analysis: A Survey of Recent Developments in Economics,” Labour, 22(2):219–245.

Wooldridge, Jeffrey (2001) *Econometric Analysis of Cross Section and Panel Data*. Cambridge, MA: MIT Press.
