Journal of Econometric Methods

Ed. by Giacomini, Raffaella / Li, Tong


Online ISSN: 2156-6674

Local Average and Quantile Treatment Effects Under Endogeneity: A Review

Martin Huber / Kaspar Wüthrich
Published Online: 2018-10-09 | DOI: https://doi.org/10.1515/jem-2017-0007

Abstract

This paper provides a review of methodological advancements in the evaluation of heterogeneous treatment effect models based on instrumental variable (IV) methods. We focus on models that achieve identification by assuming monotonicity of the treatment in the IV and analyze local average and quantile treatment effects for the subpopulation of compliers. We start with a comprehensive discussion of the binary treatment and binary IV case, which is relevant, for instance, in randomized experiments with imperfect compliance. We then review extensions to identification and estimation with covariates, multi-valued and multiple treatments and instruments, outcome attrition and measurement error, and the identification of direct and indirect treatment effects, among others. We also discuss testable implications and possible relaxations of the IV assumptions, approaches to extrapolate from local to global treatment effects, and the relationship to other IV approaches.
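
For orientation, the central identification result underlying this framework, due to Imbens and Angrist (1994), can be stated for a binary instrument Z and binary treatment D with potential treatments D(1), D(0) and potential outcomes Y(1), Y(0); the notation below is the standard one and is given here only as a sketch, not taken verbatim from the article. Under instrument validity (random assignment of Z and exclusion of Z from the outcome equation) and monotonicity, D(1) ≥ D(0) for all units, the local average treatment effect on the compliers is identified by the Wald ratio

\[
\Delta_{\mathrm{LATE}} \;=\; E\bigl[Y(1)-Y(0)\mid D(1)>D(0)\bigr]
\;=\; \frac{E[Y\mid Z=1]-E[Y\mid Z=0]}{E[D\mid Z=1]-E[D\mid Z=0]}.
\]

The same assumptions identify the marginal distributions of Y(1) and Y(0) among compliers, so the local quantile treatment effect at level \(\tau\) is the difference of the complier potential-outcome quantiles, \(\mathrm{QTE}_c(\tau)=Q^{c}_{Y(1)}(\tau)-Q^{c}_{Y(0)}(\tau)\).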

Keywords: instrument; LATE; selection on unobservables; treatment effects

JEL Classification: C26
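
As a complement to the formula above, the following minimal Python sketch simulates the simplest setting treated in the paper, a randomized experiment with imperfect compliance, and computes the Wald (LATE) estimator; the data-generating process, compliance shares, and effect sizes are illustrative assumptions and not taken from the article.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Random assignment (instrument Z) and unobserved compliance types:
    # 10% always-takers, 60% compliers, 30% never-takers (illustrative shares).
    z = rng.integers(0, 2, size=n)
    u = rng.uniform(size=n)
    always_taker = u < 0.1
    complier = (u >= 0.1) & (u < 0.7)

    # Treatment actually taken (D): always-takers take it regardless of z,
    # compliers follow their assignment, never-takers never take it.
    d = np.where(always_taker, 1, np.where(complier, z, 0))

    # Potential outcomes: treatment effect of 2 for compliers, 0.5 otherwise.
    y0 = rng.normal(size=n)
    y1 = y0 + np.where(complier, 2.0, 0.5)
    y = np.where(d == 1, y1, y0)

    # Wald / LATE estimator: ratio of intention-to-treat effects on y and d.
    itt_y = y[z == 1].mean() - y[z == 0].mean()
    itt_d = d[z == 1].mean() - d[z == 0].mean()
    print("estimated LATE (compliers):", itt_y / itt_d)  # close to 2

The estimator recovers the complier effect of about 2 rather than the population average effect; this "local" nature of the estimand is what the extrapolation approaches reviewed in the paper address.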


About the article

Citation Information: Journal of Econometric Methods, Volume 8, Issue 1, 20170007, ISSN (Online) 2156-6674, DOI: https://doi.org/10.1515/jem-2017-0007.

©2019 Walter de Gruyter GmbH, Berlin/Boston.