Chernozhukov, V., and I. Fernandez-Val. 2005. "Subsampling Inference on Quantile Regression Processes." Sankhya 67: 253–276.
Chernozhukov, V., I. Fernandez-Val, and B. Melly. 2009. "Inference on Counterfactual Distributions." Cemmap Working Paper CWP09/09.
Davies, R. 1977. "Hypothesis Testing When a Nuisance Parameter Is Present Only Under the Alternative." Biometrika 64: 247–254.
Davies, R. 1987. "Hypothesis Testing When a Nuisance Parameter Is Present Only Under the Alternative." Biometrika 74: 33–43.
Ferguson, T. S. 1999. "Asymptotic Joint Distribution of Sample Mean and a Sample Quantile." http://www.math.ucla.edu/tom/papers/unpublished/meanmed.pdf.
He, X., and Q.-M. Shao. 1996. "A General Bahadur Representation of M-Estimators and Its Applications to Linear Regressions with Nonstochastic Designs." The Annals of Statistics 24: 2608–2630.
Khmaladze, E. V. 1981. "Martingale Approach in the Theory of Goodness-of-Fit Tests." Theory of Probability and Its Applications 26: 240–257.
Knight, K. 1998. "Limiting Distributions for L1 Regression Estimators Under General Conditions." Annals of Statistics 26: 755–770.
Koenker, R. 2005. Quantile Regression. New York: Cambridge University Press.
Kosorok, M. 2008. Introduction to Empirical Processes and Semiparametric Inference. New York: Springer.
Loeve, M. 1977. Probability Theory. New York: Springer-Verlag.
Manski, C. 1991. "Regression." Journal of Economic Literature 29: 34–50.
Rao, C. R. 1973. Linear Statistical Inference and Its Applications. New York: Wiley.
Sethuraman, J., and B. V. Sukhatme. 1959. "Joint Asymptotic Distribution of U-Statistics and Order Statistics." Sankhya 21: 289–298.
Whang, Y.-J. 2006. "Smoothed Empirical Likelihood Methods for Quantile Regression Models." Econometric Theory 22: 173–205.
White, H. 2001. Asymptotic Theory for Econometricians. San Diego, California: Academic Press.
About the article
Published Online: 2013-11-02
Published in Print: 2014-01-01
See Manski (1991) for a general discussion of different forms of regression.
Wald tests designed for linear hypotheses were suggested by Koenker and Bassett (1982a,b), Koenker and Machado (1999), and more recently by Goh and Knight (2009). A wide variety of tests can be formulated using variants of the proposed Wald test, from simple tests on a single quantile regression coefficient to joint tests involving many covariates and distinct quantiles simultaneously. More recently, a large literature on goodness of fit for quantile regression models has emerged; see, e.g., He and Zhu (2003), Whang (2006), and Escanciano and Velasco (2010).
Koenker and Xiao (2002) consider an approach to the Durbin problem based on the martingale transformation of the parametric empirical process suggested by Khmaladze (1981) and show that it can be adapted to a wide variety of inference problems involving the quantile regression process. In related work, Chernozhukov and Fernandez-Val (2005) develop tests for the quantile regression process based on subsampling; their tests do not rely on the Khmaladze transformation.
Intuitively, income has a positive effect on food expenditure, and economic theory conjectures that food expenditure should be more volatile for higher-income households than for lower-income households; that is, there should be heteroskedasticity.
In Remarks (A.1) and (A.2) in Supplemental Appendix A1 we discuss extensions of the results to the instrumental variables and nonlinear regression cases, respectively.
Here we show the results for only one quantile; the results for the general cases are straightforward extensions.
We plot size-adjusted power functions for the sup-Wald and KH tests. The empirical sizes of the sup-Wald test for N(0,1), t(3), Exp(1), χ2(2), F(2,7), and F(7,7) are 0.030, 0.019, 0.039, 0.047, 0.036, and 0.036, respectively. The corresponding sizes for the KH test are 0.136, 0.129, 0.172, 0.177, 0.200, and 0.142.
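Size-adjusted power of this kind is obtained by replacing the nominal critical value with the empirical critical value simulated under the null, so the adjusted test rejects at exactly the nominal rate by construction. A minimal Monte Carlo sketch of the adjustment, using a one-sample t statistic purely for illustration (the statistic, sample size, and alternative are hypothetical, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def t_stat(x):
    # One-sample t statistic for H0: mean = 0.
    n = len(x)
    return np.sqrt(n) * x.mean() / x.std(ddof=1)

n, reps, alpha = 50, 2000, 0.05

# Step 1: simulate the statistic under the null (mean 0) and take
# the empirical (1 - alpha) quantile as the size-adjusted critical value.
null_stats = np.array([t_stat(rng.standard_normal(n)) for _ in range(reps)])
crit = np.quantile(np.abs(null_stats), 1 - alpha)

# Step 2: simulate under the alternative (here, mean 0.5) and compute
# the rejection rate at the simulated critical value: size-adjusted power.
alt_stats = np.array([t_stat(rng.standard_normal(n) + 0.5)
                      for _ in range(reps)])
power = np.mean(np.abs(alt_stats) > crit)
```

Because the critical value is calibrated on the null draws themselves, comparisons of `power` across tests are not distorted by the size differences reported above.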
This is true in our case, since the Engel data form a cross-sectional data set.
We use the heteroskedasticity-robust estimator here.