SEARCH CONTENT

Maschinelles Lernen in der Versicherung (Machine Learning in Insurance)
Series: De Gruyter STEM

Abstract

This paper presents a Bayesian sampling approach to bandwidth estimation for the local linear estimator of the regression function in a nonparametric regression model. In the Bayesian sampling approach, the error density is approximated by a location mixture of Gaussian densities whose means are the individual errors and whose variance is a constant parameter. This mixture density has the form of a kernel density estimator of the errors and is referred to as the kernel-form error density (cf. Zhang, X., M. L. King, and H. L. Shang. 2014. “A Sampling Algorithm for Bandwidth Estimation in a Nonparametric Regression Model with a Flexible Error Density.” Computational Statistics & Data Analysis 78: 218–34). While Zhang, King, and Shang (2014) use the local constant (also known as the Nadaraya-Watson) estimator to estimate the regression function, we extend this to the local linear estimator, which produces more accurate estimates. The proposed investigation is motivated by the lack of data-driven methods for simultaneously choosing bandwidths in the local linear estimator of the regression function and in the kernel-form error density. Treating the bandwidths as parameters, we derive an approximate (pseudo) likelihood and a posterior. A simulation study shows that the proposed bandwidth estimation outperforms the rule-of-thumb and cross-validation methods under the criterion of integrated squared errors. The proposed bandwidth estimation method is validated through a nonparametric regression model involving firm ownership concentration and a model involving state-price density estimation.
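As a rough illustration of the two building blocks named in this abstract, the sketch below (not the authors' code; the Gaussian kernel, function names, and toy data are assumptions) computes a local linear fit, evaluates the kernel-form error density of the residuals, and combines them into a pseudo log-likelihood in the two bandwidths that a Bayesian sampler could explore.

```python
# Minimal sketch: local linear regression + kernel-form error density,
# combined into a pseudo log-likelihood of the bandwidths (h, b).
import numpy as np

def local_linear(x_grid, x, y, h):
    """Local linear estimate of E[y|x] at each point of x_grid (Gaussian kernel)."""
    est = np.empty_like(x_grid, dtype=float)
    for i, x0 in enumerate(x_grid):
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # kernel weights
        X = np.column_stack([np.ones_like(x), x - x0])   # local linear design
        WX = w[:, None] * X
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)       # weighted least squares
        est[i] = beta[0]                                 # intercept = fit at x0
    return est

def kernel_form_error_logdensity(resid, b):
    """Leave-one-out Gaussian-mixture (kernel-form) log-density of the residuals."""
    n = len(resid)
    d = resid[:, None] - resid[None, :]
    K = np.exp(-0.5 * (d / b) ** 2) / (b * np.sqrt(2 * np.pi))
    np.fill_diagonal(K, 0.0)                             # leave-one-out
    return np.sum(np.log(K.sum(axis=1) / (n - 1)))

def pseudo_loglik(h, b, x, y):
    """Pseudo log-likelihood in the regression bandwidth h and error bandwidth b."""
    resid = y - local_linear(x, x, y, h)
    return kernel_form_error_logdensity(resid, b)

# Toy data; in the paper the posterior over (h, b) would be explored by MCMC.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(200)
print(pseudo_loglik(0.05, 0.2, x, y))
```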

Abstract

It is widely recognized that aggregate employment dynamics are characterized by hysteresis. In the presence of hysteresis, the long-run level of employment, instead of being unique and history-independent, depends on the adjustment path that is taken, which includes monetary and fiscal measures. It is thus important to study the presence of hysteresis in the macrodynamics of employment to understand whether the recession that followed the 2007 financial crisis will have permanent effects, and to conduct fiscal and monetary policy prospectively. The main contribution of this paper is to analyse the relative impact of the main sources of hysteresis (non-convex adjustment costs, uncertainty, and the flexibility of working-time arrangements) on the width of the employment band of inaction. For that purpose, a switching employment equation was estimated from a computational implementation of the linear play model of hysteresis. Our results show significant hysteresis effects in aggregate employment dynamics caused by the presence of non-convex adjustment costs as well as uncertainty. We also found that the flexibility firms may have to adjust labour input by varying the number of hours of work per employee helps to mitigate the effect of uncertainty upon the band of inaction.
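The band of inaction can be pictured with a minimal sketch of the linear play operator; the demand path and parameter values below are purely illustrative assumptions, not the estimated switching employment equation from the paper.

```python
# Minimal sketch of the linear play (backlash) operator behind the "band of
# inaction": employment y_t follows demand x_t only once x_t has moved more than
# the play half-width gamma away from the point where the last adjustment stopped.
import numpy as np

def play_operator(x, gamma, y0=0.0):
    """Linear play operator with half-width gamma applied to the path x."""
    y = np.empty_like(x, dtype=float)
    prev = y0
    for t, xt in enumerate(x):
        # inside the band [xt - gamma, xt + gamma] around prev nothing happens
        prev = min(max(prev, xt - gamma), xt + gamma)
        y[t] = prev
    return y

# Demand rises, then falls; a wider band (larger gamma, e.g. under higher
# uncertainty) produces flatter, more hysteretic employment adjustment.
x = np.concatenate([np.linspace(0, 2, 50), np.linspace(2, -1, 50)])
for gamma in (0.1, 0.5):
    print(gamma, play_operator(x, gamma)[-1])
```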

Abstract

We present the findings of a new time-series model that estimates short-term health effects of particulate matter and ozone, as applied to three U.S. cities. The model is based on observed fluctuations of daily death counts and estimates the corresponding daily subpopulations at risk of imminent death; it also shows that virtually all elderly deaths are preceded by a brief period of extreme frailty. We augment previous research by allowing new entrants to this at-risk population to be influenced by the environment, rather than being random. The mean frail subpopulations in the three cities, each containing between 3000 and 5000 daily observations on mortality, pollution, and temperature, are estimated to be about 0.1% of those aged 65 or more, and their life expectancies in this frail state are about one week. We find losses in life expectancy due to air pollution and temperature to be at most one day. Air pollution effects on new entrants into the frail population tend to exceed those on mortality. Our results provide context to the many time-series studies that have found significant short-term relationships between air quality and survival, and they suggest that benefits of air quality improvement should be based on increased life expectancy rather than on estimated numbers of excess deaths.
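A minimal toy simulation of the latent at-risk pool described above is sketched below; all rates and parameter values are invented for illustration and do not reproduce the authors' estimator or estimates.

```python
# Illustrative sketch of a latent frail (at-risk) pool: new entrants and the exit
# hazard both depend on pollution, and observed deaths are draws from the pool.
import numpy as np

rng = np.random.default_rng(1)
T = 1000
pollution = rng.gamma(shape=2.0, scale=10.0, size=T)       # stand-in daily PM levels

frail = 500.0                                               # size of frail pool
deaths = np.empty(T)
for t in range(T):
    entry_rate = 60.0 * (1 + 0.002 * (pollution[t] - 20))   # pollution shifts entries
    death_prob = 0.12 * (1 + 0.001 * (pollution[t] - 20))   # and the exit hazard
    d = rng.binomial(int(frail), min(death_prob, 1.0))
    frail = frail - d + rng.poisson(max(entry_rate, 0.0))
    deaths[t] = d

# Implied mean time spent in the frail state is roughly 1 / death_prob days,
# i.e. on the order of a week for the values assumed here.
print(deaths.mean(), 1 / 0.12)
```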

Abstract

We analyze Australian electricity price returns and find that they exhibit volatility clustering, long memory, structural breaks, and multifractality. Consequently, we let the return mean equation follow two alternative specifications, namely (i) a smooth transition autoregressive fractionally integrated moving average (STARFIMA) process, and (ii) a Markov-switching autoregressive fractionally integrated moving average (MSARFIMA) process. We specify volatility dynamics via a set of (i) short- and long-memory GARCH-type processes, (ii) Markov-switching (MS) GARCH-type processes, and (iii) a Markov-switching multifractal (MSM) process. Based on equal and superior predictive ability tests (using MSE and MAE loss functions), we compare the out-of-sample relative forecasting performance of the models. We find that the (multifractal) MSM volatility model keeps up with the conventional GARCH- and MSGARCH-type specifications. In particular, the MSM model outperforms the alternative specifications, when using the daily squared return as a proxy for latent volatility.
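One small piece of the forecasting comparison can be sketched as follows: fitting a Gaussian GARCH(1,1) by maximum likelihood and scoring its one-step-ahead variances against squared returns under the MSE and MAE losses mentioned above. The return series, starting values, and function names are illustrative assumptions; the paper's MSGARCH and MSM specifications are not reproduced here.

```python
# Minimal sketch: Gaussian GARCH(1,1) by maximum likelihood, scored with MSE/MAE
# against squared returns as a (noisy) proxy for latent volatility.
import numpy as np
from scipy.optimize import minimize

def garch11_filter(params, r):
    omega, alpha, beta = params
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def neg_loglik(params, r):
    sigma2 = garch11_filter(params, r)
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

rng = np.random.default_rng(2)
r = 0.01 * rng.standard_normal(1500)                        # placeholder return series

res = minimize(neg_loglik, x0=[1e-6, 0.05, 0.9], args=(r,),
               bounds=[(1e-12, None), (0.0, 1.0), (0.0, 1.0)])
sigma2 = garch11_filter(res.x, r)

mse = np.mean((r ** 2 - sigma2) ** 2)
mae = np.mean(np.abs(r ** 2 - sigma2))
print(res.x, mse, mae)
```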

Abstract

In data envelopment analysis (DEA), returns to scale (RTS) are a widely accepted instrument for a company to reveal its potential for scaling its activities. In the case of increasing returns to scale (IRS), a company learns that upsizing activities improves its productivity. For decreasing returns to scale (DRS), the instrument should likewise indicate a downsizing force, again for improving productivity. Unfortunately, here the classical RTS concept misbehaves. Under certain circumstances, it is the wrong indicator for scaling activities and even hides the respective productivity improvement potentials. In this paper, we study this phenomenon using the DEA concept and illustrate it via small numerical examples and a real-world application consisting of 37 Brazilian banks.
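For readers unfamiliar with the machinery, the sketch below sets up an input-oriented CCR envelopment linear program and reads returns to scale off the sum of the optimal intensity weights (below one suggesting IRS, above one DRS). This is a textbook indicator on toy data, not the paper's formulation, and it ignores the caveat of alternate optima.

```python
# Minimal sketch: input-oriented CCR (constant returns) DEA efficiency via a
# linear program, with the sum of lambdas used as a standard RTS indicator.
import numpy as np
from scipy.optimize import linprog

def dea_ccr(X, Y, j0):
    """X: (m inputs x n units), Y: (s outputs x n units); evaluate unit j0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                        # minimise theta
    # inputs:  sum_j lam_j * x_ij - theta * x_i,j0 <= 0
    A_in = np.c_[-X[:, [j0]], X]
    b_in = np.zeros(m)
    # outputs: -sum_j lam_j * y_rj <= -y_r,j0
    A_out = np.c_[np.zeros((s, 1)), -Y]
    b_out = -Y[:, j0]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1))
    theta, lam = res.x[0], res.x[1:]
    return theta, lam.sum()

# Toy data: 1 input, 1 output, 5 units.
X = np.array([[2.0, 3.0, 6.0, 9.0, 5.0]])
Y = np.array([[1.0, 4.0, 6.0, 7.0, 3.0]])
for j in range(X.shape[1]):
    theta, lam_sum = dea_ccr(X, Y, j)
    rts = "IRS" if lam_sum < 1 - 1e-6 else "DRS" if lam_sum > 1 + 1e-6 else "CRS"
    print(j, round(theta, 3), rts)
```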

Abstract

For policy decisions, capturing seasonal effects in impulse responses is important for the correct specification of dynamic models that measure interaction effects for policy-relevant macroeconomic variables. In this paper, a new multivariate method is suggested, which uses the score-driven quasi-vector autoregressive (QVAR) model, to capture seasonal effects in impulse response functions (IRFs). The nonlinear QVAR-based method is compared with the existing linear VAR-based method. The following technical aspects of the new method are presented: (i) mathematical formulation of QVAR; (ii) first-order representation and infinite vector moving average, VMA (∞), representation of QVAR; (iii) IRF of QVAR; (iv) statistical inference of QVAR and conditions for consistency and asymptotic normality of the estimates. Data for the period 1987:Q1 to 2013:Q2 are used for the following policy-relevant macroeconomic variables: crude oil real price, United States (US) inflation rate, and US real gross domestic product (GDP). A graphical representation of seasonal effects among the variables is provided by using the IRFs. According to the estimation results, annual seasonal effects are almost undetected by the existing linear VAR tool, but those effects are detected by the new QVAR tool.
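The IRF computation in items (ii) and (iii) rests on the vector moving average representation. The sketch below shows that step for a plain linear VAR(1) with an invented coefficient matrix, which is only a stand-in for the score-driven QVAR used in the paper.

```python
# Minimal sketch: impulse responses from the VMA(inf) representation
# y_t = sum_k Phi^k eps_{t-k} of a linear VAR(1).
import numpy as np

Phi = np.array([[0.5, 0.1, 0.0],     # illustrative VAR(1) coefficients for
                [0.2, 0.4, 0.1],     # (oil price, US inflation, US real GDP)
                [0.0, 0.3, 0.6]])

def irf(Phi, horizon, shock):
    """Response of all variables over `horizon` periods to a unit shock vector."""
    responses = np.empty((horizon + 1, Phi.shape[0]))
    Phik = np.eye(Phi.shape[0])
    for k in range(horizon + 1):
        responses[k] = Phik @ shock   # VMA coefficient matrix times the shock
        Phik = Phi @ Phik
    return responses

# Unit shock to the first variable; seasonal effects would show up as regular
# bumps in these responses at annual lags.
print(irf(Phi, horizon=8, shock=np.array([1.0, 0.0, 0.0])))
```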

Abstract

This paper proposes parametric two-step procedures for assessing the stability of cross-sectional dependency measures in the presence of potential breaks in the marginal distributions. The procedures are based on previously proposed sup-LR tests in which restricted and unrestricted likelihood functions are compared with each other. First, we show theoretically that standard asymptotics do not hold in this situation. We propose a suitable bootstrap scheme and derive test statistics in different commonly used settings. The properties of the test statistics and the precision of the associated change-point estimator are analysed and compared with existing non-parametric methods in various Monte Carlo simulations. These studies reveal advantages in test power for higher-dimensional data and an almost uniform superiority of the sup-LR test in terms of the precision of the change-point estimator. We then apply this method to equity returns of European banks during the financial crisis of 2008.
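A stripped-down version of the idea (here a break in a Gaussian covariance, rather than the paper's dependence models with possibly broken margins) is sketched below: a sup-LR statistic over candidate change points whose p-value comes from a parametric bootstrap, since standard asymptotics are not reliable in this setting.

```python
# Minimal sketch: sup-LR change-point statistic for a break in a Gaussian
# covariance, with a parametric bootstrap approximation of its null distribution.
import numpy as np

def gauss_loglik(z):
    """Maximised Gaussian log-likelihood of a sample (rows = observations)."""
    n, d = z.shape
    S = np.cov(z, rowvar=False, bias=True)
    return -0.5 * n * (d * np.log(2 * np.pi) + np.log(np.linalg.det(S)) + d)

def sup_lr(z, trim=0.15):
    n = len(z)
    taus = range(int(trim * n), int((1 - trim) * n))
    ll0 = gauss_loglik(z)
    return max(2 * (gauss_loglik(z[:t]) + gauss_loglik(z[t:]) - ll0) for t in taus)

def bootstrap_pvalue(z, n_boot=100, seed=0):
    rng = np.random.default_rng(seed)
    stat = sup_lr(z)
    mu, S = z.mean(axis=0), np.cov(z, rowvar=False)
    boot = [sup_lr(rng.multivariate_normal(mu, S, size=len(z))) for _ in range(n_boot)]
    return stat, np.mean(np.array(boot) >= stat)

# Toy example: dependence strengthens halfway through the sample.
rng = np.random.default_rng(3)
z1 = rng.multivariate_normal([0, 0], [[1, 0.2], [0.2, 1]], size=250)
z2 = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=250)
print(bootstrap_pvalue(np.vstack([z1, z2])))
```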

Abstract

We report the results of applying several long-memory models to the historical monthly U.S. inflation rate series and analyze their out-of-sample forecasting performance over different horizons. We find that the time-varying approach to estimating inflation persistence outperforms the models that assume a constant long-memory process. In addition, we examine the link between inflation persistence and exchange rate regimes. Our results support the hypothesis that floating exchange rates are associated with increased inflation persistence. This finding, however, is less pronounced during the era of the Great Moderation and the Federal Reserve System’s commitment to inflation targeting.
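As a rough companion to the time-varying persistence idea, the sketch below estimates the long-memory parameter d with a rolling log-periodogram (GPH) regression on a toy series; the estimator, window length, and data are assumptions, not the models compared in the paper.

```python
# Minimal sketch: rolling log-periodogram (GPH) estimates of the long-memory
# parameter d as a crude picture of time-varying persistence.
import numpy as np

def gph_d(y, power=0.5):
    """Log-periodogram (GPH) estimate of the fractional-difference parameter d."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    n = len(y)
    m = int(n ** power)                                     # number of frequencies used
    j = np.arange(1, m + 1)
    lam = 2 * np.pi * j / n
    I = np.abs(np.fft.fft(y)[1:m + 1]) ** 2 / (2 * np.pi * n)   # periodogram
    x = np.log(4 * np.sin(lam / 2) ** 2)
    slope = np.polyfit(x, np.log(I), 1)[0]
    return -slope

# Toy persistent series standing in for monthly inflation; a rolling window gives
# a crude path of persistence over time.
rng = np.random.default_rng(4)
y = np.cumsum(rng.standard_normal(600)) * 0.02 + rng.standard_normal(600)
window = 240
d_path = [gph_d(y[t:t + window]) for t in range(0, len(y) - window, 12)]
print(np.round(d_path, 2))
```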