This review covers several of the core methodological and empirical developments surrounding stochastic frontier models that incorporate various new forms of dependence. Such models apply naturally to panels where cross-sectional observations on firm productivity correlate over time, but also to situations where various components of the error structure correlate with each other and with input variables. Ignoring such dependence patterns is known to lead to severe biases in the estimates of production functions and to incorrect inference.
The roots of stochastic frontier analysis (SFA) can be traced back to the origins of classical growth accounting and, perhaps, production planning . These fields deal with production relationships, which are usually modeled through production functions or, more generally, transformation functions (e.g., Shephard’s distance functions, directional distance functions, cost functions, etc.). In the classical growth accounting approach, all variation in growth, apart from the variation of inputs, is attributed to the so-called Solow residual, which under certain restrictions measures what is referred to as the change in total factor productivity (TFP).
Early models assumed that all the decision making units (DMUs) represented in the data as observations (e.g., firms, countries, etc.) were independent of one another and fully efficient. This ignored the numerous inefficiencies that arise in practice, which may exhibit arbitrary dependence and can be due to such factors as simultaneity in DMU decisions, unobserved heterogeneity, common sources of information asymmetry and other market imperfections , managerial practices , cultural beliefs , traditions, expectations, and other unobserved factors inducing unaccounted dependence in models of production .
A key feature of several recent developments in SFA is the construction of a statistical model with as few restrictions on its dependence properties as possible. This implicitly recognizes that various forms of dependence are empirical questions that can and should be statistically tested against the data. Modern implementations of SFA provide a framework where shortfalls from the production potential are decomposed into two terms, statistical noise and inefficiency, both of which are unobserved by the researcher but can be estimated for the sample as a whole (e.g., representing an industry) or for each individual DMU under a variety of possible dependence scenarios.
The focus of this review is the extensions of SFA that allow for forms of dependence which until recently have been ignored or modeled in overly restrictive ways. To a large extent, these extensions are motivated by the fact that restricting the nature of dependence of the composed error can lead to severe biases in estimators and incorrect inference. For example, within the confines of the traditional SFA approach one can test for the presence of either inefficiency or noise [25,73]. Thus, the model encompasses the classical approach with a naive assumption of full efficiency (conditional mean) and the deterministic production frontier as special cases. However, such estimators and tests have themselves been derived under the assumption of independence between inefficiency and noise; empirical results suggest that allowing for dependence changes the estimates and tests significantly, potentially distorting the conclusions of the classical models.
Thus, SFA under dependence is a natural relaxation of the extreme assumptions of full efficiency and independence, yet it also encompasses them as special cases, which can still be followed if the data and the statistical tests do not recommend otherwise. If there is evidence in favor of the full efficiency hypothesis after allowing for dependence, one can proceed with regression techniques or growth accounting, but this inference would now be robust to these assumptions.
Accounting for dependence within a production model can be critical for both quantitative and qualitative conclusions and, perhaps more importantly, for the resulting policy implications. For example, El Mehdi and Hafner  found that technical efficiency scores estimated for the financing of Moroccan rural districts tend to be lower when dependence is allowed for than under the assumption of independence, while the rankings remain basically the same. Thus, a key difference emerges depending on whether one seeks to identify the best performers or to measure how much improvement can be made.
While some of the methods and models we present here can also be found in previous reviews, e.g.,  and , and it is impossible to give a good review without following them to some degree, we also summarize many of (what we believe to be) the key recent developments and (with their help) shed some novel perspectives on the workhorse methods. We do not claim, however, that this survey comprehensively covers all of the relevant recent developments in modelling dependence in SFA. Many other important references can be found elsewhere.
The rest of the article is structured as follows. Section 2 introduces the classical cross-sectional stochastic frontier models (SFMs) and focuses on dependence between error components in such models. Section 3 considers dependence via sample selection. Section 4 surveys dependence models used in panels. Section 5 discusses the dependence that underlies endogeneity in the SFM, i.e., the situation where production inputs and error terms are dependent. Section 6 discusses how dependence can help obtain more precise estimates of inefficiency. Section 7 concludes.
2 The benchmark SFM and dependence within the composed error
In cross-sectional settings, one of the main approaches to study productivity and efficiency of firms is the SFM, independently proposed by Aigner et al.  and Meeusen and van den Broeck . Using conventional notation, let $y_i$ be the single output for observation (e.g., firm) $i$ and let $x_i$ be the corresponding vector of inputs. The cross-sectional SFM can be written for a production frontier as
$$ y_i = f(x_i;\beta) + \varepsilon_i, \qquad \varepsilon_i = v_i - u_i, \quad u_i \ge 0, \quad i = 1,\dots,n. \tag{2.1} $$
Here $f(x_i;\beta)$ represents the production frontier of a firm (or more generally a DMU), with given input vector $x_i$. Observations indexed by $i$ are assumed to be independent and identically distributed. Our use of $f(x_i;\beta)$ is to clearly signify that we are parametrically specifying our production function, most commonly as a Cobb-Douglas or translog (see, e.g.,  or  for a detailed treatment of nonparametric estimation of the SFM).
2.1 Canonical independence framework
The main difference between a standard production function setup and the SFM is the presence of two distinct error terms in the model. The term $u_i$ captures inefficiency, the shortfall from maximal output dictated by the production technology, while the term $v_i$ captures stochastic shocks. The standard neoclassical production function model assumes full efficiency, so SFA embraces it as a special case, when $u_i = 0$ for all $i$, and allows the researcher to test this statistically. It is commonly assumed that inputs are exogenous, in the sense that $x_i$ is independent of $v_i$ and $u_i$, and that the two components of the error term are independent of each other.
Many estimation methods require distributional assumptions for both $v_i$ and $u_i$ (beyond the assumption of independence). For an assumed distributional pair, one can obtain the implied distribution for $\varepsilon_i$ and then estimate all of the parameters of the SFM with the maximum likelihood estimator (MLE). The most common assumption is that $v_i \sim N(0,\sigma_v^2)$ and that $u_i$ is from a Half Normal distribution, $u_i \sim N^+(0,\sigma_u^2)$, or from an Exponential distribution.
The most popular case for the density of the composed error is obtained for the Normal Half Normal specification under independence. According to Aigner et al. , the distribution function of a sum of a Normal and a Truncated Normal random variable was first derived by Weinstein . Let $f_v$ and $f_u$ denote the density of $v$ and $u$, respectively. For the Normal Half Normal case, $f_v(v) = \frac{1}{\sigma_v}\phi\left(\frac{v}{\sigma_v}\right)$ and $f_u(u) = \frac{2}{\sigma_u}\phi\left(\frac{u}{\sigma_u}\right)$, $u \ge 0$, where $\phi$ is the standard Normal probability density function (pdf). The closed form expression for the pdf of $\varepsilon = v - u$ can be obtained by convolution as follows:
$$ f_\varepsilon(\varepsilon) = \int_0^\infty f_v(\varepsilon + u) f_u(u)\, du = \frac{2}{\sigma}\,\phi\left(\frac{\varepsilon}{\sigma}\right)\Phi\left(-\frac{\lambda\varepsilon}{\sigma}\right), \tag{2.2} $$
where $\Phi$ is the standard normal cumulative distribution function (cdf), with the parameterization $\sigma^2 = \sigma_v^2 + \sigma_u^2$ and $\lambda = \sigma_u/\sigma_v$. $\lambda$ is commonly interpreted as measuring the proportion of variation in $\varepsilon$ that is due to inefficiency. The density of $\varepsilon$ in (2.2) can be characterized as that of a Skew Normal random variable with location parameter 0, scale parameter $\sigma$, and skew parameter $-\lambda$. This connection has only recently appeared in the efficiency and productivity literature .
It is worth noting that the closed form expression in (2.2) is equivalent to
$$ f_\varepsilon(\varepsilon) = \mathbb{E}_u\left[f_v(\varepsilon + u)\right] = \mathbb{E}_v\left[f_u(v - \varepsilon)\right], $$
where expectations are taken with respect to the relevant distribution. This suggests an alternative, simulation-based, way to construct the density by sampling from the distribution of $u$ or $v$ and evaluating the corresponding sample averages. Among the two sampling options (from the distribution of $u$ or from the distribution of $v$), sampling the $u$'s is more practical as it avoids the need to ensure that the argument of $f_u$ is nonnegative. Sampling can be easily done by sampling from a Normal distribution and taking the absolute values (in the case of the Half Normal distribution).
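The two evaluation routes can be compared directly. Below is a minimal Python sketch (our own function names; the parameter values are hypothetical) of the closed-form density (2.2) next to its Monte Carlo analogue obtained by averaging $f_v(\varepsilon + u)$ over Half Normal draws of $u$:

```python
import numpy as np
from math import erf

def sf_density_closed(eps, sigma_v, sigma_u):
    # Closed-form Normal Half Normal pdf: f(eps) = (2/sigma) phi(eps/sigma) Phi(-lam*eps/sigma)
    sigma = np.sqrt(sigma_v**2 + sigma_u**2)
    lam = sigma_u / sigma_v
    z = eps / sigma
    phi = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    Phi = 0.5 * (1.0 + erf(-lam * z / np.sqrt(2)))
    return 2.0 / sigma * phi * Phi

def sf_density_mc(eps, sigma_v, sigma_u, n_draws=200_000, seed=42):
    # Simulation-based analogue: f(eps) = E_u[ f_v(eps + u) ], u ~ Half Normal(sigma_u),
    # where u is drawn as the absolute value of a N(0, sigma_u^2) draw
    rng = np.random.default_rng(seed)
    u = np.abs(rng.normal(0.0, sigma_u, n_draws))
    f_v = np.exp(-0.5 * ((eps + u) / sigma_v)**2) / (sigma_v * np.sqrt(2 * np.pi))
    return f_v.mean()
```

With enough draws the two evaluations agree to Monte Carlo accuracy, which is the logic the MSLE discussed below relies on.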
Our mathematical formulation will focus on a production frontier as it is the most popular object of study. The framework for dual characterizations (e.g., cost, revenue, profit) or other frontiers is similar and follows with only minor changes in notation. For example, the cost function formulation is obtained by changing the sign in front of $u$ to a “+,” so that $u$ represents excess, rather than shortfall, of cost above the minimum level.
2.2 Modeling dependence
Smith  relaxed the assumption of independence between $u$ and $v$ by introducing a copula function to model their joint distribution. This is one of the first relaxations of the independence assumptions available for SFA, and it allowed testing the adequacy of this assumption. If the marginal distributions of $u$ and $v$ are linked by a copula density $c(\cdot,\cdot)$, then their joint density can be expressed as follows:
$$ f_{v,u}(v,u) = f_v(v) f_u(u)\, c\big(F_v(v), F_u(u)\big), $$
where $F_v$ and $F_u$ denote the respective cdfs. It then follows by a similar construction to (2.2) that the density of $\varepsilon$ can be written as
$$ f_\varepsilon(\varepsilon) = \int_0^\infty f_v(\varepsilon + u) f_u(u)\, c\big(F_v(\varepsilon + u), F_u(u)\big)\, du. \tag{2.3} $$
For commonly used copula families, this density does not have a closed form expression similar to (2.2), even in the Normal Half Normal case, so a simulation-based approach would often need to be used, where we simulate many draws of $u$ and evaluate the sample analogue of the following expectation with respect to the distribution of $u$:
$$ f_\varepsilon(\varepsilon) = \mathbb{E}_u\left[f_v(\varepsilon + u)\, c\big(F_v(\varepsilon + u), F_u(u)\big)\right]. $$
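A minimal sketch of this simulated density, using a Clayton copula as one illustrative (assumed) choice of copula family linking the Normal and Half Normal marginals; all parameter values are hypothetical:

```python
import numpy as np
from math import erf, sqrt, pi

def std_norm_cdf(z):
    # Vectorized standard Normal cdf via math.erf
    return 0.5 * (1.0 + np.vectorize(erf)(np.asarray(z) / sqrt(2.0)))

def clayton_density(a, b, theta):
    # Clayton copula density c(a, b) with dependence parameter theta > 0
    return (1.0 + theta) * (a * b) ** (-1.0 - theta) * \
           (a ** (-theta) + b ** (-theta) - 1.0) ** (-2.0 - 1.0 / theta)

def sf_density_copula(eps, sigma_v, sigma_u, theta, n_draws=10_000, seed=1):
    # Sample analogue of f(eps) = E_u[ f_v(eps+u) c(F_v(eps+u), F_u(u)) ], u ~ Half Normal
    rng = np.random.default_rng(seed)
    u = np.abs(rng.normal(0.0, sigma_u, n_draws))
    v = eps + u
    f_v = np.exp(-0.5 * (v / sigma_v) ** 2) / (sigma_v * sqrt(2.0 * pi))
    F_v = std_norm_cdf(v / sigma_v)
    F_u = 2.0 * std_norm_cdf(u / sigma_u) - 1.0   # Half Normal cdf
    return float(np.mean(f_v * clayton_density(F_v, F_u, theta)))
```

Although the density has no closed form, it still integrates to one over $\varepsilon$, which provides a convenient sanity check on the simulator.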
Smith  found that ignoring the dependence can lead to biased estimates and discussed how one can test whether the independence assumption nested within this model is adequate. It is easy to see that the model in (2.2) is a special case of (2.3) when $c$ is the independence (or product) copula.
From the density of the composed error, along with the assumption of independence over $i$, the log-likelihood function can be written as follows:
$$ \log L = \sum_{i=1}^{n} \log f_\varepsilon(\varepsilon_i), $$
where $\varepsilon_i = y_i - f(x_i;\beta)$. The SFM can be estimated using the traditional MLE, if an analytic expression for the integrals is available, or the maximum simulated likelihood estimator (MSLE), if we need to use a simulation-based approach to evaluate the integrals in the density $f_\varepsilon$. The benefit of MLE/MSLE is that, under the assumption of correct distributional specification of $\varepsilon$, the MLE is asymptotically efficient (i.e., consistent, asymptotically Normal, and its asymptotic variance reaches the Cramér–Rao lower bound). A further benefit is that a range of testing options is available. For instance, tests related to the model parameters can easily be undertaken using any of the classic trilogy of tests: Wald, Lagrange multiplier, or likelihood ratio. The ability to readily and directly conduct asymptotic inference is one of the major benefits of SFA over data envelopment analysis (DEA).
Two main issues that practitioners face when confronting dependence are the choice of copula model and the assumed error distributions that best fit the data. As Wiboonpongse et al. (, p. 34) note “The impact of the independence assumption on technical efficiency estimation has long remained an open issue.” Analytical criteria such as AIC or BIC can be used for these purposes, see both  and  for detailed reviews.
More specifically, Wiboonpongse et al.  use MSLE and systematically consider several copula families, including the Student-t, Clayton, Gumbel, and Joe families as well as their relevant rotated versions. Their data are a cross section of coffee producers in Thailand, and they use AIC and BIC to determine which copula model is most appropriate. Wiboonpongse et al.  also assume the marginals of the two error components are Normal and Half Normal, then apply a range of copulas to inject dependence. In their empirical application they have a total of 111 observations. The Clayton copula is found to fit best, and plots of technical efficiencies across the 111 farmers show that the best fitting copula model yields nearly uniformly lower TE scores than the independence model (though the differences are not large). Finally, it also appears that the ranks are preserved (see their Figure 3).
An unintended benefit of modelling dependence is that it may alleviate the “wrong skewness” issue that is common in the canonical Normal Half Normal SFM [77,89]. The wrong skewness issue arises when the OLS residuals display skewness that is of the wrong sign compared to that stemming from the frontier model (so positive when estimating a production frontier). For specific distributional pairs the model then cannot separately identify the variance parameters of $v$ and $u$. It was noted in  that the third central moment of the composed error is
$$ \mathbb{E}\big[(\varepsilon - \mathbb{E}\varepsilon)^3\big] = \mathbb{E}[\tilde v^3] - \mathbb{E}[\tilde u^3] - 3\,\mathbb{E}[\tilde v^2 \tilde u] + 3\,\mathbb{E}[\tilde v \tilde u^2], $$
where $\tilde v$ and $\tilde u$ denote deviations of $v$ and $u$ from their respective means.
It is clear that the skewness of $\varepsilon$ depends only on the skewness of $u$ when $v$ is assumed to be symmetric and independent of $u$. Once $u$ and $v$ are allowed to be dependent and/or $v$ is allowed to be asymmetric, the skewness of the composed error no longer has to align with the skewness of inefficiency. Thus, modelling dependence is one way in which some of the empirical vagaries of the SFM can be overcome .
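The moment mechanics are easy to verify by simulation: with symmetric noise independent of inefficiency the third central moment of the composed error mirrors that of $-u$, while dependence between the components can push it toward zero or flip it. A sketch with hypothetical parameter values:

```python
import numpy as np

def third_central_moment(x):
    return np.mean((x - x.mean()) ** 3)

rng = np.random.default_rng(7)
n = 1_000_000
u = np.abs(rng.normal(0.0, 1.0, n))          # Half Normal inefficiency, positively skewed

# Case 1: symmetric noise independent of u -> m3(eps) = -m3(u)
v_indep = rng.normal(0.0, 1.0, n)
eps_indep = v_indep - u

# Case 2: noise positively dependent on u -> the cross-moment terms kick in
v_dep = 0.8 * (u - u.mean()) + rng.normal(0.0, 0.6, n)
eps_dep = v_dep - u
```

In the first case the composed error inherits (negative) skewness from $-u$; in the second, the dependence largely offsets it even though the inefficiency distribution is unchanged.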
2.2.1 Asymmetric dependence
A common feature of all of the papers that have allowed dependence in the SFM is the use of copulas that impose symmetric dependence, i.e., that treat the noise and inefficiency components exchangeably. However, a recent suggestion by Wei et al.  offers a set of copulas that allow for asymmetric dependence. As Wei et al. (, p. 57) note, “[…]in practical situations, the inefficiency component and the error component often play different roles in global inefficiency, and in such cases, the symmetric copulas are not suitable.” They define asymmetric copulas as those that have non-exchangeability and/or radial asymmetry properties .
Wei et al.  introduced the Skew Normal copula and used it to construct their SFM with dependence. An interesting feature of their general setup is that they allow both $v$ and $u$ to be asymmetric along with an asymmetric copula (see their Proposition 3.1). As in , Wei et al.  recommended selecting the copula model based on AIC/BIC. In their empirical application 31 out of 108 farms have the same efficiency rank (the bottom 5 are in complete agreement as are 4 of the top 5) across the standard SFM and the asymmetric copula SFM. The point estimates of technical efficiency, however, show large differences between the two competing models, which again provides evidence that ignoring dependence can have an undue influence on the point estimates of technical inefficiency.
3 Dependence via sample selection
Another way in which dependence can arise is through sample selection. Sample selection itself has only recently become a serious area of focus in the stochastic frontier literature. Several early approaches to deal with potential sample selection follow the two-step correction . In the first stage, the probability of selection is estimated and the inverse Mills ratio is calculated for each observation. This estimated inverse Mills ratio is then included as a regressor in the final SFM. An example of this is the Finnish farming study . This limited information two-step approach works in a standard linear regression framework because of linearity, which Greene  makes clear. However, as shown in , when inefficiency is present no two-step approach will work and full information maximum likelihood estimation is required.
Recognizing the limitations of direct application of the two-stage approach, both Kumbhakar et al.  and Greene  proposed alternative stochastic frontier selection models. The two approaches differ in how selection arises in the model. The Greene  model allows the choice of technology to be influenced by correlation between random error in the selection and frontier models, whereas Kumbhakar et al.  constructed a model where the choice of technology is based on some aspect of inefficiency, inducing a different form of sample selection. Beyond the difference in how selection arises, the sample selection stochastic production frontier models  and  are identical.
Sriboonshitta et al.  were the first to recognize that dependence could enter the sample selection model. They work with the Greene  stochastic frontier sample selection model and admit dependence into the composite error term. This is termed a double-copula because they have a copula in the sample selection equation and a copula in the SFM. See (, equation (20)) for the likelihood function.
Beyond a small set of simulations, Sriboonshitta et al.  applied the double copula sample selection SFM to 200 rice farmers from Kamphaeng Phet province, Thailand, in 2012 using a Cobb-Douglas production frontier and considered eight different copula functions (see their Table 4). Their preferred model based on the AIC is a Gaussian copula with 270-degree rotated Clayton model. They find a substantial difference in estimated TE scores between the Greene  model which assumes independence and their double-copula model (see their Figure 5). As Sriboonshitta et al. (, p. 183) note “[…]improperly assuming independence between the two components of the error term in the SFM may result in biased estimates of technical efficiency scores, hence potentially leading to wrong conclusions and recommendations.”
As a further extension of , Liu et al.  noted that “this double-copula model neglects the correlation between the unobservables in the selection model and the random error in the SFM, in contrast to Greene’s model.” Liu et al.  generalized the Greene  model by modeling the dependence between the unobservables in the selection equation and the two error terms in the production equation using a trivariate Gaussian copula. The key feature is that the trivariate and double copula models rely on different assumptions concerning the joint distribution of , , and ( here is the error in the selection equation). Liu et al.  made note of the decomposition
and note that the double copula model assumes that the distribution of the selection error depends only on the composite error, not on its individual components. This also implies that the double copula model and the trivariate copula model are nonnested.
Liu et al.  provided an application that focuses on Jasmine/non-Jasmine rice farming in Thailand. The data suggest that inefficiency is Gamma distributed in the most preferred model. As with some of the earlier papers, Liu et al. (, p. 193) noted that “[…]both Greene’s model and the double-copula model appear to overestimate technical efficiency. According to the [trivariate Gaussian copula] model, farmers also exhibit a wider range of production technical efficiency in Jasmine rice farming […].”
4 Dependence in panel SFM
When repeated observations of the firms are available, then we can allow for richer models that incorporate unobserved components and various other dependence structures. Most importantly, we can extract information about likely time trends in inefficiency and time constant firm-specific characteristics. Pitt and Lee  seem to be the first to extend the cross-sectional SFM to a panel structure, and Schmidt and Sickles  were the first to propose a panel-specific methodology for SFA.
4.1 A benchmark specification
The benchmark panel SFM can be written as follows:
$$ y_{it} = f(x_{it};\beta) + c_i - u_{0i} + v_{it} - u_{it}. \tag{4.1} $$
This model differs from (2.1) in many ways. All observed variables and error terms inherited from (2.1) now have a double index for both firms, $i$, and time, $t$. In addition, the model contains the so-called firm-specific heterogeneity, $c_i$, and the time-invariant component of inefficiency, $u_{0i}$. Compared with (2.1), $c_i$ encapsulates any unobserved factors that affect output (other than inputs) without changing over time, such as unmeasured management or operational specifics of the firm. If such factors are present, the dependence between $c_i$ and $x_{it}$ causes omitted variable biases and invalidates inference based on the cross-sectional SFM. In panel models, when ignored, such factors can serve as common sources of dependence in the error term, which also leads to invalid inference.
Another distinguishing feature of (4.1) is the presence of a component of inefficiency which is time-invariant. This means that inefficiency is composed of both time-invariant and time-varying components, which are sometimes interpreted as long-run and short-run inefficiency. Since both the firm-specific heterogeneity and the time-invariant inefficiency component are unobserved, it will generally be difficult to decompose the time-invariant part of the error into these two constituents.
Classical panel methods (i.e., methods that assume there is no inefficiency of either kind) allow for various forms of dependence between the firm effect and the inputs. For example, estimation under the fixed effects (FE) framework allows the firm effect to be correlated with $x_{it}$ and uses a within transformation to obtain a consistent estimator of $\beta$. Alternatively, estimation in the random effects (RE) framework assumes that the firm effect and $x_{it}$ are independent and uses OLS or GLS. The difference between OLS and GLS arises due to the fact that the variance-covariance matrix of the composed error term is no longer diagonal, and so feasible GLS is asymptotically efficient.
The early work on panel SFM assumed inefficiency to be time-invariant. This allowed handling dependence within panels using classical panel methods such as FE and RE estimation . The standard time-invariant SFM is
$$ y_{it} = f(x_{it};\beta) + v_{it} - u_i, \qquad u_i \ge 0. $$
Under the FE framework, the time-invariant inefficiency $u_i$ is allowed to have arbitrary dependence with $x_{it}$. In cases in which there are time-invariant variables of interest in the production model, one can use the RE framework, which also requires no distributional assumptions on $v_{it}$ and $u_i$ and can be estimated with OLS or GLS. Alternatively, in such cases, one can rely on distributional assumptions as in , where $v_{it}$ is assumed to follow a Normal distribution and $u_i$ a Half Normal.
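As a concrete sketch of the FE route on simulated data (all data-generating numbers are hypothetical), the slope is recovered by a within regression, and relative inefficiency is then read off the estimated firm effects by benchmarking against the most efficient firm, in the spirit of Schmidt and Sickles:

```python
import numpy as np

rng = np.random.default_rng(3)
n, T, beta = 200, 10, 0.5

# Panel with time-invariant inefficiency: y_it = 1 + beta*x_it + v_it - u_i
u = np.abs(rng.normal(0.0, 0.5, n))                    # time-invariant inefficiency
x = rng.normal(0.0, 1.0, (n, T)) + 0.3 * u[:, None]    # x correlated with u: FE handles this
y = 1.0 + beta * x + rng.normal(0.0, 0.2, (n, T)) - u[:, None]

# Within (fixed effects) estimator of the slope
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
b_hat = (xd * yd).sum() / (xd ** 2).sum()

# Firm effects; relative inefficiency uses the best firm as the benchmark
alpha_hat = (y - b_hat * x).mean(axis=1)
u_hat = alpha_hat.max() - alpha_hat
```

No distributional assumptions on $u_i$ are needed, and the deliberate correlation between inputs and inefficiency does not bias the within estimator.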
Table 1 contains a summary of the classical SFMs allowing for specific forms of serial dependence in inefficiency. It also lists any additional dependence structures permitted in these different models, such as dependence between the unobserved effects and the inputs. See  and  for a detailed discussion of these methods.
|Paper|Serial dependence in inefficiency|Other dependence allowed|
|---|---|---|
|Schmidt and Sickles |None|Between and|
|Cornwell et al. ||Between and|
|Battese and Coelli |None||
|Lee and Schmidt ||Between and|
|Battese and Coelli |None||
|Kumbhakar and Heshmati |None||
|Greene |None|Between and|
|Greene |None|Between and|
|Wang and Ho |None|Between and|
|Badunenko and Kumbhakar |||
4.2 Quasi MLE
If there is no firm-specific heterogeneity, or if it is assumed to be part of $v_{it}$ or $u_{it}$, which are independent of $x_{it}$, then we can view the panel model as a special case of the cross-sectional model (2.1), only with the double index $it$. The MLE method described in Section 2 applies in this case, but it uses the sample likelihood obtained by leveraging the assumption of independence over both $i$ and $t$, not just $i$. Because independence over $t$ is questionable in panels, this version of MLE is often referred to as quasi-MLE (QMLE).
Let $\varepsilon_{it} = y_{it} - f(x_{it};\beta)$ and let $f(\varepsilon_{it};\theta)$ denote the density of the composed error term evaluated at $\varepsilon_{it}$. Then,
$$ \log L(\theta) = \sum_{i=1}^{n}\sum_{t=1}^{T} \log f(\varepsilon_{it};\theta), $$
and the QMLE of $\theta$ can be written as
$$ \hat\theta_{QMLE} = \arg\max_{\theta}\; \sum_{i=1}^{n}\sum_{t=1}^{T} \log f(\varepsilon_{it};\theta). $$
The QMLE is known to be consistent even if there is no independence over but to obtain the correct standard errors one needs to use the so-called “sandwich,” or misspecification-robust, estimator of the QMLE asymptotic variance matrix. QMLE is known to be dominated in terms of precision by several other estimators that use the dependence information explicitly (and correctly). However, an appeal of the QMLE in this setting is that assuming independence is more innocuous in the sense that it does not lead to estimation bias, only to a lack of precision, when compared to a misspecification of the type of dependence that can lead to distinct biases.
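The flavor of the sandwich correction can be seen in a toy setting where the "QMLE" is simply the pooled sample mean and errors are correlated within firms: the naive independence-based variance understates the truth, while a clustered sandwich variance (scores summed within each firm) does not. A sketch with hypothetical numbers:

```python
import numpy as np

rng = np.random.default_rng(11)
n, T = 500, 8

# Panel of demeaned errors: a common firm shock induces dependence over t
firm = rng.normal(0.0, 1.0, (n, 1))
e = firm + rng.normal(0.0, 1.0, (n, T))     # Corr(e_it, e_is) = 0.5 for t != s

theta_hat = e.mean()                         # pooled "QMLE" of the mean (true value 0)

# Naive variance pretends all n*T observations are independent
var_naive = e.var() / (n * T)

# Sandwich: per-firm score sums, robust to arbitrary within-firm dependence
s_i = (e - theta_hat).sum(axis=1)
var_sandwich = (s_i ** 2).sum() / (n * T) ** 2
```

Here the sandwich variance is several times larger than the naive one, mirroring the misspecification-robust standard errors required for QMLE under time dependence.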
Amsler et al.  proposed several estimators that model time dependence in panels. One such estimator can be obtained in the Generalized Method of Moments (GMM) framework. Let $s_{it}(\theta)$ denote the score of the density function $f(\varepsilon_{it};\theta)$, i.e.,
$$ s_{it}(\theta) = \nabla_\theta \log f(\varepsilon_{it};\theta), $$
where $\nabla_\theta$ denotes the gradient with respect to $\theta$. Then, the QMLE of $\theta$ solves
$$ \sum_{i=1}^{n}\sum_{t=1}^{T} s_{it}(\theta) = 0 $$
and is identical to the GMM estimator based on the moment condition
$$ \mathbb{E}\left[\sum_{t=1}^{T} s_{it}(\theta)\right] = 0, \tag{4.6} $$
where the expectation is with respect to the distribution of $\varepsilon_{it}$. However, under time dependence, summation (over $t$) of the scores in (4.6) is not the optimal weighting. The theory of optimal GMM suggests using correlation of $s_{it}(\theta)$ over $t$ by applying the GMM machinery to the score functions stacked as follows:
$$ \mathbb{E}\big[\big(s_{i1}(\theta)', \dots, s_{iT}(\theta)'\big)'\big] = 0. $$
The optimal GMM estimator based on these moment conditions has asymptotic variance no larger than that of any other estimator using the same moment conditions. In a classical (non-SFA) panel data setting, Prokhorov and Schmidt  call this estimator Improved QMLE (IQMLE).
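A stylized illustration of why optimal weighting of period-wise moment conditions helps: suppose a common mean is estimated from two correlated, unequal-variance measurements per firm. Equal weighting (the analogue of summing scores over $t$) is dominated by GLS-type weights built from the inverse covariance of the moments, whose asymptotic variance is $(G'\Sigma^{-1}G)^{-1}$. The numbers below are hypothetical:

```python
import numpy as np

# Two moment conditions E[y_t - mu] = 0, t = 1, 2, with moment covariance Sigma
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])
G = np.array([[1.0], [1.0]])   # Jacobian of the moments with respect to mu (up to sign)

# Equal weighting (sum of the two moments, as in QMLE): variance of (y1 + y2)/2
var_equal = (np.ones((1, 2)) @ Sigma @ np.ones((2, 1)))[0, 0] / 4.0

# Optimal GMM: asymptotic variance (G' Sigma^{-1} G)^{-1}
var_opt = 1.0 / (G.T @ np.linalg.inv(Sigma) @ G)[0, 0]
```

Here `var_opt` is strictly smaller than `var_equal` whenever the two moments are not exchangeable; the two coincide only when equal weighting happens to be optimal.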
4.3 Using a Copula
Alternative estimators that allow explicit modelling of dependence between the errors over $t$ have to construct a joint distribution of those errors. Amsler et al.  offered two ways of doing so. One is to apply a copula to form the joint (over $t$) density of the composed errors $(\varepsilon_{i1},\dots,\varepsilon_{iT})$. The other is to use a copula to form the joint distribution of the one-sided errors $(u_{i1},\dots,u_{iT})$.
Given the Normal/Half Normal marginals of the $\varepsilon_{it}$'s in (2.2) and a copula density $c(\cdot)$, the joint density can be written as follows:
$$ h(\varepsilon_{i1},\dots,\varepsilon_{iT}) = c\big(F(\varepsilon_{i1}),\dots,F(\varepsilon_{iT})\big)\prod_{t=1}^{T} f(\varepsilon_{it}), $$
where, as before, $f$ is the pdf of the composed error term and $F$ is the corresponding cdf. Once the joint density is obtained, we can construct a log-likelihood and run MLE. If we let the copula density have a scalar parameter $\rho$, then the sample log-likelihood can be written as follows:
$$ \log L = \sum_{i=1}^{n}\left[\log c\big(F(\varepsilon_{i1}),\dots,F(\varepsilon_{iT});\rho\big) + \sum_{t=1}^{T}\log f(\varepsilon_{it})\right]. \tag{4.7} $$
The first term in the summation is what distinguishes this likelihood from that of QMLE – an explicit modelling of dependence between the composed errors at different $t$.
In a GMM framework, the MLE that maximizes (4.7) is identical to the GMM estimator based on the moment conditions
where . Again, efficiency improvement is, in some circumstances, possible if we instead use the optimal GMM machinery on the moment conditions
However, this improvement may now come at the price of a bias as the copula-based moment conditions may be misspecified causing inconsistency of GMM and offsetting any benefit of higher precision. So assuming a wrong kind of time dependence may be worse than assuming independence (over time). Prokhorov and Schmidt  explored these circumstances.
The alternative copula-based specification is to form the joint distribution of $(u_{i1},\dots,u_{iT})$ rather than that of $(\varepsilon_{i1},\dots,\varepsilon_{iT})$. A challenge of this specification is that a $T$-dimensional integration will be needed to form the likelihood in this case. Let $h_u$ denote the copula-based joint density of the one-sided error vector $u_i = (u_{i1},\dots,u_{iT})'$ and let $f_u$ denote the marginal density of an individual one-sided (Half Normal) error term. Then,
$$ h_u(u_{i1},\dots,u_{iT}) = c\big(F_u(u_{i1}),\dots,F_u(u_{iT})\big)\prod_{t=1}^{T} f_u(u_{it}), \tag{4.9} $$
where $F_u$ is the cdf of the Half Normal error term.
To form the sample likelihood we need the joint density of the composed error vector $\varepsilon_i = (\varepsilon_{i1},\dots,\varepsilon_{iT})'$. Given the density of $u_i$ and assuming, as before, that the noise terms are Normal, this density can be obtained as follows:
$$ f_\varepsilon(\varepsilon_i) = \mathbb{E}_{u_i}\big[f_v(\varepsilon_i + u_i)\big], $$
where $\mathbb{E}_{u_i}$ denotes the expectation with respect to $h_u$ and $f_v$ is the multivariate Normal pdf of $(v_{i1},\dots,v_{iT})'$, where all $v_{it}$'s are independent and have equal variance $\sigma_v^2$.
Similar to the previous section, this integral has no analytical form. Additionally, this is a $T$-dimensional integral, which is computationally strenuous to evaluate using numerical methods. However, it has the form of an expectation over a distribution we can sample from, and this, as before, permits application of MSLE, where we simulate the $u_{it}$'s and estimate the integral by averaging over the draws. To be precise, let $S$ denote the number of simulations. The direct simulator of $f_\varepsilon(\varepsilon_i)$ can be written as follows:
$$ \hat f_\varepsilon(\varepsilon_i) = \frac{1}{S}\sum_{s=1}^{S} f_v\big(\varepsilon_i + u_i^{(s)}\big), $$
where $u_i^{(s)}$ is a draw from $h_u$ constructed in (4.9). Then, a simulated log-likelihood can be obtained as follows:
$$ \log L = \sum_{i=1}^{n}\log\left[\frac{1}{S}\sum_{s=1}^{S} f_v\big(\varepsilon_i + u_i^{(s)}\big)\right], $$
where, as before, $\varepsilon_{it} = y_{it} - f(x_{it};\beta)$.
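The direct simulator can be sketched as follows, with a Gaussian copula over $t$ chosen as an illustrative (assumed) way to induce dependence among the Half Normal draws; parameter values are hypothetical:

```python
import numpy as np
from statistics import NormalDist

_nd = NormalDist()

def draw_u(S, T, sigma_u, R, rng):
    # Draw S vectors u^(s) from a Gaussian copula with Half Normal marginals:
    # z ~ N(0, R); p_t = Phi(z_t); u_t = F_u^{-1}(p_t) = sigma_u * Phi^{-1}((1 + p_t)/2)
    L = np.linalg.cholesky(R)
    z = rng.normal(size=(S, T)) @ L.T
    p = np.vectorize(_nd.cdf)(z)
    return sigma_u * np.vectorize(_nd.inv_cdf)((1.0 + p) / 2.0)

def msle_density(eps_i, sigma_v, sigma_u, R, S=50_000, seed=0):
    # Direct simulator: f(eps_i) ~ (1/S) sum_s f_v(eps_i + u^(s)),
    # with f_v a product of independent N(0, sigma_v^2) pdfs
    rng = np.random.default_rng(seed)
    u = draw_u(S, len(eps_i), sigma_u, R, rng)
    v = eps_i[None, :] + u
    log_fv = -0.5 * (v / sigma_v) ** 2 - np.log(sigma_v * np.sqrt(2 * np.pi))
    return float(np.exp(log_fv.sum(axis=1)).mean())
```

With an identity copula correlation matrix the simulator collapses to a product of the univariate composed-error densities, which provides a useful check of the implementation.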
This method is a multivariate extension to the simulation-based estimation of univariate densities discussed earlier. An important additional requirement is the ability to sample from the copula; see (, Ch. 2) for a discussion of how to sample from the copula to allow dependence. Other than that, similar asymptotic arguments suggest that MSLE is asymptotically equivalent to MLE .
5 Dependence due to endogeneity
A common assumption in the SFM is that inputs are either exogenous or independent of both noise and inefficiency. If either of these conditions is violated, then all of the estimators discussed so far will be biased and most likely inconsistent. Yet, it is not difficult to think of settings where endogeneity is likely to exist. For example, if shocks are observed before inputs are chosen, then producers may respond to good or bad shocks by adjusting inputs, leading to correlation between inputs and noise. Alternatively, if managers know they are inefficient, they may use this information to guide their level of inputs, again producing endogeneity. In a regression model, dealing with endogeneity is well understood. However, in the composed error setting, these methods cannot be simply transferred over, but require care in how they are implemented .
To incorporate endogeneity into the SFM in (2.1), we partition $x_i = (x_{1i}', x_{2i}')'$, where $x_{1i}$ are our exogenous inputs and $x_{2i}$ are the endogenous inputs, where endogeneity may arise through correlation of $x_{2i}$ with $v_i$, $u_i$, or both. To deal with endogeneity we require instruments, $w_i$, and identification necessitates that the dimension of $w_i$ be at least as large as the dimension of $x_{2i}$. The natural assumption for valid instrumentation is that $w_i$ is independent of both $v_i$ and $u_i$.
Why worry about endogeneity? Economic endogeneity means that the inputs in question are choice variables and are chosen to optimize some objective function such as cost minimization or profit maximization. Statistical endogeneity arises from simultaneity, omitted variables, and measurement errors. For example, if the omitted variable is managerial ability, which is part of inefficiency, inefficiency is likely to be correlated with inputs because managerial ability affects inputs. This is the Mundlak argument for why omitting a management quality variable (for us inefficiency) will cause biased parameter estimates. Endogeneity can also be caused by simultaneity, meaning that two or more variables in the model are jointly determined. In many applied settings, it is not clear what researchers mean when they attempt to handle endogeneity inside the SFM. An excellent introduction to the myriad of influences that endogeneity can have on the estimates stemming from the SFM can be found in . Mutter et al.  used simulations designed around data based on the California nursing home industry to understand the impact of endogeneity of nursing home quality on inefficiency measurement.
The simplest approach to accounting for endogeneity is to use a corrected two-stage least squares (C2SLS) approach, similar to the common corrected ordinary least squares (COLS) approach that has been used to estimate the SFM. This method estimates the SFM using standard two-stage least squares (2SLS) with instruments $w_i$. This produces consistent estimators for the slope parameters but not the intercept, which is obscured by the presence of $E[u_i]$ (to ensure that the residuals have mean zero). The second and third moments of the 2SLS residuals are then used to recover estimators of $\sigma_v^2$ and $\sigma_u^2$. Once $\sigma_u$ is determined, the intercept can be corrected by adding back an estimate of $E[u_i]$. See Section 4.1 of  for details.
This represents a simple avenue to account for endogeneity, and it does not require specifying how endogeneity enters the model, i.e., through correlation with $v_i$, with $u_i$, or both. However, as with other corrected procedures based on calculations of the second and third (or even higher) moments of the residuals, from  and , if the initial 2SLS residuals have positive skew (instead of negative), then $\sigma_u$ cannot be identified and its estimator is 0. Furthermore, the standard errors from this approach need to be modified for the estimator of the intercept to account for the step-wise nature of the estimation.
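The C2SLS recipe can be sketched compactly on simulated data. The Half Normal moment formulas used below ($E[u] = \sigma_u\sqrt{2/\pi}$ and third central moment $-\sqrt{2/\pi}(4/\pi - 1)\sigma_u^3$ for $\varepsilon = v - u$) are standard, but the data-generating numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000

# Data: x is endogenous (correlated with noise v); z is the instrument
z = rng.normal(0.0, 1.0, n)
v = rng.normal(0.0, 0.5, n)
u = np.abs(rng.normal(0.0, 1.0, n))          # Half Normal inefficiency, sigma_u = 1
x = z + 0.5 * v + rng.normal(0.0, 1.0, n)
y = 1.0 + 1.0 * x + v - u

# Standard 2SLS with instruments W = [1, z]
W = np.column_stack([np.ones(n), z])
X = np.column_stack([np.ones(n), x])
xhat = W @ np.linalg.lstsq(W, X, rcond=None)[0]   # project X on the instrument space
b2sls = np.linalg.lstsq(xhat, y, rcond=None)[0]   # slope consistent; intercept absorbs -E[u]
resid = y - X @ b2sls

# Moment corrections: m3(resid) = -sqrt(2/pi) * (4/pi - 1) * sigma_u^3
m2 = np.mean(resid ** 2)
m3 = np.mean((resid - resid.mean()) ** 3)
sigma_u = (-m3 / (np.sqrt(2 / np.pi) * (4 / np.pi - 1))) ** (1 / 3)
sigma_v2 = m2 - (1 - 2 / np.pi) * sigma_u ** 2
b0_corrected = b2sls[0] + sigma_u * np.sqrt(2 / np.pi)   # add back E[u]
```

If the residual skew came out positive ("wrong skewness"), the cube root above would fail, which is exactly the identification failure noted in the text.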
5.1 A likelihood framework
Likelihood-based alternatives allow for explicit modelling and estimation of the dependence structure that underlies endogeneity. This has recently been studied by Kutlu , Karakaplan and Kutlu , Tran and Tsionas [87,88], and Amsler et al. . Our discussion here follows , as their derivation of the likelihood relies on a simple conditioning argument, as opposed to earlier work relying on the Cholesky decomposition or alternative approaches. While all approaches lead to a likelihood function, the conditioning idea of Amsler et al.  is simpler and more intuitive.
Consider the stochastic frontier system:
where y is output, x contains the inputs (some of which may be endogenous), z is the vector of instruments, and the reduced-form error w (different from the w in Sections 3.1–3.2) is uncorrelated with z; endogeneity of x arises through w. Here simultaneity bias (and the resulting inconsistency) exists because w is correlated with either v, u, or both.
We start with the case of dependence between w and v, while u is independent of (v, w). Assume that, conditional on z, the vector (v, w) is multivariate Normal, where
To derive the likelihood function,  condition on the instruments z. Doing this factorizes the joint density of (y, x) given z into a conditional part and a marginal part. With the density in this form, the log-likelihood follows suit: it is the sum of two components, one corresponding to the conditional density of y given (x, z) and one to the density of x given z. These two components can be written as
where the remaining quantities are defined from the conditional distribution above. The subtraction of the conditional-mean term in the first component is an endogeneity correction, while the second component is nothing more than the standard likelihood function of a multivariate Normal regression model (as in (5.2)). Estimates of all the model parameters can be obtained by maximizing the total log-likelihood.
While direct estimation of the likelihood function is possible, a two-step approach is also available . However, as pointed out by both Kutlu  and Amsler et al. , this two-step approach will have incorrect standard errors. Even though the two-step approach might be computationally simpler, it is, in general, different from full optimization of the likelihood function of Amsler et al. , because the two-step approach ignores the information about common parameters that is shared across the two components of the likelihood. In general, full optimization of the likelihood function is recommended, as the standard errors (obtained in the usual manner from the inverse of the Fisher information matrix) are then valid.
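To make the conditioning idea concrete, the following sketch implements the two-step variant just described for a single endogenous regressor: step one estimates the reduced form by OLS, and step two maximizes the conditional (endogeneity-corrected) Normal Half Normal likelihood given the reduced-form residuals. The simulated design and all parameter values are our assumptions, and, as noted above, the standard errors from such a two-step procedure would need adjustment.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000

# Simulated system (all parameter values are illustrative assumptions):
z = rng.normal(size=n)                        # instrument
w = rng.normal(size=n)                        # reduced-form error
v = 0.5 * w + rng.normal(scale=0.5, size=n)   # noise, correlated with w
x = 1.0 + 1.0 * z + w                         # endogenous input
u = np.abs(rng.normal(scale=0.8, size=n))     # Half Normal inefficiency
y = 1.0 + 0.5 * x + v - u

# Step 1: OLS on the reduced form gives delta and the residuals w_hat.
Z = np.column_stack([np.ones(n), z])
delta = np.linalg.lstsq(Z, x, rcond=None)[0]
w_hat = x - Z @ delta

# Step 2: maximize the conditional likelihood. Conditional on w,
# v - eta*w ~ N(0, tau^2), so y - b0 - b1*x - eta*w follows the usual
# Normal Half Normal convolution with lambda = sigma_u / tau.
def neg_loglik(theta):
    b0, b1, eta, ltau, lsu = theta
    tau, su = np.exp(ltau), np.exp(lsu)       # log-parametrized for positivity
    eps = y - b0 - b1 * x - eta * w_hat       # endogeneity-corrected residual
    s = np.sqrt(tau**2 + su**2)
    lam = su / tau
    ll = np.log(2) - np.log(s) + norm.logpdf(eps / s) + norm.logcdf(-lam * eps / s)
    return -ll.sum()

res = minimize(neg_loglik, x0=[0.0, 0.5, 0.0, 0.0, 0.0], method="BFGS")
b0, b1, eta, ltau, lsu = res.x
```

The subtraction of eta*w_hat plays the role of the endogeneity correction discussed above; joint maximization over all parameters, including the reduced-form ones, would correspond to the full likelihood.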
5.2 A GMM framework
An insightful avenue to model dependence due to endogeneity in the SFM that differs from the traditional corrected methods or maximum likelihood stems from the GMM framework as proposed by Amsler et al. , who used the insights of Hansen et al. . Similar to our discussion on the use of GMM in panel estimation, the idea is to use the first-order conditions for maximization of the likelihood function under exogeneity as a GMM problem:
Note that these expectations are taken over the joint distribution of the data and are solved for the parameters of the SFM.
The key here is that these first-order conditions (one for each variance parameter and a vector for the frontier coefficients) are valid under exogeneity, and this implies that the MLE is equivalent to the GMM estimator. Under endogeneity, however, this relationship does not hold directly. But the seminal idea of Amsler et al.  is that the first-order conditions (5.3) and (5.4) are based on the distributional assumptions on the error components, not on their relationship with the regressors. Thus, these moment conditions are valid whether or not the regressors contain endogenous components. The only moment condition that needs to be adjusted is (5.5). In this case, the first-order condition needs to be taken with respect to the exogenous instruments, not the regressors. Doing so results in the following amended first-order condition:
where the components are identical to those in (5.5). It is important to acknowledge that this moment condition is valid only when the instruments and the errors are fully independent. This is a more stringent requirement than the zero correlation of the typical regression setup. As with the C2SLS approach, the source of endogeneity does not need to be specified (through dependence with noise and/or inefficiency).
5.3 An economic model of dependence
From an economic theory perspective, there are grounds to model dependence between technical and allocative inefficiency, and not only the statistical dependence discussed so far. A system similar to (5.1)–(5.2) arises as a result of augmenting the SFM with the first-order conditions of cost minimization (, Chapter 8):
for input prices , the first-order conditions in this case are
where the gradient terms are the partial derivatives of the production function with respect to each input. These first-order conditions are exact, which rarely holds in practice; rather, a stochastic term is added, designed to capture allocative inefficiency. That is, our empirical first-order conditions include an error for each input j = 2, …, J, which captures allocative inefficiency of the jth input relative to input one (the choice of base input is without loss of generality).
The idea behind allocative inefficiency is that firms could be fully technically efficient and still have room for improvement due to over- or under-use of some inputs relative to others, given the price ratio. On the other hand, firms can be technically inefficient because of allocative inefficiency and vice versa, so independence between technical and allocative inefficiency is hard to justify. Additionally, if firms are cost minimizers and one estimates a production function, the inputs will be endogenous as they are choice variables of the firm. In this case, input prices can serve as instruments.
Combining the SFM, under the Cobb–Douglas production function, with the information in the conditions in (5.8) with allocative inefficiency built in, results in the following system:
where is the log of input of firm , is the log of input price, is the coefficient on input in (5.9), and are the allocative inefficiencies for inputs with respect to input one. See Schmidt and Lovell [74,75] for details.
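For concreteness, the Cobb–Douglas version of this system can be sketched as follows. The notation is our own reconstruction (the original displays are not reproduced here), with $\omega_{ji}$ denoting allocative inefficiency of input $j$ relative to input one:

```latex
% Cobb-Douglas frontier plus cost-minimization first-order conditions
% (y_i output, x_{ji} inputs, p_{ji} input prices; notation assumed)
\begin{align}
\ln y_i &= \beta_0 + \sum_{j=1}^{J} \beta_j \ln x_{ji} + v_i - u_i,\\
\ln x_{1i} - \ln x_{ji} &= \ln\!\left(\frac{\beta_1\, p_{ji}}{\beta_j\, p_{1i}}\right)
  + \omega_{ji}, \qquad j = 2, \ldots, J,
\end{align}
```

so that $\omega_{ji} = 0$ recovers the exact cost-minimizing input mix, while $\omega_{ji} \neq 0$ shifts the observed input ratio away from it.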
5.4 A copula-based approach
Amsler et al.  used copulas to obtain a joint distribution for , and , whereas Amsler et al.  developed a new copula family for and with properties that reflect the nature of allocative (symmetric) and technical (one-sided) inefficiencies. Here we provide the derivation of a copula-based likelihood for the most general case that allows us to model dependence between all the components of .
We keep the Half Normal marginal for u and Normal marginals for the other error components as before, and assume a copula density that joins them. Amsler et al.  used the Gaussian copula, under which the dependence structure is that of a multivariate Normal, but this is largely done for convenience. This gives the joint density of the error components:
However, we need the joint density of and in order to form a sample log-likelihood. This density can be obtained by integrating out of as follows:
Again, we can use simulation techniques to evaluate this density. Specifically, given draws of , the direct simulator can be written as
and this leads to MSLE using the log-likelihood
The MSLE will produce estimates of all the parameters of the model, that is, the frontier coefficients, the variances of the error components, and whatever copula parameters appear in the copula density. This permits modelling and testing the validity of independence assumptions between all error terms in the system, including the assumption of exogeneity.
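A sketch of the direct simulator under our own illustrative parameter values: the joint density of the composed error and the reduced-form errors is approximated by averaging the copula-weighted product of marginal densities over Half Normal draws of u. The Gaussian copula, the correlation matrix, and all scale parameters below are assumptions for illustration; as a sanity check, the simulated density is integrated numerically over a grid, which should return approximately 1.

```python
import numpy as np
from scipy.stats import norm, halfnorm

sigma_u, sigma_v, sigma_w = 1.0, 0.5, 1.0      # illustrative scales
R = np.array([[1.0, 0.3, 0.2],                 # assumed copula correlation matrix,
              [0.3, 1.0, 0.4],                 # ordered as (u, v, w)
              [0.2, 0.4, 1.0]])
Rinv = np.linalg.inv(R)
logdetR = np.log(np.linalg.det(R))

def gauss_copula_density(P):
    """Gaussian copula density evaluated at rows of uniforms P (shape S x 3)."""
    q = norm.ppf(np.clip(P, 1e-12, 1 - 1e-12))
    quad = np.einsum('ij,jk,ik->i', q, Rinv - np.eye(3), q)
    return np.exp(-0.5 * (logdetR + quad))

S = 1000
u_draws = halfnorm.rvs(scale=sigma_u, size=S, random_state=123)

def f_eps_w(eps, w):
    """Direct simulator: average, over Half Normal draws of u, the copula-weighted
    product of marginal densities, with v = eps + u implied by eps = v - u."""
    v = eps + u_draws
    P = np.column_stack([halfnorm.cdf(u_draws, scale=sigma_u),
                         norm.cdf(v, scale=sigma_v),
                         np.full(S, norm.cdf(w, scale=sigma_w))])
    vals = (norm.pdf(v, scale=sigma_v) * norm.pdf(w, scale=sigma_w)
            * gauss_copula_density(P))
    return vals.mean()

# Sanity check: the simulated joint density should integrate to roughly one.
eps_grid = np.linspace(-5.5, 2.5, 81)
w_grid = np.linspace(-4.5, 4.5, 61)
de, dw = eps_grid[1] - eps_grid[0], w_grid[1] - w_grid[0]
total = sum(f_eps_w(e, w) for e in eps_grid for w in w_grid) * de * dw
```

In an MSLE application, the log of this simulated density would be summed over observations and maximized over the frontier, scale, and copula parameters.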
5.5 Dependence on determinants of inefficiency
To conclude this section, we consider the extension to a setting where inefficiency depends on covariates, and some of these determinants of inefficiency may be endogenous [7,55]. These models can be estimated using traditional instrumental variable methods; however, because the determinants of inefficiency enter the model nonlinearly, nonlinear methods are required.
Amsler et al.  considered the model
where is the baseline inefficiency and has the property that the scale of its distribution (relative to the distribution of ) changes depending on the determinants (the so-called scaling property). The covariates and are partitioned as
where and are exogenous and and are endogenous. The set of instruments used to combat endogeneity are defined as
where are the traditional outside instruments. Identification of all the parameters requires that the dimension of be at least as large as the dimension of plus the dimension of (the rank condition).
In the model of Amsler et al. , endogeneity arises through dependence between a variable in the model ( and/or ) and noise, . That is, both and are assumed to be independent of baseline inefficiency . Given that is not constant, the COLS approach to deal with endogeneity proposed by Amsler et al.  cannot be used here. To develop an appropriate estimator, add and subtract the mean of inefficiency to produce a composed error term that has mean 0,
Proper estimation through instrumental variables requires that the following moment condition holds
The nonlinearity of these moment conditions necessitates the use of nonlinear two-stage least squares (NL2SLS) .
Latruffe et al.  have a similar setup to Amsler et al. , using the model in (5.13), but develop a four-step estimator for the parameters; additionally, only is treated as endogenous. Latruffe et al.’s  approach is based on  using the construction of efficient moment conditions. The vector of instruments proposed in  is defined as
where captures the linear projection of on the external instruments . The four-stage estimator is defined as
Step 1: Regress on to estimate . Denote the OLS estimator of as .
Step 2: Use NLS to estimate the SFM in (5.13). Denote the NLS estimates of as . Use the NLS estimate of and the OLS estimate of in Step 1 to construct the instruments .
Step 3: Using the estimated instrument vector , calculate the NL2SLS estimator of as . Use the NL2SLS estimate of and the OLS estimate of in Step 1 to construct the instruments .
Step 4: Using the estimated instrument vector , calculate the NL2SLS estimator of as .
This multi-step estimator is necessary in the context of efficient moments because the actual set of instruments is not used directly; rather, an estimated instrument vector is used, and this vector requires estimates of two unknown parameter vectors. The first two steps of the algorithm are designed to construct estimates of these parameter vectors. The third step is then designed to construct a consistent estimator, which Step 2 does not deliver given that endogeneity is ignored there (note that NLS is used as opposed to NL2SLS). The iteration from Step 2 to Step 3 does produce a consistent estimator, and as such, Step 4 produces consistent estimators of the remaining parameters. While Latruffe et al.  proposed a set of efficient moment conditions to handle endogeneity, the model of Amsler et al.  is more general because it can handle endogeneity in the determinants of inefficiency as well. Finally, the efficient-moments construction is attractive since it allows the researcher to dispense with distributional assumptions on the error components.
6 Estimation of individual inefficiency using dependence information
Once the parameters of the SFM have been estimated, estimates of firm-level productivity and efficiency can be recovered. Observation-specific estimates of inefficiency are one of the main benefits of the SFM relative to neoclassical models of production. Firms can be ranked according to estimated efficiency; the identity of under-performing firms, as well as those deemed best practice, can also be gleaned from the estimated SFM. All of this information is useful in helping to design more efficient public policy or subsidy programs aimed at improving the market, for example, insulating consumers from the poor performance of heavily inefficient firms.
As a concrete illustration, consider firms operating electricity distribution networks, which typically possess a natural local monopoly given that the construction of competing networks over the same terrain is prohibitively expensive. It is not uncommon for national governments to establish regulatory agencies that monitor the provision of electricity to ensure that abuse of the inherent monopoly power does not occur. Regulators face the task of determining an acceptable price for the provision of electricity while having to balance the heterogeneity that exists across firms (in terms of the size of the firm and the length of the network). Firms which are inefficient may charge too high a price to recoup a profit, but at the expense of operating below capacity. However, given production and distribution shocks, not all departures from the frontier represent inefficiency. Thus, precise measures designed to account for noise are required to parse the information about inefficiency contained in the composed error.
Alternatively, further investigation could reveal what it is that makes these establishments attain such high levels of performance. This could then be used to identify appropriate government policy implications and responses, or to identify processes and/or management practices that should be spread (or encouraged) across the less efficient, but otherwise similar, units. This is the essence of the determinants-of-inefficiency approach discussed in the previous section. More directly, efficiency rankings are used in regulated industries so that regulators can set tougher future cost-reduction targets for the more inefficient companies, in order to ensure that customers do not pay for the inefficiency of firms.
The only direct parameter estimate coming from the Normal Half Normal SFM is the estimate of σ_u. This provides context regarding the shape of the Half Normal distribution on u and the industry average efficiency, but not the absolute level of inefficiency for a given firm. If we are only concerned with the average level of technical efficiency for the population, then this is all the information that is needed. Yet, if we want to know about a specific firm, then something else is required. The main approach to estimating firm-level inefficiency is the conditional mean estimator, commonly known as the JLMS estimator after Jondrow, Lovell, Materov, and Schmidt. Their idea was to calculate the expected value of u conditional on the realization of the composed error of the model, i.e., E[u|ε]. This conditional mean of u given ε gives a point prediction of u. The composed error contains individual-specific information, and the conditional expectation is one measure of firm-specific inefficiency.
JLMS  show that, for the Normal Half Normal specification of the SFM, the conditional density of u given ε is that of a Normal distribution, truncated from below at zero, with mean and standard deviation given by
Given results on the mean of a Truncated Normal density, it follows that
The individual estimates are then obtained by replacing the true parameters in (6.3) with MLE (or MSLE or GMM) estimates from the SFM.
Another measure of interest is the Afriat-type level of technical efficiency, defined as exp(−u). This is useful in cases where output is measured in logarithmic form. Furthermore, technical efficiency is bounded between 0 and 1, making it somewhat easier to interpret than a raw inefficiency score. Since u is not directly observable, the idea of JLMS  can be deployed here, and E[exp(−u)|ε] can be calculated [12,56]. For the Normal Half Normal model, we have
where the mean and standard deviation of the Truncated Normal were defined in (6.1) and (6.2), respectively. Technical efficiency estimates are obtained by replacing the true parameters in (6.4) with MLE estimates from the SFM. When ranking efficiency scores, one should use estimates of 1 − E[u|ε], which is the first-order approximation of (6.4). Similar expressions for the JLMS  and Battese and Coelli  efficiency scores can be derived under the assumption that u is Exponential (, p. 82), Truncated Normal (, p. 86), or Gamma (, p. 89); see also .
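The JLMS and Battese–Coelli formulas are straightforward to compute. The following sketch evaluates them on simulated data for the Normal Half Normal model, treating the frontier parameters as known (in practice one plugs in MLE estimates); all numerical values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

sigma_u, sigma_v = 1.0, 0.5                   # illustrative; use MLE estimates in practice
rng = np.random.default_rng(7)
n = 10_000
u = np.abs(rng.normal(scale=sigma_u, size=n)) # true (unobserved) inefficiency
eps = rng.normal(scale=sigma_v, size=n) - u   # composed error, frontier assumed known

s2 = sigma_u**2 + sigma_v**2
mu_star = -eps * sigma_u**2 / s2              # mean of the Truncated Normal u | eps
sig_star = sigma_u * sigma_v / np.sqrt(s2)    # its standard deviation

r = mu_star / sig_star
u_hat = mu_star + sig_star * norm.pdf(r) / norm.cdf(r)     # JLMS: E[u | eps]
te_hat = (np.exp(-mu_star + 0.5 * sig_star**2)             # Battese-Coelli:
          * norm.cdf(r - sig_star) / norm.cdf(r))          # E[exp(-u) | eps]
# 1 - u_hat (or exp(-u_hat)) is the first-order approximation of te_hat.
```

Note that the average of the JLMS scores recovers the industry-level mean inefficiency σ_u√(2/π), while the observation-level scores deliver the firm rankings discussed above.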
An interesting and important finding from  and  is that when we allow for dependence of the kinds described in Sections 4 and 5, we can potentially improve estimation of inefficiency through the JLMS estimator. We focus on the case of endogeneity (Section 5), but the case of dependence over time in panels (Section 4) is similar. The traditional predictor is the conditional mean of u given the composed error ε. However, more information is available when dependence is allowed, namely the reduced-form errors w. This calls for a modified JLMS estimator that conditions on both ε and w. Note that even though u is assumed independent of w, because w is correlated with v there is information in w that can be used to help predict u even after conditioning on ε.
Amsler et al.  showed that is independent of :
and that the distribution of u conditional on (ε, w) is again Truncated Normal, identical in form to the original JLMS result except that the mean and variance parameters are modified to reflect the conditioning on w. The modified JLMS estimator in the presence of endogeneity is then the mean of this conditional distribution. Note that it is a better predictor than the traditional one because it conditions on more information. The improvement in prediction follows from the textbook law of total variance: for any random vector partitioned into sub-vectors Y1 and Y2, Var(Y1) = E[Var(Y1|Y2)] + Var(E[Y1|Y2]).
In this case, by conditioning on both ε and w, the conditioning set is larger than when conditioning on ε alone, and so the unexplained portion of u must be smaller than under the traditional estimator. There is thus less variation left around the new predictor, which is exactly what one wants. A similar result is obtained by  in a panel setting, where the new estimator dominates the traditional estimator due to dependence over time. While it is not obvious at first glance, one benefit of allowing for a richer dependence structure in the SFM is that researchers may be able to predict firm-level inefficiency more accurately, though this comes at the expense of a more complex model. The improvement in prediction may also be accompanied by narrower prediction intervals; however, this is not known, as Amsler et al.  did not study prediction intervals.
A prediction interval for u was first derived by Taube  and also appeared in , , and  (see a discussion of this in ). The prediction interval is based on the distribution of u conditional on ε. The lower and upper bounds of a (1 − α)100% prediction interval are
where the mean and standard deviation are defined in (6.1) and (6.2), respectively; replacing them with their MLE estimates gives estimated prediction intervals for u. Using the above result that the distribution of u conditional on (ε, w) is also Truncated Normal, with appropriately modified mean and variance, one can easily obtain analogous prediction intervals based on the larger conditioning set. These new intervals will potentially be narrower.
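A hedged sketch of these prediction intervals, using the quantiles of the Truncated Normal distribution of u given ε (true parameters are plugged in for illustration; in practice MLE estimates are used, and the design below is our assumption):

```python
import numpy as np
from scipy.stats import norm

sigma_u, sigma_v, a = 1.0, 0.5, 0.05          # illustrative parameters, alpha = 0.05
rng = np.random.default_rng(11)
n = 10_000
u = np.abs(rng.normal(scale=sigma_u, size=n))
eps = rng.normal(scale=sigma_v, size=n) - u   # composed error

s2 = sigma_u**2 + sigma_v**2
mu_star = -eps * sigma_u**2 / s2              # conditional mean of u | eps
sig_star = sigma_u * sigma_v / np.sqrt(s2)    # conditional standard deviation

# Quantiles of N(mu_star, sig_star^2) truncated below at zero give the bounds.
p = norm.cdf(mu_star / sig_star)              # normalizing probability P(u > 0 | eps)
lower = mu_star + sig_star * norm.ppf(1 - (1 - a / 2) * p)
upper = mu_star + sig_star * norm.ppf(1 - (a / 2) * p)

coverage = np.mean((u >= lower) & (u <= upper))   # should be close to 1 - a
```

The same quantile formulas, applied to the modified conditional mean and variance from the endogeneity case, yield the potentially narrower intervals mentioned above.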
In this article, we surveyed the workhorse SFM and various recent extensions that permit a wide range of dependence modelling within SFM. We discussed dependencies that arise in panels and that underpin endogeneity in production systems. Copulas play a key role in these settings because they naturally permit the construction of a likelihood, often based on simulated draws, while preserving the desired Half Normal distribution of technical inefficiency.
While this is not a survey of SFA applications, it is worth pointing out that SFA has become a popular tool to isolate inefficiency in the behavior of economic agents, e.g., banks and non-financial firms. As just a couple of examples, Koetter et al.  showed that ignoring inefficiency, i.e., assuming all banks in the US economy are on the frontier, leads to substantial downward bias in the estimates of the banks’ market power, as measured by the Lerner index of price mark-ups; Henry et al.  applied SFA to obtain a correct estimate of TFP at the country level in a panel of 57 developing economies over the 1970–1998 period. The list of applications is long and growing.
This survey’s goal was to provide a comprehensive overview of the state of the art in methods of dependence modeling (whether directly with noise, through endogeneity, through sample selection, or across time), but it is clear that many important issues remain and this is an active area of research for the field. We remain excited about the potential developments in this area and the insights they can bring to applications. At present, there is little understanding of the direction of impact of unmodelled, or misspecified, dependence on efficiency scores.
Helpful comments from two anonymous referees, the Editor, as well as from Robert James, are gratefully acknowledged. Mamonov acknowledges the financial support by the MGIMO University, research grant number 1921-01-08. Prokhorov’s research for this article was supported by a grant from the Russian Science Foundation (Project No. 20-78-10113).
Conflict of interest: Artem Prokhorov is a member of the Editorial Advisory Board for this journal but had no involvement in the review of this manuscript or the final editorial decision.
 Aigner, D., & Chu, S. (1968). On estimating the industry production function. American Economic Review, 58, 826–839. Search in Google Scholar
 Aigner, D. J., Lovell, C. A. K., & Schmidt, P. (1977). Formulation and estimation of stochastic frontier production functions. Journal of Econometrics, 6(1), 21–37. 10.1016/0304-4076(77)90052-5Search in Google Scholar
 Amsler, C., Prokhorov, A., & Schmidt, P. (2014). Using copulas to model time dependence in stochastic frontier models. Econometric Reviews, 33(5–6), 497–522. 10.1080/07474938.2013.825126Search in Google Scholar
 Amsler, C., Prokhorov, A., & Schmidt, P. (2017). Endogeneity environmental variables in stochastic frontier models. Journal of Econometrics, 199, 131–140. 10.1016/j.jeconom.2017.05.005Search in Google Scholar
 Amsler, C., Prokhorov, A., & Schmidt, P. (2021). A new family of copulas, with application to estimation of a production frontier system. Journal of Productivity Analysis, 55, 1–4. 10.1007/s11123-020-00590-wSearch in Google Scholar
 Amsler, C., & Schmidt, P. (2021). A survey of the use of copulas in stochastic frontier models. In C. F. Parmeter & R. C. Sickles (Eds.), Advances in efficiency and productivity analysis (pp. 125–138). Cham, Switzerland: Springer Nature. 10.1007/978-3-030-47106-4_6Search in Google Scholar
 Azzalini, A. (1985). A class of distributions which includes the normal ones. Scandinavian Journal of Statistics, 12(2), 171–178. Search in Google Scholar
 Badunenko, O., & Kumbhakar, S. C. (2017). Economies of scale, technical change and persistent and time-varying cost efficiency in Indian banking: Do ownership, regulation and heterogeneity matter? European Journal of Operational Research, 260, 789–803. 10.1016/j.ejor.2017.01.025Search in Google Scholar
 Battese, G. E., & Coelli, T. J. (1988). Prediction of firm-level technical efficiencies with a generalized frontier production function and panel data. Journal of Econometrics, 38, 387–399. 10.1016/0304-4076(88)90053-XSearch in Google Scholar
 Battese, G. E., & Coelli, T. J. (1992). Frontier production functions, technical efficiency and panel data: With application to paddy farmers in India. Journal of Productivity Analysis, 3, 153–169. 10.1007/978-94-017-1923-0_10Search in Google Scholar
 Battese, G. E., & Coelli, T. J. (1995). A model for technical inefficiency effects in a stochastic frontier production function for panel data. Empirical Economics, 20(1), 325–332. 10.1007/BF01205442Search in Google Scholar
 Battese, G. E., & Corra, G. S. (1977). Estimation of a production frontier model: With application to the pastoral zone off Eastern Australia. Australian Journal of Agricultural Economics, 21(3), 169–179. 10.1111/j.1467-8489.1977.tb00204.xSearch in Google Scholar
 Benabou, R., & Tirole, J. (2016). Mindful economics: The production, consumption, and value of beliefs. Journal of Economic Perspectives, 30(3), 141–164. 10.1257/jep.30.3.141Search in Google Scholar
 Bera, A. K., & Sharma, S. C. (1999). Estimating production uncertainty in stochastic frontier production function models. Journal of Productivity Analysis, 12(2), 187–210. 10.1023/A:1007828521773Search in Google Scholar
 Bloom, N., Lemos, R., Sadun, R., Scur, D., & Van Reenen, J. (2016). International data on measuring management practices. American Economic Review, 106(5), 152–156. 10.1257/aer.p20161058Search in Google Scholar
 Bonanno, G., De Giovanni, D., & Domma, F. (2015). The “wrong skewness” problem: A re-specification of stochastic frontiers. Journal of Productivity Analysis, 47(1), 49–64. 10.1007/s11123-017-0492-8Search in Google Scholar
 Bravo-Ureta, B. E., & Rieger, L. (1991). Dairy farm efficiency measurement using stochastic frontiers and neoclassical duality. American Journal of Agricultural Economics, 73(2), 421–428. 10.2307/1242726Search in Google Scholar
 Burns, R. (2004). The simulated maximum likelihood estimation of stochastic frontier models with correlated error components. Sydney, Australia: The University of Sydney. Search in Google Scholar
 Case, B., Ferrari, A., & Zhao, T. (2013). Regulatory reform and productivity change in indian banking. The Review of Economics and Statistics, 95(3), 1066–1077. 10.1162/REST_a_00298Search in Google Scholar
 Chen, Y.-Y., Schmidt, P., & Wang, H.-J. (2014). Consistent estimation of the fixed effects stochastic frontier model. Journal of Econometrics, 181(2), 65–76. 10.1016/j.jeconom.2013.05.009Search in Google Scholar
 Coelli, T. J. (1995). Estimators and hypothesis tests for a stochastic frontier function: A Monte Carlo analysis. Journal of Productivity Analysis, 6(4), 247–268. 10.1007/BF01076978Search in Google Scholar
 Cornwell, C., Schmidt, P., & Sickles, R. C. (1990). Production frontiers with cross-sectional and time-series variation in efficiency levels. Journal of Econometrics, 46(2), 185–200. 10.1016/0304-4076(90)90054-WSearch in Google Scholar
 Dugger, R. (1974). An application of bounded nonparametric estimating functions to the analysis of bank cost and production functions. (Ph.D. thesis). University of North Carolina, Chapel Hill. Search in Google Scholar
 ElMehdi, R., & Hafner, M. (2014). Inference in stochastic frontier analysis with dependent error terms. Mathematics and Computers in Simulation, 102, 104–116. 10.1016/j.matcom.2013.09.008Search in Google Scholar
 Greene, W. (2004). Distinguishing between heterogeneity and inefficiency: Stochastic frontier analysis of the World Health Organization’s panel data on national health care systems. Health Economics, 13(9), 959–980. 10.1002/hec.938Search in Google Scholar PubMed
 Greene, W. H. (2005b). Reconsidering heterogeneity in panel data estimators of the stochastic frontier model. Journal of Econometrics, 126(2), 269–303. 10.1016/j.jeconom.2004.05.003Search in Google Scholar
 Hansen, C., McDonald, J. B., & Newey, W. K. (2010). Instrumental variables estimation with flexible distributions. Journal of Business and Economic Statistics, 28, 13–25. 10.1920/wp.cem.2007.2107Search in Google Scholar
 Hattori, T. (2002). Relative performance of U.S. and Japanese electricity distribution: An application of stochastic frontier analysis. Journal of Productivity Analysis, 18(3), 269–284. 10.1023/A:1020695709797Search in Google Scholar
 Henry, M., Kneller, R., & Milner, C. (2009). Trade, technology transfer and national efficiency in developing countries. European Economic Review, 53(2), 237–254. 10.1016/j.euroecorev.2008.05.001Search in Google Scholar
 Horrace, W. C., & Schmidt, P. (1996). Confidence statements for efficiency estimates from stochastic frontier models. Journal of Productivity Analysis, 7, 257–282. 10.1007/BF00157044Search in Google Scholar
 Jondrow, J., Lovell, C. A. K., Materov, I. S., & Schmidt, P. (1982). On the estimation of technical efficiency in the stochastic frontier production function model. Journal of Econometrics, 19(2/3), 233–238. 10.1016/0304-4076(82)90004-5Search in Google Scholar
 Kantorovich, L. (1939). Mathematical methods of organizing and planning production. Leningrad: Publishing House of Leningrad State University. Search in Google Scholar
 Karakaplan, M. U., & Kutlu, L. (2013). Handling endogeneity in stochastic frontier analysis. Unpublished manuscript. Search in Google Scholar
 Knittel, C. R. (2002). Alternative regulatory methods and firm efficiency: Stochastic frontier evidence form the U.S. electricity industry. The Review of Economics and Statistics, 84(3), 530–540. 10.1162/003465302320259529Search in Google Scholar
 Koetter, M., Kolari, J. W., & Spierdijk, L. (2012). Enjoying the Quiet Life under Deregulation? Evidence from Adjusted Lerner Indices for U.S. Banks. The Review of Economics and Statistics, 94(2), 462–480. 10.1162/REST_a_00155Search in Google Scholar
 Kumbhakar, S. C., & Heshmati, A. (1995). Efficiency measurement in Swedish dairy farms: An application of rotating panel data, 1976–88. American Journal of Agricultural Economics, 77(3), 660–674. 10.2307/1243233Search in Google Scholar
 Kumbhakar, S. C., & Parmeter, C. F. (2019). Implementing generalized panel data stochastic frontier estimators. In E. G. Tsions (Ed.), Panel data econometrics: Theory and empirical applications, Chapter 9. London, United Kingdom: Elsevier. 10.1016/B978-0-12-814367-4.00009-5Search in Google Scholar
 Kumbhakar, S. C., Tsionas, E. G., & Sipiläinen, T. (2009). Joint estimation of technology choice and technical efficiency: An application to organic and conventional dairy farming. Journal of Productivity Analysis, 31(2), 151–161. 10.1007/s11123-008-0081-ySearch in Google Scholar
 Kumbhakar, S. C., Wang, H.-J., & Horncastle, A. (2015). A practitioners guide to stochastic Frontier analysis using stata. Cambridge, United Kingdom: Cambridge University Press. 10.1017/CBO9781139342070Search in Google Scholar
 Kuosmanen, T. (2012). Stochastic semi-nonparametric frontier estimation of electricity distribution networks: Application of the StoNED method in the Finnish regulatory model. Energy Economics, 34, 2189–2199. 10.1016/j.eneco.2012.03.005Search in Google Scholar
 Latruffe, L., Bravo-Ureta, B. E., Carpentier, A., Desjeux, Y., & Moreira, V. H. (2017). Subsidies and technical efficiency in agriculture: Evidence from European dairy farms. American Journal of Agricultural Economics, 99, 783–799. 10.1093/ajae/aaw077Search in Google Scholar
 Lee, L.-F., & Tyler, W. G. (1978). The stochastic frontier production function and average efficiency: An empirical analysis. Journal of Econometrics, 7, 385–389. 10.1016/0304-4076(78)90061-1Search in Google Scholar
 Lee, Y., & Schmidt, P. (1993). A production frontier model with flexible temporal variation in technical efficiency. In K. L. H. Fried & S. Schmidt (Eds.), The measurement of productive efficiency. Oxford, United Kingdom: Oxford University Press. Search in Google Scholar
 Lien, G., Kumbhakar, S.C., & Hardaker, J.B. (2017). Accounting for risk in productivity analysis: an application to Norwegian dairy farming. Journal of Productivity Analysis, 47(3), 247–257.10.1007/s11123-016-0482-2Search in Google Scholar
 Liu, J., Sriboonchitta, J., Wiboonpongse, A., & Denœux, T. (2021). A trivariate Gaussian copula stochastic frontier model with sample selection. International Journal of Approximate Reasoning, 137, 181–198. 10.1016/j.ijar.2021.06.016Search in Google Scholar
 McFadden, D. (1989). A method of simulated moments for estimation of discrete response models without numerical integration. Econometrica, 57(5), 995–1026. 10.2307/1913621
 Meeusen, W., & van den Broeck, J. (1977a). Efficiency estimation from Cobb-Douglas production functions with composed error. International Economic Review, 18(2), 435–444. 10.2307/2525757
 Meeusen, W., & van den Broeck, J. (1977b). Technical efficiency and dimension of the firm: Some results on the use of frontier production functions. Empirical Economics, 2(2), 109–122. 10.1007/BF01767476
 Mutter, R. L., Greene, W. H., Spector, W., Rosko, M. D., & Mukamel, D. B. (2013). Investigating the impact of endogeneity on inefficiency estimates in the application of stochastic frontier analysis to nursing homes. Journal of Productivity Analysis, 39(1), 101–110. 10.1007/s11123-012-0277-z
 Nelsen, R. (2006). An introduction to copulas (2nd ed.). New York, NY: Springer Science and Business Media.
 Olson, J. A., Schmidt, P., & Waldman, D. A. (1980). A Monte Carlo study of estimators of stochastic frontier production functions. Journal of Econometrics, 13, 67–82. 10.1016/0304-4076(80)90043-3
 Parmeter, C. F., & Zelenyuk, V. (2019). Combining the virtues of stochastic frontier and data envelopment analysis. Operations Research, 67, 1628–1658. 10.1287/opre.2018.1831
 Pitt, M. M., & Lee, L.-F. (1981). The measurement and sources of technical inefficiency in the Indonesian weaving industry. Journal of Development Economics, 9(1), 43–64. 10.1016/0304-3878(81)90004-3
 Prokhorov, A., & Schmidt, P. (2009). Likelihood-based estimation in a panel setting: Robustness, redundancy and validity of copulas. Journal of Econometrics, 153(1), 93–104. 10.1016/j.jeconom.2009.06.002
 Schmidt, P., & Lin, T.-F. (1984). Simple tests of alternative specifications in stochastic frontier models. Journal of Econometrics, 24(3), 349–361. 10.1016/0304-4076(84)90058-7
 Schmidt, P., & Lovell, C. (1979). Estimating technical and allocative inefficiency relative to stochastic production and cost frontiers. Journal of Econometrics, 9(3), 343–366. 10.1016/0304-4076(79)90078-2
 Schmidt, P., & Lovell, C. (1980). Estimating stochastic production and cost frontiers when technical and allocative inefficiency are correlated. Journal of Econometrics, 13(1), 83–100. 10.1016/0304-4076(80)90044-5
 Simar, L., & Wilson, P. W. (2013). Estimation and inference in nonparametric frontier models: Recent developments and perspectives. Foundations and Trends in Econometrics, 5(2), 183–337. 10.1561/0800000020
 Simar, L., & Wilson, P. W. (2015). Statistical approaches for nonparametric frontier models: A guided tour. International Statistical Review, 83(1), 77–110. 10.1111/insr.12056
 Sipiläinen, T., & Oude Lansink, A. (2005). Learning in switching to organic farming. Nordic Association of Agricultural Scientists NJF Report, 1(1), 169–172.
 Sriboonchitta, S., Liu, J., Wiboonpongse, A., & Denœux, T. (2017). A double-copula stochastic frontier model with dependent error components and correction for sample selection. International Journal of Approximate Reasoning, 80, 174–184. 10.1016/j.ijar.2016.08.006
 Stiglitz, J. E., & Greenwald, B. C. (1986). Externalities in economies with imperfect information and incomplete markets. Quarterly Journal of Economics, 101(2), 229–264. 10.2307/1891114
 Taube, R. (1988). Möglichkeiten der Effizienzmessung von öffentlichen Verwaltungen [Possibilities for measuring the efficiency of public administrations]. Berlin: Duncker & Humblot.
 Tran, K., & Tsionas, M. (2015). Endogeneity in stochastic frontier models: Copula approach without external instruments. Economics Letters, 133(C), 85–88. 10.1016/j.econlet.2015.05.026
 Tran, K. C., & Tsionas, E. G. (2013). GMM estimation of stochastic frontier models with endogenous regressors. Economics Letters, 118, 233–236. 10.1016/j.econlet.2012.10.028
 Wang, H.-J., & Ho, C.-W. (2010). Estimating fixed-effect panel stochastic frontier models by model transformation. Journal of Econometrics, 157(2), 286–296. 10.1016/j.jeconom.2009.12.006
 Wei, Z., Conlon, E. M., & Wang, T. (2021). Asymmetric dependence in the stochastic frontier model using skew normal copula. International Journal of Approximate Reasoning, 128, 56–68. 10.1016/j.ijar.2020.10.011
 Wei, Z., & Kim, D. (2018). On multivariate asymmetric dependence using multivariate skew-normal copula-based regression. International Journal of Approximate Reasoning, 92, 376–391. 10.1016/j.ijar.2017.10.016
 Wei, Z., Zhu, X., & Wang, T. (2021). The extended skew-normal-based stochastic frontier model with a solution to “wrong skewness” problem. Statistics, 55, 1387–1406. 10.1080/02331888.2021.2004142
 Weinstein, M. (1964). The sum of values from a normal and a truncated normal distribution (with some additional material, pp. 469–470). Technometrics, 6(4), 104–105. 10.2307/1266751
 Wiboonpongse, A., Liu, J., Sriboonchitta, S., & Denœux, T. (2015). Modeling dependence between error components of the stochastic frontier model using copula: Application to intercrop coffee production in Northern Thailand. International Journal of Approximate Reasoning, 65, 34–44. 10.1016/j.ijar.2015.04.001
© 2022 Mikhail E. Mamonov et al., published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.