Clark, Don. 2011. "Rambus Loses Antitrust Case." Wall Street Journal, November 17.
Fisher, Franklin. 1980. "Multiple Regression in Legal Proceedings." Columbia Law Review 702–736.
Freed, Dan. 2012. "Bank of America, Citigroup Face Billions in Losses in Antitrust Case (Update 1)." The Street, January 12.
Higgins, Richard, and Paul Johnson. 2003. "The Mean Effect of Structural Change in the Dependent Variable is Accurately Measured by the Intercept Change Alone." Economics Letters 80: 255–259.
Hovenkamp, Herbert. 2005. Federal Antitrust Policy: The Law of Competition and Its Practice. St. Paul, MN: Thomson-West.
Imbens, Guido W. 2004. "Nonparametric Estimation of Average Treatment Effects Under Exogeneity: A Review." Review of Economics and Statistics 86 (1): 4–29.
Kmenta, Jan. 2000. Elements of Econometrics. 2nd ed. Ann Arbor, MI: The University of Michigan Press.
Lande, Robert H., and Joshua P. Davis. 2008. "Benefits From Private Antitrust Enforcement: An Analysis of Forty Cases." University of San Francisco Law Review 42: 817–918.
Marshall, Kevin S. 2008. The Economics of Antitrust Injury and Firm-Specific Damages. Tucson: Lawyers & Judges.
Rubin, Donald B. 1974. "Estimating Causal Effects of Treatments in Randomized and Nonrandomized Studies." Journal of Educational Psychology 66 (5): 688–701.
Rubinfeld, Daniel L. 1985. "Econometrics in the Courtroom." Columbia Law Review 1048–1097.
Rubinfeld, Daniel L. 2008. "Quantitative Methods in Antitrust." In Issues in Competition Policy, ABA Antitrust Section, edited by Dale Collins, chapter 30. American Bar Association.
Rubinfeld, Daniel L., and Peter O. Steiner. 1983. "Quantitative Methods in Antitrust Litigation." Law and Contemporary Problems 26: 69–141.
Salkever, David. 1976. "The Use of Dummy Variables to Compute Predictions, Prediction Errors, and Confidence Intervals." Journal of Econometrics 4 (4): 393–397.
Schoenberger, Robert. 2009. "Eaton Might Owe Damages in Billions in Antitrust Case." The Plain Dealer, October 20.
Touryalai, Halah. 2012. "Senator Dick Durbin Hates The $7 Billion Visa, Mastercard Settlement." Forbes, August 7.
White, Halbert, Robert Marshall, and Pauline Kennedy. 2006. "The Measurement of Economic Damages in Antitrust Civil Litigation." ABA Antitrust Section, Economic Committee Newsletter, Spring 6 (1): 17–22.
Wooldridge, Jeffrey M. 2002. Econometric Analysis of Cross Section and Panel Data. Cambridge, MA: MIT Press.
About the article
Published Online: 2013-10-26
Published in Print: 2014-01-01
Alternative approaches involve variations on the yardstick approach, such as a comparison of rates of return and/or profit margins across industries.
For a broad discussion of these alternative measures, see Hovenkamp [2005, section 17.5(a)].
See, for example, Salkever (1976), Fisher (1980), Rubinfeld and Steiner (1983), Rubinfeld (1985, 2008), and Higgins and Johnson (2003). See especially White, Marshall, and Kennedy (2006); those authors strongly prefer the forecasting approach and are highly critical of the dummy-variable approach.
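The two approaches compared in this literature can be illustrated with a minimal numerical sketch (the data-generating process, coefficient values, and variable names below are all hypothetical). The dummy-variable approach runs one regression over the full sample and reads the overcharge off the conspiracy indicator's coefficient; the forecasting approach fits the pre-conspiracy period only, forecasts but-for prices in the impact period, and averages the actual-minus-forecast gap:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
conspiracy = np.arange(T) >= 150            # last 50 periods are collusive
x = rng.normal(10.0, 2.0, T)                # cost covariate (hypothetical)
price = 5.0 + 1.5 * x + 3.0 * conspiracy + rng.normal(0.0, 1.0, T)

# Dummy-variable approach: overcharge = coefficient on the conspiracy dummy.
X_dummy = np.column_stack([np.ones(T), x, conspiracy.astype(float)])
beta_dummy, *_ = np.linalg.lstsq(X_dummy, price, rcond=None)
overcharge_dummy = beta_dummy[2]

# Forecasting approach: fit the clean period, forecast but-for prices in the
# impact period, and average the actual-minus-forecast gap.
pre = ~conspiracy
X_pre = np.column_stack([np.ones(pre.sum()), x[pre]])
beta_pre, *_ = np.linalg.lstsq(X_pre, price[pre], rcond=None)
butfor = beta_pre[0] + beta_pre[1] * x[conspiracy]
overcharge_forecast = np.mean(price[conspiracy] - butfor)

print(overcharge_dummy, overcharge_forecast)  # both near the true value of 3.0
```

With covariates that behave similarly across the two periods, as here, the two estimates agree closely; the debate in the cited papers concerns their relative merits when that is not the case.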
The case where the covariates have equal average levels between the pre-conspiracy period and the conspiracy period is discussed in Higgins and Johnson (2003); see their assumption 4.
There may be too few observations under conspiracy conditions to estimate the parameters α+δ and β+γ using the conspiracy period alone.
Note that this assumption is justified if the decision to initiate and cease a conspiracy is based largely on factors captured by the covariates, Xt, or if it is based on idiosyncratic factors that are unrelated to the gains from conspiracy. It is not justified if the decision to initiate or cease a conspiracy is based on unmeasured factors affecting but-for prices, i.e., vt, or on the gains to conspiracy, i.e., ut–vt.
In some applications, price will be modeled in logs, in which case the object of interest may be redefined as
There must be sufficient variability to allow one to account appropriately for non-collusive variables that might have affected price in the impact period.
A variety of considerations are involved in the decision of whether to use quantity weights, including data quality, heteroskedasticity, efficiency, strong trends in quantity (particularly for narrowly defined products), and robustness, among others. We focus on the case where weights are not used, but note the implications of using weights where relevant.
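Quantity weighting amounts to weighted least squares, which can be implemented by scaling each observation by the square root of its weight before running ordinary least squares. A hypothetical sketch (all values invented; with homoskedastic errors unrelated to quantity, as here, the two estimators target the same coefficients):

```python
import numpy as np

rng = np.random.default_rng(4)
T = 150
x = rng.normal(10.0, 2.0, T)
q = rng.integers(1, 50, T).astype(float)    # transaction quantities (hypothetical)
price = 5.0 + 1.5 * x + rng.normal(0.0, 1.0, T)

X = np.column_stack([np.ones(T), x])

# Unweighted OLS.
b_ols, *_ = np.linalg.lstsq(X, price, rcond=None)

# Quantity-weighted least squares: scale each row by sqrt(weight).
w = np.sqrt(q)
b_wls, *_ = np.linalg.lstsq(X * w[:, None], price * w, rcond=None)

print(b_ols, b_wls)  # both near the true (5.0, 1.5) in this benign design
```

The considerations listed above (heteroskedasticity, trends in quantity, robustness) determine whether the weighted or unweighted estimates are preferable in a given application.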
As noted by Wooldridge (2002, section 18.3.1), covariates can be de-meaned prior to estimation without changing the estimated regression coefficients except for the constant and with essentially negligible effects on the standard errors. This means that we can ensure that
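The de-meaning invariance cited here from Wooldridge is easy to verify numerically; in this hypothetical sketch, subtracting the sample mean from the covariate leaves the slope unchanged and shifts only the intercept:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 100
x = rng.normal(50.0, 5.0, T)                 # covariate with a nonzero mean
price = 2.0 + 0.8 * x + rng.normal(0.0, 1.0, T)

def ols(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

b_raw = ols(np.column_stack([np.ones(T), x]), price)
b_dem = ols(np.column_stack([np.ones(T), x - x.mean()]), price)

# Slope is identical; the intercept shifts by slope * mean(x).
assert np.isclose(b_raw[1], b_dem[1])
assert np.isclose(b_dem[0], b_raw[0] + b_raw[1] * x.mean())
```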
As noted by the editor, Assumption 2 is also implied by the somewhat stronger restriction that
Informally, an ergodic stationary process is a process that will not change its properties over time and whose properties can be deduced from a sufficiently long sample of the process.
While it is not our focus in this paper, we note that if one found Assumption 1 to be justified, then there are two consistent estimators for average overcharges, in which case a more efficient estimator can be obtained by combining the two estimators. For example, the linear combination
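A combination of this kind can be sketched numerically (the true overcharge, variances, and covariance below are hypothetical). For two estimators with variances V1 and V2 and covariance C, the weight minimizing the variance of the combination λθ̂1 + (1 − λ)θ̂2 is λ* = (V2 − C)/(V1 + V2 − 2C):

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 3.0                                  # true average overcharge (hypothetical)

# Simulated draws of two correlated, consistent estimators of theta.
cov = np.array([[1.0, 0.6], [0.6, 2.0]])
draws = rng.multivariate_normal([theta, theta], cov, size=100_000)
t1, t2 = draws[:, 0], draws[:, 1]

v1, v2 = t1.var(), t2.var()
c = np.cov(t1, t2)[0, 1]
lam = (v2 - c) / (v1 + v2 - 2.0 * c)         # variance-minimizing weight

combined = lam * t1 + (1.0 - lam) * t2
assert combined.var() <= min(v1, v2)          # never worse than either alone
```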
This is a benefit emphasized by White, Marshall, and Kennedy (2006).
Higgins and Johnson (2003) consider some restrictions that guarantee
To the best of our knowledge, there is no parametric restriction that guarantees an improvement in precision of estimated average overcharges from imposing the restriction γ=0. For example, even in data generating processes where γ=0, it can still be more efficient to allow for a change in the effect of the covariates on price. Because of this, we are not aware of any statistical test that would clearly point to whether it was more appropriate to include or exclude the interaction term from the regression, from the point of view of minimizing the variability of the overcharge estimate.
For Models 1 through 4 we have
Freed (2012) notes that “[e]stimates of the potential cost of a settlement of the [Visa] antitrust case vary dramatically – from a few billion dollars into the hundreds of billions.” Visa eventually settled for $4 billion (Touryalai 2012). Other settlement amounts are cited in Marshall (2008), Schoenberger (2009), and Clark (2011). The only review ever conducted along these lines looked at 40 cases (Lande and Davis 2008). The average recovery for plaintiffs among those 40 was $450 million under one set of assumptions and $491 million under another set of assumptions, but the cases studied were those known to prominent antitrust attorneys and hence more likely to be ones involving large dollar amounts.
This margin of error may be justified by appealing either to the central limit theorem applied to the estimators or to normality of the sampling distribution of each estimator. A detailed examination of the sampling distribution confirms that for these simulation experiments, the sampling distribution is approximately normal. For example, each estimator in each model exhibits skewness of roughly 0.2 and kurtosis of roughly 3.2.
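The normality check described here can be reproduced informally. This sketch uses a stand-in estimator (the sample mean over hypothetical simulated datasets, not the estimators from the paper) and computes sample skewness and kurtosis of the simulated estimates:

```python
import numpy as np

rng = np.random.default_rng(3)

def skew_kurt(z):
    """Sample skewness and (non-excess) kurtosis."""
    d = z - z.mean()
    s = d.std()
    return (d**3).mean() / s**3, (d**4).mean() / s**4

# Hypothetical sampling distribution: the mean of 100 skewed draws,
# recomputed over many simulated datasets.
estimates = np.array([rng.exponential(1.0, 100).mean() for _ in range(20_000)])
skew, kurt = skew_kurt(estimates)
print(round(skew, 2), round(kurt, 2))   # mildly skewed, kurtosis close to 3
```

Values of roughly 0.2 and 3 for skewness and kurtosis, as reported in the text, indicate a sampling distribution close to normal.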
While the effect is small, we were somewhat surprised that the first dummy-variable approach was not as close to the target in this model as it was in Models 1, 2, and 3. We note that the conclusion of Proposition 2 is not that the estimator is unbiased, but rather that it is consistent. We also conducted a similar experiment with a slightly larger sample size of T=200 and encountered similar results – a simulation estimate of the mean that is roughly one-half of 1% above the true parameter, with the true parameter falling outside the confidence region for the simulation estimate.