
Journal of Econometric Methods

Ed. by Giacomini, Raffaella / Li, Tong

Online ISSN: 2156-6674

Measuring Benchmark Damages in Antitrust Litigation

Justin McCrary / Daniel L. Rubinfeld
  • Robert L. Bridges Professor of Law and Professor of Economics Emeritus, U.C. Berkeley; Professor of Law, NYU; and Faculty Research Associate, NBER
Published Online: 2013-10-26 | DOI: https://doi.org/10.1515/jem-2013-0006

Abstract

We compare the two dominant approaches to estimation of benchmark damages in antitrust litigation, the forecasting approach and the dummy variable approach. We give conditions under which the two approaches are equivalent and present the results of a small simulation study.

Keywords: antitrust; damages estimation; law and economics

References

  • Clark, Don. 2011. “Rambus Loses Antitrust Case.” Wall Street Journal, November 17.

  • Fisher, Franklin. 1980. “Multiple Regression in Legal Proceedings.” Columbia Law Review 702–736.

  • Freed, Dan. 2012. “Bank of America, Citigroup Face Billions In Losses in Antitrust Case (Update 1).” The Street, January 12.

  • Higgins, Richard, and Paul Johnson. 2003. “The Mean Effect of Structural Change in the Dependent Variable is Accurately Measured by the Intercept Change Alone.” Economics Letters 80: 255–259.

  • Hovenkamp, Herbert. 2005. Federal Antitrust Policy: The Law of Competition and Its Practice. St. Paul, MN: Thomson-West.

  • Imbens, Guido W. 2004. “Nonparametric Estimation of Average Treatment Effects Under Exogeneity: A Review.” Review of Economics and Statistics 86 (1): 4–29.

  • Kmenta, Jan. 2000. Elements of Econometrics. 2nd ed. Ann Arbor, MI: The University of Michigan Press.

  • Lande, Robert H., and Joshua P. Davis. 2008. “Benefits From Private Antitrust Enforcement: An Analysis of Forty Cases.” University of San Francisco Law Review 42: 817–918.

  • Marshall, Kevin S. 2008. The Economics of Antitrust Injury and Firm-Specific Damages. Tucson: Lawyers & Judges.

  • Rubin, Donald B. 1974. “Estimating Causal Effects of Treatments in Randomized and Nonrandomized Studies.” Journal of Educational Psychology 66 (5): 688–701.

  • Rubinfeld, Daniel L. 1985. “Econometrics in the Courtroom.” Columbia Law Review 1048–1097.

  • Rubinfeld, Daniel L. 2008. “Quantitative Methods in Antitrust.” In Issues in Competition Policy, ABA Antitrust Section, edited by Dale Collins, chapter 30. American Bar Association.

  • Rubinfeld, Daniel L., and Peter O. Steiner. 1983. “Quantitative Methods in Antitrust Litigation.” Law and Contemporary Problems 26: 69–141.

  • Salkever, David. 1976. “The Use of Dummy Variables to Compute Predictions, Prediction Errors, and Confidence Intervals.” Journal of Econometrics 4 (4): 393–397.

  • Schoenberger, Robert. 2009. “Eaton Might Owe Damages in Billions in Antitrust Case.” The Plain Dealer, October 20.

  • Touryalai, Halah. 2012. “Senator Dick Durbin Hates The $7 Billion Visa, Mastercard Settlement.” Forbes, August 7.

  • White, Halbert, Robert Marshall, and Pauline Kennedy. 2006. “The Measurement of Economic Damages in Antitrust Civil Litigation.” ABA Antitrust Section, Economic Committee Newsletter, Spring 6 (1): 17–22.

  • Wooldridge, Jeffrey M. 2002. Econometric Analysis of Cross Section and Panel Data. Cambridge: MIT Press.

About the article

Corresponding author: Justin McCrary, Professor of Law, U.C. Berkeley; and Faculty Research Associate, NBER



Published in Print: 2014-01-01


Alternative approaches involve variations on the yardstick approach, such as a comparison of rates of return and/or profit margins across industries.

For a broad discussion of these alternative measures, see Hovenkamp [2005, section 17.5(a)].

See, for example, Salkever (1976), Fisher (1980), Rubinfeld and Steiner (1983), Rubinfeld (1985, 2008), and Higgins and Johnson (2003). See especially White, Marshall, and Kennedy (2006); those authors strongly prefer the forecasting approach and are highly critical of the dummy-variable approach.

The case where the covariates have equal average levels between the pre-conspiracy period and the conspiracy period is discussed in Higgins and Johnson (2003); see their assumption 4.

There may be too few observations under conspiracy conditions to estimate the parameters α+δ and β+γ using the conspiracy period alone.

Note that this assumption is justified if the decision to initiate and cease a conspiracy is based largely on factors captured by the covariates, Xt, or if it is based on idiosyncratic factors that are unrelated to the gains from conspiracy. It is not justified if the decision to initiate or cease a conspiracy is based on unmeasured factors affecting but-for prices, i.e., vt, or on the gains to conspiracy, i.e., ut − vt.
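As a concrete illustration of the dummy variable approach discussed in these notes, the sketch below fits a single OLS regression of price on a covariate, a conspiracy dummy, and their interaction, then evaluates the average overcharge at the damages-period covariate mean. All parameter values and the simulated data are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
D = (np.arange(T) >= 150).astype(float)  # conspiracy dummy: last 50 periods
x = rng.normal(10.0, 1.0, T)             # a single covariate (e.g., a cost index)

# Assumed data-generating process: p = alpha + delta*D + beta*x + gamma*D*x + noise
alpha, delta, beta, gamma = 5.0, 2.0, 1.5, 0.3
p = alpha + delta * D + beta * x + gamma * D * x + rng.normal(0.0, 0.5, T)

# OLS with intercept, dummy, covariate, and interaction
Z = np.column_stack([np.ones(T), D, x, D * x])
coef, *_ = np.linalg.lstsq(Z, p, rcond=None)
a_hat, d_hat, b_hat, g_hat = coef

# Average overcharge evaluated at the damages-period covariate mean
overcharge = d_hat + x[D == 1].mean() * g_hat
```

With γ ≠ 0 the overcharge depends on the covariate level, which is why the dummy coefficient alone does not suffice in this specification.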

In some applications, price will be modeled in logs, in which case the object of interest may be redefined, for example, as the average difference between actual and but-for log prices, or as the implied average percentage overcharge.

There must be sufficient variability to allow one to account appropriately for non-collusive variables that might have affected price in the impact period.

A variety of considerations are involved in the decision of whether to use quantity weights, including data quality, heteroskedasticity, efficiency, strong trends in quantity (particularly for narrowly defined products), and robustness. We focus on the case where weights are not used, but note the implications of using weights where relevant.

As noted by Wooldridge (2002, section 18.3.1), covariates can be de-meaned prior to estimation without changing the estimated regression coefficients except for the constant, and with essentially negligible effects on the standard errors. De-meaning the covariates at their damages-period means ensures that the damages-period mean of the covariates is zero by construction, which is computationally convenient: the coefficient on the dummy variable then equals the average per-period overcharge and need only be scaled up by the number of damages-period observations in order to obtain total damages.
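Continuing the simulated example (assumed parameter values, not from the paper), the de-meaning trick looks like this: centering the covariate at its damages-period mean leaves the slopes unchanged and makes the dummy coefficient equal the average per-period overcharge directly.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
D = (np.arange(T) >= 150).astype(float)   # conspiracy dummy
x = rng.normal(10.0, 1.0, T)
p = 5.0 + 2.0 * D + 1.5 * x + 0.3 * D * x + rng.normal(0.0, 0.5, T)

# Center the covariate at its damages-period mean
xc = x - x[D == 1].mean()
Z = np.column_stack([np.ones(T), D, xc, D * xc])
coef, *_ = np.linalg.lstsq(Z, p, rcond=None)

d_hat = coef[1]        # average per-period overcharge, by construction
T1 = int(D.sum())      # number of damages-period observations
total_damages = T1 * d_hat
```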

As noted by the editor, Assumption 2 is also implied by the somewhat stronger restriction that, conditional on the covariates, the covariance between quantity and price during the damages period is the same with and without the conspiracy.

Informally, an ergodic stationary process is a process that will not change its properties over time and whose properties can be deduced from a sufficiently long sample of the process.
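As an informal numerical illustration (an assumed AR(1) process, not one from the paper), the time average of a single long realization of an ergodic stationary process converges to its ensemble mean:

```python
import numpy as np

rng = np.random.default_rng(3)
rho, n = 0.8, 200_000          # AR(1) with |rho| < 1: ergodic and stationary
e = rng.normal(0.0, 1.0, n)
y = np.empty(n)
y[0] = e[0]
for t in range(1, n):
    y[t] = rho * y[t - 1] + e[t]

# The sample mean of one long path is close to the stationary mean of 0
time_average = y.mean()
```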

While it is not our focus in this paper, we note that if one found Assumption 1 to be justified, then there are two consistent estimators for average overcharges, in which case a more efficient estimator can be obtained by combining the two estimators. For example, the linear combination that weights the dummy variable estimate by ω1/(ω1+ω2) and the forecasting estimate by ω2/(ω1+ω2) is also consistent for the average overcharge and has asymptotic variance of 1/(ω1+ω2), where ω1≡1/(VDV−c), with VDV the asymptotic variance of the dummy variable estimate and c the asymptotic covariance between it and the forecasting estimate, and ω2≡1/(VFC−c), with VFC the asymptotic variance of the forecasting estimate. On the other hand, obtaining good estimates of VDV, VFC, and c is challenging, and this may limit the practicality of this approach.
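A minimal sketch of the combination described in this note, with weights ω1 = 1/(VDV − c) and ω2 = 1/(VFC − c); the numeric inputs are hypothetical placeholders, not estimates from the paper.

```python
def combine(theta_dv, theta_fc, v_dv, v_fc, c):
    """Precision-weighted combination of the dummy variable and
    forecasting estimates of the average overcharge."""
    w1 = 1.0 / (v_dv - c)   # omega_1
    w2 = 1.0 / (v_fc - c)   # omega_2
    theta = (w1 * theta_dv + w2 * theta_fc) / (w1 + w2)
    avar = 1.0 / (w1 + w2)  # asymptotic variance of the combination
    return theta, avar

# Hypothetical inputs: two consistent estimates with assumed (co)variances
theta, avar = combine(theta_dv=5.1, theta_fc=4.8, v_dv=0.04, v_fc=0.09, c=0.01)
```

The combined estimate leans toward the more precise dummy variable estimate, and its asymptotic variance is smaller than that of either input.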

This is a benefit emphasized by White, Marshall, and Kennedy (2006).

Higgins and Johnson (2003) consider some restrictions that guarantee that the average overcharge is measured by the intercept shift alone, chief among these being γ=0. This is an old result; see for example Kmenta (2000, section 11–2).

To the best of our knowledge, there is no parametric restriction that guarantees an improvement in precision of estimated average overcharges from imposing the restriction γ=0. For example, even in data generating processes where γ=0, it can still be more efficient to allow for a change in the effect of the covariates on price. Because of this, we are not aware of any statistical test that would clearly point to whether it was more appropriate to include or exclude the interaction term from the regression, from the point of view of minimizing the variability of the overcharge estimate.
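The point can be explored by Monte Carlo. The sketch below (an assumed design with γ=0, not the paper's simulation models) compares the sampling variability of the overcharge estimate when the interaction term is included versus excluded; which specification is more precise depends on the design.

```python
import numpy as np

rng = np.random.default_rng(2)
T, reps = 100, 2000
D = (np.arange(T) >= 75).astype(float)

est_with, est_without = [], []
for _ in range(reps):
    x = rng.normal(10.0, 1.0, T)
    p = 5.0 + 2.0 * D + 1.5 * x + rng.normal(0.0, 0.5, T)  # gamma = 0 here
    xbar1 = x[D == 1].mean()

    # Unrestricted: include the interaction, evaluate at the damages-period mean
    Z = np.column_stack([np.ones(T), D, x, D * x])
    b, *_ = np.linalg.lstsq(Z, p, rcond=None)
    est_with.append(b[1] + xbar1 * b[3])

    # Restricted: impose gamma = 0
    Z0 = np.column_stack([np.ones(T), D, x])
    b0, *_ = np.linalg.lstsq(Z0, p, rcond=None)
    est_without.append(b0[1])

var_with, var_without = np.var(est_with), np.var(est_without)
```

Both estimators are centered on the true overcharge of 2; comparing `var_with` and `var_without` across designs shows there is no uniform ranking.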

For Models 1 through 4 the estimand reduces to π1δ+π′Xγ, but for Models 5 and 6 the estimand is more complicated to calculate. In all instances, we approximate the estimand by taking 7.2 million samples of size 100 and averaging the sample means. The margin of error for the simulation estimate of the estimand is ±0.03 for Models 1, 5, and 6, and ±0.02 for Models 2, 3, and 4, where we take advantage of the fact that the estimand is the same across Models 2, 3, and 4 and average the three resulting simulation estimates.

Freed (2012) notes that “[e]stimates of the potential cost of a settlement of the [Visa] antitrust case vary dramatically – from a few billion dollars into the hundreds of billions.” Visa eventually settled for $4 billion (Touryalai 2012). Other settlement amounts are cited in Marshall (2008), Schoenberger (2009), and Clark (2011). The only review ever conducted along these lines looked at 40 cases (Lande and Davis 2008). The average recovery for plaintiffs among those 40 was $450 million under one set of assumptions and $491 million under another set of assumptions, but the cases studied were those known to prominent antitrust attorneys and hence more likely to be ones involving large dollar amounts.

This margin of error may be justified either by appealing to the central limit theorem applied to the estimators, or to normality of the sampling distribution of each estimator. A detailed examination of the sampling distribution confirms that for these simulation experiments, the sampling distribution is approximately normal. For example, each estimator in each model exhibits skewness of roughly 0.2 and kurtosis of roughly 3.2.
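The normality diagnostics mentioned here can be computed with a small helper; it is illustrated on normal draws rather than on the paper's simulated estimators.

```python
import numpy as np

def skew_kurt(draws):
    """Sample skewness and (non-excess) kurtosis; a normal sample has
    skewness near 0 and kurtosis near 3."""
    d = np.asarray(draws, dtype=float)
    z = (d - d.mean()) / d.std()
    return (z ** 3).mean(), (z ** 4).mean()

rng = np.random.default_rng(4)
s, k = skew_kurt(rng.normal(size=100_000))
```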

While the effect is small, we were somewhat surprised that the first dummy variable approach was not as close to the target in this model as it was in Models 1, 2, and 3. We note that the conclusion of Proposition 2 is not that the estimator is unbiased, but rather that it is consistent. On the other hand, we conducted a similar experiment with a slightly larger sample size of T=200 and encountered similar results – a simulation estimate of the mean that is roughly one-half of 1% above the true parameter, where the true parameter is not inside the confidence region for the simulation estimate.


Citation Information: Journal of Econometric Methods, Volume 3, Issue 1, Pages 63–74, ISSN (Online) 2156-6674, ISSN (Print) 2194-6345, DOI: https://doi.org/10.1515/jem-2013-0006.


©2014 by Walter de Gruyter, Berlin/Boston.

Citing Articles


[1]
Francisco Beneke and Mark-Oliver Mackenrodt
IIC - International Review of Intellectual Property and Competition Law, 2018
[2]
H. Peter Boswijk, Maurice J. G. Bun, and Maarten Pieter Schinkel
Journal of Applied Econometrics, 2018
[3]
Willem H. Boshoff and Rossouw van Jaarsveld
Review of Industrial Organization, 2018
