Net Neutrality and Investment in the US: A Review of Evidence from the 2018 Restoring Internet Freedom Order

George S. Ford
Phoenix Center for Advanced Legal & Economic Public Policy Studies, 5335 Wisconsin Ave NW, Ste 440, Washington, DC, USA

Abstract

In 2018, the Federal Communications Commission’s Restoring Internet Freedom Order reversed its 2015 decision to apply common carrier regulation to broadband Internet access services under Title II of the Communications Act of 1934. Empirical evidence indicating negative investment effects of the regulation played a key role in this reversal, though the quantification of these investment effects was a matter of substantial controversy. This article surveys the studies cited in the recent decision and the Commission’s scrutiny of them. In all, the Commission considered eight primary works but relied on only two of them, a culling process guided by four principles: (1) simply comparing outcomes before-and-after an event is not a valid impact analysis; (2) before-and-after comparisons are more probative if regression analysis is used to condition the outcomes by accounting for potentially relevant factors like economic growth, sales, and so forth; (3) the causal effects of a regulation are best determined with reference to a counterfactual; and (4) the application of proper methods does not excuse the use of bad data. These principles are mostly uncontroversial and are consistent with the modern practice of impact analysis.

1 Introduction

Though the Internet is the dominant modality of communications today, virtually nothing is said about broadband Internet services in the Communications Act of 1934, the statute governing the regulation of communications services in the US. At the time of the last major update to the Communications Act – the Telecommunications Act of 1996 – the Internet was still in its infancy: the year 1995 marked Amazon’s first sale and the widespread distribution of the Pamela Anderson sex tape, both of which dramatically shifted expectations about the new communications technology (Fottrell 2017; Lewis 2014).1 Section 706 of the Telecommunications Act is perhaps the one exception, and even there it speaks only of “advanced communications services.” While disputes continue as to whether Section 706 is a direct grant of authority, an indirect grant of authority, or merely hortatory (Spiwak 2015), the statute is clear in its directive that if the Federal Communications Commission (FCC) finds that advanced communications services are not “being deployed to all Americans in a reasonable and timely fashion,” it is to take “immediate action to accelerate deployment of such capability by removing barriers to infrastructure investment and by promoting competition in the telecommunications market (§706(b)).” With little else to rely on, the debate over the presence or lack of Internet regulation focuses largely on competing claims about “infrastructure investment” and “competition.” The statutory mandate concerns infrastructure investment by firms under FCC jurisdiction, and much (though not all) of the evidence focused on infrastructure investment by broadband providers rather than content providers.

The investment question is central to the contentious debate over Net Neutrality regulation (and deregulation) in the US. The FCC’s 2015 Open Internet Order (“2015 Order”), which imposed Title II common carrier regulation on all broadband service providers, concluded the “overwhelming consensus on the record” was that “carefully-tailored rules to protect Internet openness will allow investment and innovation to continue to flourish (FCC 2015: 3).” Less than three years later, with speed uncharacteristic of regulatory agencies, the FCC (under new leadership) in 2017 adopted its Restoring Internet Freedom Order (“RIF Order”) and concluded that the “record evidence, including our cost-benefit analysis, demonstrates that the costs of these rules to innovation and investment outweigh any benefits they may have (FCC 2018: 3).”2 The latter decision dismantled nearly wholesale the 2015 regulatory regime, returning broadband service to oversight under the less restrictive constraints of Title I of the Communications Act. After a review of the empirical evidence submitted to the record on the investment consequences of applying Title II regulation to broadband service, the Commission concluded in the RIF Order that Title II regulation “adversely affected broadband investment (pp. 57–58)” and that overturning the 2015 Order was “likely to increase ISP investment and output (p. 58).”

Given the heated controversy over investment effects, the RIF Order will be adored by some and despised by others, a distinction due in large part to confirmation bias. This article is not for those people but rather is aimed at those persons interested in the actual analysis of the evidence contained in the RIF Order. This article surveys the studies cited in the decision and reviews the RIF Order’s discussion of this empirical evidence regarding the investment effects of Net Neutrality regulation, or more specifically Title II regulation.3 In all, the Commission considered eight (maybe nine) works and relied on two of them: Ford’s (2017a) analysis of investment effects and Hazlett and Wright’s (2017) analysis of subscription effects. As I see it, this culling process relied on four mostly uncontroversial principles. First, simply comparing outcomes before-and-after an event is not a valid impact analysis, though such comparisons may be suggestive. Second, before-and-after comparisons are more probative if regression analysis is used to condition the outcomes by taking into account potentially relevant factors like economic growth, sales, and so forth, or what is known as conditioning on observables. Third, the causal effects of a regulation are best determined with reference to a counterfactual. Quantifying the effect of regulation requires comparing what investment is with regulation relative to what investment would have been without regulation. Fourth, the application of suitable empirical methods does not excuse the use of bad data. While the Commission’s discussion of the evidence is incomplete, I do not see how any of these limitations would alter the final conclusions. An analyst dispassionately applying these four principles would likely reach the same conclusion as the RIF Order.

This article is organized as follows. First, I offer some conceptual background on impact analysis, avoiding technical jargon so as to appeal to a broader audience. Second, I discuss the naïve before-and-after analysis of investment upon which the Commission did not rely, though a great deal of it was found in the record. Five of the eight studies were of this sort, and discarding them was entirely consistent with standard empirical practice. Third, I review the paper by Hazlett and Wright (2017), which is more accurately a review of an earlier paper by Hazlett and Caliskan (2008). This paper also is based on a before-and-after analysis but employs multivariate least squares regression in an effort to account for potentially relevant factors, and this “conditioning” of the difference was critical to the acceptance of the study by the Commission (though the study suffers from a somewhat serious defect). Fourth, I review the remaining two works – Ford (2017a) and Hooton (2017) – both of which relied on a formal counterfactual using the difference-in-differences model. Hooton was summarily and correctly dismissed for the use of forecasted data for almost all of the treatment period, going so far as to test for investment responses to Net Neutrality in 2020 (at the time, a date four years into the future). This left Ford’s work, which was recently published in Applied Economics (Ford 2018a), as the second of two studies relied upon by the Commission. One additional study by Crandall (2017) was briefly mentioned in a footnote. I provide a brief review of this financial event study but note that the Commission did not embrace its conclusions.

2 A Simple Conceptual Background

What effect does a treatment, say regulation, have on an outcome, say investment? Angrist and Pischke’s Mostly Harmless Econometrics (2009) and Mastering Metrics (2015), as well as Imbens and Wooldridge’s (2009) article in the Journal of Economic Literature, offer excellent introductory presentations of the modern methods for quantifying such effects, including discussions of a number of new techniques for quasi-experimental analysis. As observed by Angrist and Pischke (2009: 15), the “goal of most empirical economic research is to overcome selection bias, and therefore to say something about the causal effect of a [treatment],” which here is a regulatory regime. Selection bias arises when the observed difference between two groups, one receiving a treatment and the other not, may be the consequence of something other than the treatment, a problem not easily dismissed in observational studies where the treatment is not randomly assigned. For instance, the simple difference in income between persons with and without a college degree is not the causal effect of a college degree, since those obtaining a degree may be expected to earn higher incomes even without it. While technical presentations of the theories and methods of modern impact analysis are offered in Angrist and Pischke (2009, 2015) and Imbens and Wooldridge (2009), I attempt here to present the underlying ideas in a form better suited to a broader audience.

Often, the effects of a regulation are quantified by comparing outcomes before-and-after it is imposed. Half of the evidence considered in the RIF Order was of this form. To grasp the problem with such simple comparisons, consider a simple scenario. Say there are two periods, 0 and 1, and investment decisions are made at the beginning of each period. A regulation is imposed at the end of Period 0 just prior to the investment decision in Period 1, so that the regulation may influence the investment level in the latter period. Investment in the first period is 100, and, it turns out, investment is likewise 100 in Period 1. As there is no change in investment, can we conclude that the regulation had no effect on investment? The answer is “no.” It may be, for instance, that the companies in the industry planned on investing 110 in Period 1 but reduced their investment by 10 units in response to the regulation. (Or, perhaps the companies planned on investing 95 before the regulation was imposed, so they increased their investment as a result of the regulation. It could go either way.) To quantify the effect of the regulatory treatment on investment, we must know the level of investment that would have occurred absent the regulation. But, since the regulation is applied, the level of investment without regulation cannot be directly observed. The unseen outcome is referred to as the counterfactual and its absence is problematic since the quantification of the regulation’s effect requires that counterfactual (or, more accurately, an estimate of it).

In fact, we could say that even the simple before-and-after comparison involves a counterfactual of sorts: the counterfactual is presumed to be the outcome from Period 0, which requires the assumption (either implicit or explicit) that investment levels do not change over time. Certainly, telecommunications investment changes over time with or without major regulatory changes, and sometimes substantially so. Between 1980 and 2015, for instance, telecommunications investment reported by the Bureau of Economic Analysis (BEA) changed annually by an average of $1.5 billion with a 95% confidence interval spanning −$3.8 to $6.8 billion. For data reported by the trade group USTelecom, the absolute percentage change in investment from year-to-year averages about 10%. As such, assuming constant investment between years, which half the studies mentioned in the RIF Order do, makes for a poor counterfactual.

How are counterfactuals constructed? There are a few ways to do so. Say, for instance, that we know investment in any given year is determined by the formula 50 + X, where X is some known factor (e.g. Gross Domestic Product, industry sales, and so forth). In the first period, X is 50 so investment is 100. In the second period, X increases to 60, so investment should be 110. Yet, investment is found to be 100 in the regulated setting, implying that the regulation reduced investment by 10 units. If X can be observed over time, both before and after the regulatory action, then the counterfactual can be created (or estimated) using the observed X.4 It is this kind of adjustment that the RIF Order considers when it says analysts need “to control for other factors that may affect investment [] such as technological change [and] the overall state of the economy (FCC 2018: 55).” If the factor X is the only systematic determinant of investment and is unaffected by the regulatory treatment, then the difference produced by this conditioning on observables may be said to equal the impact of the regulation under the unconfoundedness (or ignorability) condition. That is, the variable X accounts for all common causes of the treatment and the outcome.
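
To make the arithmetic concrete, here is a minimal sketch in Python of the conditioning idea, using the hypothetical numbers from the text (the rule 50 + X is assumed known):

```python
# Conditioning on an observable X: investment is assumed to follow 50 + X,
# so the X observed in the treated period yields the counterfactual against
# which the regulated outcome is compared.
def counterfactual(x):
    """Investment predicted by the (assumed known) rule 50 + X."""
    return 50 + x

x0, x1 = 50, 60       # X before and after the regulation
observed_y1 = 100     # investment actually observed under regulation

effect = observed_y1 - counterfactual(x1)
print(effect)         # -10: the regulation reduced investment by 10 units
```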

In most instances, there are many (not one) factors that determine outcomes of interest. Say, for instance, investment is determined by both X and Z in the relationship 50 + X + Z. If both X and Z can be observed, then the researcher may condition on observables and, by design in this example, rely on unconfoundedness to determine the causal effect. But what if Z cannot be observed, or is simply ignored in the analysis? Say Z is zero in Period 0 and X is again 50, so investment is 100. In Period 1, Z is 20 and X is 60, so absent the regulation investment would be 130. Observing only X and believing investment is 50 + X, the analyst presumes the counterfactual is 110 and so concludes the regulation reduced investment by 10 units. Yet, the true effect of the regulation is 30 units: the counterfactual (130) less the observed investment level (100). This error in the measured effect of 20 units is a bias arising from omitting relevant factors from the analysis, which is sensibly referred to in the literature as omitted variables bias (Angrist and Pischke 2009: 59–64). Selection bias and simultaneity bias, both serious problems, are forms of omitted variables bias. Instrumental variables (IV) estimation is a popular empirical approach to such bias, in effect finding an observable factor (an instrument) that permits the researcher to address it (Angrist and Pischke 2009: Ch. 4). Finding a good instrument – a factor correlated with the causal variable of interest but uncorrelated with any other determinant of the outcome – can be very challenging.
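
The omitted variables problem can be expressed in the same minimal sketch, again using the text’s hypothetical numbers:

```python
# True investment follows 50 + X + Z, but the analyst observes only X and
# so uses the misspecified rule 50 + X to build the counterfactual.
x1, z1 = 60, 20          # Period 1 values of X and Z
observed_y1 = 100        # investment actually observed under regulation

true_cf = 50 + x1 + z1   # 130, the correct counterfactual
naive_cf = 50 + x1       # 110, with Z omitted

true_effect = observed_y1 - true_cf     # -30
naive_effect = observed_y1 - naive_cf   # -10
bias = naive_effect - true_effect       # +20, the omitted variables bias
print(true_effect, naive_effect, bias)
```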

Another way to quantify the effect of a regulation may not require observing either X or Z. What if, for example, the regulation is applied randomly to half the firms (which are identical in size), and data were available for multiple firms over time (a panel data set)? In Period 0, the two sets of firms invest 50 each, for a total of 100. In Period 1, the unregulated firms invest 65 (half of the 130) while the regulated firms once again invest 50. Comparing the two, we see that investment by the regulated firms was 15 units less than that of the unregulated firms. This difference is what is referred to as the difference-in-differences estimator discussed below: (50 − 50) − (65 − 50) = −15. Had the regulation applied to all firms, the total investment effect would be 30 units. The true effect is quantified: regulation reduces investment by 23% [= 30/130 = 15/65]. In this format, we need not observe either X or Z; we need only observe the outcome of the unregulated group. In formal parlance, this unregulated group is a “control group” and its outcome serves as the counterfactual for the “treated group.” This sort of randomized assignment is scarce outside the laboratory, leading to an expanding literature on empirical methods that might mimic such scenarios with observational data (i.e. quasi-experimental analysis).
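
The arithmetic of this randomized example reduces to a few lines, a sketch of the difference-in-differences calculation treated formally below:

```python
# Difference-in-differences on the text's randomized example.
treated = {"pre": 50, "post": 50}   # regulated firms
control = {"pre": 50, "post": 65}   # unregulated firms

did = (treated["post"] - treated["pre"]) - (control["post"] - control["pre"])
print(did)                    # -15 units for the treated half of the industry
print(did / control["post"])  # about -0.23, a 23% reduction
```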

These hypotheticals demonstrate a few key points about impact analysis and help decode statements in the RIF Order (and make up the first three “principles” outlined above). First, a simple comparison of investment levels before-and-after a regulation is largely uninformative, or merely “suggestive” as the RIF Order concludes (FCC 2018: 55). Such simple comparisons – which fail even to condition on observables – are prone to produce a biased (or “untrue”) measure of the effect of regulation, since (in effect) they use prior outcomes as the counterfactual under the assumption investment does not change over time. Second, the true effect of regulation may be determined by conditioning on observables (Angrist and Pischke 2009: 52–59). That is, by accounting for all relevant factors, a regression model may satisfy the unconfoundedness condition, a strong assumption holding that the covariates included in the model remove all biases in comparing the outcomes between two groups (Imbens and Wooldridge 2009: 7).5 Models that include some but not all systematic determinants of the outcome may produce biased estimates of the effect of the regulation (though under certain conditions the true effect may be found as well) when comparing outcomes between groups or periods (Angrist and Pischke 2009: 59–64; Imbens and Wooldridge 2009: 32). Omitted variables bias violates the unconfoundedness assumption.

Third, if a valid control group can be found with outcomes expected to match those of the treated group (with or without the treatment), then it may be possible to quantify the true treatment effect without observing all the underlying determinants of the outcome, using time series data on firms or countries, because such factors are accounted for in the outcomes of the control group (Angrist and Pischke 2009: 69–77). Random assignment of the treatment, as the simple example above assumed, does not occur in the regulatory sphere, so the practical application of impact analysis using a control group can be complicated and requires consideration of many factors, primarily whether or not the treated and control groups are expected to have very similar outcomes with and without the treatment.6 If the control group is a bad one, then bias is introduced in the same manner as it is with omitted variables (Angrist and Pischke 2009: 64–68).

There are many empirical procedures a researcher may choose from to estimate the effect of a regulatory treatment, including difference-in-differences regressions, fixed or random effects models, propensity score or coarsened exact matching, and instrumental variables (IV) methods (Imbens and Wooldridge 2009). Simply observing that an outcome is not equal to what it was last year, however, is most often an invalid approach, though this method accounts for the bulk of the evidence submitted to the Commission in regard to Net Neutrality. At the center of valid empirical analysis of treatment effects is the identification strategy. That is, how does the researcher propose to use observational data to approximate a real, randomized experiment? If a researcher claims that investment (or other outcome of interest) is lower after a regulation than before it, does this simple difference accurately measure the effect of regulation or merely reflect changes in other relevant factors such as interest rates or competition? If there is an observed difference in outcomes between a regulated and an unregulated group, are there possible explanations other than regulation for that difference? It is the role of the researcher to inform the reader (or policymaker) exactly how an estimated difference in outcomes identifies the treatment effect of interest. Absent a clearly specified identification strategy, policymakers should disregard any empirical claims about the effects of a policy.

Like many regulatory and legislative decisions, Net Neutrality regulation may be sensibly modeled as a “quasi experiment,” where regulatory authorities (or legislatures) are influenced by lobbying, rent seeking, and economic analysis and then decide either for or against certain policies based on some or all such considerations. Designing an empirical test of the regulation’s effects, which hinges largely on the timing and exogeneity of the “event,” is likely to be fact specific and to vary with the vagaries of the political system in question. If the outcome is unknown prior to some official notice of intent (like the FCC’s proposal in 2010), then the timing of the “treatment” may be reasonably assigned to that date. The timing of the “event” is much less certain, however, when the debate is long-lived, subject to inconsistent signals from decision makers, or a partisan issue. In the US and abroad, Net Neutrality has become highly politicized and heavily partisan. For instance, in the US, aggressive Net Neutrality regulations were imposed on a strictly partisan vote under Democratic control, influenced (no doubt) by an informal mandate from President Barack Obama (in a YouTube video). That decision was overturned a few years later on a strictly partisan vote under a new Republican administration. In the current partisan climate in which Net Neutrality plays a central role, the election of a Democratic President in future elections likely signals a return to a more regulatory approach with the accompanying partisan shift at the Commission (the President’s party has majority control at the FCC). While such partisan decision making is undesirable in most respects, election outcomes may serve as a strong predictor of future regulatory decisions, giving researchers the material by which to time events and specify empirical models that render unbiased estimates of the regulation’s effects (perhaps via instrumental variables).

3 Evidence on Investment Effects of Net Neutrality and Title II

The RIF Order references eight works that attempt to quantify the effects of Net Neutrality regulation on investment including Brake (2017), Ford (2017a), Hazlett and Wright (2017), Hooton (2017), Horney (2017), Kovacs (2017), Singer (2017), and Turner (2017), three of which were blog posts rather than formal research articles (Brake, Horney, and Singer). Of the eight, only Ford (2017a), Hooton (2017), and Hazlett and Wright (2017) offered any formal hypothesis testing. Only Ford (2017a) and Hooton (2017) employed a formal counterfactual, though Horney (2017) offered a crude counterfactual based on a linear trend. Hazlett and Wright (2017) was already published in a Special Issue on Net Neutrality in the Review of Industrial Organization and Ford’s work was recently published in Applied Economics (Ford 2018a). Data sources used in these analyses were varied and included data gathered from company annual reports, the BEA’s fixed asset tables, the OECD’s data on telecommunications investment, and data from the trade groups USTelecom, NCTA, and CTIA on broadband investment, cable company investment, and mobile wireless investment, respectively.7 All said and done, the RIF Order concluded that the “studies in the record that control the most carefully for other factors that may affect investment” were Ford (2017a) and Hazlett and Wright (2017). Ford (2017a) was embraced for the proper use of a counterfactual, and Hazlett and Wright (2017) for addressing omitted variables bias. The remainder of the studies were dismissed either for substantial errors (Hooton) or for being merely suggestive (Brake, Horney, Kovacs, Singer and Turner).

3.1 “Suggestive” Results from Aggregate Investment Changes

Brake (2017), Kovacs (2017), Singer (2017), and Turner (2017) estimate the investment effects of Title II regulation using the simple before-and-after difference estimator,

$$\Delta = Y_1 - Y_0, \tag{1}$$

where $Y_0$ is investment before and $Y_1$ is investment after the 2015 Order. All of these works, with the possible exception of Turner (2017), presume that 2015 is the true treatment date, despite the fact that Title II regulation had first been proposed in 2010 and “hover[ed] ominously in the background (Tummarello 2014)” ever since. In conducting event studies, determining the actual event date is often the most difficult task, especially since firms might be expected to alter investment levels based on the threat of heavy-handed regulation.

In these works, only a few years of data were considered, so no hypothesis testing was conducted on $\Delta$. As detailed above, a simple before-and-after comparison of investment levels is (in almost all cases) a poor estimate of the effects of regulation (or any other treatment). Such comparisons assume the outcome of interest does not change over time but for regulatory changes. Though the RIF Order discussed these works, somewhat at length, the Commission determined that these findings were merely “suggestive,” and the Commission’s final determinations did not rely on these works. Such naïve comparisons are not entirely worthless and may indeed be suggestive, as the Commission admits, but the evidence is weak, at best. Recognizing that investment does change from year to year forces the question – is the observed change, or lack thereof, consistent with expectations?

Brake (2017), Singer (2017), and Turner (2017) all looked at changes in aggregate capital spending summed from the annual reports of major broadband providers before and after the 2015 Order. Kovacs (2017) looked at changes in the capital expenditures of the mobile wireless carriers using data from CTIA (which covered a longer period). All of these works used data unadjusted for inflation, which is curious given the comparison of data over time. It is disappointing that the Commission in its RIF Order did not fault these works for the failure to adjust for inflation when comparing levels of investment over time.

Table 1 summarizes the analysis but adds the inflation-adjusted figures (in 2016 dollars, by the GDP Deflator). Brake (2017) offered a summary analysis of Singer (2017) and Turner (2017), so that work is excluded from the table. Based on his data, Turner (2017) argued that Title II regulation “accelerated” investment, as evidenced by a 5.3% increase in nominal capital spending between summed investment over the periods 2013–2014 and 2015–2016. This increase is based on nominal dollars and required lumping together the assumed treatment year (2015) and the subsequent year. In real dollars, capital spending was down in the year following the 2015 Order even by Turner’s own data (and not accounting for a trend).
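
To illustrate the inflation adjustment used in Table 1, a minimal sketch in Python; the deflator values are placeholders chosen to approximate the deflators implied by the table (2016 = 100), not official BEA figures:

```python
# Convert nominal capital spending to 2016 dollars with the GDP deflator.
deflator = {2013: 95.8, 2014: 97.6, 2015: 98.7, 2016: 100.0}    # illustrative
nominal = {2013: 68.13, 2014: 69.75, 2015: 72.78, 2016: 72.39}  # Turner's data, $B

real = {y: round(nominal[y] * deflator[2016] / deflator[y], 2) for y in nominal}
print(real)  # roughly matches the "Real $" row for Turner in Table 1
```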

Table 1:

Capital Expenditure Changes (Billions $).

                     2013     2014    2013–2014    2015     2016    2015–2016
Turner
  Nominal $          68.13    69.75                72.78    72.39
  Real $             71.11    71.43                73.74    72.39
  % Change real               0.45%                3.23%    −1.83%
  Nominal/2yr sum                      $137.88                      $145.12
  % Change                                                          5.3%
Singer
  Nominal $                   64.6                          61.0    −5.6%
  Real $                      66.2                          61.0
  % Change real                                                     −7.8%
Kovacs
  Nominal $                   32.1                          26.4    −17.8%
  Real $                      32.9                          26.4
  % Change real                                    −3.02%           −19.8%

Note: For Singer and Kovacs, the final column reports the change from 2014 to 2016.

Both Turner (2017) and Singer (2017) obtained data from the financial filings of the larger broadband providers, but Singer attempted to clean the investment data of irregularities and large foreign investments. The Commission rejected Turner in favor of Singer’s (2017) analysis, observing:

Singer attempted to account for a few significant factors unrelated to Title II that might affect investment, by subtracting some investments that are clearly not affected by the regulatory change (such as the accounting treatment of Sprint’s telephone handsets, AT&T’s investments in Mexico, and DirecTV investments following its acquisition by AT&T in the middle of this period). In contrast, [Turner] presents statistics that it claims demonstrate that broadband deployment and ISP investment “accelerated” to “historic levels” after the Commission approved the [2015 Order]. But [Turner] fails to account for factors such as foreign investment and the appropriate treatment of handsets as capital expenditures, as Singer did (FCC 2018: 55).

These adjustments, which were identifiable and impacted the investment data during the relevant period, seem sensible, at least within the context of such simplistic comparisons. (However, one might conclude foreign investments are caused by excessive regulations at home.) Once these external factors are considered, Singer showed that capital spending by broadband providers fell by 5.6% (or 7.8% in real dollars) between 2014 and 2016. Brake (2017) applied Singer’s adjustment to Turner and reported declines nearly identical to those reported by Singer, which is unsurprising given the data source was the same between those two studies.

For the wireless sector, Kovacs (2017) considered a longer time frame, but the data in Table 1 show that capital spending following the 2015 Order was well below 2014 levels, a 17.8% reduction in nominal dollars and a 19.8% reduction in real dollars. Kovacs (2017) also calculated that capital spending per dollar of revenue averaged 16.5% but fell to 14% after 2015. These are huge declines in capital spending for the mobile wireless industry in the year following the 2015 Order, which was the first application of the full weight of Net Neutrality to the mobile wireless industry.

Looking at these data, the Commission concluded that investment had likely decreased since the adoption of the 2015 Order, stating “[i]n 2015, capital investment appears to have declined for the first time since the end of the recession in 2009. And investment levels fell again in 2016 – down more than 3 percent from 2014 levels (FCC 2018: 54–55).”8 The Commission found these declines to be “particularly curious” since the economy “has been growing” (FCC 2018: 54–55). The Commission was likewise moved by “the stark trend reversal that has developed in recent years” and surmised the “regulatory environment created by the [2015 Order has] stifled investment (FCC 2018: 54–55).” In these statements, the Commission is describing, albeit informally, the construction of a counterfactual. Capital investment “declined for the first time … since 2009,” suggesting a presumption that investment would otherwise have risen in 2016. The economy “has been growing,” indicating a presumption that telecommunications investment moves in step with the economy. And, of course, the “stark trend reversal” requires some consideration of a trend.

Consistent with this sort of thinking, Horney (2017) crafted a counterfactual using data from USTelecom. To do so, he estimated a linear trend of nominal investment using data from 2003 through 2014 and then extrapolated that trend to 2015 and 2016 to serve as a counterfactual.9 Nominal capital spending in 2016 was determined to be $5.3 billion below the linear trend. Despite the use of a counterfactual (albeit a crude one), the RIF Order was somewhat dismissive of Horney, lumping the analysis in with the works of Singer and Turner:

[Horney’s (2017)] calculation using broadband capital expenditure data for 16 of the largest ISPs reached a result similar to Singer’s, but this analysis simply compared actual ISP investment to a trend extrapolated from pre-2015 data. These types of comparisons can only be regarded as suggestive, since they fail to control for other factors that may affect investment (such as technological change, the overall state of the economy, and the fact that large capital investments often occur in discrete chunks rather than being spaced evenly over time), and companies may take several years to adjust their investment plans. (FCC 2018: 55–56).

This “other factors” thinking (i.e. conditioning on observables) appears frequently in the RIF Order. Perhaps had Horney conditioned his forecast on GDP, the Commission would have found the analysis more probative, and doing so certainly makes a considerable difference in his predictions.10

Figure 1 shows the USTelecom investment data (used by Horney, nominal), the linear forecast (the dashed line), and the forecast based on a regression including a time trend and nominal GDP (the dotted line).11 Plainly, the model with GDP fits the data much better than does the linear trend, which fits poorly and misses the 2014 investment level by a large amount, confirming that the linear forecast is a poor counterfactual. Conditioning the prediction on GDP produces much larger investment effects for 2016. The linear trend predicts actual 2016 investment to be $5.3 billion below trend, whereas conditioning on GDP produces a $9.8 billion difference from the prediction, a spread well below the lower limit of a one-standard deviation confidence interval around the forecast. Using the latest USTelecom data and adjusting for inflation, investment in 2016 is about $2 billion below the linear trend but $7.1 billion below the GDP-conditioned prediction. The difference between accounting for GDP and not is plainly substantial, a difference that arguably supports the RIF Order’s relegation of Horney (2017) to the “suggestive” bin and its desire for the inclusion of “other factors” in the analysis.
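
The two counterfactuals can be sketched as follows; the investment and GDP series below are illustrative stand-ins (the actual USTelecom and BEA series would be substituted), so the sketch shows the mechanics rather than reproducing the figures in the text:

```python
# Two counterfactual forecasts: a pure linear time trend versus a
# regression including a time trend and nominal GDP.
import numpy as np
import statsmodels.api as sm

years = np.arange(2003, 2015).astype(float)   # pre-treatment sample
gdp = np.linspace(11.5, 17.4, years.size)     # nominal GDP, $T (illustrative)
rng = np.random.default_rng(0)
invest = 60 + 1.2 * gdp + rng.normal(0, 1.5, years.size)  # investment, $B (illustrative)

trend_fit = sm.OLS(invest, sm.add_constant(years)).fit()
gdp_fit = sm.OLS(invest, sm.add_constant(np.column_stack([years, gdp]))).fit()

# Extrapolate each model to 2015-2016 to serve as a counterfactual.
f_years = np.array([2015.0, 2016.0])
f_gdp = np.array([18.2, 18.7])                # illustrative GDP values
trend_cf = trend_fit.predict(sm.add_constant(f_years))
gdp_cf = gdp_fit.predict(sm.add_constant(np.column_stack([f_years, f_gdp])))

actual = np.array([87.0, 88.0])               # placeholder actual investment
print(actual - trend_cf)   # estimated effect against the linear trend
print(actual - gdp_cf)     # estimated effect against the GDP-conditioned model
```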

Figure 1: Horney (2017) Revisited.

In sum, all these data indicate capital spending was down in 2016 (at least in real dollars), but the lack of empirical substance led the Commission to conclude that the findings in these works were, if anything, suggestive. Instead, the Commission reasoned, “methodologies designed to estimate impacts relative to a counterfactual tend to provide more convincing evidence of causal impacts of Title II classification (FCC 2018, 56).” It is difficult to fault the Commission for its conclusions, and ideally the RIF Order will encourage better empirical practices by advocates, analysts and commenters in the future.

3.2 Hazlett and Wright (2017)

Though offering no direct evidence on the effects of the 2015 Order on investment, Hazlett and Wright (2017) reviewed evidence on the effects of the Title II regulation that applied to Digital Subscriber Line (“DSL”) services prior to the end of line sharing in 2003 and the reclassification of DSL as a Title I service in 2005. This analysis was not of Net Neutrality but of Title II regulation more generally, which applied differentially to the high-speed Internet connections offered by phone companies and by cable providers. Prior to these deregulatory acts, a review of the data indicated that subscriptions to DSL service grew much more slowly than did subscriptions to cable modem service, which was not subject to Title II regulation, but grew much faster after the regulations were set aside. As noted in the RIF Order,

After 2003, when the Commission removed line-sharing rules on DSL, DSL Internet access service subscribership experienced a statistically significant upward shift relative to cable modem service. A second statistically significant upward shift in DSL Internet access service subscribership relative to cable modem service occurred after the Commission classified DSL Internet access service as an information service in 2005 (FCC 2018: 56).

This evidence was not original to Hazlett and Wright (2017); the article was referencing an earlier publication by Hazlett and Caliskan (2008). The regression model mentioned in the RIF Order was,

$$\ln g_t^{DSL} = \beta_0 + \beta_1 \ln g_t^{CBL} + \beta_2 \ln t + \beta_3 D_{03} + \beta_4 D_{05} + \varepsilon_t, \tag{2}$$

where $g_t$ is the percentage growth rate in subscribership for DSL and cable modem service (superscript “CBL”) in period $t$, $t$ is a time trend, and $D_{03}$ and $D_{05}$ are dummy variables equal to 1 after the relevant quarter of the deregulatory acts of 2003 and 2005 (the data are quarterly, covering 1Q2001 through 4Q2006).12 Note that the estimated equation is non-standard for growth rate data; Equation (2) is suitable only when growth rates are positive (due to the natural log transformation) and is not therefore generalizable. Growth was always positive during the period, so the model can be estimated. Despite the non-standard specification, the coefficients on the regulatory dummy variables do measure shifts in the growth rate as interpreted.13 The coefficients $\beta_3$ and $\beta_4$ are found to be positive and statistically different from zero, and the results coincide with the FCC’s interpretation of the model in the RIF Order.
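
A sketch of estimating Equation (2) by OLS follows; the growth-rate series are synthetic placeholders (Hazlett and Caliskan’s actual subscribership data would be substituted), and the event quarters for the 2003 and 2005 actions are assumptions:

```python
# OLS estimation of Equation (2) on synthetic quarterly growth-rate data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
t = np.arange(1, 25).astype(float)   # 1Q2001 through 4Q2006 (24 quarters)
d03 = (t >= 12).astype(float)        # post line-sharing repeal (assumed quarter)
d05 = (t >= 20).astype(float)        # post Title I reclassification (assumed quarter)

# Growth rates kept strictly positive so the log transformation is defined.
g_cbl = 0.10 * np.exp(-0.02 * t + rng.normal(0, 0.05, t.size))
g_dsl = g_cbl * np.exp(0.15 * d03 + 0.10 * d05 + rng.normal(0, 0.05, t.size))

X = sm.add_constant(np.column_stack([np.log(g_cbl), np.log(t), d03, d05]))
fit = sm.OLS(np.log(g_dsl), X).fit()
print(fit.params[-2:])   # estimates of beta_3 and beta_4, the growth-rate shifts
```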

The RIF Order’s appreciation for Hazlett and Wright, or more accurately Hazlett and Caliskan, relied largely on the inclusion of “controls for factors influencing the overall economy (FCC 2018: ft. 362).” In Equation (2), the model includes cable broadband growth, and for other models appearing in the study the RIF Order noted the regression included “Canadian DSL subscribership as an explanatory variable (FCC 2018: ft. 362).” Canada was not included in the analysis as a control (as in a difference-in-differences model); rather, the Canadian broadband growth rate entered Equation (2) as an additional explanatory variable. Models including Canadian DSL data (and data on Canadian cable modem subscriptions) did not include the $D_{05}$ variable (the data did not permit it), but the coefficient on the 2003 deregulatory act remained positive, stable and statistically significant, while the Canadian variables themselves were not statistically significant. Thus, the “second statistically significant upward shift” stated in the RIF Order cannot be supported by the models that included the Canadian variables, since the second period dummy variable was not in them. The marginal impact of the $D_{03}$ variable is hardly different between models with and without the Canadian variables, so it may be reasonable to presume the same would be true for the $D_{05}$ variable had the data been available.

There is some danger in the way the RIF Order spoke of both Horney and Hazlett and Wright. As discussed earlier, conditioning on observables does not imply the unconfoundedness assumption is satisfied (e.g. including X but not Z).14 (In addition to unconfoundedness, when using time series data, which is common in evaluating the effects of regulation, a researcher must consider the relevance of stationarity, cointegration, and other factors, depending on the model’s specification and requirements.) In fact, Hazlett and Wright’s (2017) regression model, which the RIF Order embraced for conditioning on observables, suffers from a fairly serious problem. DSL and cable modem service are substitutes, so the application of Title II regulation to DSL also served as a treatment for cable modem service, shifting customers to the less regulated service. Consequently, an explanatory variable of Equation (2) is itself a treated outcome (i.e. $g^{CBL}$) and thus endogenous to the model.

The RIF Order may be interpreted (incorrectly) as implying the inclusion of a few regressors cures all ills, a misinterpretation that could lead to poorly specified models. In estimating plausibly causal effects, a researcher is obligated to explain why she believes the regressors are sufficient to provide an unbiased estimate of the treatment effect by satisfying the unconfoundedness assumption. Also, care must be taken to avoid the inclusion of endogenous variables as regressors, or else to account for simultaneity bias using appropriate methods.

Hazlett and Wright (2017) include another analysis relevant to the investment question, one not mentioned in the RIF Order but nonetheless important. In the 2015 Order, the FCC dismissed worries about investment effects by claiming “broadband providers invested $212 billion in the three years following adoption of the [2010] rules – from 2011 to 2013 – more than in any three-year period since 2002 (FCC 2015: 3).” This claim, which largely accounts for all the “empirical” evidence on investment in the 2015 Order, was based on nothing more than a comparison of three-year summations of nominal investment levels using USTelecom’s data.

Hazlett and Wright evaluated the “more than” claim using investment measured in constant dollars, a near obligatory adjustment when evaluating time-series data (though one ignored entirely by the RIF Order). The analysis is presented in Hazlett and Wright’s (2017) plot of three-year rolling investment levels (in 2014 dollars) over the period 1996–2014 (Figure 1 in the article). Narrowing attention to the relevant time span, it is apparent from this figure that the summed investment during the 2011–2013 period is not the largest since 2002. In fact, the 2011–2013 investment level was below average for this limited sample. Of the eight three-year spans after 2002 but prior to the 2011–2013 window (the first being 2003–2005), capital spending in the sector exceeded the 2011–2013 level in five instances. Hazlett and Wright (2017: 493) conclude, “it appears that the 2011–2013 period was not associated with any notable uptick in trend [in] Internet investment.” The FCC’s Chief Economist at the time the 2015 Order was drafted, Timothy Brennan, described the deliberative process leading to the decision as an “economics-free zone,” though he later qualified his assessment by observing “[e]conomics was in the Open Internet Order, but a fair amount of the economics was wrong, unsupported, or irrelevant” (Brennan 2016). Hazlett and Wright’s simple yet potent dismissal of the 2015 Order’s claim on investment lends additional credibility to Brennan’s assessment, but the RIF Order was silent on these more relevant data points.

3.3 Counterfactual Analysis

Only Ford (2017a) and Hooton (2017) employed a counterfactual analysis based on a control group, both appearing to apply a difference-in-differences (“DiD”) estimator to quantify the effect of Title II regulation. Ford reported large negative investment effects from Title II regulation, while Hooton claimed there was no effect. The DiD estimator is,

$$\delta = (Y_1^T - Y_0^T) - (Y_1^C - Y_0^C), \tag{3}$$

where $\delta$ is the DiD estimator, and the $Y^T$ are the outcomes of the treated group and the $Y^C$ those of the control group (Meyer 1995; Angrist and Pischke 2009: 227). Equation (3) contains three differences, thus the term difference-in-differences: (1) the difference in outcomes between two periods when a treatment is rendered in the second period; (2) the difference in outcomes between two periods when no treatment is rendered in the second period; and (3) the difference between these two differences. Put simply, the estimator adjusts the difference for the treated group by the difference that would occur absent the treatment, as measured by the control group.

Plainly, the DiD methodology requires a valid control group that serves as a stand-in for the treated group to measure its outcome absent the treatment. It is not correct to think of the DiD methodology as comparing the outcomes of two different groups; rather, it aims to compare the outcomes of a treated group with and without the treatment (the latter of which is unobservable). The principal property of the control group is that the outcomes of the treated and control groups would be very similar, both before and after the treatment is given, if the treatment were never applied. This requirement is known as the common trends or parallel paths assumption, which cannot be tested directly but can be subjected to analysis that lends credibility to the control group (Meyer 1995; Angrist and Pischke 2009: 230). Ideally, the control and treated groups would be very similar in many respects, but such situations are rare in observational studies. Still, as long as the common trends assumption is plausibly satisfied, the DiD estimator has validity.

3.3.1 Treatment Date

Another key element of the DiD analysis, as with any other event study, is the selection of the treatment date. Stock returns, for instance, adjust on rumors of mergers and earnings, not necessarily when the formal announcement is made. To quantify the impact of Title II regulation on investment, Ford (2017a) chose a treatment date of year 2010 based on three factors. First, investment analysts suggested that the prospect of Title II regulation had influenced investment decisions in the industry since it was first proposed in 2010 by then Chairman Julius Genachowski.15 Second, media coverage of the industry indicated the threat of Title II was ever present since 2010. Table 2 summarizes the count of articles from the communications trade press discussing Title II reclassification for broadband between 2010 and 2015. As the table shows, the debate over Title II regulation was active across the entire period from 2010 until the FCC’s 2015 Open Internet Order. A lull in 2012 and 2013 likely reflected the ongoing review of the 2010 regulations by the D.C. Circuit Court (which overturned and remanded the FCC’s rules in January 2014). Third, an event study of the surprise announcement of the application of Title II regulation in 2010 revealed a strong negative effect on broadband provider stock returns (Ford et al. 2010), indicating the market viewed Title II regulation as bad for business. It seems improbable, therefore, that the broadband providers ignored the threat of reclassification in their investment planning.

Table 2:

Title II Trade News Coverage.

Year    Articles
2010       595
2011       110
2012        27
2013        65
2014     1,625
2015     2,079

The choice of treatment date for Title II regulation was the subject of much debate. Between Ford and Hooton, however, the argument was moot: Ford (2017a) proposed the 2010 treatment date and Hooton (2017: 10) embraced it, the latter stating the “2010 treatment date is a more accurate implementation year” and that “any study for 2015 impacts should be interpreted cautiously.” Given the 2010 treatment date, which preceded the formal decision by five years, the Commission described Ford’s research as an “assessment of how ISP investment reacted to news of impending Title II regulation,” and based on Ford (2017a) concluded that this research indicated “Title II regulation discouraged ISP investment (FCC 2018: 57).”

Hooton’s counterfactual work was discounted, however, since it involved, as discussed later, forecasted investment figures over much of the treatment period (FCC 2018: 58). Hooton also offered a number of regression-based means-difference studies, but this evidence was likewise found unpersuasive, largely due to a lack of a counterfactual and insufficient regressors, though more serious errors were certainly present than the RIF Order mentions (Ford 2017e,f). All said and done, the RIF Order (p. 58) declared “the studies in the record that control the most carefully for other factors that may affect investment (the Ford study and the Hazlett & Wright study)” support the conclusion that “reclassification of broadband Internet access service from Title II to Title I is likely to increase ISP investment and output.”

3.3.2 Statistical Model

Ford and Hooton applied the standard DiD regression model, which is,

$$k_{it} = \beta X_{it} + \delta D_{it} + \lambda_t + \mu_i + \varepsilon_{it}, \tag{4}$$

where $k_{it}$ is (the natural log of) capital spending for sector $i$ at time $t$, $X_{it}$ is a vector of control variables (where included), $D_{it}$ is a dummy variable that equals 1 following the event date (0 otherwise) for the telecommunications sector, $\mu_i$ is a fixed effect for each sector in the sample, $\lambda_t$ is a time effect common to all observations in time $t$, and $\varepsilon_{it}$ is the econometric disturbance term. This equation is a two-way fixed effects model (time and sector). The coefficient of interest is the DiD estimator $\delta$ (from Equation (3)), which measures the change in $k$ resulting from the regulatory treatment. The t-test on this coefficient indicates whether the “treatment,” in this case reclassification, has a statistically significant effect on investment.
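
A sketch of the two-way fixed effects DiD regression in Equation (4) follows, using statsmodels’ formula interface on a synthetic panel (a stand-in for the BEA sector data; the built-in treatment effect of −0.2 is arbitrary):

```python
# Two-way fixed effects DiD: sector and year dummies plus a treatment
# indicator for telecommunications after 2010.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
sectors = ["telecom", "machinery", "computers", "plastics", "transport"]
years = np.arange(1980, 2016)

df = pd.DataFrame([(s, y) for s in sectors for y in years],
                  columns=["sector", "year"])
df["treated"] = ((df["sector"] == "telecom") & (df["year"] > 2010)).astype(int)
# Log investment: common trend, noise, and a -0.2 treatment effect.
df["ln_k"] = (4 + 0.02 * (df["year"] - 1980)
              + rng.normal(0, 0.1, len(df)) - 0.2 * df["treated"])

fit = smf.ols("ln_k ~ treated + C(sector) + C(year)", data=df).fit()
delta = fit.params["treated"]
print(delta, np.exp(delta) - 1)   # the DiD estimate and its marginal effect
```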

3.4 Ford’s DiD Analysis

Ford employed 36 years of data (1980–2015) on investment in telecommunications infrastructure from the BEA’s fixed assets tables, with the final year being the latest available. As the 2015 Order applied to all broadband providers, it was not possible to craft a control group from unregulated telecommunications providers. While the use of telecommunications investment data from other countries (provided by the OECD) was considered, it was set aside for a number of reasons: (1) the data series ended in 2013; (2) Net Neutrality regulation was being contemplated over the treatment period for nearly all OECD countries; (3) some OECD members have state-owned telecommunications systems; (4) economic recovery from the 2008 recession varied widely across the OECD; (5) investment data from other countries is converted to US dollars which may introduce problems; and (6) the parallel paths assumption had no support. Once the OECD data is updated, it may be possible to construct an empirical model quantifying the effects of Net Neutrality regulation in the US and abroad, but such an effort would require detailed knowledge of the varied regulatory regimes of member nations and the status of their assorted Net Neutrality regulations and proposals, in addition to an empirical model that could account for all of it.

In light of the claimed limitations of the OECD data, Ford turned to the investment activity for the seventy industry sectors in the US covered by the BEA, which were evaluated for trends comparable with telecommunications investment over the pre-treatment period. If sectors could be found that tracked closely with telecommunications investment over three decades, even during two episodes of substantial economic turbulence (the Internet bubble and the global recession), then these sectors could plausibly serve as a valid counterfactual during the treatment period (2011–2015) under the parallel paths assumption. Four sectors were chosen for the control group: (A) machinery manufacturing; (B) computer and electronic products manufacturing; (C) plastic and rubber products manufacturing; and (D) transportation and warehousing. Figure 2 shows the pre-treatment trends, which reveal very similar investment activity. Moreover, the null hypothesis of equal growth rates between telecommunications and the control group could not be rejected, lending support to the parallel paths assumption, though neither comparison is a formal test of that assumption. Additional evidence supporting the control group is summarized below.

Figure 2: Pre-Treatment Investment Trends.

For the full sample (1980–2015, excluding 2010), there were five industry sectors (four controls and telecommunications) and 35 years of data each, for 175 total observations. By far, this was the richest data set among all the studies. The treatment dummy variable equals 1 for the telecommunications sector after 2010 (0 otherwise). Limiting the analysis to 1990–2015 yields 130 total observations, and limiting it to 2000–2015 yields 75. Figure 3 illustrates the post-treatment average investment levels for telecommunications and the control group; a rather large departure of average investment levels is plain to see.

Figure 3: Telecom vs. Controls.

For the three samples, Table 3 presents the estimated DiD coefficients, the marginal effects, and the estimated annual impact on investment in dollars. The DiD estimate is always negative and indicates a marginal effect of around a 20% reduction in investment relative to the counterfactual. All the DiD coefficients are statistically significant at the 1% level or better. Title II regulation, or its threat, was found to reduce capital spending in the telecommunications sector by about $32 billion annually. (The BEA data is more comprehensive and has a larger mean than the USTelecom and OECD data.) This effect is very large and consistent across samples.

Table 3:

Investment Impacts, Ford (2017a).

Sample period    δ                    Marginal effect    Obs.    Yearly inv. effect
1980–2015        −0.226*** (−3.09)    −20.3%             175     −32.0 Bil.
1990–2015        −0.227*** (−3.28)    −20.3%             130     −32.0 Bil.
2000–2015        −0.267*** (−4.83)    −23.4%              75     −38.5 Bil.

Sig. levels: * 10%, ** 5%, *** 1%.

Based on these results, the Commission concluded that the “2010 announcement of a framework for reclassifying broadband under Title II – a credible increase in the risk of reclassification that surprised financial markets – was associated with a $30 billion–$40 billion annual decline in investment (FCC 2018: 57).” Ford offered additional analysis of his DiD models and responded to criticisms in Ford (2017b), though these complementary works were not mentioned in the RIF Order.

Ford provided a number of robustness checks. First, a test for the “best” treatment date was provided, analyzing whether a 2007, 2008, 2009, or 2010 treatment date best fit the data; the results indicated the 2010 treatment date was best. Second, the model was estimated with different control groups (excluding each sector individually), and the marginal effects were not much affected. Third, a number of alternative specifications were considered, including models with additional regressors: industry sales, net capital stock, and both together. The dependent variable was also defined as investment divided by net capital stock. The estimated marginal effects were comparable to the results reported above. Broader economic trends (e.g. GDP, interest rates, and so forth) were accounted for by the time fixed effects, since such factors are common to all industries included in the sample. Finally, a rolling five-year pseudo treatment was applied across the 1980–2015 data to test whether the investment levels of the telecommunications sector were different from those of the control group during any other five-year window (32 distinct tests). Aside from the windows 2010–2014 and 2011–2015, none of these pseudo treatments were statistically different from zero. These results lend credibility to the choice of control group; a sketch of this placebo procedure follows.
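
The sketch below reuses the synthetic panel df from the example after Equation (4): a placebo treatment window is slid across the sample and the DiD model re-estimated each time, with only windows overlapping the true (synthetic) treatment expected to be significant:

```python
# Rolling five-year placebo treatments across the 1980-2015 sample.
import statsmodels.formula.api as smf

for start in range(1980, 2012):            # 32 distinct five-year windows
    window = range(start, start + 5)
    df["pseudo"] = ((df["sector"] == "telecom")
                    & df["year"].isin(window)).astype(int)
    fit = smf.ols("ln_k ~ pseudo + C(sector) + C(year)", data=df).fit()
    print(start, round(fit.params["pseudo"], 3),
          round(fit.pvalues["pseudo"], 3))
```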

While a number of criticisms were levied against Ford (2017a), mostly related to the choice of treatment date and control group, the Commission addressed only one criticism from Nix et al. (2017). Nix et al. (2017) insisted Ford’s estimates were invalid because Title II applied to portions of the telecommunications industry during the pre-treatment period, specifically DSL services which were reclassified as a Title I service in 2005. While it was a criticism worth considering, the Commission dismissed the complaint noting that “the bulk of broadband subscribers used cable modem services that were not regulated under Title II (FCC 2018: 57).” Moreover, the Commission reasoned that the application of Title II to DSL during the period would, if anything, “imply Ford’s negative result for investment was understated (FCC 2018: 57),” referencing Hazlett and Wright (2017) in support. Accepting the complaint as valid, Ford’s estimates may be sensibly described as reliant on the assumption that the application of Title II to the entire industry was a different treatment than the application to DSL alone, which seems reasonable.16 Also, the Commission’s argument that the application of Title II during the pre-treatment period to a relatively small portion of the industry leads to an understatement of effects during the treatment period seems plausible (since investment in the pre-treatment period was likewise below its unregulated level).

How one interprets Ford in reference to Title II was and continues to be debated. It seems without question, however, that Ford (2017a,b), later published with additional analysis in Ford (2018a), demonstrated that telecommunications investment after 2010 departed materially from the investment levels of sectors whose capital spending had tracked telecommunications investment very closely over the prior thirty years. Figure 3 illustrates the marked departure of telecommunications investment from the (average of the) control group after 2010. For no other five-year span did telecommunications investment depart from the control group in a statistically-significant manner. Capital spending is likewise linked to employment levels (Beard et al. 2014), and other work by Ford (2017c) confirmed with a DiD analysis that sector employment was likewise down after 2010 in a statistically-significant manner. Something clearly happened to the telecommunications industry in 2010 that reduced capital spending relative to the counterfactual, even holding things like net capital stock and sales constant. It is difficult, in light of the strength of the evidence, to fault the Commission for giving credence to these results.

3.5 Hooton’s DiD Analysis

Aside from the 2010 treatment date and the application of the DiD framework, Hooton’s analysis departed substantially from that of Ford. Investment for the US broadband industry was measured using USTelecom data from 1996 through 2013 (in per-capita terms), forgoing the readily-available BEA data spanning many decades. A single control (not a control group) was constructed by averaging investment data from an unspecified number of OECD nations (no details were provided), losing much data and variability in the process. The OECD data ended in 2013, leaving only three years of post-treatment data. Beyond these data limitations, Ford (2017a,e,f) rejected the OECD nations as a control group for a variety of reasons detailed above. The RIF Order (ft. 345) also rejects the use of international data due to “the much more regulatory European system, which includes mandatory unbundling at regulated rates.” Moreover, most OECD nations were contemplating or implementing some form of Net Neutrality during the pre-treatment or treatment periods, perhaps affecting investment levels. Recognizing this problem, Hooton states that he “remov[ed] all countries that have [Net Neutrality] or have discussed it in their legislative bodies,” but no listing of the countries excluded (or included) is provided. As a consequence of this poor documentation, replication of Hooton’s work was not possible.17 Despite the DiD methodology’s reliance on the parallel-paths assumption, Hooton fails either to mention it or to assess its plausibility for the control unit. With no evidence to support the commonality of trends in the pre-treatment period, there is no reason to expect commonality in the post-treatment period (absent the treatment), and little basis upon which to embrace the “no effect” result Hooton reports.

Perhaps most alarming is that, despite all the data ending in 2013, Hooton estimates the DiD model using observations reaching through 2020, which at the time of the study (June 2017) included investment data from four years in the future. “Missing” observations from 2014 through 2016 (which were actually available for the USTelecom data) and “yet to occur” observations from 2017 through 2020 were “predicted” using a regression-based forecast equation applied to the historical data. To be clear, the forecasts were not used to construct a counterfactual à la Horney; the forecasts were used as the investment data itself, with these extrapolations from historical data accounting for seven of the ten post-treatment observations. It goes without saying that this approach is entirely illegitimate. Even Hooton conceded that the “use of forecasted data for impact evaluations is a flawed approach (Hooton 2017: 10).” The Commission, apparently accepting Hooton’s assessment of his own work, rejected the study for using “forecast rather than actual data (FCC 2018: 58).”18 Though the method is flawed, it is worth noting that the estimated DiD coefficients from Hooton’s four models (from his table B1) are always negative but never statistically different from zero.

The RIF Order mentions the attempt by Ford (2017f) to replicate Hooton’s analysis using only the available data (through 2013) rather than extrapolating observations into the distant future. The RIF Order observes, “[Ford] attempted to replicate the results of table B1 and obtained strikingly different results when excluding the forecast data (FCC 2018: ft. 360).” Indeed, Ford (2017f) found the DiD coefficients to be negative and statistically significant. Still, the RIF Order placed “limited weight” on these replications because “[Ford] chose to only estimate Hooton’s baseline model, which did not control for obviously confounding factors such as the business cycle (FCC 2018: ft. 360).” Because the OECD data include observations from different countries, adding macroeconomic variables was feasible, unlike in Ford’s (2017a,b) work, which used only US data (where such common factors are absorbed by the time fixed effects).

The Commission’s concern about “other factors” can be addressed, as can the problem of Hooton’s forecasted investment data. Subsequent to the close of the record for the RIF Order, the OECD released investment data from 1997 through 2015, making it possible to use actual investment data over a longer period ending in the same year as Ford’s data.19 Complete data over the period are available for 28 nations. Matching Hooton, the dependent variable of the DiD regression is investment-per-capita and the treatment date is 2010 (which is excluded from the regression as a transition year). GDP-per-capita is included as an “other factor.” Results are obtained first for all OECD nations and second for those that had not implemented some sort of Net Neutrality policy according to the 2015 Communications Outlook.20
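As a point of reference, a minimal sketch of the two-way fixed-effects DiD regression just described follows. The file name and column names (country, year, inv_pc, gdp_pc, us, nn) are assumptions for illustration, and standard errors are clustered by country as one common way to obtain robustness to heteroskedasticity and within-panel serial correlation.

import pandas as pd
import statsmodels.formula.api as smf

oecd = pd.read_csv("oecd_panel.csv")              # hypothetical country-year panel
oecd = oecd[oecd["year"] != 2010]                 # drop the 2010 transition year
oecd["did"] = oecd["us"] * (oecd["year"] > 2010)  # US in the treatment period

def estimate(sample):
    fit = smf.ols("inv_pc ~ did + gdp_pc + C(country) + C(year)",
                  data=sample).fit(cov_type="cluster",
                                   cov_kwds={"groups": sample["country"]})
    return fit.params["did"], fit.tvalues["did"]

print(estimate(oecd))                                          # all OECD nations
print(estimate(oecd[(oecd["nn"] == 0) | (oecd["us"] == 1)]))   # drop NN adopters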

Estimates from Equation (4) are provided in Table 4 with standard errors that are robust to both cross-sectional heteroskedasticity and within-panel serial correlation. The GDP variable was positively signed and statistically different from zero at better than the 5% level in both regressions. Consistent with Ford, the DiD estimator δ is negative and statistically different from zero at better than the 5% level in both samples. Investment-per-capita in the US over the treatment period averaged $245, so the results indicate about a 17% decline in investment (−52.18 relative to a counterfactual of roughly $297), a decline very close to Ford’s estimate of a 20% reduction. The δ coefficient and the marginal effects are not much affected by the sample restriction (changing from −52.18 to −50.39), a result that may simply reflect the crude nature of the Net Neutrality indicator provided in the Communications Outlook.21 Net Neutrality regulations vary widely across member nations in both form and the seriousness with which nations pursued them. Yearly investment declines are on the order of $17 billion, which is smaller than Ford’s estimate, though this difference simply reflects the fact that the BEA investment data are more encompassing, and thus have a larger mean, than the USTelecom and OECD data.

Table 4: Investment Impacts using OECD Data.

Control                   δ                    Marginal effect   Nations (Obs.)   Yearly inv. effect
All OECD                  −52.18*** (−2.97)    −17.6%            28 (504)         −17.5 Bil.
OECD no Net Neutrality    −50.39** (−2.26)     −17.1%            19 (342)         −16.9 Bil.

Sig. levels: * 10%, ** 5%, *** 1%.

Do these estimates confirm the negative investment effect? Is the inclusion of “other factors” sufficient to qualify the estimated DiD coefficient as valid? No. Causal analysis requires a careful study of the data used to estimate the model. First, consider the common trends assumption. A test of growth rates between telecommunications and the control group in the pre-treatment period (using linear regression) finds no difference for either sample (t-stat < 1 for both), but this null result appears to be driven by the large standard errors of the coefficients. Visual inspection of the investment data, provided in Figure 4 for the mean-centered series, offers little support for the parallel-paths assumption.22 Also, of the ten rolling five-year pseudo-treatments in the pre-treatment period, eight DiD coefficients are statistically different from zero. While this analysis does not preclude the use of OECD data as a control group, it seems clear that a richer analysis than that provided here is required to do so.
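Continuing the hypothetical panel from the sketch above, the pre-treatment trend comparison might be implemented by interacting a linear trend with the US indicator; a small t-statistic on the interaction is consistent with parallel paths, though wide confidence intervals can render the test uninformative, as appears to be the case here.

import statsmodels.formula.api as smf

pre = oecd[oecd["year"] < 2010].copy()            # pre-treatment years only
pre["trend"] = pre["year"] - pre["year"].min()    # linear time trend
fit = smf.ols("inv_pc ~ trend + trend:us + C(country)", data=pre).fit(
    cov_type="cluster", cov_kwds={"groups": pre["country"]})
print(fit.tvalues["trend:us"])  # differential pre-trend for the US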

Figure 4: US Telecom vs. OECD Control (mean-centered investment series).

3.6 Crandall’s Event Study

One additional study cited in the RIF Order is a financial event study by Crandall (2017). Crandall looks for abnormal stock returns around four events he deemed relevant to Net Neutrality regulation: (1) the 2014 Verizon v. FCC decision overturning the 2010 Open Internet Order (“2010 Order”); (2) the 2014 agreement between Comcast and Netflix regarding interconnection; (3) President Obama’s November 2014 YouTube video calling on the FCC to impose Title II regulation; and (4) the adoption of the 2015 Order in March 2015. Very little evidence of abnormal returns is found, with the exception of a few negative and statistically significant abnormal returns for a few cable operators.
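For readers unfamiliar with the method, a market-model event study of this sort can be sketched as follows. The data file, event date handling, estimation window, and significance gauge are illustrative assumptions, not Crandall’s actual specification.

import pandas as pd
import statsmodels.formula.api as smf

rets = pd.read_csv("daily_returns.csv", parse_dates=["date"]).sort_values("date")
event = pd.Timestamp("2014-11-10")                # Obama's YouTube statement

# Estimate the market model (R_firm = a + b * R_mkt) on a pre-event window
est = rets[rets["date"] < event - pd.Timedelta(days=30)].tail(250)
mm = smf.ols("firm_ret ~ mkt_ret", data=est).fit()

# Abnormal return = actual return minus the market model's prediction
day = rets[rets["date"] == event]
abnormal = day["firm_ret"].iloc[0] - mm.predict(day).iloc[0]
print(abnormal, abnormal / mm.resid.std())        # crude significance gauge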

The RIF Order dismisses the Crandall study, concluding the lack of measurable effects “may reflect the forward-looking, predictive capabilities of market players (FCC 2018: ft. 346).” Perhaps that is true, but it is also the case that the event dates studied were not terribly germane. The Verizon v. FCC decision remanded the FCC’s earlier 2010 Open Internet Order, and there was a great deal of uncertainty as to how the FCC would respond to that remand. It was taken by some to be a victory for broadband providers (Verizon was the plaintiff, after all), but others recognized the potential danger of more aggressive action. The expected sign on this event for broadband providers is therefore ambiguous. As for the Comcast/Netflix arrangement, such agreements represented nothing new in the delivery of content across large networks; it was business as usual. Likewise, the formal adoption of the 2015 Order was no surprise at all.

The only real “event” (a surprise) of these four was President Obama’s YouTube video on November 10, 2014. Crandall reports that on that day the three largest providers of cable broadband service (Comcast, Time Warner Cable, and Charter) experienced large negative abnormal returns. Title II was taken to be bad news for cable stocks, as it was in 2010 when it was first proposed (Ford et al. 2010). Why the data show no abnormal returns for the telephone companies is unclear, but at the time it was not expected that Title II would apply to mobile wireless carriers like Verizon and AT&T. It is interesting, however, that both Ford et al. (2010) and Crandall (2017) find that investors believe it is the cable industry that bears the brunt of the Title II debate, perhaps because these companies have never been regulated under Title II (they are regulated under Title VI) and are thus entirely inexperienced with any common carriage regulatory scheme.

4 Conclusions and Policy Impact

Quantifying the effect of a policy depends on the ability to produce a plausible counterfactual, an idea dating back at least to Rubin (1974). In reviewing the evidence on the investment effects of Title II regulation, the FCC came to that very conclusion. While counterfactual analysis is the standard approach in modern impact analysis, a Westlaw search of FCC orders indicates the RIF Order is the first to use the term “counterfactual” in an empirical context (one other use was found in the context of a mathematical merger simulation). At minimum, the RIF Order was more thorough than the economic analysis undertaken by the Commission in its 2015 Order (Brennan 2016; Ellig 2017; Hazlett and Wright 2017). An appeal to counterfactual analysis is not a small advancement in policy analysis at the agency, but only time will tell whether the demands the RIF Order places on analysts are a watershed or a one-off. Perhaps the Commission and its new Office of Economics and Analytics will continue to demand more rigorous empirical work, but I cannot say I am hopeful in that regard, since the FCC reflects the character of its temporary leaders.

In all, I believe the FCC’s assessment of the record evidence on the investment effects of Title II regulation was reasonable, even if incomplete, though this article is intended to help others reach their own conclusions in that regard. I do believe that empiricists would likely agree with the general empirical principles that can be extracted from the RIF Order, which I take to be the following. First, simply comparing outcomes before-and-after an event is not a valid impact analysis, though such comparisons may be suggestive. Second, regression-based mean-difference tests of policy changes (e.g. event studies) are probative, but only if the model is adequate. Third, the causal effect of a regulation is best determined with reference to a counterfactual. Fourth, the application of proper methods does not excuse the use of bad data.23 As detailed here, the conclusions of the RIF Order follow from these four basic principles. As these principles are consistent with modern impact analysis, the RIF Order offered a reasoned and reasonable assessment of the evidence before it.

Still, this review of the RIF Order demonstrates that while the regulatory agency was given much evidence on the investment effects of its Net Neutrality regulations, most of it was suggestive at best, some of it was easily dismissed for conspicuous flaws, and very little of it was found to be relevant and credible. The record on investment effects considered by the Commission was not rich by any means, even though far more evidence was evaluated in the RIF Order than in the earlier 2015 Order it overturned, a decision that relied on no empirical evidence at all. With Net Neutrality being a politically-charged issue, both in the US and abroad, a lack of robust evidence on its effects risks allowing policies to become entirely unhinged from economic reality, a situation that will almost certainly lead to inefficient outcomes.24 All parties to the debate believe that Net Neutrality regulation, or its absence, has important consequences for consumers and the businesses that participate in the Internet ecosystem, and it is up to interested parties to offer credible analysis of such effects. As nations grapple with how best to maximize the value of modern communications, more evidence on this important policy topic is desirable.


Footnotes

1

Telecommunications Act of 1996, Pub. L. No. 104–104, 110 Stat. 56 (1996) (available at: https://www.fcc.gov/general/telecommunications-act-1996).

2

Between the two orders was a national election that switched the FCC’s administration from a Democrat to a Republican majority. The question of regulating broadband under Title II of the Communications Act is largely split along party lines.

3

Theoretical analyses of the investment effects of Net Neutrality regulation offer mixed predictions, but none of the available works contemplate the application of the breadth of regulation available under Title II of the Communications Act. The investment effect of an actual policy vector (rather than mere stylized constraints) is an empirical question requiring an empirical answer. For a discussion of the theoretical work on the effects of Net Neutrality, see Jamison (2019), also published in this Special Issue.

4

Time series data may also be useful in this regard by forecasting a counterfactual from historical data.

5

Unconfoundedness is likewise referred to as exogeneity, ignorability, or selection on observables.

6

A number of techniques are available to craft good control groups to avoid omitted variables bias including various matching algorithms (Imbens and Wooldridge 2009: 28).

7

While some questions were raised about the capital spending of edge providers like Netflix and Facebook, no meaningful evidence was submitted to the FCC on the effects of its rules on such expenditures. To the extent these investments were in telecommunications infrastructure, they were included in the analysis by Ford (2017a), who used a broad measure of telecommunications investment.

8

What would the Commission have said about a small increase in capital spending after the 2015 Order? The relevant question, however, is whether it rose by more or less than expected had the 2015 Order not been adopted.

9

Horney’s work was finalized before the official investment figure for 2016 was released, requiring an estimate of 2016 investment. See Ford (2018b).

10

A more detailed and updated analysis of Horney’s analysis is provided in Ford (2017d, 2018b).

11

The predictions are based on a Least Squares regression of nominal investment on a time trend, or a time trend and nominal GDP, spanning the period 2003 through 2014. The estimated coefficients are used to predict the investment counterfactual for 2015 and 2016. Newey-West standard errors are used to craft the confidence interval.
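A minimal sketch of such a trend-based forecast (the trend-only variant) is given below; the input file, column names, and the two-lag HAC choice are assumptions for illustration, not Horney’s actual code.

import pandas as pd
import statsmodels.formula.api as smf

hist = pd.read_csv("nominal_investment.csv")      # hypothetical: year, inv
hist = hist[hist["year"].between(2003, 2014)]
fit = smf.ols("inv ~ year", data=hist).fit(
    cov_type="HAC", cov_kwds={"maxlags": 2})      # Newey-West standard errors
future = pd.DataFrame({"year": [2015, 2016]})
print(fit.get_prediction(future).summary_frame(alpha=0.05))  # forecasts + CI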

12

Given evidence of serial correlation, the authors apply the Prais-Winsten estimator (Kadiyala 1968).

13

Given the specification, if β2 is positive, then the growth rate grows to infinity (which would be odd), but if the coefficient is negative, growth declines asymptotically to zero, which is consistent with the series migrating to a mature level of subscriptions. The estimated coefficient on the natural log of the trend is negative. Under this specification, the effect of the regulatory change is proportional to the growth rate. In this setting, the growth rate is declining but shifts upward (yet still declines asymptotically) if the coefficients β3 and β4 are positive.

14

Cable modem growth and the Canada variables were generally not statistically significant determinants of log-growth in the model.

15

Press Release, Federal Communications Commission, The Third Way: A Narrowly Tailored Broadband Framework, Statement of Chairman Julius Genachowski 4–5 (May 6, 2010) (available at: http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-297944A1.pdf).

16

In studies of prescription drugs, for example, different potencies are often applied in an effort to find the most efficacious treatment.

17

This lack of detail is problematic given that the addition or subtraction of even one country might dramatically alter the results. It is also unclear how, or if, Hooton addressed the changing membership of the OECD over the sample period. Chile, Estonia, Israel, the Slovak Republic, and Slovenia all joined during the sample period, and investment-per-capita in each is below the OECD average (some substantially so), reducing the OECD average over time and thereby distorting the comparison to US investment data.

18

An attempt by the Internet Association, a trade group for edge companies and Hooton’s employer, to convince the FCC that the estimates for the 2010 treatment date did not include forecast data was rejected. The FCC observed that “comparing the reported number of observations in table B1 and B2 of the [Hooton] study clearly indicates that the same datasets were used to estimate 2010 and 2015 effects (RIF Order: ft. 360).”

19

Data obtained from OECD, Key ICT Indicators (table 9b) (http://www.oecd.org/sti/broadband/oecdkeyictindicators.htm).

20

Details are available at: http://www.oecd.org/sti/broadband/2-9.pdf.

21

This analysis is for replication and comparison purposes alone.

22

Attempts to construct a control group by visual inspection and other means, including the synthetic control method (Abadie et al. 2015), were unproductive, as the investment series are highly irregular and often exhibit large changes near the event date that are inconsistent with investment changes in the United States.

23

There are numerous methods to replace missing data, but that is an entirely different problem from the one addressed here. Even so, an analyst should be clear about the methods used to replace missing data and their impact on results.

24

It remains an open question whether the scarcity and often low quality of the evidence is the cause of, or the effect of, the politicization of the issue.
