Journal of Quantitative Analysis in Sports

An official journal of the American Statistical Association

Editor-in-Chief: Mark Glickman PhD

SCImago Journal Rank (SJR) 2014: 0.265
Source Normalized Impact per Paper (SNIP) 2014: 0.513
Impact per Publication (IPP) 2014: 0.452


Analysis of the NCAA Men’s Final Four TV audience

Scott D. Grimshaw1 / R. Paul Sabin1 / Keith M. Willes2

1Department of Statistics, Brigham Young University, Provo, UT 84602, USA

2BYU Broadcasting, Brigham Young University, Provo, UT 84602, USA

Corresponding author: Scott D. Grimshaw, Department of Statistics, Brigham Young University, Provo, UT 84602, USA, Tel.: +1-801-422-6251

Citation Information: Journal of Quantitative Analysis in Sports. Volume 9, Issue 2, Pages 115–126, ISSN (Online) 1559-0410, ISSN (Print) 2194-6388, DOI: 10.1515/jqas-2013-0015, June 2013



This is the first paper to investigate factors that affect the size of the TV audience for the NCAA Men’s Final Four. The model is based on Nielsen data for 54 markets over 10 years. One of the most interesting results is that college basketball teams have a measurable effect in their local markets, but even the biggest name programs do not have a national effect. However, the little-known teams that succeed in the tournament, known as Cinderellas, have a national effect, likely due to media attention and success on the court. Broadcasters and advertisers are interested in maximizing the TV audience, and the model allows predictions that compare games between big market teams, big name teams, David vs. Goliath games, and a Championship game between two Cinderella teams.

Keywords: TV; Nielsen ratings; NCAA basketball; March Madness; demand for sport

1 Introduction

In 2010 the NCAA negotiated its largest ever TV contract for the NCAA Men’s Basketball Tournament with partners CBS Sports and Turner Sports. The $10.8 billion deal for 2011 to 2024, which averages $771 million per year and is only slightly smaller than the $930 million per year the NBA receives from partners ESPN/ABC and TNT, indicates the popularity of collegiate basketball and demonstrates the value to broadcasters and advertisers of a live event for a valued demographic. There are many reasons to analyze the TV audience of the NCAA Men’s Basketball Tournament. From a business perspective, many parties are interested in the size of the TV audience. For example, the NCAA and bidding networks would include predictions of future audiences as one of many factors used in negotiating the price of future broadcast rights, and advertisers place ads with both national and local broadcasters that rely on estimates of audience size in purchasing the ad inventory during the games and on actual audience size to justify to their clients the value of the ad campaign. From an economic perspective, the study of the demand for sport uses TV viewing as a measure of “fan interest,” and the factors that have an effect on demand can be used to assess the judgments made by stakeholders. For example, all stakeholders (NCAA, broadcasters, advertisers) benefit from maximizing TV viewing, up to the limits of preserving tournament fairness and competitive play. From the sports perspective, the analysis may add insight into questions about the popularity of different teams, the appeal of Cinderella teams, and comparisons between regions of basketball fans.

Much of the research on the NCAA Men’s Basketball Tournament has focused on predicting the winners of tournament games, ranking teams compared to tournament seeding, and evaluating the tournament design. Schwertman, Schenk and Holbrook (1996) show that simple statistical models using seed position can predict the probability of each of the seeds winning the regional tournament, and Carlin (1996) extends those models to allow explanatory variables that would be available at the start of the tournament, such as team rankings and point spreads. West (2008) proposes a flexible rating to predict the expected number of tournament wins for the tournament teams using summaries of the season’s data or rankings. Stekler and Klein (2012) investigate models for predicting the winner of each game using the difference in seeding and the average rankings of analysts. Harville (2003) proposes a modified least squares procedure to produce a ranking where the estimated team effects can be used to predict the winner of NCAA tournament games or to assess the fairness of the NCAA selection committee’s at-large invitations or seeding. Instead of using the regular season game outcomes, Coleman and Lynch (2009) investigate team performance statistics provided in the “nitty-gritty report” used by the NCAA Division I Men’s Basketball Committee to select the at-large teams and assign the seeds to predict tournament game winners. Brown and Sokol (2010) extend the LRMC method, which combines a logistic regression model to estimate head-to-head differences in team strength with a Markov chain model that combines those differences into an overall ranking, to include empirical Bayes models that condition jointly on all outcomes between a pair of teams, and demonstrate improved prediction. Bauman, Matheson and Howe (2010) point out that there may be an inefficiency in the tournament design since the 10 and 11 seeds average more wins and typically progress farther in the tournament than the 8 and 9 seeds.
Morris and Bokhari (2012) clarify this to mean that, while there is no significant difference in the mean number of wins or the proportion of teams advancing to the Sweet Sixteen among the 8/9, 10, 11, and 12 seeds, there is a significant difference among teams that win their first game. Gray and Schwertman (2012) compare the correlation of a measure of each team’s tournament performance with the seeding from the NCAA committee and the end-of-season rankings from Sagarin and RPI, and conclude that in 2011 the NCAA committee seeding produced a better tournament than the computer methods. In an investigation of the conference tournaments that determine many of the NCAA tournament teams, Balsdon, Fong and Thayer (2007) find strong evidence that teams sometimes tank in the conference tournament and suggest that the underperformance benefits the conference by increasing the number of NCAA tournament bids, which results in increased revenue sharing.

This paper investigates the TV audience of the NCAA Men’s Basketball Tournament Final Four. Using data from the last 10 years (2003–2012) enables an analysis during some of the tournament’s largest TV audiences. The TV audience for a sporting event can be viewed through a consumer-theory model, where consumers choose to watch the broadcast on TV because it maximizes their entertainment subject to a budget constraint. Following current research, a consumer-theory model suggests the main categories of demand for viewing a sporting event are consumer preference (fan of the sport, team, tournament, players), economic (price, availability of substitutes, macroeconomic conditions), quality of viewing (broadcast quality, season and time of broadcast), characteristics of the game (outcome uncertainty, team quality, outcome significance), and supply capacity (available audience).

The model for the TV audience of the ith market for a game with teams j and k is

$$\log(\text{aud}_{ijk}) = \gamma_i + \theta_j + \theta_k + \beta_1\,\text{cinderella} + \beta_2\,\text{no1seed} + \beta_3\,\text{nbadraft} + \beta_4\,\text{seedmin} + \beta_5\,\text{pointspread} + \beta_6\,\text{semifinal} + \beta_7\,(\text{year} \times \text{semifinal}) + \beta_8\,\text{latestart} + \lambda_i + \beta_9\,\text{nba}_i + \beta_{10}\,\text{mormon}_i + \varepsilon_{ijk}$$
The response variable aud is the number of households who watched the game in the ith market. The effects due to market and each team’s effect within their local market are denoted by γi and θj, respectively. The model includes other explanatory variables that have been shown in other papers to impact demand as they apply to the Final Four: cinderella identifies games involving a relatively unknown team having unusual success in the tournament, no1seed identifies games involving the number one overall seed in the tournament, nbadraft counts the number of players in the game who are top NBA prospects, seedmin is the lowest tournament seed of the two teams, pointspread reflects how close the game is perceived to be, semifinal indicates the Saturday games, year identifies the year between 2003 and 2012, latestart indicates when the second game on Saturday starts later than scheduled because the prior game went long. Because the model predicts each market, the following factors have an effect only within market i: λi indicates the impact of hosting the Final Four in market i, nba indicates an NBA game involving the NBA team located in market i, mormon indicates the conflict in the SLC market between a religious meeting and the Saturday Semifinal games.

The majority of the research on fan demand for sport has used game attendance. Borland and Macdonald (2003) review the literature on factors that affect attendance, most often for baseball in the US and soccer in the UK, but also including football, basketball, hockey and rugby. Some of these studies have included whether or not an event is televised as an explanatory variable to measure the lost gate revenue when fans have a lower cost option to watch the game. While game attendance will have some similarities to TV audience, there are some differences for the Final Four. Game day attendance is much lower than the TV audience because watching the game on TV has a lower opportunity cost (both in time, since watching 6–8 h of game time is much smaller than the long weekend of the Final Four, and in financial commitment, where advertisers pay a large portion of the cost in return for consumers’ attention) and is not limited by the size of the arena (which has a maximum even though the Final Four is played in arenas much larger than a typical basketball arena).

The studies of TV audience have explored the NFL and professional soccer, which are some of the highest rated programs in the US and Europe. In order for sports to draw a large TV audience the game needs to be compelling, which is sometimes at odds with the objectives of team owners, coaches, and players. Buraimo, Forrest and Simmons (2007) describe the outcome uncertainty hypothesis and find in their study of professional Spanish soccer games that the betting odds are a good representation of how close the game is perceived to be. This extends Forrest and Simmons (2002) and Forrest and Simmons (2006), which investigate the betting odds as a measure of outcome uncertainty in game attendance in English professional soccer. Bojke (2007) measures the importance of a game by the likelihood of making the playoffs and finds it had a significant effect on regular season game attendance in English professional soccer. Buraimo and Simmons (2009) and Buraimo (2008) simultaneously model stadium attendance and TV audience and find a contradiction where fans at the game prefer contests where the outcome is expected to be a home victory, but TV viewers are drawn to games that are close. Mongeon and Winfree (2012) investigate the NBA and find winning had a higher impact on the TV audience than on game attendance.

Among some of the most highly viewed games, Forrest, Simmons and Buraimo (2005) find that viewing is highest for games involving the star players in the English Premier League, but not all players are equal: Kanazawa and Funk (2001) find underlying racism in NBA games in 1996–1997, where white players drew larger audiences than black players, but the opposite was true in Aldrich, Arcidiacono and Vigdor (2005) for NFL quarterbacks. Paul and Weinbach (2007) study Monday Night Football games and find the highest rated games feature high scoring winning teams where the game outcome is uncertain. Nüesch and Frank (2009) study Swiss ratings of the World Cup and European Championship and find that while the strength of the competition matters, patriotism impacts ratings in national soccer team games. Tainsky (2010) investigates TV viewing for NFL games in each US TV market and finds that while games involving winning teams in primetime attract high ratings, other factors such as the tenure of a team in a TV market and teams sharing a TV market also impact ratings. Alavy et al. (2010) use minute-by-minute TV viewing during English Premier League games and find that viewers demand excitement, which is likely associated with meaningful play that results in a win, since viewers are likely to switch channels if a draw looks likely. Feddersen and Rott (2011) include friendlies in the data, add progression through tournament play as a factor in the model, and from a TV business perspective obtain an estimate of the reduction in viewing when the game is broadcast by a private instead of a public network. Berkowitz, Depken and Wilson (2011) find that the NASCAR season differs from other sports by having TV ratings that peak with the season-opening Daytona 500 and then decrease through the season, but competitiveness in the Chase for the Cup races can increase ratings.
Tainsky and McEvoy (2012) focused attention on TV markets that do not have an NFL team in research that is valuable to local affiliates who have some input in which games to broadcast, and, in addition to effects found in previous research, find an effect for the closest team to the market (indicating a local sport fan effect) and an effect for games involving the Cowboys and Patriots (indicating a few brands may have national appeal).

The model in this paper can be used to make predictions of future TV audiences for the Final Four games for advertisers and local affiliates, as well as conjectures about teams that would maximize total viewing. In the past 10 years the NCAA Final Four has had “Cinderella” teams from small schools that advance further in the tournament than their seeding suggests. The previous research in professional soccer and football would suggest these games would have the smallest TV audiences. However, this paper shows that Cinderella teams, with their accompanying media attention, produce a significant national effect that exceeds any other team’s local effect.

2 Data

The NCAA Division I Men’s Basketball Tournament has decided the basketball national champion since 1939. The tournament has been televised in some fashion since the late 1960s and is one of the premier sporting events in America. The tournament has evolved from an 8 team playoff in 1939 to a 68 team format in 2012. The Semifinal and Championship games on the last weekend of the tournament have always been referred to as the “Final Four.”

For many years the Final Four has taken place the first weekend of April, with two evening games on Saturday and then the final game Monday night. The tournament over the past 10 years has seen many memorable sports moments and includes the teams listed in Table 1. In the 2003 Final Four, the nation was introduced to future NBA superstars Dwyane Wade, who played for Marquette, and Carmelo Anthony, who led Syracuse to the National Championship. Florida, a school historically connected with football success, became a basketball sensation in 2006 and 2007, winning back-to-back National Championships. The NBA changed the rules of the draft starting in 2006 and required players to be at least a year removed from high school before entering the draft. This created a new recruiting technique in the collegiate ranks (the “one and done”). John Calipari coached Memphis to the 2008 Championship game using five players thought to be ready to jump to the NBA draft, including future NBA MVP Derrick Rose. While Kansas, coached by Bill Self, won that hard-fought game in overtime using more upperclassmen, four years later in 2012 Calipari, using different “one and done” players and now coach at Kentucky, beat Kansas in a match-up of the two winningest NCAA Division I basketball programs in history. Widely considered the biggest rivalry in college basketball, North Carolina and Duke have both won titles in the last 10 years. Roy Williams took over as coach at North Carolina in 2003 and led them to the Championship in 2005 and 2009. Duke narrowly beat Butler, a Cinderella team, in 2010 to win Mike Krzyzewski’s fourth National Championship. Not to be forgotten is the University of Connecticut, which also won National Championships in 2004 and 2011 during the time period of this data, bringing coach Jim Calhoun to three National Championships.
The 2011 Connecticut team defied the odds by winning all 11 of its post-season games in the Big East Conference Tournament and the national championship.

Team | Final Four Appearances (2003–2012) | NCAA Championships (2003–2012) | Local Market (Nielsen DMA Rank)
Butler | 2 | 0 | Indianapolis (26)
Connecticut | 3 | 2 | Hartford & New Haven (30)
Duke | 2 | 1 | Raleigh-Durham (24), Charlotte (25), Greensboro (46)
Florida | 2 | 2 | Jacksonville (50), Orlando (19), Tampa-St. Petersburg (14)
George Mason | 1 | 0 | Washington, DC (8)
Georgetown | 1 | 0 | Washington, DC (8)
Georgia Tech | 1 | 0 | Atlanta (9)
Illinois | 1 | 0 | Chicago (3), Indianapolis (26), St. Louis (21)
Kansas | 3 | 1 | Kansas City (31)
Kentucky | 2 | 1 | Louisville (48)
Louisville | 2 | 0 | Louisville (48)
LSU | 1 | 0 | New Orleans (52)
Marquette | 1 | 0 | Milwaukee (34)
Memphis | 1 | 0 | Memphis (49)
Michigan State | 3 | 0 | Detroit (11)
North Carolina | 3 | 2 | Raleigh-Durham (24), Charlotte (25), Greensboro (46)
Ohio State | 2 | 0 | Columbus (32), Cincinnati (35), Cleveland (18)
Oklahoma State | 1 | 0 | Oklahoma City (44), Tulsa (59)
Syracuse | 1 | 1 | Buffalo (51), New York (1)
Texas | 1 | 0 | Austin (47), San Antonio (36)
UCLA | 3 | 0 | Los Angeles (2)
VCU | 1 | 0 | Richmond (57)
Villanova | 1 | 0 | Philadelphia (4)
West Virginia | 1 | 0 | Pittsburgh (23)
Table 1

Teams appearing in the NCAA Men’s Basketball Tournament Final Four, 2003–2012. The rank of the Nielsen DMA for 2011–2012 is given in parentheses.

The response variable aud is the number of households that watched the game either live or the same day from a recording, as reported by Nielsen for the given designated market area (DMA). Nielsen uses a complex sampling approach based on the viewing of representative panels in each DMA that are weighted to obtain audience estimates. Because of Nielsen’s efforts to construct an area probability sampling frame, their estimates do not suffer from provider biases. Since Nielsen measures “in-home” viewing, patrons watching the game in a sports bar or other public setting are not included in the sample. An open research question is the set of factors that affect the revenue or profitability of sports bars (of which audience is only one part). In each DMA Nielsen reports the number of households that watched at least 5 min within a specific quarter hour, and for a program such as a basketball game, the reported TV audience is the mean over the quarter hours of the game. Typically, the audience builds over the course of the game, with the largest audience in the last quarter hour. Other papers on TV audience have used ratings (the percentage of households with a TV that were watching the given program) as the response variable, but a recent trend in broadcasting focuses more on the size of the audience than on the fraction. The log-transformation results in a more symmetric distribution of the residuals.
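As a small illustration of how the response variable is built, the reported audience is the mean over the game’s quarter hours, and the model works with its logarithm. The quarter-hour counts below are made up for illustration, not taken from the Nielsen data:

```python
import math

# Hypothetical quarter-hour household counts (in thousands) for one game in
# one DMA; the values are illustrative, not from the paper's data.
quarter_hours = [310, 325, 340, 355, 370, 390, 410, 430]

# Nielsen-style reported audience: the mean over the game's quarter hours,
# with the audience typically building toward the final quarter hour.
reported_aud = sum(quarter_hours) / len(quarter_hours)

# The paper models log(aud), which makes the residuals more symmetric.
log_aud = math.log(reported_aud)

print(round(reported_aud, 2))  # 366.25
print(round(log_aud, 3))       # 5.903
```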

TV markets are defined geographically and do not have equal populations. The model should allow for possibly different TV audiences because of differences in the popularity of collegiate basketball. The collection of γi represents the differences in TV viewing, after adjusting for other factors, of the 56 Nielsen DMAs with Local People Meter samples (where viewing is recorded for each person in the household) and Local Meter Market samples (where viewing is recorded for the household, not the person). These 56 out of 212 markets represent approximately 70% of US TV homes. The largest markets are New York and Los Angeles, but among the 56 markets are Raleigh-Durham, Indianapolis, and Kansas City (DMA rankings 24, 26, and 31, respectively), which are smaller markets but perhaps have more college basketball fans. Two markets, New Orleans and West Palm Beach, have been omitted from the analysis because the ratings were not available for at least one year because of a major hurricane disruption. In summary, the data represent 10 years of 3 Final Four games in 54 markets, or 1620 observations. This is often called panel data, and there are large differences in the mean and variance between markets, as shown in Figure 1.

Figure 1

Comparison of the distribution of the Final Four TV audience for three of the largest and three of the smallest markets.

One of the interesting questions is measuring the nature and size of fan support for different teams. As seen in Tainsky and McEvoy (2012), a few teams may have a national following while most others have predominantly regional fan bases. The teams and their local markets are in Table 1. An explanatory variable that identified games involving any of the top 10 big name teams was omitted during backward elimination based on AIC, leading to the conclusion that in collegiate basketball there are only local, not national, effects. Some teams share the same local market, and the model assumes additivity of the team effects. There is no estimate for LSU since New Orleans has been omitted from the data. The parameter θj denotes the effect of the jth team playing in the game within its local market. Since a few teams have repeated appearances, one could treat them as replicates and estimate a random effect for each team, but the lme function in the R package nlme does not converge for that model (perhaps because not enough teams have replicates, or because the teams with replicates do not have sufficient information after adjusting for the correlation between the semifinal and final games; lme with random effects may converge in future work with more years of data or an expanded scope that includes earlier tournament rounds).

Over the last 10 years the media have labelled as a “Cinderella story” the surprising tournament winning streak of a team from a small school that overcomes its seeding. In 2006 George Mason joined LSU in 1986 as the lowest seed (11) ever to make the Final Four. That was not the only Cinderella team, because in 2010 Butler, a school very few knew about, made an improbable run to the championship game and came within a missed 3-pointer as time expired of winning the national championship. Only a year later in 2011, Cinderella teams continued to surprise the fans of the tournament. Butler, this time an 8 seed, and VCU, an 11 seed, played each other in a Semifinal game. In 2011, Butler tied the record for the lowest seed ever to play in the Championship game. While future tournaments may add teams to the list of Cinderellas or possibly drop some for continued success, the variable cinderella counts the number of Cinderella teams (George Mason, Butler, VCU) playing in the game. In predicting the audience of future games the definition of a Cinderella is not clear from these few examples. One could suggest requiring a 5-seed or higher, since these teams must win at least 3 games they are not expected to win in order to make it to the Final Four. One could also add the requirement that the school be less known (for example, a team from a so-called mid-major conference) in order to invite the unusual national media attention. The effect is due to the Cinderella label, not the specific teams. If a team like Butler continues to have NCAA tournament success it will lose the Cinderella label and the media attention that goes with it. Another part of the Cinderella label is the relatively low likelihood of the occurrence, and therefore the effect may decrease if Cinderella teams begin to make regular appearances in the Final Four.
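A minimal sketch of how the cinderella covariate can be computed for a game; the team list matches the 2003–2012 data described above, while the function name is illustrative and the definition of a Cinderella in future tournaments remains a judgment call:

```python
# Teams the paper labels as Cinderellas during 2003-2012.
CINDERELLAS = {"George Mason", "Butler", "VCU"}

def cinderella(team_a: str, team_b: str) -> int:
    """Count the Cinderella teams playing in a game (0, 1, or 2)."""
    return sum(1 for team in (team_a, team_b) if team in CINDERELLAS)

print(cinderella("Butler", "VCU"))      # 2 (the 2011 Semifinal)
print(cinderella("Duke", "Butler"))     # 1 (the 2010 Championship)
print(cinderella("Kansas", "Memphis"))  # 0 (the 2008 Championship)
```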

Changes in the tournament seem to occur every few years, and one that could impact the TV audience is the NCAA Selection Committee choosing the best team in the tournament and identifying it as the “Number 1 Overall Seed.” While there is no particular seeding advantage, previous research has shown audiences increase for high quality teams and no1seed is an indicator variable in the model for any games involving the number one overall seed.

While some viewers are fans of the school/team/program, others are fans of the stars. While there are high-profile exceptions, college basketball is the development league for the NBA and fans of the NBA may be drawn to Final Four games to see future stars. The model includes nbadraft which counts the number of players in the game that are considered at the time of the tournament to be a potential NBA lottery pick.

Another explanatory variable to represent the quality of the teams is summarized in seedmin, which is the lower of the seeds playing in the game. When both teams are the number one seed from their region, as in 2008 when all teams in the Final Four were number one seeds, then seedmin=1 and the highest quality teams played; larger values of seedmin reflect lower quality teams.

Outcome uncertainty is expected to be an important predictor of the TV audience. Buraimo et al. (2007) suggest the suspense of an expected close game will bring TV viewers who may not be fans of the teams playing or would not otherwise be interested in the game. Two different measures of outcome uncertainty were explored: the magnitude of the point spread from bets on the outcome (pointspread) and the absolute value of the difference between seeds. Because of collinearity between these two measures and with seedmin (which resulted in coefficients with the wrong sign), all pairs of these explanatory variables were explored, and seedmin and pointspread had the lowest AIC. Small values of pointspread indicate evenly matched teams, which may draw large TV audiences, and large values of pointspread indicate predictable outcomes, which may be of less interest to the TV viewer.
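A quick sketch of the team-quality and outcome-uncertainty covariates described above; the function names are illustrative, not from the paper:

```python
def seedmin(seed_a: int, seed_b: int) -> int:
    """Lower (better) of the two tournament seeds in the game."""
    return min(seed_a, seed_b)

def pointspread(betting_line: float) -> float:
    """Magnitude of the point spread; small values mean an expected close game."""
    return abs(betting_line)

print(seedmin(1, 1))      # 1, e.g. the all-number-one-seed 2008 Final Four
print(seedmin(8, 11))     # 8, e.g. the 2011 Butler (8) vs. VCU (11) Semifinal
print(pointspread(-2.5))  # 2.5
```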

The three games in the Final Four are not equivalent. The nature of the tournament design means that the Monday Championship game has more at stake than the Saturday Semifinal games, and to allow for this difference the model includes the indicator variable semifinal. The explanatory variable year*semifinal measures the growth over time of the audience for the Saturday Semifinal games. There was no significant change year-to-year in the Monday Championship game.

Because the Final Four is played at the end of March or the beginning of April every year on CBS with some of the best announcers (Jim Nantz, Billy Packer, Clark Kellogg, Steve Kerr, Greg Gumbel), modeling the well-known seasonality in TV viewing or differences in broadcast quality is not necessary. Also, basketball is not impacted by the weather, unlike outdoor sports where viewers may alter their choice to attend the game. The first Saturday semifinal game begins at 6 ET, just before TV primetime, with the second game beginning 40 min after the first game finishes. Usually this is in the 8:45 ET quarter-hour, but occasionally the first game goes long and the second game has a later than expected start time; to estimate this effect the indicator variable latestart is included in the model. The Championship broadcast starts at 9 ET in primetime (but the game usually starts about 20 min later).

The host city of the Final Four changes from year to year. From 2003 to 2012 New Orleans, San Antonio, and Indianapolis have hosted twice, with St. Louis, Atlanta, Detroit, and Houston hosting once each. The TV audience may increase in the host market due to increased promotion regardless of the teams playing, and λi is estimated in the model for these seven markets with λi=0 for all other markets.

Another event that may impact viewing in a local market is competing programming. It is possible that a basketball fan would find an NBA game competing for their TV watching. This is a local effect, not a national effect, since most NBA games are broadcast only within their local market. The indicator nba denotes that the NBA team located in market i played during the Final Four game. While there are NBA games scheduled every Saturday during the Final Four, there are only 2 years (2004, 2012) when NBA games were scheduled during the Monday Final game.

Unlike game attendance, the TV audience is often perceived as unbounded since there are no physical constraints such as arena capacity. However, the TV audience may have limitations when usual TV viewers are not near their household for another reason. One such event occurs every year in the Salt Lake City market (which covers all of the state of Utah). On the same weekend as the Final Four, The Church of Jesus Christ of Latter-day Saints (Mormon) holds one of its semiannual general conferences, and at 8 ET Saturday male church members worldwide meet at local churches to receive inspiration and instruction from Church leaders. This 2-h meeting overlaps with the Saturday Semifinal games (the end of the first and the beginning of the second). According to the Pew Forum on Religion & Public Life (2008), 58% of Utah residents are Mormon, so the supply capacity in the Salt Lake City market for the Semifinal games may be limited, and the model includes the indicator variable mormon to estimate the effect. One could view this as an example of the optimal allocation of scarce leisure time instead of supply capacity, where Mormons have to choose between two concurrent activities.

The model in this paper is the result of variable selection. Backward elimination using AIC resulted in dropping both explanatory variables involving race (player and coach), the year-to-year change in the audience for the Championship game, and the explanatory variable corresponding to a national effect for the biggest name teams (North Carolina, UCLA, Kentucky, Duke, Kansas, Louisville, Indiana, Syracuse, Connecticut, Arizona). The backward elimination AIC model contained the sum of the ranks in the last AP poll before the NCAA Tournament, which was not significant at α=0.05, and so the presented model eliminated that explanatory variable. Eliminating terms from the model may reflect low power resulting from limited variation in the observed values of the explanatory variables rather than the absence of an effect. Table 2 contains the summary statistics for the explanatory variables remaining in the model. Regarding the factors involving race, there were only 3 games with a black coach (none with two black coaches), and all games but one had 6 or more black starting players.
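For reference, backward elimination compares candidate models by the standard AIC criterion (a textbook definition, not specific to this paper):

$$\mathrm{AIC} = 2k - 2\log \hat{L},$$

where k is the number of estimated parameters and L̂ is the maximized likelihood. At each step the variable whose removal most reduces AIC is dropped, stopping when no removal reduces it further.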

Table 2

Summary Statistics for Explanatory Variables in TV Audience for 30 Final Four Games (2003–2012).

The εijk are normally distributed, but there is structure to the error covariance that should be modeled. Because the markets are of such different sizes, the model allows a separate error variance σi2 for each market, i = 1, …, 54, instead of assuming constant variance. It would also be inappropriate to assume all games are independent. While there are many possible correlations (for example, spatial correlation between markets and temporal correlation between years), exploratory models with complex modeling of the correlation failed to converge. Only the simplest correlation model, where ρ denotes the correlation between games for a given market i and a given year of the Final Four, converged in gls. (The R package nlme contains lme and gls, which are equivalent when the model contains only fixed effects, which is the case here.)
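Written out, the error structure just described amounts to the following (the indices are illustrative, with g and g′ denoting different games in the same market i and year t):

$$\operatorname{Var}(\varepsilon_{i,t,g}) = \sigma_i^2, \qquad \operatorname{Corr}(\varepsilon_{i,t,g}, \varepsilon_{i,t,g'}) = \rho \quad (g \neq g'),$$

with errors for games in different markets or different years taken to be independent.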

3 Results

Tables 3–6 contain the parameter estimates. The effect of market, γi, is highly significant (H0: γi=0 for all i has p-value <0.001). While differences between markets are not surprising, the estimates of TV audience effect by market, given in Table 3, show 12 of the top 20 markets are in the Eastern time zone (while part of the Indianapolis and Louisville markets are in the Central time zone, most of the population is in the Eastern time zone). Another interesting result compares the TV audience effect for the Final Four to the number of TV households in the market. Figure 2 compares the order of the estimated market effects with the order of the TV market size (number of households with TVs, as measured by Nielsen 2011–2012). A few markets stand out with higher or lower than expected TV audiences for the Final Four after adjusting for other model effects. The markets with higher than expected audiences have some of the most passionate fans of collegiate basketball regardless of the teams playing: Louisville, Columbus, Cincinnati, and Memphis. At the other extreme are markets without a major college basketball team or markets dominated by an NBA team: Miami, Sacramento, San Diego, and San Antonio.

Local market            γ̂_i      100·exp(γ̂_i)   σ̂_i
New York                0.000    100.0          1.120
Los Angeles             –0.372   68.9           1.003
Washington, DC          –0.904   40.5           4.290
Dallas-Ft. Worth        –1.008   36.5           1.272
San Francisco           –1.048   35.1           1.046
Minneapolis-St. Paul    –1.200   30.1           1.196
Columbus, OH            –1.227   29.3           1.563
St. Louis               –1.327   26.5           1.024
Tampa-St. Petersburg    –1.332   26.4           1.181
Kansas City             –1.354   25.8           1.325
Portland, OR            –1.552   21.2           0.786
Hartford-New Haven      –1.636   19.5           1.621
Greenville, SC          –1.650   19.2           1.768
Oklahoma City           –1.732   17.7           1.804
Salt Lake City          –1.865   15.5           1.154
San Diego               –1.907   14.9           1.573
Las Vegas               –2.103   12.2           1.271
San Antonio             –2.307   10.0           1.482
Ft. Myers               –2.334   9.7            1.302
Table 3

TV audience effect for each market. Due to the log-transformation on the response, 100·exp(γ̂_i) is the percentage of the New York TV audience that watches the Final Four after adjusting for all other factors in the model. The model allows different variances for each market by weighting relative to the estimated market standard deviation σ̂_i.

Table 4

Estimated regression coefficients, βj.

Team             θ̂_j     100·exp(θ̂_j)   p-value
George Mason     0.898   245.5          0.145
Georgia Tech     0.574   177.5          <0.001
Michigan State   0.925   252.2          <0.001
North Carolina   0.665   194.4          <0.001
Ohio State       0.494   163.9          <0.001
Oklahoma State   1.141   313.0          <0.001
West Virginia    0.454   157.5          0.039
Table 5

Team effect within local market. Due to the log-transformation on the response, 100·exp(θ̂_j) is the local TV audience, as a percentage of its baseline, when the given team plays in the Final Four after adjusting for all other factors in the model. All other teams have θ_j = 0.

Host market   λ̂_i     100·exp(λ̂_i)
San Antonio   0.398   148.9
St. Louis     0.606   183.3
Table 6

Final Four Host effect within local market. Due to the log-transformation on the response, 100·exp(λ̂_i) is the local TV audience, as a percentage of its baseline, when the Final Four is played in the market after adjusting for all other factors in the model. All other markets have λ_i = 0.

Figure 2

Comparison of the rank of the Final Four TV audience effect (γi) and the rank of the number of TV households for each market.

Table 4 contains the estimates of the regression coefficients β̂_j along with the p-values. One of the objectives of this research is to investigate whether a few big-name teams have a national effect or whether the team effect is limited to the local TV market. Since the explanatory variable indicating games with big name teams was dropped in the variable selection but the p-value for H0: θ_j = 0 for all j is <0.001, the effect of even the biggest name teams is local rather than national. The estimated team effects in their local markets are in Table 5. Because of the log-transformed response variable, one must be careful interpreting the coefficients. While Marquette may have the largest change of over 300%, the Milwaukee market has many fewer TV households than Los Angeles, so UCLA will result in more Final Four viewers even though its θ̂ is smaller. A few teams share the same market, and those comparisons indicate North Carolina has a larger fan base than Duke, Louisville has a slightly larger (but not statistically significant) fan base than Kentucky, and neither Georgetown nor George Mason has a statistically significant effect in the Washington, DC market (perhaps indicating Washington, DC is not a very strong college basketball market). Comparisons of team effects across different markets are covered later in predicting the TV audience of different Final Four games.
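The caution about log-scale coefficients can be made concrete: exp(θ̂) multiplies the market's baseline audience, so a smaller θ̂ in a much larger market can still yield more added viewers. The effect sizes and household counts below are hypothetical round numbers, not values from the paper.

```python
# Why a larger theta-hat need not mean more viewers: the team effect is
# multiplicative on the log scale, so market size matters. All numbers
# below are hypothetical, for illustration only.
import math

theta_small_market = 1.10   # large team effect in a small market
theta_big_market   = 0.40   # smaller team effect in a big market
households_small   = 0.9e6  # hypothetical baseline households, small market
households_big     = 5.0e6  # hypothetical baseline households, big market

# exp(theta) - 1 is the proportional lift over the market's baseline.
extra_small = households_small * (math.exp(theta_small_market) - 1)
extra_big   = households_big   * (math.exp(theta_big_market)   - 1)
print(round(extra_small), round(extra_big))
```

Even though the small market's θ̂ is nearly three times larger, the big market contributes more additional households, which is the Marquette-versus-UCLA point in the text.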

While no team has a national brand, the appeal of a Cinderella team has a highly significant effect. Compared to games with no Cinderella teams, games with one Cinderella team are expected to have 35% larger audiences, all else equal, and games with two Cinderella teams are expected to have 81% larger audiences.

The NBA may be a league of star players, but the Final Four appears to have only a small effect due to star players. Most games have only one such player, whose presence increases viewing by less than the presence of the number one overall seed.

The regression coefficient associated with year*semifinal represents the year-to-year change for the Semifinal game. The explanatory variable corresponding to year-to-year change for the Championship game was omitted during model selection, which is a surprise since the most recent TV contract was a sizable increase over the previous one. This may be due to stronger competition from other network programming on Monday night. Another possibility is that the increasing value of the TV contract is driven by strong audience growth in the earlier round games. That is, while the Saturday Semifinal games have smaller audiences as expected (large negative coefficient for semifinal), their TV audience shows statistically significant growth because the slope on year for Semifinal games is 0.016.

Another interesting result regarding the Semifinal games is that when the first game runs long and the second game starts later, the TV audience increases. This may be because the drama of a close game that comes down to the final possession gives better material for the highlight show between games, so the TV audience is more likely to hold through to the second game.

The model contains three explanatory variables that measure the effect of competitiveness and quality of the opponents. First, the TV audience is significantly larger for games involving the number one overall seed. Also, since the sign of the estimated coefficient on the lowest seed playing (seedmin) is negative, the TV audience shrinks as the "best" team playing is judged relatively weaker by the tournament. Most previous research shows that close games have bigger TV audiences, and during model selection point spread was a better predictor than seed spread for measuring outcome uncertainty. These results appear to contradict the Cinderella effect discussed earlier, but Cinderella teams are extremely low seeds, and the model is expressing a general decline in the TV audience when the favorites are eliminated. TV viewers love the favorites, not the underdogs – unless the underdog is Cinderella!

While an increase in the TV audience occurs in the Final Four host market, the estimated host effects λ̂_i given in Table 6 are much smaller than the team effects θ̂_j when the local team plays in the Final Four. For example, it is better for Michigan State, Georgia Tech, or Butler to make the Final Four than for Detroit, Atlanta, or Indianapolis, respectively, to host. However, hosting is a risk-free effect compared to the uncertainty of making it to the Final Four. It does not appear that the potential to increase the TV audience plays a large role in the NCAA's decision about host cities. Instead, the NCAA Men's Basketball Committee announces an RFP and evaluates bids from host cities on the basis of arena, convention center, hotel capacity, and financial commitment. (The host cities through 2016 have already been chosen.)

The competition of an NBA game has a significant effect on the market TV audience. Future research could investigate this effect by NBA team and explore interaction when a market has a college team and NBA team playing at the same time.

Finally, recall that in the Salt Lake City market, the potential TV audience for the Saturday Semifinal games is limited because a large proportion of the population would be attending an important Mormon religious meeting outside their homes. The effect, included in the model as mormon, is highly significant and may limit the appeal of BYU, Utah, Utah State, and Weber State as potential Final Four teams to local broadcasters.

The model allows each market its own variance, and the estimated σ̂_i are in Table 3. Some of the largest estimated variances are for Washington, DC, Tulsa, and Cleveland. The correlation ρ between the Final Four games played in the same year is also estimated from the fit.

From the broadcaster perspective, the value of the model is in predicting possible scenarios for the Final Four TV audience with the objective of maximizing viewing. The following predictions are for the 2013 Championship game, when the Final Four will be played in Atlanta, and focus on the teams playing, so top5draft is held constant at 1. The total TV audience is the sum of the predicted TV audiences for the 54 markets in the model. Since these 54 markets represent about 65–70% of the national TV audience, there will be some discrepancy between the prediction and the national TV audience reported by Nielsen.
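The market-by-market aggregation can be sketched as follows. The γ̂ values are taken from Table 3, but the intercept, Cinderella coefficient, and local-team effect are placeholders, so the totals are illustrative rather than the paper's predictions.

```python
# Schematic of a total-audience prediction: compute the log-audience in
# each market from the model terms, exponentiate, and sum. The intercept,
# Cinderella coefficient, and theta are placeholders (NOT fitted values);
# the gamma values come from Table 3.
import math

mu = 11.0                                 # placeholder intercept
gamma = {"New York": 0.000,
         "Los Angeles": -0.372,
         "Columbus, OH": -1.227}          # subset of the 54 markets
cinderella = 0.30                         # hypothetical national effect
theta_local = {"Columbus, OH": 0.60}      # hypothetical local-team effect

def predict_total(local_team_market=None, n_cinderella=0):
    total = 0.0
    for m, g in gamma.items():
        log_aud = mu + g + n_cinderella * cinderella
        if m == local_team_market:
            log_aud += theta_local.get(m, 0.0)
        total += math.exp(log_aud)
    return total

base = predict_total()
with_cind = predict_total(n_cinderella=1)
print(round(with_cind / base, 2))   # -> 1.35: every market scales by exp(0.30)
```

This makes the Cinderella mechanism visible: because the Cinderella term enters every market's prediction, the national total scales multiplicatively, whereas a team effect θ̂ lifts only its own market.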

First, consider a match-up between teams from the two biggest TV markets. This is often suggested as the game network executives are hoping for. The prediction for Syracuse playing UCLA, where both teams are regional one seeds and one team is the overall number one seed in a close game (point spread 1), is 10.4 million households.

The variation in the θ̂_j's indicates other teams may have a larger draw in their markets than Syracuse or UCLA have in New York or Los Angeles, respectively. Consider a match-up between two big name teams, as in 2012 when Kentucky and Kansas played. Under ideal circumstances these two teams would be regional one seeds and one team would be the overall number one seed. After evaluating all possible combinations of the big-name teams, the largest predicted TV audience of 10.7 million households is for a game between UCLA and Florida. Florida may be a surprise since its θ̂ does not look particularly large, but Florida's local market contains three Nielsen DMAs which, when combined, would make it the third largest TV market.

In 2011 Butler showed that Cinderellas are not just a story for the early rounds of the tournament but can appear in the Championship game. A game with a Cinderella team will exceed predictions for games with big name teams because the Cinderella effect in the model increases the TV audience of every market. One could conjecture that the ideal opponent for the Cinderella would be in a David vs. Goliath game, where the big name team would be the number one overall seed. For example, the model predicts a TV audience of 13.2 million for Butler playing Florida. In this example, Butler's low seed, lack of a big name or major conference affiliation, and large 7-point spread are more than offset by the large Cinderella effect.

The magnitude of the Cinderella effect offers the prospect that the largest possible TV audience is a Championship between two Cinderella teams. While recognizing this is an extrapolation of the model, of the three Cinderella teams that have appeared in the Final Four, a hypothetical close Championship game (point spread 1) between George Mason and VCU (both of whom reached the Final Four as 11 seeds) has a predicted TV audience of 13.9 million. This exceeds a game involving Butler because George Mason's local market is very large and VCU has one of the top θ̂_j's to offset the small Richmond market.

One implication of the large Cinderella effect is that Cinderella status is a product of the tournament format. That is, because relatively unknown teams can qualify for the NCAA Men's Basketball Tournament, there is the possibility that through their success in the tournament they create a national audience. In contrast, the BCS Championship game and its affiliated bowl games are single games that offer little opportunity for "unknown" teams to play their way to national attention, and therefore the BCS bowl games may be forgoing a large possible increase in their TV audience.

The predictions are based on the teams succeeding in the NCAA tournament. Therefore, it would not be appropriate to manipulate the tournament to advance teams with large effects into the Final Four, because doing so would undermine the credibility of the national championship. However, it may be appropriate for tournament organizers to prefer potential Cinderella teams over equivalent teams from major conferences, since Cinderella teams that succeed in the tournament offer a larger TV audience. This applies not just to selecting the last few teams into the tournament, but also to assigning seeds to good teams.

It should be noted that this model applies only to the Final Four and not to preseason or conference games. One could argue that the Final Four is “must see TV,” meaning that consumers make a particular effort to watch the program as opposed to choosing from all the available programs given that they had decided to watch TV. Regular season college basketball games would require a more sophisticated model that recognized many games are played on a given night and are competing for the basketball fan’s attention, as well as all the other programming available on TV.

4 Summary

This paper is the first to study the TV audience of the NCAA Men's Final Four. Using data from 54 markets for 10 years, a model is estimated that includes factors previously used in research on demand for sports programming. The results provide insight into how being a fan of college basketball and/or specific teams translates into TV viewing, and the model would be of interest to tournament organizers, teams, broadcasters, and advertisers. No evidence exists that any of the most popular and successful big-name teams has a national TV audience. However, a successful Cinderella team, with the accompanying media attention, does produce a large, statistically significant national effect. While 12 of the top 20 markets are in the Eastern time zone, higher than expected viewing was observed in Louisville, Columbus, Cincinnati, and Memphis, regardless of the teams playing. The markets with lower than expected viewing are those without a major college basketball team or markets dominated by an NBA team (Miami, Sacramento, San Antonio, and San Diego). While broadcasters and advertisers have no input or control over how the tournament plays out or which teams appear in the Final Four, the model allows the opportunity to speculate about possible match-ups that would maximize the TV audience. Because of the national appeal of Cinderella teams, a Championship game between two Cinderella teams is expected to produce a larger TV audience than a game between two big market teams, a game between two big name teams, or a David vs. Goliath game between a big name and a Cinderella team.


  • Alavy, K., A. Gaskell, S. Leach, and S. Szymanski. 2010. "On the Edge of your Seat: Demand for Football on Television and the Uncertainty of Outcome Hypothesis." International Journal of Sport Finance 5:75–95.

  • Aldrich, E. M., P. S. Arcidiacono, and J. L. Vigdor. 2005. "Do People Value Racial Diversity? Evidence from Nielsen Ratings." Topics in Economic Analysis & Policy 5(1):Article 4.

  • Balsdon, E., L. Fong, and M. A. Thayer. 2007. "Corruption in College Basketball? Evidence of Tanking in Postseason Conference Tournaments." Journal of Sports Economics 8:19–38.

  • Bauman, R., V. A. Matheson, and C. A. Howe. 2010. "Anomalies in Tournament Design: The Madness of March Madness." Journal of Quantitative Analysis in Sports 6(2):Article 4.

  • Berkowitz, J. P., C. A. Depken, II, and D. P. Wilson. 2011. "When Going in Circles is Going Backward: Outcome Uncertainty in NASCAR." Journal of Sports Economics 12:253–283.

  • Bojke, C. 2007. "The Impact of Post-Season Play-Off Systems on the Attendance at Regular Season Games." pp. 179–202 in Statistical Thinking in Sports, edited by J. H. Albert and R. H. Koning. Boca Raton, FL: Chapman & Hall/CRC.

  • Borland, J. and R. Macdonald. 2003. "Demand for Sport." Oxford Review of Economic Policy 19:478–502.

  • Brown, M. and J. Sokol. 2010. "An Improved LRMC Method for NCAA Basketball Prediction." Journal of Quantitative Analysis in Sports 6(3):Article 4.

  • Buraimo, B. 2008. "Stadium Attendance and Television Audience Demand in English League Football." Managerial and Decision Economics 29:513–523.

  • Buraimo, B. and R. Simmons. 2009. "A Tale of Two Audiences: Spectators, Television Viewers and Outcome Uncertainty in Spanish Football." Journal of Economics and Business 61:326–338.

  • Buraimo, B., D. Forrest, and R. Simmons. 2007. "Outcome Uncertainty Measures: How Closely do they Predict a Close Game?" pp. 167–178 in Statistical Thinking in Sports, edited by J. H. Albert and R. H. Koning. Boca Raton, FL: Chapman & Hall/CRC.

  • Carlin, B. P. 1996. "Improved NCAA Basketball Tournament Modeling Via Point Spread and Team Strength Information." The American Statistician 50:39–43.

  • Coleman, J. and A. K. Lynch. 2009. "NCAA Tournament Games: The Real Nitty-Gritty." Journal of Quantitative Analysis in Sports 5(3):Article 8.

  • Feddersen, A. and A. Rott. 2011. "Determinants of Demand for Televised Live Football: Features of the German National Football Team." Journal of Sports Economics 12:352–369.

  • Forrest, D. and R. Simmons. 2002. "Outcome Uncertainty and Attendance Demand in Sport: the Case of English Soccer." The Statistician 51:229–241.

  • Forrest, D. and R. Simmons. 2006. "New Issues in Attendance Demand: The Case of the English Football League." Journal of Sports Economics 7:247–266.

  • Forrest, D., R. Simmons, and B. Buraimo. 2005. "Outcome Uncertainty and the Couch Potato Audience." Scottish Journal of Political Economy 52:641–661.

  • Gray, K. L. and N. C. Schwertman. 2012. "Comparing Team Selection and Seeding for the 2011 NCAA Men's Basketball Tournament." Journal of Quantitative Analysis in Sports 8(1):Article 2.

  • Harville, D. A. 2003. "The Selection or Seeding of College Basketball or Football Teams for Postseason Competition." Journal of the American Statistical Association 98:17–27.

  • Kanazawa, M. T. and J. P. Funk. 2001. "Racial Discrimination in Professional Basketball: Evidence from Nielsen Ratings." Economic Inquiry 39:599–608.

  • Mongeon, K. and J. Winfree. 2012. "Comparison of Television and Gate Demand in the National Basketball Association." Sport Management Review 15:72–79.

  • Morris, T. L. and F. H. Bokhari. 2012. "The Dreaded Middle Seeds – Are they the Worst Seeds in the NCAA Basketball Tournament?" Journal of Quantitative Analysis in Sports 8(2):Article 1.

  • Nüesch, S. and E. Frank. 2009. "The Role of Patriotism in Explaining the TV Audience of National Team Games – Evidence from Four International Tournaments." Journal of Media Economics 22:6–19.

  • Paul, R. J. and A. P. Weinbach. 2007. "The Uncertainty of Outcome and Scoring Effects on Nielsen Ratings for Monday Night Football." Journal of Economics and Business 59:199–211.

  • Pew Forum on Religion & Public Life. 2008. "U.S. Religious Landscape Survey." Technical report, Pew Research Center.

  • Schwertman, N. C., K. L. Schenk, and B. C. Holbrook. 1996. "More Probability Models for the NCAA Regional Basketball Tournaments." The American Statistician 50:34–38.

  • Stekler, H. O. and A. Klein. 2012. "Predicting the Outcomes of NCAA Basketball Championship Games." Journal of Quantitative Analysis in Sports 8(1):Article 3.

  • Tainsky, S. 2010. "Television Broadcast Demand for National Football League Contests." Journal of Sports Economics 11:629–640.

  • Tainsky, S. and C. D. McEvoy. 2012. "Television Broadcast Demand in Markets Without Local Teams." Journal of Sports Economics 13:250–265.

  • West, B. T. 2008. "A Simple and Flexible Rating Method for Predicting Success in the NCAA Basketball Tournament: Updated Results from 2007." Journal of Quantitative Analysis in Sports 4(2):Article 8.
