Published by De Gruyter, February 22, 2017

Algorithmic Opportunity: Digital Advertising and Inequality in Political Involvement

Young Mie Kim

Young Mie Kim (PhD, University of Illinois at Urbana-Champaign) is an Associate Professor of the School of Journalism and Mass Communication and faculty affiliate of the Department of Political Science at the University of Wisconsin-Madison. Kim’s research centers on the role digital media play in citizen competence and participation.

From the journal The Forum


Contemporary digital advertising operates through algorithms – a data-driven logic created by the constant loop between a voter’s voluntary choices and campaigns’ strategic feedback on these choices. By paying attention to the logic of digital advertising, I discuss how digital advertising algorithms adjudicate consequential decisions that broadly affect political involvement. I argue digital advertising limits algorithmic opportunity to access and acquire political information. Voters are strategically defined, and information inequality is created between the arbitrarily defined “strategically important” and “strategically unimportant.” Discriminately defined by campaigns, different voters receive different information, thereby engaging differently in politics. To illuminate the main points, an exploratory analysis of empirical data that “reverse engineered” the algorithms of digital advertising is presented. I conclude that contemporary digital advertising perhaps propels inequality of political involvement and polarizes the electorate.


In the final weeks of the election, Trump’s digital team narrowly targeted certain voters from Clinton’s support base and delivered negative appeals via Facebook ads, silently shattering Clinton’s coalition (Green and Issenberg, 2016). “This is how we won the election,” said Brad Parscale, former digital director of the Trump campaign.

The recent “big data” revolution has brought about a sea change in political campaigning. Digital advertising perhaps best demonstrates these drastic shifts. Digital advertising has expanded its realm far beyond simple image banners and includes search-based text ads (e.g. Google ads), search-engine optimization, social media ads (e.g. Facebook’s native ads, sponsored news feeds, and trends), interest-based behavioral (re)targeting ads, email ads, and more. By utilizing a vast amount of voter profile data, widely available surveillance technologies, and computational data analytics, political campaigns target voters as narrowly as possible, down to each and every individual. Predictive values are assigned to these narrowly defined voters, messages are customized to the individual, and voters are re-targeted with revised messages. Ted Cruz’s digital team, for instance, built up voter psychographics by harvesting Facebook users’ posts, likes, and social networks and matching them up with comprehensive voter profile data. With these “enhanced voter profiles,” the campaign then sent personalized ads to voters based on the voter archetypes assigned to them.

As such, contemporary digital advertising operates through algorithms – a data-driven, structural logic created by the constant loop between a voter’s voluntary choices (information generated via one’s own selection of information) and campaigns’ unobtrusive yet strategic feedback on these choices. Each individual voter is uniquely defined by these feedback mechanisms and constantly redefined, in real time, along the lines of a campaign’s strategic adaptation to situational changes. Unlike broadcast advertising, these algorithms constitute a “black box” that stores this information while silently sending different messages to each individual, without any public monitoring. Voters falsely believe that advertising messages, just like television ads, are widely shared among the public. Given its fundamentally different logic, contemporary digital advertising perhaps adjudicates consequential decisions that broadly affect citizen competence and participation.
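The feedback loop described above can be sketched in a few lines of code. This is a toy illustration only: the voter names, the persuadability scores, the update rule, and the selection threshold are all hypothetical stand-ins, since real campaign models are proprietary.

```python
# Toy sketch of the choice-feedback loop: observed voter behavior updates a
# campaign's persuadability scores, and the scores decide who is targeted
# next. All names, scores, and the update rule are hypothetical.

voters = {"v1": 0.5, "v2": 0.5, "v3": 0.5}  # prior persuadability scores

def observe_clicks(clicks):
    """One loop iteration: observed behavior feeds back into the scores."""
    for v, clicked in clicks.items():
        # Exponentially weighted update toward the observed behavior.
        voters[v] = 0.8 * voters[v] + 0.2 * (1.0 if clicked else 0.0)

def select_targets(threshold=0.5):
    """The campaign's strategic feedback: only high scorers get the next ad."""
    return [v for v, score in voters.items() if score >= threshold]

# Two rounds of the loop: targeting narrows as the scores diverge.
for clicks in [{"v1": True, "v2": False, "v3": True},
               {"v1": True, "v2": False, "v3": False}]:
    observe_clicks(clicks)
    print(select_targets())
```

After two rounds the simulated campaign has already narrowed its target set, illustrating how voters’ own choices and the campaign’s strategic response jointly determine who keeps receiving information.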

Despite the tremendous implications that accompany this phenomenon, much of the discussion concerning digital advertising takes place around technical issues. Many assume that digital advertising is a mere transference of traditional advertisements to online platforms. By paying attention to the logic of digital advertising (i.e. algorithms), I discuss how contemporary digital advertising reinforces – or perhaps even propels – the unequal distribution of political information in unique ways. The primary purpose of this paper is to provide insight in order to understand the significance of digital advertising and to develop a conceptual framework that generates testable hypotheses. To illuminate the main points, I include an exploratory analysis of empirical data that “reverse engineer” the algorithms of digital advertising and empirically investigate some of the key tenets of my conceptual framework.

Inequality in Political Involvement

Inequality in political involvement – the unequal distribution of information and unequal representation in political participation – has long been studied by scholars, and a number of models of political involvement discuss the factors that influence inequality in political involvement. Two root causes have been commonly identified in previous models: ability and motivation.

In conceptualizing political sophistication, Luskin (1990) maintains that political knowledge is equated to individuals’ intelligence or ability to comprehend the complex world of politics. Political knowledge is best explained by individuals’ ability to comprehend information as well as their interest in politics; therefore, little can be done to correct unequally distributed, stratified political knowledge (Smith 1989). Unequal representation in political participation is largely due to differential distributions of civic resources and motivation. Verba, Schlozman, and Brady (1995) emphasize that political involvement in a democratic society is a voluntary choice in that citizens must be motivated to participate in the first place. Political participation imposes requirements: citizens must have the resources to participate in politics, such as the ability to comprehend relevant political information and express opinions, the time to volunteer for political action, and the money to make donations, to name a few.

While much research heavily emphasizes inherent individual and psychological roots, some scholars acknowledge the social or technological aspect that accounts for inequality in political involvement: opportunity. Shifting attention from individual to systematic factors, Delli Carpini and Keeter (1996) underscore opportunity as a fundamental cause for unequal distribution of political knowledge and stratified democracy. For these scholars, opportunity encompasses access, availability, and the presentation of political information, which reflect social, economic, and technological systems. They argue that the unequal distribution of political knowledge should be understood as an issue of opportunity and not simply as individual citizens’ ignorance or inherent interest (or disinterest). “Political knowledge is produced by the interaction of human observation of the political world with how the political world organizes and presents itself to the public” (Delli Carpini and Keeter 1996, 8).

Political communication scholars more explicitly integrate technological aspects to illuminate how opportunity – access, availability, or presentation of information – interacts with capacity or motivation. Prior (2007) argues that in the post-broadcast, high-choice media environment, the politically interested enjoy an array of political information and deeply engage in politics, whereas the politically uninterested avoid politics altogether; hence, the public becomes polarized with the absence of a middle ground. Similarly, Iyengar and Hahn (2009) illustrate that in the high-choice media environment, strong partisans selectively expose themselves to partisan news media consistent with their party identity, ultimately polarizing the public to further extremes.

Media history indeed reveals a full picture of how communication technologies at a given historical moment set a condition for political involvement. Before the broadcast era, only the literate could consume political information. Back then, the problems in political involvement were largely due to generally low levels of individual resources and skills – that is, the capacity issue. In the broadcast era, political information was widely diffused, as broadcast did not require political sophistication to understand the political issues presented, nor did it demand a high level of motivation to learn and engage in politics. In the post-broadcast age, however, with the availability of too many options and too much information, motivational factors exert significant influence on biases in political learning and participation. In the “big” data age, I argue that opportunity is structured by data-driven logic – i.e. algorithms – and it plays a pivotal role in political involvement.

Algorithmic Opportunity

Digital advertising perhaps best demonstrates how algorithms operate and account for inequality in political involvement. Digital advertising is based on algorithms – a series of data-driven rules created by the constant loop between an individual’s voluntary choices and a campaign’s strategic feedback on these choices.

By definition, algorithms determine how political information is organized and presented, thus offering an opportunity for political involvement. Unlike other opportunity factors, however, an algorithm is not a contingent condition that moderates the effects of motivation on political involvement. Rather, algorithms are deeply entangled with motivation. For instance, Google search ads are generated because of a voter’s voluntary choice to seek information regarding particular issues or candidates; they are rooted in political interest.

Digital advertising is not entirely determined by voluntary choices, however. Algorithms are a series of rules carefully designed and highly controlled by campaigns that have very specific aims and strategies. For this very reason, biases in campaigns’ perception of voters come into play in the algorithms of digital ads, reinforcing existing political inequality. Data scientists have begun to explore how data logic reinforces existing inequality, a phenomenon termed “algorithmic discrimination.” For instance, a study simulating Google users found that ads for career training for high-income executives were shown significantly more often to men than to women: men received them 1852 times compared to the 318 times they were shown to women. Even when a user with no search history was simulated, the differential availability of information remained (Datta, Tschantz, and Datta 2015). The researchers speculated that this was probably because the ad campaigns specifically targeted men over women using an array of gender-correlated data, above and beyond the (simulated) users’ voluntarily generated choices.
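The scale of the disparity reported by Datta, Tschantz, and Datta (2015) can be checked with a simple two-proportion test. Only the impression counts (1852 vs. 318) come from the study; the session denominators below are assumed purely for illustration, since the exact number of page loads per group is not reproduced here.

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value via the standard normal CDF (using math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 1852 impressions for simulated men vs. 318 for simulated women, out of an
# assumed 10,000 page loads per group (the denominator is hypothetical).
z, p = two_proportion_ztest(1852, 10_000, 318, 10_000)
print(f"z = {z:.1f}, significant at p < 0.001: {p < 0.001}")
```

Under any plausible denominator of this magnitude, a gap this large lies far outside what chance ad delivery would produce, which is what motivated the authors’ targeting explanation.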

It is worth noting, however, that the data campaigns use to create their algorithms are never perfect. Because data pose certain limits on campaigns’ targeting strategies, they limit political involvement. Hersch (2015) argues that voter engagement is largely a function of the data that campaigns use to create their target voters. He argues that young voters and unregistered individuals are rarely contacted and thus receive little information; this is because few public records (the type of data most campaigns primarily rely on) exist to identify and understand them.

Since digital advertising is operated by algorithms, a voter’s opportunity to access and acquire political information is constrained. Voters are defined (and constantly redefined) by algorithms, which are designed, adapted, and controlled by campaigns that have specific aims and strategies. Voters considered strategically important are given the opportunity to be politically involved, whereas those strategically unimportant do not have the opportunity to be as informed about politics. In digital advertising, political information is not randomly diffused but unequally distributed among the electorate, and such unequal distribution is reinforced by algorithmic biases imposed by campaign strategies and the nature of the data campaigns utilize, as well as voters’ voluntary choices.

With the ability to narrowly define specific voters and customize messages to these targeted voters, campaigns control not only the amount of political information voters receive, but the substance of that information as well. The algorithms used by digital advertisers provide or limit the opportunity to learn about specific aspects of politics and to engage politically in specific ways. Since voters receive different messages, they engage differently in elections.

Hillygus and Shields (2008) provide an astute theory with convincing empirical evidence to explain how advanced communication technologies and rich data improve a campaign’s ability to target persuadable voters. Campaigns present information that is particularly relevant and appealing to an individual and supply the justification for supporting a given candidate. When campaigns lack appropriate data or the technology to identify persuadable voters, they have no choice but to use imperfect partisan heuristics, thus concentrating their resources on broad partisan mobilization, such as voter turnout (Hersch 2015). With advances in information and communication technology, campaigns identify some of the most persuadable voters – those who actively engage in politics and are less enthusiastic about their own party. According to Hillygus and Shields, the most persuadable voters often come from the opposing candidates’ coalition, as campaigns work to divide the opposite party’s base of support. Hillygus and Shields’ theory centers on one set of persuadable voters, the “cross-pressured” – partisans who disagree with their party on a specific issue of their concern (e.g. a pro-gun Democrat, a pro-marriage-equality Republican). Campaigns microtarget these cross-pressured voters with wedge issues that divide the opposing candidates’ partisan supporters. As a result, the electorate becomes polarized.

Although digital advertising provides relatively few details on policy issues (Ballard, Hillygus, and Konitzer 2016), at a general level, Hillygus and Shields’ (2008) theory still applies. Based on the data-driven logic of algorithms, digital campaigns target their ads to persuadable voters and provide the justification for supporting a candidate (or defecting from the opposing candidate). These are “wedge voters,” who are perceived to be heavily involved in politics, but unsatisfied with or uncommitted to their candidate. In these ads, campaigns often attack their opponents (or they contrast the opposing candidates with the endorsed candidate). Furthermore, Lau and his colleagues (2016) argue that campaigns’ negative appeals increase affective polarization – negative feelings toward the other side. As such, negative appeals are seen as an effective way to persuade wedge voters.

A Study: “Reverse Engineering” Digital Advertising

To illuminate some of the key tenets of my aforementioned conceptual framework, I conducted an exploratory analysis. In this study, I generated testable hypotheses that were drawn from the theoretical statements highlighted in the previous section and illustrated empirical evidence that appeared to support the key tenets at a general level.

This analysis “reverse engineered” the logic of digital advertising. In typical reverse engineering, bots simulate human beings, scrape web pages, and replicate algorithms, while machines identify and analyze the patterns revealed in the replication. A bot’s simulation and a machine’s approximation, however, remain opaque proxies for algorithms driven by an actual human being’s behavior (Hamilton et al. 2014). Furthermore, collecting data by crawling/scraping (in this case, political ads) occurs at an aggregate level and does not capture individually targeted messages, simply because it is not tied to an actual human being. Another way to reverse engineer is to unobtrusively observe algorithms generated by actual human behavior in the real world. Volunteers were recruited to install an app that automatically detected, in real time, the ads exposed to them during their normal browsing. The ads and associated meta-information were then sent to our research server along with an individual unique ID, which was then combined with the same individual’s survey responses.
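The core of this human-observation pipeline is a join between the ad log and the survey responses on the participant’s unique ID. The sketch below uses hypothetical field names and records (the actual schema of our platform is not reproduced here); the key design point it illustrates is that participants who received zero ads remain in the merged data rather than dropping out.

```python
from collections import defaultdict

# Hypothetical records: ads captured by the browser extension, keyed by a
# unique participant ID, and the same participants' survey responses.
ad_log = [
    {"user_id": "u01", "sponsor": "Campaign X", "tone": "negative"},
    {"user_id": "u01", "sponsor": "Campaign Y", "tone": "positive"},
    {"user_id": "u02", "sponsor": "Campaign X", "tone": "negative"},
]
survey = {
    "u01": {"age": 47, "party": "Republican", "registered": True},
    "u02": {"age": 29, "party": "Independent", "registered": False},
    "u03": {"age": 62, "party": "Democrat", "registered": True},
}

# Group ads by participant, then attach each survey profile to its ads.
# Keeping zero-ad participants is essential: they are the "strategically
# unimportant" portion of the sample, not missing data.
ads_by_user = defaultdict(list)
for ad in ad_log:
    ads_by_user[ad["user_id"]].append(ad)

merged = {uid: {"profile": profile, "ads": ads_by_user.get(uid, [])}
          for uid, profile in survey.items()}

reached = sum(1 for record in merged.values() if record["ads"])
print(f"{reached} of {len(merged)} participants received at least one ad")
```

Because every ad carries the participant’s ID, any ad feature can later be modeled against that participant’s survey characteristics.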

The data presented here were collected during the 2016 primary elections as a small-scale pilot study for full-scale implementation in the general elections. A total of 1466 US citizens (18 years old or older) completed a survey including a battery of items on demographics and political attitudes/behavior and adopted the browser extension through our platform during the primary elections. [1], [2]

Unequal Distribution of Political Information

I argue that in digital advertising, political information is not randomly diffused but unequally distributed among the electorate and that such unequal distribution is reinforced by algorithmic biases imposed by campaigns’ strategies and the nature of the data campaigns utilize, not simply by voters’ voluntary choices. I expected to observe an unequal distribution of digital ads among the electorate that is not random, but rather systematic, and that resembles existing political inequality. For instance, White, college-educated, older, and high-income voters would have a higher probability of being informed about politics. I also expected to detect correlates between such distribution and campaigns’ algorithmic biases, above and beyond voters’ voluntary browsing activities.

Perhaps because the study took place during the primary elections, the data indicated that only 52.8% of the participants received political ads at all. The other half of the participants received no political ads, even during the peak of the campaign (2 weeks prior to their own states’ primary election). [3] As expected, the typical voters who received political ads during the primaries were middle-aged (between 45 and 54 years old), White, college educated, and highly interested in politics (equal to or higher than 3 on a four-point scale). However, there was no gender differentiation. In general, those who receive political ads closely resemble those in social, economic, and political power.

One might claim that because digital ads are widely distributed across many sites, ad reception is random (just like television advertising), and that the distribution would therefore merely mirror existing political inequality. However, my analysis indicated that participants’ overall browsing activities (total number of page visits) did not predict ad reception, suggesting that ad reception is not random, but rather strategically targeted by campaigns.

Alternatively, it might be possible that ad reception is purely a function of participants’ voluntary browsing selection (as is the case with online information selection). The data indicated this was not the case, however. Individual factors predicted a high level of browsing activity (age and being male were positively associated with overall browsing activity) [4] but did not predict ad reception, suggesting that digital ad reception is not a simple function of an individual’s voluntary choices. [5] Even though Non-Whites engage in browsing activities as much as Whites, Whites were significantly more targeted by campaigns.

Since the politically capable and interested are more likely to respond to political ads in general, political campaigns concentrate ads on these groups. They are also the ones who produce the most data, such as voter registration, voting history, and consumer/marketing data – the information political campaigns heavily rely on. Middle-aged, White, politically interested, and college-educated citizens have a longer voting history, more voter registration records, and more consumption power (thereby producing more consumer/marketing data) than young voters or Non-Whites. Supporting this point, I found that ads leading to “add your name” forms, typically designed for data contribution, were significantly more concentrated among non-registered voters (by 19.4%) than among registered voters.

Different Voters, Different Mobilization, and Different Appeals

I argue that digital advertising algorithms provide or limit the opportunity to learn about specific aspects of politics and engage in politics in specific ways. Different voters receive different messages, and thereby engage differently in elections. Given campaigns’ data and advanced technologies, I expected to observe more persuasion ads than other types, which would then be concentrated among “wedge voters” (broadly defined as those who are perceived to be heavily involved in politics, but unsatisfied with or uncommitted to their candidate), with negative appeals.

To investigate, I correlated ad features and individual participants’ demographic characteristics. The biggest advantage of human observation-based reverse engineering is that we can precisely link the digital ad features to voter profiles because we know exactly which ad came from which user – for each and every ad and each and every individual. Assuming the ads are customized based on political campaigns’ algorithms that predict potential voter characteristics, I reversed the process and constructed models that predicted ad features by voter characteristics. [6]
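These “reversed” models can be illustrated with a bare-bones logistic regression. The paper’s actual estimates come from mixed-effects models, so the pure-Python sketch below is a simplified stand-in, and the training data are fabricated only to mirror the direction of the reported findings (Republican and low-income participants more likely to receive persuasion ads), not to reproduce them.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=1000):
    """Logistic regression fit by plain batch gradient descent."""
    w, b, n = [0.0] * len(X[0]), 0.0, len(X)
    for _ in range(epochs):
        grad_w, grad_b = [0.0] * len(w), 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            for j, xj in enumerate(xi):
                grad_w[j] += err * xj
            grad_b += err
        w = [wj - lr * gj / n for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / n
    return w, b

# Fabricated sample: features = [is_republican, low_income],
# outcome = received a persuasion ad. The generative coefficients below are
# assumptions chosen only to match the sign of the reported findings.
random.seed(0)
X, y = [], []
for _ in range(400):
    rep, low = random.random() < 0.5, random.random() < 0.5
    p = sigmoid(-1.0 + 1.5 * rep + 1.0 * low)
    X.append([float(rep), float(low)])
    y.append(1.0 if random.random() < p else 0.0)

w, b = fit_logistic(X, y)
print(f"coef(is_republican) = {w[0]:.2f}, coef(low_income) = {w[1]:.2f}")
```

In the actual analysis the outcome is each ad feature and the predictors are the full voter profile; the point of the reversal is that significant coefficients reveal which voter characteristics the campaigns’ unobservable targeting rules respond to.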

The ad feature I focus on here is the primary goal of the ads – whether it was primarily to mobilize the turnout (Get-Out-The-Vote or GOTV), or to persuade people by providing rationales and justification for an endorsement, or conversely, to attack specific candidates (Persuasion). [7]

One might argue that given the limited storytelling capacity of digital ads, most would be predominantly GOTV ads. Furthermore, many researchers still doubt campaigns’ capacity to narrowly target specific types of voters in digital advertising, claiming that digital advertising relies heavily on imperfect heuristics; thus, digital advertising would largely focus on broad mobilization such as GOTV. To the contrary, I hypothesized that with the increased microtargeting capacity of digital campaigns, we would observe more persuasion ads targeting specific demographic groups, especially “wedge voters” who could divide other candidates’ support base. As expected, persuasion appeals were sent about two-and-a-half times more often than GOTV ads. [8]

The findings indicate that while GOTV ads targeted a wide range of people, persuasion ads targeted the low income and Republicans (see Figure 2). Republicans were targeted with persuasion ads significantly more than Independents and Democrats. The 2016 primaries were highly contentious across almost all states, especially among Republican candidates. With the unexpected popularity of unestablished, non-traditional Republican candidates like Donald Trump, Republican candidates heavily focused on dividing Trump’s base of support. Although the Democrats’ popular vote turned out to be close between Bernie Sanders and Hillary Clinton, Clinton’s campaign (perhaps because of its advantage and confidence in delegate votes) focused on dividing Republican coalitions by targeting Trump’s base of support.

Low-income individuals, Republicans, and Independents also received many ads with negative appeals, as did Non-White voters (see Figure 3). However, Non-Whites were not the primary target across all candidates; hence they received relatively fewer ads despite their high level of overall browsing activity (as explained in Figure 1). When targeted, however, they received proportionally more negative appeals than positive appeals (regardless of whether the ad’s primary purpose was voter mobilization or persuasion). Overall, it appears that wedge voters are tactically targeted with the weapon of negative appeals. The findings imply that digital ads, especially those with negative appeals, promote sharp divides among the electorate, which in turn leads to affective polarization (Lau et al. 2016) between different groups.

Figure 1: Likelihood of Ad Reception (Total Reception). Estimated fixed effects, mixed model: numbers indicate estimated mean differences from counterparts. Error bars indicate standard errors. Predictors are presented only if statistically significant at p<0.05. −2 Restricted Log Likelihood=1184.812.

Figure 2: Likelihood to be Targeted for Persuasion or GOTV. Estimated fixed effects, mixed model: numbers indicate estimated mean differences from counterparts. Error bars indicate standard errors. Predictors are presented only if statistically significant at p<0.05 (lighter colors indicate statistical significance at p<0.1). −2 Restricted Log Likelihood=697.722.

Figure 3: Likelihood to be Targeted with Negative Appeals. Estimated fixed effects, mixed model: numbers indicate estimated mean differences from counterparts. Error bars indicate standard errors. Predictors are presented only if statistically significant at p<0.05. −2 Restricted Log Likelihood=658.988.

Albeit limited, the findings generally supported the core tenets of the theory. However, the data analysis here is only the first step. More granular analysis (e.g. an extensive analysis of browsing patterns correlated with ads) remains as a future task.


Digital advertising operates as algorithms, a loop between voters’ voluntary choices and a campaign’s strategic feedback on these choices. Voters are strategically defined and redefined as campaigns constantly adapt to situational changes. Information inequality is created between campaigns’ arbitrarily defined “strategically important” and “strategically unimportant.” Given that such gaps often correlate with existing social, economic, and political stratification, digital advertising perhaps widens the existing gap and exacerbates inequality.

Ironically, those who receive the most ads acquire the most narrowly “personalized” information. With enhanced voter profiles and advanced surveillance technologies, digital advertising particularly targets “wedge voters” with negative appeals, dividing the competing candidate’s coalition and increasing hatred toward the other side. Hence, the electorate becomes polarized.

Given the ubiquity of data and advanced technologies, digital advertising continuously evolves, and algorithmic biases beyond individual voters’ control pervade electoral processes. One might still argue that digital advertising is merely one way that political information is diffused among the public, and thus that those who point out its potential detrimental consequences are overconcerned. However, considering the continually declining audience size of other media (especially broadcast media), the increasingly fragmented interests of the public, and – perhaps most problematic – the enigmatic nature of campaigns’ “algorithms,” which remain proprietary, the impact of digital advertising might be greater than previously assumed.

Digital advertising is not merely a change in the way political campaigns communicate with the electorate. It appears to change how campaigns define voters and what the electorate means in our society.



The data presented in this study come from a pilot study for Project DATA (Digital Advertising & Analysis; Principal Investigator: Young Mie Kim). Project DATA is funded by the John S. and James L. Knight Foundation, the Center for Information Technology and Policy at Princeton University, the Women in Science and Engineering Leadership Institute at the University of Wisconsin-Madison, and the Vice Chancellor’s Office for Research of the University of Wisconsin-Madison. The author would like to extend gratitude to Project DATA team members, especially Ceri Hughes, who assisted with advertising coding, and Benjamin Toff, who helped with the construction of the volunteers’ ad “donation” platform.


Ballard, Andrew O., D. Sunshine Hillygus, and Tobias Konitzer. 2016. “Campaigning Online: Web Display Ads in the 2012 Presidential Campaign.” PS: Political Science & Politics, July 2016: 414–419. doi:10.1017/S1049096516000780.

Datta, Amit, Michael Carl Tschantz, and Anupam Datta. 2015. “Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination.” Proceedings on Privacy Enhancing Technologies 2015 (1): 92–112. doi:10.1515/popets-2015-0007.

Delli Carpini, Michael X., and Scott Keeter. 1996. What Americans Know about Politics and Why It Matters. New Haven, CT: Yale University Press.

Green, Joshua, and Sasha Issenberg. 2016. “Inside the Trump Bunker with Days to Go.” Bloomberg, October 27, 2016. Accessed December 13, 2016.

Hamilton, Kevin, Karrie Karahalios, Christian Sandvig, and Motahhare Eslami. 2014. “A Path to Understanding the Effects of Algorithm Awareness.” CHI Extended Abstracts on Human Factors in Computing Systems (alt.CHI). New York, NY: ACM, 631–642.

Hersch, Eitan D. 2015. Hacking the Electorate: How Campaigns Perceive Voters. New York, NY: Cambridge University Press. doi:10.1017/CBO9781316212783.

Hillygus, D. Sunshine, and Todd G. Shields. 2008. The Persuadable Voter: Wedge Issues in Presidential Campaigns. Princeton, NJ: Princeton University Press. doi:10.1515/9781400831593.

Iyengar, Shanto, and K. S. Hahn. 2009. “Red Media, Blue Media: Evidence of Ideological Selectivity in Media Use.” Journal of Communication 59: 19–39. doi:10.1111/j.1460-2466.2008.01402.x.

Lau, Richard R., David J. Andersen, Tessa M. Ditonto, Mona S. Kleinberg, and David P. Redlawsk. 2016. “Effect of Media Environment Diversity and Advertising Tone on Information Search, Selective Exposure and Affective Polarization.” Political Behavior. doi:10.1007/s11109-016-9354-8.

Luskin, Robert C. 1990. “Explaining Political Sophistication.” Political Behavior 12 (4): 331–361.

Prior, Markus. 2007. Post-Broadcast Democracy: How Media Choice Increases Inequality in Political Involvement and Polarizes Elections. New York, NY: Cambridge University Press. doi:10.1017/CBO9781139878425.

Smith, Eric R. A. N. 1989. The Unchanging American Voter. Berkeley, CA: University of California Press.

Verba, Sidney, Kay Lehman Schlozman, and Henry E. Brady. 1995. Voice and Equality: Civic Voluntarism in American Politics. Cambridge, MA: Harvard University Press.

Article note:

An earlier version of this paper was presented at the 2016 Election Symposium of the Election Research Center at the University of Wisconsin-Madison (December 9, 2016).

Published Online: 2017-2-22
Published in Print: 2016-12-1

©2016 Walter de Gruyter GmbH, Berlin/Boston
