During the 2016 campaign, satisfaction with both Presidential candidates – Democrat Hillary Clinton and Republican Donald Trump – was the lowest it had been in over 20 years. 1 Not only were Trump and Clinton largely unpopular with the American public, but neither was beloved by the press (Patterson 2016). Yet while content analyses of media coverage revealed that both candidates received negative coverage, it was Trump who received more negative coverage overall (Patterson 2016). 2 The media coverage of Trump (as well as the words and actions of the candidate and some of his supporters) created associations between the Republican candidate and White supremacy, the Ku Klux Klan, misogyny, and even sexual assault. Here we consider the possibility that this type of political climate stigmatized Trump supporters and led to social desirability biases in reported candidate preference during the campaign.
In what follows we present early evidence that, throughout the 2016 Presidential campaign, social desirability bias may also have played a role in muting expressions of support for the Republican presidential candidate (and, ultimately, President-Elect), Donald Trump. We examine attitudes in the weeks leading up to the 2016 Presidential Election among a diverse sample of college students who span the ideological spectrum. By measuring the psychological trait known as “self-monitoring,” we are able to demonstrate that individuals who are most susceptible to social desirability pressures were significantly less likely to express support for Trump. Those who are least susceptible to social desirability pressures, on the other hand, were the most likely to voice their support for the Republican candidate.
The 2016 Presidential Election will surely be remembered for its surprise ending: the candidate who won had been predicted to lose by the majority of pollsters in America. 3 Our study helps to shed light on circumstances that can lead some voters to withhold their political preferences and, more broadly, provides a real-world example of how social settings affect political expression.
Political Norms and Social Desirability Bias
Social desirability can skew measures of public opinion when particular social norms are prevalent. For example, White respondents during the 1989 Virginia gubernatorial election were more likely to report support for the Black gubernatorial candidate, Douglas Wilder, when asked by a Black interviewer. When the interviewer was White, on the other hand, respondents perceived weaker norms in favor of supporting a Black candidate, and White respondents’ expressed support for the candidate declined accordingly (Finkel, Guterbock, and Borg 1991; see also Traugott and Price 1992).
Kuklinski, Cobb, and Gilens (1997) compared traditional survey questions regarding race relations with the experimental approach known as the “item count technique,” designed to measure, in the aggregate, anger about having a Black neighbor. A large majority of Americans responded to the survey question with tolerance (the socially desirable response), with 80 percent of southerners expressing their approval for Black families moving into White suburbs (p. 341). Using the experimental item count technique, however, they found that 42 percent of southerners felt angry when thinking about a Black family moving in (p. 329). The authors argue that the likely explanation for the distinction between the survey response and the experimental finding is that “social desirability effects are contaminating the traditional [survey] measure” (p. 341).
Social desirability bias in polls has also been shown to skew reports of behaviors, like voting. Karp and Brockington (2005) argue that norms in favor of voting are particularly high in countries where turnout is generally higher. The authors examined self-reported voting behavior in five countries that independently verify whether a person has voted: Britain, New Zealand, Norway, Sweden, and the United States. The data demonstrate that in countries with higher expectations of voting, non-voters are more likely to report having voted when they, in fact, did not. 4 Klar and Krupnikov (2016) similarly show that an individual’s willingness to express his or her true attitudes and behaviors depends on social norms. Specifically, these authors find that norms of anger and aggression in partisan politics can lead partisans to identify themselves as “independent” in political polls and can even drive their behavior in ways that mask their true partisanship.
Taken together, this research suggests that contexts that make a political option seem “undesirable” can lead people to be reluctant to express a preference for that option. The belief that a certain preference is undesirable may stem from social norms that govern most situations (i.e. it is always undesirable to express this preference), but it may also arise in certain specific cases (i.e. it is undesirable to express this particular preference at this particular time). Moreover, this research suggests that some people may not only be hesitant to express an undesirable preference publicly, but may even hesitate to do so in the relative anonymity of a survey setting. As a result, if many negative characteristics are associated with a political candidate, some people may come to believe that others will perceive them negatively if they openly support that candidate, and may therefore be reluctant to state truthfully that they will vote for the candidate.
Measuring Susceptibility to Social Desirability Bias with “Self-Monitoring”
The degree to which an individual tends to succumb to social desirability pressures has been identified as an individual-level trait known as “self-monitoring” (Snyder 1974, 1979; Snyder and Gangestad 1986). Individuals higher in self-monitoring are attuned to the norms in their social environments and respond to these norms by expressing attitudes and behaviors that comply with socially desirable expectations. For example, Weber et al. (2014) demonstrate that low self-monitors who live in diverse areas are more likely to use derogatory racial stereotypes to formulate policy preferences. High self-monitors, on the other hand, are motivated to conform to norms of egalitarianism and thus refrain from translating racial hostility into policy preferences.
In other work, self-monitoring is shown to condition attitudes toward minority candidates, with low self-monitors expressing higher levels of discrimination (Terkildsen 1993), as well as racial prejudice more generally, with low self-monitors expressing higher levels of racial resentment (Feldman and Huddy 2005). Klar and Krupnikov (2016) find that self-monitoring conditions Americans’ willingness to publicly identify with a political party, particularly under conditions of partisan disagreement. Specifically, when confronted with negative information about politics, high self-monitors are less likely to report an identification with a political party, even as they still hold partisan views. Low self-monitors, meanwhile, publicly identify as a partisan even if they are aware that partisans are negatively perceived.
Social Desirability Bias in the 2016 Election
During the 2016 election, both Hillary Clinton and Donald Trump were viewed by many as untrustworthy and unlikeable; indeed, only a third of voters reported that either candidate was “honest and trustworthy” (Gallup 2016a). Nevertheless, it was arguably Donald Trump who emerged as the least socially desirable candidate in the field. An analysis of eight major media outlets by DataFace revealed more positive coverage of Clinton by 6 of the outlets (including the New York Times, Wall Street Journal, Chicago Tribune, and Washington Post). 5 In addition to receiving critical media coverage, Trump was also largely cast as the inevitable loser, with most major polling outlets predicting a Clinton victory. On November 7, the day before the election, major media outlets predicting election outcomes placed the probability of a Clinton win at over 90%. 6
Moreover, Trump’s character and fitness for the presidency seemed to be questioned by many voters. In late October 2016, Pew found that 56% of respondents reported that Trump had either “none at all” or “not too much” respect for democratic institutions (Pew 2016). Indeed, approximately 70% of respondents to the survey agreed that Trump was “hard to like” and 69% described him as “reckless.” 7 These criticisms were not without merit: as Election Day neared, the Republican nominee faced a wave of sexual assault allegations, received the endorsement of the official newspaper of the Ku Klux Klan, and saw the public release of an audio recording in which he boasted about groping women. Indeed, only 38% of respondents in the Pew survey reported that Trump had either a “great deal” or a “fair amount” of respect for women. 8
It is possible that, against this backdrop, people hesitant to deviate from socially desirable preferences would be reluctant to express their preferred candidate if that candidate were Trump. In contrast, those lower in self-monitoring and thus less concerned with providing socially desirable responses would display higher levels of support for Trump.
Data and Method
In the weeks leading up to the 2016 Presidential Election, we surveyed 344 college students at a large and diverse research university in the Southwest. The respondents varied widely with respect to race, party identification, and ideology (see demographics in Table 1).
Table 1. Demographics of Survey Sample.

| Party Identification | % |
| --- | --- |
| Democrat | 24 |
| Independent leaning Democrat | 16 |
| Pure Independent | 24 |
| Independent leaning Republican | 10 |
We first asked our respondents a series of questions regarding the 2016 Election, including a question about their preferred candidate. Table 2 illustrates the percent of respondents who supported Clinton, Trump, and a third party.
Table 2. Candidate Preference among Survey Sample.
We then asked all respondents to answer three questions that measure their level of self-monitoring. These measures are based on the original “self-monitoring” scale developed by Snyder (1979), which was subsequently abbreviated by Berinsky and Lavine (2012). They include the following three questions:
- When you are with other people, how often do you put on a show to impress or entertain them? (Never / Once in a while / Some of the time / Most of the time / Always)
- When you are in a group of people, how often are you the center of attention? (Never / Once in a while / Some of the time / Most of the time / Always)
- How good or poor of an actor would you be? (Poor / Fair / Good / Excellent)
Higher values on the response scale indicate greater self-monitoring. By scaling the responses to each question from 0 to 1 and then summing all three measures, we were left with a unidimensional self-monitoring scale ranging from 0 (i.e. providing the lowest response on each question) to 3 (i.e. providing the highest response on each question). In Table 3, we provide the distribution of the self-monitoring scale among our respondents.
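The scale construction described above can be sketched as follows. This is an illustrative implementation, not the authors' exact coding script; the assumption that responses are coded 1 through the number of options is ours.

```python
def scale_item(response_index, n_options):
    """Rescale a 1-indexed response to the 0-1 range."""
    return (response_index - 1) / (n_options - 1)

def self_monitoring_score(show, attention, actor):
    """Sum the three rescaled items into a 0-3 self-monitoring scale.

    The first two items have five response options (Never ... Always);
    the acting item has four (Poor ... Excellent).
    """
    return scale_item(show, 5) + scale_item(attention, 5) + scale_item(actor, 4)

# Lowest response on every item yields 0; highest on every item yields 3.
print(self_monitoring_score(1, 1, 1))  # 0.0
print(self_monitoring_score(5, 5, 4))  # 3.0
```

Rescaling each item before summing keeps the four-option acting item from carrying less weight than the two five-option items.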
Table 3. Distribution of Self-Monitoring Scores.
We now turn to our analysis of how self-monitoring conditions support for Trump.
We modeled support for Trump versus Clinton in a logistic regression. This model allows us to include demographic variables and thus to control for any differences between low and high self-monitors – for example, ideology and party identification, which are each independently predictive of support for Trump. In Table 4, we present the marginal effects of each coefficient.
Table 4. Marginal Effects of Correlates of Trump Support (marginal effects after logit).

| Independent Variable | Marginal Effect (dy/dx) (Standard Error) |
| --- | --- |
| Ideology (higher values = more conservative) | 0.06*** (0.02) |
| Party identification (higher values = more Republican) | 0.12*** (0.02) |

*p<0.1; **p<0.05; ***p<0.01; all tests are two-tailed.
The most significant predictors of voting for Trump rather than for Clinton are identifying as a Republican (p=0.00) and holding ideologically conservative views (p=0.003). Identifying as a woman is the most significant predictor of voting for Clinton (indicated by the negative coefficient; p=0.009).
Even while controlling for these political factors, the logit results demonstrate that higher self-monitoring scores are significantly correlated with lower support for Trump (p=0.08). That is, for every one-unit increase in self-monitoring, the probability of reporting support for Trump decreases by nine percentage points.
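The nine-point figure is the kind of quantity produced by the standard logit marginal-effect approximation, dy/dx = β·p·(1−p), evaluated at a predicted probability p. A minimal sketch in Python (the coefficient and baseline probability below are hypothetical, chosen only to show how a marginal effect of roughly −0.09 can arise; they are not the paper's estimates):

```python
import math

def predicted_prob(xb):
    """Logistic (inverse-logit) transformation of a linear predictor."""
    return 1 / (1 + math.exp(-xb))

def marginal_effect(beta, p):
    """Approximate dy/dx for a logit coefficient beta at predicted probability p."""
    return beta * p * (1 - p)

# Hypothetical values: a logit coefficient of -0.38 on self-monitoring
# evaluated near a 40% baseline probability of supporting Trump.
print(round(marginal_effect(-0.38, 0.40), 3))  # -0.091
```

Because the logistic curve is steepest near p = 0.5, the same coefficient implies a smaller probability change for respondents whose baseline probability is close to 0 or 1.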
An alternative hypothesis might be that individuals higher in self-monitoring genuinely support Clinton. That would mean that self-monitoring does not reveal dishonest responses but rather true distinctions in voting patterns. We find this theory implausible given important similarities between high and low self-monitors. If we split self-monitors at the mean (1.36 on our 0-to-3 scale), we find that those in the “low” group are statistically indistinguishable from those in the “high” group when it comes to both party identification and political ideology (see Table 5). The only significant distinction between low and high self-monitors is gender: a greater share of low self-monitors are women. Since we find that women are less likely to support Trump yet low self-monitors are more likely to support Trump, this difference does not cast doubt on our finding. In fact, it may make our findings even more robust.
Table 5. Political and Demographic Differences between High and Low Self-Monitors.

| | Low Self-Monitors | High Self-Monitors | Difference Between Means |
| --- | --- | --- | --- |
| Mean party ID | 3.47 | 3.61 | p=0.22 |
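The mean-split comparison amounts to a difference-of-means test between the two groups. A minimal sketch using Welch's t-statistic, which does not assume equal variances (the party-identification scores below are hypothetical, invented only for illustration):

```python
from statistics import mean, variance

def welch_t(x, y):
    """Welch's t-statistic for the difference in means between two samples."""
    se2 = variance(x) / len(x) + variance(y) / len(y)
    return (mean(x) - mean(y)) / se2 ** 0.5

# Hypothetical 7-point party ID scores for low and high self-monitors.
low_sm = [3, 4, 3, 4, 3, 4, 3]
high_sm = [4, 3, 4, 4, 3, 4, 4]
print(welch_t(low_sm, high_sm))  # near zero: groups look similar
```

A t-statistic near zero, as in the party-identification comparison reported in Table 5 (p=0.22), indicates that the two groups' means are statistically indistinguishable.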
In sum, controlling for ideology, party identification, and gender, we still find that self-monitoring plays a role in respondents’ willingness to express support for Donald Trump. Individuals who score higher in the trait that predicts willingness to express socially desirable responses are also less likely to say they support Trump.
With different methods, other scholars have found complementary evidence for the theory that Americans might have intentionally repressed their expressed support for Trump. Enns and Schuldt (2016) randomly assigned half of self-reported “uncommitted” voters to rate a policy proposal without any candidate endorsement. The other half was assigned to rate that same proposal but, this time, it was explicitly endorsed by Trump. The scholars found that uncommitted voters were more favorable toward the policy when it was associated with Donald Trump. This suggests that voters who call themselves uncommitted might, in fact, have held true – but masked – preferences for the Republican candidate.
On the morning of the election, the candidate who was to ultimately win the election also had the lowest favorability rating in the history of presidential polling (Gallup 2016b). Throughout the entirety of the general election, he never once led his opponent in poll aggregates (Huffington Post Polls 2016). State-level pre-election polling in 2016 underestimated support for Trump in 38 states (Jackson 2016). Our study will surely be one of myriad scholarly attempts to help understand the historic outcome of the 2016 Presidential Election.
Admittedly, our study has a number of limitations. Our sample is, of course, a limited slice of the American public: we rely on a sample of college students in one specific region of the United States. Although the participants in this study expressed wide-ranging ideological viewpoints, which hints that this particular campus is less dominated by a single viewpoint than other campuses may be, it is still possible that norms against the Republican candidate were stronger on a college campus than in other contexts. Notably, however, this sample does vary widely with respect to self-monitoring (see Table 3). In turn, this study is a preliminary analysis of the relationship between support for the 2016 Presidential candidates and self-monitoring, though its results may be context-specific to particular perceptions of social desirability. As a result, our study would benefit from being tested in different contexts and with different samples; indeed, only by doing so can we measure its external validity (McDermott 2002). Finally, our study only considers individuals who opted to participate in the survey. It is possible that in 2016 many individuals – reluctant to share their opinions – simply opted not to respond to polls.
Further, it is not the intention of this study to claim that social desirability bias fully explains how and why polling failed to accurately capture support for the Republican nominee. Rather, our goal is to consider whether people who have a tendency to adjust their behaviors in ways that are socially desirable expressed their preferences honestly. In our study, we find that individuals who are generally prone toward “monitoring, controlling, and molding” (Snyder 1974, p. 527) their attitudes to comply with social norms also refrain from supporting Trump, whereas low self-monitors, who “express it as they feel it” (Snyder 1974, p. 527) are more likely to support the Republican candidate. Even while controlling for factors that are highly predictive of vote choice (for example, party identification and ideology), self-monitoring appears to be a significant determinant of reported candidate support. Given the ideological and political similarities between high and low self-monitors, we do not believe that their vote patterns were ultimately different. Rather, we argue that their willingness to truthfully reveal their vote choice during the 2016 election differed in ways that could have been one (of the many) factors contributing to polling error in 2016.
Abramowitz, Alan. 2016. “Will Time for Change Mean Time for Trump?” PS: Political Science & Politics 49 (4): 659–660.
Berinsky, Adam J., and Howard Lavine. 2012. “Self-Monitoring and Political Attitudes.” In Improving Public Opinion Surveys: Interdisciplinary Innovation and the American National Election Studies, edited by John Aldrich and Kathleen M. McGraw. Princeton, NJ: Princeton University Press.
Enns, Peter K., and Jonathon P. Schuldt. 2016. “Are There Really Hidden Trump Voters?” New York Times, November 7.
Feldman, Stanley, and Leonie Huddy. 2005. “Racial Resentment and White Opposition to Race-Conscious Programs: Principles or Prejudice?” American Journal of Political Science 49 (1): 168–183.
Finkel, Steven E., Thomas M. Guterbock, and Marian J. Borg. 1991. “Race-of-Interviewer Effects in a Preelection Poll: Virginia 1989.” The Public Opinion Quarterly 55 (3): 313–330.
Gallup. 2016a. “As Debate Looms, Voters Still Distrust Clinton and Trump.” (September 23, 2016, Report by Frank Newport.) http://www.gallup.com/poll/195755/debate-looms-voters-distrust-clinton-trump.aspx.
Gallup. 2016b. “Trump and Clinton Finish with Historically Poor Images.” (November 8, 2016, Report by Lydia Saad.) http://www.gallup.com/poll/197231/trump-clinton-finish-historically-poor-images.aspx.
Holbrook, Allyson L., and Jon A. Krosnick. 2010. “Social Desirability Bias in Voter Turnout Reports.” Public Opinion Quarterly 74 (1): 37–67.
Huffington Post Polls. 2016. “2016 General Election: Trump vs. Clinton.” (November 8, 2016.) http://elections.huffingtonpost.com/pollster/2016-general-election-trump-vs-clinton.
Jackson, Natalie. 2016. “Election Polls Underestimated Donald Trump In More Than 30 States”. Huffington Post (December 23, 2016). http://www.huffingtonpost.com/entry/election-polls-donald-trump_us_585d40d7e4b0de3a08f504bd.
Karp, Jeffrey A., and David Brockington. 2005. “Social Desirability and Response Validity: A Comparative Analysis of Overreporting Voter Turnout in Five Countries.” The Journal of Politics 67 (3): 825–840.
Klar, Samara, and Yanna Krupnikov. 2016. Independent Politics: How American Disdain for Parties Leads to Political Inaction. New York, NY: Cambridge University Press.
Kuklinski, James H., Michael D. Cobb, and Martin Gilens. 1997. “Racial Attitudes and the ‘New South.’” The Journal of Politics 59 (2): 323–349.
McDermott, Rose. 2002. “Experimental Methods in Political Science.” Annual Review of Political Science 5: 31–61.
Norpoth, Helmut. 2016. “Primary Model Predicts Trump Victory.” PS: Political Science & Politics 49 (4): 655–658.
Patterson, Thomas E. 2016. “News Coverage of the 2016 General Election: How the Press Failed the Voters”. Shorenstein Center Report. http://shorensteincenter.org/news-coverage-2016-general-election/.
Pew. 2016. “As Election Nears, Voters Divided Over Democracy and Respect.” (October 27, 2016). http://www.people-press.org/2016/10/27/as-election-nears-voters-divided-over-democracy-and-respect/.
Snyder, Mark. 1974. “Self-monitoring of Expressive Behavior.” Journal of Personality and Social Psychology 30 (4): 526–537.
Snyder, Mark. 1979. “Cognitive, Behavioral, and Interpersonal Consequences of Self-Monitoring.” In Advances in the Study of Communication and Affect: Perception of Emotion in Self and Others, edited by Patricia Pliner, Kirk R. Blankstein and I. M. Spiegel. New York, NY: Plenum Press.
Snyder, Mark, and Steven W. Gangestad. 1986. “On the Nature of Self-Monitoring: Matters of Assessment, Matters of Validity.” Journal of Personality and Social Psychology 51 (1): 125–139.
Terkildsen, Nadya. 1993. “When White Voters Evaluate Black Candidates: The Processing Implications of Candidate Skin Color, Prejudice, and Self-Monitoring.” American Journal of Political Science 37: 1032–1053.
Traugott, Michael W., and Vincent Price. 1992. “The Polls – A Review: Exit Polls in the 1989 Virginia Gubernatorial Race: Where Did They Go Wrong?” Public Opinion Quarterly 56 (2): 245–253.
Weber, Christopher R., Howard Lavine, Leonie Huddy, and Christopher M. Federico. 2014. “Placing Racial Stereotypes in Context: Social Desirability and the Politics of Racial Hostility.” American Journal of Political Science 58 (1): 63–78.
See Pew Research Center Report: http://www.pewresearch.org/fact-tank/2015/08/21/24-of-americans-now-view-both-gop-and-democratic-party-unfavorably/.
Trump also received more coverage generally. Patterson (2016) reports that over the course of the general election campaign Trump regularly received more than 50% of the coverage of the candidates. Moreover, as Patterson (2016) notes, Trump’s negative coverage was a function of the candidate making statements that were generally perceived as negative.
The election was also divided in terms of predictions. While predictive models based on polls suggested that Clinton was more likely to win, a number of forecasting models based on electoral “fundamentals” predicted a more positive outcome for Trump (e.g. Abramowitz 2016; Norpoth 2016).
See also Holbrook and Krosnick (2010) who employ the “item count technique” to draw similar conclusions.
As reported by the New York Times on November 7, 2016: Huffington Post placed the probability of a Clinton win at 98%, the Princeton Election Consortium reported the probability of Clinton winning as greater than 99%, and DK placed it at approximately 90%.
Comparatively, 37% reported that Clinton had either “none at all” or “not too much” respect for democratic institutions, 59% reported that Clinton was hard to like, and 43% described her as “reckless.”
Comparatively, 76% reported that Clinton had a “fair amount” or a “great deal” of respect for women.