It’s become cliche to decry each election as the most negative of our lives; the atmospherics of this race, though, are completely unique in recent American history – and uniquely conducive to negativity. (Aaron Blake, The Washington Post, July 29, 2016)1
Right after Hillary Clinton had officially accepted her nomination as the presidential candidate of the Democratic Party for the 2016 US presidential election, The Washington Post published the above negative prediction. As a recent study by the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy showed (Patterson 2016), the US media itself contributed significantly to the perceived negativity of the campaign through a largely negative tone in its coverage of the two main candidates, Hillary Clinton and Donald Trump. In addition, however, the language of the two candidates themselves has also been described as fairly negative: a pilot study by undergraduates from the University of Michigan (Bayagich et al. 2016) indicated that the tone of both candidates in the 2016 presidential debates was considerably negative. Both Clinton and Trump showed negativity scores higher than any other presidential candidate of their respective party in the last 30 years – with Trump exhibiting considerably more negative language than Clinton. In a similar vein, a linguistic analysis of the two candidates’ Twitter posts (Crockett 2016) revealed, perhaps rather unsurprisingly, that Trump’s tweets were considerably more negative than Clinton’s posts.
All of these analyses certainly yield interesting results concerning the emotional language used by the two candidates, yet they also suffer from a simplistic dichotomy of language as either negative or positive (or neutral, if a third category is included in the analysis). Bayagich et al. (2016), for example, relied on the Lexicoder Sentiment Dictionary (Young and Soroka 2012), which classifies lexical items as only either ‘positive’ or ‘negative’ (Young and Soroka 2012: 212). Such binary classifications are still the dominant approach in sentiment analysis (Nissim and Patti 2017: 32–34; Pozzi et al. 2017: 3) and do allow researchers to get a first glimpse at the emotional language of a speaker. Yet, they obviously fail to provide a detailed analysis of the complex set of emotions that speakers have. Sadness and fear, for example, are both negative emotions, yet, as we all know, they crucially differ in their physiological as well as psychological effects. Similarly, we consider joy and trust as positive emotions, but are well aware of how different we feel when we experience the two. From a psychological perspective, emotions are a set of complex physiological, cognitive and behavioral reactions to a stimulus, which have evolved as functional adaptations to particular situations and as impulses for specific types of situation-adequate behavior (Plutchik 2001: 345–348; Becker-Carus and Wendt 2017: 541–542): while fear might trigger an impulse to flee, sadness can lead individuals to seek social support and solace. This evolutionary advantage obviously raises the issue of whether there are basic, universal emotions that all humans share as part of their genetic makeup.
One piece of evidence for the existence of such universal human emotions comes from research by Paul Ekman and colleagues, who argue for the existence of unconscious cross-cultural facial expressions of emotion (inter alia Ekman and Friesen 1969; Ekman and Friesen 1971; Becker-Carus and Wendt 2017: 551–552; though see Russell 1994 for a critical review of these results).2 Based on the results of this line of research, Ekman (1992, 2005) postulated the following six basic human emotions, for all of which he identified specific corresponding facial expressions: joy, sadness, anger, fear, disgust and surprise. Yet, despite receiving some empirical support, Ekman’s classification is far from being generally accepted, with competing approaches arguing for anything from three up to eleven universal emotions (Plutchik 2001: 349).3 An alternative approach based on psychological and physiological research that has received considerable support was advocated by Plutchik (1980, 1994), who postulated the following eight basic human emotions: joy, sadness, anger, fear, trust, disgust, anticipation and surprise. As can be seen, Plutchik’s list subsumes Ekman’s six emotions and only adds trust and anticipation to them (Mohammad and Turney 2013). In contrast to Ekman’s list, which largely comprises negative emotions (i.e. sadness, anger, fear and disgust), Plutchik assumes an identical number of positive and negative emotions, which can be further subdivided into four opposing pairs, namely anger–fear, anticipation–surprise, joy–sadness, and trust–disgust (cf., e.g. Mohammad and Turney 2010; Mohammad and Turney 2013). In addition to this, in Plutchik’s approach, specific types of emotions that go beyond the basic ones are straightforwardly accounted for either by different degrees of intensity (e.g., terror is seen as a stronger form of fear, while apprehension is a weaker form) or by a combination of emotions (e.g. hatred is analysed as a combination of disgust plus anger; Plutchik 2001: 348–350).
Currently, there is thus no single accepted psychological theory of basic human emotions. Nevertheless, what all psychological research agrees on is that a simple positive-negative dichotomy is far too simplistic to capture the full range of human emotions. Consequently, automatic sentiment analyses should also employ more fine-grained emotional classifications. In light of this, following Mohammad and Turney (2010, 2013), the present study aims for a psychologically more detailed analysis of the emotional language employed by Clinton and Trump. Adopting a corpus-based approach, the presidential campaign speeches of the two main candidates were subjected to an automatic sentiment analysis (Mohammad and Turney 2013; Jockers 2016) that not only identified positive or negative lexical items, but also detected lexis associated with Plutchik’s eight basic human emotions. In particular, the analysis draws on the NRC Word-Emotion Association Lexicon (Mohammad and Turney 2013), which contains 10,170 lexical items that are coded for Plutchik’s basic human emotions as well as positive or negative polarity (cf. also Mohammad and Turney 2010; Schweinberger 2016). Alternative models of human emotions such as Ekman’s approach could, of course, also form the basis for automatic sentiment analysis. Yet, as pointed out above, Plutchik’s categories have the advantage of not only subsuming Ekman’s six emotions but also of providing a more balanced list of positive and negative emotions (and, potentially, forming the basis for more fine-grained future analyses that also incorporate the intensity parameter as well as the combination of basic emotions to create specific emotional feelings).
The 10,170 lexical items of the NRC comprise (n.b.: 189 items occur in more than one set, which is why the numbers below add up to 10,359; Mohammad and Turney 2013):
- the 1,587 most frequent noun, verb, adverb and adjective uni- and bigrams from the Macquarie Thesaurus (with frequency being assessed via the Google n-gram corpus; for details see Mohammad and Turney 2013),
- all 640 words of the Ekman subset of the WordNet Affect Lexicon,
- 8,132 terms from the General Inquirer.
The emotional ratings for all these items were then collected via the crowd-sourced Amazon Mechanical Turk service and are based on 38,726 ratings from 2,216 subjects. All lexical items of the NRC were rated by five different individuals using a Likert-scale design: subjects were, e.g., asked whether a given word was not positive, weakly positive, moderately positive or strongly positive. Similar judgments had to be made for all other parameters (negativity as well as the eight Plutchik categories). 85% of all items had a rating on which at least four raters agreed (assignments with two or more standard deviations from the mean were discarded; for more details see Mohammad and Turney 2013).
The NRC is available via open access and has been implemented in the syuzhet package for R (Jockers 2016), which was used for the present analysis.
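At its core, an NRC-based analysis is a bag-of-words tally: each token is looked up in the lexicon, and every emotion or polarity label associated with it is counted. The following minimal sketch illustrates the idea in Python (the toy lexicon entries are hypothetical; the real NRC codes 10,170 items, and the actual analysis was run with the R packages named in the text):

```python
from collections import Counter

# Toy NRC-style lexicon (hypothetical entries for illustration only).
# Each word maps to the set of emotion/polarity labels it is coded for.
TOY_NRC = {
    "win":    {"joy", "anticipation", "trust", "positive"},
    "attack": {"anger", "fear", "negative"},
    "loss":   {"sadness", "negative"},
}

def emotion_profile(text):
    """Count how often each emotion/polarity label is evoked in a text,
    ignoring all context (a pure bag-of-words tally)."""
    counts = Counter()
    for token in text.lower().split():
        counts.update(TOY_NRC.get(token, ()))
    return counts

profile = emotion_profile("We will win and they will attack")
```

With a full lexicon, summing such profiles over all sentences of a corpus yields per-emotion frequencies of the kind reported below.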
The syuzhet library offers no option for assessing the influence of co-occurring ‘shifters’, words that affect the degree or even the polarity of emotional words. It was therefore decided to draw on the sentimentr package (Rinker 2016) to test the effect of shifters on the present results. This R library has the advantage of also offering the NRC as one of its emotional lexicons. In contrast to the syuzhet library, sentimentr does not provide an in-depth analysis of Plutchik’s basic emotions but only yields a single polarity score, thus allowing us merely to check the accuracy of the initial binary emotion analysis presented below:5 The sentimentr score is calculated by first assigning a positive (‘+1’) or negative (‘−1’) value to an emotion word in accordance with its classification in the NRC. This score S1 is then further modified by checking whether a shifter occurs within a context window specified by the user (the default window is 5 words before and 2 words after the polarized word). Sentimentr distinguishes four types of shifters with the following default values (all of which can be modified and whose precise settings remain to be identified by future research):
- Amplifiers (currently comprising 50 items, such as certain, totally, or very) and de-amplifiers (14 items, e.g. barely, hardly or rarely), respectively, increase and decrease the original score S1 by 1.8 to yield a score S2,
- Negators (26 items such as didn’t, never or not), if present, then flip the sign of S2 to give the score S3,
- Adversative conjunctions (3 items: although, but, and however) lower or increase the score S3 by 1.85, depending on whether they follow or precede the polarized item, to yield S4.
Finally, all the weighted S4 scores of a sentence are summed and divided by the square root of the total number of words of the sentence. This procedure thus gives a normalized score for each sentence that is slightly different from the one calculated by the syuzhet package. Nevertheless, the sentimentr package made it possible to run two NRC-based analyses on the data: one excluding any shifter effect (an analysis identical in approach to the syuzhet package) and one taking into account the effect of contextual shifters (using the default settings mentioned above). As a statistical analysis of the correlation of these two models showed, both approaches yielded similar results, which offers some corroboration for the validity of the results from the ‘simpler’ bag-of-words method provided by the syuzhet package.
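The scoring procedure just described can be sketched as follows. This is a simplified illustrative re-implementation, not the actual sentimentr code: the toy lexicons are hypothetical, and treating the 1.8 and 1.85 weights as multiplicative factors is an interpretation of the description above on my part.

```python
import math

# Toy shifter lexicons (hypothetical; sentimentr's defaults are larger:
# 50 amplifiers, 14 de-amplifiers, 26 negators, 3 adversatives).
POLARITY = {"stupid": -1, "dangerous": -1, "great": 1}
AMPLIFIERS, DEAMPLIFIERS = {"very", "totally"}, {"barely", "hardly"}
NEGATORS, ADVERSATIVES = {"not", "never"}, {"but", "although", "however"}

def sentence_score(tokens, before=5, after=2):
    """Shifter-weighted sentence score following the steps in the text:
    S1 raw polarity, S2 (de-)amplification, S3 negation flips the sign,
    S4 adversative weighting, then normalization by sentence length."""
    total = 0.0
    for i, tok in enumerate(tokens):
        if tok not in POLARITY:
            continue
        s = float(POLARITY[tok])                                  # S1
        ctx = tokens[max(0, i - before):i] + tokens[i + 1:i + 1 + after]
        for c in ctx:                                             # S2
            if c in AMPLIFIERS:
                s *= 1.8
            elif c in DEAMPLIFIERS:
                s /= 1.8
        if any(c in NEGATORS for c in ctx):                       # S3
            s = -s
        for j, c in enumerate(tokens):                            # S4:
            # an adversative preceding the polarized item raises the
            # score, one following it lowers the score (weight 1.85)
            if c in ADVERSATIVES:
                s *= 1.85 if j < i else 1 / 1.85
        total += s
    # sum of weighted scores divided by sqrt(sentence length)
    return total / math.sqrt(len(tokens))

score = sentence_score("he is not stupid".split())  # negation flips 'stupid'
```

Note how the default context window (5 words before, 2 after) determines which shifters are considered for each polarized word.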
Note, however, that like the bag-of-words approach, the more sophisticated shifter approach cannot be considered a perfect model of human understanding. Take, e.g., the following sentences:
- (1) He is stupid.
- (2) He is not just stupid …,
- (3) He is stupid and dangerous.
- (4) He is not just stupid and dangerous …
- (5) He is not just stupid but dangerous.
The sentences in (1)–(5) are all clearly negative and the not just construction in (2), (4) and (5) only serves to amplify this negative meaning. Yet, while a context-blind syuzhet method correctly identifies (1–5) as negative, the sentimentr analysis, e.g., ‘blindly’ treats not as a negator that reverses the polarity of stupid and gives an overall positive or neutral score for (2), (4) and (5) (see the Appendix analysis in the accompanying R script for details).6 In order to provide a more adequate model of human sentiment word processing, future automatic sentiment analysis will therefore have to draw on more elaborate syntactic analyses that account for the sophisticated interaction of complex form-meaning correspondences (as advocated, e.g., in Construction Grammar theories; see, e.g. Hoffmann 2017).
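This misfire can be reproduced with a minimal mock-up (hypothetical mini-lexicon; not the sentimentr code itself): a context-blind tally keeps the stored negative polarity of stupid, while a rule that blindly flips polarity after a negator misreads the amplifying not just construction as a negation.

```python
# Hypothetical mini-lexicon for illustration.
NEG_WORDS = {"stupid", "dangerous"}
NEGATORS = {"not"}

def bag_of_words(tokens):
    # Context-blind tally: every lexicon hit keeps its stored polarity.
    return sum(-1 for t in tokens if t in NEG_WORDS)

def naive_shifter(tokens, before=5):
    # Flips polarity whenever a negator precedes the hit in the window,
    # regardless of constructions such as 'not just'.
    score = 0
    for i, t in enumerate(tokens):
        if t in NEG_WORDS:
            s = -1
            if any(c in NEGATORS for c in tokens[max(0, i - before):i]):
                s = -s
            score += s
    return score

tokens = "he is not just stupid".split()
blind, shifted = bag_of_words(tokens), naive_shifter(tokens)
```

Here the context-blind count is negative, as a human reader would judge, while the naive shifter rule returns a positive score.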
All in all, the present analysis confirms that Trump does indeed use statistically significantly more negative lexical items in his campaign speeches than Clinton. Moreover, the results also show that the most frequent emotion evoked by both candidates, as expected for campaign speeches, is trust. In addition to this, the statistical analysis of the basic emotions detects significant preferences for each candidate: while lexical items from the emotional fields of anticipation, trust and joy appear significantly more often in Clinton’s speeches, Trump’s speeches exhibit a significant preference for emotional lexis evoking disgust, anger, fear and sadness. The present results thus provide further empirical support for the view that Trump’s campaign was much more negative than Clinton’s, adding that there is a particular set of negative emotions that were more frequently expressed by the former. Finally, the study also shows how automatic sentiment analyses of political speech can benefit in general from complementing a simple binary polarity analysis of emotions with a psychologically more fine-grained and relevant emotion word classification.
The data for the present study comes from the American Presidency Project website (http://www.presidency.ucsb.edu), which is hosted at the University of California, Santa Barbara. The site contains more than 120,000 official presidential documents, including the documents of the 2016 Presidential Election (http://www.presidency.ucsb.edu/2016_election.php). The relevant campaign speeches were scraped from this site and statistically analysed using the program R version 3.4.1 (R Core Team 2016). All scripts used in the extraction and analysis of the data are provided together with this article.
In a first step, the presidential campaign speeches by Clinton and Trump were downloaded from the American Presidency Project website as text files following the procedure outlined by Francom (2015). At the time the study was carried out, the website with Clinton’s data7 also contained 106 speeches from her 2008 campaign, which were excluded from further analysis. The remaining data from each candidate were then saved in a single text file each. These files were then submitted to an automatic sentiment analysis using the syuzhet package (Jockers 2016). Additionally, as outlined in the previous section, the sentimentr package (Rinker 2016) was used to test the validity of the binary syuzhet sentiment analysis against a model that also takes into account the influence of co-occurring shifter items.
Finally, the data were subjected to two separate statistical analyses: first, for each sentence exhibiting words identified as either positive or negative, a negativity score NEG was calculated. This ratio variable is simply the proportion of negative words per sentence, that is, NEG = frequency of negative words in a sentence / (frequency of negative words in a sentence + frequency of positive words in a sentence). The NEG scores for Clinton and Trump were then subjected to a Wilcoxon rank sum test (also known as the Mann–Whitney U test) in R, since an F-test had revealed that the two distributions did not meet the criterion of variance homogeneity, thus precluding an independent t-test (Oaks 1998: 17; Gries 2009: 208, 218–226). In a second step, the overall frequencies of emotion words per Plutchik category were calculated for each candidate. Both variables included in this second analysis are nominal: the factor CANDIDATE has two levels (‘Clinton’ and ‘Trump’), the factor EMOTIONS has eight levels (‘joy’, ‘sadness’, ‘anger’, ‘fear’, ‘trust’, ‘disgust’, ‘anticipation’ and ‘surprise’). In order to find out whether any of the 16 (2×8) resulting variable combinations are statistically associated, that is, whether either CANDIDATE significantly uses more or fewer words of a particular EMOTIONS category, these data were subjected to a “configural frequency analysis” (CFA; cf. Bortz et al. 1990: 155–157; Gries 2009: 240–252) using the HCFA 3.2 script (Gries 2004). For each CANDIDATE × EMOTIONS variable combination, the script calculated an exact binomial test and adjusted the significance of all tested factor associations (so-called ‘configurations’) for multiple testing (using the Holm method; see Gries 2009: 249 for details). Since p-values are dependent on sample size, CFAs also include a sample size-independent measure of effect size ranging from 0 to 1 labeled “coefficient of pronouncedness” (“Q”, a measure equivalent to r²; cf. Bortz et al. 1990: 156; Gries 2009: 249).8
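The NEG score defined above is a simple per-sentence proportion. A minimal sketch (the word sets are hypothetical, for illustration only):

```python
def neg_score(tokens, positive, negative):
    """Proportion of negative among all polarized words in a sentence:
    NEG = neg / (neg + pos). Returns None when the sentence contains
    no polarized words (such sentences are excluded from the test)."""
    n = sum(t in negative for t in tokens)
    p = sum(t in positive for t in tokens)
    return n / (n + p) if n + p else None

# Hypothetical mini word lists.
POS, NEG = {"great", "win"}, {"terrible", "lose"}
score = neg_score("a terrible plan but a great and great idea".split(), POS, NEG)
```

In the example sentence, one negative and two positive hits give NEG = 1/3.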
The data retrieved from the American Presidency Project website yielded the following two corpora: the campaign speeches of Clinton with 249,185 words and the campaign speeches of Trump with 183,521 words. Note that this difference in corpus size does not affect the following statistical analyses, since all methods employed in this paper correct for unequal sample sizes.
Let us first look at the distribution of negative words in the two corpora. Overall, the data yield the following results:
Table 1 already shows that overall Trump’s campaign speeches contain proportionally more negative words than Clinton’s (40% vs. 30%). This, however, is obviously only a very blunt summary of the two candidates’ use of emotional words. In addition, as mentioned in section 2, a NEG score was calculated for both candidates that indicates, for each sentence containing words identified as positive and/or negative by the Word-Emotion Association Lexicon, to what degree that sentence is negative. This measurement thus allows us to track the proportion of negative words for each individual sentence and is less affected by single outliers (particularly positive or negative words in a single utterance). Figure 1 gives a visual representation of the distribution of NEG scores for both candidates:
Table 1: Raw frequencies of positive and negative words in the campaign speeches of Clinton and Trump.

| Candidate | Negative words | Positive words | Sum    |
|-----------|----------------|----------------|--------|
| Clinton   | 7,137 (30%)    | 16,680 (70%)   | 23,817 |
| Trump     | 6,939 (40%)    | 10,455 (60%)   | 17,394 |
Figure 1 provides further support for the claim that Clinton (median = 0%, mean = 30%) uses considerably fewer negative words than Trump (median = 33%, mean = 38%). Since an F-test indicated that the two distributions have significantly different variances (F = 1.1013, df = (5849, 10502), p-value <0.001), the NEG scores of Clinton and Trump were subjected to a Wilcoxon rank sum test. This test showed that the difference between the two candidates is highly statistically significant (W = 34961000, p-value <0.001).
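For readers unfamiliar with the mechanics of the test: the rank-sum statistic is obtained by pooling both samples, ranking them (with midranks for ties), and summing the ranks of one group. A minimal illustration in Python (conventions differ across implementations; R’s wilcox.test additionally subtracts the minimum possible rank sum and computes a p-value, which is omitted here):

```python
def rank_sums(x, y):
    """Sum of pooled ranks for each sample, using midranks for ties."""
    pooled = sorted(x + y)
    midrank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        # tied values all receive the average of ranks i+1 .. j
        midrank[pooled[i]] = (i + 1 + j) / 2
        i = j
    return sum(midrank[v] for v in x), sum(midrank[v] for v in y)

# Toy NEG-score samples (hypothetical values for illustration).
wx, wy = rank_sums([0.1, 0.2, 0.2], [0.3, 0.4])
```

The two rank sums always add up to n(n+1)/2 for n pooled observations, which is a convenient sanity check.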
How reliable are the above results in light of the fact that the syuzhet package does not take into account the potential effect of co-occurring shifters? As Figure 2 shows, using the sentimentr package, we actually find a strong correlation between the simple syuzhet-type of analysis and a more sophisticated analysis including the effect of shifters:
For both data sets, Figure 2 shows a strong positive correlation between the analysis without and with shifters (with r = 0.66 for the Trump data and r = 0.99 for the Clinton data). These results thus corroborate the validity of the syuzhet analysis presented above. Nevertheless, as mentioned in the preceding section, neither model should be considered perfect, and future research on automatic sentiment analysis will definitely have to draw on far more sophisticated models to better approximate human sentiment word understanding.
As mentioned in the introduction, one step in this direction would be to move beyond a simple binary classification of words as positive or negative. Such an approach can, of course, yield interesting first results concerning the sentiment of a text. Yet, in order to provide a psychologically more revealing analysis, in a next step the data were also analysed for Plutchik’s eight basic human emotions. This more detailed sentiment analysis yielded the following results:
Plutchik’s eight emotions can be subdivided into four complementary pairs, namely anger–fear, anticipation–surprise, joy–sadness, and trust–disgust (cf., e.g. Mohammad and Turney 2010). In Table 2, the two corpora pattern alike for the last of these pairs: both Clinton and Trump most frequently use emotion words evoking trust (26% and 23%, respectively), while words evoking disgust are the least frequent in both corpora (4% and 7%, respectively). All the other emotions, however, are employed to different degrees by the two candidates. Figure 3 gives a visual representation of the data distribution provided in Table 2:
EMOTIONS by CANDIDATE.
As Figure 3 reveals, Clinton seems to use considerably more words from the positive emotional categories, with joy and anticipation in second and third place. In contrast to this, fear is the second most frequent emotion category evoked in Trump’s data. Moreover, while sadness and anger occupy the penultimate and antepenultimate positions in Clinton’s range of emotional words, they rank above surprise in Trump’s data.
In order to statistically test the two candidates’ relative preferences for particular emotions, the raw data from Table 2 were subjected to a CFA. The results of this analysis are summarized in Figure 4:
For each CANDIDATE × EMOTIONS combination, Figure 4 provides the raw frequencies (column ‘Freq’), the expected frequencies (‘Exp’), how much the former deviate from the latter as calculated by a binomial test (‘Cont.chisq’), the direction of the deviation (‘Obs-exp’, with ‘<’ indicating fewer and ‘>’ indicating more observed data points than expected), the Holm-adjusted significance value of the combination (‘P.adj.Holm’) as well as the coefficient of pronouncedness (‘Q’).
In Figure 4, I have marked by color (blue for the Democratic candidate Clinton, red for the Republican candidate Trump) those factor combinations that a candidate uses statistically significantly more often than their opponent. As the CFA reveals, we do indeed find the expected emotional bias: compared to her opponent’s speeches, Clinton’s campaign speeches show a significant preference for the positive emotions anticipation, trust and joy (and a significant dispreference for words evoking disgust, anger, fear and sadness). Trump’s campaign speeches, in contrast, show a relative statistical preference for the negative emotions disgust, anger, fear and sadness (and a significant dispreference for the positive emotions anticipation, trust and joy). The only category for which there is no statistical difference between the two candidates is surprise.
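For a two-variable design such as the present 2×8 table, the CFA machinery described in section 2 reduces to an exact binomial test per cell followed by a Holm correction. The following sketch illustrates both steps in plain Python (this is not the HCFA 3.2 script; in particular, the null probability p=0.5 is a placeholder assumption, whereas the real expected cell frequencies derive from the marginal totals):

```python
from math import comb

def binom_pmf(k, n, p):
    # Probability of exactly k successes in n Bernoulli trials.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def exact_binom_p(k, n, p=0.5):
    """Two-sided exact binomial test: sum the probabilities of all
    outcomes no more likely than the observed count k."""
    pk = binom_pmf(k, n, p)
    return min(1.0, sum(binom_pmf(i, n, p)
                        for i in range(n + 1)
                        if binom_pmf(i, n, p) <= pk + 1e-12))

def holm(pvals):
    """Holm step-down adjustment for multiple testing: the i-th
    smallest p-value is multiplied by (m - i), enforcing monotonicity."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted, running = [0.0] * m, 0.0
    for rank, i in enumerate(order):
        running = max(running, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running)
    return adjusted

p = exact_binom_p(10, 10)        # an extreme cell: 10 hits out of 10
adj = holm([0.01, 0.04, 0.03])   # toy p-values from three 'configurations'
```

For very large cell counts, as in the present corpora, implementations typically switch to a normal approximation of the binomial for numerical stability.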
The findings of the sentiment analysis of the campaign speeches by the main candidates of the 2016 US presidential election largely confirm what has previously been suggested in the media: Donald Trump’s language in the analysed documents is, on the whole, relatively more negative than that of Hillary Clinton. More importantly, however, by adopting a psychological classification of emotions (Plutchik 1980, Plutchik 1994), the present study was able to provide a much more detailed and fine-grained analysis of the data. This analysis revealed important findings that cannot be detected by simplistic binary models of human emotions: as we have seen, for both candidates trust is the most frequently evoked emotion (making up 26% and 23% of all emotional speech, respectively). Considering that the major function of campaign speeches is to gather support for and develop trust in a candidate, that in itself is not surprising. Yet, it is important that this positive emotion is also the most frequent one in Donald Trump’s speeches – the candidate who is normally described as someone who relied on a largely negative rhetoric. What the statistical analysis showed, however, is that there are specific negative emotions that are significantly favored in Trump’s campaign speeches (namely disgust, anger, fear and sadness) and that probably contributed to the overall assessment of his campaign as predominantly negative. Still, an open question that remains to be addressed by future studies is whether the 2016 campaign was indeed more negative in tone than previous campaigns (as suggested, e.g., by the quote from The Washington Post at the start of the paper or the initial findings from Bayagich et al. 2016). As argued in the present paper, the syuzhet package (Jockers 2016) is an ideal tool to address this question.
The package, which carries out an automatic sentiment analysis drawing on a lexicon that is based on Plutchik’s eight basic human emotions (the Word-Emotion Association Lexicon; Mohammad and Turney 2013), is freely available online, as is the software program R (R Core Team 2016), with which all analyses for the present study have been conducted. The resources necessary for similar types of studies are therefore easily available, and it is hoped that future research will make much more use of this psychologically more plausible type of sentiment analysis. Yet, as pointed out earlier, this automatic model of sentiment detection is only a first attempt at modeling human sentiment word processing. More adequate models will have to be developed in the future that not only draw on psychological theories of human emotions, but that also incorporate the complex effect of co-occurring words and constructions that shift the sentiment score of an emotional word (and which go beyond a blind application of contextual information).
I would like to thank all three anonymous reviewers for their critical feedback, which has greatly improved the quality of the final paper.
Bayagich, Megan, Laura Cohen, Lauren Farfel, Andrew Krowitz, Emily Kuchman, Sarah Lindenberg, Natalie Sochacki, Hannah Suh & Stuart Soroka. 2016. Exploring the tone of the 2016 Campaign. http://cpsblog.isr.umich.edu/?p=1884 (accessed 07 March 2017).
Becker-Carus, Christian & Mike Wendt. 2017. Emotionen. In Christian Becker-Carus & Mike Wendt (eds.), Allgemeine Psychologie: Eine Einführung, 539–568. Berlin et al: Springer.
Bortz, Jürgen, Gustav A. Lienert & Klaus Boehnke. 1990. Verteilungsfreie Methoden in der Biostatistik. Berlin et al: Springer.
Calvo, Rafael A. & Sunghwan M. Kim. 2013. Emotions in text: Dimensional and categorical models. Computational Intelligence 29(3). 527–543.
Crockett, Zachary. 2016. What I learned reading 4,000 Trump and Clinton tweets. Source: http://www.vox.com/2016/11/7/13550796/clinton-trump-twitter (accessed 07 March 2017).
Ekman, Paul. 1992. An argument for basic emotions. Cognition and Emotion 6(3). 169–200.
Ekman, Paul. 2005. Emotion in the human face. Oxford: Oxford University Press.
Ekman, Paul & Wallace V. Friesen. 1969. The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica 1. 49–98.
Ekman, Paul & Wallace V. Friesen. 1971. Constants across cultures in the face and emotion. Journal of Personality and Social Psychology 17. 124–129.
Francom, Jerid. 2015. Web scraping with ‘rvest’ in R. Source: http://francojc.github.io/web-scraping-with-rvest/ (accessed 07 March 2017).
Gendron, Maria, Debi Roberson, Jacoba Marietta van der Vyver & Lisa Feldman Barrett. 2014. Cultural relativity in perceiving emotion from vocalizations. Psychological Science 25. 911–920.
Gries, St. Th. 2004. HCFA 3.2 ‑ A Program for Hierarchical Configural Frequency Analysis for R for Windows. http://www.linguistics.ucsb.edu/faculty/stgries/research (accessed 17 November 2014).
Gries, St. Th. 2009. Statistics for linguistics with R: A practical introduction. Berlin: Mouton de Gruyter.
Hoffmann, Thomas. 2017. Construction Grammars. In Barbara Dancygier (ed.), The Cambridge handbook of cognitive linguistics, 310–329. Cambridge: Cambridge University Press.
Jockers, Matthew. 2015. Some thoughts on Annie’s thoughts … about Syuzhet. http://www.matthewjockers.net/2015/03/04/some-thoughts-on-annies-thoughts-about-syuzhet/ (accessed 17 July 2017).
Jockers, Matthew. 2016. Introduction to the Syuzhet package. https://cran.r-project.org/web/packages/syuzhet/vignettes/syuzhet-vignette.html (accessed 07 March 2017).
Kennedy, Alistair & Diana Inkpen. 2006. Sentiment classification of movie reviews using contextual valence shifters. Computational Intelligence 22. 110–125.
Mohammad, Saif M. & Peter D. Turney. 2010. Emotions evoked by common words and phrases: Using Mechanical Turk to create an emotion lexicon. In Proceedings of the NAACL-HLT 2010 workshop on computational approaches to analysis and generation of emotion in text, 26–34. June 2010, LA, California. http://aclweb.org/anthology/W10-02 (accessed 26 September 2017).
Mohammad, Saif M. & Peter D. Turney. 2013. Crowd sourcing a word-emotion association lexicon. Computational Intelligence 29(3). 436–465.
Nissim, Malvina & Viviana Patti. 2017. Semantic aspects in sentiment analysis. In Federico Alberto Pozzi, Elisabetta Fersini, Enza Messina & Bing Liu (eds.), Sentiment analysis in social networks, 31–48. Amsterdam et al.: Elsevier.
Oaks, Michael P. 1998. Statistics for corpus linguistics (Edinburgh Textbooks in Empirical Linguistics). Edinburgh: Edinburgh University Press.
Patterson, Thomas E. 2016. News coverage of the 2016 general election: How the press failed the voters. https://shorensteincenter.org/news-coverage-2016-general-election/ (accessed 07 March 2017).
Plutchik, Robert. 1980. A general psychoevolutionary theory of emotion. Volume 1: Of theories of emotion. New York, NY: Academic Press.
Plutchik, Robert. 1994. The psychology and biology of emotion. New York, NY: HarperCollins College Publishers.
Plutchik, Robert. 2001. The nature of emotions. American Scientist 89. 344–350.
Pozzi, Federico Alberto, Elisabetta Fersini, Enza Messina & Bing Liu. 2017. Challenges of sentiment analysis in social networks: An overview. In Federico Alberto Pozzi, Elisabetta Fersini, Enza Messina & Bing Liu (eds.), Sentiment analysis in social networks, 1–11. Amsterdam et al.: Elsevier.
R Core Team. 2016. R: A language and environment for statistical computing. R Foundation for Statistical Computing. Vienna, Austria. https://www.R-project.org/.
Rinker, Tyler. 2016. Sentimentr v0.4.0. https://www.rdocumentation.org/packages/sentimentr/versions/0.4.0 (accessed 17 July 2017).
Russell, James A. 1994. Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies. Psychological Bulletin 115(1). 102–141.
Sauter, Disa A., Frank Eisner, Paul Ekman & Sophie K. Scott. 2010. Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations. Proceedings of the National Academy of Sciences of the United States of America 107(6). 2408–2412.
Sauter, Disa A., Frank Eisner, Paul Ekman & Sophie K. Scott. 2015. Emotional vocalizations are recognized across cultures regardless of the valence of distractors. Psychological Sciences 26(3). 354–356.
Schrepp, Martin. 2006. The use of configural frequency analysis for explorative data analysis. British Journal of Mathematical and Statistical Psychology 59(1). 59–73.
Schweinberger, Martin. 2016. A sociolinguistic analysis of emotives in Irish English. Poster presented at the Annual Meeting of the Society for Text & Discourse 2016, Kassel, 18–20 July 2016.
Swafford, Annie. 2015. Problems with the syuzhet package. https://annieswafford.wordpress.com/2015/03/02/syuzhet/ (accessed 17 July 2017).
Young, Lori & Stuart Soroka. 2012. Affective news: The automated coding of sentiment in political texts. Political Communication 29. 205–231.
Donald Trump. “Remarks in Virginia Beach”, Virginia July 11, 2016, Source: http://www.presidency.ucsb.edu/ws/?pid=117815 (accessed 8 March 2017).
Source: https://www.washingtonpost.com/news/the-fix/wp/2016/07/29/clinton-and-trump-accept-their-nominations-by-telling-you-what-you-should-vote-against/?utm_term=.4879d4537d61 (accessed 7 March 2017).
Recently, it has also been put forward that there are universal vocalizations (such as sighs or groans) that are associated with universal human emotions (Sauter et al. 2010, Sauter et al. 2015), yet this claim also remains controversial (Gendron et al. 2014).
Some researchers even argue that emotions are not primitives but created by an interaction of more basic cognitive dimensions (such as valence, arousal, and dominance; see Calvo and Kim 2013 for a discussion of such approaches as well as their computational implementation).
I am grateful to the anonymous reviewer for bringing this to my attention.
As an anonymous reviewer points out, this raises the question of how frequent “not just/merely/only”-constructions are in the present data. As it turns out (see Appendix), these constructions are not very frequent in Clinton’s (202 tokens) and Trump’s (45 tokens) speeches. Nevertheless, it is to be expected that there are still further constructions with a similar effect that will need to be identified by future research and whose effect on sentiment scores will have to be addressed in any automatic sentiment analyses.
Source: http://www.presidency.ucsb.edu/2016_election_speeches.php?candidate=70&campaign=2016CLINTON&doctype=5000 (accessed 07 March 2017).
As Schrepp (2006) points out, in cases with configurations of three variables and low sample size, the CFA approach adopted in this paper might be problematic. In the present study, however, we only test configurations with two variables and a large amount of data points. In essence, this means that the association of the two variables is tested by a set of simple binomial tests, which are each corrected for multiple testing via the Holm method.