
Elections and the Economy: What to do about Recessions?

  • Rand Ghayad, Michael Cragg and Frank Pinter
From the journal The Economists' Voice

Abstract

There is no doubt that the US is embedded in a weak economy. Bond markets are now saying that neither inflation rates approaching 2 percent targets nor real interest rates substantially above zero are on the horizon in the foreseeable future. Growth forecasts are being revised downwards in most places, and there is growing evidence in the US that inflation expectations are becoming unanchored to the downside. That, along with widening credit spreads and a stronger dollar as Europe and Japan plunge more deeply into the world of negative rates, is alarming and suggests that we are at risk of another recession. Presidential candidates on both sides of the aisle have laid out their positions on major issues ranging from taxes, the minimum wage, and the federal debt to immigration, military strategy, and climate change. While candidates’ policy positions generally differ, it is not clear what their priorities are across most of these issues. In this paper, we apply text analytics to hundreds of thousands of words that appeared in policy releases, interviews, and debate transcripts related to all of the 2016 presidential candidates. Among other things, we find that macroeconomic policy, or “what to do about recessions,” has been largely ignored by presidential candidates in this year’s election. Perhaps not surprisingly, Donald Trump’s positions focus on Mexico and China, while Hillary Clinton’s positions focus heavily on gender and social factors. This appears to contrast with Bill Clinton’s campaign in the early 1990s, which was focused on America’s declining economy, emphasizing a basic but effective slogan, “It’s the Economy, Stupid!”

Appendix

Before doing any analysis, we converted words into tokens using the Natural Language Toolkit in Python. This involves multiple steps, designed to filter out subtleties of language that do not contribute to meaning but could introduce noise into automated processes like Latent Dirichlet Allocation.

We used the following steps (a rough code sketch follows the list):

  1. Convert text to lowercase.

  2. Remove special characters, including punctuation and apostrophes.

  3. Remove words that are only one character long, as well as words that contain any numbers.

  4. Remove stopwords, commonly occurring words with little informational content (like “the,” “of,” “an”) using the Natural Language Toolkit stopword list.

  5. Use the Porter Stemmer to strip suffixes from words, reducing each word to its root token. For example, “issue,” “issues” and “issuing” all become “issu.”
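
In code, these preprocessing steps can be sketched roughly as follows using the Natural Language Toolkit. The function name `tokenize` and the regular expression are our own illustrative choices rather than the exact code used in the analysis.

```python
import re

from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

# Assumes the NLTK stopword corpus is available, e.g. via nltk.download("stopwords").
STOPWORDS = set(stopwords.words("english"))
STEMMER = PorterStemmer()


def tokenize(text):
    """Convert a raw document into a list of stemmed tokens (illustrative sketch)."""
    text = text.lower()                                   # step 1: lowercase
    text = re.sub(r"[^a-z0-9\s]", " ", text)              # step 2: drop punctuation and apostrophes
    tokens = text.split()
    tokens = [t for t in tokens                           # step 3: drop one-character words
              if len(t) > 1 and not any(c.isdigit() for c in t)]  # and words containing numbers
    tokens = [t for t in tokens if t not in STOPWORDS]    # step 4: drop stopwords
    return [STEMMER.stem(t) for t in tokens]              # step 5: Porter stemming, e.g. "issues" -> "issu"
```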

Once each document has been converted to a list of tokens, we construct a document-term matrix using the scikit-learn machine learning package in Python. This matrix records the number of times each token appears in each document.
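
A minimal sketch of this step, assuming scikit-learn’s CountVectorizer and the `tokenize` function sketched above; the variable name `documents` is illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer

# `documents` is a list of raw text strings; `tokenize` is the function sketched above.
vectorizer = CountVectorizer(tokenizer=tokenize, lowercase=False)
X = vectorizer.fit_transform(documents)     # sparse counts: rows are documents, columns are tokens
vocab = vectorizer.get_feature_names_out()  # token corresponding to each column
                                            # (get_feature_names() in older scikit-learn versions)
```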

By using a document-term matrix, we inherently assume that each document is represented by a bag of words, in which the order of words does not matter. This is a useful simplification for constructing models of text, even though (strictly speaking) it is never true. For models like LDA, co-occurrence of words matters more than the order in which the words appear.

We then used the LDA package in Python to train the LDA model. We chose to use 20 topics in order to allow some differentiation among topics, without over-fitting our limited set of texts. Given the document-term matrix and the number of topics, the LDA package estimates the LDA parameters discussed below using a procedure called Gibbs sampling.
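
A minimal sketch of the training step, assuming the standard interface of the `lda` package (which implements collapsed Gibbs sampling); the iteration count and random seed shown here are illustrative choices, not the settings used for the paper’s results.

```python
import lda

# X is the integer document-term matrix built above.
model = lda.LDA(n_topics=20, n_iter=1500, random_state=1)  # n_iter and random_state are illustrative
model.fit(X)

topic_word = model.topic_word_  # one row per topic: estimated word distributions (the betas)
doc_topic = model.doc_topic_    # one row per document: estimated topic proportions (the thetas)
```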

The Latent Dirichlet Allocation model assumes that documents are generated by the following process (a short simulation in code follows the list):

  1. Given the global topic parameter η: for each topic k, draw β_k ~ Dirichlet(η) i.i.d. These are the word distributions for each topic; each β_k is represented as a vector whose length is the number of unique words in the corpus.

  2. Given the global proportions parameter α: for each document d, draw θ_d ~ Dirichlet(α) i.i.d. These are the topic proportions for each document; each θ_d is represented as a vector whose length is the number of topics.

  3. For each document d, draw the document’s n-th word W_{d,n} as follows:

    1. Draw the word’s topic assignment Z_{d,n} from the document’s topic distribution, Z_{d,n} ~ Multinomial(θ_d).

    2. Draw the observed word W_{d,n} from the assigned topic’s word distribution, W_{d,n} ~ Multinomial(β_{Z_{d,n}}).
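
To make this generative process concrete, the following sketch simulates it with numpy for a toy corpus; the vocabulary size, corpus dimensions, and hyperparameter values are arbitrary illustrations, not quantities taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
V, K, D, N = 1000, 20, 5, 50   # vocabulary size, number of topics, documents, words per document
eta, alpha = 0.01, 0.1         # symmetric Dirichlet hyperparameters (arbitrary illustrative values)

# Step 1: draw a word distribution beta_k ~ Dirichlet(eta) for each topic.
beta = rng.dirichlet(np.full(V, eta), size=K)    # shape (K, V)

corpus = []
for d in range(D):
    # Step 2: draw topic proportions theta_d ~ Dirichlet(alpha) for this document.
    theta_d = rng.dirichlet(np.full(K, alpha))   # shape (K,)
    words = []
    for n in range(N):
        z = rng.choice(K, p=theta_d)             # step 3a: topic assignment Z_{d,n}
        words.append(rng.choice(V, p=beta[z]))   # step 3b: observed word W_{d,n}
    corpus.append(words)
```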

Therefore, in order to train the LDA model, we need to estimate the latent variables, particularly the β_k and θ_d. The word clouds above are visual representations of β_k for each k, while the assigned “dominant topic” for each document d is the mode of θ_d. The LDA package performs this estimation using a Bayesian method called Gibbs sampling (Griffiths and Steyvers, 2004).
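
Given a fitted model, the dominant topic for each document and the highest-probability words in each topic (the basis for the word clouds) can be read off roughly as follows; `model` and `vocab` refer to the illustrative objects defined in the sketches above.

```python
import numpy as np

# Dominant topic for each document: the most probable component of its theta_d.
dominant_topic = model.doc_topic_.argmax(axis=1)

# Ten highest-probability tokens in each topic's word distribution (the word-cloud inputs).
for k, beta_k in enumerate(model.topic_word_):
    top_words = np.asarray(vocab)[np.argsort(beta_k)[::-1][:10]]
    print(f"Topic {k}: {', '.join(top_words)}")
```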

Acknowledgment:

The authors are employed by The Brattle Group. The authors would like to thank Leif Shen and Alexander Hoyle for providing research assistance.

References

Blei, David M., Andrew Y. Ng, and Michael I. Jordan. 2003. “Latent Dirichlet Allocation.” Journal of Machine Learning Research 3: 993–1022.

Ghayad, Rand, and Michael Cragg. 2015. “Growing Apart: The Evolution of Income vs. Wealth Inequality.” The Economists’ Voice 12 (1): 1–12. doi:10.1515/ev-2015-0006.

Griffiths, Thomas L., and Mark Steyvers. 2004. “Finding Scientific Topics.” Proceedings of the National Academy of Sciences 101: 5228–5235. doi:10.1073/pnas.0307752101.

Hansen, Stephen, Michael McMahon, and Andrea Prat. 2015. “Transparency and Deliberation within the FOMC: A Computational Linguistics Approach.” Working paper, November 4, 2015.

Published Online: 2016-11-16
Published in Print: 2016-12-01

©2016 Walter de Gruyter GmbH, Berlin/Boston
