Do Partisans Make Riskier Financial Decisions When Their Party Is in Power?

(with David Rothschild and Shawndra Hill)

Link to Paper

Partisans’ beliefs about the economy vary dramatically depending on the party that holds the presidency. Do these responses represent genuine differences in beliefs about the economy, or do they reflect partisans’ expressive reporting on surveys? To answer this question, we rely on a novel dataset of Bing searches for housing, automobiles, and the stock market for over 150,000 partisans from February 2016 to July 2017, as well as a dataset of DMV personal vehicle registrations in New York State. We find that in the aftermath of the 2016 election, Democrats, as members of the losing party, were less likely to search for both cars and houses. Furthermore, we find that Republican zip codes experienced a greater increase in car registrations in 2017 than Democratic zip codes. This statistically significant and meaningful shift in purchasing behavior suggests that partisans’ survey responses are actually due to different beliefs about the economy, rather than expressive reporting.
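The zip-code comparison above can be framed as a simple difference-in-differences: compare the 2016-to-2017 change in registrations between Republican- and Democratic-leaning zip codes. This is an illustrative sketch only; the numbers below are invented and are not the paper's estimates.

```python
# Hypothetical mean car registrations per zip code, by partisanship and year.
# All values are made up for illustration; the paper uses NY DMV records.
registrations = {
    ("rep", 2016): 100.0, ("rep", 2017): 112.0,
    ("dem", 2016): 100.0, ("dem", 2017): 103.0,
}

# Change within each group, then the difference between those changes.
rep_change = registrations[("rep", 2017)] - registrations[("rep", 2016)]
dem_change = registrations[("dem", 2017)] - registrations[("dem", 2016)]
did = rep_change - dem_change

print(did)  # 9.0: Republican zip codes gained 9 more registrations per zip
```

A positive estimate corresponds to the paper's finding that Republican zip codes saw the larger post-election increase.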

Does Partisanship Affect Compliance with Government Recommendations?

Link to Paper

Because of polarization, Democrats and Republicans have different levels of trust in government depending on which party is in power. To what degree does this differential trust affect partisans' willingness to comply with government recommendations? To answer this question, I analyze compliance with government vaccination recommendations in three separate cases, using survey data and kindergarten vaccination data spanning both the George W. Bush and Obama administrations. I find that people are more likely to think vaccines are safe, more likely to say they intend to vaccinate, and more likely to actually vaccinate their children when their preferred party is in power. Using mediation analysis, I confirm that partisan differences in perceptions of vaccine safety are driven by differences in government trust. These findings suggest that partisanship significantly shapes behavior, even in domains concerning health and medical care.
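The mediation logic in the abstract above can be sketched on simulated data: co-partisan president (X) affects trust (M), which affects perceived vaccine safety (Y), and the indirect effect is the product of the two paths. All coefficients below are invented, and the partialling step uses the Frisch-Waugh trick rather than the paper's actual estimator.

```python
# Toy mediation sketch with simulated data (stdlib only). Not the paper's
# data or model: the 0.6, 0.5, and 0.05 coefficients are assumptions.
import random

random.seed(1)
n = 5000
X = [float(random.random() < 0.5) for _ in range(n)]          # co-partisan president
M = [0.6 * x + random.gauss(0, 1) for x in X]                 # trust in government
Y = [0.5 * m + 0.05 * x + random.gauss(0, 1) for x, m in zip(X, M)]  # perceived safety

def fit(y, x):
    """OLS slope and intercept of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return b, my - b * mx

def residuals(y, x):
    b, a = fit(y, x)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

a_path, _ = fit(M, X)                                   # X -> M
b_path, _ = fit(residuals(Y, X), residuals(M, X))       # M -> Y holding X fixed (FWL)
total, _ = fit(Y, X)                                    # total effect of X on Y
indirect = a_path * b_path                              # effect transmitted via trust
direct = total - indirect

print(indirect, direct)  # indirect dominates: most of the effect runs through trust
```

In this simulation the indirect (trust-mediated) effect is roughly 0.3 while the direct effect is small, mirroring the abstract's claim that trust carries the partisan gap in safety perceptions.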

President Trump Stress Disorder: Partisanship, Ethnicity, and Expressive Reporting of Mental Distress after the 2016 Election

(with David Rothschild, Shawndra Hill, and Elad Yom-Tov)

Link to Paper

In the aftermath of the 2016 election, many Democrats reported significant increases in stress, depression, and anxiety. Were these increases real, or the product of expressive reporting? Using a unique dataset of searches by over 1 million Bing users before and after the election, we examine changes in mental-health-related searches among Democrats and Republicans. We then compare these changes to shifts in searches among Spanish-speaking Latinos in the US. We find that while Democrats may report greater increases in post-election mental distress, their mental health search behavior did not change after the election. Spanish-speaking Latinos, on the other hand, showed clear, significant, and sustained increases in searches for "depression," "anxiety," "therapy," and antidepressant medications. This suggests that for many Democrats, expressing mental distress after the election was a form of partisan cheerleading.

Using Natural Language Processing to Detect Partisan Polarization in Text

Link to slides


In this paper, I propose a supervised model of targeted sentiment analysis that relies on Natural Language Processing to identify a text's sentiment toward specific political figures or parties. The model assigns "relevance" to words within a sentence based on the sentence's dependency structure. I train the model on a human-coded sample of 500 sentences to classify words as "relevant" or "irrelevant" to the subject based on their dependency relationships. Words the model recognizes as relevant to the political figure or party are then analyzed to produce a sentiment score for that entity. I find that for the task of assigning targeted sentiment to political figures and parties, this model significantly outperforms both a simple dictionary-based sentiment model and a proximity-based sentiment model. Finally, I demonstrate an application of the model by examining over 5,000 Congressional candidate websites from 2002 to 2016. Using this model, I show that candidates' expressed negativity toward the opposing party (not just the opposing candidate) has risen dramatically since 2002.
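The core idea, scoring only words that are syntactically connected to the target entity, can be illustrated with a toy example. The hand-built parse, the tiny lexicon, and the distance cutoff below are all assumptions for illustration; the paper instead trains a classifier on the 500 human-coded sentences to label words relevant or irrelevant.

```python
# Toy targeted sentiment via dependency-tree distance (not the paper's model).
# Sentence: "Smith praised teachers but attacked Jones", with heads assigned
# by hand rather than by a real parser.
TOKENS = ["Smith", "praised", "teachers", "but", "attacked", "Jones"]
HEADS = [1, 1, 1, 4, 1, 4]   # index of each token's syntactic head; root points to itself
LEXICON = {"praised": 1, "attacked": -1}   # toy sentiment dictionary

def path_to_root(i):
    path = [i]
    while HEADS[path[-1]] != path[-1]:
        path.append(HEADS[path[-1]])
    return path

def tree_distance(i, j):
    """Number of dependency edges between tokens i and j."""
    pi, pj = path_to_root(i), path_to_root(j)
    lca = next(node for node in pi if node in pj)   # lowest common ancestor
    return pi.index(lca) + pj.index(lca)

def targeted_sentiment(target, max_dist=1):
    """Sum lexicon scores of words within max_dist edges of the target."""
    t = TOKENS.index(target)
    return sum(LEXICON.get(TOKENS[i], 0)
               for i in range(len(TOKENS))
               if i != t and tree_distance(t, i) <= max_dist)

print(targeted_sentiment("Smith"))   # 1: "praised" is Smith's direct head
print(targeted_sentiment("Jones"))   # -1: "attacked" is Jones's direct head
```

A whole-sentence dictionary model would score this sentence 0 for both entities, which is exactly the failure mode the targeted model is meant to avoid.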