Social Media and Moral Outrage
On October 3, 2021, Frances Haugen came forward as the most recent Facebook whistleblower. She provided the internal documents behind the Wall Street Journal (WSJ) investigation titled “The Facebook Files.” These documents lay out how “Facebook Inc. knows, in acute detail, that its platforms are riddled with flaws that cause harm, often in ways only the company fully understands.”1 Haugen believes that the company’s products “harm children, stoke division and weaken … democracy.”
Documents leaked by Haugen reveal concerning information about the 2018 change to the Facebook News Feed algorithm that boosted Meaningful Social Interactions (or MSI) between friends and family.2 At the time, the change was generally heralded as a positive one, expected to increase engagement and improve user well-being. Internal memos reveal that, in fact, the opposite happened: the overhauled algorithm “rewarded outrage” and incentivized sensationalism.
How did this seemingly well-intended change backfire? Why would prioritizing posts from those we’re closest to result in such a catastrophe of hate, anger, and negativity? The MAD model of moral contagion offers a behavioral perspective that can help us answer these questions.
The evolutionary roots of moral outrage
Research has found that content is often circulated more widely on social media if it is “moralized,” meaning that it “references ideas, objects, or events typically construed in terms of the interests or good of a unit larger than the individual.”3 In general, emotional content is more likely to be shared online, but when it comes to politics and news stories, negative emotions are particularly effective at increasing a piece of content’s reach.4,5,6 Political news framed in terms of morality7 and tweets containing moral-emotional words tend to propagate more on social media. In comparison, posts containing only moral or only emotional words do not enjoy such engagement.8
Our habit of reposting emotional content most likely has evolutionary roots. Humans share emotional stories as a means of building social bonds. It is hypothesized that sharing contributes to collective action by helping create a perception of similarity between people, facilitating emotional coordination and aligning our views of the world.9
But why our propensity for moral outrage specifically? In part, we may have evolved to gravitate towards this kind of content because moral outrage can act as a signal of our social identity, values, and ideals to others in the group (as well as to ourselves).10 Highlighting wrong or immoral behavior is a powerful way for us to maintain or enhance our reputation in a particular social circle.
Getting MAD: Why moral outrage spreads online
Taken together, these findings suggest that moral and emotional expression has evolutionary roots: it helps build social bonds, elevate one’s reputation, and signal one’s identity, morality, and values.
Unfortunately, when moral-emotional content spreads, it may act as an antecedent to political polarization, encouraging the circulation of political news within political identity boundaries.11,12 Moral outrage finds encouragement within these boundaries.13 It has the potential to feed into major political upheaval, and even to create such a deep divide between group identities that extreme actions (such as violence) come to be seen as acceptable.14
The MAD model of moral contagion was developed by researchers Molly Crockett, Jay Van Bavel, and William Brady to explain why moralized content spreads so fast on social media. According to their foundational paper, “The MAD model posits that people have group-identity-based motivations to share moral-emotional content, that such content is especially likely to capture our attention, and that the design of social-media platforms amplifies our natural motivational and cognitive tendencies to spread such content.”15
Let’s break down each of these dimensions, starting with motivations.
Group identity–based motivations for moral outrage
Humans are social creatures: we thrive in groups. In prehistoric times, this required us to build trust and good relationships with those who helped us during collective or individual distress. At the same time, it was crucial to be alert to threats from rival groups, lest we be killed or lose our valuable resources. We developed shortcuts to figure out whom to trust and whom to distrust.
We live with these same evolutionary instincts in the 21st century: those who we perceive to share our values and views are considered our ingroup, while those who we perceive to disagree with us are considered our outgroup.
When group identity is easily noticeable, as it often is on social media, we tend to shift from self-focused motivations to group-focused ones. Our attitudes, emotions, and behaviors start to be influenced more by evaluations made along the lines of this group identity than by individual goals.16 We tend to engage in actions that distinguish the ingroup from the outgroup, reinforcing our belonging to the ingroup and displaying our affirmation of its values and morals, especially when threats emerge.17
Consider the events around the #MeToo campaign, which went viral in 2017. As more and more people started posting about their experiences with sexual harassment and abuse, certain groups felt more threatened than others. Some reacted defensively, suggesting that all men were being punished for the misdeeds of a few. These posts often framed the issue as a question of being on separate “sides”: women were the outgroup, and men were the ingroup. In some cases, this devolved into hostility towards #MeToo supporters and those who shared their stories publicly (ingroup: people who opposed changes to the status quo; outgroup: people who supported change).18
Research shows that bashing the outgroup and expressing animosity towards “them” on social media is far more effective at driving engagement than merely expressing support for the ingroup.19 Moral-emotional posts expressing such animosity enjoy a bigger reach, especially since social media algorithms are designed to further promote content that is already performing well on engagement metrics.
Ads on social media can further entrench these group identities. Facebook’s ad delivery algorithm seems to “effectively differentiate the price of reaching a user based on their … political alignment… inhibiting political campaigns’ ability to reach voters with diverse political views.” In other words, it is cheaper for an entity to reach an ingroup audience than an outgroup audience.20 For an entity acting on a small budget, this could mean that it would rather allocate a significant proportion of its budget to reaching the ingroup, thereby contributing to the political polarization of the general populace.
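To make this pricing asymmetry concrete, here is a minimal sketch in Python. All of the numbers below (the budget and per-user costs) are hypothetical, chosen only to illustrate the dynamic reported by Ali and colleagues:

```python
# Hypothetical illustration of the pricing asymmetry reported by
# Ali et al.: if reaching politically aligned ("ingroup") users is
# cheaper per impression than reaching non-aligned ("outgroup")
# users, a fixed budget buys far more ingroup reach.
# All numbers are made up for illustration.

BUDGET = 100.00                 # hypothetical ad budget, in dollars
COST_PER_INGROUP_USER = 0.05    # hypothetical price per aligned user reached
COST_PER_OUTGROUP_USER = 0.15   # hypothetical price per non-aligned user reached

ingroup_reach = BUDGET / COST_PER_INGROUP_USER    # 2,000 users
outgroup_reach = BUDGET / COST_PER_OUTGROUP_USER  # ~667 users

print(f"Ingroup users reached:  {ingroup_reach:.0f}")
print(f"Outgroup users reached: {outgroup_reach:.0f}")
# With identical spend, the campaign reaches ~3x more ingroup users,
# nudging budget-constrained advertisers to preach to the choir.
```

Under these assumptions, the same spend reaches roughly three times as many ingroup users, so a rational, budget-constrained campaign ends up talking mostly to people who already agree with it.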
On the other hand, it also seems that exposure to outgroup views online can strengthen a person’s ingroup beliefs.21 U.S. researchers have found that, after Republican study participants were exposed to Democratic viewpoints online, they expressed more conservative attitudes. (The same trend was also seen in Democratic participants, but the effect for this group was smaller and not statistically significant.) So the answer to this problem is not simply to expose users to more diverse viewpoints; in some cases, this may backfire and exacerbate polarization.
Taken together, the evidence points to a widening gap in the populace, with social media aggravating this polarization, contrary to the platforms’ stated mission of bringing people together and providing a stage for meaningful conversation.22 By promoting content based on group identities, social media seems to be amplifying the divide and enabling a lack of shared reality between opposing groups.
Let’s move on to the second part of the MAD model of moral contagion.
Attention and moral outrage
The social media business is modeled around the concept of the attention economy. In such an economy, human attention is deemed a scarce resource that can be harvested for profit. In a bid to do just that, social media algorithms are designed to promote content that engages people for as long as possible, encouraging them to spend more time online than they may have intended. This creates more opportunities to show users paid ads, as well as more data that can be used to optimize targeting algorithms and increase revenue.
The vast stores of data these companies have accumulated over time, together with their computational power, have been harnessed with a singular aim: to exploit human attentional resources in every way possible. These algorithms are essentially amoral: they have no sense of right or wrong, and are only sensitive to what works to maximize attention. As a result, algorithmic biases overlap with human biases to serve up more negativity and moral outrage.
Emotions and morals take center stage in any political discussion or event. Moral and emotional words that capture more attention in laboratory settings were indeed found to be associated with greater sharing when they appeared in posts on social media.23 Each moral-emotional word in a tweet is associated with an average 20% increase in its diffusion (i.e., its retweet rate).24
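To see what this effect size implies, here is a minimal sketch of the multiplicative relationship, assuming the roughly 20% per-word increase above; the baseline retweet count is a hypothetical value chosen for illustration:

```python
# Minimal sketch of the per-word diffusion effect reported by
# Brady et al. (2017): each moral-emotional word is associated with
# roughly 20% more retweets, i.e., a 1.2x multiplier per word.

BASELINE_RETWEETS = 100      # hypothetical baseline for a neutral tweet
PER_WORD_MULTIPLIER = 1.20   # ~20% expected increase per moral-emotional word

def expected_retweets(n_moral_emotional_words: int) -> float:
    """Expected retweets under a simple multiplicative model."""
    return BASELINE_RETWEETS * PER_WORD_MULTIPLIER ** n_moral_emotional_words

for n in range(4):
    print(f"{n} moral-emotional words -> ~{expected_retweets(n):.0f} retweets")
# 0 -> 100, 1 -> 120, 2 -> 144, 3 -> 173: the effect compounds quickly.
```

Because the effect is multiplicative, a handful of well-chosen moral-emotional words can compound into a substantially wider reach.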
Analyzing bad behavior allows us to judge people and their character. Extreme and negative evaluations are indeed attention-grabbing.25 Interested parties have historically been able to game our sensitivity towards moral contagion to rile up political action through mainstream media such as radio, newspapers, and television. Social media has opened this possibility up to the masses, and has accelerated the speed and scope of outrage marketing.
No entity in the history of the world has ever held the kind of power social media wields over our collective attention. In the past, prosocial online campaigns like #MeToo and #BlackLivesMatter captured human attention through our bias towards morality and contributed to meaningful offline activities like coordinated protests and policy changes. Other online trends have had antisocial effects: the QAnon conspiracy theory and the Capitol Hill riots are two examples of the harmful consequences of moral contagion online.
Design and moral outrage
Finally, let’s discuss the third leg of the MAD model of moral contagion. Social media is designed to appeal to our System 1 brain. It demands quick actions from users: viewing posts, reacting to them, writing them, and building relationships online. It piggybacks on our understanding of words like “share,” “love,” “like,” and “friends” to help us feel comfortable with an entirely new way of building relations, a process that has historically required a great deal of face-to-face communication.
Social media has removed significant friction from processes like sharing opinions, debating, and calling out injustices, while at the same time making it easier for people to express animosity, antipathy, and hate towards people or entities. Face-to-face interaction invokes a sense of empathy in us humans: it helps us become aware of how our comments or actions may be received on the other end, and anticipating how we might feel in the other person’s shoes often keeps us from disrespectfully expressing our discontent.26 Research also shows that interacting through speech is more likely to positively influence our evaluation of someone than text-based interaction is. Text, however, is how most social interactions take place on social media.27
People are more likely to rely on their emotions when forced to make a moral decision quickly.28 They’re also more likely to react quickly when they think along the lines of their moral beliefs and values.29 This further strips away friction, making the expression of moral outrage far easier online than offline.
Thanks to algorithmic recommendations, people easily find themselves in echo chambers where their outrage, especially against an outgroup, may be well received, even encouraged. The reduction of humans to two-dimensional icons also allows us to be readily vocal about punishing a wrongdoer.
Furthermore, the always-on nature of the internet and its 24-hour services around the world mean that moral outrage is no longer constrained by time or place. A person doesn’t need to be physically present or be a local to express outrage at something. Indeed, people consume more information about immoral actions online compared to offline.30
Revisiting the Facebook Files
Knowing what we know now about the MAD model of moral contagion, we may be able to see why the MSI initiative failed miserably at delivering on its promise of better content. Facebook believed that the dominance of video and professionally produced posts was turning people into passive consumers of content: time spent was on the rise, but people were not engaging with what they saw. The hope was that increasing interaction with the people in one’s friends list would encourage active responses to the content users were scrolling through, and improve their well-being in the process.
Where the MSI system went wrong was in the way it scored content. It skewed the algorithm towards posts that evoked (or seemed to evoke) emotional reactions, which then got reshared, attracted long comments (as is often the case in internet flame wars), and were likely to circulate widely within particular communities.
Consider a post using moral-emotional language in a political context. This post is likely to evoke the author’s political identity, and readers are likely to react depending on whether the poster belongs to their ingroup or outgroup. Even the more passive users might still use the react buttons (“haha,” “anger,” “love,” etc.) to signal an emotional response.31
With each such reaction, the MSI algorithm adds 5 points to the score that decides how the post will be prioritized. The comments are more likely to come from people with politically extreme views.32 They might support or oppose the view being expressed (if the post somehow finds its way outside of its political-ideological boundaries), and they might also contribute to the widening of the perception gap between groups.
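To make the scoring mechanics concrete, here is a minimal sketch of MSI-style weighted engagement scoring. Only the 5-point reaction weight comes from the reporting above; the other weights and the example posts are hypothetical:

```python
# Minimal sketch of MSI-style engagement scoring. Only the 5-point
# reaction weight is taken from the reporting above; the remaining
# weights and the example posts are hypothetical.

ENGAGEMENT_WEIGHTS = {
    "like": 1,       # hypothetical weight
    "reaction": 5,   # emoji reacts ("haha," "anger," "love," ...), per the reporting
    "comment": 15,   # hypothetical weight
    "reshare": 30,   # hypothetical weight
}

def msi_score(engagements: dict) -> int:
    """Sum weighted engagement counts into a single ranking score."""
    return sum(ENGAGEMENT_WEIGHTS.get(kind, 0) * count
               for kind, count in engagements.items())

# An outrage-provoking political post draws emoji reactions, comment
# wars, and reshares, so it outranks a calmer post with more likes.
outrage_post = {"like": 20, "reaction": 40, "comment": 30, "reshare": 10}
calm_post    = {"like": 100, "reaction": 5, "comment": 2, "reshare": 1}

print(msi_score(outrage_post))  # 20*1 + 40*5 + 30*15 + 10*30 = 970
print(msi_score(calm_post))     # 100*1 + 5*5 + 2*15 + 1*30 = 185
```

Because the score feeds back into distribution, a post that wins on reactions and comments is shown to more users, which generates still more reactions and comments, exactly the amplification loop described above.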
Given that people on social media are more likely to be friends with people who share similar tastes, and that the algorithm demotes posts from non-friends and strangers, people are even less likely to see a post from an outgroup. There are some upsides to this, as online interaction with those who disagree can be frustrating or upsetting.33 But it also means that these groups remain consistently separated on social media, which propagates a lack of shared reality. Thanks to the MSI algorithm, a post containing moral-emotional language is likely to become an instance of moral contagion.
Final words
Social media algorithms prioritize the spread of content that has proven to be popular, irrespective of what that content actually is, for the sake of monetizing this engagement. Successful content is often crafted to provoke moral outrage, which humans have natural, group identity–based motivations to share.
It is when this kind of online activity leads to offline consequences (like discrimination, intimidation, or even violence) that we must begin to question the stances tech giants often take on free speech. Do we proactively moderate online content, at the expense of making some concessions on our right to free speech? Or do we address the algorithm, and the business model underneath it all, which allow for the amplification of potentially hateful speech in the first place?
In the recent past, tech companies have shown a new interest in using their platforms to protect democracy. But it has never quite been their priority. Companies claim to be neutral platforms facilitating meaningful debate. But do they retain the right to call themselves neutral “platforms” when they choose what each user sees?
In fact, tech companies are anything but neutral: their algorithms maximize attention and engagement and reap profits from doing so. Even when companies do try to take action, their efforts are most often focused exclusively on Western, English-speaking countries.34 Countries outside of the US suffer equally, if not more, from this phenomenon. In Germany, for instance, researchers found that during periods when anti-refugee sentiment was on the rise, fewer anti-refugee attacks occurred for the duration of internet outages.35
Frances Haugen strongly believes that companies like Facebook should be more transparent, and that they should support the formation of independent oversight bodies to help them navigate the sea of problems on their platforms. For better or for worse, tech platforms have become part of the social fabric of the contemporary world. They have a responsibility to take action, as they exert a major influence on the path humanity takes collectively.
References
- (2021, September 13) The Facebook Files. The Wall Street Journal. https://www.wsj.com/articles/the-facebook-files-11631713039
- Hagey, K., & Horwitz, J. (2021, September 15). Facebook Tried to Make Its Platform a Healthier Place. It Got Angrier Instead. The Wall Street Journal. https://www.wsj.com/articles/facebook-algorithm-change-zuckerberg-11631654215
- Brady, W. J., Crockett, M. J., & Van Bavel, J. J. (2020). The MAD Model of Moral Contagion: The Role of Motivation, Attention, and Design in the Spread of Moralized Content Online. Perspectives on psychological science : a journal of the Association for Psychological Science, 15(4), 978–1010. https://doi.org/10.1177/1745691620917336
- Heimbach, I., Schiller, B., Strufe, T., & Hinz, O. (2015). Content virality on online social networks: Empirical evidence from Twitter, Facebook, and Google+ on German news websites. In Proceedings of the 26th ACM Conference on Hypertext & Social Media (pp. 39–47). New York, NY: Association for Computing Machinery. doi:10.1145/2700171.2791032
- Hansen, L.K., Arvidsson, A., Nielsen, F.A., Colleoni, E., & Etter, M. (2011) Good friends, bad news—affect and virality in Twitter. In J. J. Park, L. T. Yang, & C. Lee (Eds.), Future information technology (pp. 34–43). Amsterdam, The Netherlands: Springer. doi:10.1007/978-3-642-22309-9_5
- Fan, R., Zhao, J., Chen, Y., & Xu, K. (2014). Anger is more influential than joy: Sentiment correlation in Weibo. PLOS ONE, 9(10), Article e110184. doi:10.1371/journal.pone.0110184
- Valenzuela, S., Piña, M., & Ramírez, J. (2017) Behavioral Effects of Framing on Social Media Users: How Conflict, Economic, Human Interest, and Morality Frames Drive News Sharing. Journal of Communication, 67(5), pp 803–826, https://doi.org/10.1111/jcom.12325
- Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., & Van Bavel, J. J. (2017). Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences, USA, 114, 7313–7318.
- Peters, K., & Kashima, Y. (2007). From social talk to social action: Shaping the social triad with emotion sharing. Journal of Personality and Social Psychology, 93(5), 780–797. https://doi.org/10.1037/0022-3514.93.5.780
- Aquino, K., & Reed, A. II. (2002). The self-importance of moral identity. Journal of Personality and Social Psychology, 83(6), 1423–1440. https://doi.org/10.1037/0022-3514.83.6.1423
- Barberá, P., Jost, J. T., Nagler, J., Tucker, J. A., & Bonneau, R. (2015). Tweeting From Left to Right: Is Online Political Communication More Than an Echo Chamber? Psychological Science, 26(10), 1531–1542. https://doi.org/10.1177/0956797615594620
- Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., & Van Bavel, J. J. (2017). Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences, USA, 114, 7313–7318.
- Brady, W. J., McLoughlin, K. L., Doan, T. N., & Crockett, M. (2021, January 19). How social learning amplifies moral outrage expression in online social networks. https://doi.org/10.31234/osf.io/gf7t5
- Mooijman, M., Hoover, J., Lin, Y., Ji, H., & Dehghani, M. (2018). Moralization in social networks and the emergence of violence during protests. Nature Human Behaviour, 2, 389–396.
- Brady, W. J., Crockett, M. J., & Van Bavel, J. J. (2020). The MAD Model of Moral Contagion: The Role of Motivation, Attention, and Design in the Spread of Moralized Content Online. Perspectives on psychological science : a journal of the Association for Psychological Science, 15(4), 978–1010. https://doi.org/10.1177/1745691620917336
- Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. G. Austin & S. Worchel (Eds.), The social psychology of intergroup relations (pp. 33–47). Monterey, CA: Wadsworth.
- Branscombe, N. R., Ellemers, N., Spears, R., & Doosje, B. (1999). The context and content of social identity threat. In N. Ellemers & R. Spears (Eds.), Social identity: Context, commitment, content (pp. 35–59). Oxford, England: Blackwell Science.
- PettyJohn, M. E., Muzzey, F. K., Maas, M. K., & McCauley, H. L. (2019). #HowIWillChange: Engaging men and boys in the #MeToo movement. Psychology of Men & Masculinities, 20(4), 612–622. https://doi.org/10.1037/men0000186
- Rathje, S., Van Bavel, J. J., & van der Linden, S. (2021). Out-group animosity drives engagement on social media. Proceedings of the National Academy of Sciences, 118(26), e2024292118. https://doi.org/10.1073/pnas.2024292118
- Ali, M., Sapiezynski, P., Korolova, A., Mislove, A., & Rieke, A. (2021) Ad delivery algorithms: The hidden arbiters of political messaging. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, WSDM ’21. 13–21. https://doi.org/10.1145/3437963.3441801
- Bail, C.A., Argyle, L.P., Brown, T.W., Bumpus, J.P., Chen, H., Hunzaker, M.B.F., Lee, J., Mann, M., Merhout, F., & Volfovsky, A. (2018). Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences. 115 (37) 9216-9221. https://doi.org/10.1073/pnas.1804840115
- Yudkin, D., Hawkins, S., & Dixon, T. (2019). The Perception Gap. More in Common. https://perceptiongap.us/ Retrieved 14 October 2021.
- Brady, W. J., Gantman, A. P., & Van Bavel, J. J. (2020). Attentional capture helps explain why moral and emotional content go viral. Journal of Experimental Psychology: General, 149(4), 746–756. https://doi.org/10.1037/xge0000673
- Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., & Van Bavel, J. J. (2017). Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences, USA, 114, 7313–7318. https://doi.org/10.1073/pnas.1618923114
- Fiske, S. T. (1980). Attention and weight in person perception: The impact of negative and extreme behavior. Journal of Personality and Social Psychology, 38(6), 889–906. https://doi.org/10.1037/0022-3514.38.6.889
- Crockett, M.J. Moral outrage in the digital age. Nature Human Behaviour 1, 769–771 (2017). https://doi.org/10.1038/s41562-017-0213-3
- Schroeder, J., Kardas, M., & Epley, N. (2017). The Humanizing Voice: Speech Reveals, and Text Conceals, a More Thoughtful Mind in the Midst of Disagreement. Psychological Science, 28(12), 1745–1762. https://doi.org/10.1177/0956797617713798
- Suter, R. S., & Hertwig, R. (2011). Time and moral judgment. Cognition, 119, 454–458. https://doi.org/10.1016/j.cognition.2011.01.018
- Van Bavel, J. J., Packer, D. J., Haas, I. J., & Cunningham, W. A. (2012). The importance of moral construal: Moral versus non-moral construal elicits faster, more extreme, universal evaluations of the same actions. PLOS ONE, 7(11), Article e48693. https://doi.org/10.1371/journal.pone.0048693
- Crockett, M.J. Moral outrage in the digital age. Nature Human Behaviour 1, 769–771 (2017). https://doi.org/10.1038/s41562-017-0213-3
- Rathje, S., Van Bavel, J. J., & van der Linden, S. (2021). Out-group animosity drives engagement on social media. Proceedings of the National Academy of Sciences, 118(26), e2024292118. https://doi.org/10.1073/pnas.2024292118
- Yudkin, D., Hawkins, S., & Dixon, T. (2019). The Perception Gap. More in Common. https://perceptiongap.us/ Retrieved 14 October 2021.
- (2016, October 25). The Political Environment on Social Media. Pew Research Center. https://www.pewresearch.org/internet/2016/10/25/the-political-environment-on-social-media/ Retrieved 17 October 2021.
- (2021, October 6). Facebook disputes claim of inadequate flagging of vernacular content. Economic Times. https://economictimes.indiatimes.com/tech/technology/facebook-budget-to-curb-misinformation-in-india-paltry-lacks-hindi-bengali-content-reviewers/articleshow/86804663.cms Retrieved 17 October 2021.
- Müller, K., & Schwarz, C. (2020). Fanning the Flames of Hate: Social Media and Hate Crime. Journal of the European Economic Association, 19(4), 2131–2167. http://dx.doi.org/10.2139/ssrn.3082972
About the Author
Paridhi Kothari
Paridhi Kothari is an MSc Behavioural Science student at the University of Warwick with a background in Mathematics and Computing. She is interested in studying the impact of social media on the socio-cultural and political fabric of society at large. She is currently volunteering at Big Brother Watch (UK) as a Policy Researcher for their Online Free Speech campaign, and has previously worked in the field of Consumer Behaviour as a Research Assistant.