Outrage and Polarization on Social Media: An Interview with William Brady

We’re thrilled to present an interview with William Brady, a computational social psychologist and NSF postdoctoral fellow in the psychology department at Yale University. Dr. Brady’s research focuses primarily on how moral emotions are expressed on social media, and how these kinds of expressions affect our broader political conversation.

PTI: Most of your recent work has focused on the dynamics of moral outrage online, but you have a background in philosophy. I’m interested in how you came, through that route, to be interested in the topics you study, and whether that lens informs your current research.

WB: My interest in philosophy was in moral philosophy, so thinking about different theories of right and wrong, what humans ought and ought not to do. But I was also always fascinated by the psychology of moral judgments. There’s a famous principle in moral philosophy, usually phrased as the question “does ought imply can,” which asks: should normative claims be constrained by what people descriptively can do, and in fact do? That’s something I think about all the time in my work as a moral psychologist.

PTI: There are philosophers who view the topics you research as being among the most important ethical issues in the world today. The philosopher James Williams, for example, has written that “the liberation of human attention may be the defining moral and political struggle of our time” and that “we have an obligation to retire this system of intelligent, adversarial persuasion before it retires us.” To what extent would you agree with this framing?

WB: Yeah, that's interesting. There's no doubt that whether we want to comment on it or not, the way that we interact with new technologies does have an impact on our moral systems and our ethical thinking, because these platforms potentially amplify certain aspects of human psychology. And we have to think about the consequences of that.

So I think it’s definitely a worthy area of inquiry. As an empirical scientist, I try to stay away from that realm, not because it’s not interesting, but because I think there’s a limit on the extent to which empirical data can speak to these normative claims. Nonetheless, I think it’s important to do the empirical work, because a lot of normative claims rest on descriptive claims being true or not.

For example, Molly [Crockett] and I have had a back-and-forth in print with another group of researchers about the effectiveness of moral outrage in the context of social media, and part of what came up in that exchange was whether moral outrage can be a catalyst for “good” consequences. And that starts to get into the normative realm.

Overall, both of our groups ended up avoiding the question of whether outrage on social media leads to “good” or “bad” consequences. Rather, we asked: is outrage effective for the particular goals that different groups have?

As to whether moral outrage is effective in promoting causes related to social justice, we come down somewhere in between. Outrage does have the potential to help, but on social media we see that it sometimes has the opposite effect because it can limit participation. Marginalized groups are disproportionately the targets of outrage, and if that makes them less likely to participate in these conversations, then that seems counter to a goal of enhancing social justice or even raising awareness of relevant issues.

PTI: Let’s dive into the details of your research. What have you learned about why and how moralized content spreads online?

WB: The first studies I did had to do with how we identify and categorize ourselves in groups. How does our tendency to think in terms of group identity potentially amplify moral emotions? The idea that we tested in one of our first papers, the 2017 PNAS paper, is that social media has a tendency to make our group identities very salient, and in particular our political group identities, and this is likely to lead to moral emotions spreading widely.

There's some survey work that I always like to cite that showed that 94% of users report seeing at least a little bit of political content in their social media feeds, whether it's Twitter or Facebook. And to me, that's just a really interesting piece of evidence suggesting that, as soon as we log on, our political group identities are very salient. 

What that means, theoretically, from the perspective of what is called social identity theory, is that our cognition and our emotions start to become aligned with our identity as group members rather than as individuals. We start to express emotions whose function is tied to group needs rather than, for instance, personal needs. Moral outrage is one such emotion, because its function is inherently group-based: it signals to others that someone has transgressed a group norm and is worthy of blame.

We measured different emotions using keywords and text mining in a corpus of Twitter data, across three different political topics. We found that the use of moral-emotional language was associated with increased sharing, an effect we call “moral contagion”: these messages spread most readily through the network. So this is one sense in which social media platforms shape moral discourse: they facilitate moral contagion by making our group identities salient.
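To make the measurement approach concrete, here is a minimal sketch of that kind of dictionary-based coding in Python. The word lists and the example tweet are hypothetical placeholders for illustration, not the validated dictionaries used in the actual study.

```python
import re

# Hypothetical mini-dictionaries for illustration only; the actual study
# used validated lexicons of moral, emotional, and moral-emotional words.
DISTINCTLY_MORAL = {"duty", "fairness", "justice", "righteous"}
DISTINCTLY_EMOTIONAL = {"afraid", "angry", "joy", "sad"}
MORAL_EMOTIONAL = {"disgust", "hate", "outrage", "shame"}

def tokenize(text):
    """Lowercase a tweet and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def code_tweet(text):
    """Count how many tokens fall into each word category."""
    tokens = tokenize(text)
    return {
        "moral": sum(t in DISTINCTLY_MORAL for t in tokens),
        "emotional": sum(t in DISTINCTLY_EMOTIONAL for t in tokens),
        "moral_emotional": sum(t in MORAL_EMOTIONAL for t in tokens),
    }

print(code_tweet("This ruling is an outrage and an insult to basic fairness"))
# -> {'moral': 1, 'emotional': 0, 'moral_emotional': 1}
```

Roughly speaking, per-message counts like these were then related to how often messages were shared, which is how the association between moral-emotional language and diffusion through the network was estimated.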

PTI: Given the biases to which group-based thinking is subject, do you think that this tendency of social media to make group identity more salient has degraded the overall quality of our moral and political conversation?

WB: I think anytime you’re in contexts where group identities are highly salient, you start to get the stereotypical behaviors and thinking that come along with that: the tendency to see the outgroup as more homogenous, derogating the outgroup, and acting in ways that protect the image of the ingroup. There is no doubt that the organization of users into large networks can amplify group-based cognition and emotion. Whether that is good or bad for politics is a whole other discussion.

If the goal is to come to an agreement or “meet in the middle,” then I would say, yes, these biases that come along with strongly identifying with groups will definitely impede that. But it is interesting to ask whether that goal is worthwhile. Certainly in some cases the level of partisan hatred we see now is not productive. For example, we showed in a recent Nature Human Behaviour paper that there were ideological differences in social distancing behavior, and that pro-Trump partisanship predicted COVID-19 mortality at the county level. So there are obvious cases in which we should want agreement, because a lack of partisan agreement can actually become a public health crisis.

But when it comes to social movements like Me Too and Black Lives Matter, for example, polarization theoretically can have some benefits. It can potentially get people to be aware of issues they wouldn’t otherwise know about and make real change happen.

PTI: It does seem, though, that there is a tradeoff when expressing oneself on social media between going viral and building consensus. The former relies on polarizing, group-based expressions, while the latter depends on mutual understanding and agreement.

WB: Excellent point. In my 2017 paper, as well as in a behavioral study we have under review right now, we showed that if we manipulate the moral and emotional content of messages, political partisans are more likely to report that they would share them. But if you then have an outgroup member read one of those messages versus a more neutral message, they rate the author as significantly more hostile, and they say they’re less likely to want to have a political conversation with the author of the post. So there are different ways you could frame something, and it’s interesting to think about the tradeoffs between different kinds of expression.

And it’s really nuanced, because there’s been a lot of research coming out of the political science world suggesting that echo chambers aren’t quite what we thought they were. It seems that people across the political spectrum are actually exposed to mainstream news quite broadly: liberals see conservative content, and conservatives see liberal content.

But there’s a question of how the material is actually being consumed. You can imagine two scenarios: one in which someone shares something because they found it interesting, and another in which they share it because they’re “trashing” it. Those will produce very different reactions among their ingroup. So the mere fact that people are exposed to opposing views doesn’t mean that exposure isn’t promoting biased cognition.

In a recent paper in Perspectives on Psychological Science, I also discuss how the content itself, that is, moral and emotional language, might shape how people react to outgroup posts. I think a lot more work should be done analyzing what type of content people consume when they read news, rather than just examining what sources the news comes from.

PTI: How does social media contribute to polarization, and to what extent is the polarization we see real disagreement about substantive issues versus just affective polarization?

WB: Actually, I would break down the distinctions you brought up a little differently. First, there’s a distinction between issue polarization, which is disagreement on substantive political issues, and affective polarization, which is an increase in negative feelings toward the outgroup. But there’s also an effect called false polarization: holding inaccurate meta-perceptions of the outgroup. Matthew Levendusky has a paper showing that this can produce a kind of self-fulfilling prophecy, in which those inaccurate meta-perceptions actually make us more polarized, just because of what we imagine the outgroup is thinking.

We’re actually running the second study for a paper now, looking at whether misperceptions of people’s moral emotions on social media can contribute to affective polarization. But with respect to social media’s impact on polarization in general, it’s difficult to know which direction the causation runs. Polarization began before social media, but it’s easy to see how some of these mechanisms could make it worse.

PTI: Finally, I’m curious: has all that you’ve learned in your research changed the way you personally use social media and form moral judgments?

WB: I don’t use social media as a key source of information, just because I know about the biases that are there. It’s very tempting to just get the hot takes from trusted friends, but I think it’s important to take into account sources and their goals, and I didn’t always think about that kind of thing. To be honest, I’ve moved a lot of my conversations to private text groups rather than public social media posts. That doesn’t necessarily make my conclusions any less biased, but it’s something I’ve done.

It’s not that I don’t trust social media at all. It’s just that I’m aware there are competing interests: the content I see reflects the beliefs and preferences of people I know and trust, but it’s also shaped by algorithms that companies design to promote engagement. For better or for worse, I think we should all keep that in mind.

This interview has been edited for length and clarity.
