The Human Error Behind Fake News with David Rand
Intro
In this episode of the podcast, Brooke is joined by David Rand, professor of Management Science and Brain and Cognitive Sciences at MIT. Together, the two explore David’s research on misinformation, trying to understand why people believe fake news, why it is spread in the first place, and what people can do about it. Brooke and David also discuss real-life applications of strategies to prevent misinformation, especially on social media platforms like Twitter and Facebook, as well as in news outlets.
Specific topics include:
- The categories of fake news, including blatant falsehoods, hyperpartisan news, and health misinformation
- The roles that bots, algorithms, and humans play in the dissemination of fake news
- How algorithms fail to analyze why people pay attention to certain information
- The tension between our preferences and our limited cognitive abilities
- How our beliefs can be tied to our social identities
- How media platforms can create healthier ecosystems for information processing
- Platforms’ imperative to be proactive, rather than playing catch up with misinformation
- Whether controlling the spread of misinformation infringes on freedom of speech
The conversation continues
TDL is a socially conscious consulting firm. Our mission is to translate insights from behavioral research into practical, scalable solutions—ones that create better outcomes for everyone.
Key Quotes
On how misinformation is not always driven by malintent:
“People like to differentiate between accidental misinformation … (stuff that people were sharing they thought were true and just incorrectly shared), versus what people call disinformation, which is intentionally false misinformation.”
On how bots can manipulate humans’ understanding of things:
“If you see a post and you see that it’s been liked 500 times, that will make you take it more seriously.”
On why even critical thinkers can fail to think deeply about the media they consume:
“Some interaction between basic human nature and the design of the platforms puts us in a mindset where we’re not thinking carefully and we’re not really engaging deeply with the content while we’re on social media most of the time. You’re scrolling quickly through your news feed and the news is intermixed with baby pictures and cat videos and all other kinds of stuff that’s not getting you in a careful critical thinking mindset.”
On the tension between our preferences and limited attention:
“When you’re scrolling through your feed and a post pops up, you don’t carefully stop and think, ‘Okay, let me think through all of the different relevant dimensions and assess it on all those dimensions.’ But instead, certain dimensions just pop into your attention and those are the ones that you think about … The large majority of people do not want to share content that’s inaccurate, they just forget to actually even do the computation in the first place.”
On the power of crowdsourcing:
“Even though any individual layperson is probably not going to have a great judgment about something, we find that, with between 15 and 20 laypeople reading a headline in feed, the average rating of “How reasonable does this seem?” aligns well with fact-checkers doing detailed research.”
On why publicly correcting misinformation might not improve sharing behaviors:
“[After getting corrected] they’re more likely to share stuff from news outlets that fact-checkers rated as lower quality, that were more extreme in their partisanship and that had more language toxicity in the text … Getting publicly corrected by another person is an extremely social interaction, so it focuses people’s attention again on these social factors. Maybe you feel embarrassed or something like that, so you’re thinking about that and you’re forgetting to think about whether things are true or not when you encounter them next.”
On having trusted sources as messengers of information:
“To the extent that you can find any people on the side that benefit from the misinformation who are willing to speak against it, that can be a powerful tool.”
On modifying platform design for healthier information processing:
“Essentially, just anything that mentions the concept of accuracy or that activates the concept of accuracy in people’s minds, shifts their attention towards accuracy and therefore makes them more discerning in their subsequent sharing decisions.”
On cultivating a mindset to keep us aware:
“Try and cultivate a mindset that you could think about as motivation. ‘This environment is out to get me,’ in some sense. ‘This is set up to trick me into sharing things that I don’t actually want to share, so let me not be taken advantage of,’ and ‘Let me try and have a mindset to say, ‘Let me stop and think before I share so that I don’t make choices that I would regret.’’
Transcript
Brooke Struck: Hello everyone and welcome to the podcast of The Decision Lab, a socially conscious applied research firm that uses behavioral science to improve outcomes for all of society. My name is Brooke Struck, Research Director at TDL and I’ll be your host for the discussion. My guest today is David Rand, Associate Professor of Management Science and Brain and Cognitive Sciences at MIT.
In today’s episode, we’ll be talking about fake news on social media, what it is, how it spreads, and what we can do about it. Dave, thanks for joining us.
David Rand: Thanks so much for having me, excited to be here.
Brooke Struck: So tell us a little bit about yourself and what you’re up to these days.
David Rand: So I’m a professor at MIT in the Sloan School of Management and the Department of Brain and Cognitive Sciences. With a bunch of collaborators, most centrally Gordon Pennycook at the University of Regina, we’ve been doing a lot of research over the last five years or so on misinformation, trying to understand why people believe falsehoods, why people fail to tell true from false, and why people spread this kind of content. Then also, more importantly, what we can do about it.
We’re academics: we do research and we publish papers and stuff like that, but we also try to really apply what we’re learning, so we work with tech companies and social media platforms to try to get some of these ideas tested on platforms. We also work with some philanthropic organizations to try and implement some of these ideas beyond the platforms.
Brooke Struck: Very cool, and obviously very timely as well. Fake news on social media has been a topic of discussion for a number of years now, and it sounds like you were really ahead of the curve when you got into it. Can you help us dissect what different kinds of fake news are out there?
David Rand: Yeah, so when people say fake news, there’s a lot of different things that go under that bucket. I always start out my talks with a definition: when I say fake news, I don’t mean anything that I disagree with, because that is some people’s operating definition. But under the broader umbrella of misinformation, I think one category of things we call fake news are these entirely fabricated, totally made up stories that are presented as if they’re real news. So this is the “Pope endorses Donald Trump” type of blatant falsehood.
There’s also what we call hyperpartisan news. This is stuff that’s not blatantly false – it’s not totally made up, but it’s really misleading coverage of events that did actually happen. I think that’s probably much more widespread than the blatant falsehoods, and it’s a really big problem because it can be quite misleading. Then there’s also health misinformation, which people have been talking about for a long time, but with COVID it really came to the forefront. There’s all kinds of really blatant misinformation about COVID that got a lot of traction and has really had an impact on people’s attitudes and behaviors.
So I think of these as three different kinds of misinformation that we have been engaging a lot with. Then another distinction that comes up a lot in the discussion of misinformation is, people like to differentiate between accidental misinformation (stuff that people were sharing they thought were true and just incorrectly shared), versus what people call disinformation, which is intentionally false misinformation (people intentionally trying to mislead by putting things out that they know are false and misleading).
Brooke Struck: Right. So we’ve got two axes here. One is the intention of the sharer, either inadvertent sharing or very tactical and selective sharing of false information. Then the other axis is the content of the information itself, from garden variety, like sharing something that might misrepresent the broader body of evidence, all the way to something that is intentionally constructed to be false and really predicated on the purpose of misleading people.
So in these conversations, I’ve heard a lot about different drivers of what makes fake news, disinformation or misinformation circulate so efficiently. Three ideas come up again and again: first, bots; second, algorithms; and third, us, human beings. Can you help us understand the role that each of those different actors plays in the rapid dissemination of fake news?
David Rand: Because of the way social media works, all three of these actors are very much intertwined, in that the algorithms determine what content any individual person sees. So clearly, if the algorithm never showed you any misinformation, then there would be no problem. Or conversely, if the algorithm really prioritizes misinformation – for example, because misinformation gets more clicks and more engagement, and what the algorithm is doing is maximizing engagement – then it’s going to serve up this really engagement-provoking content.
So clearly the algorithm is playing an important role in the amount of exposure that there is in the first place. Then the way that bots can potentially influence things is, first of all, they can post the content. Also, they can engage with the content. If they like it and share it, that can tell the algorithm, “Oh, this is something that people like.” So, that can sort of manipulate what content the algorithm is showing. Also, for humans, it can manipulate humans’ understanding of things. If you see a post and you see that it’s been liked 500 times, that will make you take it more seriously. There’s research showing that people are more likely to believe and share content that has a lot of likes and prior shares, so that creates the false social signal.
In terms of the “Oh, bots are posting all this content” part, I actually think that’s likely to be much less of a problem than some reports might make you believe. There’s been some research suggesting a really large fraction of the misinformation about COVID was posted by bots, but those analyses typically don’t weight the content by how much exposure it got, because most of the bots don’t have any followers. So it doesn’t matter much if a bot is posting something: if a bot posts on Twitter to no followers, it doesn’t really make a sound.
I think a lot of where the bots are potentially playing a role is not so much in sharing content themselves, but in creating these false signals for both the algorithms and the humans. Then this brings us to the human part, where there are a bunch of different issues with humans. We’re complicated. Part of it is that people find content that is emotionally evocative really engaging. That makes them more likely to engage with it and more likely to share it, which then feeds the algorithm saying, “Oh, this is what people want,” so then it gives them more.
Also, some interaction between basic human nature and the design of the platforms puts us in a mindset where we’re not thinking carefully and we’re not really engaging deeply with the content while we’re on social media most of the time. You’re scrolling quickly through your news feed and the news is intermixed with baby pictures and cat videos and all other kinds of stuff that’s not getting you in a careful critical thinking mindset. Then, because the platforms provide all the social feedback – this very quantified social feedback – I think it focuses people’s attention on these kinds of social factors and away from accuracy.
So even if they’re someone who, in a work context, would be a careful critical thinker and would be very careful to not share inaccurate information, you can actually wind up sharing inaccuracies on social media by accident, because you basically just forget to stop and think about whether it’s accurate or not. Even I, as someone who spends all my time doing research on this stuff, have actually done exactly that. I’ve shared something and then a couple hours later somebody responded to me, “Hey, is this actually true?” I was like, “Ahhh,” I can’t believe that I even fell into that trap.
So I don’t know whether you want to call that being a problem of humans or that being a problem of the ecosystem that is created by the platforms. But there’s some interaction between them.
Brooke Struck: Yeah. You mentioned something interesting there, that humans are complex beings, and what the algorithms are looking to promote and to pump up is stuff that engages us, stuff that we like.
David Rand: Or actually not even necessarily like. It could be that you hate the stuff that you click on.
Brooke Struck: Yeah. So that’s, I think, exactly the direction that I want to go here. In reducing everything to this single dimension of do you click on it or not, essentially what is happening with the algorithm is all of the complexity of our human drives is reduced to this single dimension. Sometimes I click on stuff because I like it. Sometimes I click on stuff because I hate it. Sometimes I click on stuff just because I really don’t understand it, but there’s something curious in the way it’s phrased that piques my curiosity.
I don’t necessarily eat all the foods on my plate for the same reason either. There are some things that I eat just because I know that they’re good for me and other things that I eat because I take pleasure in eating them, despite the fact that I know they’re materially bad for my health. What ends up happening here is, we reduce the multidimensionality and the complexity of human existence down to a single dimension. So in the social media ecosystem, what are some of those dimensions that are getting collapsed? You mentioned loving/hating content. What else is there?
David Rand: I think that, as you were saying, essentially everything is getting collapsed down into just, do you spend time looking at this piece of content or do you go on to the next piece of content? So you could be looking at it because you think it’s true and it’s interesting. You could be looking at it because you think it’s crazy and you’re trying to figure out what’s going on. You could be looking at it because it’s emotionally evocative. You could be looking at it because it’s just really eye grabbing in some perceptual sense. You could be looking at it because you like it, you could be looking at it because you hate it, and so on and so forth.
In terms of collapsing everything down to one dimension, I think a lot of the reason for that is, social media platforms are not public goods. They’re businesses. Their goal essentially is to get you to spend as much time looking at their page as possible, so that they can serve you as many ads as possible. So I think it actually makes a lot of sense from a business perspective that they’re collapsing, because their primary goal is not promoting human flourishing or whatever. Their primary goal is getting you to click on ads.
Brooke Struck: Right.
David Rand: Basically, they want you to click on ads on their platform. So they don’t want you to say, “Well okay, I’ve had enough of Twitter today, I’m going to go do something else.” That’s a bad outcome.
Brooke Struck: Right.
David Rand: They want you to be like, “Oh my God, I can’t believe that guy just posted that thing. Let me look.”
Brooke Struck: Right.
David Rand: An interesting aspect of this is that people might say, “People don’t pay attention to accuracy, but that’s just human nature and there’s nothing you can do about it. What do you want the platform to do?” But one thing that is definitely part of human nature is that nobody likes paying attention to ads. Nobody wants ads in their feed, and yet the whole platform business model is predicated on getting people to pay attention to ads, despite the fact that they don’t want to. So basically what we have been arguing is, the platforms are creating this environment where people are not paying attention to accuracy, but they could use some of the muscle that they normally use to get people to pay attention to ads to instead get people to pay attention to accuracy.
They have control over the ecosystem, so they can do things to redirect people’s attention where they want to. They just, right now, don’t have any incentive to get people to pay attention to anything other than the ads.
Brooke Struck: Right. So it’s interesting that you mention accuracy as one of the portfolio of things that we’re looking for in content. It’s not without value. We certainly seek accuracy as one of those things, but it’s not necessarily the overriding value in terms of the way that we engage with content. So if I understand you correctly, what you’re saying is, the way that the ecosystem is designed can help to promote certain categories of wants at the expense of others, or some kind of blend of the different kinds of things that we want.
David Rand: Yeah, we think about this through a sort of attentional perspective, which is, you can say that there are two fundamentally different things that go into decision making. One is what we’ll call your preferences. That is, if you were to think carefully about something, how much do you care about all the different dimensions? Any particular choice has a bunch of different dimensions, so in this case it might be, you’re trying to decide whether to share this post in front of you, or retweet it. One dimension might be, how accurate do you think it is? Another is, does it align with your politics? Another is, how funny is it?
Each of these are things that you care about to some extent. So your preference would be, if you stopped and carefully thought about all of those dimensions, how much would you value each one? At least if you ask people that question, people will overwhelmingly say that accuracy is as important as or more important than everything else. That is to say, people say they would not share content that they knew was inaccurate. Yet those very same people in our experiments will have, a minute later or a minute before, said that they would share lots of false things.
Our explanation for what’s going on is that, in addition to this preference business, there’s also a separate dimension of just attention, which is, what are you paying attention to? We have cognitive constraints, we have limited attention, so we can’t think about everything at once, particularly in a social media context. So when you’re scrolling through your feed and a post pops up, you don’t carefully stop and think, “Okay, let me think through all of the different relevant dimensions and assess it on all those dimensions.” But instead, certain dimensions just pop into your attention and those are the ones that you think about.
So our argument is, even though the large majority of people do not want to share content that’s inaccurate, they just forget to actually even do the computation in the first place of, is this accurate or not? Instead, they think about, “Oh man, that’s so good! Retweet.” The platforms have a lot of control over what dimensions pop to mind.
Brooke Struck: Right. I like this shift from – or at least the differentiation between – the more reflective, circumspect approach versus this more intuitive and rapid approach. So pivoting into that and thinking about our intuitive and rapid responses, especially around accuracy, I think a lot of us have this intuition that, if we see something that’s incorrect on the internet and we notify the person who posted it, “Hey, by the way, this thing that you said is blatantly false,” that this will have some kind of corrective effect. That the person who posted that thing will go and look back at it and say, “Oh my gosh, you’re right. I did post something that’s false. I should retract.”
But in fact, it turns out not to be the case. Can you tell us a little bit about that? What happens when we go out and try to correct the material falsities out there on the internet?
David Rand: Correction is a really interesting thing to think about. It is, by far, the dominant approach in terms of combating misinformation. It’s the thing that’s gotten the largest number of studies for the longest time. You can either think about this as directly correcting someone, like what you’re talking about. Someone posts something false and you’re responding like, “Hey, that’s not true.” Or you can also think about it from the perspective of the social media platforms. They could provide corrective information afterwards or they could put warning labels on content when you’re exposed to it. For example, saying “Fact checkers have said this is false” or things like that.
In terms of the ability to actually correct the wrong beliefs, I think the overwhelming majority of the evidence suggests that corrections are actually helpful, in the sense that, if you present someone with a piece of misinformation and then afterwards you’re like, “Hey, that thing I told you or that thing you saw, that’s not true,” that does tend to correct people’s beliefs. There’s very little evidence that it causes people to backfire and dig in. On social media, if I see that you posted something false and I respond saying, “Hey, that’s not right,” in addition to potentially correcting your beliefs, if all the observers – all of my followers, anybody that would have seen it – then see the correction, there’s some evidence that it makes their beliefs more accurate.
So I think, from just correcting the factual belief perspective, corrections do have promise. But there are a couple of problems – I’d say at least three problems with corrections. The first one is just from a scalability perspective. It’s much easier to create falsehoods than it is to fact-check them. So to the extent that corrections or warnings are relying on professional fact checks, just a very small fraction of the total number of claims ever gets fact-checked. Even when they do get fact-checked, it’s slow: during the peak virality spreading phase, there’s probably not going to be a correction yet.
So one thing that we’ve actually been working on with both Facebook and Twitter is finding ways to scale the identification of misinformation by using crowdsourcing. It turns out that via wisdom-of-crowds-type principles, even though any individual layperson is probably not going to have a great judgment about something, we find that, with between 15 and 20 laypeople reading a headline in feed, the average rating of “How reasonable does this seem?” aligns well with fact-checkers doing detailed research. So I think that the scalability problem has some chance of getting solved, but it is a really fundamental issue.
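To make the aggregation idea concrete, here is a minimal illustrative sketch – not code from the study – of how small-crowd ratings of headlines could be averaged and compared against fact-checker ratings. The data structures, rating scale, crowd size, and correlation measure are all assumptions for illustration.

```python
# Illustrative sketch only: average small-crowd ratings per headline and check
# how well those averages track fact-checker ratings. All names, scales, and
# numbers here are hypothetical, not taken from the study discussed above.
from random import sample
from statistics import mean


def crowd_scores(lay_ratings: dict[str, list[float]], crowd_size: int = 20) -> dict[str, float]:
    """Average the ratings of a randomly drawn lay crowd for each headline."""
    return {
        headline: mean(sample(ratings, min(crowd_size, len(ratings))))
        for headline, ratings in lay_ratings.items()
    }


def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between crowd averages and fact-checker ratings."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical usage: lay ratings on a 1-7 "how reasonable does this seem?" scale,
# compared against fact-checker ratings of the same headlines.
lay = {
    "headline_a": [6, 5, 7, 6, 5, 6],
    "headline_b": [2, 1, 3, 2, 2, 1],
    "headline_c": [4, 5, 4, 3, 5, 4],
}
checkers = {"headline_a": 6.5, "headline_b": 1.5, "headline_c": 4.0}
crowd = crowd_scores(lay)
order = sorted(crowd)
print(pearson([crowd[h] for h in order], [checkers[h] for h in order]))
```

The point of the sketch is simply that the per-headline signal comes from averaging many noisy individual judgments, which is why even fairly small crowds can line up well with detailed professional fact-checks.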
Another big issue is that there’s evidence that corrections – although they fix people’s factual beliefs – don’t necessarily change the underlying attitudes, which are what we really care about. For example, there’s a cool study showing that, if you show people a lie that a politician told, and then you say, “Hey, that thing the politician said wasn’t true,” then people will update and be like, “Okay, I guess that wasn’t true, but I don’t like the politician any less.” So that’s another issue with corrections: they may not affect the thing you care about.
Then the third issue is something we just uncovered in a recent study where we did a field experiment on Twitter.
We found 2000 people that shared articles that had been debunked by Snopes and then we created accounts that looked like human accounts, and we responded and said, “Hey, that’s probably not true. I found this link on Snopes,” and then linked to the correction. What we were looking at in that study, was not their belief in the focal claim, because we don’t really have a way of assessing that on Twitter. Instead, what we wanted to look at was, what is the effect of getting corrected on your subsequent behavior? What we were hoping is that getting corrected would make people then post better content afterwards because they would be more likely to think, “Oh shoot, I messed up and I shouldn’t have posted that. Let me clean up my act.”
But in fact, what we found was the opposite. After people got corrected, the quality of the content that they retweeted in that low attention state went down and the partisan extremity [went up]. They’re more likely to share stuff from news outlets that fact-checkers rated as lower quality, that were more extreme in their partisanship and that had more language toxicity in the text. Our interpretation of this is that getting publicly corrected by another person is an extremely social interaction, so it focuses people’s attention again on these social factors.
Maybe you feel embarrassed or something like that, so you’re thinking about that and you’re forgetting to think about whether things are true or not when you encounter them next.
Brooke Struck: Right. So let’s bring this into a really familiar, intuitive, gut feeling instance of this. Everyone’s got that uncle who arrives at family dinners and has the latest, greatest story about how it is that Bill Gates and Huawei are trying to inject us all with tracking particles and all these kinds of things. What is the best way to engage with that person? Let’s start away from the dinner table and think about social media.
The reason I bring up the dinner table is to evoke that gut feeling, because everyone knows who that person is in their family or circle of friends. So when that person does that thing that they do online, that we all can identify very easily, what should we do? You mentioned correction shows some promise. Is there a proper way or a best way to present corrective information to be most effective, or is there a different strategy?
David Rand: Yeah, it’s a great question. I would say that this is, in general, an ongoing area of research where I don’t think there is any slam dunk answer yet. There’s a lot of people looking at it. There are lots of things that people have proposed, and some contradictory findings and things like that, but I think in general, giving detailed, evidence-based corrections is more effective than just saying, “No, that’s wrong and you’re an idiot.” Something that is much easier to do around the dinner table than on Twitter is, instead of just saying “No, that’s wrong, here’s what you should really believe,” to ask, “Why do you believe that? Where did you get that information from?”
Approach them not in a confrontational way, but like, “Oh, that’s interesting. Tell me more about that.” Then help them walk through, “Well, does that really make sense?”
Brooke Struck: Yeah, it’s interesting that you bring that up because that is in one way still this very epistemological perspective. The problem you’re looking to solve – for want of a better formulation – is essentially a problem about facts and about beliefs and about accuracy. Whereas, in fact, what seems to be running through our entire conversation is that there’s this blend, this mix of the epistemic stuff that’s going on, along with a whole bunch of social stuff and group membership and sense of identity and that kind of thing.
So is there something else out there in the ecosystem? I have in mind, for instance, people don’t believe things for no reason. They have epistemic reasons, but they also have social reasons. So for instance, if we think about anti-vaxxers, one of the challenges that I think a lot of anti-vaxxers face when they’re making a choice about what they feel about evidence that they’re presented with, is that a lot hangs on that decision for them. If they are really and truly entrenched in a community of anti-vaxxers, then changing your beliefs has effects on who you might be able to be friends with or who you anticipate you might be able to be friends with.
So is there something more there, more than just, “Where did you get this information?” But also, “Why is it so important to you to believe this?”
David Rand: It’s really interesting, this question of the extent to which people believe things because they think they’re true versus believe things because they’re socially useful. I would say that this is another place where there is ongoing academic debate, and Gord and I argue for maybe one of the less popular positions currently, which is that there’s much less motivated reasoning going on than you might think. There is this idea – one that seemed sort of radical when it was proposed, but has now really taken deep hold – called identity-protective cognition, or motivated System 2 reasoning.
The idea is that when you engage in thinking – in general, or at least in contentious contexts – you’re not thinking about what’s really true or not. Instead you’re using your reasoning abilities to protect your identity: to discount evidence that is bad for your identity and to accept evidence that is good for your identity, regardless of whether it’s actually true or not.
We’ve done a bunch of studies in the last few years that I think really question this account and suggest that, in general, when people think more, that does lead them to having more accurate beliefs. But in particular, it’s not actually that it makes people have more objectively accurate beliefs because in general, we don’t have direct access to objective truth.
Instead, if you’re a rational Bayesian doing your best to have correct beliefs and not having any motivation other than truth seeking – when you get a new piece of information, what do you do? You kind of think, “Well, how well does it fit with everything else that I know about the world? Do I think that this information is right, or do I think that the source that told me this information is unreliable?” What that means is that a lot of times, if you get a piece of information that contradicts your factual beliefs about the world, it can actually be rational to say, “Oh, it’s more likely that the source is wrong than that everything I know about the world is wrong.”
That doesn’t necessarily mean you’re engaging in motivated reasoning and you’re putting social factors first, but it’s that the stream of information that you’re exposed to shapes your understanding of the world. Basically, once you go down a rabbit hole of sort of bad information, it’s hard to get out. Not because you’re motivated, necessarily, but rather because it’s just really corrupted your basic beliefs about the world.
Brooke Struck: Yeah. So even a rational Bayesian who starts off with a huge pile of really bad data is going to have trouble correcting their beliefs when confronted with better data.
David Rand: Right. But the implication of this is that, if the problem is just basically you’ve got lots of bad data, rather than you are motivated to reject anything that’s inconvenient, then it means that, if you give enough good data, that can correct things. If it’s all about identity, then it doesn’t matter how much good data you give because they’re just going to throw out the data that they don’t like.
Brooke Struck: Right.
David Rand: But if it’s about shifting beliefs, then enough good data can make a difference. Obviously social stuff matters, and I think that when information is presented in a really aggressive way, it can make people tune it out. Maybe part of that actually is because people use the aggressiveness as a signal of source unreliability. There’s this balancing of, “How seriously I take this information depends on how credible I think the source is,” so that’s why things like having trusted sources as messengers can be really powerful.
Part of that would be, for example, if you’re talking about Republicans in the US believing that the 2020 election was stolen. If a Democrat says the election wasn’t stolen, that has very little credibility, but if you get Republicans to say the election wasn’t stolen, that is more impactful, both because Republicans tend to trust Republicans more and because those Republicans are speaking against their interest. There’s some cool research showing that people who speak against their interest are seen as particularly credible. This was a while ago, when people were looking at the Obama death panels misinformation. They showed that a Republican saying, “No, there are no death panels,” was more convincing to both Republicans and Democrats than a Democrat saying “There are no death panels,” because they were speaking against their interest.
Brooke Struck: Right.
David Rand: So I feel like one really important element of all of this is, to the extent that you can find any people on the side that benefit from the misinformation who are willing to speak against it, that can be a powerful tool.
Brooke Struck: Mm-hmm. We started out the conversation distinguishing between different kinds of fake news: misinformation, disinformation, and straight-up fabricated news. I wonder whether there is something relevant there that we can map onto this. You’re saying that, at least in the research you’re doing and the camp you participate in right now, epistemic factors actually turn out to be more important than social factors.
I wonder whether that’s something that we see in the mainstream, that kind of mushy middle where people really are looking to find new information and update their beliefs. Then identity-protective cognition becomes more prevalent in those extremes, where what people are looking for is something different. They’re not looking to update their beliefs. They have much less uncertainty about what they believe and they’re really, therefore, more polarized along those social lines.
But I want to shift gears a little bit now and talk about some of the research you’ve done about what can be done on the platforms and how the platform design can be modified in such a way to create ecosystems that are healthier from an information processing perspective. What is it that the platforms can do to tidy up the information ecosystem and bring accuracy a little bit closer to the top of the agenda?
David Rand: This is what we’ve been doing a lot of research on. I think in some sense the answer is encouraging, which is that there are lots of easy things that platforms can do. Essentially, anything that mentions the concept of accuracy, or that activates the concept of accuracy in people’s minds, shifts attention towards accuracy and therefore makes people more discerning in their subsequent sharing decisions. We have a bunch of papers now in which we’ve investigated various different ways of doing this. We’ve found, for example, that you can just show people a random post and say, “How accurate do you think this is?” – or, if you’re a platform, “Help us inform our algorithms. Do you think this is true or not?”
Even if you threw away the responses, it would make people more discerning in their subsequent sharing, because just asking the question makes accuracy top-of-mind. In addition to all our survey experiments, we did a field experiment on Twitter where we sent a message to over 5000 users that had been sharing links to Breitbart and Infowars, showed them a random headline and said, “Hey, how accurate is this? I’m doing a survey to find out.” Basically nobody responded to the survey, but just reading the question activated accuracy, and we showed that it significantly increased the quality of the news that they shared afterwards.
Plus, as I was mentioning earlier, we’ve been doing a lot of work on crowdsourcing that suggests that platforms shouldn’t actually throw away the answers, but this intervention kills two birds with one stone. It gets people to pay attention to accuracy and it generates useful signals for actually determining what’s true and what’s not by aggregating the responses. So I think that that’s great, but given that all of this is about attention, another issue is, you can’t just do one thing because people will start ignoring it. Twitter implemented this policy of, when you go to retweet something that you haven’t read, a link that you haven’t read, it says, “Hey, do you want to read this first?”
When they first implemented that, I thought it had a big effect because people were like, “Whoa.” But now it’s just like, “Yeah yeah, okay, whatever. I saw that already,” and they click through it without thinking twice about it. So they need to really keep mixing it up. You can do that in trivial ways by changing the way it looks and changing the format and so on, but also we want to know, what is the larger menu of things that platforms can do to prompt people to think about accuracy? We found that providing minimal digital literacy tips also helps – just something that says “Think critically about the news,” “Check the source,” “Look for unusual formatting,” things like that. I don’t think that’s really teaching anyone anything they didn’t know, but just providing the tips reminds them, essentially, “Oh yeah, right. I should think about whether this stuff that I’m reading is true or not.”
Or we found that if you just ask people, “How important is it to you to only share accurate content?” people overwhelmingly say that it’s really important, and then afterwards they’re like, “Oh yeah, well I guess I should now do that.” Essentially all of the different approaches that we looked at were more or less equally effective, so I think the point is that there’s not some particular magical way of doing it: anything the platforms can do, as part of the basic experience, to make people think about accuracy will help.
Brooke Struck: So, that’s really at the individual level. Let’s back up just one sec. It’s an intervention that the platform can implement, but really the site of the intervention is the individual. You’re relying on the individual putting in a little bit more cognitive labor and giving a little bit more attention to the accuracy of the content that they’re seeing and potentially resharing or engaging with.
One of the papers of yours that I read, that I found to be really interesting, was thinking not necessarily just about the individuals and not necessarily just about individual pieces of news, but thinking about platforms as a whole.
I’m sorry – when I say platforms, what I mean here is news providers and newspapers. That we would not only be assessing the accuracy of individual pieces of news that they put out, which then get shared on social media, but also that, in the aggregate, their portfolio would get assessed for trustworthiness, and that might have some input into the algorithm. Can you tell us a little bit about that work that you’ve done?
David Rand: Yes. I think that this is a really important point and something that is really critical for platforms to be doing. One thing that is good, in my opinion, is keeping track of which pieces of content are false or misleading and putting warnings on those pieces of content, or just down-ranking those pieces of content so people are less likely to see them. But that’s always playing catch up. It’s always coming after something gets posted; then you have to do some kind of fact check on it or rely on your machine learning algorithms that are probably not that great.
What you really want to do is get out ahead of things. One way that you can do that is by taking advantage of the fact that lots of studies show that the large majority of misinformation is posted by a fairly small number of accounts. That is to say, people are repeat offenders, or publishers are repeat offenders. So what you can do is create account-level scores of, basically, “How much misinformation has this account posted?” – and really, “How much have they posted recently?” – so that they have some incentive to improve their behavior and get back off a red list.
But basically what I’d say is, the more bad stuff an account posts, the more everything that that account posts should get demoted. That way you’re not just playing catch up; you’re saying ahead of time, “I don’t have to check every single post. I should still do it, but while I’m waiting to check all of the posts, I can just preferentially say that probably a lot of this stuff is not going to be good.” We have a result in one of our papers that shows that this can be quite effective, even just using laypeople to rate the headlines. We had people rate the 10 most popular headlines from a given news site from the previous year. Then we took the average of those to create a site-level average. The ratings that we came up with in that way were highly correlated with the detailed site-level research that professionals had done.
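As a rough illustration of the account-level (or outlet-level) scoring idea described above – not the platforms’ actual ranking logic, and not the method from the paper – a recency-weighted quality score and demotion factor might look like the following sketch. The field names, half-life, and demotion curve are all assumptions.

```python
# Illustrative sketch only, not any platform's real ranking logic: a
# recency-weighted, account-level quality score used to demote everything an
# account posts. Field names, the half-life, and the demotion curve are assumed.
import time

HALF_LIFE_DAYS = 30.0  # assumed: how quickly older offenses stop counting


def account_quality(posts: list[dict], now: float | None = None) -> float:
    """Recency-weighted share of an account's posts that were rated accurate.

    Each post is a dict like {"rated_accurate": bool, "timestamp": float}.
    Returns a score in [0, 1]; 1.0 means no recent misinformation on record,
    so accounts can work their way back off the "red list" over time.
    """
    now = now or time.time()
    weighted_good = weighted_total = 0.0
    for post in posts:
        age_days = (now - post["timestamp"]) / 86400
        weight = 0.5 ** (age_days / HALF_LIFE_DAYS)  # exponential decay with age
        weighted_total += weight
        weighted_good += weight * post["rated_accurate"]
    return weighted_good / weighted_total if weighted_total else 1.0


def ranking_multiplier(quality: float) -> float:
    """Demote everything an account posts in proportion to its track record."""
    return quality ** 2  # assumed curve: repeat offenders fall off quickly
```

The same averaging logic could sit at the outlet level, with lay ratings of an outlet’s most popular headlines standing in for the per-post accuracy labels.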
Brooke Struck: Right. So that’s a really interesting approach and I like that it’s more proactive in getting us away from so much of the reactive approaches that are out there. One of the questions that I have, one of the worries that I have, is that I see a lot of people pushing back against this, saying, “Once you start choosing who is allowed to speak and who is not, you’re infringing on the freedom of speech, and essentially what you’re going to do is just marginalize any dissenting views and no one is going to be allowed to say anything other than the party line.”
Based on the research that you’ve done using real people to go through this kind of assessment, how much do you think we should be worried about concerns like that? How difficult is it to pull together a crowd that is representative of the entire population and actually allows for those kinds of dissenting views to come to the light of day and not be quashed in the way that some people are worried about?
David Rand: I think there are two different things to tease apart here. One is the question of, is there bias in the person or the people deciding what’s true and what’s not? I think that’s where a lot of the complaint – particularly from people on the right in the US – comes from. They say, “Oh, platforms and professional fact-checkers have a liberal bias, so they’re discriminating against all the conservative stuff and that’s not fair.” I think you can completely get around that by using representative crowds. We’ve now found in a bunch of papers that you can use either a representative crowd, or go a step further and create a politically balanced crowd – which is actually a somewhat more conservative crowd than a representative crowd would be, because in terms of population, the average American is slightly Democrat-leaning.
So we created crowds that are half Democrat, half Republican, and also representative of age and gender and geographic region, and all that. We find that we get extremely good agreement between those crowds and the fact-checkers, even with fairly small crowds. I think using crowdsourcing completely takes away this complaint of, “Oh, the conservatives are being discriminated against.”
But there’s a second problem that the crowd approach – at least in its basic implementation like that – doesn’t deal with, which is the quashing of dissenting views. I think in the US it’s not so much of an issue, necessarily, because at least on the Democrat-Republican split, it’s pretty equally balanced. But there are contexts where you have a majority and a fairly small minority – which could be around race in the US, or in a lot of places around the world where there are very clearly oppressed ethnic or political minority groups.
There you have the danger of the majority crowd certifying falsehoods about the minority group. I think that’s an ongoing challenge in the crowdsourcing research. How do you design a platform, or how do you design a crowdsourcing approach that protects minorities in that way?
Brooke Struck: Really interesting. So ultimately, one of the big worries around fake news and echo chambers is that they move us away from a world of shared facts. In a somewhat trivial way, it’s okay for us to disagree on our values. It’s also probably okay for us to disagree about a lot of evidence that’s emerging about which there is no kind of clear answer. But there also should be some body of facts that we all share and that none of us dispute. There should be some kind of common epistemic anchor that all of us can rally around so that we can have meaningful conversations with each other and just get those conversations off the ground.
What we’re seeing now with this kind of polarization – and fake news is a great example of that – is that we get hived off into these small groups who have much less to do with each other. We move from the idea of broadcasting to narrowcasting. How do you see these nudges? And perhaps the one at the publisher level is the most powerful here – how do you see these nudges not only taking care of the very visible problem of fake news and disinformation, but this slightly deeper problem of epistemic disconnects between communities?
David Rand: I think that they are, in some sense, separate issues. If what you’re doing is just saying, “I’m going to downrank content, I’m going to make it so people don’t see content from outlets that are sharing a lot of misinformation,” that in and of itself is not going to deal with the problem that people in different partisan communities are exposed to different information streams.
Except if there’s a strong correlation between partisanship and misinformation. For example, evidence suggests that in the US, a large chunk of conservatives only get their news from Fox News or more extreme right-wing sites, whereas on the left, there’s a much wider array of sites that vary in how partisan they are. So if you wound up in a situation where Fox’s content was getting demoted because they were sharing a lot of misinformation, then that might actually lead to conservatives seeing more center or left-leaning content, just because Fox is responsible for so much of the right-leaning content. In general, I think this is not so much a solution to that problem. But again, if the platforms wanted to, they could try to create a more balanced media diet for people.
But I think it’s actually much trickier than it might seem. There’s sort of conflicting research out there on the impacts of increasing people’s exposure to counter-attitudinal information. There was one study that paid, I think it was Twitter users, to follow a bunch of talking heads from the opposite party. That actually made them more entrenched in their policy positions. Then there was another study that got Facebook users to subscribe to a channel that showed news articles from news sites on the other side of the aisle. That didn’t have any effect on people’s policy opinions, but it did make them, on a personal level, like people from the other side more – I think presumably because you could understand why people had differing beliefs if that’s the kind of stuff they were seeing. Also, it didn’t run for that long. It could be that – because they found that people did really engage a lot, and engage positively, with counter-attitudinal news – over the long run, that might have a positive effect.
The point is just, all this stuff is super complicated. So rather than being like, “Oh, well this is an obvious thing that platforms should do. They should do it,” I think what my strong position on this is that platforms need to be doing experiments and testing this stuff. What I would love to see Facebook do, for example, is to run their own experiment where they show counter attitudinal news to a subset of people and look at the effect that it has, rather than either saying “Well, we’re not going to do it because we don’t know what’s best,” or saying “Well we’re just going to do it because it seems like a good idea.”
Neither of those are reasonable. Instead, you have the ability to do massive experiments trivially. You should do lots of experiments and testing things and seeing what really works.
Brooke Struck: Right. So trying to tie this up a little bit with some practical advice and some tangible things we can do, it sounds like your advice for the platform is pretty clear. You think that they should be running experiments on these kinds of things.
David Rand: Particularly, there are a bunch of ideas out there, proposals that people are making, and they should take those proposals and think about, “How would this work on our platform?” and, “Let’s test a bunch of implementations and see if we find things that seem like they work.”
Brooke Struck: Yeah, and part of that is an injunction for them to care about the outcomes as well. This isn’t necessarily going to promote clicks in the short term, yet they should care about it enough to be experimenting with it anyway.
David Rand: Right.
Brooke Struck: So at the platform level, that seems like pretty concrete advice. At the individual level, something you mentioned earlier is: correct, but do so gently and with curiosity. Like, “Oh, that’s surprising. Where did that information come from? I haven’t heard that before.”
David Rand: Totally.
Brooke Struck: Maybe there are some really low tech hacks that we can apply to our own brains to raise the saliency of accuracy. So for instance, just putting a little sticky note on the monitor of your laptop before you go onto Twitter, and also acknowledging that that’s probably going to stop working pretty quickly. Maybe two or three Twitter visits that sticky are still salient, but then it’s just going to fade into the back of your mind. You’ll never see that thing again, despite the fact it stares you in the face.
David Rand: Right. That’s right. I basically want to try and cultivate a mindset that you could think about as motivation: “This environment is out to get me,” in some sense. “This is set up to trick me into sharing things that I don’t actually want to share, so let me not be taken advantage of,” and “Let me try and have a mindset to say, ‘Let me stop and think before I share so that I don’t make choices that I would regret.’”
The hope is, if you do that enough, it can become a habit and then it becomes automatized.
Brooke Struck: That’s great. Thanks a lot, Dave. I really appreciate this.
David Rand: Thanks so much. This was a lot of fun.
We want to hear from you! If you are enjoying these podcasts, please let us know. Email our editor with your comments, suggestions, recommendations, and thoughts about the discussion.