Five Questions with Kim Bisheff
How does misinformation spread so easily online? A journalism faculty member breaks down the dangers of this modern phenomenon.
A pandemic, natural disasters and widespread protests are all happening across the country right now, and a significant epidemic of misinformation is exacerbating the chaos.
Unequivocally false claims have been swirling across the internet — that antifa protesters are being bussed into towns ahead of protests to loot, that hydroxychloroquine is a cure for the coronavirus — and in many cases, the sheer number of people who believe and spread them only serves to more deeply fracture our shared perceptions of truth and reality.
We asked Kim Bisheff, a lecturer in Cal Poly’s Journalism Department and an expert in how misinformation spreads online, about why online misinformation is so harmful — and what we can do about it.
Here’s an excerpt from that conversation.
What is misinformation?
It’s a really tricky term to define these days because it’s being thrown around so much, but in a general sense, misinformation is false information that is pushed out to the public with the intent to deceive.
Generally, misinformation is designed to tap into our emotional reactions: fear and anger, most frequently. We humans have this design flaw where, when we experience strong emotions — especially fear and anger — or in some cases, that feeling of validation that comes with having your opinions restated in an inflammatory way, we tend to react to that information without giving it much critical thought.
On social media, that means liking things, retweeting, “hearting,” commenting, interacting in any of those ways. When we do that, we amplify that message in a way that in the olden, pre-social media days we didn’t have the power to do.
Here’s an example: a lot of the misinformation around the pandemic is about miracle cures, like gargling with bleach or the whole hydroxychloroquine thing.
When that came out, it tapped into this really strong fear that people had, and in a vacuum of concrete information, they just ran with it. People, even very bright people, were getting hydroxychloroquine and taking it prophylactically even though there has never been any definitive evidence that it’s helpful. Of course, eventually it was shown that not only does it not help, but it can be very harmful.
But in that interim, and even still now, there are memes out there, there are narratives that are being passed on by people who are looking for a solution to this big scary thing that’s out there. They’re going to grab on to any possible solution they can find.
Those repeated messages are super powerful because the more we’re exposed to false information, the more likely we are to accept it as true information.
How does misinformation find a footing in an audience?
It taps into emotional triggers. We’re already on heightened alert right now, and then we see something that seems to legitimize that fear. We share it, we spread it, we send it to friends because we’re worried about them.
We have this feeling of having insider information because it came to us from a friend of a friend. Once someone we know is attached to that piece of misinformation, then it lends it this legitimacy that shuts down our critical thinking even more. So, it continues to spread and spread and spread.
The interesting thing about something like that is once a piece of misinformation has been debunked, the truth really doesn’t spread. That has to do with human nature. The emotions that are involved in debunking are less satisfying than the emotions that are involved in legitimizing our fear and anger.
There can also be a little bit of embarrassment, especially if we realize that we ourselves were suckered into believing something that was false. We consider ourselves intelligent beings and we shouldn’t fall for that stuff. It’s embarrassing! We maybe quietly think, “Oh that wasn’t right,” and we move on. Whereas when we experience the more negative emotions that are triggered by scary content, just as humans, we feel a greater urge to put it out there.
How do you confront a loved one about misinformation that they truly believe, or have shared on social media?
What I’ve learned is that first of all, it’s a very, very, very difficult task. Secondly, once someone has adopted a belief, if you simply present them with facts contrary to that belief, you’re not going to change their mind. They’re more likely to just dig in. The best way to start to change someone’s mind is to find common ground.
In the case of the Plandemic hoax video, for example, you could say something like this: “Wow, I saw that video. It’s really convincing. When I first saw it, I thought, ‘I can see how this makes so much sense.’ I was curious about the woman who was interviewed, so I Googled, and look what I found. What do you think about this?”
When we bring them on that journey of discovery together, it takes the shame and accusation away and makes it more likely that they’ll be receptive to the possibility that their first interpretation was wrong.
How does misinformation impact people, both on social media and in the real world?
I am very concerned that if Americans continue to look at their social media feeds to find out about current events and how the world works, and if they continue to turn away from legitimate sources of news and science information, then when election time comes, we’re going to make some really bad decisions.
It sounds like a conspiracy theory, but there are plenty of people who would like to see the downfall of our democracy.
Part of that game plan is taking advantage of our tendency to react to inflammatory content by spreading it through social networks. We all need to inoculate ourselves against that by being aware that this is a problem that exists.
We need to think critically whenever we are confronted by those scary emotions because of something we saw or read. Of course, in the pandemic era it’s really hard to do that because everything we see is scary.
How should people judge for themselves whether they’re looking at a credible news source?
Mark Zuckerberg is not going to save us; we gotta do this on our own. [Laughs].
When we’re deciding what’s a credible news source, one thing we should look for is whether the stories have bylines. We should then look up the people who published that information across their social channels and see: do they have a journalism background, do they have an advocacy background, who are they?
Another thing to look for is attribution. Responsible, professional journalism attributes its statements to credible sources. Every statement should have clear attribution that helps us understand exactly where the reporter got that information. The word “said” is all over every professional news story. When we’re reading casually, we don’t even notice that word, but if you start looking for it intentionally, you’ll see that after each “said” is a source we can look up and verify independently.
We want our information attributed to primary sources. That means, for example, if it’s something that has to do with a real estate development, then we want to make sure the reporter is talking to the development director and not a random neighbor who’s angry about the impact that development may or may not have on their property value.
The News Literacy Project has fantastic resources for educators and for anyone who wants to improve their news literacy skills. They just launched a podcast called “Is that a fact?”
Another good resource is MediaWise, a program through the Poynter Institute that’s geared toward empowering people to become better consumers of information. Their free, online fact-checking course for first-time voters goes live in October.
Read more of the interview at Cal Poly News.