Q&A

FactCheck.org’s Kathleen Hall Jamieson on the Golden Age of Misinformation

In a year featuring both a presidential election and a pandemic, getting your facts straight has never been more important.



Kathleen Hall Jamieson is on the front lines combating misinformation. Photograph by Eric Sucar

At a time when we’re surrounded by both more information and less certainty about its accuracy than ever, Kathleen Hall Jamieson is on the front lines. Director of the Annenberg Public Policy Center at the University of Pennsylvania, co-founder of FactCheck.org, and winner of the National Academy of Sciences’ 2020 Public Welfare Medal, the Minneapolis native can teach all of us a thing or two about getting our facts straight.

Philadelphia: Right now, in an election year, amid a pandemic, there’s tons of misinformation around. I open my Facebook account and immediately see things that contradict what I’ve read or seen.
Jamieson: When you see those, if they’re politically based, you should send them to FactCheck.org.

Well, there you go.
Yes. When people see things that, for example, might be from Russian trolls, I’m very interested. When you see things that look as if they’re consequential political distortions, remember: FactCheck.org is here. It’s at the Annenberg Public Policy Center. Feel free to send us an email saying, Do you mind taking a look at this? We have an Ask FactCheck feature: if someone brings something to our attention and we think it’s important — that it’s consequential deception — we will address it online.

That’s good to know. It feels right now like the spread of conspiracy theories and false information is worse than ever. Do you think that’s true, or has it always been this bad?
We are in a changed environment in which what is there is more accessible to more people with greater ease. As a result, it’s very difficult to answer the question Is there more? — if by more you’re speaking numerically. What you can say with certainty is that there’s more access to what is there. And the likelihood that you will be exposed to it and uncritically share it with others — thereby lending credibility to it — is new; it means that misinformation can now carry your personal credibility to your network of friends. That’s problematic, because it means your credibility can be hijacked through an act you never really thought through.

When people like and share, they do it very quickly. One of the recommendations we’ve made for how to minimize the likelihood of misinformation being spread is to take a moment to carefully look at what it is and ask whether you actually believe everything it says. When the material is something we’re disposed to believe, we’re more likely to uncritically share it than we would be if we weren’t disposed to believe it. We’re all vulnerable to this — it’s not simply a phenomenon that occurs when people who aren’t like us are duped because they’re stupid. The tendency is to think it happens to everybody else — I don’t do that — when in fact, we’re all disposed to it because of our hardwired human biases.


Recently, we’ve seen social media companies stepping in to flag misinformation — or being called out for not doing enough to combat it and hate speech. What do you think the role of these private companies should be in terms of fact-checking what’s shared?
The reason I worry about the platforms doing fact-checking and then not providing access to content is that historically, when we first started seeing advertising in the age of radio, station owners were only allowing advertising by politicians they agreed with. Regulation was developed saying that if you provide access to one side, you have to provide comparable access to the other side. So the question becomes, who do you trust to decide what constitutes misinformation? When it comes to political speech, I’d prefer not to have the decision on who gets access to speech that reaches the public be in the hands of some corporate entity. I’d be much more comfortable with the idea that the corporate entity incentivizes a process by which there is aggressive fact-checking of everything that is on its site, and people are able to see the fact-checking as they see the content.

What would that look like?
In an ideal Facebook model, when you search for something, you’d see the fact-checking about it on the right-hand side of your screen. That’s not denying you access to the content; it’s providing alternatives so that you’re able to make a decision based on the additional evidence that is there. There are all kinds of problems with this alternative, but I’m very distrustful of large entities making decisions about political speech, because of the history. I like the decision not to take political [advertising] at all.

What if it’s not advertising, like when Twitter flagged Trump’s tweet for “glorifying violence”?
There’s a legitimate need on the part of the public to have good, accurate information. If someone is going to tell me that I can text in my vote — which the Russian trolls told people in their Facebook feeds and on their Twitter accounts — I don’t want the platform to let that go without putting up some clear signal that says that’s wrong, you can’t text your vote. Efforts to engage in voter suppression that are that clear are no-brainers, because the public is entitled to consequential, accurate information. The platform that carries that information has betrayed that voter. That boundary is clear. Whether something constitutes incitement to violence is a grayer area. Should it be marked but not suppressed, or should it not be seen at all? And when should you do that? Those are case-by-case calls, and it’s something we’ve got to debate.

What do you think about a company like Twitter making those determinations?
I prefer a marketplace-of-speech idea in which other speech is there to tell you why something is problematic. I’m very comfortable with counter-posting, which is essentially what putting up the fact-checks does.

In your book Cyberwar: How Russian Hackers and Trolls Helped Elect a President, you said that the country is ill-prepared for a sequel to the 2016 election, which was affected by outside forces. What do you think about 2020?
We have made changes, and the changes are good. First, the likelihood now that a site will be taken down — if it is what the platforms call inauthentic, a site that’s pretending to be something it’s not — is very high. Second, it’s much harder now for a foreign national to buy advertising on one of the sites for political purposes. Third, now when you see things on YouTube that are postings from places like RT — formerly Russia Today, a Russian propaganda arm — you see a disclaimer indicating that it’s government-sponsored. Before, you didn’t. We know that the Russians were inside our voting infrastructure. The good news is, the secretaries of state have made major efforts to try to minimize the likelihood of that happening on Election Day. The question is: Are their preparations adequate?

We’re fighting the last war, and we’re not prepared for the next one. The early warning that should come out of [July’s] Twitter hacking is: How are we going to prevent someone from hacking into the Joe Biden account and putting up a message that says, I’ve decided that I don’t want to be president, so I urge you not to vote today? Our reliance on social media as an ancillary newsfeed means we just saw another form of mischief that, if employed in the context of an election, could be extraordinarily problematic.

I’ve got a section in the new edition of Cyberwar about how the Russians actually ran a fake event that was treated in the local news stream as if it had happened, until the big reputable local newspaper came in to shoot it down. But it took hours and hours to do that. Now, pretend this happened at four o’clock in the afternoon on Election Day, while you’ve got long election lines and all people are holding in their hands is their mobile phones. Are they going to go to one of the network news outlets, or are they going to trust what they’re seeing inside that stream? The sad fact is, mainstream, legitimate, reputable news doesn’t have the capacity it once had to provide quick retraction, because whole blocks of the public don’t trust it enough to turn to it when they’re faced with something they might be uncertain about.


Kathleen Hall Jamieson with Eugene Kiely, director of FactCheck.org, at the Annenberg Public Policy Center in 2019. Photograph by Eric Sucar

What do we do about the lack of trust in mainstream news?
We need to do everything we can to bolster and ensure the credibility of mainstream, reputable news. Its credibility is lower among some sections of the population, and that lowered credibility is problematic. But we can forewarn people of the possible ways in which deception could happen on Election Day and, as a result, increase the likelihood that they don’t act on what they see, that they don’t immediately share it, and that they use some kind of checking process to verify before they take an action, like walking away from a polling place or deciding they’re not going to go because, Oh, the election has been shut down in Pennsylvania, when in fact, no, that was deceptive.

That’s terrifying.
There’s a whole other domain we ought to worry about. When people believe misinformation about vaccination, they’re less likely to be vaccinated. Again, we have the institutional trust factor at play — the extent to which people worry that the agencies that certify the safety and effectiveness of a vaccine have been politicized, and as a result don’t trust the agency certification. That’s why it’s so important that the FDA, the CDC and the NIH be protected from political influence — and that when they make errors, they stand up and correct themselves, because that’s part of establishing that their credibility is intact. We’re going to hit a point at which there is a vaccine, and the public’s willingness to take that vaccine is going to affect the well-being of a whole lot of people.

We’re currently seeing a very strong politicization of information, like the CDC recommendations to wear masks and practice social distancing.
The way to counter that is to have the people who are identified with Donald Trump wear masks, and to have Donald Trump wear a mask. His statements in interviews that he thinks mask-wearing is appropriate in some places are really important signals, because if this becomes a signal that wearing a mask means you’re a Democrat and not wearing one means you’re a Republican, the consequences of that political signaling are dire. This environment has been complicated by the fact that the early messaging was wrong. That early messaging may well have been driven by a need to keep masks in the hands of the medical professionals who needed them in order to save lives. But if that was the case, the argument should have been, We don’t have enough masks, not, Don’t wear a mask, we don’t think it works. We’re not going to look back on that as a moment of successful messaging, and it makes the current moment more difficult.

Published as “Truth or Consequences” in the September 2020 issue of Philadelphia magazine.