Interview with Renée DiResta

by Zena Ryder

Renée DiResta is the Research Manager at the Stanford Internet Observatory. She’s also a Staff Associate at the Columbia University Data Science Institute and a Founding Advisor to the Center for Humane Technology.

DiResta investigates the spread of misinformation on social media and helps policymakers understand and respond to the problem. She’s advised Congress, the State Department, and other civic, business, and academic organizations.

She is the author of the book The Hardware Startup and an Ideas Contributor at Wired. Her tech industry writing, analysis, talks, and data visualizations have been featured or covered in many outlets, including The New York Times, The Washington Post, The Economist, CNN, Forbes, and BuzzFeed.

How did you get interested in the topic of misinformation?

My interest in misinformation stemmed from my involvement in vaccine advocacy.

When I had my first child in 2013, I put him on a preschool waiting list in San Francisco. I looked into preschool vaccination rates and was disappointed to see how many were low. In some cases, rates were only a little over 30 per cent. I downloaded California public health data and looked at statewide vaccination rates over the previous 10 years, and there had been a significant decline over that period. So I started trying to understand vaccine hesitancy and blogging about it.

In January 2015, there was a measles outbreak in California, and a state legislator decided to introduce legislation to eliminate ‘personal belief’ exemptions from school vaccine requirements. I got involved and started posting about it on Facebook. I then started to get recommendations for anti-vaccine groups and pages. I found it fascinating that indicating interest in the topic led to my getting recommendations — but for the opposite side.

There wasn’t much of a pro-vaccine advocacy community on Facebook — some committed and passionate activists but not large networks. As we continued trying to pass the law in California, we also started to grow a counter movement online by establishing a page called Vaccinate California on Facebook. Now I was the administrator of a pro-vaccine page, and yet I was still getting recommendations for anti-vaccine content.

When we wanted to run ads to grow the audience for this page, the ad targeting interface showed me only anti-vaccine interests, and anti-vaccine keywords to use to target our ads. There was nothing pro-vaccine.

What was your diagnosis of the problem?

There were two main problems.

First, the way social media platforms worked was stacked against reputable scientific information and in favour of sensational, high-engagement stuff. So the curation and recommendation algorithms needed to be addressed to at least level the playing field and let good information surface.

Second, pro-vaccine communicators needed to up their game and grow a counter-movement.

On Twitter, anti-vaccine hashtags were being dominated by automated accounts. Twenty-six per cent of tweets in the hashtag for the vaccine bill were coming from just 0.05 per cent of the accounts, and some of those automated accounts were ‘on’ 24 hours a day, seven days a week. They were creating the impression that the anti-vaccine position was more popular than it really was, because anyone looking into the hashtag would see overwhelming opposition to the bill, even though polling showed public opinion in its favour.
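
As a rough illustration of the kind of measurement described above, the sketch below computes how concentrated a hashtag’s tweets are among its most active accounts and flags accounts that post around the clock. The data layout, function names, and thresholds here are illustrative assumptions, not details from DiResta’s actual analysis.

```python
# Illustrative sketch only: measure how concentrated a hashtag is among a
# tiny fraction of accounts, and flag round-the-clock posters (a common,
# though not conclusive, sign of automation).
from collections import Counter

def concentration(tweets, top_fraction=0.0005):
    """Share of all tweets produced by the most active `top_fraction` of accounts.

    `tweets` is a list of (account, timestamp) pairs.
    """
    counts = Counter(account for account, _ in tweets)
    n_top = max(1, int(len(counts) * top_fraction))
    top_total = sum(c for _, c in counts.most_common(n_top))
    return top_total / len(tweets)

def always_on_accounts(tweets, min_distinct_hours=20):
    """Accounts that tweet in at least `min_distinct_hours` different hours of the day."""
    hours_seen = {}
    for account, ts in tweets:  # ts is assumed to be a datetime.datetime
        hours_seen.setdefault(account, set()).add(ts.hour)
    return [a for a, hrs in hours_seen.items() if len(hrs) >= min_distinct_hours]
```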

In 2018, you talked about how YouTube algorithms were promoting conspiracy theories. Have the algorithms improved since then?

Yes, there have been major changes to the way YouTube’s recommendation engine suggests content. Researchers have found that it now recommends this kind of content significantly less. The same goes for Facebook, which started actively removing anti-vaccine groups from its recommendations at the end of 2019. Facebook has also been gradually removing other types of content that lead to polarization and radicalization, including QAnon and various militia groups.

So, these kinds of responses by social media platforms have been on the basis of the harm misinformation causes. Does that mean that health misinformation and political misinformation should be handled differently?

We’ve learned that health and political misinformation are linked on social media. The same kind of recommendation behaviour that promoted anti-vaccine groups was also promoting QAnon groups to anti-vaxxers. That’s because if someone was interested in one conspiracy theory, pages about other conspiracy theories would be recommended to them.

The recommendations were keying off things like high engagement in a group, rapid membership growth, a high number of active conversation threads, and so on. One of the unintended consequences of Facebook recommendations was the creation of large echo chambers. As we saw with the insurrection at the Capitol, there are communities that are deeply entrenched in unreality, who believe that the election was stolen, that there were conspiracies to change votes, and so on. These groups became insular echo chambers in which wild theories gained widespread acceptance, leading to serious real-world harm.
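
To make that dynamic concrete, here is a toy sketch of a purely engagement-driven group score. The signal names, weights, and example groups are invented for illustration; this is not Facebook’s actual recommendation code. Ranking on engagement alone pushes the fastest-growing, most active group to the top regardless of what it discusses.

```python
# Toy illustration (hypothetical signals and weights, not a real platform's code):
# a purely engagement-driven score favours whichever group is growing and
# talking the fastest, with no notion of whether its content is reliable.

def engagement_score(group):
    return (
        2.0 * group["active_threads_per_day"]
        + 1.5 * group["weekly_membership_growth_pct"]
        + 1.0 * group["comments_per_member"]
    )

groups = [
    {"name": "Local gardening club", "active_threads_per_day": 3,
     "weekly_membership_growth_pct": 1, "comments_per_member": 0.5},
    {"name": "Stolen-election conspiracy group", "active_threads_per_day": 40,
     "weekly_membership_growth_pct": 25, "comments_per_member": 6.0},
]

# The conspiracy group wins the recommendation slot on engagement alone.
for group in sorted(groups, key=engagement_score, reverse=True):
    print(group["name"], round(engagement_score(group), 1))
```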

The definition of what counts as ‘harm’ has evolved for the social platforms since 2018 or so. The focus has extended beyond immediate harm, such as incitement to violence, to long-term harms, such as the health risks involved in anti-vaccine misinformation. The platforms are acknowledging that political misinformation can also cause real-world harms — such as when people get sucked into a cult like QAnon that may be involved in real-world violence or that pulls them away from their families.

You’ve argued that transparency would help with moderation on social media, such as removing Facebook groups. Why would transparency help?

When people are on the receiving end of moderation — one of their groups is shut down or their content receives a warning label, for example — they often feel like the platform is acting against a particular viewpoint. Because the process of moderation is opaque, it can appear as though the platform is stifling conservative points of view. I think that perception of mass viewpoint censorship is harmful in the long run. One moderation incident can become a viral story in itself.

With more transparency, people could understand better why some content was recommended, or why something was labelled, or why a group was shut down. Transparency could reduce some of the anger about perceived censorship of conservative voices.

Have social media platforms handled unintentional sharing of misinformation differently from intentional sharing of disinformation?

Social media companies don’t want to adjudicate what’s true and what’s false. It’s hard, particularly when the evidence evolves quickly as in the case of the coronavirus pandemic. Instead, to put labels on misinformation, they’ve worked with fact-checking partners, such as the Centers for Disease Control and Prevention for health misinformation, and Poynter for news. These labels can dissuade people from sharing misleading content, because people generally don’t want to share stuff that’s false or misleading.

The trouble with disinformation is that the originator knows it’s false and they don’t care. So a label warning them that some content may be false doesn’t dissuade them from sharing it.

So a different way of tackling this has been to look at the networks of accounts that are deliberately sharing false information. Often, they’re fake accounts and they can be closed because of that. Similarly, spam techniques are often used to intentionally spread disinformation and get it to rise to the top of trending algorithms. So cracking down on those spam techniques — focusing on the distribution, rather than the message in the content — is another way to combat the intentional spread of disinformation.
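
A minimal sketch of what distribution-focused detection can look like, under my own simplifying assumptions (the data layout, thresholds, and function name are illustrative, not any platform’s actual system): rather than judging whether a post is true, it looks for pairs of accounts that repeatedly share the same links within seconds of each other, a pattern typical of spam-style amplification.

```python
# Illustrative sketch of distribution-based detection (not any platform's
# real system): find account pairs that repeatedly share the same URL
# within a short time window of each other.
from collections import defaultdict
from itertools import combinations

def coordinated_pairs(shares, window_seconds=30, min_co_shares=3):
    """`shares` is a list of (account, url, unix_timestamp) tuples.

    Returns account pairs that posted the same URL within `window_seconds`
    of each other at least `min_co_shares` times.
    """
    by_url = defaultdict(list)
    for account, url, ts in shares:
        by_url[url].append((account, ts))

    pair_counts = defaultdict(int)
    for posts in by_url.values():
        for (a1, t1), (a2, t2) in combinations(posts, 2):
            if a1 != a2 and abs(t1 - t2) <= window_seconds:
                pair_counts[tuple(sorted((a1, a2)))] += 1

    return [pair for pair, n in pair_counts.items() if n >= min_co_shares]
```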

How concerned are you about deepfakes being used to intentionally spread disinformation?

So far, most of the harm of deepfake images and videos has been limited to ordinary individuals — mostly women — whose faces are added to porn images or videos. Beyond revenge porn, deepfakes do have some potential in disinformation operations. They could be used to create scandal, such as a video that seems to show a politician saying or doing something they didn’t. But in such a case, hundreds of people would participate in assessing the video; it would not just slide through unexamined. And the technology to detect deepfakes is developing alongside the technology to create them.

I suspect that the area where fakes have the potential to be more harmful in the long run is AI-generated text. It’ll become very inexpensive to run a manipulation campaign, with vast quantities of text being produced very quickly, and the text will eventually be indistinguishable from content written by real people.

On the other hand, perhaps this will mean that people will become more skeptical of what they read from anonymous accounts online.

If you had a magic wand and could change just one thing related to misinformation what would it be?

In the short term, because the crisis is acute, it would be to change the curation and recommendation engines. Alter the algorithms so more reliable information surfaces to the top of the news feeds, as opposed to the most sensational information.
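
As a hedged sketch of that change (the reliability scores, weights, and field names below are placeholders I’ve invented, not any platform’s real ranking), a feed ranker could blend a source-reliability signal with engagement rather than ranking on engagement alone.

```python
# Hypothetical feed re-ranker: blend an (assumed) source-reliability score
# with engagement instead of ranking purely on engagement.

def rank_feed(posts, reliability_weight=0.7):
    def score(post):
        engagement = post["shares"] + post["comments"]
        # Cap the engagement term so a viral-but-unreliable post cannot
        # simply swamp the reliability signal.
        return (reliability_weight * post["source_reliability"]  # 0.0 to 1.0
                + (1 - reliability_weight) * min(engagement / 1000, 1.0))
    return sorted(posts, key=score, reverse=True)
```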

In the longer term, it would be media literacy education. My personal belief is also that everyone should take a statistics class!


Is there a vaccine for the infodemic?

The Misinformation Age

The next Roger W. Gale Symposium in Philosophy, Politics and Economics is on March 4 and 5.

Join us for a diverse expert panel on how we can combat misinformation while preserving free speech. Renée DiResta speaks on March 5 from 3:15 p.m. to 4:00 p.m.
