by Zena Ryder
Heidi Tworek is Associate Professor of International History and Public Policy at the University of British Columbia in Vancouver. She’s a member of the Science and Technology Studies program, the Language Science Initiative, and the Institute for European Studies.
In addition to her UBC roles, she’s a senior fellow at the Centre for International Governance Innovation, and a non-resident fellow at the German Marshall Fund of the United States and the Canadian Global Affairs Institute.
Tworek has briefed or advised policymakers and officials from multiple European and North American governments on media, democracy, and the digital economy. She’s published many book chapters and articles in both academic journals and the popular press. Her first book, News From Germany: The Competition to Control World Communications, 1900-1945, was published by Harvard University Press in 2019.
History plays an incredibly important role. It helps us see patterns and understand what is, and what isn’t, unprecedented in our own era.
My doctoral dissertation, which became my first book, examines how Germans tried to control news in the first half of the 20th century by seizing the new technology of radio, and how that changed the role of news in German and global society.
History teaches us that the problem of new technologies and the spread of misinformation and disinformation is absolutely not a new problem. Nor is it new for states to use information or misinformation for their own geopolitical or economic advantages.
Policymakers should listen to experts from many different disciplines — economics, computer science, and so on. Historians should also have a seat at the table to present their body of evidence.
When policymakers are considering a policy, I can tell them the same suggestion was made in the era of radio, and this is how it turned out. I wouldn’t say it’s definitely going to go the same way today, but history gives us a body of evidence to draw on when thinking about how a proposed solution may or may not work. Good intentions can sometimes have bad consequences, so evidence from history can help us see potential unintended consequences.
One of the most important lessons is that there’s no silver bullet. Humans repeatedly deal with the problem of misinformation, disinformation, and poor-quality information. We shouldn’t seek a single remedy to solve it once and for all. Instead, we should recognize that an inevitable consequence of information in a free society is that you end up with problematic information. The question is its quantity, prevalence, and consequences. And we have to figure out how to deal with this issue in each new context as new technologies arise.
One lesson is that much of the groundwork for the Nazis' misinformation campaign was laid long before the Nazis arrived. Laying that groundwork was a long process: it involved the development of radio technology; the news agencies the Nazis used had been subsidized by previous governments; and the places they targeted, like Latin America and East Asia, were similar to those targeted by previous German governments.
It’s helpful for us to realize that the spread of misinformation does not spring out of nowhere. We need to be on the lookout for how technologies or businesses can be abused or misused.
Another lesson is that the problem isn't necessarily officials telling journalists exactly what to write. Even in the Weimar period, journalists at some nationalist, right-wing news outlets were made to understand that they had to take a particular direction or suffer the consequences. Control was less about individual pieces of content and more about pushing coverage towards certain types of content rather than others.
Starting from the mid-19th century, there were only a few news agencies that supplied the vast majority of newspapers. By influencing those companies, different German regimes attempted to control the news. Although social media involves many-to-many communication, there are still only a small number of companies controlling that possibility. Companies like Facebook, Google, and Twitter are the bottlenecks that largely control how we communicate in these many-to-many broadcasting environments.
Studies show that a lot of misinformation still comes from the top. In theory, social media enables many voices to be heard, and they sometimes are (for better or for worse). But we still see that the most problematic information comes from the top, such as from former US president Donald Trump. Social media hasn't eradicated hierarchies.
One of the things that makes the phenomenon tough to study, though, is that there’s lots of data that researchers can’t access. For example, our team did a study on online harassment of Canadian politicians during the 2019 election. We ended up studying Twitter, but we couldn’t study Facebook, because we couldn’t access the information we needed.
So any study of misinformation and how it spreads has to come with caveats about how much access researchers have to the data.
The idea of segmenting people in order to show them ads is a very old phenomenon. If someone wants to advertise in a newspaper, they’ll choose the specific newspaper and the section based on the demographic they’re trying to reach.
Using ads to subsidize content for consumers is also a long-standing phenomenon. Looking at this phenomenon in history, we can see how mechanisms to regulate advertising developed over time. For example, in the 19th-century United States, people could place newspaper ads selling quack cures, and the newspapers bore no responsibility for them. In the early 20th century, people died from using those products. So during the first 30 years of the 20th century, the Food and Drug Administration ended up regulating advertising, saying people couldn't advertise products before safety trials were done.
So we can look to the past to see how regulation affected advertising, and that kind of scrutiny might be helpful for us now.
Here’s a specific example. Twitter was initially designed with a very libertarian view of free speech — the idea that more speech is good speech. This had consequences for how people used the platform and the kind of conversations it facilitated.
Someone who had bad experiences with people making hateful comments had few options beyond blocking or muting people, reporting the tweets, or leaving the platform.
Twitter began thinking about how to design a space for productive and constructive conversation, what the company calls ‘healthy conversation’. Now, Twitter gives users control over who can reply to their tweets. So if someone has had problems with misogynist or racist comments from random people, they can restrict comments only to people they follow.
That’s a starter solution that doesn’t involve deletion, but uses a different kind of curation.
Throughout history, people have scapegoated others in deeply troubling, racialized ways during pandemics. So, alas, it’s not a surprise that we’ve seen the rise of conspiracy theories laced with racism during this pandemic.
One of the solutions is just to get the pandemic under control. When people see an improvement in their everyday lives, they’re less inclined to scapegoat others.
A lot of conspiracy theories are founded on prejudice. So it’s important to forestall stigmatization of others through effective communications from public health officials and politicians. Places like South Korea and Taiwan have done a good job of this, and we could learn from their communications.
Tackling misinformation should be informed both by history and also by taking a global perspective.
The next Roger W. Gale Symposium in Philosophy, Politics and Economics is on March 4 and 5.
Join us for a diverse expert panel on how we can combat misinformation while preserving free speech. Heidi Tworek speaks on March 4 from 1:45 p.m. to 2:30 p.m.