Around the time the U.S. government was testing nuclear bombs on Bikini Atoll in the spring of 1954, residents of Bellingham inspected their windshields and noticed holes, pits and other damage.

Some blamed vandals, perhaps teenagers with BB guns. Once Bellingham residents reported the pits, people in nearby towns inspected their windshields and found similar damage. Could sand fleas have caused it? Or cosmic rays?

As more people examined their windshields and found more pits, a frightening hypothesis emerged: nuclear fallout from government hydrogen bomb testing.

Within a week, people around Seattle, 90 miles away, were reporting damage as well. But the rumors faded almost as quickly as they had begun, as scientists and local authorities refuted the most prominent theories.

What would become known as the “Seattle windshield pitting epidemic” became a textbook example of how rumors propagate: a sort of contagion, spread through social networks, shifting how people perceive patterns and interpret anomalies. Car owners saw dings that they’d previously overlooked and shared observations with others, who repeated the process.

Today this phenomenon would probably be described using the terms misinformation, disinformation, and perhaps fake news. Communication has certainly changed dramatically since the days of party-line telephone calls and black-and-white television, but scholarship from that era holds insights that remain essential in the digital age. The study of rumors, which surged around World War II, is as relevant as ever.

Our team of researchers at the University of Washington has been investigating these issues for more than a decade. Initially, we centered our research on rumors. But we shifted focus in 2016 as public and academic attention turned to misinformation and social media manipulation. By 2020, our collaborators in an effort called the Election Integrity Partnership had developed an analytical framework that allowed dozens of students to scour social media platforms in parallel, feeding information to trained researchers for analysis and to authorities and communicators for potential mitigation.

As we worked to build ways to quickly prioritize unsubstantiated claims about election processes and results, we found that the terms misinformation and disinformation were often cumbersome, confusing or even inaccurate. But we came full circle in 2022, during a second iteration of our collaboration on election integrity, because the concept of rumors worked easily and consistently to assess the potential for unsubstantiated claims to go viral online.

We are convinced that using rumor as a conceptual framework can enhance understanding of today’s information systems, improve official responses, and help rebuild public trust.

In 2022, we created a prioritization tool around the concept of rumors. The idea was to help anticipate rumors that could undermine confidence in the voting process and assess whether a given rumor would go viral. Much of the concept’s utility is that responders can engage with an information cascade before veracity or intent can be determined. It also encourages empathy by acknowledging the agency of people spreading and responding to rumors, inviting serious consideration of how they can contribute to the conversation.

In the scholarly literature, misinformation refers to content that is false but not necessarily intentionally so; disinformation, which has roots in Soviet propaganda strategies, refers to false or misleading content intentionally seeded or spread to deceive.

These terms are useful for certain problematic content and behavior, but they are increasingly politicized and contested. Mislabeling content that turns out to be valid — or potentially valid, like the theory that COVID-19 began with a so-called lab leak — erodes public trust, undermines authorities’ credibility, and thwarts progress on consequential issues like strengthening democracy or public health. In contrast, the label of rumor declares neither falsity nor truth.

Rumoring can be especially valuable when official sources are incentivized to hide information — for example, when an energy company is withholding information about pollution, or a government agency is covering up incompetence.

Historically, researchers defined rumors as unverified stories or “propositions for belief” that spread from person to person through informal channels. Rumors often emerge during crises and stressful events as people come together to make sense of ambiguous, evolving information, especially when “official” information is delayed. In this light, the numerous rumors that spread in the early days of the pandemic about its origins, impacts and potential antidotes are unsurprising.

The production and spread of rumors are often understood as a natural manifestation of collective behavior, one that plays productive informational, social, and psychological roles. For instance, rumors help humans cope with anxiety and uncertainty. A population coming to terms with the risks of nuclear weapons found a way to voice its fears in the dings it noticed on the windshields of its Ford Thunderbirds and Chevrolet Bel Airs. Recognizing these informational and emotional drivers of rumoring can support more empathetic — and perhaps more effective — interventions.

We initially developed this framework to guide our “rapid response” research. After conversations with local and state election officials seeking guidance on when and how to address false claims about their processes and procedures, we adapted the framework to their perspective. Since then, we have presented it to a small number of election officials for feedback. We aim to develop, deploy and evaluate trainings based on the framework for 2024.

Though we developed techniques to classify rumors specifically for elections, we see potential for much wider application.

Evaluating rumors for potential virality can be more useful than deciding whether to apply a label like misinformation or disinformation. It may not be worth drawing attention to a rumor that is likely to fade on its own, but it would be valuable to correct a harmful false rumor with high spread potential before it takes hold. Such insights can inform crisis communication, platform moderation or fact-checking.

Making the right calls on information is crucial because these phenomena are now inherently adversarial. Mistakenly assessing intent or accuracy can cause a responder to lose credibility. One overarching benefit of a framework like ours is that journalists, authorities and researchers can get a handle on ever-shifting flows of ambiguous information without risking a reputation-damaging mistake.

More than that, by inviting a stance that seeks to engage with rumors rather than correct misinformation, the framework could make responses more resistant to bad-faith criticism. It could also allow communicators to acknowledge their own uncertainty, recognize the potential information in communities’ rumors and help rebuild lost trust. We hope this framework around rumors, and what others might build from it, can support quick, effective responses.

This is excerpted from an article published in Issues in Science and Technology.