Technology alone will not be enough. Fake news, like the less complex problem of email spam, will require a multipronged solution involving technology, education and policy.

Edgar Maddison Welch and Khawaja Muhammad Asif may seem to have nothing in common. Welch, a North Carolina resident, opened fire in a Washington, D.C., pizzeria, while Asif, Pakistan’s minister for foreign affairs, made a nuclear threat against Israel. But both men were driven to act by fake news, or misinformation, or disinformation. Whatever we choose to call it, fake news has real consequences.

To understand fake news, one has to understand the underlying drivers — and technology is a major driver. The success of the internet and social media led to various opportunities to monetize our news feeds and viewing habits. Programmatic advertising and ad exchanges cropped up to slice and dice our online attention.

The ever-lower cost of setting up websites and the monetization opportunities offered by internet advertising have resulted in a mushrooming of fake-news sites, many operated by foreign actors in Macedonia and other countries purely for financial gain. Some have likened the economics behind this to the growth of subprime mortgages before the housing crisis.

Advances in technology, particularly in artificial intelligence, have also made it possible to create increasingly realistic fake audio and video. Adobe’s Project VoCo and speech-synthesis projects such as Lyrebird and Facebook’s Project Loop seem like precursors to a potent “Photoshop of voice” tool.

Augmented-reality frameworks such as Apple’s ARKit and Google’s ARCore have made it easy for anyone with reasonable programming skills to create videos of real-life scenes containing fake objects or persons inserted by the creator.

A facial-manipulation project, Face2Face, promises to create videos of anyone saying anything. Behavior-engineering companies like Cambridge Analytica are known to employ artificial intelligence and big data to shape narratives and shift mass opinion. The 2016 election may have been the first U.S. election fought with technology-enabled narrative shaping; it will not be the last. Without proper checks and balances, we might innovate our way to dystopia.

Working from the trenches of technology creation, I am making a case for hope. Fortunately, “technology solves the problems it creates” may as well be the refrain of humanity’s innovative spirit. Leaders of technology companies such as Facebook and Google have openly pledged to fight fake news by partnering with reputable third-party fact-checking organizations and by investing significant resources to disincentivize fake-news purveyors and alert consumers to disputed content.

Initiatives such as Google’s Project Redirect aim to penetrate thought bubbles and are being deployed to counter the radicalization of individuals by extremists.

The global participation of more than 900 AI researchers, hackers, journalists and fact-checkers in the Fake News Challenge, created with no budget or sponsorship, gives me immense hope. The challenge’s goal is to enable the sharing of ideas among these communities and to investigate solutions that minimize the burden of fact-checking.

Artificial intelligence, in particular, has tremendous potential to combat fake news. Our company, Joostware, is working with the support of the Knight Foundation and the Internet Archive to build an audio and video claim-verification tool for fact-checkers and journalists. AI can be used to build trust and reputation systems, and for early detection of potentially viral content, to decide what to fact-check proactively. AI also can be applied to identify content that evokes strong emotional reactions, as many fake-news stories do.
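To make that last idea concrete, the sketch below trains a simple text classifier that flags emotionally charged headlines so fact-checkers can prioritize them for review. It is a minimal illustration, not Joostware’s actual tool: the toy headlines, labels and choice of a TF-IDF plus logistic-regression model from the open-source scikit-learn library are assumptions made purely for demonstration.

```python
# Minimal sketch: flag emotionally charged headlines for fact-checking triage.
# The headlines and labels below are invented for illustration; a real system
# would be trained on a large, carefully labeled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = emotionally charged / sensational, 0 = neutral.
headlines = [
    "SHOCKING: Politician caught in secret plot to destroy the country",
    "You won't believe what this celebrity said about the election",
    "Outrage erupts as leaked memo exposes massive cover-up",
    "City council approves budget for road maintenance",
    "Local university publishes annual enrollment figures",
    "Weather service forecasts light rain over the weekend",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features over word unigrams and bigrams feed a logistic regression.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
model.fit(headlines, labels)

# Score new headlines; a higher probability means higher priority for review.
new_headlines = [
    "EXPOSED: Secret documents reveal outrageous government scheme",
    "Library extends weekend opening hours starting next month",
]
for headline, prob in zip(new_headlines, model.predict_proba(new_headlines)[:, 1]):
    print(f"{prob:.2f}  {headline}")
```

In practice, scores like these would be only one signal among many, combined with sharing velocity and source reputation, and a human fact-checker would always make the final call.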

Of course, technology alone will not be enough. Like the email-spam problem, which is less complex than fake news, this one will require a multipronged solution involving technology, education and policy. Educators like Jevin West, an assistant professor at the University of Washington, are training young minds to be more discerning about what they read online. Countries such as Germany are considering legal remedies for the fake-news problem. Above all, journalists and fact-checkers are working tirelessly to prevent the truth from being stifled or distorted. I am hopeful that, working together, we can build a society cemented in truth.