The message is ubiquitous: Update your software. Software patches, in addition to containing new emoji and features, include security fixes that remediate vulnerabilities in our technology. Keep up to date, the advice goes, and you will be safer.

But each new revelation about the major hack carried out by Russian intelligence operatives – which as of this writing has hit five government agencies and probably many corporations, with more to come – shows our problem isn’t broken technology. The problem is we haven’t figured out how to manage trust throughout the software supply chain and haven’t reckoned with the consequences of that failure.

Many of the organizations breached in this operation knew they were in the sights of adept foreign hackers. To guard against the threat and manage their complex networks, they sought technical solutions, including network-management software made by a company called SolarWinds. This spring, when SolarWinds offered a software update, they installed it – trusting that the new code, like so many updates before it, would make them more secure.

These organizations were wrong. Hackers reportedly working for the Russian Foreign Intelligence Service (SVR) appear to have placed a stealthy back door in updated versions of SolarWinds’ software. When customers applied the update, the hackers gained access to their networks. From there, the hackers could reach many additional machines and user accounts, spying undetected for up to nine months. Even though the campaign has come to light, it is nowhere close to over; the hackers still have illicit access in many organizations that will be extremely difficult to remove.

The principle that emerges from this espionage campaign is as apparent as it is alarming: The very systems that organizations, by necessity, trust to manage and secure the increasing technical complexity of modern networks can also allow hackers to penetrate those networks. This web of reliance is so complex – with network defenders trusting software that in turn trusts other software and so on – that it is almost impossible to comprehensively audit or understand. There is simply too much software and too many software updates. When any single thread in the web is yanked, the whole web can unravel.

This isn’t the first time that a security breach has made the vulnerability of the software supply chain apparent. Contrary to the spy-thriller trope of the lone hacker manipulating a single targeted computer, some of the most insidious operations are slow-burning efforts that gain privileged access to many targets via key software. The United States’ most capable competitors have known this for years. A Russian attack in 2017 known as NotPetya, which caused more than $10 billion in damage, gained access to its initial tranche of victims by compromising tax software that is widely used in Ukraine. Chinese espionage operations against other parts of the software supply chain have compromised at least six companies since 2016, using each as a springboard to more victims. A different Chinese operation added back doors to the widely used security products of Juniper, a popular American company, in 2012 and 2014. Against this backdrop, the recent Russian operation indicates how little progress we’ve made in guarding against this kind of threat.

Yet it’s hard to know what to do or recommend. Skipping software updates is a bad idea; they do, usually, make code better and safer. Instead, the kinds of organizations that find themselves in foreign hackers’ line of fire have to figure out how to get better both at building some degree of trust in their software supply chains and at operating in a high-threat environment in which anything close to full trust is inherently elusive.

Increasing trust in the software supply chain won’t be easy. Still, even if it’s impossible to audit every line of code that’s running on a network or to inspect every software update that’s applied, large organizations can improve on the status quo. They could limit the reach of outside software that has far-reaching privileges and select firms with strong security practices for key administrative functions. Potential targets also would do well to develop mechanisms for evaluating the trustworthiness of the code they use and to create market-based incentives for vendors to compete on security. (Reporting has indicated that SolarWinds’ practices left a lot to be desired.)

Given that complete trust is unattainable, potential targets should also assume their networks have been compromised, prioritizing detection and response as much as they prioritize prevention. Their security will depend on how effectively they can limit the damage and how quickly they can detect the intruders: Well-trained teams of human analysts should be tasked to assume hackers have breached the network and should continually hunt for intrusions. In this case, the broader operation came to light only after FireEye, a prominent cybersecurity company, discovered that some of its own systems had been compromised. Nine months into this espionage campaign, many of the targets are probably only just realizing they have been compromised, and it’s likely most still don’t know the extent of the harm.

The bottom line is simple and disappointing: This case is a reminder that we have systemic issues that no new technical fix and no new policy are likely to solve in the near term, even for large and capable organizations. We have downplayed that reality for too long, in part because we have made progress on how we manage key parts of our technology – for example, by administering our networks and fixing some software vulnerabilities. But we haven’t made nearly as much progress in figuring out which systems in the software supply chain deserve our trust. That problem is intractable and won’t be fixed by technology alone. The only thing worse than recognizing that fact is not recognizing it.

– – –

Buchanan, an assistant teaching professor at Georgetown University’s School of Foreign Service, is the author of “The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics.”