Cyber Deterrence is a Reduction to MAD
Some thoughts I had while reading an article in The Register based on an interview with Kenneth Geers, ambassador for the NATO Cyber Centre and senior fellow of the Atlantic Council. What follows are just some disjoint idle thoughts.
Bottom line up front
- Cyber, as a domain, is inherently asymmetric in both risk exposure and offensive capability
- “No one wants to be the first to die for a mistake”
- Trustable attribution requires “showing your work.” No intelligence org wants to reveal its secrets – burning its best assets and reducing its long-term capabilities in exchange for uncertain short-term gains
- Extraordinary claims require extraordinary evidence. Intelligence work is not the place to look for extraordinary evidence. Except, of course, with extraordinary access (see above, re: burning assets)
- Securing critical infrastructure and data is generally in the hands of (private) civilians, but the (public) nation state is expected to provide deterrence. That seems like an externality
Not my problem
Firstly, a lot of infrastructure and data is in civilian hands. The people and organizations responsible for securing things are not the ones responsible for retaliating against attacks. They couldn’t retaliate anyway: they lack the resources, the capability, and the legal authority.
As for investing in securing their systems, there is no regulatory requirement to do so. Indeed, I’ve seen it suggested that a board of directors has a fiduciary duty to shareholders not to bear the cost burden of securing critical national infrastructure. It’s not their business to directly bear the costs of defending the nation; that’s what the nation state is for in the first place!
From the article: “the domain of cyber-warfare, which is particularly fraught because of the fuzzy overlap between outright conflict, cyber-espionage and the responsibility of civilian agencies to protect critical infrastructures (banking, utilities, transport) – most of which is in civilian hands anyway.”
Clearly, no one knows how to even conceptualize cyber as a domain. As I phrased it on Twitter:
I'm wondering if thinking about cyber is sort of like admirals in 1910 contemplating the concept of Air Power.
— thaddeus e. grugq (@thegrugq) June 19, 2016
"what if we limit the maximum size and number of cannon on an airplane?" "how do we cross the T in the air?"
— thaddeus e. grugq (@thegrugq) June 19, 2016
The more you hack in peace, the less you bleed in war
Hacking takes time. Developing the tool chain takes time, recon takes time, sometimes systems get hardened and the optimal time to hack them was in the past, and so on. The best time to collect intelligence about an adversary is before you need it.
Geers explained that an air force could bomb a target overnight but (as evidenced by Stuxnet) you “can’t hack on the same day”.
“…You have to decide who, when, how deeply you want to hack… and study networks beforehand,” Geers told El Reg.
“You’d better do a lot of hacking in peace-time [since] you can’t do it overnight,” he added.
Deterrence requires Attribution => Trust => Transparency
The level of attribution required to prove a cyber event was orchestrated by a particular threat actor is pretty high. There needs to be a level of transparency that is sufficient to convince the population that an attribution is accurate. No one wants to start a shooting war, or massive escalation, over a case of mistaken identity.
The problem with transparency is that it will invariably reveal sources and methods which the adversary can then use to improve their own security. Suppose, for example, the evidence is an audio recording of General Baddeed saying “hack company XYZ and steal their secret formula.” Well, General Baddeed’s security team is going to be sweeping his office for audio bugs within the hour. And that will be the last time anyone learns anything about what General Baddeed discusses in his office! Is “proving” that General Baddeed was responsible for the theft of Company XYZ’s secret formula worth the cost of not knowing what General Baddeed says in the future? Unless you’re Company XYZ (who probably doesn’t get a say in the matter), the answer is “nope.”
Another problem is that much of the cyber intelligence analysis and production is done by private companies these days. Not only do they want to keep their sources and methods safe from the opposition, they also have a fiduciary duty to their shareholders to protect their trade secrets. Consequently, they don’t really want to reveal too much of how they arrived at their conclusions. It may damage their position in the market, will probably directly aid their competition, and will certainly aid the opposition.
Yet another problem is that frequently the level of “proof” in an intelligence-based investigation is not quite the “smoking gun” that, for example, a normal person would expect. It will probably be a bunch of circumstantial evidence, a tangle of timelines, snippets of information from various sources with different levels of credibility and reliability. This patchwork of data needs to be processed and analyzed via complicated techniques designed to reduce cognitive bias. All of this, only to arrive at a sort of high probability of “maybe”. “Given this available data, this is the range of conclusions that fit, and this particular one seems the most likely, maybe.” Hardly something you’d feel comfortable starting a war over.
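As a toy illustration (not anyone’s actual tradecraft – the indicators and numbers below are invented for the example), combining several independent, medium-reliability indicators via Bayes’ rule shows why the end product is a probability, not proof:

```python
# Toy Bayesian update: several modest indicators combine into a
# probability of attribution, never a certainty. All values here
# are illustrative assumptions.

def update(prior: float, likelihood_ratio: float) -> float:
    """Apply one piece of evidence to a prior via Bayes' rule (odds form)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Start neutral: 50/50 that Actor X is responsible.
p = 0.5

# Each indicator's likelihood ratio: how much more likely we'd see it if
# Actor X were responsible than if someone else were. Deliberately modest,
# because tools leak, infrastructure is reused, and timestamps can be faked.
indicators = [2.0, 1.5, 1.5, 2.0]  # e.g. TTP overlap, shared C2, timezone, language

for lr in indicators:
    p = update(p, lr)

print(f"Posterior probability of attribution: {p:.2f}")  # 0.90
```

Ninety percent sounds impressive right up until you ask whether a one-in-ten chance of bombing the wrong country is acceptable. It isn’t, which is the whole point: a high probability of “maybe”.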
Trust, hard to earn, easy to lose
Complicating matters further still, a number of high-profile misattributions in the past have undermined public confidence in cyber attribution. Indeed, intelligence failures in general have reduced the credibility of intelligence-based statements with the public.
If the opposition is capable of creating sufficient doubt about attribution, for example by following the Russian Maskirovka doctrine, then it’s hard to see how to get the population to support escalation. The days when the FBI could get away with just saying “trust me” are long gone.
Establishing “plausible deniability” in the face of accusations of launching a cyberattack is always a potential option. “Fear of retaliation is low and there’s not much deterrence.”
How exactly deterrence would work seems “left as an exercise to the reader.” So how does one nation state deter another from conducting cyber operations? If DPRK is behind the SWIFT hacks, as allegedly they were behind the Sony hack, what nation state level tools are available to deter them? What is Bangladesh supposed to do to deter DPRK from stealing millions? How does the US deter Russia from conducting offensive “active measures” campaigns, if they are purely executed in the cyber domain? Unless there’s a full escalation chain all the way up to MAD.
Essentially, for deterrence to be effective it needs buy-in from the population. That requires a lot of transparency about the attribution process, including data which might come from valuable sources and methods that are irreplaceable. And even then, there is still no guarantee people will trust it or believe it.
This doesn’t add up
It is a law of intelligence that no information should be given away; it must be traded for something of equal or greater value. With the low probability of public acceptance, the not unreasonable possibility of misattribution, and the near certainty of exposing valuable sources and methods, the incentives are not aligned for sufficiently transparent attribution.
Where it went off the rails
I personally believe the problem comes from a poor conceptual understanding of the cyber domain. The best work on this was years ago, and that has been discarded for this “how is a cyber like a nuke?” mentality. NATO says they will respond with kinetic force against cyber attacks, and yet they don’t even know what a cyber attack looks like. Good luck with that.