Disclaimer: The views expressed in this blog post are my own and do not necessarily reflect those of my employer. They are based on my personal experiences and reflections.
Old flamewars die hard. In January 2021, a debate coursed through the infosec community regarding Google TAG and Project Zero’s decision to publish a report on what they suspected could be a Western antiterrorism operation. While there were dissenting voices back then, it seemed there was a degree of industry consensus that Google’s decision was the correct one. Fast forward to June 24, 2024: Michael Coppola releases a blog post firmly in favor of not disclosing such cyberattacks, which I encourage reading. This is as good an occasion as any to lay down my thoughts on the matter, which I never did despite giving a few conference talks on the subject over the past years (chiefly Ethics in Cyberwar Times at PTS2022).
Let us start by breaking down Coppola’s blog post into its core arguments, which I’m reproducing as faithfully as I can:
- If defenders aim to reduce harm, then they should leave counterterrorism operations be, because nothing reduces harm like preventing terrorism;
- Both current and future operations are impacted by the destruction of capabilities (i.e., patched exploits), without which agents might need to be put at risk in the field;
- There is an ontological difference between counterterrorism and espionage in the cyber world;
- By putting a halt to a counterterrorism operation, Google meddled with national security affairs.
The post also lists a number of examples in which offensive operations conducted by (Western) countries have led to positive outcomes – I do not intend to contest those on a factual basis, although I will note for now that several of them happened in the context of wartime operations.
The aim of my own writing is not to categorically establish that cyberoffense is wrong – I do not believe that. Instead, I hope to shed some light on the (I believe intractable) moral dilemmas defenders are faced with, and provide a more nuanced account of the tradeoffs at play.
Acknowledging bias
It’s probably worth pointing out that this is a difficult debate for the infosec community as a whole, because all participants have vested interests that likely impact their thinking on the matter. Looking at both sides, it’s quite interesting to note that proponents of burning operations tend to be defenders, while people in favor of not patching exploits happen to work as vulnerability researchers or are members of the intelligence community. Did we pick our respective fields based on pre-existing beliefs, or are those beliefs a consequence of the environments we ended up in?
I’m a strong believer in systemic thinking, so it is very important for me to mention explicitly that nothing I write here is an attack on Michael Coppola or anyone else involved in the 0day trade. Had I made a couple of different career choices ten years ago, I might be in their place, and if I were in their place, odds are I would think as they think. This post is a critique of the system they evolve in.
Regardless, it can be argued that defenders reap secondary benefits from a world where more operations get burned: status, speaking slots at prestigious conferences, and so on. Conversely, people on the attacker side sleep better as long as they believe their work is used for the common good, and understandably do not want their toys broken by random basement dwellers. There are grounds to call everyone involved hypocritical, so let’s not do any of that and just move on.
On terrorism
Terrorism is one of those wildcard words that gets thrown around for quick wins in arguments. “We’re doing this to fight terrorism. Do you have a problem with that? So you support terrorism then? Thought so.” Wait, hold on, let’s talk about terrorism for a second. One of the assumptions made in the original blog post is that terrorism has a sort of obvious nature – when you see it, you know what it is. That might be true in some cases, but generally speaking it really isn’t.
Consider that Nelson Mandela used to be on terrorist watchlists, or that the Uighur minority is accused of terrorism in China. I’m not equating Mandela to ISIS; in fact, that’s my whole point: having them in the same category shows how little the “terrorist” qualifier means on its own. You do not recognize terrorism by simply looking at it, and when you think you do, there’s another country in which your company operates that strongly disagrees with your assessment. This is where I’m usually accused of defending moral relativism: if different people have different and contradicting visions of terrorism, then terrorism doesn’t really exist, right? Wrong. All I’m saying is that, studying an attack campaign with the limited technical information I have, I don’t think I can in good conscience make a call such as who’s a terrorist and who isn’t (which, incidentally, would be interfering with national security affairs).
How about trusting the government then?
I will answer that question with two of my own: “which government?” and “have you lost your freaking mind?”. The first is problematic enough on its own. Imagine country A wants to go after a separatist who went into exile in country B. The first says they’re a wanted person; the second is happy to offer them protection. Would you report on a possible intrusion case? Here, there are two perfectly legitimate but incompatible jurisdictions, with you sitting on the overlap. What if you’re doing business in both of them? What if the board says you might one day? Pick any side and you end up on a powerful someone’s naughty list.
It only goes downhill from here: the NSO story has proven that even Western governments cannot be fully trusted with powerful cyber-weapons – odds are that if there were no risk of being discovered, such capabilities would be abused even more. I have to say I find it quite funny when I see respected members of the infosec community (rightfully) rise up against the curtailing of human rights in their home country, only to swear the next day that their government can do no wrong in cyberspace. Democracies were designed with built-in checks on power because they are needed. Whether we like it or not, this is the role the defense industry plays in today’s ecosystem.
Exploits over time
One more thing we need to take into account relates to Coppola’s argument #2, which, I have to point out, cuts both ways. It’s true that exploits are generally not one-shot devices used for a single operation. But who’s to say the next one will also be related to counterterrorism? There are documented cases of cyber-espionage between allied countries – nothing wrong with that, it’s their job, I was told by overseas friends back then. Fine, but you can understand how it also makes sense for local defenders to get such tools disabled if they get the opportunity. Fair’s fair.
We don’t even have to delve into the subtleties of geopolitics for this argument to make sense. Exploits sometimes get captured by rivals. They get leaked. Imagine being the analyst who decided to look away from EternalBlue, only to see it stapled onto ransomware down the road – how’s that for failing to reduce harm?
Cogs in the machine
It could be argued that my case amounts to a collection of speculative what-ifs. That’s sort of the point. Being a defender is tough, due to our very limited perspective. Given an incident, we rarely know what truly motivates the attackers and have no way of guessing what they’ll do next. Was this journalist targeted for political reasons, or are they in fact an undercover operative from a systemic rival? Faced with impossible choices and tremendous outside pressure, we have no option but to decide on a policy: a clear set of rules which, at the very least, shields us from accusations of partisanship.
That policy can only be: “we will protect customers against all cyber-attacks”. Not “we will never protect against state-sponsored cyber-attacks” (good luck pushing product with that), not “we’ll protect you from non-local cyber-attacks” (how would you even know? Are you so close to your intelligence community that they’re open about their TTPs, and if so, why should I trust you?). Just: “we will do the job we promised for the people who paid for it”. I understand this will inevitably lead to undesirable outcomes here and there, but I don’t see anyone asking criminal lawyers to botch their defense when it’s convenient either.
Overall, defense is part of the vast ecosystem that is cyberspace, where every participant has a role to play – leading to a delicate balance which (by the way) doesn’t necessarily favor us in the first place. It’s best that everyone involved sticks to their place.