Disclaimer: The views expressed in this blog post are my own and only my own. They are based on my personal experiences and reflections.
In early 2023, a weird thought crossed my mind. Everyone was pondering how AI would affect the tertiary job market, and a soft consensus had already formed around the idea that those systems shouldn’t be left running unattended; that their maximum utility could only be attained under human supervision. I wondered whether there was anything computers wouldn’t eventually be able to do for us, and then it hit me. The reason we will always need human supervision is not that machines make mistakes, but that when they do, we need someone to be held accountable. The one job computers and androids can never take away from us is taking the fall.
That concept of accountability felt very foreign to me – in the sense that it rarely, if ever, came up spontaneously in my mental landscape. I decided to dig a little deeper and explore how it factored into the IT space. Eventually, I realized it underpinned that space entirely, and that my initial intuition about AI supervision was completely upside down. But we’ll get to that.
EULA-LA
In the midst of this week’s major incident involving CrowdStrike software, I thought it would be interesting to look at their End User License Agreement (EULA) and figure out what it says about liability in such cases. The original thread contains more details, but here is what the contract boils down to:
- The program comes with no warranty that it does anything (including what it is advertised to do), or even that it runs reliably.
- If any damages arise from the use of the program (loss of data, disrupted operations, injuries, you name it), they’re on you, even if the vendor “knows or should have known about the possibility of the damages”.
- The program should never be used in sensitive environments where fault-tolerance is a prerequisite (e.g., nuclear plants, aircraft control systems, life-support systems).
- Even where liability can be established, it is limited to whatever you paid for the program. There’s a line I love because it really drives the point home: “CrowdStrike would not enter this agreement without these limitations on its liability”.
People pointed out to me that such terms are pretty boilerplate as far as software EULAs are concerned. That was in fact my broader point: look at Windows 11’s (§9.d) or Nvidia GeForce’s (§6) EULA; they contain the exact same provisions. I understand that major accounts may have the power to negotiate custom contracts, but I’ll also note that the contract I based my analysis on was meant for a government customer. It’s safe to say that for almost everyone on the planet, and definitely for us mere mortals, standard terms apply.
We end up with a very interesting situation where software vendors ship possibly insecure programs and make billions in profit, while the associated risk is entirely shouldered by the customer. It’s been this way since at least the 90s.
Cloud at last
Then, in the mid-2010s, the whole industry packed up and moved to the cloud. Many years later, after countless heated debates on the subject, I can publicly admit that I don’t get it. I’ve seen your AWS and Azure bills, and there’s no way in hell I’m ever believing you couldn’t buy dozens of servers and a couple of SREs with that money. Whenever I make comments about owning and controlling hardware, the pushback always comes down to what I now realize is the accountability issue: managing cloud-like infrastructure is too hard[1]. But hard is another way of saying “we don’t feel like we can reasonably assume responsibility for managing this service on our own”.
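For the sake of argument, here is the kind of back-of-envelope arithmetic I have in mind, as a quick Python sketch. Every figure below is an assumption I made up for illustration; your bills, hardware prices and salaries will differ:

```python
# Back-of-envelope comparison: assumed cloud spend vs. owning the hardware.
# Every number here is a made-up assumption for illustration, not a benchmark.

monthly_cloud_bill = 250_000        # assumed monthly AWS/Azure spend (USD)

server_unit_cost = 15_000           # assumed price of one capable server (USD)
server_lifespan_years = 5           # assumed amortization period
colo_per_server_month = 300         # assumed colocation cost (power, rack, network)
sre_total_comp_year = 200_000       # assumed fully loaded SRE compensation (USD)

yearly_cloud = monthly_cloud_bill * 12

num_servers, num_sres = 50, 3
yearly_owned = (
    num_servers * server_unit_cost / server_lifespan_years  # amortized hardware
    + num_servers * colo_per_server_month * 12              # colocation fees
    + num_sres * sre_total_comp_year                        # people to run it all
)

print(f"cloud: ${yearly_cloud:,.0f}/yr vs. owned: ${yearly_owned:,.0f}/yr")
# cloud: $3,000,000/yr vs. owned: $930,000/yr (with these assumed numbers)
```

The exact figures are beside the point; with any plausible inputs the gap is wide enough that “it’s cheaper” rarely survives as the real argument, which is what pushed me toward the accountability explanation.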
So this risk of running possibly insecure software, which was initially offloaded onto you by vendors, is in turn passed on to cloud providers (who, admittedly, are pretty good at offering these services). This is where it becomes a thing of absolute beauty:
- EULAs for cloud services contain the exact same language (§9) as their software counterparts, which means the providers won’t be liable for any damages either.
- You will be paying huge premiums to transfer accountability back to, sometimes, the very same company that put it on you in the first place… and it will take your money but still won’t accept liability. I swear I’m rolling on the floor laughing as I’m typing this sentence.
“I accept the risk”
In the infosec world, when an incident hits and for want of a better culprit, the axe will usually drop on the CISO – an individual sometimes known to utter the phrase dreaded by security engineers everywhere: “I accept the risk”. While it may seem we’re nearing the end of our quest to find accountability somewhere, there is actually no such thing here. The naked truth is that I can’t name a single instance where a CISO accepted a risk in the sense that they had to face the consequences for it. Consider that the average tenure of a CISO is between 18 and 26 months. Despite doing what I’m sure is their best with the limited means available, CISOs are in fact playing the most cynical game of musical chairs: accumulating long-term risks and moving on to the next position before those risks can ever materialize. The loser can always blame their long-gone predecessor.
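To see why the musical chairs work so well, consider a toy model (the parameters are mine, picked purely for illustration, not industry data): a risk accepted today materializes after an exponentially distributed delay, while the CISO stays for a fixed tenure.

```python
import math

# Toy model of the musical-chairs dynamic. Both parameters are assumptions
# chosen for illustration, not measured industry data.
tenure_years = 2.0             # roughly the 18-26 month average tenure cited above
mean_years_to_incident = 5.0   # assumed mean delay before an accepted risk blows up

# For an exponentially distributed delay, the probability that the incident
# happens only after the CISO has already left is exp(-tenure / mean_delay).
p_lands_on_successor = math.exp(-tenure_years / mean_years_to_incident)
print(f"{p_lands_on_successor:.0%}")  # ~67%
```

Under these made-up numbers, roughly two out of three accepted risks blow up on someone else’s watch, and the odds only improve as the risks get longer-term.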
But wait, there’s more! A whole industry has developed around mitigating exposure not to incidents, but to responsibility. We call it cyberdefense. Our industry, I’ve come to understand, primarily functions as a plausible deniability system for decision-makers, one which happens to create security as a byproduct of its operations. And boy, are they willing to pay for this service. You have a metric ton of products to choose from, consulting firms who devise exorbitant pay-to-win quadrants to prove you made the right purchase, and compliance frameworks where all those ticked boxes attest that you did your best. Cyberdefense is a business centered on accountability transfer: you don’t buy security but someone to blame when it all goes down. Yet the transfer is largely symbolic, since nobody is actually liable for whatever happens.
The accountability black hole
A beautiful sentence I heard somewhere reads: “the machine’s purpose is what the machine does”. Looking at the whole industry and being unable to find anyone responsible for anything anywhere, I find myself forced to conclude that the lack of accountability is the goal of the system. We built a black hole that shifts, attracts and distorts blame, dragging it inexorably and forevermore toward an event horizon it will never reach.
This is where I circle back to my introduction, about getting things wrong on AI. What I thought was the irreplaceable function of human beings in a machine-run world (being socially accountable) turns out to be the one variable we’ve collectively been trying to remove from the equation. It’s not that we need humans to supervise AIs for accountability purposes; it’s that we need AIs so that humans are not accountable anymore.
You see, accountability black holes are everywhere. Western societies are risk-averse and litigation-heavy, and the emergence of black holes is a natural systemic response. The financial markets reproduce those exact mechanisms, as does the insurance industry[2], and you don’t have to look too hard at the political realm to discover superstructures meant for dissolving any possible responsibility. It’s not just the cybersecurity world.
What do we do about this? First, we need regulation. The hot potato only gets thrown around because the risk creators are allowed to disclaim it in the first place. I get that securing software is a hard job, but I can’t help wondering what our industry would look like if vendors had much stronger incentives to ship secure products. After 50+ years of free-for-all, I’m willing to try something else. On the other hand, and perhaps counter-intuitively, I think it would be a good idea to cap legal costs and liability at reasonable levels. If the structural problem is the overwhelming danger of being found responsible for any mistake, perhaps we can collectively take a chill pill, admit that accidents happen, and not sue/fine each other out of existence at the first offense – in other words, punish consistently but reasonably.
Or we could go in the exact opposite direction and ratify once and for all that nobody will ever have to answer for anything related to computers. It’s unclear whether we’d end up any more secure, but at least we could stop wasting all that time, effort and money pretending otherwise.
Your move, decision-makers.
[1] At some point I also have to ask: are we using the cloud so we can leverage very complex technologies with huge overhead almost nobody really needs (looking at you Helm & Kubernetes), or are we choosing those complex technologies so we’re locked into using the cloud?
[2] Someone smart working for a software vendor explained to me how insurance companies typically cover direct damages but not operational ones (e.g., having IT systems unavailable for days). A long time ago, a representative from an insurance company also explained to me that the point of insurance is not to shift risk onto the insurer, but to spread it across the insurer’s customer base.
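That risk-spreading point is easy to see with a quick simulation. The loss probability and amounts below are made-up numbers, purely for illustration:

```python
import random

# Assume each customer independently faces a 1% chance per year of a $1M loss.
# Individually the outcome is all-or-nothing; pooled over many customers, the
# per-customer cost concentrates around the $10,000 expected loss.
def yearly_loss() -> int:
    return 1_000_000 if random.random() < 0.01 else 0

n_customers = 10_000
pool = sum(yearly_loss() for _ in range(n_customers))
print(f"per-customer cost once pooled: ${pool / n_customers:,.0f}")
# -> close to $10,000: the insurer spreads the risk, it doesn't absorb it
```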