Hundred Billion Dollar Infosec Question

A thought experiment: if someone gave you a hundred billion dollars to spend on improving information security, how would you spend it?

If someone gave you a hundred billion dollars to improve information security, how would you spend it? This is a thought experiment I decided to do after a Twitter conversation with Jeremiah Grossman and Gabe The Engineer. Jeremiah had tweeted that customers were set to spend $96 billion on cybersecurity in 2018 (per Gartner) while adversaries would spend very little to keep up. I asked if it might be better to give $100B to a hundred of the smartest infosec people we could gather up and let them try to attack the problem.

Gabe suggested that was actually a pretty interesting idea, at least in the abstract.

So here’s my attempt to enumerate what I’d do with $100B to improve information security. I’m not saying I’m one of the smartest people in infosec, by any means. I’m no more qualified to opine on this than anyone else who’s been doing infosec for a few decades; and less so than many.

But maybe this will be a springboard for throwing around ideas. Questions, suggestions, and politely worded constructive criticism are definitely appreciated. Disclaimer: These are my own opinions and don’t necessarily represent the thinking of my employers: the CERT Coordination Center, the Software Engineering Institute and Carnegie Mellon University. Also, Jeremiah and Gabe bear no responsibility for my ideas. :-)

When I first started thinking about this (I had an eight hour drive to think), I realized there are a lot of assumptions implicit in the question. So first, let’s make a few of them explicit:

One: I’m not planning to spend an additional $100B on the problem, on top of the $100B Gartner says we are already spending. The idea is to spend that money more efficiently. Which means we either stop spending money on some of the things we’re spending it on now, or we make those things way better.

Two: I’m going to focus on technical fixes. I’m not going to focus on things like mitigating risk via cybersecurity insurance. Similarly, I’m not going to focus on policy fixes. You could spend the $100B to lobby governments to implement super strict security standards, but you might not be successful (either with the lobbying or the standards.)

Three: I’m going to talk about how we could improve security over the next 5, 10 or 15 years. It seems pointless to discuss how we’d spend $100B to defend against today’s threats. That’s basically hindsight at this point.

The first thing I had to break down was, what does it mean to “improve information security”? I have a longstanding pet peeve that people often conflate “cybersecurity” with information security. This isn’t merely an aversion to the use of “cyber” as a marketing buzzword but a recognition that the security of information relies on more than just the security of the underlying computers, networks, and software.

The industry vaguely recognizes this — you often hear people talking about how it's ultimately the business risk, or the process, or the human factor that's important, rather than the technology. Yet the vast majority of spending on "information security" is on tools to protect those computers, networks, and software. This is a necessary but insufficient part of protecting the actual information. When I say security, I mean the overall security of information, not the security of the digital devices which are often used to store, process, and exchange it.

Also I’m talking about improving security globally, not just for one organization or group. A CISO might be content with improving their organization’s security to the point where attackers give up and go away. That’s like locking your house so robbers will go to your neighbor’s house. It might help you in the short term, but it won’t drive down the crime rate in your neighborhood.

For $100B, I feel like I should be improving everyone’s ability to control the confidentiality, integrity, and availability of their own data.

OK, someone just gave me an AmEx Black Card and a mandate to improve security.

What am I going to spend my newfound wealth on?

First, I would dump this paradigm of information security as a "war" between two sides. A war implies two sides trading control of a finite resource (land, cities, bridges, whatever). There aren't two sides in infosec; there are thousands. As a defender, your adversaries include almost every nation on Earth (possibly including your own), criminal groups, terrorists, hacktivists, corporations, and disgruntled employees, all of whom are also each other's adversaries. The territory you have to defend is amorphous: corporate networks, mobile devices, cloud providers, manufacturing or service delivery systems, products, suppliers, customers, public systems, and so on, any of which can become an adversary at any time.

“Cyberspace” is an ecosystem, and we should treat it as such. There are occasionally suggestions to approach security issues using biological, medical, or public health approaches. We even refer to viruses and anti-virus products. But we still can’t break out of the “defend the castle” mindset.

I’ll propose (although I’m sure others have already said this) that the key difference between our infosec status quo and the way biology deals with threats is that biological and ecological systems don’t try to stop 100% of adverse effects. They go all out to stop existential threats (e.g., a hungry lion) but have proportional responses to lesser threats (e.g., some inhaled dust.)

Look more closely at our own bodies. We have a many-layered defense system. Our bodies don’t attempt to keep everything out. Sure, there are some “border protections”, e.g. skin, that only allow certain traffic (food, air) through certain ingress points (mouth, nose). But our bodies don’t assume that anything that comes in should have free rein once inside! If something tastes or smells bad, we don’t eat it. If it gets past that point but is still bad, we might cough it up or, uh, otherwise expel it. But most powerful of all, we have an immune system that attacks anything that doesn’t belong (possibly supplemented by medicines.)

Contrast this to computer anti-virus software, which only attempts to eliminate things it can identify as bad, using signatures or basic anomaly detection. We need aggressive “cyber immune systems” that are corporate-wide if not Internet-wide. I don’t know exactly what those would look like, but fortunately I have a hundred billion imaginary dollars to sink into research.

We don’t need our computers to be 100% healthy any more than we need our bodies to (or our gardens, or our cattle herds). Most companies and home users probably already have adversaries using their compute resources, whether those adversaries are bitcoin miners, botnets, or nation states. Let’s build that into our equations rather than pretending we’ll ever have a clearly defined “enterprise” with authorized users “inside” and everyone else “outside”.

We also need to embrace artificial intelligence for security. The current method of “researcher finds a vul, reports it, a patch gets created eventually, and users may or may not ever apply the patch” just doesn’t scale. Identifying potential attacks and mitigating them in functional environments needs to be accelerated by orders of magnitude. Running a vulnerability scanner and a web scanner and sending the results to IT to patch isn’t fast enough. I’d like to see a machine-learning system that attempts to emulate attackers — something smarter than current vulnerability scanners. It should be able to learn from skilled red teamers. It should be able to try variations of attacks, re-write exploits, obfuscate payloads, etc.

Most importantly, it should let our equally smart patching system know about any problems it finds and have them patched in real time. In other words, rewriting binaries and updating running code in memory. There are some rudimentary (but still impressive) technologies that do these things a little bit. Our team has worked with the Mayhem system that won the DARPA Cyber Grand Challenge a few years ago. At a very high level, Mayhem attempts to intelligently find and patch vulnerabilities. It’s a great start, but it still only handles a small subset of vulnerabilities, architectures, etc.
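To make the "attacker emulator" idea slightly more concrete, here is a minimal, purely illustrative Python sketch. The `detector_blocks` function, the payload, and the mutations are toy stand-ins I made up for this example; a real system would learn mutations from skilled red teamers and test them against actual defenses, not a string match.

```python
import base64
import random

# Toy stand-in for the defense being probed. In practice this would be a WAF,
# an EDR sandbox, or a detection pipeline, not a substring check.
def detector_blocks(payload: str) -> bool:
    return "<script>" in payload.lower()

# A few simple obfuscating transformations an attacker emulator might try.
MUTATIONS = [
    lambda p: p.replace("<", "%3C").replace(">", "%3E"),  # URL-encode the brackets
    lambda p: p.replace("script", "scr\u200bipt"),        # split the keyword
    lambda p: base64.b64encode(p.encode()).decode(),      # wrap in an encoding layer
    lambda p: p.replace(" ", "/**/"),                     # comment padding
]

def search_for_evasions(seed: str, attempts: int = 200) -> list[str]:
    """Randomly compose mutations and keep the variants the detector misses."""
    evasions = []
    for _ in range(attempts):
        variant = seed
        for mutate in random.sample(MUTATIONS, k=random.randint(1, len(MUTATIONS))):
            variant = mutate(variant)
        if not detector_blocks(variant):
            evasions.append(variant)
    return evasions

if __name__ == "__main__":
    found = search_for_evasions("<script>alert(1)</script>")
    print(f"{len(found)} variants slipped past the toy detector")
```

Even this crude loop finds evasions against a naive detector; the version I'm imagining would feed every finding straight into the detection and patching side rather than into a report nobody reads.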

Finding vulnerabilities is my bailiwick, but the same level of automation and autonomy needs to be applied to detecting and responding to attacks. This speaks to the whole biological paradigm. Our system should be able to detect and classify anomalous behavior, decide how much of a threat it is, and decide if and how to respond, all without human intervention and at machine speeds.
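As a rough illustration of what "proportional response at machine speed" might look like, here is a small Python sketch. The telemetry features, thresholds, and response tiers are all invented for the example; a real deployment would train on actual host data and tune the cutoffs against labeled incidents.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-host telemetry: [bytes_out_per_minute, failed_logins, new_processes]
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[5_000, 1, 3], scale=[1_000, 1, 1], size=(1_000, 3))

# Learn what "normal" looks like from the baseline window.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def respond(event: np.ndarray) -> str:
    """Map an anomaly score to a graduated, immune-system-style response."""
    score = model.decision_function(event.reshape(1, -1))[0]  # positive = looks normal
    # The thresholds below are arbitrary; a real system would tune them.
    if score > 0:
        return "observe"                 # normal variation: do nothing
    if score > -0.1:
        return "log and rate-limit"      # mildly suspicious: slow it down, keep evidence
    return "isolate host and alert"      # severe outlier: contain first, investigate after

print(respond(np.array([4_800.0, 0.0, 2.0])))      # routine-looking traffic
print(respond(np.array([250_000.0, 40.0, 90.0])))  # looks a lot like exfiltration
```

The point isn't the particular model; it's that the decision and the containment action happen in the same loop, at machine speed, with humans reviewing afterward rather than approving beforehand.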

The same goes for other security functions. Development and testing need to be automated and make much greater use of techniques like model checking, concolic testing, and directed fuzzing. We need high-fidelity simulated environments, both for training people and for testing products and patches. For example, car manufacturers should be able to model or simulate the application of a software patch as well as they can model the physics of traction control.
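To ground the fuzzing point, here is a stripped-down mutational fuzzing loop in Python. The `parse_record` target and its planted divide-by-zero bug are invented for the example; real directed fuzzers add coverage feedback and steer toward specific code paths instead of mutating blindly.

```python
import random

def parse_record(data: bytes) -> dict:
    """Toy parser standing in for the code under test (a file format,
    a protocol handler, firmware update logic, etc.)."""
    if len(data) < 4:
        raise ValueError("too short")
    length, kind = data[0], data[1]
    if length > len(data) - 2:
        raise ValueError("bad length")
    payload = data[2:2 + length]
    # Planted bug: divides by a field the input controls (kind == 0 crashes).
    density = len(payload) / kind
    return {"kind": kind, "density": density, "payload": payload}

def mutate(seed: bytes) -> bytes:
    """Apply a handful of random byte-level edits to a known-good input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        roll = random.random()
        if roll < 0.5 and data:
            data[random.randrange(len(data))] = random.randrange(256)           # overwrite a byte
        elif roll < 0.8:
            data.insert(random.randrange(len(data) + 1), random.randrange(256))  # insert a byte
        elif data:
            del data[random.randrange(len(data))]                                 # drop a byte
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 10_000) -> list[bytes]:
    """Only unexpected exceptions count as findings; clean rejections are fine."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse_record(candidate)
        except ValueError:
            pass                         # expected rejection of malformed input
        except Exception:
            crashes.append(candidate)    # anything else is a potential bug
    return crashes

if __name__ == "__main__":
    print(f"found {len(fuzz(b'\x03Aabc'))} crashing inputs")
```

This kind of loop is cheap enough to run continuously in a build pipeline, which is the level of automation I'm arguing development and testing need across the board.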

And while we’re at it, these biologically-inspired, artificially intelligent security systems need to be able to interact with each other in real time. Not just exchange “indicators” or “signatures”, either. Security systems at multiple companies should be able to join together like Voltron if they need to, because of a systemic attack on the Internet or something.

Of course, plenty of people are already trying to apply machine learning to security. I see two shortcomings in the efforts I’ve seen so far. One is that they’re often aimed at performing existing security tasks more efficiently. That is useful and necessary, but there may be completely new tasks we could apply these techniques to, rather than just doing the same old things faster. The second is that they usually use ML primarily for pattern matching and data analysis. The results still need to be acted on. They need to be more autonomous, to use the buzzword.

What I’m suggesting is moving beyond just ML to actual artificial intelligence. But the AI doesn’t necessarily need to replicate *human* intelligence. There are lots of other forms of “intelligence” (defining it loosely) that could be useful. The aforementioned intelligence of our immune systems. The intelligence of RNA, bacteria, or ants. (Some of the bee-swarming stuff is interesting.) Even the emergent behaviors of entire ecosystems, from predatory patterns to evolution to weather resistance (a tree bends rather than trying to avoid any effects from the wind.)

Finally, we need to revisit the architecture of the Internet and how we do authentication (basically through SSL certificates.) I feel like we could create a higher-security version of the Internet for things like financial transactions, communications between safety critical systems, etc. There’s really no need for those to go over the same network as tweets and music and cat videos. There are precedents for this: classified networks, the network that manages the energy grid, etc.

I am going to make one public policy suggestion, based on the idea of treating information security as a biological and/or public health problem rather than a computer problem. There should be some sort of international organization akin to the World Health Organization — or to national agencies like the Centers for Disease Control and Prevention in the U.S. — to monitor the health of the Internet as a whole and, if necessary, take action against systemic attacks.

My employer, the CERT Coordination Center, was envisioned as such when it was started in 1988, but the scale of the Internet has far outpaced what one small research lab can (or should) police. There needs to be a “meta-CERT” to coordinate all the national CERTs, or a U.N. body, or something. I’m not saying it should have control of our AI security systems; just that it can coordinate between decentralized nodes when necessary. As we’ve learned recently, coordinating between a lot of different stakeholders is hard.

So there’s my two cents. Hopefully this spurred some thoughts in your head, even if you totally disagree. Feedback is welcome! I don’t pretend to have all the answers. I mean, the actual first thing I’d do with all that money is convene a workshop and invite lots of people smarter than me to discuss the problem.
