Cybersecurity and AI – two words guaranteed to create a headache. Within the security community, AI is either dismissed like vegan turkey or praised as the industry's magical elixir. Either way, I think we can all agree that the level of conversation is underwhelming. There are already plenty of interesting takes on how AI might apply to security. In this blog, I want to instead explore why the discussion often disappoints and how it might be improved.

I am neither an AI evangelist nor a skeptic. Yet, what is immediately striking is just how polarised the discussion has become. AI is typically characterised as either the snake oil or silver bullet of cyber security. The threats and opportunities afforded by AI are rapidly inflated, only for such fanciful claims to then be outright dismissed.

Rarely do we see a measured middle path.

Silver bullet advocates readily stoke fears of AI-enabled offense or overpromise on the potential of AI in defensive security. Of course, not all AI vendors push this message, yet it remains worryingly prolific. Within such a narrative, AI is portrayed as both the primary threat to, and the ultimate solution for, cyber security. Despite their boldness, these hyperbolic claims are rarely backed up with any real evidence.

This message is widely mocked within the security community, raising the question of why it continues to be pushed. Surely getting buy-in from security professionals is vital to securing business? However, perhaps this doesn't matter as much as it might seem (for now at least).

AI is the buzzword of the day – a term bandied about as society's next disruptor. It is an issue not just of interest to infosec professionals, but one that executives from a range of sectors read about in The Economist and Financial Times. Most business leaders are convinced that embracing AI is vital to their organization's long-term success (and often for good reason). For AI-focused cyber security vendors, vague language, hyped-up claims and phoney solutions might all represent rational and highly effective marketing strategies if the pitch is aimed at a c-suite demographic. Executives are clearly not stupid, yet they are understandably often ignorant of the finer technical details.

Cyber security is a harsh environment. The average CISO tenure is short. Crucially, results talk; products must ultimately deliver. A strategy that overpromises and fails to perform is typically only effective in the short term. Here, I see parallels between AI in the context of cyber security and where the threat intelligence industry was a few years ago. Like many AI cyber security solutions sold today, the well-publicised Norse 'pew pew map' was a flashy product that made good eye candy for the c-suite, yet a solution that was ultimately hollow.

The issue with such short-term solutions is that they risk discrediting the industry at large. Despite the progress made within threat intelligence, there is still a perception that too many products simply vomit meaningless and unactionable data – often based on the phoney solutions promoted a few years ago. Assuming AI will continue to have meaningful application to cyber security, today's marketing hype could create long-term skepticism that will inhibit its application and progress.

I want to stay as vendor-neutral as possible in my writing, yet I do think Dragos is worth highlighting as a vendor that has shown how a measured take on security issues is not mutually exclusive with success. By securing critical infrastructure, Dragos operates in a space primed for hype (where warnings of cyber armageddon and an impending disaster scenario could feasibly still drive business). Yet, by moving beyond such rhetoric and instead offering sober analysis that demystifies fear-mongering, they have developed a reputation as a real source of insight.

I would like to see AI vendors learn from these lessons. The cyber security market is increasingly maturing and bogus solutions are simply not sustainable. AI vendors now have an opportunity to offer more precise, nuanced and testable claims. If AI really can help adversaries in offensive campaigns (as has been claimed), then which parts of the kill chain would this relate to and where is the evidence?

If AI could enhance attribution, then which specific threat intelligence processes would it complement? Where does AI-enabled anomaly detection fit in (or not) with broader non-AI threat detection approaches? Of course, AI skeptics also have to take more responsibility. Although some of the AI hype in cyber security is clearly ridiculous, the outright dismissive sentiments and often mocking tone of those who seem to doubt AI's application in any form are also unhelpful. Jokes mocking AI's application to cyber security (often with blockchain or quantum thrown in for good measure) already feel tired.

AI vendors that discuss both the opportunities, and crucially the limits, of their technology, have the potential to establish themselves as meaningful players that add real value. Moving beyond the hype is not only good for the industry – I am convinced there is now a compelling business case to do so. Crucially, AI can escape the silver bullet / snake oil dichotomy. Moving towards a more measured and mature conversation is in everyone's interest.

[Image: artistic interpretation of an AI brain, created by Gleb Kuznetsov.]