Artificial intelligence in cybersecurity foreshadows a change both in hiring practices and in the nature of the cyberattacks we defend against.
Nearly half of the enterprises surveyed in a 2018 joint study were deploying Artificial Intelligence (AI)-driven security automation, and a further 38 percent planned to implement AI before the end of 2019. It's a tough call for executives who would prefer to maintain control over every aspect of their organization. However, the reasons for introducing AI to bolster cybersecurity protections are compelling.
Corporations face an extraordinary dearth of cybersecurity professionals in the job market. They also must defend against cyberattacks that are growing in volume, frequency, and sophistication. The convergence of these issues with the growing sophistication of AI makes introducing the technology a logical decision for companies. However, the choice has a downside.
The same technology is available to the bad guys. Look forward, then, to a cyber arms race that will not only pit hackers against businesses but nation-states against other countries.
Not Enough Humans to Go Around
Every organization needs to protect its information by deploying and implementing cybersecurity solutions and practices. However, all of these tasks require people to carry them out. The shortage of suitable skills to install, configure, and maintain security platforms is a significant concern for many enterprises, and it leaves organizations facing increased risk as cyber threats grow in both volume and sophistication.
There were over one million open cybersecurity positions in 2018, and more than 30 percent of them went unfilled. Experts expect that by 2022 the industry will suffer over 1.8 million vacant positions; the global shortage of cybersecurity professionals already stands at around 3 million.
The Impact of the Cybersecurity Skills Shortage
The dearth of qualified cybersecurity staff in the job market is not the fault of the HR department. The people are not there.
Unfortunately, the lack of people to fill these roles has massive implications for the IT infrastructures that support businesses. Current IT employees must assume heavier workloads to fill the cybersecurity skills vacuum, and however skillful they are, many must work overtime to get every task done. In cybersecurity, oversights can lead to dramatic consequences, as when Tesla left its cloud security settings deactivated. The staff responsible for configuring the company's AWS security infrastructure left highly sensitive records about the company's car development programs exposed, and hackers used Tesla's AWS infrastructure to mine cryptocurrency. Another likely consequence of overwork is employee burnout.
Nearly 40 percent of cybersecurity professionals say the skills shortage has led to high burnout rates and staff attrition.
Another concern is that current IT staff do not have adequate time to master the growing complexity of cybersecurity infrastructure. The average enterprise has 60 to 70 cybersecurity solutions to manage, on top of the myriad other infrastructure items they must look after, such as network operating systems, software application development, and end-user support.
The squeeze on their time means they cannot use the solutions their enterprises have purchased to the fullest. The strain on IT departments drives up the cost of doing business, and underutilized cybersecurity technology increases enterprise risk.
Since IT has become a key to strategic advantage in markets, enterprises run the risk of not keeping up with technology developments. On the one hand, companies need technology to remain close to their customers and their demands. On the other hand, technology has become an enabler that facilitates the development of new products and services. IT departments, however, are finding themselves in a bind.
The lack of human resources in the IT department leaves the cybersecurity function woefully understaffed. Businesses also find themselves falling behind the competition and losing market share because their technology development resources are badly overstretched.
Another risk to enterprises is that they have to hire more junior staff into IT departments to augment current IT teams. Hiring, training, and retention become expensive investments for companies as they attempt to maintain cyber fortifications. Bringing junior staff up to speed and keeping them on board in a competitive job market impacts the organization's overall return on its IT investment.
Yet 62 percent of cybersecurity professionals feel their organization does not provide sufficient training to its cybersecurity staff.
Many organizations are now investing that cybersecurity training budget, and more besides, in AI software.
In AI We Trust
At face value, AI is a perfect fit for cybersecurity defense. AI requires lots of data to operate effectively. The logs of activity that cybersecurity and network operating systems generate are just the sort of environment in which AI flourishes.
Essentially, AI mimics the way humans analyze patterns in data, only with far more data, far greater speed, and greater accuracy than professionals can achieve. AI also does not burn out or require time off for further training, and it can establish an operational baseline far more efficiently than humans can.
The AI takes a snapshot of the computer network at its most quiescent, a period when, perhaps, no one is working on the system internally and the network is just "humming along." It then takes another snapshot while the network supports the day-to-day activity of users, free of attacks. From millions of such network interactions, the AI builds a baseline of the network's natural operating state.
AI engineers then feed the AI the past "pathologies" of malware seen "in the wild." The AI can superimpose that malware activity onto a topology of network activity to see where potential vulnerabilities in the network lie. Some AI may even be able to probe the system preemptively for weaknesses that escape human inspection.
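The idea of overlaying known malware "pathologies" onto observed activity can be sketched in a few lines. This is a minimal, illustrative example only: the signature attributes, port numbers, and event fields are invented for the sketch and do not come from any real threat-intelligence feed or vendor product.

```python
"""Hedged sketch: matching an observed network event against known
malware behavior patterns ("pathologies" seen in the wild)."""

# Illustrative signatures; real threat intelligence is far richer.
MALWARE_SIGNATURES = {
    "cryptominer": {"outbound_port": 3333, "cpu_spike": True},
    "data_exfil":  {"outbound_port": 443, "large_upload": True},
}

def match_signatures(event, signatures=MALWARE_SIGNATURES):
    """Return the names of signatures whose every attribute matches the event."""
    hits = []
    for name, signature in signatures.items():
        if all(event.get(key) == value for key, value in signature.items()):
            hits.append(name)
    return hits

# A host opening a connection on a port associated with mining pools
# while its CPU spikes matches the cryptominer pattern.
event = {"outbound_port": 3333, "cpu_spike": True, "host": "build-server"}
print(match_signatures(event))  # → ['cryptominer']
```

In practice, systems of this kind weigh partial matches and behavioral similarity rather than requiring exact attribute equality, which is what lets an AI generalize beyond previously catalogued malware.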
When the AI detects activity on the network outside the tolerances set on that baseline, it alerts staff that an intrusion may be in progress. The procedure mirrors the way humans work with standard Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS); both rely on the same reactive, signature-based model.
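The baseline-and-tolerance mechanism described above can be sketched as a simple statistical check. The metric names, sample values, and the three-standard-deviation tolerance below are assumptions made for illustration, not a description of any actual product's algorithm.

```python
"""Hedged sketch: build a baseline from normal traffic, then flag
metrics that fall outside a tolerance around that baseline."""
import statistics

def build_baseline(samples):
    """Compute a per-metric baseline (mean, standard deviation)
    from observations of normal network activity."""
    baseline = {}
    for metric in samples[0]:
        values = [sample[metric] for sample in samples]
        baseline[metric] = (statistics.mean(values), statistics.pstdev(values))
    return baseline

def detect_anomalies(baseline, observation, tolerance=3.0):
    """Flag any metric deviating from its baseline mean by more
    than `tolerance` standard deviations."""
    alerts = []
    for metric, value in observation.items():
        mean, stdev = baseline[metric]
        if stdev == 0:
            continue  # no variation observed; skip rather than divide by zero
        if abs(value - mean) / stdev > tolerance:
            alerts.append(metric)
    return alerts

# Illustrative snapshots of normal activity: connections per minute
# and outbound bytes.
normal = [
    {"connections": 100, "bytes_out": 5_000},
    {"connections": 110, "bytes_out": 5_200},
    {"connections": 90,  "bytes_out": 4_800},
    {"connections": 105, "bytes_out": 5_100},
]
baseline = build_baseline(normal)

# A sudden spike in outbound data, as in an exfiltration attempt.
print(detect_anomalies(baseline, {"connections": 102, "bytes_out": 50_000}))
# → ['bytes_out']
```

Real AI-driven tools learn far richer models than a per-metric mean and deviation, but the principle is the same: characterize normal operation, then alert on activity outside tolerance.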
With the AI taking over such repetitive tasks, cybersecurity staff can choose, depending on the nature of the intrusion, either to allow the invasion to continue or to end it before it burrows deeply into the network. If it infiltrates too deeply, it may infect end-user systems and central databases that contain vital information.
Sometimes, however, infosec specialists want to watch the malware's activity. Their observations may yield vital information about the origin and nature of the probe, and the intelligence they gather about the infection's behavior can help inform the broader infosec community about the nature of the attack. It may point to whether the intrusion fits the profile of a nation-state attacker like North Korea, or instead that of an apolitical criminal ring.
The Potential Downsides of AI
While AI in cybersecurity may initially backfill job vacancies in the sector, its broader use could bring greater homogeneity to network defenses. Algorithms may be able to sift through far more data than humans, but they are merely pre-programmed sets of instructions. They cannot "think outside the box" to detect vulnerabilities that human intuition might identify.
Also, AI-driven solutions will come from a finite number of vendors whose teams tend to think similarly. Eventually, as happens in unregulated marketplaces, vendors will consolidate, and the similar algorithmic approaches of a few makers will find themselves outnumbered by the diversity and creativity of a multitude of bad actors.
And though automation can significantly increase efficiency, it can also create blind spots in an organization's cybersecurity defenses. Automation may overlook potential threats, or generate and react to false positives. Infosec professionals worry that hackers could defeat AI algorithms by targeting the data the machines learn from. By manipulating machine learning algorithms, hackers can either subvert security or leverage it to create a denial-of-service attack.
Hackers may also use AI themselves to attack AI cyberdefenses head-on. Black hat AI could search for anomalies in network defenses in microseconds. AI-driven malware could use cryptographic algorithms to create attack patterns that defensive AIs cannot effectively counter. Further, attack AIs may generate false positives that trick corporate defenses into perceiving a network issue where no cyberattack actually exists.
The concerns about leaving AI to its corporate devices become magnified when infosec professionals consider that nation-states may wield cyber AI weapons. Well-known internet malcontents like China, Iran, and Russia can put their military budgets and resources toward AI dedicated to obliterating entire markets rather than just individual corporations. The United States itself may find one of its AI-tipped hacking tools stolen and sold to the highest bidder, or released into the wild.
Few would dispute that AI can increase the efficiency of cyberdefense. However, organizations implementing these solutions must stay on their guard. They should not rely on a single product or algorithm to protect their networks or data. Ultimately, they will find they must always keep humans in the loop.
The Human Touch
The shortage of cybersecurity skills is a challenge for every enterprise. The concern is that non-technical executives may see AI as a cybersecurity silver bullet. They may choose to cut cybersecurity staff or services even further, believing they can leave network defenses to AI. Such a transition could lead to infosec operations without humans at all.
Instead, organizations should take a synergistic approach to cybersecurity. They can institute AI defenses to protect network perimeter activity, while humans remain in the loop to catch false positives, proactively counter AI blind spots, and handle remediation and forensics.
Further, organizations and governments should form partnerships to tackle the infosec staffing shortage. Automation alone will not stop bad actors from misbehaving. Instead, AI, coupled with public-private training and certification initiatives, will benefit all of society — not just AI vendors.