
Every few decades, the world goes through an “AI spring,” and we are in the middle of one right now. With accelerating progress in AI research and the arrival of emerging capabilities exemplified by tools like ChatGPT, hopes are surging that AI applications will soon help organizations to detect threats in their IT environment, prevent data breaches, and block incoming attacks with a much higher success rate.
But nothing is ever that simple. First, AI tools are part of the future for cyber defenders and malicious actors alike. As long as that is true, human expertise will always be the deciding factor in who wins and loses. Second, with IT environments increasing in complexity, expertise is needed to determine where AI can make a real difference, and where it is more of a liability than an asset.
In a previous article, we explained how VPNs can give organizations a false sense of security – not because they are not useful, but because their role in a larger perimeter security strategy is misunderstood. In this article, we will explain why the same is true for nearly any tool or set of tools, however “smart” they may be. But first let us set the scene.
Why AI-Driven Security is Desirable
In today’s cyber landscape, the allure of AI-powered tools is not hard to understand. In Q1 of 2023, cyberattacks rose by 7% over Q1 2022, with organizations facing an average of 2,057 attacks per week. At the same time, organizations are struggling to find help: today, the global cybersecurity workforce gap stands at 3.4 million, with nearly 700,000 unfilled cyber positions in the U.S. alone.
Worst of all, global cyber actors – who are always opportunistic in their pursuit of new vulnerabilities and attack vectors – are already leveraging AI for social engineering and targeted attacks. According to a study by the Cloud Security Alliance, free tools like ChatGPT can be used to find attack points, gain unauthorized access to target networks, conduct reconnaissance and develop malicious code. That does not even count the specialized AI-powered toolkits being passed around on the Dark Web.
AI-Driven Cybersecurity is Already Here
Clearly, organizations need all the help they can get. But none of these issues are entirely new, and AI-powered solutions are already being employed across many organizations to address them. These include:
- Security Orchestration, Automation and Response (SOAR) – SOAR platforms bring together data about security threats from multiple systems, offering automation for repetitive security operations center (SOC) processes, including vulnerability scanning, auditing and log analysis. SOAR platforms increasingly offer AI features to analyze information, prioritize threats, and suggest – or even execute – remedial actions.
- User and Entity Behavior Analytics (UEBA) – UEBA tools focus on user and entity behavior, using algorithms to establish a baseline for normal activities and identify anomalous ones (a simplified sketch of this baseline idea follows the list). Like SOAR, UEBA is often augmented with AI to generate better risk scores and flag potential threats more reliably.
- Extended Detection and Response (XDR) – as an evolution of endpoint detection and response (EDR) systems, XDR brings threat detection and response functions to systems throughout your organization, providing a clearer picture of your IT environment and developing attacks. Like SOAR and UEBA, XDR tools are increasingly integrating AI-driven functionality.
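To make the baseline idea behind UEBA concrete, here is a deliberately minimal sketch in Python. The login counts, z-score model and alert threshold are all illustrative assumptions – commercial UEBA products model far richer signals for every user and entity:

```python
from statistics import mean, stdev

# Hypothetical training data: hourly login counts observed for one user
# during a normal baseline window.
baseline_logins = [3, 4, 2, 5, 3, 4, 3, 5, 4, 2, 3, 4]

mu = mean(baseline_logins)
sigma = stdev(baseline_logins)

def risk_score(observed_logins: int) -> float:
    """How many standard deviations an observation sits from the
    user's learned baseline (a simple z-score)."""
    return abs(observed_logins - mu) / sigma

THRESHOLD = 3.0  # an assumed cutoff; tuning it is a human decision

for observed in (4, 6, 40):
    score = risk_score(observed)
    status = "ANOMALOUS" if score > THRESHOLD else "normal"
    print(f"{observed:3d} logins -> z = {score:.1f} ({status})")
```

Even in this toy version, choosing the threshold is a judgment call – exactly the kind of decision that keeps human analysts in the loop.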
But despite widespread deployment of SOAR, UEBA, XDR and other emerging cybersecurity products, cyber incidents have not decreased, and the need for human talent has not diminished. This picture is unlikely to change any time soon for many reasons. Here are just a few:
1. Much Assembly Required
It is often taken for granted that AI will reduce reliance on human talent – but the contrary is just as likely. The more tools organizations introduce, the more talent is needed to configure them safely, monitor their performance, and delineate their role in the midst of changing trends and priorities.
Cyber defenders already rely on a plethora of tools – but when deployed improperly, those tools create as many problems as they solve. This is true in the context of cloud, endpoint detection, VPNs, IoT, and more. There is every reason to believe the same will be true for AI-driven tools, however smart they may be. At a minimum, poorly tuned rules will lead to overfitting (too many false alarms) or underfitting (too many threats ignored).
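To see why tuning matters, consider a deliberately oversimplified sketch in Python. Both “rules” below are invented for illustration – one fit too tightly to a baseline of observed activity, the other too loosely to a single known-bad name:

```python
# A toy illustration of the two failure modes described above. All
# process names and events are hypothetical.
observed_benign = {"chrome.exe", "outlook.exe", "excel.exe"}

def overfit_verdict(process: str) -> str:
    # Overfit: anything not seen during the baseline window is flagged,
    # so routine-but-new software buries analysts in false alarms.
    return "ALERT" if process not in observed_benign else "ok"

def underfit_verdict(process: str) -> str:
    # Underfit: only one memorized bad name is flagged, so a trivially
    # renamed attack tool is ignored entirely.
    return "ALERT" if process == "mimikatz.exe" else "ok"

for process in ["chrome.exe", "slack.exe", "mimikatz.exe", "m1mikatz.exe"]:
    print(f"{process:14} overfit={overfit_verdict(process):5} "
          f"underfit={underfit_verdict(process)}")
```

Running it shows the overfit rule alerting on harmless new software while the underfit rule waves a renamed attack tool straight through – the trade-off that human tuning exists to manage.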
2. AI Has Limitations
Recent progress in AI has given many the impression that there is no upper limit on what AI applications can achieve. But until the arrival of artificial general intelligence, or AGI (at which point organizations will have bigger problems on their hands than cyber actors), AI solutions are necessarily narrow in scope, which limits their effectiveness against human adversaries.
For now, any AI-driven solution can only integrate with software when the proper APIs are in place. It can only detect and respond to threats it has been trained to anticipate. It can only operate within a predefined realm of problems and responses.
With cyber actors devising new attack strategies around the clock and adopting AI as rapidly as cyber defenders, the measure of a cybersecurity program will never be technology alone: it will be creativity, expertise, and an understanding of factors ranging from organization-specific issues to the way hackers think.
3. Cybersecurity is a Human Issue
While cyber actors often aim at system intrusion and penetration of network defenses, digital exploits are nearly always downstream from human exploits. According to Deloitte, more than 90% of attacks begin with a phishing email. This is just one of many ways that malicious actors manipulate and deceive your employees into providing them with a foothold – whether that takes the form of credentials, malicious downloads, or sensitive data.
Even now, AI’s role as a hacking tool is primarily confined to the creation of personalized phishing campaigns and social media messages. While AI can help organizations identify and flag malicious messages, it will not replace the cyber training and awareness that help your employees avoid the mistakes that imperil your sensitive data and assets.
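As a toy illustration of what automated flagging can – and cannot – do, here is a hypothetical keyword-based scorer in Python. The indicator phrases, weights and threshold are invented for illustration; real email-security products rely on trained models, sender reputation and many other signals, and none of them can stop a user who decides to click anyway:

```python
# Hypothetical indicator phrases and weights, invented for illustration.
INDICATORS = {
    "urgent": 2.0,
    "verify your account": 3.0,
    "wire transfer": 3.0,
    "click here": 2.0,
}
FLAG_THRESHOLD = 4.0  # an assumed cutoff, not a vendor default

def phishing_score(message: str) -> float:
    """Sum the weights of every indicator phrase found in the message."""
    text = message.lower()
    return sum(w for phrase, w in INDICATORS.items() if phrase in text)

email = ("URGENT: please verify your account and click here "
         "to confirm the pending wire transfer.")

score = phishing_score(email)
verdict = "FLAG for review" if score >= FLAG_THRESHOLD else "deliver"
print(f"score={score:.1f} -> {verdict}")
```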
Beware of False Promises
As with every new trend, vendors have been quick to jump on the AI bandwagon, offering AI features and promising the moon with them. Often, they exploit the ambiguity of the term “AI”, selling products that do not leverage machine learning (ML) models or any other breakthrough technologies associated with the current AI spring.
But even when they do, organizations must be wary of believing these tools provide a level of unsupervised protection beyond what their existing toolsets already offer. They must resist complacency and situate any new acquisition within a larger strategy guided by human expertise and an awareness of their unique needs.
Securicon provides tailored cybersecurity assessments with planning and implementation for secure AI-driven capabilities. Our team is composed of veterans from the U.S. security community, including the DoD, DHS and U.S. Cyber Command. In addition to providing gap analysis, compliance consulting, assessment support and more, we have the expertise to evaluate emerging cybersecurity solutions and apply them within your IT environment. To learn how we can help you, contact us today.