When implemented properly, artificial intelligence (AI) is a vital tool for cybersecurity. Advanced AI tools can analyze massive amounts of data to detect patterns indicative of a cyber threat, spot unusual behaviors and restrict unauthorized access to systems, help prioritize risk, and rapidly flag likely malware and intrusions before they take hold.

With appropriate governance, AI systems can be an engine for security automation, freeing up employees' time and resources by handling repetitive tasks. As a result, new and exciting AI tools are popping up everywhere, including systems that can better secure us.

AI is indeed good news – an additional arrow in the cyber defense quiver.

But AI is equally terrifying: a well-known AI system combined with publicly available information can be used all too effectively to identify attack vectors. Research is underway to turn those same systems back toward mitigating and resolving vulnerabilities, pushing us into a never-ending cycle.

A recent article reporting on findings from security researchers who used an advanced LLM to hack autonomously makes this point clear. We are just beginning to objectively assess where AI works well, where it lags, and where the risks of widespread use lie.

As widely reported earlier this year, a team of researchers released a paper saying they'd been able to use GPT-4 to autonomously hack one-day (or N-day) vulnerabilities – what the researchers refer to as “security flaws that are already known, but for which a fix hasn't yet been released.” The researchers, utilizing the Common Vulnerabilities and Exposures (CVE) list, found that GPT-4 was able to exploit 87% of critical-severity CVEs on its own. An underground market for malicious LLM tools already exists. While these tools typically don’t match the skills of an advanced malicious actor or red-teamer, they’re advancing quickly.

It has been generally assumed – and confirmed in testing – that AI systems capable of exploiting N-day vulnerabilities perform poorly in the zero-day setting: finding new exploits is harder than using known ones. But there has always been the question of whether a more complex AI system could exploit zero-day vulnerabilities.

Then, in early June, the same group of researchers released a follow-up paper saying they'd been able to hack vulnerabilities that aren't yet known – that is, zero-day vulnerabilities.

Instead of relying on a single large language model (LLM), the researchers used a “planning agent” that oversees the entire process and launches multiple task-specific “subagents.” Analogous to a boss and their subordinates, the planning agent coordinates through a “managing agent” that delegates work to each “expert subagent,” reducing the burden on any single agent for a task it might struggle with. Much as with human teams, dividing the work among specialists tends to produce a better result.
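To make that structure concrete, here is a minimal, hypothetical sketch (in Python) of the hierarchical pattern described above: a planning agent that decides what to try, a managing agent that routes work, and task-specific expert subagents. This is not the researchers' code; every class, function, and task category is illustrative, and the “probes” are inert stubs rather than real LLM calls or exploit tooling.

```python
# Hypothetical sketch of a hierarchical agent pattern: a planning agent
# explores, a managing agent dispatches, and expert subagents each handle
# one class of task. Names and logic are illustrative only.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Task:
    """A unit of work identified by the planning agent."""
    kind: str     # e.g. "sqli" or "xss" -- hypothetical task categories
    target: str   # e.g. an endpoint under authorized test


class ExpertSubagent:
    """A task-specific agent that only probes one class of flaw."""

    def __init__(self, kind: str, probe: Callable[[str], bool]):
        self.kind = kind
        self.probe = probe  # stand-in for an LLM-driven tool loop

    def run(self, task: Task) -> bool:
        return self.probe(task.target)


class ManagingAgent:
    """Routes each task to the matching expert, so no single agent
    has to handle work outside its specialty."""

    def __init__(self, experts: list[ExpertSubagent]):
        self.experts = {e.kind: e for e in experts}

    def dispatch(self, task: Task) -> bool:
        expert = self.experts.get(task.kind)
        return expert.run(task) if expert else False


class PlanningAgent:
    """Oversees the whole process: decides which tasks to attempt
    and hands them to the managing agent."""

    def __init__(self, manager: ManagingAgent):
        self.manager = manager

    def explore(self, tasks: list[Task]) -> list[Task]:
        # In the real system this step would be LLM-driven exploration;
        # here we simply forward a fixed task list.
        return [t for t in tasks if self.manager.dispatch(t)]


if __name__ == "__main__":
    # Dummy probes that always return False -- placeholders for real tooling.
    experts = [
        ExpertSubagent("sqli", lambda target: False),
        ExpertSubagent("xss", lambda target: False),
    ]
    planner = PlanningAgent(ManagingAgent(experts))
    findings = planner.explore([Task("sqli", "https://example.test/login")])
    print(f"confirmed findings: {len(findings)}")
```

The point of the structure is the delegation: the planning agent never has to be good at every task, only at deciding what to try next and who should try it.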

The researchers found that this approach “can hack over half of the [zero day] vulnerabilities in our benchmark, compared to 0% for open-source vulnerability scanners and 20% for our previous agents (without the CVE description).”  An outcome like this gives any cyber professional pause.

What do we take away from this research? First, testing AI systems in a simple context – in the chatbot setting, as the original GPT-4 safety assessment did, or with a single LLM – simply isn’t sufficient. Yes, such testing is one tool for assessing cyber defense and software vulnerability, but it isn’t enough to gauge the risk one faces.

Second, more public research is essential to understand and monitor a diverse array of AI systems and their potential – for good and bad. Researchers at Cornell University, for example, found that underground exploitation of LLMs for malicious services – i.e., Malla – is on the “uptick, amplifying the cyber threat landscape and posing questions about the trustworthiness of LLM technologies. However, there has been little effort to understand this new cybercrime, in terms of its magnitude, impact, and techniques.”

No doubt, nefarious actors – whether lone hackers or well-staffed and resourced nation states – have begun developing and utilizing more complex AI systems like those used by these researchers. It is imperative that this kind of research be prioritized by cyber professionals, academics, and government.

NIST recently launched a project called Assessing Risks and Impacts of AI (ARIA) for exactly this purpose: a pilot to more scientifically assess the risks and benefits of LLMs. Its findings will inform the work of the U.S. AI Safety Institute at NIST.

There are also other efforts underway – including those stemming from the White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. For example, DoD and DHS are working together to evaluate whether AI can be used to find and remediate vulnerabilities in U.S. government systems. The Department of Energy has also launched a program to develop public testbeds for AI. 

Policymakers and industry groups are working to educate stakeholders about these tools, how to protect AI systems, and how to build trust and resiliency in our digital infrastructure. As this space evolves, our work at the Center will, too. We’re working on several new projects around AI and security. If you’re interested, we’d love to hear from you and understand what you’re seeing in this space, and what gaps we need to fill to ensure that AI can protect us.

Mark Bohannon is a Fellow with the Center for Cybersecurity Policy & Law. 

Heather West is a Senior Director at the Center.

