AI Hacker Exploits Known Vulnerabilities

Are we on the brink of a cybersecurity dystopia, where threat actors can launch AI programs to attack and compromise their targets automatically? If you’re in cybersecurity, you’ve likely seen recent reporting about a study showing that AI, specifically GPT-4, could autonomously exploit various vulnerabilities. Such capabilities signal the potential for a massive shift in cybersecurity defenses and demand a deeper understanding of their impacts. Let’s cut through the sensationalist reporting and uncover precisely what the study showed, why this isn’t the end of the world, and what we actually need to worry about.

Overview

The researchers set out to test 10 different large language models (LLMs) on their ability to exploit vulnerabilities, both with and without information from the Common Vulnerabilities and Exposures (CVE) system. To provide a testbed, they recreated vulnerable environments in a sandbox and paired each model with a set of tools that allowed it to interact with the environment.
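To make that setup concrete, here is a minimal sketch of what an “LLM paired with tools” agent can look like in practice. This is not the researchers’ actual harness, which they did not publish; it simply wires a chat model to a single sandboxed shell tool using OpenAI’s standard function-calling interface. The tool name, prompts, and step limit here are illustrative assumptions.

```python
import json
import subprocess

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def run_in_sandbox(command: str) -> str:
    """Run a shell command inside the isolated lab environment and return its output."""
    proc = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=120)
    return (proc.stdout + proc.stderr)[-4000:]  # keep output short for the context window


SANDBOX_TOOL = [{
    "type": "function",
    "function": {
        "name": "run_in_sandbox",
        "description": "Execute a shell command in the isolated test environment.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]


def agent_loop(goal: str, cve_advisory: str, max_steps: int = 15) -> None:
    """Let the model plan, call the sandbox tool, observe the output, and repeat."""
    messages = [
        {"role": "system", "content": "You are an authorized security-testing agent "
                                      "operating only inside an isolated lab environment."},
        {"role": "user", "content": f"Goal: {goal}\n\nCVE advisory:\n{cve_advisory}"},
    ]
    for _ in range(max_steps):
        response = client.chat.completions.create(
            model="gpt-4", messages=messages, tools=SANDBOX_TOOL
        )
        message = response.choices[0].message
        messages.append(message)
        if not message.tool_calls:       # the model stopped using tools: report and exit
            print(message.content)
            return
        for call in message.tool_calls:  # run each requested command and feed back the result
            args = json.loads(call.function.arguments)
            output = run_in_sandbox(args["command"])
            messages.append({"role": "tool", "tool_call_id": call.id, "content": output})
```

The key point of this pattern is that the model never touches the environment directly; everything it does flows through whatever tools the experimenters choose to expose, and whatever context (such as a CVE advisory) they choose to include in the prompt.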

The study used 15 different vulnerabilities for its tests. All of them involve open-source software; the researchers cited difficulties obtaining vulnerable versions of closed-source software once a vulnerability has been patched. Challenges replicating some vulnerabilities in the sandbox environment excluded other open-source candidates, leaving the final selection of 15. The final list covers a variety of exploit types, ranging from cross-site scripting (XSS) to remote code execution.

Key Findings

The results of the experiments provide insight into the capabilities and limitations of LLMs like GPT-4 for vulnerability exploitation and demonstrate a significant disparity among the tested models. 

After running their tests, the researchers found that GPT-4 was the only LLM able to exploit these vulnerabilities; every other model had a 0% success rate. GPT-4, by contrast, was remarkably effective, exploiting 13 of the 15 vulnerabilities. When the tests were rerun without giving GPT-4 access to the CVE information, however, it could exploit only one of the 15.

This suggests that LLMs in general, and GPT-4 in particular, can be used as tools for automating and scaling up the exploitation of known vulnerabilities. However, they show no meaningful capability as vulnerability scanning or discovery tools.

What You Need to Know

So, what does that mean for those of us involved in cybersecurity? On the surface, it puts more pressure on security teams to rapidly implement security patches on vulnerable systems. If threat actors can fully leverage these capabilities in the wild, coupled with scanning tools that find vulnerable systems, they may be able to target many environments before a patch is applied. However, this doesn’t change how security teams should approach their patching process in the first place.
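On the defensive side, much of that pressure can be reduced by automating awareness of newly published advisories. As a rough illustration (not part of the study), the sketch below polls NIST’s National Vulnerability Database API for recent CVEs mentioning a product so a team can triage patches sooner; the “openssl” keyword and the seven-day window are placeholder assumptions.

```python
from datetime import datetime, timedelta, timezone

import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"


def recent_cves(keyword: str, days: int = 7) -> list[dict]:
    """Return recent CVE entries from NVD whose descriptions mention the given keyword."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    params = {
        "keywordSearch": keyword,
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
    }
    data = requests.get(NVD_API, params=params, timeout=30).json()
    return data.get("vulnerabilities", [])


# Print a one-line summary of each matching CVE published in the last week.
for item in recent_cves("openssl"):
    cve = item["cve"]
    summary = next((d["value"] for d in cve["descriptions"] if d["lang"] == "en"), "")
    print(cve["id"], "-", summary[:120])
```

A simple feed like this, filtered to the software actually deployed in your environment, is one way to shorten the gap between public disclosure and patching.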

There are also several limitations to the study that are worth highlighting. First, the relatively small sample of vulnerabilities, restricted to easily replicated open-source software, naturally limits how well this research applies to the real world. Most CVE vulnerabilities are for closed-source programs and are only publicly disclosed when a patch is already available, reducing the attack surface this approach can exploit.

There are further limitations that suggest we should be cautious about how well this study demonstrates GPT-4’s ability to exploit vulnerabilities independently. One critical response to the research paper reviewed the vulnerabilities tested and found publicly available example exploits for 13 of the 15 test cases. Since the LLM had access to web results, it was likely able to find and replicate these examples rather than independently developing exploits from the CVE information, as the researchers hypothesized.

The researchers did not publish the prompts they used or the steps the LLM took to exploit each vulnerability, out of a reasonable ethical concern about empowering threat actors. At the same time, this limits other researchers’ ability to verify and replicate the results.

While the findings are compelling at first glance, these caveats highlight the importance of context and the need for transparency and further research.  

Conclusion

The ability of AI to exploit vulnerabilities when provided with the right tools and information highlights its potential in cybersecurity. While AI poses new challenges from a security perspective, these tools can also be used to bolster your security.  

Machine learning remains in a state of rapid innovation, and we may well see LLMs or other AI models develop more robust capabilities. Many cybersecurity solution providers are already including AI-powered functionality in their tools.

It can be challenging to stay current with the rapid pace of change and innovation in cybersecurity. If you want guidance on navigating this shifting landscape, speak with one of our Atlantic Data Security advisors today.

Talk to an Atlantic Data Security Advisor

Allow our experts to help you with your specific needs.