It’s fun to use the latest artificial intelligence (AI) chatbot to create a lullaby for your dog or to ask a voice assistant to tell a joke. And there are other useful applications too, like the AI systems that make my house more efficient and pleasant to live in.

While these use cases are entertaining or helpful, there are more serious applications for AI as well, and we need to make sure those applications are protected. To help with this effort, the Center for Cybersecurity Policy and Law is releasing “Cybersecurity and AI in Policymaking: Protecting the use of artificial intelligence in cybersecurity.”

The use cases for AI in cybersecurity are numerous. AI is well suited to assist in cybersecurity, sifting through huge amounts of data to find the malicious needle in the haystack. It is uniquely suited to lend a metaphorical hand to security processes where complexity is high and speedy response is critical. AI can often find patterns in network traffic much more quickly than traditional analytics or human analysis, identifying threats and malicious activity based on many different interactions across a global network or large set of infrastructure. And indeed, AI is used across sectors to secure, protect, and harden digital and physical systems against malicious actors.

But governance of any new technology can be challenging. Policymakers must approach AI regulation the same way they approach cybersecurity: thoughtfully and deliberately, assessing and mitigating risks while protecting and enabling the development of new, beneficial applications of AI for security.

Like any rapid technological innovation, AI presents new and unique challenges for policymakers and regulators. In global conversations on how best to guide and regulate technology that uses AI, we must keep in mind the important role that AI plays in protecting our digital and physical infrastructure and operations, in order to preserve our ability to defend ourselves with AI.

To start with, it’s important to remember what developers need to create AI for cybersecurity, including large volumes of high-quality data, data science teams to train and oversee systems, and the ability to customize systems for deployment. The paper discusses what AI is, how it’s used in cybersecurity, and what teams need to develop and deploy effective AI systems to advance cybersecurity.

The paper discusses potential regulation of AI and how policymakers should approach this topic. It is important to ensure that rules and regulations enable the positive contributions that a technology can make while curtailing the behaviors and outcomes that society seeks to minimize, especially for applications as important as cybersecurity.

The paper recommends regulations to maximize AI’s security role, including:

  • Basing regulations on the potential for risk, and scoping rules to outcomes rather than tools used, while considering security exceptions to broader rules where appropriate.
  • Clarity in definitions and scope, to ensure that it’s easy to understand when a rule does and does not apply.
  • Data collection and analysis guardrails, including around privacy and data protection, data quality, and the need for comprehensive and unbiased data.
  • Protection of the robustness, accuracy, and security of AI systems, especially for high-risk applications.
  • Clear guidelines around scoring and discrimination, rather than outright bans.
  • Human oversight requirements that reflect the risk posed by a given system.
  • Documentation and recordkeeping requirements that help people understand AI systems without undermining security goals.
  • Oversight of national security and law enforcement uses that have the potential to significantly impact people’s lives.

The full paper can be found here.

Heather West
