It’s fun to use the latest artificial intelligence (AI) chatbot to create a lullaby for your dog or to ask a voice assistant to tell a joke. There are other useful applications too, like the AI systems that make my house more efficient and pleasant to live in.
While these use cases are entertaining or helpful, there are more serious applications for AI as well, and we need to make sure those applications are protected. To help with this effort, the Center for Cybersecurity Policy and Law is releasing “Cybersecurity and AI in Policymaking: Protecting the use of artificial intelligence in cybersecurity.”
The use cases for AI in cybersecurity are numerous. AI is well suited to sifting through huge volumes of data to find the malicious needle in a haystack, lending a metaphorical hand to security processes where complexity is high and a speedy response is critical. AI can often find patterns in network traffic much more quickly than traditional analytics or human analysis, identifying threats and malicious activity based on many different interactions across a global network or a large set of infrastructure. Indeed, AI is used across sectors to secure, protect, and harden digital and physical systems against malicious actors.
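To make the “needle in a haystack” idea concrete, here is a minimal, illustrative sketch (not taken from the paper) of the kind of anomaly-flagging that underlies traffic analysis. It uses a simple robust statistic rather than a trained model; the byte counts and the threshold are hypothetical, and real AI-driven detection systems use far richer features and models:

```python
# Illustrative sketch: flag network flows whose byte counts are
# statistical outliers, using the median absolute deviation (MAD),
# which is robust to the very outliers we are hunting for.
from statistics import median

def flag_anomalies(byte_counts, threshold=3.5):
    """Return indices of flows whose modified z-score exceeds `threshold`."""
    med = median(byte_counts)
    mad = median(abs(b - med) for b in byte_counts)
    if mad == 0:  # no spread at all; nothing stands out
        return []
    # 0.6745 scales MAD to be comparable to a standard deviation
    return [i for i, b in enumerate(byte_counts)
            if 0.6745 * abs(b - med) / mad > threshold]

# Hypothetical per-flow byte counts; the last flow is a sudden bulk transfer
flows = [480, 510, 495, 505, 490, 500, 50_000]
print(flag_anomalies(flows))  # → [6]
```

A production system would of course learn what “normal” looks like across many features and hosts, but the core task is the same: separate a handful of suspicious events from an overwhelming volume of benign ones.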
But governance of any new technology can be challenging. Policymakers must treat AI regulation the way they treat cybersecurity: with a thoughtful, deliberate approach that assesses and mitigates risks while protecting and enabling the development of new, beneficial applications of AI for security.
Like any rapid technological innovation, AI presents new and unique challenges for policymakers and regulators. In global conversations on how best to guide and regulate technology that uses AI, we must keep in mind the important role AI plays in defending our digital and physical infrastructure and operations; regulation must preserve our ability to protect ourselves with AI.
To start with, it’s important to remember what developers need to create AI for cybersecurity, including large volumes of high-quality data, data science teams to train and oversee systems, and the ability to customize systems for deployment. The paper discusses what AI is, how it’s used in cybersecurity, and what teams need to develop and deploy effective AI systems to advance cybersecurity.
The paper discusses potential regulation of AI and how policymakers should approach this topic. It is important to ensure that rules and regulations enable the positive contributions that a technology can make while curtailing the behaviors and outcomes that society seeks to minimize, especially for applications as important as cybersecurity.
The paper recommends regulations to maximize AI’s security role, including:
- Basing regulations on the potential for risk, and scoping rules to outcomes rather than tools used, while considering security exceptions to broader rules where appropriate.
- Clarity in definitions and scope, so that it is easy to understand when a rule does and does not apply.
- Data collection and analysis guardrails, including around privacy and data protection, data quality, and the need for comprehensive and unbiased data.
- Protection of the robustness, accuracy, and security of AI systems, especially for high-risk applications.
- Clear guidelines around scoring and discrimination, rather than outright bans.
- Human oversight requirements that reflect the risk posed by a given system.
- Documentation and recordkeeping requirements that help people understand AI systems without undermining security goals.
- Oversight of national security and law enforcement uses that have the potential to significantly impact people’s lives.
The full paper can be found here.