One of the projects the Center for Cybersecurity Policy and Law is most excited about, and we suspect you are too, is the intersection of AI and security. This intersection has been the topic of many conversations, but we find that few people have taken the time to understand what is truly new about securing AI, or where organizations need to bolster defenses as AI is incorporated throughout our lives. Remember the early days of the Internet? Imagine if we could go back and bake in better security from the start. We have that chance with AI, and we don't want to miss it.

That is not to undersell the importance of traditional cybersecurity efforts for AI: clearly, the ways that we protect all our digital assets apply to AI systems as well. But there is new ground too, from demonstrated attacks on AI models, to the challenge of protecting model weights, to the emerging field of adversarial machine learning.

To that end, we'd love your help: we're starting a new project, with a series of papers and working sessions, to explore the critical intersection of AI and cybersecurity. As AI is rapidly integrated into our digital infrastructure, we need to address its security implications by better educating cybersecurity professionals, augmenting and improving existing security frameworks, and developing new best practices and guidance for securing AI. This effort will specifically cover:

  • What's Really Novel? We'll examine what truly sets AI security apart from traditional cybersecurity. From adversarial machine learning to supply chain threats, we'll map out the unique challenges of securing AI.
  • Mind the Gap: We'll examine the NIST Cybersecurity Framework and the AI Risk Management Framework to identify areas that can be updated to address AI systems. We will also examine emerging frameworks and their overlap with existing security controls. This could shape the future of how organizations strategically plan for securing AI systems.
  • Closing the Gaps: Finally, we'll roll up our sleeves and develop practical guidance based on existing security concepts such as zero trust, secure-by-design, and defense-in-depth. We will draw on experts from the security community to demonstrate how these existing cybersecurity practices can be applied to AI, and where new innovation is needed.

Soon we'll be hosting working group sessions with experts from academia, industry, government, and standards bodies, and your perspective is invaluable to the project's success. Whether you're developing AI systems, integrating them into your operations and products, or simply concerned about the future of cybersecurity, we want your input.

Reach out to Heather West (hewest@venable.com) to learn more.

Heather West & Davis Hake
