Introduction
Artificial intelligence (AI), automated decision making (ADM), and machine learning (ML) have hit the big time: academics, journalists, policymakers, and pretty much everyone else are talking about these technologies, generally referring to them collectively as AI. From voice assistants in living rooms to chatbots that write songs, from intelligent mapping software to intelligent security that protects our digital platforms and infrastructure, AI is pervasive in our daily lives and, as a result, in our policy work. It is also increasingly used to secure both digital services and physical infrastructure. AI is uniquely well suited to cybersecurity because it finds patterns and aberrations in large amounts of data in order to identify, predict, or mitigate outcomes, precisely the work cybersecurity demands, particularly at the ever-growing scale of the systems in need of defense.
Cyberattacks continue to increase in volume and sophistication, with the potential to cause enormous digital, financial, or physical damage. By automating threat detection and response, AI is already helping to address the shortage of qualified cybersecurity workers, taking on work that is impractical without automation. Policymakers should regulate AI the way they approach cybersecurity itself: thoughtfully and deliberately, assessing and mitigating risks while enabling the development of new, beneficial applications of AI for security. Research has shown that a large majority of executives believe AI is necessary for effective response to cyberattacks, and that organizations using AI respond faster to incidents and breaches. While there is speculation about the role AI may play in malicious cyber activity, this paper addresses the regulation of legitimate actors.
AI governance is notoriously tricky. AI systems draw on huge amounts of data and computing power to detect threats and potential risks in real time, learning as they work. Security tools built on AI use behavioral models that can detect, and even predict, attacks as they develop. Pattern recognition and real-time mapping of fraud and cybercrime allow defensive systems to bolster protections where they are needed, preventing privacy breaches, identity theft, business disruption, and financial loss. Increasingly sophisticated AI-driven detection and mitigation of cyberattacks on critical infrastructure help ensure that public necessities like water and electricity remain available. Yet despite this sophistication, and despite appearing to work like magic (or like a human brain), AI is simply a set of computing techniques used in a vast number of ways to accomplish myriad goals.
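To make the pattern-recognition point concrete, here is a minimal, purely illustrative sketch of the kind of anomaly detection described above. It assumes synthetic network-flow features and uses scikit-learn's IsolationForest; the feature names, values, and contamination rate are hypothetical, and real deployments involve far richer data and tuning.

```python
# Illustrative sketch only: a toy anomaly detector over synthetic
# network-flow features (bytes sent, requests per minute, distinct ports).
# All feature names and parameter values here are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulate "normal" traffic: most flows cluster around typical values.
normal = rng.normal(loc=[500, 20, 3], scale=[100, 5, 1], size=(1000, 3))

# Simulate a few anomalous flows (e.g., exfiltration or a port scan).
anomalous = rng.normal(loc=[5000, 200, 40], scale=[500, 20, 5], size=(10, 3))

flows = np.vstack([normal, anomalous])

# Fit an unsupervised model to observed traffic; `contamination` is the
# assumed fraction of anomalies and would be tuned in practice.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(flows)

# predict() returns -1 for flows the model flags as anomalous.
labels = detector.predict(flows)
flagged = np.flatnonzero(labels == -1)
print(f"Flagged {len(flagged)} of {len(flows)} flows for analyst review.")
```

The design point is the one made in the text: the model learns what "normal" looks like from data and surfaces deviations for human review, which is what lets such systems scale to traffic volumes no analyst team could inspect by hand.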
Like any rapid technological innovation, AI presents new and unique challenges for policymakers and regulators. It is increasingly incorporated across the digital and physical operations of industrial and consumer-facing sectors. In global conversations about how best to guide and regulate technology that uses AI, we must keep in mind the important role AI plays in protecting our digital and physical infrastructure and operations, and preserve our ability to defend ourselves with it.