Executive Summary

Every week, more public-sector agencies report their continued use of artificial intelligence (AI) to support their mission, operational, and enterprise requirements, as a sampling of headlines in the weeks leading up to this paper's release attests.

New, innovative AI tools are here, and adoption is moving at a rate that outpaces the early days of cloud computing. As governments adopt and consume a range of AI capabilities, they must leverage and adapt existing governance structures and control frameworks to ensure the safe and secure use of these systems. AI governance must adhere to the established cybersecurity tenets of confidentiality, integrity, and availability, but it must also expand to consider explainability, safety, and the potential for bias, all of which are called for within recent guidance on AI governance for federal agencies. Because that guidance expands both the set of individuals involved and the types of actions required, implementation may be complex.

This paper examines the challenges and opportunities in implementing AI governance within the U.S. government. Our primary goal is to propose actionable strategies for integrating AI governance into established risk management systems, enhancing both operational efficiency and public trust. We explore federal agencies' current AI governance practices, identify gaps in existing frameworks, and recommend enhancements to improve implementation and oversight. Leveraging existing frameworks and policies can help to ensure AI deployment is safe, secure, and ethical, while addressing AI-specific challenges such as explainability, safety, and bias.

We explore the role of Chief AI Officers (CAIOs) in federal agencies, the creation of AI governance boards, and the adaptation of existing risk-management frameworks like the NIST AI Risk Management Framework (AI RMF). We highlight the importance of cross-departmental collaboration, standardized playbooks for AI use cases, and the development of tools to address AI-specific risks. Building on established IT and cybersecurity protocols is proving to be a pragmatic approach that avoids the pitfalls of creating entirely new governance systems, ensuring that AI adoption can be both rapid and responsible.

Some government officials we spoke with for this paper advocate for governance structures and regimes that are custom-built for AI tools, technologies, and systems. However, many more policymakers, agency technology professionals, and mission owners are pushing to integrate AI-specific aspects into existing risk-management processes and technology governance.

Every government official interviewed for this paper stated that existing policies, frameworks, and guidelines should be supplemented, not changed, for AI, and that there is no need to reinvent the wheel and create something new. AI-specific governance can accelerate if existing applicable guidance, such as risk-management frameworks for cloud computing, cybersecurity, and other emerging technology, is clarified and new policy is focused on the specific enhancements required for government use. The officials also agreed that proper AI governance is crucial, but as public-sector use of the technology evolves, separate governance structures may not be necessary and might in fact create unintended gaps if processes are disparate. AI governance must instead evolve in a way that incorporates it into the existing structures that guide technology and cybersecurity.

This paper examines the approach federal agencies are taking and provides key recommendations to make the federal implementation of AI governance and use more effective. It also does the following:

  • Reviews existing policy and guidance on AI deployment, along with applicable guidance from overlapping disciplines such as cybersecurity, data, and risk management, and identifies where gaps for AI exist.
  • Enumerates recommendations and observations for AI governance that may be necessary to better incorporate AI into organizations.
  • Examines the status and defines the responsibilities of individuals filling the role of CAIO within federal agencies.
  • Captures how AI governance is progressing and evolving within federal agencies.
  • Discusses AI governance best practices that agencies have implemented up to this point.

Methodology

For this paper, the Center for Cybersecurity Policy & Law (CCPL) spoke with numerous U.S. Government information technology (IT) officials, including CAIOs, across several agencies. To ensure frank and honest conversations, the interviews were off the record. Additionally, the Center reviewed numerous policy and governance documents from U.S. agencies, state agencies, foreign governments, and multinational companies. These documents included policy proposals, standards, technical requirements, philosophical approaches, and lessons learned from current AI use-case adoption. We have included a subset of those documents in our policy overview or in our footnotes as references.

We anticipate this report will capture a unique perspective in the discussion concerning AI governance and adoption. To facilitate further policy development and dialogue, CCPL will hold an event to bring policymakers and thought leaders together. 

Zack Martin, Heather West & Alice Hubbard
