This week, the Biden Administration released its roundup of actions tied to the 270-day deadlines in the AI Executive Order. One of the most interesting items is a report from the U.S. National Telecommunications and Information Administration (NTIA) supporting open models for AI.

The report examines the risks and benefits of dual-use foundation models with publicly available model weights. That’s a mouthful, but it refers to the most advanced AI models, some of which have open model weights that allow others to adjust, modify, or recreate parts or all of the model itself. It also examines related issues, like the impact of openness on innovation and our understanding of how to evaluate and quantify risk for these models. Could open model weights, and openness in AI generally, be a catalyst for incredible innovation from players who don’t have the resources to train their own advanced foundation models, or could they open the door for malicious actors to customize their own advanced models? Are there approaches we can take to effectively govern these models and reap the benefits of more openness in AI while managing risks?

The NTIA - the U.S. government's tech policy think tank - concludes that the government should not restrict the availability of open model weights for currently available systems but instead focus on understanding and managing risk. This approach recognizes the immense potential of "open-weight" models, which allow developers, including small companies, researchers, nonprofits, and individuals, to build upon and adapt existing AI technologies.

That doesn’t mean NTIA is ignoring potential risks. Instead, it outlines a robust monitoring system for the AI ecosystem that leverages the U.S. AI Safety Institute (AISI), housed within the National Institute of Standards and Technology (NIST). AISI’s new draft guidance on evaluating and managing risk from these advanced models is a critical read. Knowing more about the risks - and benefits - is a core element of a risk management strategy and informs potential actions in the future.

This focus on gathering information and developing effective risk management aligns with the growing recognition that dual-use foundation models offer tremendous opportunities but also present unique challenges. NTIA seeks to strike a balance between actively monitoring for risks and promoting open access. It does so by emphasizing the need to assess “the marginal benefits and risks of harm that could plausibly be affected by policy and regulatory measures.”

The emphasis on evidence collection and evaluation underscores the importance of ongoing research and collaboration in AI. It also highlights the need for a flexible, adaptive approach to governance that can evolve alongside rapid technological advancements - as with any other kind of risk management. For the AI community, these recommendations signal support for innovation, particularly for smaller entities that can benefit from access to open-weight models. At the same time, they serve as a call to action for responsible development and use of AI technologies.

While the report eschews a direct regulatory response, it advocates adopting a monitoring framework to inform ongoing assessments and possible policy action. As the report clearly states, “the government should not restrict the wide availability of model weights for dual-use foundation models at this time.” By instead actively monitoring and maintaining “the capacity to quickly respond to specific risks across the foundation model ecosystem, by collecting evidence, evaluating that evidence, and then acting on those evaluations,” the report takes a risk-management approach consistent with well-established cybersecurity policy.

The diverse areas covered in the report - public safety, including cyber defenses; geopolitical considerations; societal risks and benefits; and competition, innovation, and research - all point to the vital need for risk management. As the report concludes, “models are evolving too rapidly, and extrapolation based on current capabilities and limitations is too difficult, to conclude whether open foundation models, overall, pose more marginal risks than benefits (or vice versa), as well as the isolated trade-offs in specific sections.”

While the report is cautious in its recommendations, it leaves no doubt that “the government should maintain the ability to undertake interventions, which should be considered once the risk thresholds described above are crossed such that the marginal risks substantially outweigh the marginal benefit.” This is a relatively high bar that NTIA believes should be met before the government contemplates context-appropriate risk mitigation measures or potential restrictions on access to models and model weights.

The watchword here continues to be “appropriate transparency,” as indicated in NTIA’s report on AI Accountability last March. NTIA concludes that industry needs more information to set pragmatic policies around advanced AI and to develop risk evaluation and management strategies.

As we move forward, it will be crucial for all stakeholders - developers, researchers, policymakers, and users - to engage in ongoing dialogue and collaboration. By working together to develop, implement, and refine these risk management strategies, we can help ensure that advanced AI technology aligns with broader values and goals, remains safe and secure, and realizes its potential benefits.

Heather West & Mark Bohannon
