The U.S. Copyright Office clarified legal rules for artificial intelligence (AI) trustworthiness research and red-teaming under Section 1201 of the Digital Millennium Copyright Act (DMCA). AI red-teamers have cause to celebrate – the Copyright Office took up the Hacking Policy Council’s request to clarify that common AI research techniques do not violate DMCA Section 1201. However, there is some not-so-great news as well.

The Hacking Policy Council was closely involved in the effort to clarify legal protections for AI researchers. HPC partnered with the AI research and security communities to submit detailed testimony and safe harbor language to the Copyright Office. The Hacking Policy Council repeatedly called on the Copyright Office to confirm that AI trustworthiness research techniques are not violations of DMCA Sec. 1201, and is pleased that the Copyright Office provided this clarification in the final rule. 

This post will provide analysis on what the rulemaking means for AI red-teaming, and will then provide background on the need for greater clarity on laws related to AI red-teaming and information sharing. 

Rulemaking on legal protections for AI research - key takeaways

In its rulemaking, the Librarian of Congress acknowledged the importance and benefits of red-teaming AI for trustworthiness and evaluating risk. However, the Librarian, which oversees the Copyright Office, denied a petition to provide legal protections to AI researchers under DMCA Section 1201, which is a major federal anti-hacking law.

However, the rationale for denying the petition is actually good news. At the recommendation of the Hacking Policy Council, the Copyright Office clarified that common AI research techniques alone do not violate DMCA Section 1201. [See the Register’s Recommendation, pgs. 122-129.] Specifically:

  • Prompt injection and the use of jailbreak prompts to bypass AI system guardrails.
  • Creating new accounts to access AI systems after research methods result in an account ban or suspension.
  • Bypassing rate limits that prevent researchers from performing the same actions rapidly within a set period of time.

DMCA Section 1201 prohibits circumventing technological access controls to software, and the Copyright Office stated it does not consider the above methods to fall into that category. According to the Copyright Office, no legal protections are needed for AI trustworthiness research in part because DMCA Section 1201 simply does not apply. 

Essentially, this means AI researchers using these common techniques alone may not be liable under DMCA Section 1201. This also means that if AI trustworthiness researchers are threatened with lawsuits under DMCA Section 1201, they may have legal grounds to claim such threats are bogus.

However, this is not an exemption. That’s the bad news. The gap in legal protection for AI research was confirmed by law enforcement and regulatory agencies such as the Copyright Office and the Department of Justice, yet good faith AI research continues to lack a clear legal safe harbor.

Other AI trustworthiness research techniques may still risk liability under DMCA Section 1201, as well as other anti-hacking laws such as the Computer Fraud and Abuse Act (CFAA). For example, DMCA Section 1201 likely continues to prohibit circumvention of Digital Rights Management (DRM) or encryption for independent good faith AI research that is performed for purposes other than security. 

Lack of safe harbors for AI red-teaming and info-sharing

There are legal safe harbors for security research under DMCA Section 1201, as well as a federal charging policy protecting security researchers under CFAA. Organizations that share information for cybersecurity purposes have a broad shield from liability under the Cybersecurity Information Sharing Act of 2015. However, as the Hacking Policy Council has pointed out in its white paper, legal protections for independent AI research are not so clear. 

Researchers test AI across several non-security metrics, such as synthetic intimate imagery, inaccuracy, discrimination, toxicity, copyright infringement, and other harmful or undesirable outputs. While identifying and mitigating these flaws are increasingly important as AI systems grow more prevalent, this is not necessarily a “security” or “safety” activity, and therefore may not be covered by existing legal protections. Current legal protections and company vulnerability disclosure policies are generally scoped to exclude non-security issues and algorithmic flaws. Yet another case of technology outpacing the law.

The AI research community has begun to address these gaps in legal protections by organizing efforts to advocate for extending safe harbors to AI red-teaming. In the most recent DMCA rulemaking, AI researchers submitted a petition to the Copyright Office for safe harbor, followed by a group of academic researchers, the Hacking Policy Council, and other allies. Notably, the Department of Justice and Members of Congress also issued letters to the Copyright Office, calling for specific legal protections for AI research. 

Although the next DMCA rulemaking is three years away, there are still concrete steps the community can take to advance protections for independent AI evaluation and information sharing. This includes engaging with the U.S. Department of Justice (DOJ), which recently announced it is exploring protections for AI flaw disclosure under CFAA, efforts to promote bias bounty programs, and the upcoming Congressional reauthorization of the Cybersecurity Information Sharing Act of 2015. Advocacy for broader protections is already underway within the AI community.

Adapting laws to AI - inevitable?

The DMCA rulemaking revealed both significant progress and glaring gaps in the current legal framework. While the ruling provides much-needed clarity on certain common AI testing techniques, the absence of a formal exemption for AI trustworthiness research highlights the need for continued clarity and legal reform. 

Ultimately, independent AI red-teaming for non-security issues will not go away, nor will the need to share non-security information between organizations. Legal protections – or at least legal clarity – for these activities will continue to be a priority. In the meantime, AI researchers should continue to engage with federal and state legislators to promote AI-specific legislation that would protect researchers from legal liability. 

This will take time. The safe harbors established for independent security research and information sharing took several years of persistent advocacy to achieve. But the effort is worth it – strengthening legal protections for independent research and information sharing helps protect people, advances technological innovation, and makes the digital ecosystem safer for all.

Harley Geiger & Tanvi Chopra

Read Next

Building PQC and Crypto Resiliency Across the Public and Private Sectors

A webinar that featured industry leaders from AT&T, the National Institute of Standards and Technology (NIST), InfoSec Global, The White House, and Venable LLP, focused on cryptographic resilience and post-quantum transition.‍

NTIA Report Reveals Support for Open AI Models

The NTIA released a report examining the risks and benefits of dual-use foundation models with publicly available model weights, also examining the impact of openness on innovation and how to evaluate and quantify risk for these models.

FedRAMP Finalizes Emerging Technology Prioritization Framework

The GSA FedRAMP PMO released the final version of its Emerging Technology Prioritization Framework that seeks to expedite FedRAMP authorizations for select cloud offerings with emerging technology features, such as generative AI.