February 23, 2024

DOJ Puts AI in the Hot Seat

On February 14, 2024, Deputy Attorney General (“DAG”) Lisa Monaco delivered remarks at the University of Oxford on artificial intelligence (“AI”).[1]  The Deputy Attorney General’s speech, delivered at the Oxford Martin AI Governance Institute, was equal parts discussion of the perils of AI and distillation of the Biden Administration’s desire to achieve a global consensus on “trustworthy” AI.  DAG Monaco promised the robust punishment of crimes aided by AI and described the foundation of an international alliance to combat AI-based cyber-crimes.  Given the technology-resources deficit often faced by prosecutors, however, only time will tell whether the Deputy Attorney General’s remarks presage enforcement activity to come or amount to commentary on a trendy topic.

DOJ’s Enforcement Strategy for AI

According to the DAG, the Department of Justice (the “Department” or “DOJ”) is “laser-focused” on AI as “the most transformative technology we’ve confronted yet,” and it is pursuing multiple enforcement strategies to address the risks presented by AI.

DAG Monaco noted a number of ways that this emerging technology can be employed by wrongdoers.  She stated, for instance, that AI can be used to create harmful content; to amplify existing discriminatory practices; and to speed the spread of disinformation, particularly by foreign adversaries and hostile nation-states during elections.  Even though the technology is nascent, she also recognized that many of these misdeeds are already crimes in their own right: “[p]rice fixing using AI is still price fixing” and “identity theft using AI is still identity theft.”

The Deputy Attorney General also emphasized the ways that the Department is working to ensure that advanced American technology is not employed against America.  To “neutralize these adversaries” who seek to “siphon off America’s most advanced technology and use it against us,” DAG Monaco has directed the Disruptive Technology Strike Force—a joint undertaking with the Department of Commerce focused on protecting critical technological assets—to prioritize AI enforcement.  This means that the Strike Force, which “enforces export control laws to strike back against adversaries” that attempt to steal American technologies, will be particularly vigilant about AI’s misuse by foreign actors.  Finally, DAG Monaco explained that the Department may crack down on AI’s use by criminals by pursuing stiffer sentences: “Going forward,” she declared, “where Department of Justice prosecutors can seek stiffer sentences for offenses made significantly more dangerous by the misuse of AI — they will.”  From our perspective, this could manifest itself in more aggressive charging decisions by prosecutors or a stricter application of any relevant enhancements in the U.S. Sentencing Guidelines.  The Deputy Attorney General pointed to sentencing enhancements for the use of firearms as an analogue: “The U.S. criminal justice system has long applied increased penalties to crimes committed with a firearm. Guns enhance danger, so when they’re used to commit crimes, sentences are more severe.  Like a firearm, AI can also enhance the danger of a crime.”

Leveraging the Benefits of AI

In addition to discussing the risks that AI presents—and the ways the DOJ is attempting to mitigate those risks—DAG Monaco recognized that AI presents tremendous opportunity for the Department.  The Deputy Attorney General’s remarks described several ways in which the Department has used this nascent technology to improve its law-enforcement efforts.  For example, the Department has leveraged AI to classify and source illegal drugs, including opioids; to triage the more than one million tips submitted annually to the FBI; and to synthesize large volumes of evidence in sprawling cases.

Moreover, and consistent with President Biden’s October 2023 Executive Order on Safe, Secure, and Trustworthy AI,[2]  the Department is undertaking a number of initiatives meant to “anticipate the impact of AI on our criminal justice system, on competition, and on our national security.”  The Department is bringing together both law enforcement and civil rights staff to participate on an “Emerging Technology Board” that will provide guidance to Department leadership (including the Attorney General and DAG Monaco) on how the Department can responsibly and ethically leverage these tools.  The Department has also appointed its first Chief AI Officer, Princeton Professor Jonathan Mayer, to help the Department focus on ways to leverage AI for enforcement.[3]

Nor is the Department of Justice going it alone.  DAG Monaco explained that DOJ is working to develop and “internationalize responsible codes of conduct for AI Systems” through the Hiroshima AI Process, a working group launched by the G7 countries meant to promote safe, secure, and trustworthy AI.  Furthermore, the DOJ is launching an initiative called Justice AI, by which it hopes to bring together “individuals from across civil society, academia, science, and industry,” as well as the Department’s foreign counterparts, to better understand the impact of AI on law enforcement.

It is unclear to us whether these remarks will represent a shift in the Department’s enforcement priorities, charging strategies, or sentencing practices.  What is clear, however, is that the Department does not plan on being caught flat-footed in the face of technological change.

[1]       Deputy Attorney General Lisa O. Monaco Delivers Remarks at the University of Oxford on the Promise and Peril of AI, Department of Justice (Feb. 14, 2024).

[2]       Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Oct. 30, 2023).

[3]       Attorney General Merrick B. Garland Designates Jonathan Mayer to Serve as the Justice Department’s First Chief Science and Technology Advisor and Chief AI Officer, Department of Justice (Feb. 22, 2024).