On October 30, 2023, President Biden signed an Executive Order (“EO”) establishing key policy priorities and directing federal agencies to take specific steps to address privacy, security, and governance issues raised by the use of Artificial Intelligence (“AI”) technologies by federal agencies, critical infrastructure entities, and others. The EO explains that AI has the potential to solve urgent societal issues, but also presents risks that could exacerbate social problems, undermine national security, and compound other potential harms. The EO is intended to advance a “coordinated, Federal Government-wide approach” to governing the development and use of AI safely and responsibly, and to implement the President’s vision of “[h]arnessing AI for good and realizing its myriad benefits [while] mitigating its substantial risks.”
In the absence of legislative progress on AI-related issues at the federal level, the EO represents the most significant and comprehensive AI-related policymaking by the United States federal government to date. While much of the initial work of implementing the EO likely will not be public (e.g., reports to the President on the national security implications of AI use), the EO establishes a significant number of workstreams that are expected to result in agency rulemaking, guidance, and standards development, and that could have wide-ranging implications for businesses in the United States seeking to develop or use AI models. The EO is only the beginning of that work, with more to come as the agencies execute the President's directives.
Key Policy Principles
The EO builds upon the Biden Administration’s previous actions on AI, including the Blueprint for an AI Bill of Rights and voluntary commitments from 15 companies to develop safe, secure, and trustworthy AI. In particular, the EO sets forth eight guiding principles for agencies to follow as they execute its directives. These include:
- Establishing new AI safety and security standards. This includes, among other things, requiring developers of the most powerful AI systems to share safety test results with the federal government, and requiring critical infrastructure sectors to comply with the National Institute of Standards and Technology (“NIST”) standards for red-team testing.
- Promoting innovation and competition. The EO seeks to catalyze AI research through the National AI Research Resource and encourages the Federal Trade Commission to exercise its authority to promote a fair, open, and competitive AI ecosystem.
- Supporting workers and workplaces. The EO directs relevant agencies to develop principles to mitigate AI-driven harm to workers and produce a report on AI’s potential labor-market impacts.
- Advancing equity and civil rights. The EO includes directives designed to address algorithmic discrimination (via training, technical assistance and coordination between the Department of Justice and other federal civil rights offices).
- Protecting consumers, patients, and students by advancing responsible use of AI in healthcare, education, and other industries. The EO also includes directives to the Departments of Health and Human Services, Veterans Affairs, and Education to report on AI implications in their respective arenas.
- Protecting Americans’ privacy. The EO directs the Office of Management and Budget (“OMB”) to develop standards and procedures related to the use, sharing, and dissemination by federal agencies of commercially available information containing personally identifiable information (“PII”).
- Ensuring responsible and effective government use of AI. The EO directs OMB to chair an interagency council to develop and issue guidance for agencies’ use and acquisition of AI technologies.
- Advancing American leadership abroad. The EO directs the State Department and Commerce Department to work with other nations to collaborate and develop robust international frameworks for AI risk management.
The EO includes over 100 directives to different federal agencies — from the Small Business Administration and the National Science Foundation to the Department of Defense and Department of Justice. In some cases, the agencies are directed — or, in the case of independent agencies such as the Federal Trade Commission (“FTC”) and Federal Communications Commission (“FCC”), encouraged — to adopt rules or issue guidance on the use, sale, or development of AI technologies. In other cases, the directives involve the development of standards or other guidance related to the use of AI in critical infrastructure. Here, we highlight several directives with potentially broad implications for companies seeking to develop, build, or use AI-powered tools.
Safety & Security
- Within 90 days of the EO, the Secretary of Commerce must require companies developing or demonstrating an intent to develop potential dual-use foundation models (defined as “an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters”) to provide the Secretary of Commerce information about ongoing or planned activities related to training and developing dual-use foundation models, results of relevant AI red-team testing based on NIST guidance, and ownership and possession of model weights of such dual-use foundation models. Additionally, the Secretary of Commerce must provide regulatory recommendations to the President related to dual-use foundation models for which the model weights are widely available based on input from the private sector and other stakeholders on the risks, benefits, and policy approaches of such models.
- Within 90 days of the EO (and at least annually thereafter), heads of agencies with authority over critical infrastructure, heads of Sector Risk Management Agencies, and the Director of the Cybersecurity and Infrastructure Security Agency must develop and provide to the Secretary of Homeland Security an assessment of potential risks to critical infrastructure sectors from the use of AI, including critical failures, physical attacks, and cyber-attacks, as well as ways to mitigate such risks and vulnerabilities. The Secretary of Homeland Security must establish an Artificial Intelligence Safety and Security Board, comprised of “AI experts from the private sector, academia, and government, as appropriate,” to provide the Secretary and the critical infrastructure community advice, information, or recommendations for improving security, resilience, and incident response related to AI usage in critical infrastructure.
- Within 180 days of the EO, the Secretary of Commerce must propose regulations that require US Infrastructure as a Service (“IaaS”) Providers to ensure that foreign resellers of US IaaS Products verify the identity of any foreign person that obtains an IaaS account from the foreign reseller.
- The Copyright Office is currently conducting a study regarding copyright issues raised by generative AI. Within 180 days after the Copyright Office publishes the results of that study, or 270 days of the EO, whichever is later, the Director of the Copyright Office must provide recommendations to the President on potential executive actions related to copyright and AI, including the scope of protection for works produced by AI and treatment of copyrighted works in AI training. The Copyright Office has collected initial written comments, and reply comments are due November 29, 2023.
- Within 120 days of the EO, the Under Secretary of Commerce for Intellectual Property and the Director of the US Patent and Trademark Office (“USPTO”) must publish guidance for USPTO patent examiners and applicants addressing inventorship and the use of AI, in addition to guidance concerning other considerations at the intersection of AI and intellectual property to be issued within 270 days of the EO.
- Within 180 days of the EO, the Secretary of Homeland Security and Attorney General must develop a training, analysis, and evaluation program to mitigate intellectual property risks, such as AI-related intellectual property theft.
- The EO sets forth initiatives to foster capabilities to identify and track the authenticity and provenance of synthetic and non-synthetic content produced by or for the Federal Government, including that, within 240 days of the EO, the Secretary of Commerce must submit a report identifying standards, tools, methods, and software for authenticating, tracking, and labeling content.
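The content-authentication and labeling work the EO contemplates generally rests on cryptographic techniques for binding a piece of content to a verifiable provenance tag. As a purely illustrative sketch (not drawn from the EO or from any Commerce Department or NIST standard, which had yet to be issued), content could be labeled and later verified with a keyed digest; the key name below is hypothetical:

```python
import hashlib
import hmac

# Hypothetical signing key held by the content producer (illustrative only).
SECRET_KEY = b"example-provenance-key"

def label_content(content: bytes) -> str:
    """Attach a provenance tag: an HMAC-SHA256 digest of the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that the content still matches its provenance tag."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

tag = label_content(b"official statement")
assert verify_content(b"official statement", tag)      # authentic content passes
assert not verify_content(b"altered statement", tag)   # any alteration is detected
```

Real provenance standards layer public-key signatures and metadata on top of this basic idea, but the core property is the same: any modification to the content invalidates the label.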
Promoting Competition and Innovation, and Protecting Consumers
- Independent regulatory agencies (e.g., the FTC, Securities & Exchange Commission, FCC, etc.) are encouraged to consider using their respective authorities, including rulemaking authority, to protect consumers from fraud, discrimination, and threats to privacy, and to address other risks from the use of AI. Potential regulation could address “clarifying the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use, and emphasizing or clarifying requirements and expectations related to the transparency of AI models and regulated entities’ ability to explain their use of AI models.”
- Within 180 days of the EO, the President’s Council of Advisors on Science and Technology must submit to the President and make publicly available a report on the potential role of AI in research aimed at tackling major societal and global challenges.
- The Secretary of the Department of Health and Human Services (“HHS”) must establish an AI Task Force to develop policies and suggest potential regulatory action on responsible deployment and use of AI in the health and human services sector.
- The Director of the Consumer Financial Protection Bureau is encouraged to use the Bureau’s authority to require financial entities to use appropriate methodologies to ensure compliance with federal laws, and to evaluate their underwriting models and collateral-valuation and appraisal processes for bias.
Workforce & Civil Rights Issues
- The Secretary of Labor must submit a report to the President analyzing agencies’ abilities to support workers displaced by the adoption of AI; develop and publish best practices for employers to mitigate AI’s potential harms to employees’ well-being; and issue guidance making clear that employers that use AI to “monitor or augment employees’ work” must comply with fair compensation laws and requirements. Within 365 days of the EO, the Secretary of Labor must also publish guidance for federal contractors regarding nondiscrimination in hiring involving AI and other technology-based hiring systems.
- The Attorney General and federal civil rights agencies must convene to discuss use of their respective authorities to prevent and address discrimination in automated systems, increase coordination between federal civil rights offices and the DOJ’s Civil Rights Division, and improve external stakeholder engagement to promote public awareness of potential discriminatory uses and effects of AI.
- The Director of the OMB must take steps to identify commercially available information that contains PII procured by agencies, including from data brokers, and consult with the Federal Privacy Council and the Interagency Council on Statistical Policy about agency standards and procedures related to the use, sharing, and dissemination of commercially available information containing PII.
- Within 365 days of the EO, to better enable agencies’ use of privacy-enhancing technologies, the Director of NIST must create guidelines for agencies to evaluate differential-privacy-guarantee protections, including for AI.
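A differential-privacy guarantee bounds how much any single individual’s record can change a published statistic, typically by adding noise calibrated to a privacy parameter epsilon. As a minimal, purely illustrative sketch (not taken from the NIST guidelines the EO directs, which had yet to be created), a count query can be released with Laplace noise of scale 1/epsilon:

```python
import math
import random

def dp_count(records, predicate, epsilon: float) -> float:
    """Release a count with an epsilon-differential-privacy guarantee.

    A count query has sensitivity 1 (adding or removing one person changes
    it by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Inverse-CDF sampling from a Laplace(0, 1/epsilon) distribution.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Hypothetical example: noisily count records with age >= 65.
ages = [34, 29, 71, 45, 62]
noisy = dp_count(ages, lambda a: a >= 65, epsilon=1.0)
# Smaller epsilon means more noise and a stronger privacy guarantee.
```

Evaluating a claimed guarantee, as the NIST guidelines are meant to help agencies do, amounts to checking that the noise scale is in fact calibrated to the query’s sensitivity and the stated epsilon.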
Federal Government Use of AI
- Within 60 days of the EO (and on an ongoing basis), the OMB Director must convene an interagency council to coordinate the development and use of AI in federal programs and operations.
- The OMB Director must issue guidance to agencies to strengthen the use of and manage the risks of AI, including a requirement to designate at each agency a Chief Artificial Intelligence Officer and recommendations for external testing and red-teaming of AI, among other requirements.
- Within 180 days of the EO, the Office of Personnel Management and OMB Directors must develop guidance on the use of generative AI for work by federal employees. Agencies are discouraged from imposing broad general bans on agency use of generative AI; instead, they should limit access to particular generative AI services based on specific risk assessments, and establish guidelines and limitations on the appropriate use of such services.
- Within 180 days of the date of the EO, GSA and other agency heads must take steps to facilitate access to acquisition solutions for specified types of AI services and products.
Promoting U.S. Leadership on AI
- Within 270 days of the EO, the Secretary of Commerce must establish a plan for global engagement on promoting and developing AI standards. Within 180 days of releasing that plan, the Secretary must submit a report to the President on priority actions that are guided by principles set out in the NIST AI Risk Management Framework and US Government National Standards Strategy for Critical and Emerging Technology.
- Within 365 days of the EO, the Secretary of State and the Administrator of the United States Agency for International Development, in collaboration with the NIST Director, must publish an AI in Global Development Playbook that incorporates the AI Risk Management Framework’s principles, guidelines, and best practices.
The White House Artificial Intelligence Council
Finally, the EO establishes the White House AI Council to coordinate the activities of federal agencies and to ensure the effective formulation, development, communication, industry engagement related to, and timely implementation of AI-related policies.
The White House AI Council is composed of various agency heads, including the Secretary of State, the Secretary of HHS, and the Secretary of Homeland Security, as well as the heads of other agencies and entities that the Assistant to the President and Deputy Chief of Staff for Policy (who chairs the Council) may designate.