
Federal Agencies Take Broad Action on Artificial Intelligence Under AI EO – AI: The Washington Report (Part 2 of 2)

(co-author: Raj Gambhir)

  1. President Joe Biden’s October 2023 Executive Order on Artificial Intelligence directed agencies to take a significant number of actions on artificial intelligence. On April 29, 2024, the White House announced that federal agencies had completed “all 180-day EO activities on schedule, following recent successes, completing each 90-day, 120-day, and 150-day activity on time.”
  2. The 180-day actions involved agencies across the executive branch and covered a wide range of topics, including health care, national security, labor standards, and grid modernization.
  3. Last week’s bulletin discussed three major activities announced as completed at the end of April. This week, we discuss the remaining activities completed under the AI EO in April 2024.

President Joe Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI EO) launched a series of rulemakings, studies, and convenings on artificial intelligence across the executive branch. As outlined in our AI EO Roadmap, April 27, 2024 marked the deadline for all actions required to be completed within 180 days of the Executive Order’s signing.

On April 29, 2024, the White House reported that federal agencies had completed all 180-day activities under the AI EO on schedule, with some activities completed ahead of schedule. In last week’s bulletin, we discussed three significant actions completed in late April: the Generative AI Risk Management Framework, the guidance on implementing AI in the workplace, and the Department of Energy’s set of AI actions.

This week, we discuss the remaining AI EO activities completed within 180 days. In its press release, the White House divided these actions into four categories:

1. Managing risks to safety and security

  • The Office of Science and Technology Policy (OSTP) published a “Framework for Nucleic Acid Synthesis Screening,” which aims to “help prevent the inappropriate use of artificial intelligence in the design of hazardous biological materials.” By the end of October 2024, “federal research funding agencies will require recipients of federal research and development funds to order synthetic nucleic acids only from suppliers that implement” the best practices identified in this framework.
  • In addition to the Generative AI Risk Management Framework discussed in last week’s bulletin, the National Institute of Standards and Technology (NIST) published three additional draft publications. Comments on each of these drafts can be submitted via the Federal Register and are due by June 2, 2024.
    1. Secure Software Development Practices for Generative AI and Dual-Use Foundation Models, a guide offering “guidelines for handling training data and the data collection process.”
    2. Reducing Risks Posed by Synthetic Content, a document “aimed at mitigating the risks of synthetic content by understanding and applying technical approaches to improve content transparency based on use case and context.”
    3. A Plan for Global Engagement on AI Standards, an action plan “designed to drive the global development and implementation of AI-related consensus standards, cooperation and coordination, and information exchange.”

Along with these draft documents, NIST launched NIST GenAI, a program that will evaluate generative AI technologies by issuing “challenging problems designed to evaluate and measure the capabilities and limitations of generative AI technologies.”

  • The United States Patent and Trademark Office (USPTO) requested comments on “the impact of the proliferation of artificial intelligence (AI) on prior art, the knowledge of a person having ordinary skill in the art (PHOSITA), and determinations of patentability made in view of the foregoing.” Comments can be submitted via the Federal Register website and are due by July 29, 2024.
  • The Department of Homeland Security (DHS) released Safety and Security Guidelines for Critical Infrastructure Owners and Operators, a guide addressing the threats AI poses to the safety and security of critical infrastructure. DHS also established an Artificial Intelligence Safety and Security Board to advise the Secretary, critical infrastructure operators, and other interested parties “on the safe development and deployment of artificial intelligence technologies across our nation’s critical infrastructure.”
  • The White House announced that the Department of Defense (DoD) has made progress on a pilot tool that “can find and fix vulnerabilities in software used for national security and military purposes.”

2. Standing up for workers, consumers, and civil rights

3. Harnessing AI for good

  • In addition to the actions outlined in last week’s bulletin, the Department of Energy (DOE) announced funding opportunities “to support applications of artificial intelligence in science, including energy-efficient AI algorithms and hardware.”
  • The President’s Council of Advisors on Science and Technology (PCAST) authored a report for the President entitled Supercharging Research: Harnessing Artificial Intelligence to Meet Global Challenges. The report diagnoses, and proposes solutions to, the obstacles to creating an effective artificial intelligence research and development ecosystem in the United States.

4. Bringing AI talent into government

  • The General Services Administration (GSA) will onboard its first-ever cohort of Presidential Innovation Fellows focused on AI in summer 2024.
  • The DHS will hire 50 artificial intelligence specialists as part of the newly established DHS AI Corps, which will be tasked with building “safe, accountable and trusted artificial intelligence to improve service delivery and homeland security.”
  • The Office of Personnel Management (OPM) published “skills-based hiring guidelines to increase access to federal AI jobs for people from non-traditional academic backgrounds.”

Conclusion

Through the AI EO, President Biden has mobilized agencies across the federal government to establish policies, mandate reports, and initiate rulemakings related to AI. In the absence of comprehensive AI legislation from Congress, actions taken by the executive branch may be the most consequential form of AI regulation for the foreseeable future. Stakeholders interested in the state and trajectory of federal AI policy should closely monitor the implementation of the AI EO and related developments emanating from the executive branch.

Many more AI EO actions are due in the coming months and into 2025. These include the USPTO’s recommendations on potential executive actions relating to artificial intelligence and copyright (due: July 2024), the Attorney General’s report on the use of artificial intelligence in the criminal justice system (due: October 2024), and the Department of Education’s guidance on artificial intelligence and education (due: October 2024).

We will continue to monitor, analyze and report on these changes.
