Executive Actions
The federal government is responding to AI: it has undertaken hundreds of AI-related actions, and agencies have implemented comprehensive frameworks and programs spanning national security, financial regulation, transportation safety, environmental protection, and international cooperation. An ongoing list of the major actions is maintained below.
Please send me additions.
Key documents #
- Winning the Race: America’s AI Action Plan (July 23, 2025) – The AI Action Plan is Trump’s flagship AI policy document. “America’s AI Action Plan has three pillars: innovation, infrastructure, and international diplomacy and security. The United States needs to innovate faster and more comprehensively than our competitors in the development and distribution of new AI technology across every field, and dismantle unnecessary regulatory barriers that hinder the private sector in doing so.”
- NIST AI Risk Management Framework Generative AI Profile (April 29, 2024) – NIST released a draft publication based on the AI Risk Management Framework (AI RMF) to help manage the risks of generative AI. The draft AI RMF Generative AI Profile helps organizations identify the unique risks posed by generative AI and proposes actions for generative AI risk management aligned with their goals. It was developed with input from a public working group of more than 2,500 members and focuses on 12 risks and 400-plus actions developers can take. For more information, see NIST’s landing page.
White House Actions under Trump (2025-present) #
- Adjusting Imports of Semiconductors, Semiconductor Manufacturing Equipment, and their Derivative Products into the United States (January 14, 2026) – This Proclamation uses Section 232 to impose a new 25% import duty on certain semiconductors, semiconductor manufacturing equipment, and their derivative products. The Proclamation includes exclusions for U.S. data centers, U.S. research and development, consumer electronics, industrial applications, and the public sector.
- Increasing Public Trust in Artificial Intelligence Through Unbiased AI Principles (December 11, 2025) – Large Language Models (LLMs) procured by the Federal Government must produce reliable outputs free from harmful ideological biases or social agendas. Section 4 of the E.O. requires the Director of the Office of Management and Budget (OMB) to issue guidance to agencies to implement these principles; this memorandum fulfills that requirement.
- Executive Order on Ensuring a National Policy Framework for Artificial Intelligence (December 10, 2025) – This EO establishes a federal policy to sustain and enhance U.S. AI leadership through a minimally burdensome national policy framework and to limit conflicting state requirements. Specific directives include: an FCC proceeding to consider a federal reporting and disclosure standard for AI models; an FTC policy statement on how the FTC Act applies to AI models and could preempt certain state laws; an evaluation of conditions on federal funding provided to states; and the creation of a DOJ AI Litigation Task Force to challenge state AI laws inconsistent with that policy.
- Launching the Genesis Mission (November 24, 2025) – This EO establishes a plan to develop an integrated AI platform that harnesses federal scientific datasets to “train scientific foundation models and create AI agents to test new hypotheses, automate research workflows, and accelerate scientific breakthroughs.” The mission will be implemented within the Department of Energy, overseen by the Secretary of Energy, and use DOE facilities.
- U.S. - Korea Technology Prosperity Deal (October 29, 2025) – This Memorandum of Understanding aims to enable collaboration on AI research and innovation between the U.S. and Korea, including joint efforts to reduce barriers, promote U.S. and Korean AI exports, support AI safety standards and alignment, and promote AI education.
- U.S. - Japan Technology Prosperity Deal (October 28, 2025) – This Memorandum of Cooperation seeks to promote collaboration between the United States and Japan on research and development in science and technology: “The Participants intend to collaborate closely on promoting pro-innovation AI policy frameworks, promoting exports across our full AI stack, ensuring the rigorous enforcement of existing protection measures while acknowledging the importance of strengthening such measures related to critical and emerging technologies, advancing shared work on industry standards, and safeguarding our children’s digital wellbeing, with a shared commitment to promoting a secure and trustworthy AI ecosystem in a mutually beneficial manner.”
- Memorandum of Understanding Between the Government of The United States of America and the Government of The United Kingdom of Great Britain and Northern Ireland Regarding the Technology Prosperity Deal (September 18, 2025) – The Memorandum of Understanding aims to promote cooperation in science and technology between the United States and the United Kingdom, including on AI: “The Participants intend to collaborate closely in the build-out of powerful AI infrastructure, facilitate research community access to compute, support the creation of new scientific data sets, and harness their expertise in metrology and evaluations to enable adoption and advance our collective security. The Participants intend to leverage this infrastructure and the AI expertise across industry and elsewhere, to deliver transformational AI-driven change for our societies and economies.”
- Major Organizations Commit to Supporting AI Education (September 9, 2025) – Following the President’s AI in education executive order in April, numerous organizations have committed to providing AI education resources, including Google, Code.org, IBM, Pearson Education, HP, Zoom, NVIDIA, MasterCard, Dell Technologies, Microsoft, Amazon, Apple, Adobe, xAI, OpenAI, Anthropic, Meta, Siemens, ScaleAI, MagicSchool, Learning.com, Arist, Palo Alto Networks, AT&T, Cisco, Qualcomm, ARM, Charter Communications, Salesforce, Cengage Group, McGraw Hill Education, Houghton Mifflin Harcourt, Kyndryl, Mason Contractors Association of America, SAP America, Silver Lake, Accenture, Walmart, Intuit, Deloitte, Booz Allen, ServiceNow, Roblox, Cognizant, Software & Information Industry Association, Business Software Alliance, ISACA, Micron Technology, Groq, Intel, and Snap.
- Executive Order on Preventing Woke AI in the Federal Government (July 23, 2025) – While the Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace, in the context of Federal procurement, it has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas. Building on Executive Order 13960 of December 3, 2020, this order helps fulfill that obligation in the context of large language models.
- Executive Order on Accelerating Federal Permitting of Data Center Infrastructure (July 23, 2025) – This EO facilitates expedited permitting for data centers and related infrastructure, energy, and manufacturing projects in numerous ways, including changes to the Clean Water Act, the Clean Air Act, the National Environmental Policy Act, and Fixing America’s Surface Transportation Act.
- Promoting The Export of the American AI Technology Stack (July 23, 2025) – This order is meant to “establish and operationalize a program within DOC aimed at gathering proposals from industry consortia for full-stack AI export packages. Once consortia are selected by DOC, the Economic Diplomacy Action Group, the U.S. Trade and Development Agency, the Export-Import Bank, the U.S. International Development Finance Corporation, and the Department of State (DOS) should coordinate with DOC to facilitate deals that meet U.S.-approved security requirements and standards.”
- Winning the Race: America’s AI Action Plan (July 23, 2025) – The AI Action Plan is Trump’s flagship AI policy document. “America’s AI Action Plan has three pillars: innovation, infrastructure, and international diplomacy and security. The United States needs to innovate faster and more comprehensively than our competitors in the development and distribution of new AI technology across every field, and dismantle unnecessary regulatory barriers that hinder the private sector in doing so.”
- Advancing Artificial Intelligence Education for American Youth (April 23, 2025) – The executive order establishes the White House Task Force on Artificial Intelligence Education, composed of the department heads of various federal agencies. The task force is charged with establishing the Presidential Artificial Intelligence Challenge, encouraging AI adoption and achievement in education across different geographical areas; seeking public-private partnerships and identifying Federal funding for AI education; and generally increasing AI proficiency and literacy through AI education. Additionally, the Secretary of Education is charged with issuing guidance on the use of funds for this purpose, as well as enhancing training for educators on AI. Finally, the Secretary of Labor is tasked with increasing participation in AI-related Registered Apprenticeships.
- OMB Memo M-25-21: Accelerating Federal Use of AI through Innovation, Governance, and Public Trust + OMB Memo M-25-22: Driving Efficient Acquisition of Artificial Intelligence in Government (April 3, 2025) – According to the White House’s fact sheet, the OMB AI Use and AI Procurement Memos rescind and replace OMB memos on AI use and procurement issued under President Biden’s Executive Order 14110 and shift U.S. AI policy to a “forward-leaning, pro-innovation, and pro-competition mindset” that will make agencies “more agile, cost-effective, and efficient.”
- Public Comment Invited on Artificial Intelligence Action Plan (February 25, 2025) – President Trump’s recent Artificial Intelligence (AI) Executive Order shows that this Administration is dedicated to America’s global leadership in AI technology innovation. This Order directed the development of an AI Action Plan to sustain and enhance America’s global AI dominance. The American people are encouraged to share their policy ideas for the AI Action Plan by responding to a Request for Information (RFI), available on the Federal Register’s website.
- Removing Barriers to American Leadership in Artificial Intelligence (January 23, 2025) – The executive order tasks the Assistant to the President for Science and Technology, the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs with reviewing all policies, directives, regulations, and other actions taken pursuant to Executive Order 14110. It charges them to suspend, revise, or rescind any such actions in accordance with law to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” The order also tasks the aforementioned officials with developing an Artificial Intelligence Action Plan in coordination with the Assistant to the President for Economic Policy, the Assistant to the President for Domestic Policy, and the Director of the Office of Management and Budget.
- Initial Rescissions Of Harmful Executive Orders And Actions (January 20, 2025) – This catch-all executive order revokes a large number of the Biden Administration’s executive orders, including Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
White House Actions under Biden (2021-2025) #
- inactive Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure (January 14, 2025)
- inactive Framework for Nucleic Acid Synthesis Screening (April 29, 2024) — This Framework established requirements, as a condition of receiving U.S. governmental life sciences research funding, that synthetic nucleic acids, and benchtop devices capable of synthesizing them, are only procured from providers and manufacturers that comply with the requirements of the Framework.
- inactive Vice President Harris Announces OMB Policy to Advance Governance, Innovation, and Risk Management in Federal Agencies’ Use of Artificial Intelligence (March 28, 2024)
- inactive CPM 2024-06 Memorandum for Heads of Executive Departments and Agencies
- National Artificial Intelligence Research Resource (NAIRR) pilot (January 2024) – Launched by NSF. Provides researchers with resources for responsible AI research.
- inactive Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (October 30, 2023)
- 90 Day Progress (January 29, 2024)
- 180 Day Progress (April 29, 2024)
- 270 Day Progress (July 26, 2024)
- Stanford HAI’s Tracking U.S. Executive Action on AI
- inactive Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI (July 21, 2023) – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI have voluntarily committed to internal and external security testing of their AI systems before their release, and committed to sharing information across the industry and with governments, civil society, and academia on managing AI risks.
- inactive New Steps to Advance Responsible Artificial Intelligence Research, Development, and Deployment (May 23, 2023) – In May 2023, the Biden-Harris administration updated the National AI Research and Development Strategic Plan, emphasizing a principled and coordinated approach to international collaboration in AI research.
- inactive New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety (May 4, 2023) – OSTP releases National AI R&D Strategic Plan, updated for the first time since 2019.
- inactive Biden-Harris Administration Tackles Racial and Ethnic Bias in Home Valuations (March 23, 2023)
- inactive Executive Order on Further Advancing Racial Equity and Support for Underserved Communities Through The Federal Government (February 26, 2023)
- inactive Blueprint for an AI Bill of Rights (October 4, 2022)
- inactive Key Actions to Advance Tech Accountability and Protect the Rights of the American Public (October 4, 2022)
- inactive Executive Order On Advancing Racial Equity and Support for Underserved Communities Through the Federal Government (January 20, 2021)
White House Actions under Trump (2017-2021) #
- Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government EO 13960 (December 3, 2020) – Directed agencies to conduct an annual inventory of their AI use cases and publish them to the extent possible, resulting in the AI Use Case Inventory.
- Guidance for Regulation of Artificial Intelligence Applications (January 2020) – In January 2020, the Office of Management and Budget (“OMB”) published a draft memorandum featuring 10 “AI Principles” and outlining its proposed approach to regulatory guidance for the private sector, which echoes the “light-touch” regulatory approach espoused by the 2019 Executive Order.
- Maintaining American Leadership in Artificial Intelligence EO 13859 (February 11, 2019) – Aims to ensure American leadership in AI-related R&D.
- Blocking the Property of Certain Persons Engaging in Significant Malicious Cyber-Enabled Activities (February 11, 2019) – Blocks the purchase of IaaS capable of supporting certain AI models by non-US persons known/suspected to engage in malicious cyber activities.
- Taking Additional Steps To Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities (January 19, 2021) – Ensures that providers offering United States IaaS products verify the identity of persons obtaining an IaaS account and maintain records of those transactions.
Consumer Financial Protection Bureau (CFPB) #
- inactive Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems (April 4, 2024) – “Existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices. The Consumer Financial Protection Bureau, the Department of Justice’s Civil Rights Division, the Equal Employment Opportunity Commission, and the Federal Trade Commission are among the federal agencies responsible for enforcing civil rights, non-discrimination, fair competition, consumer protection, and other vitally important legal protections. We take seriously our responsibility to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws, and each of our agencies has previously expressed concern about potentially harmful uses of automated systems.”
- CFPB and Federal Partners Confirm Automated Systems and Advanced Technology Not an Excuse for Lawbreaking Behavior (April 25, 2023) – The Civil Rights Division of the United States Department of Justice, the Consumer Financial Protection Bureau, the Federal Trade Commission, and the U.S. Equal Employment Opportunity Commission released a joint statement outlining a commitment to enforce their respective laws and regulations.
- Generative AI Chatbot Warnings (June 6, 2023) – The CFPB warns banks about using AI chatbots that fail to provide timely or straightforward answers.
- CFPB circular (May 2022) – Asserts that creditors using AI in credit decisions must still provide specific adverse action notices; a CFPB circular (September 2023) reaffirms the same.
- CFPB report (June 2023) – Warns financial institutions about legal obligations when deploying chatbot technology.
- CFPB proposed rule (June 2023) – Mandates entities adopt policies to ensure certain AI models used in credit decisions adhere to quality control standards.
Department of Commerce #
- Commerce Implements New Export Controls on Advanced Computing and Semiconductor Manufacturing Items to the People’s Republic of China (PRC) (October 7, 2022) – Controls on Advanced AI Chips: The Commerce Department’s Bureau of Industry and Security (BIS) imposed sweeping export controls on the advanced computing chips and semiconductor technology that underpin high-end AI systems. This seminal rule requires licenses for exporting to China (and other adversary nations) certain graphics processing units (GPUs) and AI accelerators that exceed specified performance thresholds, as well as the equipment to manufacture such semiconductors. The controls, issued in response to national security concerns, aim to restrict China’s ability to obtain the cutting-edge chips needed for training large-scale AI models and developing advanced military applications. BIS officials noted that AI capabilities enabled by supercomputing could improve an adversary’s military decision-making, autonomous weapons, signals intelligence, and facial recognition surveillance. The rule also added certain Chinese supercomputing and AI entities to the Entity List, barring U.S. technology transfer to them. This export control represented one of the most significant tech leverage actions to date, often referred to as the start of a “Tech Iron Curtain” around AI hardware.
- Commerce Strengthens Restrictions on Advanced Computing Semiconductors, Semiconductor Manufacturing Equipment, and Supercomputing Items to Countries of Concern (October 17, 2023) – Enhanced AI Export Safeguards: BIS updated and strengthened the 2022 controls to close loopholes and address evolving technology. The updated rules expanded the scope of chips covered (introducing a new “performance density” metric to catch chip designs that might circumvent prior thresholds) and imposed a worldwide license requirement on any export of controlled AI chips to companies headquartered in any country of concern (preventing proxy routing). They also added 43 more countries, beyond just China/Macau, to the list requiring notification for exports of less advanced (but still sensitive) chips. Commerce emphasized these measures ensure “those seeking to obtain powerful advanced chips… will not use these technologies to undermine U.S. national security”, and that the U.S. will “continue to hone these controls as technology evolves”. The tightened rules reflect an ongoing policy to constrain authoritarian regimes’ access to the highest-caliber AI computing infrastructure. (Notably, in parallel, the U.S. is working with allies on similar controls and considering outbound investment screening for AI sectors.) These export controls are a tool to slow rival militaries’ progress in AI by targeting the hardware “choke points” required for training advanced AI models, while still allowing civilian AI collaboration in areas like healthcare to continue under license exceptions.
Department of Defense, Department of War (DOD, DOW) #
- Memo on “Artificial Intelligence Strategy for the Department of War” (January 9, 2026) – As is noted in the first paragraph of this memo: “In the national security domain, AI-enabled warfare and AI-enabled capability development will re-define the character of military affairs over the next decade. This transformation is a race - fueled by the accelerating pace of commercial AI innovation coming out of America’s private sector. The United States Military must build on its lead over our adversaries in integrating this technology, established during President Trump’s first term, to make our Warfighters more lethal and efficient. To this end, aligned with America’s AI Action Plan, I direct the Department of War to accelerate America’s Military AI Dominance by becoming an ‘AI-first’ warfighting force across all components, from front to back.”
- CDAO and DIU Launch New Effort Focused on Accelerating DOD Adoption of AI Capabilities (December 11, 2024) – The Chief Digital and Artificial Intelligence Office (CDAO) in partnership with the Defense Innovation Unit (DIU) announced the formation of a new AI Rapid Capabilities Cell (AI RCC) focused on accelerating DoD adoption of next-generation artificial intelligence (AI) such as Generative AI (GenAI). The AI RCC will focus on executing pilots in the primary use case areas identified by Task Force Lima, including warfighting and enterprise management. The executive summary of Task Force Lima’s findings can be found [on CDAO’s website]( https://www.ai.mil/Portals/137/Documents/Resources Page/2024-12-TF Lima-ExecSum-TAB-A.pdf?ver=cEnvUdR8cdzXFmv7KW2n-w%3d%3d).
- Evaluation of the Effectiveness of the Chief Digital and Artificial Intelligence Office’s Artificial Intelligence Governance and Acquisition Process (November 14, 2024) – The Chief Digital and Artificial Intelligence Office (CDAO) was established in December 2021 and was tasked with developing an AI strategy and policy for the DoD, and with the acquisition and development of AI products and services. Focusing exclusively on evaluating the CDAO’s strategy and policy development, the report found that the implementation plan for the AI Adoption Strategy and the DoD’s AI policy were past due. The CDAO was supposed to have submitted these plans in the form of a DoD chartering directive and an accompanying DoD Instruction; however, this did not happen. As of June 2024, the implementation plan was still in draft form. The evaluation recommended that the Chief Digital and Artificial Intelligence Officer publish the implementation plan and coordinate with the Director of Administration and Management to review existing guidance that could be incorporated into the DoD Directive and DoD Instruction.
- Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence (October 24, 2024) – National Security Memorandum-25 implementation
- Replicator Initiative (August 28, 2023) — The first iteration of Replicator (Replicator 1), announced in August 2023, will deliver all-domain attritable autonomous systems (ADA2) to warfighters at a scale of multiple thousands, across multiple warfighting domains, within 18-24 months, or by August 2025.
- Task Force Lima (Generative AI) (August 10, 2023) — Established the CDAO Generative AI task force, which operated until being sunset in December 2024 when the AI Rapid Capabilities Cell launched.
- Data, Analytics, and Artificial Intelligence Adoption Strategy: Accelerating Decision Advantage (June 27, 2023) – This Strategy builds upon and supersedes the 2018 AI Strategy and the 2020 Data Strategy to continue the Department’s digital transformation. As it notes, “The Department cannot succeed alone. Our integration of data, analytics, and AI technologies is nested within broader U.S. government policy, the network of private sector and academic partners that promote innovation, and a global ecosystem. We need a systematic, agile approach to data, analytics, and AI adoption that is repeatable by all DoD Components. This strategy outlines our approach to improving the organizational environment within which our people can deploy data, analytics, and AI capabilities for enduring decision advantage.”
- DOD Directive 3000.09 Update: Autonomy in Weapon Systems (January 25, 2023) – This document updated the foundational 2012 policy on autonomous weapons, establishing policy and assigning responsibilities for developing and using autonomous and semiautonomous functions in weapon systems, as well as guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements. It also establishes the Autonomous Weapon Systems Working Group.
- Department’s Responsible Artificial Intelligence Strategy and Implementation Pathway Maps the Journey to a Trusted AI Ecosystem (June 22, 2022) – Deputy Secretary of Defense Kathleen Hicks has signed the Department’s Responsible Artificial Intelligence Strategy and Implementation Pathway, which guides the Department of Defense’s journey to its goal of a trusted artificial intelligence (AI) ecosystem. The DoD must transform itself into an AI-ready organization, with responsible artificial intelligence (RAI) as a prominent feature to maintain its competitive advantage.
- Chief Digital and Artificial Intelligence Office (2022) – Establishment of the Chief Digital and Artificial Intelligence Office (CDAO).
- DOD Adopts Ethical Principles for Artificial Intelligence (February 24, 2020) – The U.S. Department of Defense adopted a series of ethical principles for the use of Artificial Intelligence, following recommendations provided to Secretary of Defense Dr. Mark T. Esper by the Defense Innovation Board. The recommendations came after 15 months of consultation with leading AI experts in commercial industry, government, academia, and the American public, a rigorous process of feedback and analysis with multiple venues for public input and comment. The adoption of AI ethical principles aligns with the DOD AI strategy objective that the U.S. military lead in AI ethics and the lawful use of AI systems.
Department of Education #
- Artificial Intelligence and the Future of Teaching and Learning (May 2023) – Advocates for AI technologies that support diverse learning environments and recommends robust frameworks to manage AI’s impact on education.
- Designing for Education with Artificial Intelligence: An Essential Guide for Developers (July 2024)
Department of Energy (DOE) #
- Energy Department Launches ‘Genesis Mission’ to Transform American Science and Innovation Through the AI Computing Revolution (November 25, 2025) — The Genesis Mission will transform American science and innovation through the power of artificial intelligence (AI), strengthening the nation’s technological leadership and global competitiveness. The ambitious mission will harness the current AI and advanced computing revolution to double the productivity and impact of American science and engineering within a decade. It will deliver decisive breakthroughs to secure American energy dominance, accelerate scientific discovery, and strengthen national security.
- Artificial Intelligence Strategy (October 2025)
- PermitAI Tool (July 10, 2025) — Pacific Northwest National Laboratory (PNNL) is building a one-stop data platform and a powerful suite of artificial intelligence (AI) tools to streamline and accelerate the review process for critical federal infrastructure.
- DOE Establishes Artificial Intelligence Advancement Council (May 20, 2022) – DOE AI Governance Council: DOE established an Artificial Intelligence Advancement Council (AIAC), the first of its kind at the Department. Chartered by Deputy Secretary David Turk, the AIAC coordinates AI activities across DOE’s extensive enterprise and defines Department-wide AI priorities. It brings together top DOE leaders (Science, Nuclear Security, Intelligence, General Counsel, and others) to provide recommendations on a comprehensive DOE AI strategy led by DOE’s Artificial Intelligence and Technology Office (AITO). The AIAC’s focus includes AI governance, innovation, and ethics across DOE’s missions in science, energy, and national security.
- Artificial Intelligence – Advancing AI R&D and Infrastructure: DOE, leveraging its national laboratories and supercomputing facilities, has launched major AI research initiatives. In 2023, DOE announced the Frontiers in AI for Science, Security, and Technology (FASST) initiative, aimed at developing breakthrough AI capabilities to accelerate scientific discovery and bolster U.S. leadership in critical technologies. DOE’s world-leading exascale supercomputers (like Frontier and Aurora) are being harnessed for “AI for Science” programs to model climate, fusion energy, materials, and more. The Department is also building dedicated AI testbeds to rigorously assess AI systems’ safety and security. For example, DOE is testing AI models against adversarial attacks in control systems (like power grid cybersecurity) and against chemical and biological threats. These efforts align with the national AI risk management framework, ensuring cutting-edge AI research is accompanied by risk mitigation and trustworthiness measures. (DOE also co-sponsors the National AI Research Institutes with NSF, supporting 25 multi-disciplinary AI centers launched since 2020 in areas from climate-smart agriculture to advanced manufacturing.)
- Google DeepMind and OpenAI Partnership MOUs — 2025–2026 — MOUs providing all 17 National Labs access to frontier AI models (AI co-scientist, AlphaEvolve, reasoning models).
Department of Health and Human Services #
- Request for Information: Accelerating the Adoption and Use of Artificial Intelligence as Part of Clinical Care (December 23, 2025) – The HHS Office of the Deputy Secretary, in collaboration with ASTP/ONC, published this Request for Information (RFI) to seek broad public comment on what HHS can do to accelerate the adoption and use of AI as part of clinical care.
- HHS Unveils AI Strategy to Transform Agency Operations (December 4, 2025) – The U.S. Department of Health and Human Services' (HHS) AI Strategy is the next phase of the Department's transformative initiative to make artificial intelligence (AI) available to the federal workforce, integrating it across internal operations, research, and public health. It fulfills HHS's commitment to utilize leading technologies to enhance efficiency, foster American innovation, improve patient outcomes, and Make America Healthy Again.
- inactive HHS AI Strategic Plan (January 2025) – The AI Strategic Plan provides a framework and roadmap to ensure that HHS fulfills its obligation to the Nation and pioneers the responsible use of AI to improve people’s lives.
- Landing page for "Artificial Intelligence at HHS."
Department of Homeland Security (DHS) #
- AI Cybersecurity Collaboration Playbook (January 14, 2025) – The AI Cybersecurity Collaboration Playbook provides guidance to organizations across the AI community—including AI providers, developers, and adopters—for sharing AI-related cybersecurity information voluntarily with the Cybersecurity and Infrastructure Security Agency (CISA) and other partners through the Joint Cyber Defense Collaborative (JCDC).
- DHS Playbook for Public Sector Generative Artificial Intelligence Deployment (January 6, 2025) – The DHS GenAI Public Sector Playbook encapsulates the lessons learned from DHS's pilot programs and offers a series of actionable steps for the responsible adoption of GenAI technologies in the public sector.
- DHS Artificial Intelligence Task Force (April 21, 2023) – DHS Secretary Alejandro Mayorkas launched the Department's first-ever AI Task Force (AITF) to drive specific applications of AI in critical homeland security missions. The task force is applying AI to enhance supply chain screening (detecting forced-labor goods), counter the flow of fentanyl (identifying illicit shipments and precursor chemicals), bolster cybersecurity and critical infrastructure protection, and aid investigations of child exploitation by analyzing large volumes of data. The AITF was given 60 days to deliver an action plan for these priorities and reports regularly to the Secretary.
- [DHS Announces New Policies and Measures Promoting Responsible Use of Artificial Intelligence](https://www.dhs.gov/archive/news/2023/09/14/dhs-announces-new-policies-and-measures-promoting-responsible-use-artificial) (September 14, 2023) – DHS AI Policy Framework: Building on the task force's work, DHS announced new policies to ensure the responsible use of AI. Notably, Secretary Mayorkas appointed DHS's first Chief AI Officer (CIO Eric Hysen) to coordinate AI innovation and safety across the agency. DHS issued:
- Policy Statement 139-06, "Acquisition and Use of Artificial Intelligence and Machine Learning by DHS Components," which sets department-wide AI principles. It mandates that DHS's use of AI align with EO 13960 (promoting trustworthy AI in government) and all applicable laws, prohibits AI systems that engage in illegal discrimination, and requires that AI adoption demonstrably improve mission effectiveness. DHS affirmed it will not collect, use, or disseminate data used in AI activities, or deploy AI systems that make decisions, based on sensitive characteristics like race, sex, or religion, echoing a commitment to minimize bias.
- Directive 026-11, “Use of Face Recognition and Face Capture Technologies,” which imposes strict oversight on DHS’s use of facial recognition AI. All such systems must undergo extensive testing to ensure no unintended bias or disparate impact, with review by DHS’s Privacy Office and Office for Civil Rights/Civil Liberties. The directive also gives U.S. citizens the right to opt out of face recognition for non-law-enforcement uses and prohibits using face recognition as the sole basis for any law enforcement action. Together, these policies ensure DHS harnesses AI’s benefits for security while protecting privacy and civil rights.
- Directive Number 139-08: Artificial Intelligence Use and Acquisition (January 15, 2025) – DHS directive governing the use and acquisition of AI across the Department.
- 2024 DHS Artificial Intelligence Roadmap (March 2024) – The Department's roadmap for AI across its missions.
- Federal Register: Establishment of the Artificial Intelligence Safety and Security Board (April 26, 2024) – Establishment of the AI Safety and Security Board (AISSB).
- DHS AI Corps Launch (February 2024) – Launch of the DHS AI Corps to hire 50 AI specialists.
- DHS Unveils Generative AI Public Sector Playbook (January 7, 2025) – Press release announcing the Generative AI Public Sector Playbook.
Department of Housing and Urban Development (HUD) #
- inactive Guidance on Application of the Fair Housing Act to the Screening of Applicants for Rental Housing (April 29, 2024) – This guidance from HUD’s Office of Fair Housing and Equal Opportunity explains how the Fair Housing Act protects certain rights of applicants for rental housing. It discusses how housing providers and companies that offer tenant screening services can screen applicants for rental housing in a nondiscriminatory way and recommends best practices for complying with the Fair Housing Act. This guidance may also help applicants understand their rights and recognize when they might have been denied housing unlawfully.
- inactive Guidance on Application of the Fair Housing Act to the Advertising of Housing, Credit, and Other Real Estate-Related Transactions through Digital Platforms (April 29, 2024) – This guidance from HUD's Office of Fair Housing and Equal Opportunity explains how the Fair Housing Act ("Act") applies to the advertising of housing, credit, and other real estate-related transactions through digital platforms. In particular, it addresses the increasingly common use of automated systems, such as algorithmic processes and Artificial Intelligence ("AI"), to facilitate advertisement targeting and delivery.
Department of Justice (DOJ) #
- Justice Department and National Economic Council Partner to Identify State Laws with Out-Of-State Economic Impacts (August 15, 2025) – The Justice Department and the National Economic Council announced an effort to identify State laws that significantly and adversely affect the national economy or interstate economic activity and to solicit solutions to address such effects. They invite public comments to support the Administration’s mission to address laws that hinder America’s economic growth, including those that burden industry and our small businesses.
- Artificial Intelligence and Civil Rights (February 3, 2025) – Landing page for public speeches, statements, and readouts; Civil Rights Division guidance and other documents; and cases.
- AI Inventory (January 21, 2025) – The October 30, 2023, Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” required federal agencies to report on their use of AI by conducting an annual inventory of their AI use cases.
- Justice Department Issues Final Rule Addressing Threat Posed by Foreign Adversaries' Access to Americans' Sensitive Personal Data (December 27, 2024)
- inactive Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems (April 4, 2024) – According to the statement, “Existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices. The Consumer Financial Protection Bureau, the Department of Justice’s Civil Rights Division, the Equal Employment Opportunity Commission, and the Federal Trade Commission are among the federal agencies responsible for enforcing civil rights, non-discrimination, fair competition, consumer protection, and other vitally important legal protections. We take seriously our responsibility to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws, and each of our agencies has previously expressed concern about potentially harmful uses of automated systems.”
- In January 2022, the Civil Rights Division published an article entitled, “Civil Rights in the Digital Age: The Intersection of Artificial Intelligence, Employment Decisions, and Protecting Civil Rights.” This article provides an overview of the predominant issues arising from employment practices concerning the use of AI and discusses the work that the Department of Justice and other federal agencies are doing to address those issues in that context.
- On May 12, 2022, the Department of Justice released a technical assistance document entitled, “Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring.” The document describes how algorithms and AI can lead to disability discrimination in hiring.
- On December 1, 2023, the Civil Rights Division issued an employer fact sheet discussing what employers should consider when using private sector commercial or proprietary software and products to electronically complete, modify, or retain the Form I-9.
- The Civil Rights Division Hosts Interagency Convening on AI and Civil Rights (October 9, 2024) – The Justice Department's Civil Rights Division convened principals of federal agency civil rights offices and senior government officials to foster coordination on AI and civil rights. This was the fourth such convening by the Civil Rights Division.
- On July 25, 2024, the Department of Justice [filed a statement of interest](https://www.justice.gov/opa/pr/justice-department-files-statement-interest-supporting-private-parties-right-bring-voting) in the U.S. District Court for the District of New Hampshire supporting the right of private plaintiffs to bring a lawsuit challenging robocalls featuring a voice generated with artificial intelligence (also known as a "deepfake") as intimidating, threatening, or coercive in violation of Section 11(b) of the Voting Rights Act.
- In January 2023, the Department of Justice and the Department of Housing and Urban Development filed a Statement of Interest (SOI) in [Louis et al. v. SafeRent et al.](https://www.justice.gov/opa/pr/justice-department-files-statement-interest-fair-housing-act-case-alleging-unlawful-algorithm) to explain the Fair Housing Act's application to algorithm-based tenant screening systems. The complaint in that case alleged that SafeRent, formerly known as CoreLogic Rental Property Solutions, LLC, provides tenant screening services that discriminate against Black and Hispanic rental applicants who use federally funded housing choice vouchers to pay all or part of their rent, in violation of the Fair Housing Act and Massachusetts state law.
- In November 2022, the Civil Rights Division filed a consent decree resolving allegations that the Regents of the University of California on behalf of the University of California, Berkeley, failed to provide much of its online content (such as courses, lectures, and conferences) in an accessible manner to individuals with disabilities, including through the use of inaccurate automated captioning technology for people with hearing impairments. On December 2, 2022, the district court approved the decree. Under the decree, among other things, the University will not rely solely on YouTube’s automated AI-based technology and will provide accurate captions for its online content.
- In June 2022, the Civil Rights Division, along with the U.S. Attorney’s Office for the Southern District of New York, filed a complaint and a proposed settlement agreement in United States v. Meta Platforms, Inc., f/k/a Facebook, Inc. The court signed the settlement agreement on June 26, 2022, and entered the agreement and final judgment on June 27, 2022. The case was referred to the Division from the U.S. Department of Housing and Urban Development (HUD). HUD alleged that Facebook’s advertisement delivery system, which consists of detailed targeting options and a machine-learning algorithm, allowed housing advertisers to exclude certain Facebook users from seeing housing advertisements based upon protected characteristics, such as race and gender or proxies for such characteristics.
- In June 2022, the Civil Rights Division announced an initial round of settlements with 16 employers that used college and university online recruitment platforms, including Georgia Tech’s platform, to post job advertisements that discriminated against non-U.S. citizens.
- In December 2021, the Civil Rights Division signed a settlement with Microsoft Corporation that resolved claims of discrimination based on citizenship status against non-U.S. citizens. Specifically, the Division found that Microsoft engaged in a pattern or practice of unfair documentary practices by requesting specific documents using employment eligibility verification software during the initial employment eligibility verification process and during reverification.
- In August 2021, the Civil Rights Division settled an investigation of Ascension Health Alliance, which engaged in a pattern or practice of unfair documentary practices by improperly programming its employment eligibility verification software to automatically send reverification emails to all non-U.S. citizen employees, even when it was not necessary. The Immigration and Nationality Act's anti-discrimination provision prohibits employers from requesting more or different documents than necessary to prove work authorization based on employees' citizenship, immigration status, or national origin.
Department of Labor (DOL) #
- US Department of Labor releases AI literacy framework providing foundational content areas, delivery principles to guide nationwide efforts (February 13, 2026) – The U.S. Department of Labor's Employment and Training Administration published a framework for artificial intelligence literacy, providing a foundation to guide nationwide AI literacy efforts across workforce and education systems.
- Department of Labor Compliance Plan: Office of Management and Budget Memorandum M-25-21 (February xx, 2026) – This AI compliance framework sets forth the policies, roles, and controls that govern the full AI lifecycle—planning, acquisition, development, deployment, monitoring, and retirement. It aligns DOL's practices with applicable federal requirements and guidance, including Office of Management and Budget (OMB) Memorandum M-25-21.
- Artificial intelligence landing page for the Department of Labor.
- [Department of Labor Issues New Guidance on the Use of Artificial Intelligence and Employment Decision-Making](https://www.governmentcontractslaw.com/2024/09/department-of-labor-issues-new-guidance-on-the-use-of-artificial-intelligence-and-employment-decision-making/) (April 29, 2024) – OFCCP Guidance on AI Hiring Tools: The DOL's Office of Federal Contract Compliance Programs (OFCCP) issued new guidance to federal contractors on preventing discrimination when using AI and automated systems in employment decisions. This move recognized that many employers (including government contractors) are adopting algorithmic résumé screeners, hiring tests, and HR chatbots – which, if unchecked, could perpetuate bias against protected groups. The OFCCP guidance (issued as FAQs and best practices) reminds contractors of their [obligations under equal employment opportunity laws](https://www.insidegovernmentcontracts.com/2024/05/office-of-federal-contract-compliance-programs-releases-new-guidance-on-the-use-of-artificial-intelligence-in-federal-contracting-employment-processes/) (e.g., Executive Order 11246, Section 503, VEVRAA) and clarifies that the use of AI does not exempt them from liability. It provides "promising practices" such as: ensure algorithms are validated as job-related and consistent with business necessity; conduct regular bias audits on AI outcomes (to see if, for instance, an AI hiring tool disproportionately rejects women or minorities); provide reasonable accommodations when AI assessments involve people with disabilities (consistent with the ADA); and maintain transparency with applicants about how AI is used. The guidance effectively serves as a [blueprint for aligning AI tools with EEO compliance](https://www.klgates.com/OFCCP-Guidance-Expands-Federal-Scrutiny-of-Artificial-Intelligence-Use-by-Employers-7-16-2024), warning that OFCCP may scrutinize contractors' AI systems during audits. This complements the EEOC's parallel AI-in-employment initiative – together signaling that algorithmic discrimination in hiring or promotions is an enforcement priority for civil rights agencies.
Department of the Treasury #
- March 29, 2021 – RFI on AI in Financial Services: The Federal Reserve, OCC, FDIC, CFPB, and NCUA jointly issued a Request for Information on the use of AI/ML by financial institutions. This RFI (86 FR 16837) sought public input on how banks and lenders are employing AI in areas like credit underwriting, fraud detection, customer service (e.g., chatbots), and risk management. Regulators asked about governance and risk controls around AI, potential fair lending or bias issues, and whether any clarifications to regulations were needed. The goal was to better understand industry practices and potentially craft guidance that ensures AI algorithms in finance are used in a safe, sound, and non-discriminatory manner. (The comment period was later extended to July 2021, with dozens of industry and consumer group responses.)
- Nov 2022 & June 12, 2024 – Treasury's AI Policy in Finance: The Department of the Treasury has taken a leadership role in assessing AI's implications for financial stability and competition. In a November 2022 report on fintech competition, Treasury analyzed how innovations in AI are transforming credit and payments (finding that AI can expand access but also raises new privacy, bias, and explainability challenges). Building on that, in June 2024 Treasury issued an RFI on "Uses, Opportunities, and Risks of AI in the Financial Services Sector." This RFI (89 FR 50048) invites input from banks, fintechs, investors, consumer advocates, and the public on where AI is being applied in finance, what benefits and efficiencies it may bring, and what risks to consumers or systemic stability it could pose. Treasury is looking specifically at issues like AI-driven fraud, algorithmic discrimination in lending or insurance, concentration of AI resources among a few big tech firms, and the need for any regulatory or policy responses. (Comments were due by August 2024 and will inform Treasury's and the Financial Stability Oversight Council's next steps on AI oversight.)
- September 8, 2023 – IRS Uses AI to Target Tax Evasion: The Internal Revenue Service announced a sweeping new compliance initiative to leverage AI and data analytics to catch high-end tax evaders. After years of declining audit rates for wealthy taxpayers, the IRS (bolstered by Inflation Reduction Act funding) is deploying machine learning models to sift through large partnerships and complex financial arrangements to identify likely tax avoidance schemes. AI will help the IRS analyze patterns in filings of ultra-wealthy individuals, large corporations, and pass-through entities to better detect underreporting or abusive tax shelters. The IRS stressed that these advanced tools will focus enforcement on the most egregious cases – the agency pledged audit rates won't rise for those under $400k income – and will reduce burdens from "no-change" audits by refining case selection. This marks one of the first major uses of AI in federal tax enforcement, aimed at shrinking the estimated $450+ billion annual tax gap. (The IRS is also expanding simpler AI uses like chatbots to improve taxpayer customer service in handling routine notices.)
- February 28, 2024 – Treasury AI Fraud Detection: The Treasury Department's Bureau of the Fiscal Service reported a successful new application of AI in protecting federal funds. In FY2023 the Fiscal Service's Office of Payment Integrity (OPI) implemented an AI-enhanced process to combat the epidemic of paper check fraud, which had grown 385% since the pandemic. By using machine learning to flag suspicious patterns and anomalies in negotiated U.S. government checks, Treasury was able to intercept and recover over $375 million in fraudulent payments in one year. This near-real-time fraud detection system strengthens the "nation's checkbook" – safeguarding Social Security checks, tax refunds, veterans' benefits, and other payments from being stolen or altered. Deputy Secretary Wally Adeyemo highlighted that AI allowed Treasury to expedite fraud detection and recovery of taxpayer dollars while ensuring legitimate payments still go out on time. OPI is partnering with law enforcement, and multiple criminal cases and arrests have already stemmed from the AI-flagged fraud leads. This Treasury success story shows AI's promise in reducing improper payments and protecting public programs from criminal abuse.
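The check-fraud screening described above is, at its core, anomaly detection over payment records. As a minimal illustrative sketch (hypothetical field names and thresholds; Treasury's actual models are not public), a simple z-score filter can flag checks whose amounts deviate sharply from a payee's history:

```python
from statistics import mean, stdev

def flag_anomalous_checks(history, candidates, z_threshold=3.0):
    """Flag candidate check amounts that deviate sharply from a payee's history.

    history: past check amounts for one payee (hypothetical data).
    candidates: (check_id, amount) pairs to screen.
    Returns the check_ids whose amount lies more than z_threshold sample
    standard deviations from the historical mean -- a crude stand-in for
    the ML-based pattern screening described above.
    """
    if len(history) < 2:
        return []  # too little history to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # identical historical amounts: flag any deviation at all
        return [cid for cid, amt in candidates if amt != mu]
    return [cid for cid, amt in candidates
            if abs(amt - mu) / sigma > z_threshold]

# Example: a payee who normally receives checks near $1,200
past = [1180.0, 1220.0, 1195.0, 1210.0, 1205.0]
incoming = [("chk-001", 1215.0), ("chk-002", 9800.0)]
print(flag_anomalous_checks(past, incoming))  # ['chk-002']
```

A production system would combine many more signals (payee history, check images, transaction velocity), but the core flag-and-review loop is the same.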
Department of Transportation (DOT) #
- January 2020 – Autonomous Vehicles Policy (AV 4.0): The U.S. DOT, in collaboration with the White House, released "Ensuring American Leadership in Automated Vehicle Technologies 4.0," a comprehensive framework unifying federal efforts on self-driving cars and trucks. AV 4.0 outlined 10 principles (focused on safety, innovation, and consistency) and cataloged 38 existing federal initiatives on autonomous vehicles across agencies. It set the stage for a consistent, light-touch approach to AV regulation and R&D, emphasizing interagency coordination and modernizing regulations that may hinder AI-driven vehicle technology. Building on this, DOT published an Automated Vehicles Comprehensive Plan in January 2021 to integrate these principles into concrete actions. The plan defined three strategic goals: safety (prioritize AV safety and security), innovation (promote U.S. tech leadership and collaboration), and integration (incorporate AVs into the transportation system). It reaffirmed that DOT will modernize standards, invest in AV research pilots, and work with state and local governments to prepare for automated mobility.
- June 2021 – Mandatory AV Crash Reporting: For the first time, regulators required autonomous vehicle data reporting. The National Highway Traffic Safety Administration (NHTSA) issued Standing General Order 2021-01 mandating that manufacturers and operators of vehicles equipped with automated driving systems (ADS) or certain advanced driver-assistance systems (ADAS) promptly report any serious crashes to NHTSA. This order, effective June 2021, created an unprecedented continuous surveillance regime to gather real-world safety data on AI-driven vehicles. (Over 100 such crashes have been reported since, enabling regulators to spot defect trends.) In April 2025, NHTSA updated and extended this Order (issuing a Third Amended SGO) to streamline reporting burdens while preserving the requirement that firms report the most serious incidents within 5 days. The amended order narrows the scope of minor crashes that must be reported and eliminates monthly nil-reports, focusing oversight on significant and relevant incidents. This refinement balanced safety monitoring with practical compliance as the industry matured.
- March 2022 – Adapting Safety Standards for AI-Driven Vehicles: NHTSA finalized the first-ever revisions of the Federal Motor Vehicle Safety Standards (FMVSS) to accommodate driverless vehicles. The March 2022 final rule updated occupant protection requirements to account for vehicles without traditional manual controls (no steering wheel or pedals). For example, it revised terms like "driver's seat" in regulations to ensure that automated vehicles provide equivalent safety for all passengers even if no human driver is present. U.S. regulators thereby removed an important regulatory barrier, making it clear that fully autonomous vehicles need not have redundant human controls as long as they meet adjusted safety performance criteria. This rule was a milestone in modernizing decades-old car safety rules for the AI driving era.
- April 2025 – New Automated Vehicle Framework: On April 24, 2025, DOT announced a fresh Automated Vehicle Safety Framework (via NHTSA) to further enable safe deployment of self-driving cars. This included two key measures: (1) Expansion of the AV Exemption Program – NHTSA proposed to broaden and streamline the process for approving real-world testing of non-traditional vehicles (those without steering wheels, etc.), increasing the number of vehicles (up to 2,500 per manufacturer) that can be granted temporary exemptions for research and demonstration purposes. (2) Updates to the Standing General Order – as noted, NHTSA's Third Amended General Order took effect June 2025, refining crash-reporting rules based on two years of data (extending reporting deadlines for serious crashes from 1 day to 5 days, and eliminating duplicate reports). These steps aim to maintain rigorous oversight of AV safety while fostering innovation, signaling that the federal approach is evolving alongside the technology.
FAA (Federal Aviation Administration) #
- Roadmap for Artificial Intelligence Safety Assurance (Version I) — August 20, 2024 — https://www.faa.gov/media/82891 — FAA’s first AI safety roadmap establishing guiding principles for the safe introduction of AI into aviation systems and UAS operations.
- Certification Research Plan for AI Applications — August 2022 — https://www.faa.gov/sites/faa.gov/files/2022-08/PL_115-254_Sec_741_Certification_of_New_Technologies_into_the_NAS.pdf — Congressionally mandated report outlining FAA’s research plan for certifying AI applications in the National Airspace System.
- Notice 1370.52: Use of Generative AI Tools and Services — March 2025 — https://www.faa.gov/regulations_policies/orders_notices/index.cfm/go/document.information/documentID/1043604 — Interim policy governing internal FAA use of generative AI, replacing earlier guidance.
Equal Employment Opportunity Commission (EEOC) #
- Artificial Intelligence and Algorithmic Fairness Initiative
- New Guidance on Use of Artificial Intelligence in Hiring (June 6, 2023) inactive – Title VII guidance on employer use of AI and other algorithmic decision-making tools.
- EEOC Settles First AI-Discrimination Lawsuit (August 10, 2023) – The EEOC settled its first-ever lawsuit over AI discrimination in hiring. iTutorGroup Inc. will pay $365,000 to a group of rejected job seekers age 40 and over. More info.
- Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems (April 4, 2024) – With the Consumer Financial Protection Bureau, the Justice Department's Civil Rights Division, and the Federal Trade Commission.
Federal Communications Commission (FCC) #
- November 16, 2023 – AI and Robocalls Inquiry: The FCC opened a wide-ranging Notice of Inquiry (NOI) into the implications of artificial intelligence for telecom consumers, with a particular focus on AI-generated robocalls and robotexts. Noting the rise of AI tools that can mimic human voices or generate tailored scam calls, the Commission sought public comment on how emerging AI technology might facilitate illegal calls, how to define "AI" in a regulatory context, and what safeguards or disclosures might be needed. This NOI explored whether updates to the Telephone Consumer Protection Act (TCPA) rules are required to address voice-cloning technologies and other AI uses in communications fraud. It is part of a broader FCC effort (led by the Consumer and Governmental Affairs Bureau) to stay ahead of tech trends that could threaten consumers – in this case, highly convincing AI-driven spam or scam calls.
- February 8, 2024 – TCPA Declaratory Ruling on AI Voices: In response to the above inquiry (and growing concern about AI "voice spam"), the FCC issued a unanimous Declaratory Ruling confirming that the TCPA's existing restrictions on calls made with an "artificial or prerecorded voice" do encompass calls using AI-generated voices. In practical terms, this meant that if a telemarketer or caller uses a synthetic voice created by AI (even one that closely imitates a real person), they must follow the same consent and disclosure rules that apply to robocalls. The ruling emphasized that "AI-generated voice calls will be governed like other 'artificial or prerecorded voice' calls under the TCPA." Companies using AI to mimic human speakers in outbound calls must obtain prior express consent from called parties (or written consent for telemarketing), provide identifying information about the caller, and offer an opt-out mechanism – just as is required for traditional robocalls. The FCC made clear this covers any kind of voice-cloning technology, whether the AI voice is entirely artificial or a clone of a real person's voice. This decision closed a potential loophole and put industry on notice that "AI robocalls" are not exempt from regulation. (The FCC accompanied the ruling with consumer education warning of new scams using AI voices, and highlighted bipartisan support in Congress for tackling AI-driven robocalls.)
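The consent and disclosure obligations the ruling describes reduce to a simple decision rule. The sketch below (hypothetical field names; illustrative only, not legal advice) encodes the requirements as stated in the text: AI voices are treated like any other artificial or prerecorded voice, so consent, caller identification, and an opt-out are required.

```python
from dataclasses import dataclass

@dataclass
class OutboundCall:
    """Hypothetical summary of the facts relevant under the ruling."""
    uses_ai_or_prerecorded_voice: bool
    is_telemarketing: bool
    has_prior_express_consent: bool
    has_written_consent: bool
    caller_identified: bool
    optout_offered: bool

def tcpa_issues(call: OutboundCall) -> list[str]:
    """List potential problems with a call under the rules as described above.

    AI-generated voices are governed like any other 'artificial or
    prerecorded voice' under the TCPA. Illustrative only -- not legal advice.
    """
    issues = []
    if not call.uses_ai_or_prerecorded_voice:
        return issues  # a live human voice; these restrictions don't attach
    if call.is_telemarketing and not call.has_written_consent:
        issues.append("telemarketing requires prior express written consent")
    elif not call.is_telemarketing and not call.has_prior_express_consent:
        issues.append("requires prior express consent")
    if not call.caller_identified:
        issues.append("caller identification disclosure missing")
    if not call.optout_offered:
        issues.append("opt-out mechanism missing")
    return issues
```

Real compliance turns on many details the ruling and later rulemaking address (consent scope, exemptions, revocation), so this captures only the broad structure.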
- July 2024 – Proposed Rules for AI-Generated Calls and Texts: Building on the NOI and declaratory ruling, the FCC moved to actual rulemaking. In mid-2024, the Commission proposed new rules to strengthen protections against unwanted AI-generated communications. According to an FCC Fact Sheet (FCC 24-84) and reports, the proposal would: require that AI-generated call content be clearly disclosed to consumers, mandate that telemarketers obtain informed consent specifically for AI-driven calls, and update definitions to ensure any call or text involving automated or synthesized speech falls under robocall prohibitions. In essence, if a consumer is interacting with an AI (not a human) on a call, the consumer should be made aware from the outset. The NPRM also explored extending these rules to robotexts and tightening exemptions to prevent abuse by bad actors using AI to scale up spam. (Comments on these proposals were slated for fall 2024.) The FCC's actions underscore a whole-of-government recognition that generative AI, while beneficial, can also be misused to deceive or defraud consumers at scale, and regulators are rapidly updating tools (like the TCPA rules) to meet this challenge.
-
AI-Generated Voices in Robocalls Illegal (February 8, 2024) Federal Communications Commission - https://www.fcc.gov/document/fcc-makes-ai-generated-voices-robocalls-illegal Federal Communications Commission Federal Communications Commission
-
AI Workshop with NSF (July 13, 2023) Federal Communications Commission - https://www.fcc.gov/fcc-nsf-ai-workshop Federal Communications Commission
-
Inquiry into AI’s Impact on Robocalls (November 15, 2023) - https://www.fcc.gov/consumer-governmental-affairs/fcc-launches-inquiry-ais-impact-robocalls-and-robotexts Federal Communications Commission
-
First AI-Generated Robocall Rules NPRM (August 7, 2024) Federal Communications Commission - https://www.fcc.gov/document/fcc-proposes-first-ai-generated-robocall-robotext-rules-0 Federal Register +2
Federal Trade Commission (FTC) #
- FTC Approves Final Order against Workado, LLC, Which Misrepresented the Accuracy of its Artificial Intelligence Content Detection Product (August 28, 2025)
- FTC Sues to Stop Air AI from Using Deceptive Claims about Business Growth, Earnings Potential, and Refund Guarantees to Bilk Millions from Small Businesses (June 23, 2025)
- FTC Obtains Permanent Ban of E-Commerce Business Opportunity Scheme Operator (June 23, 2025)
- FBA Machine/Passive Scaling, FTC v. (June 23, 2025)
- Ascend Ecom (June 23, 2025) – The FTC has filed a lawsuit against an online business opportunity scheme that it alleges falsely claimed its “cutting edge” AI-powered tools would help consumers quickly earn thousands of dollars a month in passive income by opening online storefronts.
- FTC Finalizes Order Prohibiting IntelliVision from Making Deceptive Claims About Its Facial Recognition Software (January 13, 2025) – The Federal Trade Commission finalized an order against IntelliVision Technologies Corp., settling allegations that the company made false, misleading, or unsubstantiated claims that its AI-powered facial recognition software was free of gender or racial bias.
- FTC Order Requires Online Marketer to Pay $1 Million for Deceptive Claims that its AI Product Could Make Websites Compliant with Accessibility Guidelines (January 3, 2025) – The Federal Trade Commission will require software provider accessiBe to pay $1 million to settle allegations that it misrepresented the ability of its AI-powered web accessibility tool to make any website compliant with the Web Content Accessibility Guidelines (WCAG) for people with disabilities.
- FTC Approves Final Order against Sitejabber, Which Misrepresented Ratings and Reviews by Consumers Who Had Not Yet Received Products or Services (January 3, 2025) – The Federal Trade Commission approved a final order against Sitejabber, a company offering an AI-enabled consumer review platform, which deceived consumers by misrepresenting that ratings and reviews it published came from customers who had experienced the reviewed product or service, artificially inflating average ratings and review counts.
- FTC Submits Comment to FCC on Work to Protect Consumers from Potential Harmful Effects of AI (July 31, 2024)
- FTC Issues Orders to Eight Companies Seeking Information on Surveillance Pricing (July 23, 2024)
- FTC Has Taken Action Against NGL Labs (July 9, 2024)
  - Case pending over alleged law violations in the anonymous messaging app, including marketing to minors.
  - Concurring Statement of Commissioner Melissa Holyoak, July 9, 2024
  - Press Release, July 9, 2024
- U.S. Clears Way for Antitrust Inquiries of Nvidia, Microsoft and OpenAI (June 5, 2024)
- FTC Announces Tentative Agenda for May 23 Open Commission Meeting (May 16, 2024)
  - Includes a presentation on the Voice Cloning Challenge winners and a presentation on the Commission’s Final Rule Concerning Government and Business Impersonation.
  - Winners from: FTC Announces Exploratory Challenge to Prevent the Harms of AI-enabled Voice Cloning (November 16, 2023)
- Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems (April 4, 2024)
  - With the CFPB, the Justice Department’s Civil Rights Division, and the EEOC.
- FTC Action Leads to Ban for Owners of Automators AI E-Commerce Money-Making Scheme (February 27, 2024)
- Tech Summit on Artificial Intelligence (January 25, 2024) – The FTC’s Office of Technology hosted the FTC Tech Summit with the goal of facilitating a dialogue amid a dynamic innovation landscape.
  - Report of 3rd Panel: Consumer Facing Applications (April 19, 2024)
  - Report of 2nd Panel: A Quote Book | Data and Models (April 17, 2024)
  - Report of 1st Panel: Hardware and Infrastructure Edition (March 14, 2024)
- EU-US Hold Fourth Joint Technology Competition Policy Dialogue (April 11, 2024)
- FTC Sends $2.8 Million in Refunds to Consumers Harmed by DK Automation’s Phony Online Business and Crypto Moneymaking Schemes (March 28, 2024)
- FTC Staff Report: Building Tech Capacity in Law Enforcement Agencies (March 26, 2024) – This report establishes shared context and serves as a resource for building technical capacity in government agencies, highlighting how the Office of Technology (OT) deploys subject matter experts in regulatory and enforcement contexts. OT notes that there are many successful models and approaches, both at the FTC and at other agencies, to consider; the scope of this report is focused on OT’s model.
- Rite Aid Corporation, FTC v. (March 8, 2024) – The FTC charged Rite Aid with failing to implement reasonable procedures and prevent harm to consumers in its use of facial recognition technology in hundreds of stores. Rite Aid is banned from using AI facial recognition; see the Statement of Commissioner Alvaro M. Bedoya.
- Automators, FTC v. (February 27, 2024) – A federal court temporarily shut down a business opportunity scheme that lured consumers to invest $22 million in online stores, using unfounded claims about AI-driven success and profitability.
- FTC Proposes New Protections to Combat AI Impersonation of Individuals (February 15, 2024)
- Remarks of Benjamin Wiseman at Harvard Journal of Law & Technology on Worker Surveillance and AI (February 8, 2024)
- FTC Launches Inquiry into Generative AI Investments and Partnerships (January 25, 2024)
- FTC Authorizes Compulsory Process for AI-related Products and Services (November 21, 2023)
- In Comment Submitted to U.S. Copyright Office, FTC Raises AI-related Competition and Consumer Protection Issues (November 7, 2023)
- Amazon.com (Alexa), U.S. v. (July 21, 2023)
  - FTC and DOJ require Amazon to overhaul its data deletion practices and implement privacy safeguards regarding children’s data.
- Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems (April 25, 2023)
  - FTC Chair Khan and officials from DOJ, CFPB, and EEOC release a joint statement on AI.
- Using Artificial Intelligence and Algorithms (April 20, 2023) – Provides general FTC guidance on AI use, followed by regular blog updates and additional guidance.
- FTC Takes Action Against WW International (formerly Weight Watchers) for Illegally Collecting Kids’ Sensitive Health Data (March 4, 2022)
  - Required deletion of data from children under 13, deletion of any algorithms derived from such data, and payment of a fine.
- FTC Finalizes Settlement with Photo App Developer Over Misuse of Facial Recognition Technology (May 7, 2021)
Food and Drug Administration (FDA) #
- Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products (January 2025) – This guidance provides recommendations to sponsors and other interested parties on the use of artificial intelligence (AI) to produce information or data intended to support regulatory decision-making regarding safety, effectiveness, or quality for drugs. Specifically, this guidance provides a risk-based credibility assessment framework that may be used for establishing and evaluating the credibility of an AI model for a particular context of use (COU).
- Draft Guidance for Industry and FDA Staff (April 3, 2023)
  - Outlines a plan for the development of medical devices using or trained by machine learning.
- Framework for Regulatory Advanced Manufacturing Evaluation (FRAME) Initiative (November 1, 2023)
  - Developing a regulatory framework for four emerging technologies, including AI.
- AI/ML for Drug Development Discussion Paper (May 10, 2023)
  - Requests feedback on efforts to use AI in drug development.
- Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan (March 18, 2024)
  - Outlines the FDA’s commitment to cross-center collaboration and future thinking.
- Radiology Devices; Reclassification of Medical Image Analyzers (Federal Register)
National Artificial Intelligence Advisory Committee (NAIAC) #
- AI Safety (May 2024) – On March 5, 2024, the National Artificial Intelligence Advisory Committee convened two panels of experts to share their views on AI safety and the necessary methodologies to achieve it. The prepared statements, written submissions, and discussion with the experts informed these findings.
- Data Challenges and Privacy Protections
- Harnessing AI for Scientific Progress
- Provide Authority and Resources to Promote Responsible Procurement Innovation for AI
- Require Public Summary Reporting on Use of High-Risk AI
- Require Public Use Policies for High-Risk AI
- Expand the AI Use Case Inventory by Limiting the ‘Sensitive Law Enforcement’ Exception
- Expand the AI Use Case Inventory by Limiting the ‘Common Commercial Products’ Exception
- Implementation of the NIST AI Safety Institute (December 2023)
- National Campaign on Lifelong AI Career Success (November 2023)
- Enhancing AI Literacy for the United States of America (November 2023)
- Improve Monitoring of Emerging Risks from AI through Adverse Event Reporting (November 2023)
- Second Chance Skills and Opportunity Moonshot (October 2023)
- Generative AI Away from the Frontier (October 2023)
- Implementing the NIST AI RMF with a Rights-Respecting Approach (October 2023)
- AI’s Procurement Challenge (October 2023)
- Creating Institutional Structures to Support Safer AI Systems (October 2023)
- International Emerging Economies (August 2023)
- National Artificial Intelligence Advisory Committee: Year Two Insights Report
- Enhancing AI’s Positive Impact on Science and Medicine
- Towards Standards for Data Transparency for AI Models
- Law Enforcement Subcommittee: Year One Report and Roadmap
- Exploring the Impact of AI (November 2023)
- In Support of the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (November 2023)
- On AI and Existential Risk (October 2023)
- The Potential Future Risks of AI (October 2023)
- Implementing the NIST AI RMF With a Rights-Respecting Approach, Working Group on Rights-Respecting AI (October 2023)
- FAQs on Foundation Models and Generative AI (August 2023)
- Rationales, Mechanisms, and Challenges to Regulating AI (July 2023)
- NAIAC Year 1 Report (May 2023)
Government Accountability Office (GAO) #
- Artificial Intelligence: Generative AI Use and Management at Federal Agencies (July 29, 2025) – From 2023 to 2024, agencies’ use of generative AI increased ninefold. As agencies deploy generative AI, they report encountering challenges such as complying with federal policies while keeping up with this rapidly evolving technology.
- Artificial Intelligence: Agencies Have Begun Implementation but Need to Complete Key Requirements (December 12, 2023) – Federal law and guidance have several requirements for agencies implementing AI, but they haven’t all been met. For example, there’s no government-wide guidance on how agencies should acquire and use AI. Without such guidance, agencies can’t consistently manage AI. And until all requirements are met, agencies can’t effectively address AI risks and benefits. The 35 recommendations address these issues and more.
- Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities (June 30, 2021) – This report identifies key accountability practices—centered around the principles of governance, data, performance, and monitoring—to help federal agencies and others use AI responsibly.
OMB Requests #
- Memorandum: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (March 28, 2024)
- Identifying Priority Access or Quality Improvements for Federal Data and Models for Artificial Intelligence Research and Development (R&D) (July 10, 2019) – Identifies needs for additional access to, or improvements in the quality of, Federal data and models.
- Guidance for Regulation of Artificial Intelligence Applications (January 10, 2020) – Calls on agencies, when considering regulations or policies related to AI applications, to promote advancements in technology and innovation while protecting American technology, economic and national security, privacy, civil liberties, and other American values.
- Privacy Impact Assessments (January 6, 2024) – Requests public input on how privacy impact assessments (PIAs) may be more effective at mitigating privacy risks, including those exacerbated by AI and other technology/data capabilities.
- Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (November 11, 2023) – Establishes new agency requirements in areas of AI governance, innovation, and risk management, and directs agencies to adopt specific minimum risk management practices for uses of AI that impact the rights and safety of the public.
- Responsible Procurement of Artificial Intelligence in Government (March 28, 2024) – Develops an initial means to ensure that agency contracts for the acquisition of AI systems and services align with the guidance provided in the OMB AI memorandum and advance other aims identified in the Advancing American AI Act (“AI Act”).
National Institute of Standards and Technology (NIST) at the Department of Commerce #
- Center for AI Standards and Innovation (CAISI) – Formerly the AI Safety Institute, the Center for AI Standards and Innovation (CAISI) serves as industry’s primary point of contact within the U.S. government for testing and collaborative research on harnessing and securing the potential of commercial AI systems. CAISI also maintains a blog.
- NIST AI 600-1: AI RMF Generative AI Profile (April 29, 2024) – NIST released a draft publication, based on the AI Risk Management Framework (AI RMF), to help manage the risks of generative AI. The draft AI RMF Generative AI Profile helps organizations identify risks unique to generative AI and proposes actions for generative AI risk management aligned with their goals. It was developed with input from a public working group of more than 2,500 members and identifies 12 risks and 400+ actions developers can take. For more information, see NIST’s landing page.
- TTC Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management (December 1, 2022) – This Joint Roadmap aims to guide the development of tools, methodologies, and approaches to AI risk management and trustworthy AI by the EU and the United States and to advance our shared interest in supporting international standardization efforts and promoting trustworthy AI on the basis of a shared dedication to democratic values and human rights.
National Institutes of Health #
National Science and Technology Council #
- Houses the Joint Committee on Research Environments and the Select Committee on AI
National Science Foundation #
- Request for Information on the Development of a 2025 National Artificial Intelligence (AI) Research and Development (R&D) Strategic Plan (April 29, 2025) – The Office of Science and Technology Policy (OSTP) and the Networking and Information Technology Research and Development (NITRD) National Coordination Office (NCO) welcome input from all interested parties on how the previous administration’s National Artificial Intelligence Research and Development Strategic Plan (2023 Update) should be rewritten so that the United States can secure its position as the unrivaled world leader in artificial intelligence. The RFI focuses on R&D that accelerates AI-driven innovation, enhances U.S. economic and national security, promotes human flourishing, and maintains U.S. dominance in AI, with emphasis on the Federal government’s unique role in AI R&D over the next 3 to 5 years.
- NSF’s National Artificial Intelligence Research Institutes – Launched in 2020; consists of 25 AI institutes connecting over 500 funded and collaborative institutions globally.
- National Artificial Intelligence Research Resource Pilot – The National Artificial Intelligence Research Resource (NAIRR) will provide a shared national research infrastructure connecting U.S. researchers and educators to AI resources — computation, data, software, models, and training and educational materials — to advance research, discovery, and innovation. As directed by Winning the Race: America’s AI Action Plan, the launch of solicitation NSF 25-546 begins the transition from the NAIRR pilot to a scalable and sustainable NAIRR.
National Telecommunications and Information Administration (NTIA) #
- Dual-Use Foundation Models with Widely Available Model Weights Report (July 30, 2024) – This Report provides a non-exhaustive review of the risks and benefits of open foundation models, broken down into the broad categories of Public Safety; Societal Risks and Wellbeing; Competition, Innovation, and Research; Geopolitical Considerations; and Uncertainty in Future Risks and Benefits. It is important to understand these risks as marginal risks — that is, risks unique to the deployment of dual-use foundation models with widely available model weights relative to risks from other existing technologies, including closed-weight models and models that are not considered dual-use foundation models under the EO definition (such as foundation models with fewer than 10 billion parameters).
Securities and Exchange Commission (SEC) #
- SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence (March 18, 2024) – The Securities and Exchange Commission announced a settlement with two investment advisers, Delphia (USA) Inc. and Global Predictions Inc., for making false and misleading statements about their purported use of artificial intelligence (AI). The firms agreed to pay $400,000 in total civil penalties.
- Regulating AI in Securities Markets (July 26, 2023) – The SEC voted to propose new rules addressing conflicts of interest arising from broker-dealers’ and investment advisers’ use of predictive data analytics (PDA) and AI when interacting with investors. Concerned that firms’ AI-driven platforms might optimize for the firm’s benefit (e.g., maximizing fees or trading volume) at the expense of investors, the SEC’s proposal (Release No. 34-97990) would require firms to: (1) evaluate whether any AI/PDA usage places the firm’s interest ahead of the client’s, and (2) eliminate or neutralize the effect of any such conflict. In essence, the rule says that regardless of the technology used – be it a human advisor or an algorithm – investment advisers and brokers cannot put their own interests before their customers’. The SEC noted AI models can potentially influence investor behavior at scale (for good or ill), so this proactive step aims to “protect investors from conflicts of interest – and require that, regardless of the technology used, firms meet their obligations” under fiduciary and conduct standards. The proposal, dubbed the “Predictive Data Analytics Rule,” also mandates written policies and recordkeeping to ensure compliance. (This followed public warnings by SEC Chair Gary Gensler that AI in trading and advice could lead to systematic conflicts or even market manipulation if left unchecked.) The rule is currently in the comment period, but it signals a broader federal effort to oversee AI in financial decision-making contexts.
- Financial regulatory agencies, including the Office of the Comptroller of the Currency, have developed coordinated approaches to AI governance, led by enforcement actions and risk assessments.