The federal government has undertaken hundreds of AI-related actions, with agencies implementing comprehensive frameworks spanning national security, financial regulation, transportation safety, environmental protection, and international cooperation. This research identifies 150+ significant AI actions from 2020-2025 across major federal departments and independent agencies.
Key documents #
- NIST AI Risk Management Framework (NIST AI 100-1) – NIST released the AI Risk Management Framework (AI RMF 1.0) in January 2023. On April 29, 2024, NIST released a draft Generative AI Profile based on the AI RMF to help manage the risks of generative AI. The draft profile can help organizations identify unique risks posed by generative AI and proposes actions for generative AI risk management aligned with their goals. It was developed over the preceding year with input from a public working group of more than 2,500 members, and focuses on 12 risks and 400+ actions developers can take. For more information, see NIST’s landing page.
White House Actions under Trump (2025-present) #
- Executive Order on Preventing Woke AI in the Federal Government (July 23, 2025) – While the Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace, in the context of Federal procurement, it has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas. Building on Executive Order 13960 of December 3, 2020 (Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government), this order helps fulfill that obligation in the context of large language models.
- Executive Order on Accelerating Federal Permitting of Data Center Infrastructure (July 23, 2025) – This EO facilitates expedited permitting for data centers and related infrastructure, energy, and manufacturing projects in numerous ways, including changes to the Clean Water Act, the Clean Air Act, the National Environmental Policy Act, and Fixing America’s Surface Transportation Act.
- Winning the Race: America’s AI Action Plan (July 10, 2025)
- Removing Barriers to American Leadership in Artificial Intelligence (January 23, 2025)
- Public Comment Invited on Artificial Intelligence Action Plan (January 25, 2025)
White House Actions under Biden (2020-2025) #
- inactive Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure (January 14, 2025)
- inactive Vice President Harris Announces OMB Policy to Advance Governance, Innovation, and Risk Management in Federal Agencies’ Use of Artificial Intelligence (March 28, 2024)
- National Artificial Intelligence Research Resource (NAIRR) pilot (January 2024) – Launched by NSF. Provides researchers with resources for responsible AI research.
- inactive Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (October 30, 2023)
- 90 Day Progress (January 29, 2024)
- 180 Day Progress (April 29, 2024)
- 270 Day Progress (July 26, 2024)
- Stanford HAI’s Tracking U.S. Executive Action on AI
- inactive Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI (July 21, 2023) – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI have voluntarily committed to internal and external security testing of their AI systems before their release, and committed to sharing information across the industry and with governments, civil society, and academia on managing AI risks.
- inactive New Steps to Advance Responsible Artificial Intelligence Research, Development, and Deployment (May 23, 2023) – In May 2023, the Biden-Harris administration updated the National AI Research and Development Strategic Plan, emphasizing a principled and coordinated approach to international collaboration in AI research.
- inactive New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety (May 4, 2023) – OSTP releases National AI R&D Strategic Plan, updated for the first time since 2019.
- inactive Biden-Harris Administration Tackles Racial and Ethnic Bias in Home Valuations (March 23, 2023)
- inactive Executive Order on Further Advancing Racial Equity and Support for Underserved Communities Through The Federal Government (February 26, 2023)
- inactive Blueprint for an AI Bill of Rights (October 4, 2022)
- inactive Key Actions to Advance Tech Accountability and Protect the Rights of the American Public (October 4, 2022)
- inactive Executive Order On Advancing Racial Equity and Support for Underserved Communities Through the Federal Government (January 20, 2021)
White House Actions under Trump (2017-2021) #
- Guidance for Regulation of Artificial Intelligence Applications – In January 2020, the Office of Management and Budget (OMB) published a draft memorandum featuring 10 “AI Principles” and outlining its proposed approach to regulatory guidance for the private sector, echoing the “light-touch” regulatory approach espoused by the 2019 Executive Order.
- Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government EO 13960 – Directed agencies to conduct an annual inventory of their AI use cases and publish them to the extent possible, resulting in the AI Use Case Inventory
- Maintaining American Leadership in Artificial Intelligence EO 13859 – Attempts to ensure American leadership in R&D related to AI.
- Taking Additional Steps to Address the National Emergency with Respect to Significant Malicious Cyber-Enabled Activities EO 13984 – Requires providers of United States IaaS products to verify the identity of persons obtaining an IaaS account and maintain records of those transactions, and authorizes restricting access to U.S. IaaS products by foreign persons known or suspected to engage in malicious cyber activities.
OMB Requests #
- Memorandum: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (March 28, 2024)
- Identifying Priority Access or Quality Improvements for Federal Data and Models for Artificial Intelligence Research and Development (R&D) – Identifies needs for additional access to, or improvements in the quality of, Federal data and models.
- Guidance for Regulation of Artificial Intelligence Applications – Calls on agencies, when considering regulations or policies related to AI applications, to promote advancements in technology and innovation while protecting American technology, economic and national security, privacy, civil liberties, and other American values.
- Privacy Impact Assessments – Requests public input on how privacy impact assessments (PIAs) may be more effective at mitigating privacy risks, including those exacerbated by AI and other technology/data capabilities.
- Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence – Establishes new agency requirements in areas of AI governance, innovation, and risk management, and directs agencies to adopt specific minimum risk management practices for uses of AI that impact the rights and safety of the public.
- Responsible Procurement of Artificial Intelligence in Government – Develops an initial means to ensure that agency contracts for the acquisition of AI systems and services align with the guidance provided in the OMB AI memorandum and advance other aims identified in the Advancing American AI Act (“AI Act”).
Agency Actions #
Consumer Financial Protection Bureau (CFPB) #
- inactive Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems (April 4, 2024) – “Existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices. The Consumer Financial Protection Bureau, the Department of Justice’s Civil Rights Division, the Equal Employment Opportunity Commission, and the Federal Trade Commission are among the federal agencies responsible for enforcing civil rights, non-discrimination, fair competition, consumer protection, and other vitally important legal protections. We take seriously our responsibility to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws, and each of our agencies has previously expressed concern about potentially harmful uses of automated systems.”
- CFPB and Federal Partners Confirm Automated Systems and Advanced Technology Not an Excuse for Lawbreaking Behavior (April 25, 2023) – The Civil Rights Division of the United States Department of Justice, the Consumer Financial Protection Bureau, the Federal Trade Commission, and the U.S. Equal Employment Opportunity Commission released a joint statement outlining a commitment to enforce their respective laws and regulations.
- Generative AI chatbot warnings – The CFPB warns banks about using AI chatbots that fail to provide timely or straightforward answers.
- CFPB circular (May 2022) – Asserts that creditors using AI in credit decisions must still provide specific adverse action notices.
- CFPB circular (September 2023) – Reaffirms that adverse action notices must state specific and accurate reasons, even when lenders rely on complex AI models.
- CFPB report (June 2023) – Warns financial institutions about legal obligations when deploying chatbot technology.
- CFPB proposed rule (June 2023) – Mandates that entities adopt policies to ensure certain AI models used in credit decisions adhere to quality control standards.
Department of Commerce #
- October 7, 2022 – Controls on Advanced AI Chips: The Commerce Department’s Bureau of Industry and Security (BIS) imposed sweeping export controls on advanced computing chips and semiconductor technology that underpin high-end AI systems. This seminal rule requires licenses for exporting to China (and other adversary nations) certain graphics processing units (GPUs) and AI accelerators that exceed specified performance thresholds, as well as the equipment to manufacture such semiconductors. The controls, issued in response to national security concerns, aim to restrict China’s ability to obtain the cutting-edge chips needed for training large-scale AI models and developing advanced military applications. BIS officials noted that AI capabilities enabled by supercomputing could improve an adversary’s military decision-making, autonomous weapons, signals intelligence, and facial recognition surveillance. Thus, the rule also added certain Chinese supercomputing and AI entities to the Entity List, barring U.S. technology transfer to them. This export control represented one of the most significant tech leverage actions to date, often referred to as the start of a “Tech Iron Curtain” around AI hardware.
- October 17, 2023 – Enhanced AI Export Safeguards: BIS updated and strengthened the 2022 controls to close loopholes and address evolving technology. The updated rules expanded the scope of chips covered (introducing a new “performance density” metric to catch chip designs that might circumvent prior thresholds) and imposed a worldwide license requirement on any export of controlled AI chips to companies headquartered in any country of concern (preventing proxy routing). They also added 43 more countries to the list requiring notification for exports of less advanced (but still sensitive) chips, beyond just China/Macau. Commerce emphasized that these measures ensure “those seeking to obtain powerful advanced chips… will not use these technologies to undermine U.S. national security”, and that the U.S. will “continue to hone these controls as technology evolves”. The tightened rules reflect an ongoing policy to constrain authoritarian regimes’ access to the highest-caliber AI computing infrastructure. (Notably, in parallel, the U.S. is working with allies on similar controls and considering outbound investment screening for AI sectors.) These export controls are a tool to slow rival militaries’ progress in AI by targeting the hardware “choke points” required for training advanced AI models, while still allowing civilian AI collaboration in areas like healthcare to continue under license exceptions.
Department of Defense #
- February 2020 – DoD Adopts AI Ethical Principles: The Pentagon formally adopted five Artificial Intelligence Ethical Principles (responsible, equitable, traceable, reliable, governable) for AI use, becoming the first military in the world to do so. These principles guide the design, development, and deployment of AI in defense.
- June 22, 2022 – DoD Responsible AI Strategy: Deputy Secretary Kathleen Hicks approved the DoD’s Responsible AI Strategy and Implementation Pathway, providing a roadmap to ensure a trusted AI ecosystem built on those ethical principles. This strategy operationalizes AI ethics through governance structures, testing standards, accountability checks, and workforce training, in line with DoD’s February 2020 principles.
- June 1, 2022 – Establishment of CDAO: DoD stood up the Chief Digital and Artificial Intelligence Office (CDAO) to accelerate AI adoption, data analytics, and digital innovation across the Department. The CDAO unified the Joint AI Center and other data/tech units, and in its first year set policies and structures to enable AI-driven advantages for U.S. forces. It launched initiatives for digital talent (training 1,500 personnel in AI skills), data quality improvements, and new acquisition mechanisms like Tradewind and “TryAI” to quickly test and procure AI solutions. The CDAO also helped publish DoD’s Responsible AI Implementation tenets, establishing processes to continuously test and mitigate biases in AI systems.
- DOD AI Ethical Principles (February 24, 2020) - https://www.defense.gov/News/Releases/release/article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/
- 2023 Data, Analytics, and AI Adoption Strategy (November 2, 2023) - https://media.defense.gov/2023/Nov/02/2003333300/-1/-1/1/DOD_DATA_ANALYTICS_AI_ADOPTION_STRATEGY.PDF
- Chief Digital and Artificial Intelligence Office (CDAO) establishment (2022) - https://www.ai.mil/
- AI Rapid Capabilities Cell launch (December 2024) - https://www.defense.gov/News/Releases/Release/Article/3996199/cdao-and-diu-launch-new-effort-focused-on-accelerating-dod-adoption-of-ai-capab/
- National Security Memorandum-25 implementation (October 24, 2024) - https://bidenwhitehouse.archives.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/
Department of Education #
- Artificial Intelligence and the Future of Teaching and Learning (May 2023) – Advocates for AI technologies that support diverse learning environments and recommends robust frameworks to manage AI’s impact on education.
- Designing for Education with Artificial Intelligence: An Essential Guide for Developers (July 2024)
Department of Energy (DOE) #
- May 20, 2022 – DOE AI Governance Council: DOE established an Artificial Intelligence Advancement Council (AIAC), the first of its kind at the Department. Chartered by Deputy Secretary David Turk, the AIAC coordinates AI activities across DOE’s extensive enterprise and defines Department-wide AI priorities. It brings together top DOE leaders (Science, Nuclear Security, Intelligence, General Counsel, etc.) to provide recommendations on a comprehensive DOE AI strategy led by DOE’s Artificial Intelligence and Technology Office (AITO). The AIAC’s focus includes AI governance, innovation, and ethics in DOE’s missions (science, energy, national security).
- Advancing AI R&D and Infrastructure: DOE, leveraging its national laboratories and supercomputing facilities, has launched major AI research initiatives. DOE announced the Frontiers in AI for Science, Security, and Technology (FASST) initiative, aimed at developing breakthrough AI capabilities to accelerate scientific discovery and bolster U.S. leadership in critical technologies. DOE’s world-leading exascale supercomputers (like Frontier and Aurora) are being harnessed for “AI for Science” programs to model climate, fusion energy, materials, and more. The Department is also building dedicated AI testbeds to rigorously assess AI systems’ safety and security. For example, DOE is testing AI models against adversarial attacks in control systems (like power grid cybersecurity) and against chemical/biological threats. These efforts align with the national AI risk management framework, ensuring cutting-edge AI research is accompanied by risk mitigation and trustworthiness measures. (DOE also co-sponsors the National AI Research Institutes with NSF, supporting 25 multi-disciplinary AI centers launched since 2020 in areas from climate-smart agriculture to advanced manufacturing.)
Department of Homeland Security (DHS) #
- DHS Artificial Intelligence Task Force (April 21, 2023) – DHS Secretary Alejandro Mayorkas launched the Department’s first-ever AI Task Force (AITF) to drive specific applications of AI in critical homeland security missions. The task force is applying AI to enhance supply chain screening (detecting forced-labor goods), counter the flow of fentanyl (identifying illicit shipments and precursor chemicals), bolster cybersecurity and critical infrastructure protection, and aid investigations of child exploitation by analyzing large volumes of data. The AITF was given 60 days to deliver an action plan for these priorities and reports regularly to the Secretary.
- September 14, 2023 – DHS AI Policy Framework: Building on the task force’s work, DHS announced new policies to ensure the responsible use of AI. Notably, Secretary Mayorkas appointed DHS’s first Chief AI Officer (CIO Eric Hysen) to coordinate AI innovation and safety across the agency. DHS issued:
- Policy Statement 139-06, “Acquisition and Use of Artificial Intelligence and Machine Learning by DHS Components,” which sets department-wide AI principles. It mandates that DHS’s use of AI align with EO 13960 (promoting trustworthy AI in government) and all applicable laws, prohibits AI systems that engage in illegal discrimination, and requires that AI adoption demonstrably improve mission effectiveness. DHS affirmed it “will not collect, use, or disseminate data used in AI activities” or deploy AI systems that make decisions based on sensitive characteristics like race, sex, or religion, echoing a commitment to minimize bias.
- Directive 026-11, “Use of Face Recognition and Face Capture Technologies,” which imposes strict oversight on DHS’s use of facial recognition AI. All such systems must undergo extensive testing to ensure no unintended bias or disparate impact, with review by DHS’s Privacy Office and Office for Civil Rights/Civil Liberties. The directive also gives U.S. citizens the right to opt out of face recognition for non-law-enforcement uses and prohibits using face recognition as the sole basis for any law enforcement action. Together, these policies ensure DHS harnesses AI’s benefits for security while protecting privacy and civil rights.
- DHS Directive 139-08: Artificial Intelligence Use and Acquisition (January 15, 2025) - https://www.dhs.gov/sites/default/files/2025-01/25_0116_CIO_DHS-Directive-139-08-508.pdf
- 2024 DHS Artificial Intelligence Roadmap (March 2024) - https://www.dhs.gov/sites/default/files/2024-03/24_0315_ocio_roadmap_artificialintelligence-ciov3-signed-508.pdf
- AI Safety and Security Board (AISSB) establishment (April 26, 2024) - https://www.federalregister.gov/documents/2024/04/29/2024-09132/establishment-of-the-artificial-intelligence-safety-and-security-board
- DHS AI Corps launch (February 2024) for hiring 50 AI specialists
- Generative AI Public Sector Playbook (January 7, 2025) - https://www.dhs.gov/archive/news/2025/01/07/dhs-unveils-generative-ai-public-sector-playbook
Department of Housing and Urban Development (HUD) #
- inactive Guidance on Application of the Fair Housing Act to the Screening of Applicants for Rental Housing (April 29, 2024) – This guidance from HUD’s Office of Fair Housing and Equal Opportunity explains how the Fair Housing Act protects certain rights of applicants for rental housing. It discusses how housing providers and companies that offer tenant screening services can screen applicants for rental housing in a nondiscriminatory way and recommends best practices for complying with the Fair Housing Act. This guidance may also help applicants understand their rights and recognize when they might have been denied housing unlawfully.
- inactive Guidance on Application of the Fair Housing Act to the Advertising of Housing, Credit, and Other Real Estate-Related Transactions through Digital Platforms (April 29, 2024) – This guidance from HUD’s Office of Fair Housing and Equal Opportunity explains how the Fair Housing Act (“Act”) applies to the advertising of housing, credit, and other real estate-related transactions through digital platforms. In particular, it addresses the increasingly common use of automated systems, such as algorithmic processes and Artificial Intelligence (“AI”), to facilitate advertisement targeting and delivery.
Department of Justice (DOJ) #
- Justice Department and National Economic Council Partner to Identify State Laws with Out-Of-State Economic Impacts (August 15, 2025) – The Justice Department and the National Economic Council announced an effort to identify State laws that significantly and adversely affect the national economy or interstate economic activity and to solicit solutions to address such effects. They invite public comments to support the Administration’s mission to address laws that hinder America’s economic growth, including those that burden industry and our small businesses.
- Artificial Intelligence and Civil Rights (February 3, 2025) – Landing page for public speeches, statements, and readouts; Civil Rights Division guidance and other documents; and cases.
- AI Inventory (January 21, 2025) – The October 30, 2023, Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” required federal agencies to report on their use of AI by conducting an annual inventory of their AI use cases.
- inactive Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems (April 4, 2024) – According to the statement, “Existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices. The Consumer Financial Protection Bureau, the Department of Justice’s Civil Rights Division, the Equal Employment Opportunity Commission, and the Federal Trade Commission are among the federal agencies responsible for enforcing civil rights, non-discrimination, fair competition, consumer protection, and other vitally important legal protections. We take seriously our responsibility to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws, and each of our agencies has previously expressed concern about potentially harmful uses of automated systems.”
Department of Labor (DOL) #
- April 30, 2024 – OFCCP Guidance on AI Hiring Tools: The DOL’s Office of Federal Contract Compliance Programs (OFCCP) issued new guidance to federal contractors on preventing discrimination when using AI and automated systems in employment decisions. This move recognized that many employers (including government contractors) are adopting algorithmic résumé screeners, hiring tests, and HR chatbots – which, if unchecked, could perpetuate bias against protected groups. The OFCCP guidance (issued as FAQs and best practices) reminds contractors of their obligations under equal employment opportunity laws (e.g., Executive Order 11246, Section 503, VEVRAA) and clarifies that the use of AI does not exempt them from liability. It provides “promising practices,” such as: ensure algorithms are validated as job-related and consistent with business necessity; conduct regular bias audits on AI outcomes (to see if, for instance, an AI hiring tool disproportionately rejects women or minorities); provide reasonable accommodations when AI assessments involve people with disabilities (consistent with the ADA); and maintain transparency with applicants about how AI is used. The guidance effectively serves as a blueprint for aligning AI tools with EEO compliance, warning that OFCCP may scrutinize contractors’ AI systems during audits. This complements the EEOC’s parallel AI-in-employment initiative – together signaling that algorithmic discrimination in hiring or promotions is an enforcement priority for civil rights agencies.
Department of the Treasury #
- March 29, 2021 – RFI on AI in Financial Services: The Federal Reserve, OCC, FDIC, CFPB, and NCUA jointly issued a Request for Information on the use of AI/ML by financial institutions. This RFI (86 FR 16837) sought public input on how banks and lenders are employing AI in areas like credit underwriting, fraud detection, customer service (e.g., chatbots), and risk management. Regulators asked about governance and risk controls around AI, potential fair lending or bias issues, and whether any clarifications to regulations were needed. The goal was to better understand industry practices and potentially craft guidance that ensures AI algorithms in finance are used in a safe, sound, and non-discriminatory manner. (The comment period was later extended to July 2021, with dozens of industry and consumer group responses.)
- November 2022 & June 12, 2024 – Treasury’s AI Policy in Finance: The Department of the Treasury has taken a leadership role in assessing AI’s implications for financial stability and competition. In a November 2022 report on fintech competition, Treasury analyzed how innovations in AI are transforming credit and payments (finding that AI can expand access but also raises new privacy, bias, and explainability challenges). Building on that, in June 2024 Treasury issued an RFI on “Uses, Opportunities, and Risks of AI in the Financial Services Sector.” This RFI (89 FR 50048) invites input from banks, fintechs, investors, consumer advocates, and the public on where AI is being applied in finance, what benefits and efficiencies it may bring, and what risks to consumers or systemic stability it could pose. Treasury is specifically looking at issues like AI-driven fraud, algorithmic discrimination in lending or insurance, concentration of AI resources among a few big tech firms, and the need for any regulatory or policy responses. (Comments are due by August 2024, and will inform Treasury’s and the Financial Stability Oversight Council’s next steps on AI oversight.)
- September 8, 2023 – IRS Uses AI to Target Tax Evasion: The Internal Revenue Service announced a sweeping new compliance initiative to leverage AI and data analytics to catch high-end tax evaders. After years of declining audit rates for wealthy taxpayers, the IRS (bolstered by Inflation Reduction Act funding) is deploying machine learning models to sift through large partnerships and complex financial arrangements to identify likely tax avoidance schemes. AI will help the IRS analyze patterns in filings of ultra-wealthy individuals, large corporations, and pass-through entities to better detect underreporting or abusive tax shelters. The IRS stressed that these advanced tools will focus enforcement on the most egregious cases – the agency pledged audit rates won’t rise for those under $400k income – and will reduce burdens from “no-change” audits by refining case selection. This marks one of the first major uses of AI in federal tax enforcement, aimed at shrinking the estimated $450+ billion annual tax gap. (The IRS is also expanding simpler AI uses like chatbots to improve taxpayer customer service in handling routine notices.)
- February 28, 2024 – Treasury AI Fraud Detection: The Treasury Department’s Bureau of the Fiscal Service reported a successful new application of AI in protecting federal funds. In FY2023, the Fiscal Service’s Office of Payment Integrity (OPI) implemented an AI-enhanced process to combat the epidemic of paper check fraud, which had grown 385% since the pandemic. By using machine learning to flag suspicious patterns and anomalies in negotiated U.S. government checks, Treasury was able to intercept and recover over $375 million in fraudulent payments in one year. This near-real-time fraud detection system strengthens the “nation’s checkbook” – safeguarding Social Security checks, tax refunds, veterans’ benefits, and other payments from being stolen or altered. Deputy Secretary Wally Adeyemo highlighted that AI allowed Treasury to expedite fraud detection and recovery of taxpayer dollars while ensuring legitimate payments still go out on time. OPI is partnering with law enforcement, and multiple criminal cases and arrests have already stemmed from the AI-flagged fraud leads. This Treasury success story shows AI’s promise in reducing improper payments and protecting public programs from criminal abuse.
Department of Transportation (DOT) #
- January 2020 – Autonomous Vehicles Policy (AV 4.0): The U.S. DOT, in collaboration with the White House, released “Ensuring American Leadership in Automated Vehicle Technologies 4.0,” a comprehensive framework unifying federal efforts on self-driving cars and trucks transportation.gov. AV 4.0 outlined 10 principles (focused on safety, innovation, and consistency) and cataloged 38 existing federal initiatives on autonomous vehicles across agencies transportation.gov transportation.gov. It set the stage for a consistent, light-touch approach to AV regulation and R&D, emphasizing interagency coordination and modernizing regulations that may hinder AI-driven vehicle technology. Building on this, DOT published an Automated Vehicles Comprehensive Plan in January 2021 to integrate these principles into concrete actions transportation.gov itsdigest.com. The plan defined three strategic goals: safety (prioritize AV safety and security), innovation (promote U.S. tech leadership and collaboration), and integration (incorporate AVs into the transportation system) transportation.gov itsdigest.com. It reaffirmed that DOT will modernize standards, invest in AV research pilots, and work with state and local governments to prepare for automated mobility.
- June 2021 – Mandatory AV Crash Reporting: For the first time, regulators required autonomous vehicle data reporting. The National Highway Traffic Safety Administration (NHTSA) issued a Standing General Order 2021-01 mandating that manufacturers and operators of vehicles equipped with automated driving systems (ADS) or certain advanced driver-assistance systems (ADAS) promptly report any serious crashes to NHTSA environmentalenergybrief.sidley.com environmentalenergybrief.sidley.com. This order, effective June 2021, created an unprecedented continuous surveillance regime to gather real-world safety data on AI-driven vehicles. (Over 100 such crashes have been reported since, enabling regulators to spot defect trends.) In April 2025, NHTSA updated and extended this Order (issuing a Third Amended SGO) to streamline reporting burdens while preserving the requirement that firms report the most serious incidents within 5 days environmentalenergybrief.sidley.com environmentalenergybrief.sidley.com. The amended order narrows the scope of minor crashes that must be reported and eliminates monthly nil-reports, focusing oversight on significant and relevant incidents environmentalenergybrief.sidley.com environmentalenergybrief.sidley.com. This refinement balanced safety monitoring with practical compliance as the industry matured.
- March 2022 – Adapting Safety Standards for AI-Driven Vehicles: NHTSA finalized the first-ever revisions of the Federal Motor Vehicle Safety Standards (FMVSS) to accommodate driverless vehicles. The March 2022 final rule updated occupant protection requirements to account for vehicles without traditional manual controls (no steering wheel or pedals). For example, it revised terms like “driver’s seat” in regulations to ensure that automated vehicles provide equivalent safety for all passengers even if no human driver is present. U.S. regulators thereby removed an important regulatory barrier, making it clear that fully autonomous vehicles need not have redundant human controls as long as they meet adjusted safety performance criteria. This rule was a milestone in modernizing decades-old car safety rules for the AI driving era.
- April 2025 – New Automated Vehicle Framework: On April 24, 2025, DOT announced a fresh Automated Vehicle Safety Framework (via NHTSA) to further enable safe deployment of self-driving cars. This included two key measures: (1) Expansion of the AV Exemption Program – NHTSA proposed to broaden and streamline the process for approving real-world testing of non-traditional vehicles (those without steering wheels, etc.), increasing the number of vehicles (up to 2,500 per manufacturer) that can be granted temporary exemptions for research and demonstration purposes. (2) Updates to the Standing General Order – as noted, NHTSA’s Third Amended General Order took effect June 2025, refining crash-reporting rules based on two years of data (extending reporting deadlines for serious crashes from 1 day to 5 days, and eliminating duplicate reports). These steps aim to maintain rigorous oversight of AV safety while fostering innovation, signaling that the federal approach is evolving alongside the technology.
Equal Employment Opportunity Commission (EEOC) #
- Artificial Intelligence and Algorithmic Fairness Initiative
- New Guidance on Use of Artificial Intelligence in Hiring (June 6, 2023) inactive
  - Title VII Guidance on employer use of AI and other algorithmic decision-making tools.
- EEOC Settles First AI-Discrimination Lawsuit (August 10, 2023)
  - The EEOC settled its first-ever AI discrimination in hiring lawsuit: iTutorGroup Inc. will pay $365,000 to a group of rejected job seekers age 40 and over. More info.
- Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems (April 4, 2024)
  - With the Consumer Financial Protection Bureau, the Justice Department’s Civil Rights Division, and the Federal Trade Commission.
Federal Communications Commission (FCC) #
- November 16, 2023 – AI and Robocalls Inquiry: The FCC opened a wide-ranging Notice of Inquiry (NOI) into the implications of artificial intelligence for telecom consumers, with a particular focus on AI-generated robocalls and robotexts. Noting the rise of AI tools that can mimic human voices or generate tailored scam calls, the Commission sought public comment on how emerging AI tech might facilitate illegal calls, how to define “AI” in a regulatory context, and what safeguards or disclosures might be needed. The NOI explored whether updates to the Telephone Consumer Protection Act (TCPA) rules are required to address voice-cloning technologies and other AI uses in communications fraud. It is part of a broader FCC effort (led by the Consumer and Governmental Affairs Bureau) to stay ahead of tech trends that could threaten consumers – in this case, highly convincing AI-driven spam or scam calls.
- February 8, 2024 – TCPA Declaratory Ruling on AI Voices: In response to the inquiry (and growing concern about AI “voice spam”), the FCC issued a unanimous Declaratory Ruling confirming that the TCPA’s existing restrictions on calls made with an “artificial or prerecorded voice” do encompass calls using AI-generated voices. In practical terms, if a telemarketer or caller uses a synthetic voice created by AI (even one that closely imitates a real person), they must follow the same consent and disclosure rules that apply to robocalls. The ruling emphasized that “AI-generated voice calls will be governed like other ‘artificial or prerecorded voice’ calls under the TCPA.” Companies using AI to mimic human speakers in outbound calls must obtain prior express consent from called parties (or written consent for telemarketing), provide identifying information of the caller, and offer an opt-out mechanism – just as is required for traditional robocalls. The FCC made clear this covers any kind of voice-cloning technology, whether the AI voice is entirely artificial or a clone of a real person’s voice. This decision closed a potential loophole and put industry on notice that “AI robocalls” are not exempt from regulation. (The FCC accompanied the ruling with consumer education warning of new scams using AI voices, and highlighted bipartisan support in Congress for tackling AI-driven robocalls.)
- July 2024 – Proposed Rules for AI-Generated Calls and Texts: Building on the NOI and declaratory ruling, the FCC moved to actual rulemaking. In mid-2024, the Commission proposed new rules to strengthen protections against unwanted AI-generated communications. According to an FCC Fact Sheet (FCC 24-84) and reports, the proposal would require that AI-generated call content be clearly disclosed to consumers, mandate that telemarketers obtain informed consent specifically for AI-driven calls, and update definitions to ensure any call or text involving automated or synthesized speech falls under robocall prohibitions. In essence, if a consumer is interacting with an AI (not a human) on a call, the consumer should be made aware from the outset. The NPRM also explored extending these rules to robotexts and tightening exemptions to prevent abuse by bad actors using AI to scale up spam. (Comments on these proposals were slated for fall 2024.) The FCC’s actions underscore a whole-of-government recognition that generative AI, while beneficial, can also be misused to deceive or defraud consumers at scale, and regulators are rapidly updating tools (like the TCPA rules) to meet this challenge.
- AI-Generated Voices in Robocalls Illegal (February 8, 2024) – https://www.fcc.gov/document/fcc-makes-ai-generated-voices-robocalls-illegal
- AI Workshop with NSF (July 13, 2023) – https://www.fcc.gov/fcc-nsf-ai-workshop
- Inquiry into AI’s Impact on Robocalls (November 15, 2023) – https://www.fcc.gov/consumer-governmental-affairs/fcc-launches-inquiry-ais-impact-robocalls-and-robotexts
- First AI-Generated Robocall Rules NPRM (August 7, 2024) – https://www.fcc.gov/document/fcc-proposes-first-ai-generated-robocall-robotext-rules-0
Federal Trade Commission (FTC) #
- FTC Submits Comment to FCC on Work to Protect Consumers from Potential Harmful Effects of AI (July 31, 2024)
- FTC Issues Orders to Eight Companies Seeking Information on Surveillance Pricing (July 23, 2024)
- FTC Has Taken Action Against NGL Labs (July 9, 2024)
  - Case pending over alleged law violations in the anonymous messaging app, including marketing to minors.
  - Concurring Statement of Commissioner Melissa Holyoak, July 9, 2024
  - Press Release, July 9, 2024
- U.S. Clears Way for Antitrust Inquiries of Nvidia, Microsoft and OpenAI (June 5, 2024)
- FTC Announces Tentative Agenda for May 23 Open Commission Meeting (May 16, 2024)
  - Includes a presentation on the Voice Cloning Challenge winners and a presentation on the Commission’s Final Rule Concerning Government and Business Impersonation.
  - Winners from: FTC Announces Exploratory Challenge to Prevent the Harms of AI-enabled Voice Cloning (November 16, 2023)
- Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems (April 4, 2024)
  - With the CFPB, Justice Department’s Civil Rights Division, and EEOC.
- FTC Action Leads to Ban for Owners of Automators AI E-Commerce Money-Making Scheme (February 27, 2024)
- FTC held a Tech Summit on Artificial Intelligence (January 25, 2024)
  - Report of 3rd Panel: Consumer Facing Applications (April 19, 2024)
  - Report of 2nd Panel: A Quote Book | Data and Models (April 17, 2024)
  - Report of 1st Panel: Hardware and Infrastructure Edition (March 14, 2024)
- EU-US Hold Fourth Joint Technology Competition Policy Dialogue (April 11, 2024)
- FTC Sends $2.8 Million in Refunds to Consumers Harmed by DK Automation’s Phony Online Business and Crypto Moneymaking Schemes (March 28, 2024)
- FTC Staff Report: Building Tech Capacity in Law Enforcement Agencies (March 26, 2024)
- Rite Aid Corporation, FTC v. (March 8, 2024 update)
  - FTC charges Rite Aid with failing to implement reasonable procedures and prevent harm to consumers in its use of facial recognition technology in hundreds of stores.
  - Rite Aid Banned from Using AI Facial Recognition (December 19, 2023)
  - Statement of Commissioner Alvaro M. Bedoya (December 19, 2023)
- Automators, FTC v. (February 27, 2024 update)
  - A federal court temporarily shut down a business opportunity scheme that lured consumers to invest $22 million in online stores, using unfounded claims about AI-driven success and profitability.
- FTC Proposes New Protections to Combat AI Impersonation of Individuals (February 15, 2024)
- Remarks of Benjamin Wiseman at Harvard Journal of Law & Technology on Worker Surveillance and AI (February 8, 2024)
- FTC Launches Inquiry into Generative AI Investments and Partnerships (January 25, 2024)
- FTC Authorizes Compulsory Process for AI-related Products and Services (November 21, 2023)
- In Comment Submitted to U.S. Copyright Office, FTC Raises AI-related Competition and Consumer Protection Issues (November 7, 2023)
- Amazon.com (Alexa), U.S. v. (July 21, 2023)
  - FTC and DOJ require Amazon to overhaul its data deletion practices and implement privacy safeguards regarding children’s data.
- Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems (April 25, 2023)
  - FTC Chair Khan and officials from DOJ, CFPB, and EEOC release joint statement on AI.
- Using Artificial Intelligence and Algorithms (April 20, 2023) – Provides general FTC guidance on AI use, followed by regular blog updates and additional guidance.
- FTC Finalizes Settlement with Photo App Developer Over Misuse of Facial Recognition Technology (May 7, 2021)
- FTC Takes Action Against WW International (formerly Weight Watchers) for Illegally Collecting Kids’ Sensitive Health Data (March 4, 2022)
  - Required deletion of data from children under 13, deletion of any algorithms derived from such data, and payment of a fine.
Federal Register #
The Federal Register contains dozens of AI-related entries spanning all major agencies. Key regulatory actions include Executive Order revocations and replacements in January 2025, comprehensive RFIs on AI applications, and interagency coordination frameworks for AI governance.
Food and Drug Administration (FDA) #
- Draft Guidance for Industry and FDA Staff (April 3, 2023)
  - Outlines plan for development of medical devices using or trained by machine learning.
- Framework for Regulatory Advanced Manufacturing Evaluation (FRAME) Initiative (November 1, 2023)
  - Developing a regulatory framework for four emerging technologies, including AI.
- AI/ML for Drug Development Discussion Paper (May 10, 2023)
  - Requests feedback on efforts to use AI in drug development.
- Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan (March 18, 2024)
  - Outlines the FDA’s commitment to cross-center collaboration and future thinking.
National Artificial Intelligence Advisory Committee (NAIAC) #
- inactive AI Safety (May 2024) – On March 5, 2024, the National Artificial Intelligence Advisory Committee convened two panels of experts to share their views on AI safety and the necessary methodologies to achieve it. The prepared statements, written submissions, and discussion with the experts informed these findings.
- inactive Data Challenges and Privacy Protections
- inactive Harnessing AI for Scientific Progress
- inactive Provide Authority and Resources to Promote Responsible Procurement Innovation for AI
- inactive Require Public Summary Reporting on Use of High-Risk AI
- inactive Require Public Use Policies for High-Risk AI
- inactive Expand the AI Use Case Inventory by Limiting the ‘Sensitive Law Enforcement’ Exception
- inactive Expand the AI Use Case Inventory by Limiting the ‘Common Commercial Products’ Exception
- inactive Implementation of the NIST AI Safety Institute (December 2023)
- inactive National Campaign on Lifelong AI Career Success (November 2023)
- inactive Enhancing AI Literacy for the United States of America (November 2023)
- inactive Improve Monitoring of Emerging Risks from AI through Adverse Event Reporting (November 2023)
- inactive Second Chance Skills and Opportunity Moonshot (October 2023)
- inactive Generative AI Away from the Frontier (October 2023)
- inactive Implementing the NIST AI RMF with a Rights-Respecting Approach (October 2023)
- inactive AI’s Procurement Challenge (October 2023)
- inactive Creating Institutional Structures to Support Safer AI Systems (October 2023)
- inactive International Emerging Economies (August 2023)
- inactive National Artificial Intelligence Advisory Committee: Year Two Insights Report
- inactive Enhancing AI’s Positive Impact on Science and Medicine
- inactive Towards Standards for Data Transparency for AI Models
- inactive Law Enforcement Subcommittee: Year One Report and Roadmap
- inactive Exploring the Impact of AI (November 2023)
- inactive In Support of the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (November 2023)
- inactive On AI and Existential Risk (October 2023)
- inactive The Potential Future Risks of AI (October 2023)
- inactive Implementing the NIST AI RMF With a Rights-Respecting Approach, Working Group on Rights-Respecting AI (October 2023)
- inactive FAQs on Foundation Models and Generative AI (August 2023)
- inactive Rationales, Mechanisms, and Challenges to Regulating AI (July 2023)
- inactive NAIAC Year 1 Report (May 2023)
Government Accountability Office (GAO) #
- Artificial Intelligence: Generative AI Use and Management at Federal Agencies (July 29, 2025) – From 2023 to 2024, agencies’ use of generative AI increased ninefold. As agencies deploy generative AI, they report encountering challenges such as complying with federal policies while keeping up with this rapidly evolving technology.
- Artificial Intelligence: Agencies Have Begun Implementation but Need to Complete Key Requirements (December 12, 2023) – Federal law and guidance have several requirements for agencies implementing AI, but they haven’t all been met. For example, there’s no government-wide guidance on how agencies should acquire and use AI. Without such guidance, agencies can’t consistently manage AI. And until all requirements are met, agencies can’t effectively address AI risks and benefits. The 35 recommendations address these issues and more.
- Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities (June 30, 2021) – This report identifies key accountability practices—centered around the principles of governance, data, performance, and monitoring—to help federal agencies and others use AI responsibly.
National Institute of Standards and Technology (NIST) at the Department of Commerce #
- NIST AI 100-1: NIST AI Risk Management Framework – On April 29, 2024, NIST released a draft publication based on the AI Risk Management Framework (AI RMF) to help manage the risks of generative AI. The draft AI RMF Generative AI Profile can help organizations identify unique risks posed by generative AI and proposes actions for generative AI risk management aligned with their goals. Developed over the past year with input from a public working group of more than 2,500 members, the profile focuses on 12 risks and 400+ actions developers can take. For more information, check NIST’s landing page.
National Institutes of Health #
National Science and Technology Council #
- Houses the Joint Committee on Research Environments and the Select Committee on AI
National Science Foundation #
- Request for Information on the Development of a 2025 National Artificial Intelligence (AI) Research and Development (R&D) Strategic Plan (April 29, 2025) – The Office of Science and Technology Policy (OSTP) and the Networking and Information Technology Research and Development (NITRD) National Coordination Office (NCO) welcome input from all interested parties on how the previous administration’s National AI R&D Strategic Plan (2023 Update) can be rewritten so that the United States can secure its position as the unrivaled world leader in artificial intelligence. The RFI emphasizes R&D that accelerates AI-driven innovation, enhances U.S. economic and national security, and promotes human flourishing, focusing on the Federal government’s unique role in AI R&D over the next 3 to 5 years.
- NSF’s National Artificial Intelligence Research Institutes – Launched in 2020; consists of 25 AI institutes connecting over 500 funded and collaborative institutions globally.
- NSF-led interagency NAIRR Pilot
National Telecommunications and Information Administration (NTIA) #
- Dual-Use Foundation Models with Widely Available Model Weights Report – This report provides a non-exhaustive review of the risks and benefits of open foundation models, broken down into the broad categories of Public Safety; Societal Risks and Wellbeing; Competition, Innovation, and Research; Geopolitical Considerations; and Uncertainty in Future Risks and Benefits. It is important to understand these risks as marginal risks – that is, risks that are unique to the deployment of dual-use foundation models with widely available model weights relative to risks from other existing technologies, including closed-weight models and models that are not considered dual-use foundation models under the EO definition (such as foundation models with fewer than 10 billion parameters).
Securities and Exchange Commission (SEC) #
- July 26, 2023 – Regulating AI in Securities Markets: The SEC voted to propose new rules addressing conflicts of interest arising from broker-dealers’ and investment advisers’ use of predictive data analytics (PDA) and AI when interacting with investors. Concerned that firms’ AI-driven platforms might optimize for the firm’s benefit (e.g., maximizing fees or trading volume) at the expense of investors, the SEC’s proposal (Release No. 34-97990) would require firms to: (1) evaluate whether any AI/PDA usage places the firm’s interest ahead of the client’s, and (2) eliminate or neutralize the effect of any such conflict. In essence, the rule says that regardless of the technology used – be it a human advisor or an algorithm – investment advisers and brokers cannot put their own interests before their customers’. The SEC noted AI models can potentially influence investor behavior at scale (for good or ill), so this proactive step aims to “protect investors from conflicts of interest – and require that, regardless of the technology used, firms meet their obligations” under fiduciary and conduct standards. The proposal, dubbed the “Predictive Data Analytics Rule,” also mandates written policies and recordkeeping to ensure compliance. (This followed public warnings by SEC Chair Gary Gensler that AI in trading and advice could lead to systematic conflicts or even market manipulation if left unchecked.) The rule is currently in the comment period, but it signals a broader federal effort to oversee AI in financial decision-making contexts.
Financial regulatory agencies have developed coordinated approaches to AI governance, led by groundbreaking enforcement actions and comprehensive risk assessments. The SEC’s first “AI washing” enforcement case, brought in March 2024 against Delphia and Global Predictions, established precedent for holding firms accountable for false AI claims, resulting in $400,000 in civil penalties.