How to reach AI compliance in the United States

October 22, 2025

Nearly eight in ten (78%) U.S. organizations plan to increase AI spending in fiscal year 2025. As algorithms take over increasingly complex tasks traditionally performed by humans, many of which require ethical judgment (determining who receives a mortgage, who gets promoted, how patients are diagnosed, and so on), the field is beginning to be held accountable, and rightfully so.

Take a recent example from Illinois: a retail chain used an automated hiring tool that flagged candidates based on their geographic area. The system failed to surface candidates from lower-income areas when their applications came from Upper West Chicago. The tool was not explicitly programmed to discriminate; it inherited biases from historical data. These are no longer outlier incidents; they are warning signals for organizations moving too quickly without thorough risk assessments.

State governments, federal agencies, and industry regulators are becoming more assertive. Compliance is no longer merely a legal footnote; it is becoming essential to any AI strategy. In this article, we answer the questions you might have about AI compliance in the United States, whether you are developing or purchasing AI tools.

The AI compliance landscape in the United States

It didn’t take long for the American government to respond to the surge in business applications of AI. In October 2023, then-President Biden issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, the most extensive federal effort yet to regulate AI in the U.S. It directed agencies such as the Department of Commerce, the Federal Trade Commission (FTC), and the Department of Labor to begin developing new standards in areas such as transparency, non-discrimination, and accountability.

Nonetheless, much of the momentum started at the state level, following a wave of state actions, including the aforementioned Illinois HB 3773, which limits the use of ZIP codes in automated hiring due to systemic bias concerns. The case for mandatory AI compliance grew stronger when Amazon scrapped its AI-powered résumé-screening tool after reports showed the program was biased against women. New York City’s Local Law 144 now requires companies to conduct bias audits before using AI for hiring. Last but not least, in California, the California Privacy Rights Act (CPRA) pushes businesses to disclose automated decision-making processes that affect consumers’ rights or access to services.

American businesses now understand that more than just penalties are at stake. Ignoring AI regulatory compliance means risking the public’s trust and having your technology pulled from the market.

So, what AI compliance frameworks must a business adhere to in the United States to stay on the safe side?

Key AI compliance frameworks and regulations in the U.S.

The U.S. approach to AI regulation is taking shape rapidly, driven by a combination of federal executive orders and agency regulations, state AI legislation, and sector-specific requirements. There is no unified federal law, but several recent developments have introduced significant changes for compliance in 2024 and 2025.

Executive orders: EO 14110 and EO 14179

President Biden’s 2023 Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence represented a pivotal moment in the federal approach to AI policy. The order established eight guiding principles for federal AI use, including requirements for safety testing of AI systems before public release, standards for detecting AI-generated content through watermarking, and protections against AI-enabled fraud. It also mandated that developers of the most powerful AI systems share safety test results with the federal government, particularly for systems that could pose risks to national security, economic security, or public health. The order directed the FTC and the Department of Commerce to devise regulations around fairness, safety, and transparency, with specific attention to preventing algorithmic discrimination in housing, education, and employment. In January 2025, Executive Order 14179 rescinded EO 14110 and redirected federal policy toward removing regulatory barriers seen as holding back American AI leadership, leaving agencies to recalibrate the guidance they had begun developing under the earlier order.

National Institute of Standards and Technology (NIST) AI Risk Management Framework

First released in January 2023 and updated in mid-2024, the NIST AI Risk Management Framework (AI RMF) has gained wide traction in the business world as an actionable framework for managing AI risks of all shapes and sizes. The updated version includes a dedicated profile for generative AI, with more specific guideposts for assessing and mitigating risks related to hallucinations, misinformation, and lack of explainability. For instance, the framework addresses scenarios where AI models present factually incorrect information as fact, produce biased outputs because of training data limitations, or make recommendations without traceable reasoning that users can verify.

Since it is purely voluntary guidance, the AI RMF was not intended to be a compliance standard; however, it is increasingly treated as such, especially among large enterprises and federal contractors.

New York City Local Law 144

New York City’s Local Law 144, which went into effect in July 2023, requires companies that use automated employment decision tools to notify candidates and conduct independent bias audits. The regulation applies to businesses that use algorithmic systems to screen applicants for hiring or promotion, and non-compliance can result in hefty penalties. The law has already prompted many large staffing platforms and HR tech companies to change their AI governance policies.

Tennessee Ensuring Likeness Voice and Image Security (ELVIS) Act

The ELVIS Act, passed in March 2024, amends Tennessee’s Personal Rights Protection Act. It prohibits the unauthorized commercial use of a person’s “voice” along with their name, image, and likeness. “Voice” is defined broadly in the ELVIS Act to include actual voice recordings, AI-generated voice approximations, and identifiable human impersonations.

Importantly, the Act also introduces secondary liability, meaning an individual or company can be held responsible even if they didn’t upload the infringing content themselves but merely enabled, facilitated, or distributed it. This creates new compliance requirements and risks for AI developers, content platforms, and advertising companies that use synthetic likenesses or voices without clear consent.

As of mid-2025, nearly 45 U.S. states have introduced close to 700 AI-related bills, and around 20% of those bills have become law. Some enacted bills focus on consumer protection and transparency, while others target specific use cases such as insurance, education, or law enforcement. Without a federal umbrella law, organizations must proactively track state legislation and can lean on frameworks like the NIST AI RMF to build consistent, risk-based compliance processes.

Multiple trends are converging to make compliance not just pressing, but inevitable:

  • Presidential executive orders and agency guidance (FTC, Equal Employment Opportunity Commission (EEOC), Consumer Financial Protection Bureau (CFPB)) signal a pivot away from AI algorithms that are neither transparent nor explainable.
  • Enactment of state-level legislation such as Illinois’ HB 3773 and the evolving landscape of California privacy laws.
  • Publicly visible failures of AI implementation, like biased hiring tools and flawed facial recognition.
  • Consumer backlash around the lack of transparency in automated decision-making.
  • Cross-border pressure from foreign legislation, especially the EU AI Act, which is shaping U.S. compliance frameworks.

Failed AI compliance in the U.S.: first lawsuits already filed

How responsibly AI technologies are used is determined by the compliance infrastructure we build around them. Two recent cases show how a firm’s lack of AI risk management can lead to dire business and legal consequences.

  • Mobley v. Workday (Feb 2024): Derek Mobley, an African American man over 40 with a disability, filed suit against Workday over its automated résumé-screening tool, alleging age, race, and disability discrimination. The amended complaint contends that the tool systematically rejected applicants like Mobley, raising repeat-use concerns across protected classes.
  • SafeRent settlement in Massachusetts (Nov 2024): Mary Louis and others sued SafeRent after low AI-generated tenant-screening scores caused their housing applications to be denied, despite valid rental histories and housing-voucher status. SafeRent settled for $2.2 million and agreed to stop using the scoring feature for voucher holders for five years.

A stepwise guide to building AI compliance in the United States

Below are key steps a company can take to ensure AI compliance — both in the United States and more broadly — by adopting a responsible approach throughout the AI lifecycle:

| Phase | Compliance actions | Examples/Tools |
| --- | --- | --- |
| Data collection | Vet sources for legality and bias; avoid collecting unnecessary attributes | Data minimization, anonymization, differential privacy |
| Model training | Test for bias across protected classes; make models explainable | SHAP, LIME, IBM AI Fairness 360, Microsoft Azure Responsible AI Dashboard, AWS Clarify, and Google Cloud Explainable AI |
| Pre-deployment | Conduct formal audits (internal or third-party) for discrimination risks (see the bias-audit sketch below the table) | NYC Local Law 144 (for hiring tools), custom audit protocols |
| Deployment | Clearly document AI use cases, limitations, and human fallback options | Model cards, datasheets for datasets, use policies |
| Monitoring | Establish feedback loops, drift detection, security monitoring, and governance reviews (Data Ethics, Model Risk, Security). Track the 11 Responsible AI framework pillars: Explainability, Transparency, Traceability, Reliability, Repeatability, Data Ethics, Accountability, Impact, Human-in-Loop, Bias Freedom, and Fairness. | Responsible AI dashboards, drift detection tools, governance frameworks, bias alerts, A/B fairness testing |

Table 1. Practical Steps to Build AI Compliance Standards
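
As a starting point for the pre-deployment audit step, here is a minimal sketch of a disparate impact check in Python. It assumes a hypothetical pandas DataFrame of screening decisions with illustrative "selected" and "zip_group" columns, and applies the common four-fifths rule of thumb; it is a sketch, not a substitute for a formal bias audit under Local Law 144.

```python
import pandas as pd

def disparate_impact_report(df: pd.DataFrame,
                            outcome_col: str = "selected",
                            group_col: str = "zip_group") -> pd.DataFrame:
    """Compare selection rates across groups against the four-fifths rule of thumb."""
    # Selection rate per group: share of candidates with a positive outcome.
    rates = df.groupby(group_col)[outcome_col].mean()
    reference = rates.max()  # group with the highest selection rate
    report = pd.DataFrame({
        "selection_rate": rates,
        "impact_ratio": rates / reference,  # 1.0 means parity with the top group
    })
    # Flag groups whose impact ratio falls below 0.8 (four-fifths rule).
    report["flag_for_review"] = report["impact_ratio"] < 0.8
    return report

# Illustrative usage with synthetic data.
decisions = pd.DataFrame({
    "zip_group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "selected":  [1,   1,   0,   0,   0,   1,   0,   0],
})
print(disparate_impact_report(decisions))
```

Running such a report per protected class, and per geographic proxy like the ZIP code groups above, is one lightweight way to produce the kind of evidence auditors ask for.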

Robust compliance efforts start with an AI impact assessment to identify who the systems may negatively affect and under what conditions, especially in sensitive areas like housing, credit, hiring, and healthcare. Teams should define what level of bias they consider acceptable and set thresholds that indicate when a model needs retraining or adjustment, both during development and after deployment.
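
One lightweight way to operationalize those thresholds is to compare monitored fairness metrics against team-defined limits and flag the model for retraining when any limit is breached. A rough sketch follows; the metric names and threshold values are illustrative assumptions, not prescribed figures.

```python
from dataclasses import dataclass

# Team-defined limits; the names and numbers here are illustrative only.
FAIRNESS_THRESHOLDS = {
    "impact_ratio_min": 0.80,  # four-fifths rule of thumb
    "fpr_gap_max": 0.05,       # max tolerated false-positive-rate gap between groups
}

@dataclass
class FairnessSnapshot:
    impact_ratio: float  # lowest group-to-reference selection-rate ratio
    fpr_gap: float       # largest false-positive-rate gap between groups

def retraining_triggers(snapshot: FairnessSnapshot) -> list[str]:
    """Return the list of breached limits; an empty list means no action is needed."""
    breaches = []
    if snapshot.impact_ratio < FAIRNESS_THRESHOLDS["impact_ratio_min"]:
        breaches.append("impact ratio below minimum")
    if snapshot.fpr_gap > FAIRNESS_THRESHOLDS["fpr_gap_max"]:
        breaches.append("false-positive-rate gap above maximum")
    return breaches

print(retraining_triggers(FairnessSnapshot(impact_ratio=0.74, fpr_gap=0.03)))
# ['impact ratio below minimum']
```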

Graph 1. Avenga’s Responsible AI framework explained

Because relying on a single explainability tool is risky, teams should apply several techniques and cross-validate their interpretations to better understand how decisions are reached. Creating a means of redress (allowing users to appeal or dispute an automated outcome) is just as critical. Throughout the assessment process, developing and maintaining usable logs, audit trails, and other records ensures the traceability regulators require.
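
As a rough illustration of such an audit trail, the sketch below logs each automated decision as a structured JSON record with a model version, a hash of the input, the decision, and a timestamp. The field names and file-based storage are assumptions; production systems would typically write to append-only, access-controlled storage.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG_PATH = "decision_audit.log"  # illustrative; use append-only storage in practice

def log_decision(model_version: str, features: dict, decision: str, explanation: str) -> dict:
    """Append one traceable record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the raw input so the record stays traceable without storing personal data.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
        "explanation": explanation,
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Illustrative usage.
log_decision(
    model_version="screening-model-1.4.2",
    features={"years_experience": 6, "certifications": 2},
    decision="advance_to_interview",
    explanation="top features: years_experience (+), certifications (+)",
)
```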

Graph 2. AI Project Timeline deconstructed

Key challenges in AI compliance

Despite increased awareness of responsible AI use, compliance issues are still common. Here’s why.

1. Lack of explainability in AI models

Black-box systems can generate outputs, but without transparency into how the system arrived at an outcome, the organization can’t explain or justify those outputs to third parties, particularly where lives or large sums of money are concerned. IBM’s now-defunct Watson for Oncology project is a textbook example. Watson reportedly made treatment recommendations that internal documents described as unsafe or incorrect, and although the system showed promise, it could not be trusted clinically. The result was a reported $62 million loss before the project was shut down. From a compliance perspective, a lack of explainability creates legal blind spots.

2. Bias embedded in training data

Data biases originating from poorly curated or unbalanced datasets are one of the leading sources of biased AI outcomes. Many medical imaging diagnostic systems built during the COVID-19 pandemic were trained on narrow, relatively unrepresentative datasets. Some of those systems learned to identify patterns correlated with COVID-positive outcomes based on spurious factors, such as which imaging equipment manufacturer was used or which hospital captured the scan, rather than the actual medical signals in the images, resulting in dangerously flawed outcomes. In high-risk industries, bias in data is not only a technical problem; it is an inherent compliance one.

3. Fragmented regulatory landscape

Without a federal AI law, companies must comply with a patchwork of state-level laws that often have different requirements and timelines. For instance, New York requires annual bias audits of hiring algorithms and mandates candidate notification, while Utah’s S.B. 149 focuses on disclosure requirements for generative AI in customer-facing roles but does not require bias audits. In this environment, compliance has turned into a real-time game of whack-a-mole, particularly for companies doing business across multiple states.

4. Lack of internal ownership

AI compliance and risk management often sit at the intersection of legal, data science, and engineering, where roles are frequently ill-defined. This ambiguity increases the likelihood of mistakes during model development and implementation. Without clearly defined roles, compliance becomes a reactive process that is often initiated only after an issue arises.

5. Evolving models and legal standards

AI systems evolve through retraining, data drift, or architecture changes, and the legal landscape changes just as readily. Failing to track these changes carries substantial risks and costs. Zillow’s home-pricing algorithm, which powered its now-defunct iBuying program, produced overly optimistic valuations. When real-world prices didn’t match Zillow’s predictions, the company lost about $300 million and had to shut down the unit. The lesson learned: compliance is not only about launching a model; it is about a model’s continued, responsible performance.
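
Continuous drift monitoring is one way to catch this kind of degradation early. Below is a minimal sketch of a Population Stability Index (PSI) check on a single input feature, comparing recent production data against the training distribution; the feature, figures, and the 0.2 alert threshold are illustrative (the threshold is a common rule of thumb, not a regulatory requirement).

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference (training) sample and a recent (production) sample."""
    # Bin edges come from the reference distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) and division by zero for empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Illustrative usage with synthetic "home price" inputs.
rng = np.random.default_rng(0)
training_prices = rng.normal(300_000, 50_000, 5_000)
recent_prices = rng.normal(340_000, 60_000, 1_000)  # simulated market shift

psi = population_stability_index(training_prices, recent_prices)
print(f"PSI = {psi:.3f}")
if psi > 0.2:  # common rule-of-thumb alert threshold
    print("Significant drift detected: trigger a model review and retraining workflow.")
```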

AI compliance in 2026: the new standard, not an option

The evolution of AI regulation in the U.S. is increasingly driven not solely by enforcement, but by expectations of trustworthiness, transparency, and responsible innovation. Organizations will be expected to build legally and ethically compliant AI capabilities whenever AI factors into sensitive decisions about people, for example in hiring, lending, and healthcare.

Signals of this inflection point have already been seen. From New York’s mandatory bias audits (Local Law 144) to Illinois’ ZIP code discrimination cases, state lawmakers are stepping in where federal regulation hasn’t yet unified. Even major players are learning the hard way: think Zillow’s $300M pricing model failure or Workday’s résumé screening lawsuit.

AI compliance is no longer about checking a box. It’s about:

  • Building ethical AI practices
  • Vetting data for bias and traceability
  • Adopting real compliance tools
  • Following frameworks like NIST’s AI RMF
  • Preparing for what may become the blueprint for an AI Bill of Rights

AI usage in high-stakes areas like hiring, lending, and healthcare now demands transparency, explainability, and a rock-solid AI governance framework.

Looking for a trusted and experienced AI technology partner? Leverage Avenga’s decades of experience transforming businesses with innovative, reliable, and, of course, compliant solutions.