The regulatory landscape around AI in hiring has shifted from theoretical to enforceable. As of January 2026, employers using AI tools for recruiting, screening, or promotion decisions face active compliance obligations in multiple jurisdictions — with penalties ranging from daily fines to private lawsuits to seven-figure EU sanctions. The era of deploying hiring AI as a black box is over.
Illinois’s amendment to its Human Rights Act took effect on 1 January 2026, making it unlawful to use AI that results in employment discrimination — and uniquely granting candidates a private right of action to sue employers directly. New York City’s Local Law 144 has been in force since 2023, though enforcement is tightening after a 2024 review found that only 18 of 391 covered employers had posted the required bias audits. Colorado’s AI Act takes effect in February 2026 with the most comprehensive requirements yet. And the EU AI Act classifies all hiring AI as “high-risk,” carrying potential fines of up to €35 million or 7% of global annual revenue.
If your company uses any form of automated tool to screen, rank, evaluate, or select candidates, this guide explains what you’re required to do and which recruiting tools are built to help you do it.
What the Law Requires: Jurisdiction by Jurisdiction
Illinois (Effective 1 January 2026)
Illinois House Bill 3773 amends the Illinois Human Rights Act to cover AI use across the full employment lifecycle — recruitment, hiring, promotion, discipline, discharge, tenure, and conditions of employment. The key provisions are:
Anti-discrimination mandate. It is unlawful for employers to use AI in any employment decision that has the effect of discriminating against individuals based on protected characteristics — regardless of whether the discrimination was intentional. This is a disparate impact standard, meaning outcomes matter even if the employer didn’t intend bias.
ZIP code prohibition. Employers cannot use ZIP codes as a proxy for protected characteristics in AI-driven decisions. This targets a well-documented pattern where geographic data functions as a stand-in for race or socioeconomic status.
Notice requirement. Employers must notify applicants and employees when AI is being used to inform employment decisions. The notice must explain the purpose of the AI and what characteristics it evaluates. The Illinois Department of Human Rights has released draft rules specifying the mechanics of notice.
Private right of action. This is the provision with the highest stakes. Unlike NYC’s law (which is enforced by the city), Illinois gives individual candidates the right to sue employers directly if they believe AI-driven discrimination occurred. Remedies include back pay, reinstatement, emotional distress damages, and attorney’s fees.
New York City (In Force Since July 2023)
NYC Local Law 144 requires employers using “automated employment decision tools” (AEDTs) in hiring or promotion to conduct annual bias audits by independent auditors and publish the results. Candidates must be notified that an AEDT is being used, what data it analyses, and how to request an alternative selection process.
Penalties run $500–1,500 per violation per day, and each affected candidate can count as a separate violation, so exposure compounds quickly. Enforcement has been light so far, but it is expected to intensify as the city ramps up compliance auditing.
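To see how quickly per-day, per-violation penalties can compound, here is a rough back-of-envelope sketch. The figures and the assumption that each affected candidate accrues as a separate daily violation are illustrative only; actual penalty calculations are determined by the city, and this is not legal advice.

```python
# Illustrative exposure estimate for daily, per-violation penalties.
# Assumption (ours, not the statute's): each affected candidate counts
# as a separate violation, accruing each day until the issue is cured.

def penalty_exposure(candidates: int, days_uncured: int,
                     per_violation_per_day: int) -> int:
    """Total exposure = candidates x days uncured x daily penalty rate."""
    return candidates * days_uncured * per_violation_per_day

# 200 candidates screened while out of compliance, 30 days to cure,
# at the statutory maximum of $1,500 per violation per day:
worst_case = penalty_exposure(200, 30, 1_500)
print(f"${worst_case:,}")  # $9,000,000
```

Even a short lapse across a moderately sized candidate pool can reach seven figures, which is why employers treat the audit-and-publish cycle as a hard annual deadline rather than a best-effort task.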
Colorado (Effective February 2026)
The Colorado AI Act is the most comprehensive state-level AI hiring law in the United States. It requires employers deploying “high-risk” AI systems (which includes hiring and promotion tools) to use reasonable care to avoid algorithmic discrimination. Specific obligations include annual impact assessments, documented risk management policies, transparency disclosures to consumers, and completion of new assessments within 90 days of any material AI system change.
Companies with 50 or more employees must establish formal risk management policies. The law creates civil liability for non-compliance and establishes a rebuttable presumption of reasonable care for employers who meet all specified compliance requirements.
EU AI Act (In Force, Phased Implementation)
The EU AI Act classifies employment and hiring AI as “high-risk,” subjecting it to the strictest tier of regulatory obligations: transparency requirements, human oversight mandates, comprehensive documentation, and conformity assessments. Penalties reach up to €35 million or 7% of global annual revenue — whichever is higher.
For UK-based companies: while the UK is not subject to the EU AI Act directly, any company using AI tools to evaluate candidates located in EU member states must comply. The UK’s own AI regulatory framework is developing separately, with the government taking a sector-specific rather than horizontal approach.
Which States and Jurisdictions Have Similar Laws?
The patchwork is growing rapidly. Beyond the four jurisdictions detailed above:
California extended the Fair Employment and Housing Act to cover automated decision systems effective October 2025. Requirements include human oversight of AI-driven employment decisions and four-year record retention of AI criteria and results.
New Jersey, Texas, and Washington all have active AI hiring legislation in various stages. The direction across all pending bills is consistent: transparency, human oversight, bias accountability, and notice to candidates.
In 2024 alone, 45 US states introduced AI-related bills. Most haven't yet become law, but the trajectory is clear: employers operating across state lines face an expanding compliance patchwork with no federal standard to simplify it. The safest strategy is to build processes that satisfy the strictest jurisdiction's requirements and apply them universally.
Tool Compliance Comparison: Built-In Features That Support Compliance
| Compliance Feature | Greenhouse | Workable | Lever | Manatal | HireVue | Paradox |
|---|---|---|---|---|---|---|
| Bias audit support | ✅ (Expert tier: DEI analytics, adverse impact tracking) | Partial (basic DEI reporting) | Partial (diversity analytics) | ❌ | ✅ (regular model audits published) | Partial |
| Candidate notification tools | ✅ (customisable communication templates) | ✅ (automated candidate comms) | ✅ (automated nurture + comms) | ✅ (email templates) | ✅ (built-in consent workflows) | ✅ (conversational disclosure) |
| Consent management | Partial (via workflow configuration) | Partial | Partial | Partial | ✅ (explicit consent capture for video/AI) | ✅ (conversational consent) |
| Structured evaluation | ✅ (scorecards, interview kits, blind screening) | ✅ (structured interviews) | ✅ (structured workflows) | Partial | ✅ (standardised assessments) | ❌ (screening, not evaluation) |
| Human oversight enforcement | ✅ (human decision required at each stage) | ✅ (human review built into workflow) | ✅ (human decision gates) | ✅ (human review stage) | ✅ (AI informs, human decides) | Partial (escalation to human) |
| Audit trail / documentation | ✅ (comprehensive activity logging) | ✅ (activity tracking) | ✅ (full pipeline history) | ✅ (activity logging) | ✅ (full transcript and scoring records) | ✅ (conversation logs) |
| DEI/adverse impact reporting | ✅ (Expert tier) | Partial | Partial | ❌ | ✅ | ❌ |
Key takeaway: Greenhouse (Expert tier) and HireVue offer the most comprehensive compliance infrastructure. Greenhouse’s structured hiring methodology — scorecards, interview kits, blind screening, and DEI analytics — was designed to reduce bias before AI regulation made it mandatory. HireVue publishes regular third-party audits of its AI models and has rebuilt its assessment methodology around structured, validated approaches following earlier scrutiny of video analysis.
For smaller firms that can’t justify Greenhouse Expert pricing, Workable and Lever provide adequate compliance support for current regulations, though they may need to be supplemented with manual bias-auditing processes in jurisdictions like NYC and Colorado.
For our full review of each platform, see: Best AI Recruiting Tools in 2026.
Implementation Checklist: Making Your AI Hiring Process Compliant
Audit Your Current AI Usage
- Inventory every tool. List every piece of software involved in recruitment, screening, evaluation, or promotion decisions. This includes ATS platforms, sourcing tools, assessment tools, scheduling bots, and any AI features embedded in existing HR software. Many employers don’t realise their existing tools include AI components.
- Map the decision flow. For each tool, document where in the hiring process it operates, what data it analyses, and how its output influences the final hiring decision. This mapping is the foundation for Illinois notice requirements and Colorado impact assessments.
- Identify covered jurisdictions. Determine which regulations apply based on where your candidates and employees are located — not where your company is headquartered. A London-based company hiring remote workers in Illinois, NYC, or the EU is subject to those jurisdictions’ laws.
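The inventory and decision-flow mapping above can live in a spreadsheet, but a structured record makes it easier to generate Illinois notices and Colorado impact assessments later. A minimal sketch, assuming nothing beyond the checklist itself; the field names, the example vendor, and the gap check are our own illustrations, not anything prescribed by regulation:

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in the AI hiring tool inventory."""
    name: str                  # tool or vendor name
    pipeline_stage: str        # where in the hiring process it operates
    data_analysed: list[str]   # inputs the tool evaluates
    output_influence: str      # how its output feeds the final decision
    jurisdictions: list[str]   # where affected candidates are located
    human_reviewed: bool       # is there a human decision gate?

inventory = [
    AIToolRecord(
        name="ResumeRanker (hypothetical vendor)",
        pipeline_stage="screening",
        data_analysed=["CV text", "work history"],
        output_influence="ranks candidates; recruiter reviews top 50",
        jurisdictions=["Illinois", "NYC"],
        human_reviewed=True,
    ),
]

# Surface any tool lacking a human decision gate -- the single biggest
# compliance gap across all the jurisdictions discussed above:
gaps = [t.name for t in inventory if not t.human_reviewed]
```

Keeping the record in code (or exporting it from one) also gives you a natural artefact to hand to auditors and to attach to vendor contract reviews.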
Build Compliance Infrastructure
- Draft candidate notices. Create plain-language notices explaining that AI is used in your hiring process, what it evaluates, and what characteristics it considers. Illinois requires this; it’s best practice everywhere.
- Implement consent workflows. For jurisdictions requiring consent (Illinois for video AI), build consent capture into your application or screening flow before any AI processing occurs.
- Commission or conduct bias audits. NYC requires annual independent bias audits of AEDTs. Colorado requires impact assessments. Even where not yet mandated, proactive auditing demonstrates good faith and creates evidence of reasonable care.
- Ensure human decision gates. No AI tool should make a final hire/no-hire decision autonomously. Build mandatory human review into every pipeline stage where AI influences the outcome. California explicitly requires this; every other jurisdiction strongly implies it.
- Establish record retention. California requires four-year retention of AI decision criteria and results. Apply this standard universally — it exceeds most other jurisdictions’ requirements and protects you as new regulations emerge.
Ongoing Compliance
- Review vendor contracts. Some laws (Colorado, Illinois) treat vendors and agents as extensions of the employer. Ensure your contracts with AI tool providers require transparency on training data, model changes, known risks, and cooperation with audits.
- Monitor new legislation quarterly. With 45 states introducing AI bills in 2024 alone, the regulatory landscape is changing faster than annual reviews can track. Assign someone (internal or external counsel) to monitor developments in every state where you hire.
- Retrain on policy changes. Every time a regulation changes or a new jurisdiction’s law takes effect, update your processes and retrain recruiters and hiring managers.
What Happens If You Don’t Comply
The consequences are escalating:
Financial penalties. NYC Local Law 144 imposes $500–1,500 per violation per day. The EU AI Act reaches €35 million or 7% of global revenue. These are not theoretical — enforcement actions are underway.
Private litigation. Illinois’s private right of action means individual candidates can sue directly, without waiting for a government agency to act. Class action exposure is significant for large employers using the same AI tools across thousands of applicants.
Discrimination claims. Beyond AI-specific regulations, existing anti-discrimination laws (Title VII, state equivalents) apply to AI-driven hiring decisions. The EEOC has made clear that employers are liable for discriminatory outcomes from AI tools, regardless of whether the employer understood the AI’s methodology. Recent claims against Workday and Amazon have drawn public attention to this exposure.
Reputational damage. High-profile AI hiring bias stories attract media coverage and erode employer brand. In a competitive talent market, candidates increasingly research potential employers’ hiring practices before applying.
Frequently Asked Questions
Do these laws apply if my company is based outside the US?
Yes — the laws apply based on where the candidate or employee is located, not where the employer is headquartered. A UK company using AI to evaluate applicants in New York City must comply with Local Law 144. A company hiring remote workers in Illinois must comply with HB 3773. If you hire across multiple US states, map your compliance obligations to every jurisdiction where candidates are based.
Is it safer to just stop using AI in hiring?
It’s an option, but it’s increasingly impractical. AI hiring tools, when properly governed, can actually reduce bias compared to unstructured human decision-making (which is well-documented to be influenced by unconscious bias). The regulatory intent isn’t to eliminate AI from hiring — it’s to ensure AI is transparent, auditable, and non-discriminatory. Companies that build compliant AI hiring processes will be better positioned than those that either ignore the regulations or abandon AI entirely.
What’s the single most important step for compliance?
Ensure human oversight at every decision point. No current regulation prohibits the use of AI to assist hiring decisions. Every current regulation requires that humans make the final call. If you can demonstrate that AI informs but does not determine hiring outcomes — and that you’ve taken reasonable steps to ensure the AI operates without discriminatory effect — you’ll satisfy the core requirement across all current jurisdictions.
Also in this series