Artificial intelligence is now embedded across the employment lifecycle, from recruiting and hiring to performance management, compensation and workforce planning. While these tools can drive efficiency and data-driven decision-making, they can also create legal risk if used without appropriate safeguards.
Employers commonly use AI to draft job descriptions, target job ads, source candidates, and screen or rank résumés based on keywords, experience or inferred skills. Common tool types include:
- Agentic Sourcing: AI agents now act as “teammates” that continuously scan professional networks (like LinkedIn) and market analytics to build talent pools before a role is even open.
- Semantic Screening: Moving beyond simple keywords, AI now uses skills-based matching to analyze the context of a candidate’s experience to determine if their actual skills match the role, rather than just looking for specific job titles on a resume.
- Video & Multimodal Analysis: AI-enhanced interview platforms analyze video interviews not just for what is said, but for communication style and technical competency, providing a summary for human recruiters to review.
- Automated Scheduling & Engagement: Chatbots handle 24/7 candidate queries and interview scheduling, which has reportedly cut “time-to-fill” metrics by 40–50% in high-volume sectors like retail and healthcare.
Risks Associated with AI Tools Used in the Hiring Process
These screening tools may disproportionately exclude candidates based on protected characteristics, even if the criteria appear neutral on their face. For example, résumé filters that correlate years of experience, graduation dates or career gaps with lower scores can disadvantage older applicants. Similarly, automated screening may screen out individuals with disabilities if tools rely on speed, communication style or non-essential job criteria. Chatbot interfaces compound the problem when they offer no alternative method for applicants with disabilities to request an accommodation; in that case, the employer has failed to engage in the interactive process required by the ADA and similar state disability discrimination laws.
Employers should treat every AI tool used in the hiring process as if it were a human recruiter whose decisions must be validated, explainable and defensible. To accomplish this, employers who rely on AI to streamline recruiting should implement the following:
1. Disclose AI use in hiring materials and candidate communications and obtain consent.
2. Maintain a central AI recruiting and hiring inventory (a minimal sketch of one possible record format follows this list), which identifies:
- The tool’s purpose (sourcing, screening, ranking, scheduling, engagement)
- Whether the tool scores, ranks, filters or meaningfully influences decisions
- Data inputs such as resume text, facial data, voice and engagement metrics
3. Require AI vendors to provide validation studies linking outputs to job-related criteria, along with documentation of training data sources and bias-mitigation methods, and build these requirements into AI vendor contracts. Employers should also be prepared to independently test the tools to determine whether they are improperly excluding certain categories of candidates.
4. Conduct periodic adverse-impact analyses to monitor selection rates, drop-off points in agentic sourcing funnels and auto-rejections triggered before human review (an illustrative impact-ratio calculation follows this list).
5. Consider prohibiting fully automated rejections without human review, or set clear processes for when humans must override AI decisions and who has authority to deviate from AI recommendations.
6. Provide non-AI alternatives to video interviews and ensure scheduling bots can accommodate disability-related timing needs and religious observances.
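Purely for illustration, here is a minimal sketch of how the inventory described in item 2 might be captured programmatically. The record structure, field names and example entry are hypothetical, not a prescribed format; a spreadsheet or compliance system can serve the same purpose.

```python
from dataclasses import dataclass, field

# Hypothetical record for a central AI recruiting/hiring inventory.
# Field names mirror the checklist items above; adapt as needed.
@dataclass
class AIToolRecord:
    name: str                   # tool or product name
    vendor: str                 # supplying vendor
    purpose: str                # sourcing, screening, ranking, scheduling, engagement
    influences_decisions: bool  # scores, ranks, filters or meaningfully influences decisions
    data_inputs: list[str] = field(default_factory=list)  # e.g., resume text, facial data, voice

inventory = [
    AIToolRecord(
        name="ResumeRanker",  # made-up example tool
        vendor="Example Vendor, Inc.",
        purpose="screening",
        influences_decisions=True,
        data_inputs=["resume text", "engagement metrics"],
    ),
]

# Any tool that meaningfully influences decisions should be queued
# for bias auditing and human-oversight review.
for tool in inventory:
    if tool.influences_decisions:
        print(f"{tool.name} ({tool.purpose}): schedule adverse-impact review")
```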
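For item 4, one common first screen is the four-fifths (80%) rule drawn from the EEOC's Uniform Guidelines: compare each group's selection rate to the highest group's rate, and flag impact ratios below 0.8 for closer review. The same impact-ratio arithmetic underlies the bias audits required by the NYC law discussed below. A minimal sketch with invented counts:

```python
# Illustrative four-fifths (80%) rule check. All counts are made up.
applicants = {"group_a": 400, "group_b": 300}  # applicants per demographic group
selected   = {"group_a": 120, "group_b": 60}   # candidates who passed the AI screen

rates = {group: selected[group] / applicants[group] for group in applicants}
top_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / top_rate  # ratio to the most-selected group's rate
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} -> {flag}")
```

An impact ratio below 0.8 does not by itself establish discrimination, but it is the conventional trigger for deeper statistical analysis and a review of the criteria the tool is applying.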
New and Emerging Regulations on the Use of AI in the Hiring Process
Although there is no comprehensive federal law governing the use of AI in the hiring process, AI tools will be treated as part of the employment decision and subject to existing federal laws such as Title VII, the ADA and the ADEA. Under these laws, employers remain liable for discriminatory outcomes produced by AI, even when the tools are created and supplied by a vendor. In other words, "the AI algorithm did it" is not a defense.
States and local governments are also beginning to regulate the use of AI in the hiring process. For example:
- New York City has enacted a law that applies to automated employment decision tools that substantially assist or replace hiring decisions. The law covers resume screening, ranking, video analysis and other tools, and requires the employer to conduct an annual independent bias audit and publicly post a summary of the results. It also requires at least 10 business days' advance notice to candidates that AI will be used, and the notice must include instructions for requesting an alternative selection process or reasonable accommodation. Penalties start at $500 for a first violation (and for any additional violations occurring on the same day as the first) and can increase to up to $1,500 for each subsequent violation.
- Illinois has two overlapping AI hiring laws. Effective January 1, 2026, the Illinois Human Rights Act was amended to expressly regulate employer use of AI in employment decisions. The amendments make it a civil rights violation for an employer to use AI in recruitment, hiring, promotion, termination or other employment-related decisions in a manner that results in discrimination, including disparate impact, on the basis of any protected class (such as race, sex, age, disability, religion or national origin). Critically, the statute explicitly prohibits employers from using ZIP codes as a proxy for a protected class when deploying AI tools. Additionally, under the Artificial Intelligence Video Interview Act, in effect since 2020, employers using AI to evaluate video interviews for Illinois-based positions must disclose the use of AI, explain how it works and what characteristics it evaluates, obtain advance applicant consent, restrict sharing of interview videos and delete videos upon request. Where employers rely solely on AI video analysis to select candidates for in-person interviews, they must also collect and report demographic data to assess potential bias. Illinois employers using AI-driven hiring tools must also consider the Illinois Biometric Information Privacy Act (BIPA), which strictly regulates the collection, use, storage and disclosure of biometric identifiers such as facial geometry, voiceprints or fingerprints; requires advance written notice and informed written consent; mandates retention and destruction policies; and provides a private right of action with statutory damages.
- Colorado: The Colorado Artificial Intelligence Act (CAIA), the most comprehensive U.S. law on the subject to date, applies to "high-risk AI systems" that make or substantially influence consequential decisions, including hiring. The law was enacted in 2024 and is scheduled to take effect on June 30, 2026, but is likely to be further amended before then. As enacted, the law would require employers to: exercise reasonable care to prevent algorithmic discrimination, maintain AI risk-management policies, conduct impact assessments, provide notice when AI meaningfully influences decisions and offer appeal and human review. Notably, employers with fewer than 50 full-time employees that do not train the tools on their own data are exempt from many of the requirements under the law.
- California: The California Civil Rights Council’s Fair Employment and Housing Act regulations on AI, which became effective on October 1, 2025, clarify that existing California antidiscrimination law fully applies when employers use AI, algorithms or other automated decision systems (ADS) in employment decisions. The regulations broadly define covered tools to include any computational process that makes or assists decisions about hiring, promotion, training or other employment benefits, and they require employers to ensure such tools do not discriminate based on protected characteristics. Employers must treat AI tools the same as human decision‑makers, avoid unlawful medical or pre‑employment inquiries, provide reasonable accommodations and not rely solely on automated systems where FEHA requires individualized assessments. Employers remain responsible for outcomes even when using third‑party vendors and must retain AI‑related records for at least four years, with the regulations encouraging bias testing and documentation to demonstrate compliance.
Takeaways for Employers
AI can streamline hiring, but it can also magnify legal exposure if used without appropriate safeguards. Employers should therefore treat AI as regulated decision infrastructure, not experimental HR technology: it must be validated and monitored, the company must be able to explain how and why it is used, and AI tools used in the hiring process must always remain subject to human intervention and judgment. These steps reflect not only emerging best practices but also the direction of state and local regulation, which focuses on transparency, validation, bias monitoring, human oversight and documented accountability for AI-assisted hiring decisions. As jurisdictions continue to regulate AI in employment, employers that embed these safeguards now will be best positioned to mitigate legal risk, demonstrate compliance across jurisdictions and adapt as AI-specific hiring laws continue to expand.
To discuss further, contact KJK attorneys Beth Spain (BRS@kjk.com) or Ted Theofrastous (TCT@kjk.com).