Legal & Policy
The Legal Landscape of AI Hiring Tools (2024–2025)
From NYC Local Law 144 to the EU AI Act: What job seekers need to know about their rights when algorithms decide who gets an interview.
The "Black Box" Is Starting to Crack Open
For years, companies could use any algorithm they wanted to screen resumes, with zero transparency. If an AI tool decided your 10 years of experience wasn't a match because you used the wrong synonym for "project management," you'd never know.
That is rapidly changing. Legislators in New York, California, the European Union, and beyond are passing laws to force transparency and fairness in automated employment decision tools (AEDTs).
You don't need to be a lawyer to benefit from this shift. You just need to know the three basic rights emerging from this new landscape:
- The Right to Know (Notice)
- The Right to Audit (Bias Testing)
- The Right to Opt Out (Human Review)
1. New York City Local Law 144 (The "Bias Audit" Law)
Effective Date: Enforcement began July 5, 2023.
This is the first major US law regulating AI in hiring. It mandates two big things for any employer hiring in NYC:
- Bias Audits: Employers cannot use an AEDT unless it has been subject to a bias audit within the last year. The results must be publicly published.
- Notice: Employers must notify candidates 10 business days before using an AI tool to evaluate them.
What this means for you:
If you apply for a job in NYC, keep an eye out for a disclosure statement. It might look like small print: "We use automated tools to assess qualifications." This confirms that an algorithm—not just a human—will be the first gatekeeper.
2. The EU AI Act (The Global Gold Standard)
Status: Passed; phased implementation starting 2024.
The EU regulates AI using a "risk-based" approach. Hiring and recruitment tools are classified as High Risk.
This imposes strict obligations on companies:
- Data governance (ensuring training data isn't biased)
- Transparency (users must know they are interacting with AI)
- Human oversight (a human must have the final say)
What this means for you:
If you are applying to European companies (or US multinationals that comply globally), there is a much stronger guarantee that a human recruiter is ultimately accountable for the decision, even if AI does the initial sorting.
3. Other States Are Joining In
- Illinois (AI Video Interview Act): Requires consent before AI analyzes video interviews and forces companies to delete videos upon request.
- Maryland: Bans the use of facial recognition services during pre-employment interviews without a signed waiver.
- California: The CPPA is drafting regulations that would give employees and applicants the right to opt out of automated decision-making in certain contexts.
4. Where JobsJudo Fits In
JobsJudo is a candidate-side tool. We don't make hiring decisions. We don't sell data to employers. We don't filter candidates.
Instead, we use AI to reverse-engineer the screening process for you.
- Transparency: We show you exactly why a typical ATS might reject you (e.g., missing keywords, short tenures, employment gaps).
- Empowerment: We give you the data you need to fix those issues before you apply.
- Privacy: Your resume and data are yours. We analyze them privately and give the report to you, not the company.
Our mission is to level the playing field. If companies use AI to filter you out, you deserve AI that helps you stay in.
5. What Should You Do When You See an AI Disclosure?
If you see a notice that AI is being used:
- Don't Panic: It's standard procedure now.
- Optimize for Relevance: AI tools are literal. They look for direct evidence of skills. Vague resumes get filtered.
- Check Your Formatting: Ensure your resume is ATS-friendly (clean fonts, no graphics, standard headings).
- Run a Pre-Check: Use a tool like JobsJudo to see how an algorithm views your application before you hit send.
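To see why "AI tools are literal" matters in practice, here is a toy sketch of the kind of keyword-overlap check a simple screening filter might perform. Real ATS scoring is proprietary and far more sophisticated; the function name, the 4-letter word cutoff, and the extraction heuristic below are illustrative assumptions, not how any specific vendor works.

```python
import re


def keyword_coverage(resume_text, job_description):
    """Toy estimate of how many job-description terms appear verbatim
    in a resume. Illustrative only: real screening tools use
    proprietary, more sophisticated matching.
    """
    # Naive "keyword" extraction: any word of 4+ letters (an assumption,
    # chosen just to skip short filler words like "the" or "and").
    keywords = {w.lower() for w in re.findall(r"[A-Za-z]{4,}", job_description)}
    resume_words = {w.lower() for w in re.findall(r"[A-Za-z]{4,}", resume_text)}

    hits = sorted(keywords & resume_words)
    missing = sorted(keywords - resume_words)
    score = len(hits) / len(keywords) if keywords else 0.0
    return score, missing


score, missing = keyword_coverage(
    "Led agile project delivery across teams",
    "project management experience with agile teams",
)
print(f"coverage: {score:.0%}, missing: {missing}")
```

Note how the resume above never says "management," so a literal matcher counts it as missing even though "project delivery" implies it. That is exactly the synonym trap described earlier: spelling out the job posting's own terms, where truthful, is the safest way past a literal filter.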