Tutorial

How to Set Up AI-Powered Candidate Screening in Under an Hour

AI Agent Brief may earn a commission through links on this page. This does not affect our rankings.

A single job posting for a mid-level role now generates an average of 250–300 applications. For popular positions at well-known companies, that number can climb past 1,000. Manually reviewing each CV takes 6–8 minutes at minimum, which means a recruiter screening 300 applications spends 30–40 hours — nearly a full working week — on a single role before a single interview is scheduled.

AI screening changes that equation. A properly configured AI screen can evaluate 500 applications in the time it takes a human to review 20, surfacing a ranked shortlist of candidates who meet your criteria while flagging those who don’t. The recruiter’s job shifts from reading every CV to reviewing the AI’s shortlist and making judgement calls on the candidates who actually warrant attention.

This guide walks through setting up AI-powered candidate screening from scratch. Total setup time: 30–60 minutes for your first role. Time saved per role thereafter: 5–20+ hours depending on application volume.

What You’ll Need

Before starting, make sure you have:

  • An ATS or recruiting platform with AI screening features — Greenhouse, Workable, Lever, Manatal, or a specialist screening tool (see Step 1 for recommendations)
  • A finalised job description with clear requirements — the AI screens against what you tell it to look for, so vague descriptions produce vague results
  • Access to historical applicant data (if available) — past applications for similar roles help calibrate the AI’s scoring
  • 30–60 minutes for initial setup, plus 15–20 minutes for the calibration test

Step 1: Choose Your Screening Tool

Your choice depends on whether you already have an ATS and what level of screening sophistication you need.

If you’re already on Greenhouse (Advanced or Expert tier): Use Greenhouse’s built-in AI resume filtering. It integrates directly into your existing pipeline, scoring and ranking candidates against the role’s requirements without any external tool. The AI analyses keywords, skills, experience patterns, and qualifications to surface the strongest matches. Setup is minimal because the tool is already embedded in your workflow — you’re configuring it, not installing it.

If you’re on Workable (any plan): Workable includes AI screening across all tiers. The platform evaluates incoming applications against your job requirements and presents a ranked shortlist. The Starter plan ($249/month) includes AI screening for up to 2 active jobs; Standard ($349/month) removes the job limit. For most SMBs, Workable’s built-in screening is sufficient without any add-on tools.

If you need a budget option or don’t have an ATS yet: Manatal ($15/user/month) includes AI candidate scoring and resume parsing that functions as a screening layer. It’s the most affordable path to AI screening, and the 14-day free trial means you can test the full workflow before committing. The AI scores candidates against job requirements automatically as applications come in.

If you need specialist screening for high-volume roles: HireVue offers assessment-based screening (structured video interviews and game-based evaluations) that goes beyond resume parsing into actual competency evaluation. This is a separate product from your ATS and sits earlier in the funnel, typically used for graduate programmes, retail, customer service, and other roles receiving hundreds or thousands of applications.

For a complete comparison, see: Best AI Recruiting Tools in 2026.

Step 2: Define Your Scoring Criteria

This is the most important step in the entire process, and the one most often done badly. AI screening is only as good as the criteria you give it. Vague requirements produce noisy results. Specific, measurable criteria produce useful shortlists.

Convert your job description into scorable requirements. Go through the job description and categorise every requirement into one of three tiers:

Must-have (knockout criteria): Requirements that are genuinely non-negotiable. If a candidate doesn’t meet these, they’re automatically screened out regardless of other strengths. Examples: specific professional certifications (CPA, bar admission, nursing licence), minimum years of experience in a specific domain, legal right to work in the relevant jurisdiction, or specific technical skills that can’t be trained on the job.

Keep this list short and genuinely non-negotiable. Every must-have criterion you add eliminates candidates — some of whom might be excellent fits who happen to describe their experience differently than your keyword expects.

Should-have (weighted criteria): Requirements that significantly improve a candidate’s score but aren’t absolute disqualifiers. Examples: experience in a specific industry, familiarity with particular tools or methodologies, management experience, or relevant educational background. These should carry substantial weight in the AI scoring but not trigger automatic rejection.

Nice-to-have (bonus criteria): Qualities that differentiate strong candidates from good ones but shouldn’t affect initial screening. Examples: additional languages, specific certifications beyond the minimum, experience at a well-known company in the field, or evidence of thought leadership. These add points to a candidate’s score but don’t drive the primary ranking.
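
To make the three tiers concrete, here is a minimal sketch of the criteria expressed as structured data before they go into your ATS. The role, criteria, and field names are illustrative assumptions, not any platform’s configuration schema.

```python
# Illustrative three-tier criteria for a hypothetical Senior Accountant role.
# Field names and values are assumptions for this sketch, not an ATS schema.
screening_criteria = {
    "must_have": [            # knockout: missing any of these screens the candidate out
        "CPA or equivalent professional accounting certification",
        "Legal right to work in the relevant jurisdiction",
        "3-7 years of relevant accounting experience",
    ],
    "should_have": [          # weighted: improves the score, never auto-rejects
        "Experience in a SaaS or subscription-revenue business",
        "Familiarity with a major ERP system",
        "Experience owning month-end close",
    ],
    "nice_to_have": [         # bonus: differentiates strong candidates from good ones
        "Second language",
        "Experience presenting to auditors or the board",
    ],
}

for tier, items in screening_criteria.items():
    print(f"{tier}: {len(items)} criteria")
```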

Define what “good” looks like in measurable terms. Instead of “strong communication skills” (unmeasurable by AI), specify “experience in client-facing roles” or “track record of written deliverables” (identifiable from CV content). Instead of “team player” (meaningless to an AI), specify “experience working in cross-functional teams” or “collaborative project delivery.”

Set experience ranges, not minimums. Asking for “5+ years of experience” excludes a candidate with 4 years and 10 months who might be exceptional. AI screening works better with ranges: “3–7 years of relevant experience” scores candidates on a curve rather than imposing a binary cutoff.
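
To illustrate range-based scoring, the sketch below scores years of experience against a 3–7 year band, giving partial credit just outside it rather than a hard cutoff. The curve shape and falloff are assumptions for illustration; platforms that support ranges implement this internally.

```python
def experience_score(years: float, low: float = 3, high: float = 7, falloff: float = 2) -> float:
    """Score years of experience against a target range on a 0-1 scale.

    Full credit inside [low, high], partial credit within `falloff` years of
    the range, zero beyond that. A hard "5+ years" rule would instead return
    only 0 or 1, with no middle ground for a 4-years-10-months candidate.
    """
    if low <= years <= high:
        return 1.0
    distance = (low - years) if years < low else (years - high)
    return max(0.0, 1.0 - distance / falloff)

print(experience_score(4.8))   # 1.0 -> inside the range
print(experience_score(2.0))   # 0.5 -> one year short, partial credit
print(experience_score(10.0))  # 0.0 -> well outside the range
```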

Step 3: Configure the AI Screen

With your scoring criteria defined, here’s how to set up the screen in your platform:

Input your criteria into the screening configuration. In Greenhouse, this means setting up the role’s scorecard criteria and enabling AI filtering against them. In Workable, it means configuring the job requirements, which the AI then applies automatically to incoming applications. In Manatal, it means setting up the job listing with detailed requirements so the AI scoring engine can evaluate each candidate against them.

Set your scoring weights. Most platforms let you assign relative weights to different criteria. A practical starting point: must-haves at 50% weight (these are pass/fail, but the weight ensures they dominate the ranking), should-haves at 35%, and nice-to-haves at 15%. Adjust based on the role — technical positions might weight specific skills higher; management roles might weight leadership experience higher.
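
As an illustration of how those weights combine, here is a minimal sketch assuming each criterion has already been scored 0–1 for a candidate. The per-criterion scores and the knockout handling are assumptions; your platform does this arithmetic internally.

```python
WEIGHTS = {"must_have": 0.50, "should_have": 0.35, "nice_to_have": 0.15}

def candidate_score(criterion_scores: dict[str, list[float]]) -> float:
    """Combine per-criterion scores (each 0-1) into one weighted score.

    A failed must-have (score of 0) is a knockout and returns 0 outright;
    otherwise each tier contributes its average score times its weight.
    """
    if any(score == 0 for score in criterion_scores.get("must_have", [])):
        return 0.0  # knockout: a hard requirement was missed
    total = 0.0
    for tier, weight in WEIGHTS.items():
        scores = criterion_scores.get(tier, [])
        if scores:
            total += weight * (sum(scores) / len(scores))
    return total

example = {
    "must_have": [1.0, 1.0, 0.8],    # meets all hard requirements
    "should_have": [1.0, 0.5, 0.0],  # partial match on weighted criteria
    "nice_to_have": [1.0],           # one bonus criterion
}
print(round(candidate_score(example), 3))  # 0.792
```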

Configure the shortlist threshold. Decide how many candidates you want the AI to surface for human review. For a role receiving 200 applications, a threshold of 15–25 candidates (roughly the top 10%) gives you a manageable shortlist without cutting too aggressively. For high-volume roles with 1,000+ applications, a shortlist of 30–50 still saves enormous time.
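
A short sketch of one way to scale the shortlist with application volume, roughly following the numbers above; the clamping bounds are assumptions, and most platforms simply ask for a fixed count or percentage.

```python
def shortlist_size(n_applications: int, target_fraction: float = 0.10,
                   minimum: int = 15, maximum: int = 50) -> int:
    """Shortlist roughly the top 10%, but never fewer than 15 or more than 50."""
    return max(minimum, min(maximum, round(n_applications * target_fraction)))

def shortlist(scored: list[tuple[str, float]], n_applications: int) -> list[tuple[str, float]]:
    """Return the top-scoring candidates for human review."""
    top_n = shortlist_size(n_applications)
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

print(shortlist_size(200))   # 20 -> within the 15-25 band suggested above
print(shortlist_size(1200))  # 50 -> capped for very high-volume roles
```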

Set up rejection handling. Candidates who fall below your threshold still deserve a response. Configure automatic rejection emails for candidates screened out, with appropriate timing (24–48 hours after the screening cycle closes, not instant — instant rejections feel impersonal and suggest the application wasn’t considered). Most ATS platforms include customisable rejection templates.
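
As a sketch of that timing rule, the snippet below staggers rejection sends across the 24–48 hour window after the screening cycle closes. The scheduling function is hypothetical; in practice your ATS’s own email templates and delay settings handle this.

```python
from datetime import datetime, timedelta

def schedule_rejections(screened_out: list[str], closed_at: datetime) -> dict[str, datetime]:
    """Spread rejection emails evenly across the 24-48 hour window after screening closes."""
    send_times = {}
    n = len(screened_out)
    for i, candidate in enumerate(screened_out):
        fraction = i / (n - 1) if n > 1 else 0.0  # 0.0 for the first send, 1.0 for the last
        send_times[candidate] = closed_at + timedelta(hours=24 + 24 * fraction)
    return send_times

closed = datetime(2026, 3, 2, 17, 0)
for candidate, send_at in schedule_rejections(["cand_014", "cand_087", "cand_203"], closed).items():
    print(candidate, send_at)
```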

Enable the compliance features. If you hire candidates in jurisdictions with AI hiring regulations (Illinois, NYC, Colorado, EU), configure the required disclosures. Add a notice to your application form explaining that AI is used in the screening process. Capture consent where required. See our compliance guide: AI Recruiting Tools That Comply With 2026 Hiring Regulations.

Step 4: Run a Calibration Test

Before letting the AI screen live applications, test it against candidates whose quality you already know. This calibration step catches configuration errors before they affect real candidates.

Pull 20–30 past applications for similar roles. Ideally, select a mix: 8–10 candidates you hired or advanced to final rounds (known-good), 8–10 candidates you rejected early in the process (known-weak), and 5–10 candidates who were borderline (the interesting test cases).

Run the AI screen on this historical batch. Feed the past applications through your configured screening criteria and review the AI’s rankings.

Check for alignment. The AI should rank your known-good candidates near the top and your known-weak candidates near the bottom. If it does, your criteria are well-calibrated. If known-good candidates are being screened out, your must-have criteria are likely too restrictive — you may be filtering on specific keywords that strong candidates don’t use, or setting experience thresholds that exclude people who took non-linear career paths. If known-weak candidates are ranking highly, your criteria aren’t discriminating enough — you may need to add more specific requirements or adjust your scoring weights.
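
To make the alignment check concrete, here is a minimal sketch that compares the AI’s ranking positions for known-good and known-weak candidates in the calibration batch. The scores and labels are invented for illustration; in practice you would export the AI’s scores and attach your own labels from past hiring records.

```python
# Hypothetical calibration batch: (candidate_id, ai_score, your_label).
calibration = [
    ("c01", 0.91, "known_good"), ("c02", 0.88, "known_good"),
    ("c03", 0.34, "known_weak"), ("c04", 0.79, "borderline"),
    ("c05", 0.22, "known_weak"), ("c06", 0.85, "known_good"),
    ("c07", 0.41, "borderline"), ("c08", 0.18, "known_weak"),
]

ranked = sorted(calibration, key=lambda row: row[1], reverse=True)
positions = {cid: rank + 1 for rank, (cid, _, _) in enumerate(ranked)}

def mean_rank(label: str) -> float:
    ranks = [positions[cid] for cid, _, lbl in calibration if lbl == label]
    return sum(ranks) / len(ranks)

print("mean rank, known-good:", mean_rank("known_good"))  # should be low (near the top)
print("mean rank, known-weak:", mean_rank("known_weak"))  # should be high (near the bottom)

# Red flag: a known-good candidate with a score of 0 was knocked out by a must-have.
print([cid for cid, score, lbl in calibration if lbl == "known_good" and score == 0])
```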

Pay particular attention to the borderline cases. These reveal whether your AI screen is making the same nuanced judgements you’d make manually. If the AI advances borderline candidates you’d have rejected, consider adding criteria. If it rejects borderline candidates you’d have advanced, consider loosening criteria or adjusting weights.

Check for adverse impact. Review whether the AI’s shortlist shows any patterns of disproportionate exclusion by gender, ethnicity, age, or other protected characteristics. This is both an ethical obligation and a legal requirement in many jurisdictions. If you spot concerning patterns, revisit your criteria — the issue is almost always in the requirements, not the AI itself.
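
One widely used heuristic for this review is the four-fifths rule: the selection rate for any group should be at least 80% of the rate for the most-selected group. The sketch below applies it to a shortlist; the group labels and counts are invented for illustration, and a real adverse-impact review should involve your legal or people-analytics team.

```python
def selection_rates(shortlisted: dict[str, int], applied: dict[str, int]) -> dict[str, float]:
    """Selection rate per group: shortlisted count divided by applicant count."""
    return {group: shortlisted.get(group, 0) / applied[group] for group in applied}

def four_fifths_check(rates: dict[str, float]) -> dict[str, bool]:
    """Flag groups whose selection rate falls below 80% of the highest group's rate."""
    best = max(rates.values())
    return {group: rate >= 0.8 * best for group, rate in rates.items()}

# Invented counts for illustration only.
applied = {"group_a": 120, "group_b": 80}
shortlisted = {"group_a": 18, "group_b": 7}

rates = selection_rates(shortlisted, applied)
print(rates)                     # {'group_a': 0.15, 'group_b': 0.0875}
print(four_fifths_check(rates))  # {'group_a': True, 'group_b': False} -> investigate the criteria
```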

Step 5: Review Results and Adjust

After the calibration test, make targeted adjustments before going live:

Refine knockout criteria. If too many strong candidates are being filtered out, convert some must-haves to should-haves. The most common over-filtering mistakes are requiring specific degree types (excluding equivalent experience), demanding exact years of experience (rather than ranges), and using brand-name keywords (e.g., requiring “Salesforce” when the candidate used an equivalent CRM).

Adjust scoring weights. If the ranking feels right but the ordering is slightly off — good candidates are in the shortlist but not at the top — adjust the relative weights between should-have and nice-to-have categories.

Test edge cases. Run a few unconventional but potentially strong profiles through the screen. Career changers, candidates from adjacent industries, and people with non-traditional backgrounds are the ones most likely to be unfairly excluded by overly rigid AI criteria. Make sure your configuration gives them a fair chance.

Document your configuration. Record your screening criteria, scoring weights, and threshold settings for each role. This documentation serves three purposes: it ensures consistency if you hire for the same role again, it creates the audit trail that compliance regulations increasingly require, and it gives hiring managers transparency into how their pipeline was filtered.
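
A minimal sketch of what that record could look like as a versioned JSON file; the field names are assumptions rather than any platform’s export format.

```python
import json
from datetime import date

# Hypothetical audit record for one role; field names are illustrative.
config_record = {
    "role": "Senior Accountant",
    "configured_on": date.today().isoformat(),
    "configured_by": "jane.doe",
    "criteria": {
        "must_have": ["CPA or equivalent", "3-7 years relevant experience"],
        "should_have": ["SaaS industry experience", "Month-end close ownership"],
        "nice_to_have": ["Second language"],
    },
    "weights": {"must_have": 0.50, "should_have": 0.35, "nice_to_have": 0.15},
    "shortlist_threshold": 20,
    "calibration_test_date": "2026-02-14",
}

with open("screening_config_senior_accountant.json", "w") as f:
    json.dump(config_record, f, indent=2)
```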

Go live. Once the calibration test confirms the AI is producing results aligned with your judgement, enable it on your live job postings. The first few roles should still include a higher-than-normal level of human spot-checking — review a sample of screened-out candidates to verify the AI isn’t making systematic errors.

Ongoing Management

AI screening isn’t a one-time setup. Maintain and improve the system over time:

Review and recalibrate quarterly. Job markets change, role requirements evolve, and the candidate pool shifts. Revisit your screening criteria every quarter to ensure they still reflect what you actually need.

Track screening outcomes. Monitor which AI-screened candidates progress to interviews, receive offers, and succeed in the role. If the AI’s top-ranked candidates consistently outperform lower-ranked ones, your criteria are working. If there’s no correlation, your criteria need work.
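
As a sketch of that outcome check, the snippet below groups AI scores into bands and compares how often each band advanced to interview; higher bands should advance more often. The records are invented, and in practice you would export score and outcome data from your ATS.

```python
# Hypothetical records exported from the ATS: (ai_score, advanced_to_interview).
outcomes = [
    (0.92, True), (0.88, True), (0.84, True), (0.77, True),
    (0.71, False), (0.64, False), (0.55, False), (0.43, True),
]

def advance_rate_by_band(records: list[tuple[float, bool]], band_size: float = 0.2) -> dict[str, float]:
    """Share of candidates in each score band who advanced to interview."""
    bands: dict[str, list[bool]] = {}
    for score, advanced in records:
        lower = int(score / band_size) * band_size
        bands.setdefault(f"{lower:.1f}-{lower + band_size:.1f}", []).append(advanced)
    return {band: sum(vals) / len(vals) for band, vals in sorted(bands.items(), reverse=True)}

# If high-scoring bands do not advance more often than low-scoring ones,
# the criteria are not predicting what you actually value.
print(advance_rate_by_band(outcomes))
```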

Update for regulatory changes. AI hiring regulations are evolving rapidly. Stay current with requirements in every jurisdiction where you hire, and update your disclosure, consent, and audit processes accordingly.

Resist the temptation to over-automate. AI screening should narrow the pool and surface the best candidates for human review — not make hiring decisions. The recruiter’s judgement on the shortlisted candidates remains essential, and final hiring decisions should always involve human evaluation.

Frequently Asked Questions

How accurate is AI candidate screening?

Accuracy depends entirely on how well you define your criteria. With well-configured, specific scoring requirements, AI screening consistently surfaces candidates whose qualifications match the role — and does so faster and more consistently than manual review, which is subject to fatigue, inconsistency, and unconscious bias. The AI won’t catch everything a human would (particularly soft signals like career narrative and motivation), which is why it generates a shortlist for human review rather than making final decisions.

Will AI screening miss great candidates?

It can — particularly candidates with non-traditional backgrounds, career changers, and people who describe their experience using different terminology than your criteria specify. This is why the calibration test (Step 4) and ongoing monitoring are critical. The best approach is to err on the side of slightly larger shortlists rather than aggressively narrow ones — it’s better to review 25 candidates than to miss the best one because the AI filtered on a keyword they didn’t use.

Do I need to tell candidates I’m using AI screening?

In an increasing number of jurisdictions, yes. Illinois requires notification when AI is used in employment decisions. NYC requires disclosure of automated employment decision tools. Colorado requires transparency disclosures. Even where not yet legally required, disclosure is becoming a best practice that builds candidate trust. Add a clear, plain-language notice to your application form.

Back to Best AI Recruiting Tools in 2026: Sourcing, Screening, and Hiring Platforms Compared