
How AI Should Be Used in Hiring

Rishit Chaturvedi

The Algorithm Doesn't Know What It Feels Like

There is a moment in every great hiring conversation where a person reveals something true about themselves. Usually it's somewhere between the rehearsed answer and the real one. Maybe it's how they describe a failure. Maybe it's the pause before a hard question. Maybe it's what comes out when the interviewer asks "Is there anything else?" right at the end.

That moment is not in any dataset. No model has learned to recognize it. And if you're the person responsible for bringing AI into your hiring process, that's probably a good place to start feeling a little uncomfortable.


I. Understand what you're actually replacing

The pitch for AI in hiring is efficiency. Resume screening takes forever. Scheduling is a mess. Standardizing questions sounds like fairness. These are real problems, sure.

But there's a difference between using AI to reduce friction and using it to make decisions about people. The first is a productivity tool. The second is a values statement. One of these has a much longer paper trail when things go wrong.

When you automate a part of hiring, ask yourself: what human judgment am I replacing, and what was the purpose of that judgment? Screening resumes for keywords was always a blunt instrument. AI does it faster, but faster blunt is still blunt. Scoring interview answers assumes the score is capturing something real. Is it?

"We had 4 panelists interviewing for the same role, and they were all asking the same questions. Meanwhile, critical competencies like quantitative analysis weren't being evaluated at all. Nobody noticed until we audited the whole thing."

— Head of TA at a large consumer tech company

And the process AI is inheriting is already broken. An analysis of 23,000 interview transcripts by BrightHire found that after five interviews, only 66% of job description skills are actually well covered. Worse, 72% of well-covered skills are redundantly revisited in later rounds, with technical skills hitting an 87% redundancy rate. A recruiter with over six years of experience described interviewers who don't even review a candidate's background before walking in. They just ask whatever comes to mind.

A senior TA leader who has managed teams across both startups and enterprises put it plainly:

"80% of organizations still run recruitment transactionally. Source, schedule, close. There is minimal process involvement beyond getting the role filled."

Only about 15% of hiring teams have what you'd call a systematic, efficient process. The rest are winging it with varying degrees of awareness. When AI automates a broken process, it doesn't fix it. It just makes it faster and harder to question.


II. AI reflects the biases you fed it

Every AI system trained on historical hiring data inherits the assumptions baked into those decisions. If your company historically hired from three universities, the model will quietly learn to prefer those three universities. If your top performers shared certain traits, the model will look for those patterns without ever asking why they were there in the first place.

This is not theoretical. Amazon scrapped an AI recruiting tool in 2018 after finding it systematically downgraded resumes that included the word "women's," as in "women's chess club." The model had just learned from a decade of hiring decisions made by humans who had their own blind spots.

The uncomfortable part is not that AI is biased. It's that AI bias is invisible and confident. A biased human interviewer can be challenged. They might even change their mind over lunch. An algorithm just gives you a number and moves on.

This matters because so much of today's hiring already runs on gut feel dressed up as judgment. Multiple hiring managers I spoke with openly acknowledged that "vibe" is a legitimate rejection reason.

"It's like hiring a maid or a nanny. You're assessing cultural fit beyond any technical competency. Sometimes you just know it's not right, and that's a valid call."

— A recruiter describing how hiring managers explain rejections

When those vibe-based decisions become training data, the model doesn't learn wisdom. It learns prejudice with a confidence score.

An experienced TA professional raised the core tension during a BrightHire research session: even with a great process in place that coordinates skills, questions, and outcomes, how do you handle hiring managers who still want to reject candidates based on gut feelings? Nobody had a good answer.

Meanwhile, companies that have tried to strip out bias show how hard it actually is. One large tech company's hiring committee reviews interview feedback anonymously. Senior directors who don't know the candidate's name or background read every line of feedback and code snippet before making a call. A collaboration software company isolates interviewer feedback until everyone has submitted, specifically to prevent one person's opinion from anchoring the group. These are deliberate, expensive design choices. They suggest that the default state of any hiring process, human or AI, is bias. Fighting it requires constant, intentional effort.


III. A framework for people who actually care

If you're rolling out AI in hiring, here's what should be guiding every call you make:

  • Augment, don't automate judgment. AI surfaces information. Humans make calls about people. A score should start a conversation, not end one. One workflow automation company built an AI reference check system that analyzes scorecards and transcripts to identify signal gaps, then generates tailored reference questions. But a human reviews and approves every output via Slack before anything gets sent. The system finds what's missing. The person decides what to do about it.

  • Audit your training data like your reputation depends on it. Because it does. Ask who you hired historically and why. If that answer is fuzzy, the data is not ready. One talent intelligence analyst I spoke with tracks 47+ companies for hiring signals and builds predictive layoff indicators. Even with all that data infrastructure, he emphasized that interview feedback delays and poor data capture remain the biggest bottlenecks. The data you have is probably worse than you think.

  • Tell candidates AI is involved. This is table stakes. People deserve to know when a machine evaluated them and what weight that carried. In practice, only 4% of candidates reject a bot's presence when told upfront. Transparency costs almost nothing and builds real trust.

  • Measure for equity, not just speed. Track outcomes by demographic. If one group is getting filtered out at higher rates, that is a signal worth taking seriously, not a rounding error. One HR tech company tracks what strengths and coaching areas were correctly identified during interviews versus 90-day performance reviews, finding 93% of engineering hires meeting or exceeding expectations. That kind of closed-loop measurement is what accountability looks like. Most companies don't do it. (A minimal sketch of the per-stage check follows this list.)

  • Protect the human moments. The unscripted conversation, the follow-up question nobody planned, the thing someone says when they think the interview is over. These cannot be automated. Do not let them be.

    "I go 3 or 4 levels deep on every answer. That's where you find out if someone actually understands what they're talking about or if they just memorised something. An algorithm memoriser breaks at the third question. A builder gets more interesting."

    — An engineering leader at a legal tech company

    One hiring manager's signature move is a real-world seat-mapping problem with no textbook answer, specifically because it tests how someone thinks when no algorithm can help them.

  • Keep humans in the loop, always. Every AI flag, rejection, or shortlist should be reviewable by a real person. Build the override in from day one, not after the first lawsuit. An enterprise software company learned this the hard way: 6 out of 10 candidates were using AI to cheat on online assessments, completing 6-hour tests in 1-2 hours. They moved to in-person, 6-hour physical coding sessions, and the quality signal improved dramatically. When you can't see the human, you can't trust the output. That goes for candidates and for AI systems making decisions about them.
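
To make the "measure for equity" point concrete: below is a minimal sketch, in Python, of the kind of per-stage check it describes. It computes each group's pass-through rate at a single hiring stage and flags any group falling below four-fifths of the best-performing group's rate, a common first-pass screen for adverse impact. The data shape and function names here are hypothetical, not drawn from any tool mentioned in this piece.

    from collections import defaultdict

    def selection_rates(candidates):
        """candidates: one dict per person at a single pipeline stage,
        e.g. {"group": "A", "advanced": True}."""
        totals, passed = defaultdict(int), defaultdict(int)
        for c in candidates:
            totals[c["group"]] += 1
            passed[c["group"]] += c["advanced"]  # bool counts as 0/1
        return {g: passed[g] / totals[g] for g in totals}

    def adverse_impact_flags(rates, threshold=0.8):
        """Flag groups whose selection rate falls below `threshold`
        times the highest group's rate (the four-fifths rule)."""
        best = max(rates.values())
        return {g: r / best for g, r in rates.items() if r / best < threshold}

    # Run the check per stage (resume screen, phone screen, onsite):
    # a disparity can hide in one stage and wash out in the aggregate.
    stage = [
        {"group": "A", "advanced": True}, {"group": "A", "advanced": True},
        {"group": "A", "advanced": False}, {"group": "B", "advanced": True},
        {"group": "B", "advanced": False}, {"group": "B", "advanced": False},
    ]
    print(adverse_impact_flags(selection_rates(stage)))  # {'B': 0.5}

A flag like this proves nothing on its own. It tells you where to look, which is exactly the job the framework above assigns to AI: surface information so a human can make the call.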


IV. The question underneath the question

Here's what no implementation guide will actually say out loud: automating hiring is a cultural statement about what your company believes about people.

You are saying candidates can be fairly represented by data points. You are saying past patterns predict future potential. You are saying efficiency here is worth tradeoffs you have not fully mapped yet.

The research makes the tradeoffs concrete. About 60% of organizations now use AI or assessment tools in hiring, but they split into two camps: those who actually read the reports and use insights to vouch for candidates, and those who use them as a pass/fail filter. The second group abandons tools within 2-3 months when no clear ROI appears. The tool didn't fail. The belief system behind it was never there.

"Every hiring manager wants their own customised approach. In a 100-person startup, everyone wants involvement in hiring decisions. Standardisation is nearly impossible. But without it, you get interviewers rejecting people because of 'no vibe' or 'didn't like them' with zero reasoning."

— A senior TA leader with 12+ years across agencies, startups, and MNCs

The desire for structure and the resistance to it live in the same organization, often in the same meeting. AI doesn't resolve that tension. It just makes it someone else's problem.

A founder who previously built an AI screening product offered the sharpest framing I heard:

"Think of it like milk versus shelf placement. You only outsource what isn't core to your output. A tech company outsourcing engineering judgment is outsourcing the thing that matters most."

Maybe those are beliefs you're willing to own. But they should be explicit, named, and debated openly. Not quietly tucked into a vendor's black box with a nice dashboard on top.

The question is not whether AI can improve hiring.

It's whether you're willing to be accountable for how it does.


V. What it actually looks like when done right

It looks like a team that spent years understanding their own process before touching a single tool. At one hypergrowth consumer company, it took 4-5 years to fully implement org-wide competency mapping by role level, with defined proficiency expectations, structured feedback requirements (100+ words minimum in the ATS), and a hard rule: no feedback submitted means no debrief, which means the process stops. That's not a tool implementation. That's a culture change.

It looks like a recruiter using an AI summary to ask better questions, not to skip the conversation entirely. It looks like a company that caught their model filtering out non-native English speakers and hit pause immediately.

It looks like defining three clear criteria before any technology enters the picture: table stakes, acceptable growth areas, and bar-raising qualities. One company made AI fluency a non-negotiable minimum bar across every single role, with a comprehensive rubric shared publicly and training that runs from the recruiter screen through executive interviews. They didn't automate judgment. They clarified what judgment should look like, and then built tools to support it.

"When I manually reviewed all 500,000 resumes myself instead of relying on the ATS filters, the candidate quality was noticeably better. It cost me time. But the outcomes were real."

— An engineering leader comparing hiring processes across two companies

It looks like people who understood that speed was never really the goal. The goal was finding the right person. And that has always required another human being willing to pay attention.