How AI Is Reshaping Recruitment (For Better and Worse)
If you’ve applied for a job recently, there’s a decent chance your resume was first read by a machine. Not a person. An algorithm decided whether you were worth a human’s time.
That’s not new — applicant tracking systems have been filtering resumes for over a decade. What’s new is how much smarter (and sometimes dumber) these systems have become with AI.
The screening problem
Traditional resume screening worked on keywords. If the job description said “project management” and your resume said “project management,” you got through the filter. Simple. Gameable, but simple.
Modern AI screening tools claim to go deeper. They analyse context, infer the required skills from job descriptions, and rank candidates on predicted fit. Companies like HireVue and Pymetrics have built entire businesses on the promise of smarter screening.
The pitch is compelling: reduce bias, save time, find better candidates. The reality is messier.
A study from the University of Melbourne found that AI screening tools can introduce new forms of bias even as they reduce old ones. If the training data comes from a company’s historical hiring patterns — and it usually does — the AI learns to replicate whatever preferences existed before. If the company historically hired men for engineering roles, the AI will favour male candidates. Not because it’s programmed to, but because that’s what “success” looked like in the data.
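That feedback loop is easy to demonstrate. Here's a toy sketch in Python, with entirely invented data, showing how a naive screening model trained on historically biased outcomes reproduces the bias through a proxy feature, even when gender itself never appears anywhere in the records:

```python
# Toy sketch (invented data): a naive screening model trained on biased
# historical outcomes reproduces that bias via a correlated proxy feature.
from collections import defaultdict

# Hypothetical historical records: (candidate features, hired?).
# "club" stands in for any feature that happens to correlate with gender.
history = [
    ({"degree": "cs", "club": "rugby"}, True),
    ({"degree": "cs", "club": "rugby"}, True),
    ({"degree": "cs", "club": "netball"}, False),
    ({"degree": "cs", "club": "netball"}, False),
    ({"degree": "ee", "club": "rugby"}, True),
    ({"degree": "ee", "club": "netball"}, False),
]

# "Train": record the historical hire rate for every feature value seen.
rates = defaultdict(lambda: [0, 0])  # value -> [hires, total]
for features, hired in history:
    for value in features.values():
        rates[value][0] += int(hired)
        rates[value][1] += 1

def score(features):
    """Average historical hire rate across the candidate's feature values."""
    vals = [rates[v][0] / rates[v][1] for v in features.values() if v in rates]
    return sum(vals) / len(vals)

# Two candidates with identical qualifications differ only on the proxy
# feature, yet the model ranks them very differently.
print(score({"degree": "cs", "club": "rugby"}))    # high
print(score({"degree": "cs", "club": "netball"}))  # low
```

Nothing in that sketch is "programmed" to prefer anyone. The proxy feature carries the historical pattern straight into the rankings, which is exactly the failure mode the Melbourne study describes.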
What’s actually working
Not everything about AI in recruitment is problematic. Some applications are genuinely useful.
Scheduling automation. Coordinating interview times across multiple people is a nightmare. AI scheduling tools that can check calendars, suggest times, and handle rescheduling are saving recruitment teams hours per week. It’s unglamorous but effective.
Job description analysis. Tools that flag gendered language, unnecessarily restrictive requirements, and jargon in job postings are helping companies attract more diverse candidate pools. If your job ad says “rockstar developer” and requires a computer science degree for a role that doesn’t need one, AI can flag that.
Candidate sourcing. AI-powered sourcing tools that scan LinkedIn, GitHub, and other platforms to identify potential candidates are useful for hard-to-fill roles. They’re not replacing recruiters — they’re giving recruiters a starting list instead of a blank page.
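Of these, job description analysis is the easiest to picture. A minimal sketch, with illustrative word lists rather than anything taken from a real tool, might look like:

```python
# Minimal job-ad language checker. The word lists are illustrative
# assumptions, not drawn from any actual product.
import re

GENDERED = {"rockstar", "ninja", "guru", "aggressive", "dominant"}
JARGON = {"synergy", "leverage", "disrupt"}

def flag_job_ad(text):
    """Return (word, reason) flags for each problem word, in order."""
    flags = []
    for word in re.findall(r"[a-z]+", text.lower()):
        if word in GENDERED:
            flags.append((word, "gendered/exclusionary language"))
        elif word in JARGON:
            flags.append((word, "jargon"))
    return flags

ad = "We need a rockstar developer to leverage synergy across teams."
print(flag_job_ad(ad))
# → [('rockstar', 'gendered/exclusionary language'),
#    ('leverage', 'jargon'), ('synergy', 'jargon')]
```

Real tools add statistical models on top of lists like these, but the core idea is the same: surface the language before the ad goes live, while a human can still fix it.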
What’s not working
Video interview analysis. Some platforms claim to assess candidates by analysing their facial expressions, tone of voice, and word choice during recorded video interviews. This is, frankly, pseudoscience dressed up as technology. There’s no credible evidence that micro-expressions predict job performance. And for candidates with disabilities, neurodivergent traits, or simply interview anxiety, these tools are actively harmful.
AI-generated interview questions. They tend to be generic and poorly tailored to the actual role. A human hiring manager who understands the job will write better questions than an AI that’s read a thousand job descriptions.
Chatbot-based initial screening. Candidates hate them. A survey by CareerBuilder found that over 60% of job seekers have abandoned an application process that felt too automated. First impressions matter, and a chatbot saying “Tell me about yourself!” isn’t a great one.
The candidate experience problem
Here’s what nobody in HR tech wants to talk about: AI is making the candidate experience worse, not better.
Applicants are sending resumes into black holes. They’re being rejected by algorithms they can’t appeal to. They’re recording video interviews that get analysed by software. They’re chatting with bots pretending to be people.
The process feels dehumanising because it is. And in a tight labour market — which Australia still has for skilled roles — candidate experience matters. The best candidates have options. If your hiring process feels like applying for a bank loan, they’ll go somewhere else.
In our work with mid-market companies, we've found that the most effective AI recruitment implementations are the ones candidates never notice: backend automation that makes the recruiter faster, not frontend automation that replaces the recruiter.
What smart companies are doing
The best approach we’re seeing is what you might call “AI in the background, humans in the foreground.”
Use AI to sort and prioritise applications. Use AI to schedule interviews. Use AI to write first drafts of job descriptions. But keep humans making the actual decisions. Humans reviewing the shortlist. Humans conducting the interviews. Humans making the offer.
This hybrid model gives you the efficiency benefits of AI without the candidate experience problems of full automation.
The legal landscape
Australian employers should also be paying attention to the regulatory environment. The Australian Human Rights Commission has flagged AI in hiring as a priority area. Automated decision-making that affects employment is likely to face increased scrutiny, and possibly regulation, in the coming years.
If you’re using AI tools in your hiring process, document what they do and how they make decisions. If you can’t explain why a candidate was rejected, you’ve got a problem — both ethically and potentially legally.
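One lightweight way to build that paper trail, sketched here as a hypothetical design rather than a compliance recipe, is to wrap every automated screening step so it records its inputs and a plain-language reason for each decision:

```python
# Hypothetical audit-log design: wrap any automated screening step so
# every decision is recorded with its inputs and a readable reason.
import datetime

AUDIT_LOG = []

def audited(step_name):
    """Decorator that logs each screening decision before returning it."""
    def wrap(fn):
        def inner(candidate):
            passed, reason = fn(candidate)
            AUDIT_LOG.append({
                "step": step_name,
                "candidate_id": candidate["id"],
                "inputs": candidate,
                "passed": passed,
                "reason": reason,
                "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            })
            return passed
        return inner
    return wrap

@audited("keyword_screen")
def keyword_screen(candidate):
    """Illustrative screen: requires two skills, explains any rejection."""
    required = {"python", "sql"}
    missing = required - set(candidate["skills"])
    if missing:
        return False, f"missing required skills: {sorted(missing)}"
    return True, "all required skills present"

keyword_screen({"id": 1, "skills": ["python"]})
print(AUDIT_LOG[-1]["reason"])  # → missing required skills: ['sql']
```

With a log like this, "why was this candidate rejected?" has an answer you can actually give, to the candidate or to a regulator.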
Looking ahead
AI will keep getting more embedded in recruitment. That’s inevitable. The question is whether it gets embedded thoughtfully or carelessly.
The companies that get this right will use AI to handle the administrative burden of hiring while keeping the human judgment where it matters: assessing whether someone will actually thrive in the role, on the team, and in the culture.
The companies that get it wrong will wonder why their best candidates keep declining offers. Turns out, people want to be hired by people. Who knew.