The data is the problem
Every AI tool is only as good as the data that goes into it. That sounds like a cliché. But take a look at the average ATS database.
We connect with more than twenty different ATS systems, from Bullhorn to Carerix, from AFAS to Salesforce. What we encounter daily: incorrect job titles, skills fields that are never filled in, outdated location data, and candidate profiles that haven't been updated in years.
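You can make that messiness visible with a simple completeness audit. The sketch below is illustrative, not our production code: the records and field names (job_title, skills, location, updated) are hypothetical stand-ins for what a typical ATS export looks like.

```python
from collections import Counter

# Hypothetical candidate records as they often arrive from an ATS export:
# fields empty, never filled in, or years out of date.
candidates = [
    {"job_title": "Sr. Developer", "skills": None, "location": "Utrecht", "updated": "2019-03-01"},
    {"job_title": None, "skills": ["python"], "location": None, "updated": "2024-11-12"},
    {"job_title": "Developer", "skills": None, "location": "Amsterdam", "updated": None},
]

def field_completeness(records):
    """Return, per field, the fraction of records with a non-empty value."""
    counts = Counter()
    for rec in records:
        for field, value in rec.items():
            if value not in (None, "", []):
                counts[field] += 1
    return {field: counts[field] / len(records) for field in records[0]}

completeness = field_completeness(candidates)
# In this toy set, "skills" is filled in for only a third of the profiles.
```

Run this against a real export before believing any accuracy claim: a model can't match on a skills field that is empty two-thirds of the time.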
That is the reality on which AI screening operates. Not the clean demo data you see at a trade show, but the messy reality of thousands of organizations using their ATS as a digital drawer where everything is thrown in.
When you hear a supplier say "our AI matches candidates with 95% accuracy", my first question is always: compared to what? On their own cleaned-up dataset, or on real production data where half the fields are empty?
The arms race that no one can win
Something funny is going on when you take a step back.
Almost half of the applicants have ChatGPT polish their CV, rewrite their letter, and optimize their profile for the keywords that ATS systems search for. On the other hand, almost every employer uses automated systems to screen those same CVs.
Two AI systems fighting each other. One optimizes to get through the filter; the other filters to pick out the best. If you think about it, it's actually quite absurd. We've built an entire ecosystem where AI on one side makes texts look better than they are, and AI on the other side tries to see through it.
And then there's the other half: the candidates who just do it honestly, who write their own letter and fill in their own CV. They are now compared against polished AI versions and come off worse. Not because they are less qualified, but because their text looks less slick. The honest applicant is penalized for not using AI.
That is perhaps the most bizarre thing about this whole development. We've created a system where you're almost forced to use AI because otherwise, you fall behind everyone who does. And in the meantime, no one trusts the outcome.
The fairness problem that no one expects
The promise of AI screening has always been objectivity. No human bias, purely on qualifications.
But here's where it gets interesting. Researchers from VU Amsterdam and Stockholm University followed an AI implementation at a large company for three years. What they discovered was not what I expected: the problem was not that the AI was unfair. The problem was that everyone had a different definition of "fair".
HR wanted consistency. The same rules for everyone, no exceptions. Managers wanted context. That one intern who doesn't match on paper, but you can just see is a star. The algorithm always chooses the HR definition because consistency is easy to program. Context is not.
I think that is the most underestimated risk of AI in recruitment. Not that it makes mistakes. But that it enforces a very specific way of thinking and takes away the space to use your common sense.
And I'm not even talking about the bias in the training data. Amazon had to scrap its AI recruitment tool years ago because the system had learned to score CVs containing the word "women's" lower. That is often cited as an incident. It is not an incident. It's what happens when you train a system on how things have always been done.
Where it does work
I don't want to give the impression that AI in recruitment is worthless. It's not.
Writing and optimizing job descriptions. Interview scheduling. Cleaning data. Sorting the first bulk on hard criteria. There it really makes a difference.
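"Sorting the first bulk on hard criteria" doesn't even need machine learning. A minimal sketch of what I mean, with hypothetical requirements (a work permit and a forklift certificate) standing in for whatever your hard criteria are:

```python
# A transparent rule-based pre-filter, not an AI ranking.
# The requirement names are hypothetical examples.
REQUIRED = {"work_permit": True, "forklift_certificate": True}

def passes_hard_criteria(candidate: dict) -> bool:
    """True only when every hard requirement is explicitly met.
    A missing field fails the check, so incomplete data gets
    surfaced for a human instead of being silently scored."""
    return all(candidate.get(field) == value for field, value in REQUIRED.items())

pool = [
    {"name": "A", "work_permit": True, "forklift_certificate": True},
    {"name": "B", "work_permit": True},                               # certificate unknown: fails
    {"name": "C", "work_permit": False, "forklift_certificate": True},
]
shortlist = [c for c in pool if passes_hard_criteria(c)]  # only "A" remains
```

The point of keeping it this dumb is that every rejection is explainable in one sentence, which is exactly what the selection decision itself can't offer.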
The pattern is simple: AI is good at efficiency. Writing texts faster, planning faster, sorting faster. Where it doesn't work is in the selection decision itself. Harvard Business Review concluded earlier this year that there is simply no convincing evidence that AI selects better than existing methods. Not "AI doesn't work." But: "it doesn't work better than what we already had, and it brings new risks."
That's quite an uncomfortable conclusion when you consider how much money is being pumped into AI recruitment.
The EU AI Act as a reality check
As of August this year, the high-risk requirements of the EU AI Act become enforceable, and recruitment AI falls explicitly into the high-risk category. What that means in practice: if you use AI tools to screen or rank candidates, you must conduct risk assessments, keep documentation, ensure human oversight, and register your system.
Many recruiters and HR managers I speak to are not at all concerned with this yet. They use AI tools from their ATS supplier or a separate plugin, without knowing that they will soon be responsible for compliance themselves.
My advice: ask your supplier now how they are preparing. If they can't tell you exactly what they are doing about risk assessments and bias monitoring, then you have your answer.
Why we are building it anyway
We are building Recruit Pivot not because we think we have already solved the problem. We are building it because the current process is already broken without AI.
The average recruiter reviews a CV in six seconds. There's nothing wrong with that: after twenty years of experience you develop that instinct. But look at the rest of the process: the response times, the number of candidates who simply never hear anything, the quality of the data in the ATS. That's where the real gain is.
But we are honest enough to say that we are nowhere near there yet. We know how difficult it is to keep bias out of your model. We know that data quality is the biggest obstacle, not the algorithm. And we know that a good hire depends on team dynamics, culture, growth potential, timing, and dozens of other factors that you don't capture in a vector embedding.
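To make that last point concrete: a vector embedding reduces a match to a similarity score between two lists of numbers. A toy sketch (three dimensions instead of the hundreds real embeddings use, with made-up values):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" of a vacancy text and a candidate profile.
vacancy = [0.9, 0.1, 0.4]
candidate = [0.8, 0.2, 0.5]
score = cosine_similarity(vacancy, candidate)
# A high score says the two texts are similar. It says nothing about
# team dynamics, culture, growth potential, or timing.
```

The score is real information, but it measures text similarity, not hiring success, and those are not the same thing.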
What I'd leave you with
If you are considering AI in your recruitment process today:
Be critical of what suppliers promise. Ask for independent evidence, not their own case studies. If they claim their AI "matches with 95% accuracy", ask: accurate compared to what?
Use AI where it has proven to work. Efficiency, not selection. Let it write texts, schedule interviews, clean data. But let a human make the final decision.
And prepare for the EU AI Act. August 2026 is in four months.
Distrust anyone who claims they have already solved the problem. Including us. We are on the way, but honest about how far we still have to go.