Category: Analysis
Daniel Mercer

Expectations around AI tools rarely match reality, and HR is a clear example. Recruiters hope AI will quickly identify the very best candidates, while applicants expect job matching to become fully automated. In practice, AI solutions are increasingly breaking recruitment processes and raising serious ethical concerns.

Thousands of fake candidates

Machine-learning tools (what we now broadly call AI) existed long before the current boom. What changed is the mass adoption of AI assistants that understand natural language and can work with text, images, video, and voice. Alongside their usefulness, they have also become a powerful tool for abuse.

A sizable share of job seekers has always exaggerated skills and experience; AI assistants have simply made this easier. With a few basic prompts, candidates can generate hundreds of "perfect" résumés and cover letters tailored to specific roles, and even cheat during interviews by reading out chatbot-generated answers. As hiring and work move online, misleading recruiters has become trivial.

As a result, recruiters are drowning in applications, with an estimated 50–80% being spam, fake, or irrelevant profiles that do not meet company needs.

Biased by design

Global HR practice focuses on finding the very best talent and was unprepared for a flood of AI-generated "ideal" résumés. In Russia, recruitment is more about finding suitable candidates for specific roles, but the problem remains.

A model's behavior depends heavily on its training data, and that is where bias enters. Models trained on historical hiring data may favor candidates of a certain gender or nationality, exposing companies to discrimination risks. In Russia, a common issue is the systematic downgrading of candidates from outside major cities. Human stereotypes shaped past hiring decisions, those decisions became training data, and the model learns the bias and amplifies it.

These issues are not new. Machine-learning tools have long been used in HR, and their bias is well documented. Systems that analyze appearance, speech, or writing style often find correlations where none exist, due to flawed datasets or poor calibration.
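To make the mechanism concrete, here is a minimal, hypothetical sketch, not drawn from any real screening product: synthetic "historical hires" are generated with a built-in location penalty, a simple logistic-regression screener is trained on them, and it then scores two equally skilled applicants differently depending on where they live. All names, numbers, and data-generating assumptions are illustrative.

```python
# Hypothetical sketch of bias amplification in a résumé screener.
# Assumption: past recruiters downgraded candidates from outside
# major cities regardless of skill; the model inherits that penalty.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

skill = rng.normal(size=n)                # true ability
major_city = rng.integers(0, 2, size=n)   # 1 = candidate lives in a major city

# Historical "hired" labels: skill matters, but a location penalty is baked in.
logit = 1.5 * skill + 1.0 * major_city - 1.0
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

# Features the screener sees: a noisy skill proxy plus location.
X = np.column_stack([skill + rng.normal(scale=0.5, size=n), major_city])
model = LogisticRegression().fit(X, hired)

# Two equally skilled applicants, differing only in location.
applicants = np.array([[1.0, 1], [1.0, 0]])
scores = model.predict_proba(applicants)[:, 1]
print(f"major-city applicant: {scores[0]:.2f}, other applicant: {scores[1]:.2f}")
# The gap reflects the historical penalty, now reproduced automatically at scale.
```

The point of the sketch is that no one told the model to discriminate by location; it simply found the pattern recruiters left in the data and applies it to every new applicant.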

Algorithmic discrimination

Traditionally, HR professionals could fine-tune filters on job platforms. AI was supposed to remove this routine work. Instead, it has become a “black box” that filters candidates based on opaque correlations.

Candidates now optimize résumés specifically for AI screening. Those who know how to game the system gain an advantage. Meanwhile, strong candidates with diverse experience or valuable soft skills may never pass the initial filter.

Trusting AI “reasoning” is difficult. Models are prone to hallucinations and can convincingly produce false explanations.

Dehumanized hiring

One consequence of widespread AI use is the loss of human contact. Mass asynchronous interviews with AI recruiters have drawn criticism over their lack of transparency, questionable objectivity, and ethical implications. Many candidates drop out once they realize they are interacting with a bot. How many do so is unknown, because no one tracks it.

Invasive monitoring

AI is also used beyond hiring, monitoring productivity and detecting burnout. Poor implementation can turn these systems into invasive surveillance tools that track every action and analyze workplace communications. Such systems are often inaccurate and may hallucinate conclusions. If employees find AI monitoring intrusive and untrustworthy, productivity suffers—the opposite of the intended goal.

What comes next

The HR paradox is that attempts to optimize processes with AI often produce the opposite result. AI assistants are imperfect, and overworked HR teams may approve their decisions without verification. The less “human” remains in Human Resources, the higher the risks.

The solution lies with people, not technology. Companies should use AI narrowly—where it truly helps automate routine tasks—and avoid “universal” solutions that promise to replace human judgment. Priority should shift from monitoring employees with AI to monitoring AI itself, based on employee feedback. The technology is still too imperfect to take precedence over humans.

AI Research Contributor
Daniel Mercer is an AI research contributor specializing in large language models, benchmarking, and multimodal systems. He writes about model capabilities, limitations, and real-world performance across leading AI assistants and platforms.
