A Job Interview with a Robot: Reality, Not Science Fiction

  • 2026-03-11
  • Viesturs Bulāns, CEO and partner of “Helmes Latvia”

When discussing how artificial intelligence (AI) is transforming recruitment, we often think about algorithms that analyse submitted CVs or tools that help job seekers write cover letters and format their resumes. However, AI solutions can also be used to create fake personas that participate in job interviews. In remote interviews, these fake candidates use face-manipulation technologies, known as deepfakes, which allow them to impersonate another person. In most cases, the goal is to gain access to a company’s internal information that can later be used for malicious purposes.

By 2028, every fourth job candidate worldwide could be fake

Online job interviews save time and remain common in sectors such as IT, where work is often carried out both remotely and on-site and companies compete for talent internationally. Until recently, this issue was discussed mainly by business leaders and HR specialists in Western Europe and the United States. According to a study published last year by the career platform Resume Genius, 17% of recruitment managers in the U.S. have encountered candidates who used deepfake technology to alter their face and voice.

However, the trend is now spreading globally. Consulting and research company Gartner predicts that by 2028, as many as one in four job candidates worldwide could be fake. This means the problem will not pass Latvia by either - companies here are already beginning to encounter such challenges. The development of AI means that creating a convincing fake video interview has become relatively easy: all it takes is a photograph or a short video clip and a few seconds of a person’s voice recording. Fake candidates can originate from anywhere in the world, and the risks go beyond financial losses. Such candidates make the recruitment process more complicated, longer and more expensive, while also creating risks for honest candidates.

Visual and behavioural signals that can reveal fake candidates

So how can such fake candidates be identified, and how can recruiters distinguish between an honest but nervous interviewee - perhaps with a poor internet connection or low-quality camera - and someone created using deepfake technology? Unfortunately, there is rarely a single “magic moment” during an interview when it becomes obvious that the person is not real. Instead, it is usually a combination of several signals worth paying attention to.

The first signs are often visual. For example, a candidate’s face may move slightly unnaturally: eye movements may appear erratic, lip movements may not fully synchronise with speech, or facial contours may appear slightly blurred or distorted. Sometimes the body moves in one rhythm while the face moves in another - a person may appear to shift in their chair while their clothing remains unnaturally still.

There can also be behavioural indicators. For example, a candidate may repeatedly refuse to turn on their camera or provide excuses for why they cannot do so. Their answers may sound very polished, but there may be unusually long pauses beforehand, as if someone is listening to or reading prompts. In some cases, the sound of keyboard typing can be heard just before an answer is given. Another risk is that one person appears during the interview stage, but a completely different individual starts the job later.

The EU AI Act does not prohibit deepfakes

If a potential employer concludes during an interview that they are speaking with a deepfake, they are not obliged to continue the recruitment process. However, it is important that the decision is based not on subjective assumptions (such as appearance, accent or technical issues), but on specific, documented indicators. The European Union’s AI Act does not prohibit deepfake technology as such, but it does require transparency and clear disclosure: if a deepfake video, voice or image is created, it must be clearly and unambiguously indicated that it has been artificially generated. A candidate who fails to disclose this during a job interview is therefore in violation of the Act.

Possible verification mechanisms

It is important to remember that effects similar to deepfakes can also be caused by poor internet connectivity, low camera quality, fatigue or technical issues. To ensure that honest candidates do not suffer due to such factors - and that companies themselves are not exposed to unnecessary risks - it is advisable to introduce several verification mechanisms simultaneously. These may include identity verification (in compliance with the General Data Protection Regulation, meaning only the data necessary for a specific purpose may be processed and candidates must be informed about the purpose of data processing), recording interviews (with prior notice), assessing a candidate’s digital footprint, or comparing candidates across multiple interview rounds. Deepfakes cannot always be detected during a single conversation. In many cases, it is a matter of attention, experience and additional security measures.

Automation cannot fully replace professional judgement

The deepfake phenomenon clearly demonstrates that artificial intelligence is not a panacea that automatically solves all problems. Every technological advancement simultaneously introduces new risks. AI can indeed help structure CV flows, identify the most suitable candidates, analyse competencies and accelerate the recruitment process. However, it also provides tools for those who wish to bypass the system. Automation cannot fully replace professional judgement, critical thinking and appropriate security measures. At Helmes Latvia, we always encourage clients and partners to carefully evaluate the necessity and proportionality of implementing AI solutions. The introduction of technology must go hand in hand with clear security mechanisms, well-designed internal processes and proper employee training.

AI can significantly facilitate recruitment, but we must not trust it blindly. The key lies in balance - ensuring human oversight and the ability to intervene in automated decision-making (the so-called human-in-the-loop principle). We can take advantage of technological progress while remaining aware of the risks and potential scenarios of malicious misuse.