Fake Engineer - Advanced Deepfake Fraud and How to Detect It


TL;DR

The candidate applied for an open backend position at our company, Vidoc Security Lab. He had a decent CV and LinkedIn profile but used a deepfake during the coding interview, pretending to be a different person. This incident could be linked to a North Korean hacker group that has used this trick with many other companies.

Deepfake Fraud Prevention Ebook

We’ve put together a practical guide with best practices to help companies identify fake IT workers. Download it for free on our website. Link below.

🔗 www.vidocsecurity.com/ebook

Deepfake Fraud Prevention - 17 Practical Strategies to Detect Fake IT Workers, available for free on our website

Story of How an AI-Generated Polish Politician Came to Our Coding Interview

It all started with a message on LinkedIn. The candidate - a Slavic name and surname, claiming to live in Serbia - had very good skills, was experienced in the tech stack we needed, and had previously worked at decent companies. He reached out, asking if we had open positions. This happens often when you're a startup founder. Polite, proper grammar. I told him we weren't hiring yet but that I would reach out when we were. He followed up a couple of times, and when we started looking for an engineer, we contacted him.

Dawid – co-founder of Vidoc – noticed right away that something felt off. The candidate's appearance on camera looked unusual, almost artificial, and his way of speaking sounded like he was reading ChatGPT-generated bullet points. His accent was odd, and the more he spoke, the stranger it got. Dawid started recording. When he asked the candidate to put his hand in front of his face, the candidate refused. That's when Dawid ended the call. You can watch the full video here.

Later, we analyzed the recording and realized that the face looked familiar. He looked VERY similar to Sławomir Mentzen - a Polish politician very active on social media. It was a deepfake. We double-checked his CV. The address he provided belonged to a public institution.

We Are Not the Only Ones

It had happened before. A previous candidate had passed the initial screening and two coding interviews. The final-stage conversation was supposed to be with me, and I… noticed something odd. His camera was blurry, which he blamed on a bad connection. There was a delay between his speech and the video. His accent didn't match his claimed country of origin.

I asked where he was from, and he confirmed what was on his CV. I asked how he enjoyed living there, where he went to school, and what time zone the team at his previous company worked in. Something wasn't right. On top of his poor communication skills, his camera froze, and he couldn't turn it back on. I asked him a few more questions related to the culture he claimed to have been raised in, then ended the call.

It wasn’t a real person - it was a deepfake.

The worst part? He had really good technical skills. He passed the coding interview and demonstrated solid reasoning and problem-solving abilities. The only things that felt off - aside from his accent and poor communication skills - were his lack of knowledge about previous employers and his strange-looking avatar.

After we shared the story publicly, we were overwhelmed by the response. Dozens of companies - startups, scale-ups, and even larger tech firms - reached out to us, sharing eerily similar experiences. Some had also encountered candidates with suspicious avatars, mismatched accents, or strangely polished LinkedIn profiles. A few admitted they only realized something was wrong after the person was already hired. It became clear that this wasn't an isolated incident - it's a widespread issue affecting companies of all sizes, and the tactics are becoming more sophisticated.

How Does This Happen?

These people are getting smarter - a polished CV, a tech stack that matches the requirements, and a LinkedIn profile convincing enough to make you think the person is competent, but not flashy enough to raise suspicion. There’s only so much you can catch during an initial call.

We’ve had candidates submit LinkedIn profiles with hundreds of connections to real people working at the companies they claimed to have worked for. We’ve seen GitHub profiles of individuals who contributed to numerous projects and, during interviews, demonstrated high technical skills - only to later turn out to be fake.

Take Bratislav, for example - the candidate who attempted to use a deepfake during a coding interview with Dawid. That’s where his plan failed. But his LinkedIn profile was decent; we even had a few connections in common. His CV was well put together.

We Always Have a Human Review the CVs

We know that some people exaggerate their skills. Some people fake their experience. Some people just lie. But we were not prepared for this.

Bratislav sent us a well-put-together CV and had an initial call with Paulina, our HR expert. It was a preliminary screening to ensure the candidate was still interested in the job and to confirm his previous experience. He seemed like a regular person.

The next stage was a technical interview with Dawid, co-founder of Vidoc. Dawid joined the call and immediately noticed something strange. The candidate’s LinkedIn profile picture bore no resemblance to the person sitting in front of the camera. In fact, the candidate looked strikingly similar to a well-known politician.

Dawid suspected that something was wrong but decided to play along for a while. He asked a few questions until he was certain that this wasn't the candidate's real face - it was some kind of AI-generated avatar. And oddly enough, he looked like Sławomir Mentzen before his nose job.

This time, Dawid recorded the encounter. In the end, he asked the candidate to wave in front of his face to expose the attempt to deceive us. Then he ended the call.

What Can We Do About It?

As we said, there's only so much you can catch during an initial call. So how do you spot them?

1. Background check

The most important thing is to do a VERY thorough check of the CV and LinkedIn profile. Verify the address and phone number, and look for inconsistencies in dates or other suspicious activity. As humans we sometimes miss things, so it's always good to use tools like Profile Verifier to help you summarize candidates' profiles: https://www.verifyprofile.ai/ Some of these checks are also easy to automate yourself, as the sketch below shows.
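To make this concrete, here is a minimal Python sketch of one such check - flagging overlapping employment dates. The roles and dates are hypothetical placeholders; a real pipeline would extract them from the CV or the LinkedIn profile.

```python
# Illustrative sketch: flag overlapping employment dates in a CV.
# The roles and dates below are hypothetical placeholders.
from datetime import date

employment_periods = [
    ("Backend Engineer, Acme Corp", date(2019, 3, 1), date(2022, 6, 30)),
    ("Senior Engineer, Globex",     date(2022, 1, 1), date(2024, 8, 31)),
]

def find_overlaps(periods):
    """Return pairs of roles whose date ranges overlap."""
    overlaps = []
    for i, (role_a, start_a, end_a) in enumerate(periods):
        for role_b, start_b, end_b in periods[i + 1:]:
            # Two ranges overlap if each starts before the other ends.
            if start_a <= end_b and start_b <= end_a:
                overlaps.append((role_a, role_b))
    return overlaps

for a, b in find_overlaps(employment_periods):
    print(f"Suspicious overlap: '{a}' and '{b}'")
```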

2. Ask hometown questions

If you suspect a candidate might be fake during an interview, try asking specific questions about their location or country of origin. For example, if they list a university you're familiar with, ask which café on campus was their favorite.

3. Spotting a deepfake

Watch closely for any inconsistencies in video synchronization, such as a delay between speech and lip movement, and ask the candidate to move a hand in front of their face - real-time face swaps often glitch under occlusion. Deepfake technology also often adds artificial noise or subtle distortions to the audio to disguise these alterations. The sketch below shows one simple way to scan a recording for such glitches.
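As a rough illustration - not a production detector - the Python sketch below scans a recorded interview with OpenCV's stock face detector and flags frames where the face suddenly disappears or jumps, which is how real-time face swaps tend to break under occlusion. The video filename and threshold are placeholder assumptions.

```python
# Illustrative sketch: scan a recording for frames where the face detector
# suddenly loses the face or the face region jumps - real-time face swaps
# often glitch like this when the face is occluded (e.g. by a hand).
# Requires: pip install opencv-python
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def suspicious_frames(video_path, jump_threshold=80):
    cap = cv2.VideoCapture(video_path)
    flagged, prev_center, frame_idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            # Face lost entirely - possible occlusion glitch.
            flagged.append((frame_idx, "face lost"))
            prev_center = None
        else:
            x, y, w, h = faces[0]
            center = (x + w // 2, y + h // 2)
            if prev_center is not None:
                dx = abs(center[0] - prev_center[0])
                dy = abs(center[1] - prev_center[1])
                if max(dx, dy) > jump_threshold:
                    # Face region jumped between consecutive frames.
                    flagged.append((frame_idx, "face region jumped"))
            prev_center = center
        frame_idx += 1
    cap.release()
    return flagged

print(suspicious_frames("interview_recording.mp4")[:10])
```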


Learn how to spot a deepfake: a presentation for your team. For the full version, reach out to us via email at contact@vidocsecurity.com.

I Hired a Person and I'm Afraid They Used a Deepfake - Now What?

No method is bulletproof. Consider this scenario: despite thorough security measures, a potentially untrustworthy developer is hired. In the least harmful case, they contribute nothing while continuing to receive a salary. In the worst case, they intentionally introduce vulnerabilities into the code, creating entry points for cybercriminals. How can organizations protect themselves against this risk?

1. Ongoing Security Training

Ensure employees are aware of cybersecurity risks, including social engineering tactics. Regular training helps build a security-conscious workforce that can recognize and respond to threats effectively.

2. User Activity Monitoring and Least-Privilege Access

Limit employee access strictly to the systems and data necessary for their roles. Regularly review and adjust permissions to prevent unauthorized access and potential misuse.

3. Automated Code Security Reviews

Utilize modern security tools that leverage large language models (LLMs) to analyze code for vulnerabilities before deployment. VIDOC, for example, automates security code reviews, identifying complex vulnerabilities that could be exploited by malicious actors. It also provides insights into how much of your code is AI-generated or copied, helping maintain control over software integrity.
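To illustrate the general idea - this is a minimal sketch, not VIDOC's implementation - the snippet below pipes a git diff to an LLM with a security-review prompt using the OpenAI Python SDK. The model name, prompt, and branch are illustrative assumptions.

```python
# Illustrative sketch of an LLM-assisted review of a git diff before merge.
# Not VIDOC's implementation - just a minimal example of the idea.
# Requires: pip install openai, and OPENAI_API_KEY in the environment.
import subprocess
from openai import OpenAI

client = OpenAI()

def review_diff(base_branch="main"):
    # Collect the changes a new hire is proposing to merge.
    diff = subprocess.run(
        ["git", "diff", base_branch, "--", "."],
        capture_output=True, text=True, check=True,
    ).stdout
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a security reviewer. Flag backdoors, "
                        "hardcoded credentials, insecure crypto, and any "
                        "code that opens unexpected network connections."},
            {"role": "user", "content": f"Review this diff:\n\n{diff}"},
        ],
    )
    return response.choices[0].message.content

print(review_diff())
```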

These are some basic steps you can implement to protect your organization against the threat of deepfake candidates. We've put together a practical guide with best practices to help companies identify fake IT workers.

💡
Deepfake Fraud Prevention Ebook
Gain expert tips and the latest tech insights to spot fake IT workers. Get your free copy today!

🔗 www.vidocsecurity.com/ebook
