North Korean AI Scammers Target Western IT Jobs, Microsoft Warns
Fake IT workers deployed by North Korea are leveraging advanced artificial intelligence, including voice-changing tools and face-swapping applications, to deceive western companies into hiring them for remote positions, according to a recent report from Microsoft. The US tech giant says this signature Pyongyang money-raising scheme is being significantly enhanced by AI, which helps the fraudsters create false identities and alter stolen documents, lending credibility to applications for IT and software development roles.
How the Scam Operates
The scam typically involves state-backed fraudsters applying for remote IT work in western nations, using fabricated identities and assistance from "facilitators" based in the target company's country. Once hired, these individuals funnel their wages back to Kim Jong-un's regime and have even been known to threaten the release of sensitive corporate data after termination. Microsoft's threat intelligence unit said in a blogpost that Pyongyang is using AI to make the ploy more effective, with groups dubbed Jasper Sleet and Coral Sleet (the names cybersecurity analysts assign to clusters of attackers) leading the charge.
Microsoft listed several AI-related tactics used by these North Korean groups. For instance, scammers utilize voice-changing software during remote interviews to mask their accents, allowing them to pose as western candidates convincingly. They also deploy the AI application Face Swap to insert the faces of North Korean IT workers into stolen identity documents and generate polished headshots for resumes. "Jasper Sleet leverages AI across the attack lifecycle to get hired, stay hired, and misuse access at scale," Microsoft stated, emphasizing the broad scope of the operation.
AI's Role in Identity Fabrication and Job Applications
According to Microsoft, the fake workers have exploited AI platforms to generate culturally appropriate name lists and matching email address formats, constructing false identities for job applications. Example prompts might include "create a list of 100 Greek names" or "create a list of email address formats using the name Jane Doe." Additionally, they use AI to scour job postings on platforms like Upwork for software and IT-related roles, leveraging the skill requirements listed in ads to craft more effective and targeted applications. Upwork has responded by stating it takes "aggressive action to ... remove bad actors from our platform."
Once employed, the fraudulent workers continue to rely on AI to write emails, translate documents, and generate code, all in an effort to avoid being unmasked as impostors or dismissed for poor performance. That ongoing reliance on the technology illustrates the scam's sophistication and the detection challenge it poses for corporate security teams.
Recommendations for Companies
To combat this threat, companies have been urged to conduct job interviews for IT workers via video calls or in-person meetings. Microsoft advises that interviewers can identify deepfake videos or images by looking for specific "tells," such as pixellation around the edges of faces, eyes, ears, and glasses, as well as inconsistencies in how light interacts with AI-generated facial features. Last year, Microsoft reported disrupting 3,000 Microsoft Outlook or Hotmail accounts used by fake North Korean IT workers, highlighting the scale of the issue and the need for vigilance.
The findings highlight the growing intersection of cybersecurity and artificial intelligence in global espionage and fraud, with North Korea at the forefront of exploiting these technologies for financial gain. As remote work becomes more prevalent, businesses will need robust identity-verification processes to guard against such sophisticated scams.