Fake AI Tools Spread Malware to 62K+ via Facebook


Cybercriminals are exploiting the global AI hype by deploying fake AI tools that trick unsuspecting users into downloading dangerous malware. A new campaign, uncovered by cybersecurity firm Morphisec, shows how attackers are using polished Facebook pages and viral posts to spread the Noodlophile Stealer, an info-stealing malware targeting users seeking AI-powered video and image editing apps.

Instead of using traditional phishing tactics or cracked software sites, threat actors have created fake but highly convincing AI-themed platforms. These are then promoted through legitimate-looking Facebook groups and viral social media ads. Some individual posts have drawn over 62,000 views, highlighting just how wide the campaign’s reach has become. The attackers appear to be targeting creatives and small businesses hunting for free AI tools to generate logos, videos, websites, or social media content.

Among the bogus Facebook pages identified are names like Luma Dreammachine AI, Luma Dreammachine, and gratistuslibros. These pages push links to fraudulent sites, including one that impersonates CapCut AI. Visitors are encouraged to upload content or prompts, and are then offered a downloadable “AI-generated” file. However, what they actually receive is a malicious ZIP file named VideoDreamAI.zip.

Inside the archive is a disguised file—Video Dream MachineAI.mp4.exe—which launches the infection process. To deceive users, this executable initially runs a real CapCut application (ByteDance’s video editor), while quietly executing a hidden loader called CapCutLoader. This .NET-based loader pulls a second payload from a remote server: a Python binary named srchost.exe.
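The double-extension disguise described above (a media-looking extension hiding a real executable one) can be flagged with a simple filename check. This is a minimal sketch, not from the Morphisec report; the extension lists are illustrative assumptions:

```python
# Flag filenames that hide an executable extension behind a
# media-looking one, e.g. "Video Dream MachineAI.mp4.exe".
# Both extension sets below are illustrative, not exhaustive.
DECOY_EXTS = {".mp4", ".avi", ".jpg", ".png", ".pdf", ".docx"}
EXEC_EXTS = {".exe", ".scr", ".bat", ".cmd", ".com", ".pif"}

def looks_disguised(filename: str) -> bool:
    """Return True if the name ends in decoy-extension + executable-extension."""
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        return False  # fewer than two extensions, nothing hidden
    decoy, real = "." + parts[-2], "." + parts[-1]
    return decoy in DECOY_EXTS and real in EXEC_EXTS

print(looks_disguised("Video Dream MachineAI.mp4.exe"))  # True
print(looks_disguised("holiday.mp4"))                    # False
```

A check like this catches only the naming trick, of course; on Windows, hiding known file extensions in Explorer makes the same lure look like an ordinary video file, which is why the technique works.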

That’s when the real damage begins. The Python payload unleashes Noodlophile Stealer, a malware designed to steal browser credentials, cryptocurrency wallet data, and other sensitive information. In some cases, it’s also bundled with the XWorm remote access trojan, giving attackers persistent access to the infected machines.

Security researchers believe the malware’s developer is based in Vietnam. A GitHub profile linked to the creator even describes them as a “passionate malware developer from Vietnam.” The account was created in March 2025—adding more evidence to the ongoing cybercrime surge in the region. Vietnam has become a known hotspot for distributing Facebook-targeting stealers and other malicious payloads.

This campaign is not the first to exploit AI-related interest. In 2023, Meta removed over 1,000 malicious URLs from its platforms that falsely used ChatGPT as a hook to deliver over 10 malware variants. These social engineering schemes are thriving because they ride on trending technologies, luring users into lowering their guard.

Adding to the growing list of threats, CYFIRMA has disclosed yet another .NET-based malware strain dubbed PupkinStealer. Although it lacks advanced evasion tactics or persistence mechanisms, it is effective: PupkinStealer quietly steals data and exfiltrates it to a Telegram bot controlled by the attacker. Its very simplicity makes it harder to detect, since it blends in with normal system behavior.

The rise of these fake AI tool campaigns signals a dangerous trend. Cybercriminals are rapidly weaponizing the public’s curiosity around AI to spread malware at scale. As users flock to new AI services for content creation and automation, it’s critical to verify platforms, avoid unsolicited downloads, and remain alert to too-good-to-be-true AI offerings circulating on social media.
