How scammers hijack chatbots to spread phishing

How scammers hijack chatbots to spread phishing
October 13, 2025 at 12:00 AM

Cybercriminals are now weaponizing AI chatbots to amplify phishing and malware, turning trusted platform assistants into unwitting promoters of scams. A recent campaign on X (formerly Twitter) shows how this happens—and why you should treat public-facing AI output as untrusted.

What “Grokking” is and why it matters
Attackers sidestep X’s restrictions on links in promoted posts by using clickbait video cards. They tuck a malicious URL into the small “from” field beneath the video, then ask X’s built-in AI assistant, Grok, where the video came from. Grok reads the post, finds the tiny link, and helpfully repeats it—accidentally boosting a phishing site.

Why this is dangerous

  • It effectively turns a trusted AI account into a vector for malvertising and phishing.
  • Paid video posts can rack up millions of impressions, spreading scams rapidly.
  • Links echoed by a high-trust bot can gain SEO and domain reputation benefits.
  • Researchers observed hundreds of accounts repeating the tactic until suspension.
  • Victims risk credential theft, account takeover, identity fraud, and malware infections.

This isn’t just an X issue. Any platform that embeds a generative AI assistant or LLM can be manipulated in similar ways, underscoring how creative threat actors are at bypassing safeguards—and how risky it is to trust AI answers at face value.

Prompt injection, explained
Prompt injection happens when attackers plant hidden or deceptive instructions that an AI model later processes as part of a normal task. It can be direct (typed in a chat) or indirect (embedded in content the model reads). In the X case, the malicious link sat in post metadata, and a simple question triggered the bot to surface it. According to Gartner, 32% of organizations reported prompt-injection incidents in the past year.

How this can play out in the real world

  • A webpage hides a malicious prompt; asking an embedded AI to summarize it triggers a harmful action.
  • An uploaded image contains hidden instructions; asking an AI to explain the image activates them.
  • A forum post uses white-on-white text or tiny fonts to hide prompts; an AI that recommends posts may surface a phishing link.
  • Customer support bots that scan public threads can be tricked into displaying bad links.
  • An email hides a prompt in white text; an AI email assistant summarizing recent messages could be coerced into unsafe actions.

Stay safer with these steps

  • Inspect links before clicking—hover to verify the true destination and avoid anything suspicious.
  • Be skeptical of AI-generated suggestions that seem off-topic or overly urgent.
  • Use strong, unique passwords in a password manager, and enable multi-factor authentication.
  • Keep operating systems and apps updated to reduce exploit risk.
  • Run reputable, multi-layered security software to block phishing and malware.

Bottom line
Embedded AI makes social engineering more scalable—and sneakier. Treat AI output as untrusted, verify links, and follow basic security hygiene to reduce your risk.

Source: WeLiveSecurity

Back…