The global frenzy surrounding Artificial Intelligence (AI) technologies has reached fever pitch in recent months. But as AI becomes more deeply embedded in everyday digital life, it’s giving rise to a new breed of challenges — most notably, its adoption by cybercriminals.
A new AI security report by cybersecurity firm Check Point Software Technologies warns that hackers are increasingly exploiting AI tools to boost the efficiency, scale, and impact of their operations.
The report, the first of its kind from the firm, underscores the urgent need for robust AI safeguards as the technology evolves.
“AI threats are no longer theoretical – they’re here and evolving rapidly,” the report states.
“As access to AI tools becomes more widespread, threat actors exploit this shift in two key ways: by leveraging AI to enhance their capabilities, and by targeting organisations and individuals adopting AI technologies.”
According to Check Point, cybercriminals are closely monitoring trends in AI adoption. Each time a new large language model (LLM) is released to the public, underground actors move quickly to explore its potential for abuse.
ChatGPT and the OpenAI API are currently the most popular tools among malicious actors, but others such as Google Gemini, Microsoft Copilot, and Anthropic's Claude are gaining traction.
Most targeted models
Open-source models such as DeepSeek and Alibaba’s Qwen — with their minimal usage restrictions and free-tier availability — are becoming particularly attractive to cybercriminals.
The report reveals that hackers are going beyond mainstream platforms by developing and trading specialised malicious LLMs tailored for cybercrime. These so-called “dark models” are engineered to bypass ethical safeguards and are openly marketed as hacking tools.
One notorious example is WormGPT, a model created by jailbreaking ChatGPT. Branded as the “ultimate hacking AI,” WormGPT can generate phishing emails, write malware, and craft social engineering scripts without any ethical filters. It’s even backed by a Telegram channel offering subscriptions and tutorials — a clear sign of the commercialisation of dark AI.
Other dark models include GhostGPT, FraudGPT, and HackerGPT, each designed for specific aspects of cybercrime. Some are simply jailbreak wrappers around mainstream tools, while others are modified versions of open-source models.
But it’s not just about the models themselves. The demand for AI tools has led to the rise of fake AI platforms that pose as legitimate services but are in fact vehicles for malware, data theft, and financial fraud. One such example is HackerGPT Lite, suspected to be a phishing site. Similarly, some websites offering DeepSeek downloads are reportedly distributing malware.
In a real-world case, a malicious Chrome extension posing as ChatGPT was discovered stealing user credentials. Once installed, it hijacked Facebook session cookies, giving attackers full access to user accounts — a tactic that could easily be scaled across multiple platforms.
“The primary contribution of these AI-driven tools is their ability to scale criminal operations,” the Check Point report adds. “AI-generated text enables cybercriminals to overcome language and cultural barriers, significantly enhancing their ability to execute sophisticated real-time and offline communication attacks.”
Threats in Kenya
Closer to home, Kenyan authorities are also raising the alarm. In October 2024, the Communications Authority of Kenya (CA) warned of a rise in AI-enabled cyberattacks, even as overall cyber threats dipped 41.9 percent in the quarter ending September 2024.
“Cybercriminals are increasingly using AI-enabled attacks to enhance the efficiency and magnitude of their operations,” said CA Director-General David Mugonyi at the time.
“They leverage AI and machine learning to automate the creation of phishing emails and other types of social engineering.”
He also noted that attackers are increasingly exploiting system misconfigurations — such as open ports and weak access controls — to gain unauthorised access, steal sensitive data, and deploy malware.
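For administrators who want to check their own systems for the kind of open-port misconfiguration described above, the following minimal Python sketch shows one way to do it. It is purely illustrative and defensive: the host address and the list of commonly probed ports are assumptions for the example, not drawn from the CA or Check Point material, and it should only be run against machines you own or are authorised to test.

```python
import socket

# Illustrative list of services that are often left exposed by mistake.
COMMON_PORTS = {21: "FTP", 22: "SSH", 23: "Telnet", 3306: "MySQL", 3389: "RDP", 6379: "Redis"}

def find_open_ports(host: str, timeout: float = 1.0) -> list[int]:
    """Return the ports from COMMON_PORTS that accept a TCP connection on the given host."""
    open_ports = []
    for port in COMMON_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    host = "127.0.0.1"  # hypothetical target: only scan hosts you administer
    for port in find_open_ports(host):
        print(f"Port {port} ({COMMON_PORTS[port]}) is open - confirm this is intentional")
```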
As the race to embrace AI accelerates, so too does the arms race to safeguard it. For organisations and users alike, vigilance is no longer optional — it’s imperative.