As artificial intelligence (AI) continues to redefine innovation across industries, it also presents an expanding attack surface that cybercriminals are eager to exploit. Threat actors are harnessing AI’s capabilities for malicious gain, highlighting the urgent need for AI-driven cybersecurity defences and strategic vigilance across all sectors.
Jailbreaking and the Evolution of Malicious AI Tools
Cybercriminals are increasingly leveraging jailbreaking techniques to bypass safety protocols embedded in public large language models (LLMs) such as ChatGPT and Gemini. The KELA Group recorded a 52% increase in jailbreaking-related discussions on cybercrime forums throughout 2024. These manipulated models enable threat actors to operate AI tools without ethical restrictions, spawning a growing ecosystem of dark AI services. Notably, WormGPT and FraudGPT have emerged as popular tools designed specifically for phishing, fraud, and malware development, allowing even low-skilled actors to conduct high-impact attacks.
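To make concrete the kind of safety layer that jailbreak prompts are designed to slip past, the sketch below pre-screens user input with OpenAI’s moderation endpoint before it ever reaches a chat model. The wrapper function and example prompt are illustrative assumptions; this is not a description of any specific provider’s internal guardrails.

```python
# Hypothetical sketch: screening a user prompt before forwarding it to an LLM.
# Jailbreak techniques aim to phrase harmful requests so they pass gates like this.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the moderation model flags the prompt as harmful."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    return not result.results[0].flagged


if __name__ == "__main__":
    prompt = "Write a polite payment reminder email to a customer."
    if is_prompt_allowed(prompt):
        print("Prompt passed moderation; forwarding to the model.")
    else:
        print("Prompt blocked by the safety layer.")
```

Jailbreaking, in this framing, is the craft of wording a malicious request so that filters of this kind, and the model’s own alignment training, fail to flag it.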
Automation, Scale, and the Rise of AI-as-a-Service
The rapid commoditisation of AI tools in underground marketplaces is driving what KELA’s report refers to as "AI-as-a-Service" (AIaaS). Subscription-based access to powerful, jailbroken LLMs empowers cybercriminals to automate phishing campaigns, craft deepfake-based identity fraud schemes, and optimise exploit payloads. The barrier to entry for cybercrime is lower than ever. According to the same report, mentions of dark AI tools rose by 219% from 2023 to 2024, pointing to a robust and maturing underground market that continues to scale.
Nation-State Exploitation and Influence Operations
AI’s misuse is no longer limited to criminal enterprises; nation-state actors are actively integrating GenAI tools into their cyber arsenals. Google identified actors linked to Iran, China, Russia, and North Korea using LLMs for tasks including infrastructure reconnaissance, vulnerability analysis, and payload development. Furthermore, OpenAI exposed the use of ChatGPT in influence operations such as the Russian-led "Doppelganger" campaign, which disseminated disinformation across Europe and North America. These cases underscore the broader geopolitical stakes associated with unsecured AI applications.
Defending the Future: Combating AI with AI
The evolving threat landscape necessitates a paradigm shift in cybersecurity strategy. Organisations must now leverage AI not only for efficiency but also as a defensive mechanism. This includes deploying AI-powered systems for real-time threat detection, predictive risk analysis, and autonomous incident response. In parallel, proactive measures such as employee training, model evaluation, and continuous monitoring for AI misuse are critical.
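As a hedged illustration of what AI-assisted threat detection can look like in practice, the sketch below trains an unsupervised Isolation Forest on synthetic login-event features and scores a suspicious event. The feature set, the synthetic data, and the 1% contamination rate are all assumptions made for the example, not a production design.

```python
# Minimal sketch of AI-powered anomaly detection over security telemetry.
# Assumed features per event: (login_hour, failed_attempts, MB_transferred).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate "normal" behaviour: business-hours logins, few failures, modest transfers.
normal_events = np.column_stack([
    rng.normal(13, 3, 500),   # login hour, clustered around mid-day
    rng.poisson(1, 500),      # failed attempts before a successful login
    rng.normal(20, 5, 500),   # MB transferred per session
])

# Train the detector; contamination is the expected anomaly rate (assumed 1%).
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_events)

# Score a new event: a 3 a.m. login with many failures and a large transfer.
suspicious = np.array([[3, 12, 900]])
label = detector.predict(suspicious)  # -1 = anomaly, 1 = normal
print("anomalous" if label[0] == -1 else "normal")
```

In a real deployment such a detector would consume live telemetry, feed an alerting or automated-response pipeline rather than a print statement, and be retrained as user behaviour drifts.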
The Lynden Group emphasises the importance of combating AI threats with equally advanced AI cyber defences, shifting from reactive postures to proactive resilience.