Meet WormGPT: The Generative AI Tool for Cyber Attacks
Read Time: 2m
Welcome To DarkAI
Welcome to the inception of DarkAI. In this inaugural issue, we focus on the emerging AI threat of WormGPT.
If you're new, subscribe to receive all future editions of DarkAI! There is plenty in the pipeline.
GEN AI
A new player enters the game
In July, cybersecurity company SlashNext uncovered a concerning trend in the underground cybercrime world: the sale of AI tools for creating phishing emails that can bypass spam filters. One such tool, dubbed "WormGPT", stands out.
WormGPT is reportedly trained on malware-related data and operates without content moderation. Of course, we are all familiar with ChatGPT's warnings that it cannot say certain things or write certain pieces of code - but here, that protective layer does not exist.
In the same month, another tool, "FraudGPT", emerged on the dark web and Telegram, capable of creating phishing emails and malware-cracking tools.
Debate surrounds whether these tools truly pose a threat, or whether some are simply wrappers that redirect prompts to legitimate language models. Their apparent ability to generate malicious code suggests they are more than wrappers, but all kinds of things could be going on behind the scenes.
Open-source machine learning models may be producing part of the output, or attackers may have found a way to get ChatGPT to produce malicious code indirectly - but this is purely speculation. The fact that WormGPT exists proves the market and the innovation are there.
Ultimately, the success of these tools hinges on profitability for cybercriminals.
WormGPT Features
WormGPT claims to have the following features:
No limits on the type of code generated
Unlimited character support
Better chat memory retention
Code formatting capabilities
No filtered content
As with most modern SaaS software, prices for these tools range from monthly subscriptions to lifetime access. To be fair to them, most legitimate software doesn't offer lifetime access nowadays.
What next?
Security experts foresee a rise in social engineering-style attacks as these dark AI tools become more accessible. While they may not dominate the cyberthreat landscape yet, this does heighten the risk of successful phishing attacks, which only require one unsuspecting user to click a malicious link or attachment.
Think of those emails from an overseas Nigerian prince claiming he has a fortune waiting for you - they will suddenly read with far more eloquence. For some, that still won't be enough to fool them - but remember, these attackers focus on the vulnerable.
UPCOMING
In upcoming issues, we will explore topics such as:
Replacement of human capability.
Existential threats: are the claims justified?
Malicious uses of AI.
How attacker tooling is being improved by AI.
How AI will manipulate us in the advertising industry.
We are very much testing the waters in this issue, so, as we point out below, we would love to hear what's on your mind. For example, let us know:
What have you heard about AI that makes you uneasy?
Do you think humanity has its work cut out for it?
How far away is artificial general intelligence?
FEEDBACK
What did you think of this issue?
If you're reading in an email, hit reply.
If you are reading online, reach out to me on Twitter.
Want to let a co-worker know about this newsletter? Feel free to share.