ChatGPT Goes Bad: WormGPT is an AI tool "without ethical boundaries"

As ChatGPT continues to grow in popularity, a darker alternative has emerged: an AI tool designed specifically for criminal purposes.

WormGPT is a malicious AI tool capable of generating highly realistic and compelling text that can be used to craft phishing emails, fake social media posts, and other malicious content.

Based on GPT-J, an open-source language model released in 2021, WormGPT can not only generate text but also format code, making it more likely that cybercriminals will use the technology to build their own malicious software. This lowers the barrier to creating viruses, Trojan horses, and large-scale phishing campaigns.
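To illustrate how low that barrier is: GPT-J's weights are openly published, so anyone can load and prompt the base model in a few lines. The sketch below assumes the Hugging Face transformers library; WormGPT's own fine-tuning and tooling are not public, so this shows only the freely available base model.

```python
# Minimal sketch: loading the open-source GPT-J-6B model and generating text.
# This is the publicly available base model only; WormGPT's fine-tuning
# pipeline is not public and is not reproduced here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6B"  # open-source 6B-parameter model from 2021
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)  # large download (~24 GB in fp32)

prompt = "The future of artificial intelligence is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

That the base model is this accessible is precisely what makes open weights attractive to actors who want to fine-tune them without the safeguards built into hosted services.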

The most frightening aspect of this technology, however, is that WormGPT can retain chat memory, carrying context across a conversation.

Access to WormGPT is currently being sold on the dark web for just $67 per month or $617 per year, pitched at scammers and malware creators.

The cybersecurity firm SlashNext gained access to the tool through an online forum related to cybercrime. During testing, they described WormGPT as a "sophisticated AI model," but one with "no ethical boundaries or limitations."

In a blog post, they explained: "This tool is a black hat alternative to the GPT model and is specifically designed for malicious activity."

"WormGPT was trained on diverse data sources, specifically concentrating on malware-related data

and explains that "WormGPT is said to have been trained to concentrate on a variety of data sources, especially malware-related data.

As more people become familiar with phishing emails, WormGPT could be used to mount more sophisticated attacks. Business Email Compromise (BEC), in particular, is a type of phishing attack in which criminals attempt to trick senior account holders into transferring funds or disclosing sensitive information that can be used in further attacks.

Other AI tools, including ChatGPT and Bard, have security measures in place to protect user data and prevent criminals from exploiting the technology.

In one test, SlashNext used WormGPT to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice.

WormGPT produced an email that was not only cunning but also highly convincing, demonstrating its potential for sophisticated, large-scale attacks.

WormGPT can also power phishing attacks by generating persuasive text that encourages users to divulge sensitive information such as login credentials and financial data. This could drive up cases of identity theft, leading to financial loss and even threats to personal safety.

As technology, especially AI, evolves, cybercriminals can use that technology to their own advantage and cause serious harm.

The developer of WormGPT, who posted screenshots on the hacking forum through which SlashNext obtained the tool, described it as "the biggest enemy of the famous ChatGPT" and claimed it could "do all sorts of illegal things."

A recent Europol report warned that large language models (LLMs) like ChatGPT, which can process, manipulate, and generate text, could be used by criminals to commit fraud and spread disinformation.

They write: "As technology advances and new models become available, it will become increasingly important for law enforcement to remain at the forefront of these developments to anticipate and prevent abuse.

"ChatGPT's ability to produce highly authentic text based on user prompts makes it a useful tool for phishing purposes.

They warned that LLMs mean hackers can execute attacks faster, more authentically, and at a significantly larger scale.
