A hacker has made his own maliciously inclined version of ChatGPT: Introducing WormGPT, a chatbot created to help online criminals.
Email security service SlashNext, which tested the chatbot, reports that WormGPT's creator is selling access to the software on a well-known hacker forum. The company stated in a blog post that “we see that malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes.”
The hacker appears to have first advertised the chatbot in March before releasing it last month. Unlike ChatGPT or Google’s Bard, WormGPT has no safeguards to stop it from responding to malicious queries.
What is WormGPT?
WormGPT is the malicious counterpart of ChatGPT, and it was released this month. It willingly answers queries involving malicious content, whereas other well-known generative AI tools like ChatGPT or Bing refuse to.
It is important to remember that WormGPT is a malicious chatbot created to help online criminals commit crimes; using it for any purpose is therefore inadvisable. Understanding the dangers linked to WormGPT and its possible effects helps us better grasp the value of using technology ethically and responsibly. Let’s examine WormGPT’s characteristics, how it differs from other GPT models, and the associated dangers.
The rise of artificial intelligence (AI) technologies like OpenAI’s ChatGPT has given business email compromise (BEC) attacks a new vector. ChatGPT is a powerful AI model that produces human-like text based on the input it receives. By automating the creation of fake emails that are highly convincing and personalized to the recipient, cybercriminals can increase an attack’s success rate.
Best WormGPT alternatives to try right now
“This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future. Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home,” the developer said.
SlashNext has reported that the developer of WormGPT is selling the program on an online forum.
The creator of WormGPT has also posted screenshots showing how the bot can be instructed to write malware in Python and asked for advice on crafting dangerous attacks. The developer claims to have used GPT-J, an open-source large language model released in 2021, and to have trained it on data related to malware creation to produce WormGPT.
How to use WormGPT
To use WormGPT, you need to create an account in the forum and check out the comprehensive guide. If you still wish to access and utilize WormGPT after reading the information above, you can do so by going to the WormGPT page.
However, malicious activities have consequences; to build a stable community and move forward, we need to be trustworthy, honest, and stay clear of anything illegal.
How to access WormGPT
You can use the official forum link to access WormGPT, but using it for phishing or any other malicious activity is completely illegal. For general, lawful queries it responds no differently than regular ChatGPT, so using the original is a far better choice.
WormGPT could, in theory, be employed for benign goals. It is vital to keep in mind, though, that WormGPT was created and distributed with malicious intent, so any use of it raises ethical questions and legal risks.
The dangers of generative AI
Generative AI is a potent technology that can be employed for many tasks, such as producing phishing emails, realistic fake news stories, and even malware. Because of this, blackhat hackers may find it a useful tool for executing harmful attacks.
Generative AI can also produce malware, i.e., software designed to damage a computer system. For instance, a blackhat hacker might use generative AI to build malware that infiltrates a victim’s computer and harvests personal data.
Generative AI poses serious risks in the hands of blackhat hackers. However, there are ways to defend against these attacks. One is to use caution when opening emails and clicking links; another is to make sure your machine is running the most recent security software.
How to defend yourself against malicious generative AI tools
In conclusion, the development of AI is advantageous but also creates new, evolving attack vectors. Strong preventive measures must be put in place. The following are some tactics you can use:
- BEC-Specific Training: Organizations should create comprehensive, regularly updated training programs to thwart BEC attacks, particularly those aided by AI. This training should teach staff the nature of BEC threats, how AI can amplify them, and the tactics attackers use. It should also be part of employees’ ongoing professional development.
- Enhanced Email Verification Measures: Organizations should enforce rigorous email verification procedures to defend against AI-driven BEC attacks. These include deploying systems that automatically detect when emails from outside the company mimic internal leaders or vendors, and email systems that flag messages containing phrases commonly tied to BEC attacks such as “urgent,” “sensitive,” or “wire transfer.” These precautions ensure potentially harmful emails are examined thoroughly before any action is taken.
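The verification measures above can be sketched as a simple filter. This is a minimal illustration only, not a production system: the keyword list, executive names, and domain below are hypothetical placeholders, and a real deployment would combine such heuristics with email authentication standards like SPF, DKIM, and DMARC.

```python
# Minimal sketch of the two checks described above: impersonation of
# internal leaders by external senders, and BEC-associated phrases.
# All names, addresses, and lists here are hypothetical examples.
BEC_KEYWORDS = {"urgent", "sensitive", "wire transfer"}
INTERNAL_EXECUTIVES = {"Jane Smith", "Ravi Patel"}
INTERNAL_DOMAIN = "example.com"

def flag_bec_indicators(sender_name, sender_address, body):
    """Return a list of reasons an email looks like a possible BEC attempt."""
    reasons = []
    domain = sender_address.rsplit("@", 1)[-1].lower()
    is_external = domain != INTERNAL_DOMAIN

    # Check 1: external address whose display name mimics an internal leader.
    if is_external and sender_name in INTERNAL_EXECUTIVES:
        reasons.append(f"external address impersonating '{sender_name}'")

    # Check 2: body contains phrases commonly seen in BEC lures.
    lowered = body.lower()
    for phrase in BEC_KEYWORDS:
        if phrase in lowered:
            reasons.append(f"suspicious phrase: '{phrase}'")
    return reasons

flags = flag_bec_indicators(
    "Jane Smith", "jane.smith@freemail.test",
    "This is urgent: please process the wire transfer today.",
)
```

An email triggering any of these flags would then be held for human review rather than acted on, matching the principle that suspicious messages are examined before any action is taken.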
Featured image credit: Arget on Unsplash