It isn't just IT companies rushing to develop AI chatbots; cybercriminals are getting in on the action too. According to the latest news, a developer has created a malicious AI chatbot named “FraudGPT” that lets people carry out malicious activities.
Earlier this month, a hacker was discovered working on WormGPT, a ChatGPT-like bot that lets users produce viruses and phishing emails. Security experts have now discovered a new malicious chatbot, fittingly titled FraudGPT.
The creator of FraudGPT began advertising the malicious chat application over the weekend on a hacker forum. “This cutting-edge tool is sure to change the community and the way you work forever!” the developer asserts.
It’s crucial to remember that this is a malicious chatbot designed to help internet criminals carry out their activities, and it should therefore not be used for anything. Being aware of the risks posed by tools like FraudGPT and WormGPT helps us better appreciate the value of using technology ethically and responsibly.
Malicious AI chatbots could become a major problem in the near future (Image Credit)
What is FraudGPT?
Less than two weeks after WormGPT emerged as threat actors’ answer to the immensely successful ChatGPT generative AI chatbot, a similar tool named FraudGPT is making the rounds on the dark web. FraudGPT gives online criminals more efficient ways to launch phishing scams and develop dangerous software. In essence, FraudGPT is one of several WormGPT alternatives currently on the market.
According to research released today by Rakesh Krishnan, a senior threat analyst at cybersecurity firm Netenrich, FraudGPT has been circulating on Telegram channels since July 22. Per Krishnan’s post, the AI bot is intended exclusively for offensive purposes, such as creating spear-phishing emails, building tools, carding, and more. The tool is currently sold on Telegram and on a number of dark web marketplaces.
After WormGPT, here are the dangers waiting for you
The tool is available via subscription, with prices ranging from $200 per month to $1,700 per year. Most of Netenrich’s report is devoted to how criminals might use FraudGPT to launch business email compromise (BEC) campaigns against businesses, including giving an attacker the ability to craft emails that are more likely to persuade a target victim to click on a malicious link.
That’s not all, though. Krishnan says FraudGPT can also make it easier to build hacking tools, undetectable malware, and harmful code, and to find leaks and vulnerabilities in organizations’ technology systems. It can even teach aspiring criminals how to code and hack.
According to Netenrich, there have been more than 3,000 confirmed sales and reviews, and the people behind FraudGPT offer round-the-clock escrow services. Krishnan notes that FraudGPT is comparable to WormGPT, which debuted on July 13, and that these malicious ChatGPT alternatives continue to attract both hardened crooks and less tech-savvy ne’er-do-wells who prey on people for financial gain.
FraudGPT creator charges a monthly fee
The creator of FraudGPT also appears to sell stolen credit card numbers and other hacker-obtained data, along with instructions on how to conduct fraud. It’s therefore possible that all of this data feeds into the chatbot service.
The bot doesn’t come cheap: its developer charges $200 per month, well above WormGPT’s monthly fee of 60 euros.
It’s not known whether either chatbot can be used to hack computers, but Netenrich cautions that the technology could make it easier for attackers to craft more convincing phishing emails and other scams. “Criminals will continue to find ways to improve their criminal capabilities using the tools we develop,” the company adds.
How to defend yourself against FraudGPT
AI advances are beneficial, but they also open up fresh attack vectors, which makes prevention essential. Here are some strategies you can employ:
- BEC-Specific Training: To counter BEC attacks, especially those amplified by AI, organizations should develop comprehensive, frequently updated training programs. Employees should learn the nature of BEC threats, how AI can be used to amplify them, and the tactics attackers rely on. This training should be part of employees’ ongoing professional development.
- Enhanced Email Verification Measures: To protect themselves from AI-driven BEC attacks, organizations should enforce stringent email verification policies. These include configuring email systems to flag messages containing keywords commonly linked to BEC attacks, such as “urgent,” “sensitive,” or “wire transfer,” and deploying systems that automatically detect when emails from outside the organization impersonate internal executives or vendors (a minimal sketch of such a filter follows this list). These measures ensure that potentially harmful emails are scrutinized before anyone acts on them.
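To make the second measure concrete, here is a minimal, hypothetical Python sketch of such a screen. Every name in it (the keyword list, the `INTERNAL_DOMAIN` value, the executive directory, the `flag_bec_indicators` helper) is an assumption for illustration, not the API of any real email security product:

```python
from email.utils import parseaddr

# Illustrative (not exhaustive) keywords commonly associated with BEC lures
BEC_KEYWORDS = {"urgent", "sensitive", "wire transfer", "gift cards"}

# Hypothetical values: in practice these come from your mail config and directory
INTERNAL_DOMAIN = "example.com"
EXECUTIVE_NAMES = {"jane doe", "john smith"}

def flag_bec_indicators(sender: str, subject: str, body: str) -> list[str]:
    """Return reasons an inbound message deserves manual review before action."""
    reasons = []
    text = f"{subject}\n{body}".lower()

    # 1) Keyword screen: crude, but catches many templated urgency/payment lures
    hits = [kw for kw in BEC_KEYWORDS if kw in text]
    if hits:
        reasons.append(f"BEC keywords present: {hits}")

    # 2) Display-name spoofing: external address using an executive's name
    display_name, address = parseaddr(sender)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    if domain != INTERNAL_DOMAIN and display_name.strip().lower() in EXECUTIVE_NAMES:
        reasons.append(f"External sender '{address}' impersonates '{display_name}'")

    return reasons

# Example: an external message impersonating an executive and pushing urgency
if __name__ == "__main__":
    for reason in flag_bec_indicators(
        sender='"Jane Doe" <jane.doe@not-example.net>',
        subject="Urgent wire transfer needed today",
        body="Please process this sensitive payment before 3pm.",
    ):
        print("FLAG:", reason)
```

A rule-based screen like this would complement, not replace, standard protections such as SPF, DKIM, and DMARC checks and a clear channel for employees to report suspicious messages.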