
Our Take on AI

1 minute read

The Dark Side of AI: LLMs Designed for Fraud

As debate rages over the positive and negative effects Artificial Intelligence may have on society, some specific applications are coming into focus. It has been reported that two large language models (LLMs), the technology underpinning generative AI, have been made available to would-be hackers and fraudsters. One such purported tool is FraudGPT. According to American Banker, FraudGPT "can write malicious code, create phishing pages, write scam messages and more. The group behind FraudGPT charges $200 a month or $1,700 per year for access to the service."

Another LLM purportedly usable for fraudulent purposes is called WormGPT. American Banker reported that "the creators marketed it on web forums as a language model derived from the open-source project GPT-J, trained specifically to write scam emails."

Although LLMs are designed with "filters" to prevent improper use, it has been reported that these filters can be overcome "by employing a tactic known as prompt injection, which involves adding cryptic strings of words, letters and other characters to a prompt."

This type of activity should concern well-intentioned developers of publicly available LLMs, whose models could be adapted for improper use. Moreover, given the ease with which AI could be used to generate fraudulent communications and scams, regulators, technology companies, financial institutions, and the public at large will need to come up to speed on the risks posed by such fraudulent use of AI.

Bad actors, unconfined by ethical boundaries, recently released two large language models designed to help fraudsters write phishing emails and hackers write malware.