
AI & Large Language Models: Great Power Requires Greater Responsibility

October 11, 2023 by Verafin

“Artificial intelligence is one of the most powerful technologies of our time, with broad applications. President Biden has been clear that in order to seize the opportunities artificial intelligence presents, we must first manage its risks.”
The White House, 2023 

Large language models (LLMs) have emerged as a new application for artificial intelligence (AI), propelled by breakthrough research in an age of unprecedented information. Public attention has focused on ChatGPT for its ability to answer questions in a human-like way, showcasing helpful uses for LLMs such as text summarization, narrative generation, and language translation. Together, AI and LLM technologies have the potential to advance the way financial institutions do business — from automating routine tasks to enhancing fraud prevention — but as with any innovation, bad actors will also take advantage. Financial institutions must recognize the threat posed by the abuse of AI and LLMs, and understand how responsible organizations are leveraging these same technologies to fight back.

Cracking LLMs: How Criminals Exploit Helpful Technology 

Criminals are making a concerted effort to abuse LLMs for fraud by bypassing content moderation — the safeguards that the developers of responsible LLMs such as ChatGPT have implemented to prevent their models from engaging in discourse surrounding violence, sex, hate, or self-harm. This includes using prompt engineering and jailbreaking to manipulate an LLM into generating text in a specific tone — ideal for spear phishing — or even developing LLMs that are designed for nefarious purposes.
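
To make "content moderation" concrete, the sketch below shows one way an application layer can screen a prompt against OpenAI's publicly documented moderation endpoint before passing it to a generative model. It is an illustrative assumption about how such a safeguard can be wired in, not a description of how ChatGPT or any specific vendor enforces its own protections; the endpoint URL and response fields follow OpenAI's public API documentation, and the surrounding code is hypothetical.

```python
# Illustrative sketch only: screen a user prompt with OpenAI's moderation
# endpoint before it ever reaches a generative model. Real safeguards inside
# products like ChatGPT are far more layered than this single check.
import os

import requests

MODERATION_URL = "https://api.openai.com/v1/moderations"


def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the moderation endpoint flags the prompt."""
    response = requests.post(
        MODERATION_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": prompt},
        timeout=10,
    )
    response.raise_for_status()
    result = response.json()["results"][0]
    return not result["flagged"]


if __name__ == "__main__":
    prompt = "Summarize this quarterly report in plain language."
    if is_prompt_allowed(prompt):
        print("Prompt passed moderation; sending to the model.")
    else:
        print("Prompt flagged by moderation; refusing to generate a response.")
```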

Dark LLMs represent the pinnacle of LLM misuse: criminals develop, train, and share their own models, purpose-built for nefarious activity. While dark LLMs are largely a future concern, examples do exist today, such as FraudGPT and WormGPT, which may be used to facilitate Business Email Compromise (BEC).

Abusing LLMs for Scams 

Once its safeguards are bypassed, an LLM allows scammers to research unfamiliar tactics on command, operate in languages they do not comprehend, and generate messaging that is harder to detect, with appropriate context and few grammatical errors. These characteristics make LLMs ideal for scams that rely on text-based dialogue or phishing, such as authorized push payment (APP) fraud and related typologies — including BEC and romance scams. WormGPT, a custom dark LLM circulating on criminal forums, was recently exposed and accessed by researchers, who prompted the tool to generate a convincing email for use in a BEC attack, written in the voice of a CEO and demanding urgent payment of an invoice.

LLMs may also be abused to code chatbots and generate genuine-feeling dating profiles for use in romance scams. As reported by Tinder, online daters prefer authenticity and immediacy in their romantic matches — making this exploit potentially lucrative for fraudsters.

With criminals able to produce persuasive messaging on demand, financial institutions should partner with a provider of financial crime management solutions that possesses deep domain expertise in AI and criminal typologies, and a proven record of using AI to prevent fraud quickly and decisively.

Fighting Back: Using AI for Good 

For over 20 years, Verafin has developed solutions that responsibly harness the power of AI to effectively fight financial crime. Through big data intelligence, we analyze over a billion transactions every week in our Cloud, enabling consortium analytics, cross-institutional analysis, machine learning and more — and use these innovations to help our financial institution partners shut down fraud at the earliest opportunity. This includes fraudulent wires, ACH transfers and other payment types often exploited in APP fraud, such as BEC and romance scams. 
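
As a purely illustrative sketch of the kind of machine learning technique that can surface anomalous payments, the example below trains an isolation forest on synthetic transaction features and scores a suspicious wire. The features, numbers, and choice of scikit-learn are assumptions made for the example; they do not represent Verafin's actual models, data, or detection logic.

```python
# Illustrative sketch: score transactions for anomalies with an isolation
# forest. Features and values are synthetic and hypothetical, not Verafin's
# production features or models.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per transaction: amount (USD), hour of day,
# and days since the payee was first seen by the institution.
normal_transactions = np.column_stack([
    rng.normal(500, 150, 5000),   # typical payment amounts
    rng.integers(8, 18, 5000),    # business-hours activity
    rng.integers(30, 720, 5000),  # long-established payees
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_transactions)

# A wire that resembles APP fraud: large amount, off-hours, brand-new payee.
suspect_wire = np.array([[48_000, 2, 0]])
score = model.decision_function(suspect_wire)[0]  # lower means more anomalous
flagged = model.predict(suspect_wire)[0] == -1

print(f"Anomaly score: {score:.3f}, flagged for review: {flagged}")
```

In practice, a consortium view across institutions adds signals that no single institution could compute alone — for example, whether the same payee has already been flagged elsewhere — which is why cross-institutional analysis features so prominently in this kind of detection.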

As AI and LLMs advance, so does their power, for good and for harm. With fraudsters exploiting these tools for their own gain, it is crucial to partner with a financial crime management solutions provider that has deep expertise in using AI for good. To learn more about how Verafin uses AI and big data intelligence to combat payment fraud, download our Product Brochure.

Verafin is the industry leader in enterprise Financial Crime Management solutions, providing a cloud-based, secure software platform for Fraud Detection and Management, BSA/AML Compliance and Management, High-Risk Customer Management and Information Sharing. Over 3800 banks and credit unions use Verafin to effectively fight financial crime and comply with regulations. Leveraging its unique big data intelligence, visual storytelling and collaborative investigation capabilities, Verafin significantly reduces false positive alerts, delivers context-rich insights and streamlines the daunting BSA/AML compliance processes that financial institutions face today.
