Cybercriminals are using artificial intelligence to create realistic and sophisticated threats, new research from Norton, a consumer cyber safety brand of Gen, has revealed.

ChatGPT has captured the internet’s attention, with millions using the technology. Cybercriminals, however, are exploiting its ability to generate human-like text that adapts to different languages and audiences in order to create malicious threats.

Cybercriminals can now quickly and easily craft email or social media phishing lures that are more convincing, making it harder to distinguish legitimate communication from a threat. Beyond writing lures, ChatGPT can also generate code, allowing cybercriminals to manipulate the technology and run scams at greater scale and speed.

“While the introduction of large language models like ChatGPT is exciting, it’s also important to note how cybercriminals can benefit from it and use it to conduct various nefarious activities. We’re already seeing ChatGPT being used effectively by bad actors to create malware and other threats quickly and very easily,” Gen managing director for Asia Pacific, Mark Gorrie said.

“It’s becoming more difficult for people to spot scams on their own, which is why comprehensive Cyber Safety solutions that look at all aspects of our digital lives are needed, from our mobile devices and online identity to the wellbeing of those around us. Being cyber vigilant is integral to our digital lives.”

In addition to using ChatGPT for more efficient phishing, Norton experts warn that bad actors can use it to create deepfake chatbots. These chatbots can impersonate humans or legitimate sources, such as a bank or government entity, to manipulate victims into handing over personal information, which can then be used to access sensitive accounts, steal money or commit fraud.