By Adam Starr
You may have heard about a new chatbot technology, ChatGPT, that uses recent advances in artificial intelligence (AI) to create human-like conversations. Because ChatGPT is free and widely available, cybersecurity experts are examining the risks of ChatGPT-generated phishing emails.
Proofpoint blocks these threats
Proofpoint has long thwarted threat actors who use similar tools to construct phishing lures, and our platforms are already blocking ChatGPT-generated threats.
While chatbots may generate text for the body of a phishing email, that’s only one part of the threat. Headers, senders, attachments and URLs are among the many other threat indicators.
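To make that concrete, here is a minimal sketch, using only Python's standard library, of how non-body indicators such as sender headers, attachment names and embedded URLs can be pulled out of a raw message. It illustrates the kinds of signals involved; it is not a description of Proofpoint's detection pipeline.

```python
# Minimal sketch (not Proofpoint's detection logic): collecting non-body
# indicators from a raw email using only Python's standard library.
import re
from email import policy
from email.parser import BytesParser

URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def extract_indicators(raw_bytes: bytes) -> dict:
    """Gather a few signals a detection system can weigh besides the
    message body: sender headers, attachment names and URLs."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_bytes)

    indicators = {
        "from": msg.get("From", ""),
        "reply_to": msg.get("Reply-To", ""),
        "return_path": msg.get("Return-Path", ""),
        "attachments": [],
        "urls": [],
    }

    for part in msg.walk():
        filename = part.get_filename()
        if filename:
            indicators["attachments"].append(filename)
        if part.get_content_type() in ("text/plain", "text/html"):
            indicators["urls"].extend(URL_PATTERN.findall(part.get_content()))

    return indicators
```

A production platform correlates signals like these with sender reputation, URL analysis and many other checks; the point is simply that the body text a chatbot can generate is only one indicator among many.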
ChatGPT isn’t changing the game for more targeted spear-phishing attacks, either. Although it can produce extended prose in the style of famous authors, ChatGPT has no specific knowledge of how your colleagues write, so it is unlikely to make highly targeted lures any more convincing.
Ultimately, attackers may use ChatGPT to improve grammar or randomize attacks, but the nature of phishing threats is likely to stay the same. Robust detection systems, like those of Proofpoint, will continue to catch these emails.
How to avoid taking the bait
It’s important to be vigilant and look for potential signs that an email may be a phish. Also, keep in mind that ChatGPT can only create text, not entire emails with logos and well-formatted, professional layouts.
In today’s evolving threat landscape, people must remain cautious and take part in regular training to stay safe and protected online.
Proofpoint Security Awareness teaches people to examine many aspects of a message and its threat indicators so they can avoid falling for a phishing scam, including:
- Not trusting the sender immediately, even if the message appears to be from a trusted source or brand
- Scrutinizing the sender’s address and inspecting any links (see the sketch after this list)
- Looking for odd formatting or logos
- Not clicking on calls to action within the email, like “verify your account” or “log in now”
- Understanding that file-sharing links aren’t always safe
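As an illustration of the sender-address check above, the hypothetical sketch below flags sender domains that closely resemble, but do not exactly match, a list of trusted brand domains. The trusted list, similarity threshold and helper names are examples for illustration only, not part of any Proofpoint product.

```python
# Hypothetical illustration of "scrutinize the sender's address":
# flag sender domains that are near-misses of known, trusted domains.
from difflib import SequenceMatcher
from email.utils import parseaddr

# Example list only; a real deployment would use curated brand data.
TRUSTED_DOMAINS = {"paypal.com", "microsoft.com", "proofpoint.com"}

def sender_domain(from_header: str) -> str:
    """Return the domain portion of a From header value."""
    _, address = parseaddr(from_header)
    return address.rsplit("@", 1)[-1].lower() if "@" in address else ""

def looks_like_spoof(from_header: str, threshold: float = 0.8) -> bool:
    """True if the sender domain closely resembles a trusted domain
    without matching it exactly (a common lookalike-domain trick)."""
    domain = sender_domain(from_header)
    if not domain or domain in TRUSTED_DOMAINS:
        return False
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(looks_like_spoof('"PayPal Support" <billing@paypa1.com>'))  # True
print(looks_like_spoof("Alice <alice@paypal.com>"))               # False
```

A simple string-similarity check like this catches only one trick among many, which is why the habits in the list above, and layered detection behind the scenes, still matter.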