Phishing scams used to be easier to detect. A few obvious misspellings, a suspicious link, and a shady sender address were clear warning signs that you should delete a message. Yet, as the scams mentioned this month have demonstrated, phishing is only getting more sophisticated, especially with the use of AI.
Last month, Reuters published an investigation, conducted with the help of a Harvard researcher, into how criminals are using AI chatbots to create convincing phishing campaigns. In seconds, a clever prompt can generate perfectly phrased emails, videos, and even voice clones to compel unsuspecting victims to divulge personal data or send money.
While many AI chatbot companies build in safeguards to prevent people from using their apps with malicious intent, the article showed that these guardrails are flimsy. For example, Meta’s AI service initially refused a request to help Reuters reporters craft an email designed to convince seniors to give up their life savings. However, once the question was reframed, the app was quick to oblige.

In fact, the investigation tested six major chatbots – ChatGPT, Grok, Meta AI, Claude, Gemini, and DeepSeek – and while all of them initially refused to help reporters create fake emails purporting to come from the IRS, four ultimately complied when told the request was in the name of “research.”
Last year, the FBI released recommendations on red flags to watch for in response to the rise of AI-driven phishing scams.
October is National Cybersecurity Awareness Month, an annual collaborative effort between government and industry to ensure you have the resources you need to stay secure online. Throughout October, we’ll be sending you tips on protecting your information and avoiding malicious attempts to extract your personal data. Visit its.weill.cornell.edu/cybersecurity for more info.