Phishing scams used to be easier to detect. A few obvious misspellings, a suspicious link, and a shady sender address were clear warning signs that you should delete a message. Yet, as the scams mentioned this month have demonstrated, phishing is only getting more sophisticated, especially with the use of AI.
Last month, Reuters published an investigation it conducted with the help of a Harvard researcher to understand how criminals are using AI chatbots to create convincing phishing campaigns. In seconds, a clever prompt can generate perfectly phrased emails, videos, and even voice clones designed to compel unsuspecting victims to divulge personal data or send money.
Don’t AI companies train their chatbots to protect users?
While many AI companies build guardrails to prevent people from using their chatbots with malicious intent, the investigation showed that these protections are flimsy. For example, Meta’s AI service initially refused a request to help Reuters reporters create an email to convince seniors to give up their life savings. However, once the request was reframed, the app was quick to oblige.

In fact, the investigation tested six major chatbots – ChatGPT, Grok, Meta AI, Claude, Gemini, and DeepSeek – and while all of them initially refused to help reporters create fake emails from the IRS, four ultimately complied when told the request was in the name of “research.”
How to protect yourself
Last year, the FBI released recommendations on red flags to watch for amid the rise in AI-powered phishing scams:
- Have a codeword: Create a secret word or phrase with your family to verify their identity.
- Look for imperfections: While chatbots help scammers polish language or images, AI-generated images and videos often have elements that just don’t look right, like distorted body parts (particularly hands and feet), inaccurate shadows, watermarks, lag time, incorrect voice matching, and unrealistic movements.
- Keep your online posts private: Scammers can use your photos and voice to create AI impersonations – keep your social media accounts private and be mindful about what you post.
- Don’t send money: Be wary of people you don’t know, or only know online, who ask for money, gift cards, or other payments. This also means safeguarding personal data (e.g., bank details, Social Security numbers).
- Pay attention to phrasing: Scammers can use AI to clone the voice of your loved ones, but they can’t copy how your loved one speaks. If the word choice or tone seems off, find another way to verify their identity (see: codeword).
- When in doubt: If anything about a message or call you’ve received seems strange, don’t take action right away. Verify that the purported person or company contacting you is legitimate by finding their contact info and reaching out directly. You can also report phishing scams to the FBI.
October is National Cybersecurity Awareness Month, an annual collaborative effort between government and industry to ensure you have the resources you need to stay secure online. Throughout October, we’ll be sending you tips on protecting your information and avoiding malicious attempts to extract your personal data. Visit its.weill.cornell.edu/cybersecurity for more info.
