The creepy part of ChatGPT? Your lack of privacy

Generative artificial intelligence (AI) tools like ChatGPT and Google Bard have made headlines since their introduction, whether for managing stock portfolios, building software in minutes, or just being downright creepy:


Existential dread much, ChatGPT?


But as human-like as ChatGPT’s responses can be, the real issue isn’t whether AI is going to take over the world (we promise ChatGPT is not alive). As of now, generative AI is still greatly unchecked, which can compromise your data and credibility.


How AI generators work

AI large language models (LLMs) are designed to hold conversations with users, generating responses based on the data their developers trained them on. Training sources can include websites, articles, books, and more.

What’s WCM’s official stance on using generative AI?

Since generative AI is still an emerging technology, our Chief Global Information Officer, Dr. Curtis Cole, recently released guidance on using it. University committees plan to recommend a set of generative AI tools that will meet the needs of the Cornell community by the end of this year. In the meantime, keep two points in mind. First, you are accountable for your work, regardless of the tools you use to produce it. Second, because generative AI tools are considered public, you cannot enter any Cornell information, or another person's information, that is confidential, proprietary, subject to federal or state regulations, or otherwise considered sensitive or restricted.

So, should I use LLMs?

With caution. LLMs can work in a pinch for problem-solving (like writing a complicated Excel formula), for exploration, or for boosting productivity, but they're by no means a perfect solution.

  • Your chats are not private: Companies that develop LLMs can often see everything you input, even if it’s anonymized. Do not submit personal details or any protected health information (PHI) into an AI model. Likewise, any sensitive and proprietary WCM information should not be entered into LLMs. It’s important to review the privacy policy of tools like ChatGPT before using them. 
  • Double-check the information: If you're asking an LLM fact-based questions, verify the generated information against a trusted source. LLMs are notorious for producing false or misleading information, known as "hallucinations" (two lawyers were recently sanctioned for citing fake ChatGPT-generated cases in a legal brief). Artificial hallucinations can occur any time generative AI produces a seemingly realistic response that does not correspond to any real-world input. Relying on inaccurate LLM output could pose ethical, reputational, and legal risks.
  • Be aware of biases: LLMs generate responses using trained algorithms, so they can only offer information they've been given. That training data may contain unchecked biases, leading to inappropriate or offensive responses.
  • Stay vigilant about online security: Bad actors are increasingly using LLMs to generate more convincing phishing attempts and malware. Always report suspected phishing messages with Phish Alarm for your WCM email, and use security settings like multi-factor authentication to protect institutional and personal accounts.  

Did you try our pop quiz yet?

Try our 10-question pop quiz on cybersecurity. We’ll announce some winners the week of Oct. 23.



October is National Cybersecurity Awareness Month, an annual collaborative effort between government and industry to ensure you have the resources you need to maintain your security online. Throughout October, we'll be sending you tips on protecting your information and avoiding malicious attempts to extract your personal data. Visit the ITS website for our 2023 tips.

Need Help?

(212) 746-4878
Open: 24/7 (Excluding holidays)
WCM Library Commons
1300 York Ave
New York, NY
9AM - 5PM
Make an appointment

575 Lexington Ave
3rd Floor
New York, NY
Temporarily Closed