Cornell guidelines for artificial intelligence

The following message comes from Dr. Curtis Cole, Vice President and Chief Global Information Officer: 

---

To the Cornell community,

As Cornell continues to explore artificial intelligence (AI), particularly generative AI, we are providing some preliminary guidelines for using these rapidly evolving technologies in ways that uphold our core values of purposeful discovery and free and open inquiry and expression.

This communication summarizes the spirit of more extensive and formal guidance, which is regularly updated on Cornell's new general webpage about AI and includes links to reports from university committees.

Generative AI, offered through tools such as ChatGPT, Claude, Bard, Bing AI and DALL-E, is a subset of AI that uses machine learning models to create new, original content, such as images, text or music, based on patterns and structures learned from existing data. 

Cornell’s preliminary guidelines seek to balance the exciting new possibilities offered by these tools with awareness of their limitations and the need for rigorous attention to accuracy, intellectual property, security, privacy and ethical issues. These guidelines are upheld by existing university policies.

Accountability: You are accountable for your work, regardless of the tools you use to produce it. When using generative AI tools, always check the output for errors and bias, and exercise caution to avoid copyright infringement. Generative AI excels at applying predictions and patterns to create new content, but because it cannot understand what it produces, the results are sometimes misleading, outdated or false.

Confidentiality and privacy: If you are using public generative AI tools, you must not enter any Cornell information, or another person's information, that is confidential, proprietary, subject to federal or state regulations or otherwise considered sensitive or restricted. Any information you provide to public generative AI tools is considered public and may be stored and used by anyone else. 

As noted in the University Privacy Statement, Cornell strives to honor the Privacy Principles: Notice, Choice, Accountability for Onward Transfer, Security, Data Integrity and Purpose Limitation, Access and Recourse.

Use for education and pedagogy: Cornell encourages a flexible framework in which faculty and instructors can choose to prohibit generative AI use, allow it with attribution, or encourage it. In addition to the CU Committee Report: Generative Artificial Intelligence for Education and Pedagogy, delivered in July 2023, and resources from the Center for Teaching Innovation, check with your college, department or instructor for specific guidance. 

Tools and use for research, administration and other purposes: By the end of 2023, Cornell is aiming to offer or recommend a set of generative AI tools that will meet the needs of students, faculty, staff and researchers, while providing sufficient risk, security and privacy protections. 

The use of generative AI for research and administration purposes must comply with the guidelines of the forthcoming reports from the university committees for research and administration. The reports are scheduled to be published by the end of 2023.

For those seeking to purchase generative AI tools or subscriptions in advance of those guidelines and recommendations, the IT Statement of Need process is required.

If you have questions or concerns, please contact AI at Cornell.

Curtis L. Cole
Vice President and Chief Global Information Officer
