One Step Ahead: AI Guidance

Another tip in a series provided by the Offices of Information Security; Information Systems & Computing; and Audit, Compliance & Privacy

Last year, the world saw a steep rise in the use of artificial intelligence (AI) across disciplines, particularly with the advent of OpenAI’s ChatGPT. Although the potential benefits of AI are considerable, it is still important to understand and weigh the risks inherent in using AI platforms.

In response to the rise in AI, the University of Pennsylvania began developing AI guidance, seeking input from stakeholders across campus to outline some of these possible risks. The resulting guidance is hosted by the Office of Information Security (OIS) at https://www.isc.upenn.edu/security/AI-guidance.

Additionally, OIS has provided detailed guidance on Large Language Models (LLMs), such as ChatGPT: https://www.isc.upenn.edu/security/LLM-guide.

If you plan to use AI-driven solutions, carefully consider in advance what data you will be entering or sharing with that solution. To protect University data, individuals should never enter personal data or Penn proprietary data into AI platforms, especially if there is no University agreement in place for the service.

For additional tips, see the One Step Ahead link on the Information Security website: https://www.isc.upenn.edu/security/news-alerts#One-Step-Ahead
