Introduction

Generative Artificial Intelligence (Generative AI or GenAI) offers substantial opportunities for individuals and institutions that harness its potential creatively, responsibly and ethically. It also poses real dangers for those who use Generative AI without a clear grasp of the associated risks. The University of Wisconsin-Milwaukee is committed to providing students with the knowledge, skills and experience necessary to succeed in an AI-driven world.

Guidelines

As part of the university’s commitment to the ethical and responsible use of Generative AI, follow these guidelines when using Generative AI tools:

  • Follow existing laws and policies – Do not use these tools unethically or as part of any illegal activity. Ensure compliance with the acceptable use policies of any entity that hosts Generative AI tools. Relevant Universities of Wisconsin policies to keep in mind include the Acceptable Use Policy, the Data Classification and Protection Policy and the Privacy Policy. If you administer Generative AI tools, keep up to date on changes to laws, regulations and policies to ensure your use of Generative AI tools complies with them. If you have a question about acceptable use, consult the Information Security Office or the Center for Excellence in Teaching & Learning.
  • Review output for accuracy, bias and possible ethical concerns – Generative AI can provide misleading, biased and incorrect information, and in some cases will fabricate citations, quotations and descriptions outright (known as “hallucinations”). Always review the output from Generative AI tools for potential bias, verify that the information provided is correct, and verify that any citations or quotations included in the output are legitimate.
  • Transparency – If you are building an AI system, be transparent about its AI components, and explain how and why AI systems make certain decisions within your environment. Providing documentation on the model and training data, along with the factors that influence the model’s outputs, is an effective way to increase transparency. Being able to explain the relationship between inputs and outputs goes a long way toward increasing trust in a Generative AI tool.
  • Security – Whether you plan to use AI tools in your work or to build your own GenAI tool, consult with the Information Security Office about the security measures that should be in place to protect your AI model, training data and generated content from unauthorized access or malicious activity. While no one can guarantee the absolute privacy of data stored in computing environments, there are reasonable steps that can be taken to minimize risk.
  • Education – Research the tools you use to better understand potential biases in their models or training data. Learn more about prompt engineering so you can better understand how your prompts affect potential outputs; a simple sketch follows this list.
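
To illustrate the Education point above, the hedged sketch below shows how small changes in prompt wording can change what a model produces. The generate function and the example prompts are hypothetical placeholders, not a UWM service or any vendor’s API; substitute whichever Generative AI tool your unit has approved.

    # Hypothetical sketch: how prompt wording shapes output.
    # generate() stands in for whichever approved GenAI tool you use;
    # it is a placeholder, not a real UWM or vendor API.
    def generate(prompt: str) -> str:
        """Placeholder for a call to an approved Generative AI tool."""
        raise NotImplementedError("Wire this to your approved tool.")

    # Three prompts asking for the same summary, with increasing constraints.
    prompts = [
        "Summarize the attached policy.",
        "Summarize the attached policy in three bullet points for students.",
        ("Summarize the attached policy in three bullet points for students. "
         "Quote the policy directly for any claim, and answer 'not stated' "
         "rather than guessing when the policy is silent."),
    ]

    for p in prompts:
        print(f"PROMPT: {p}")
        # Compare outputs side by side: the third prompt's explicit
        # instructions reduce (but do not eliminate) fabricated claims.
        # print(generate(p))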

Data Privacy

UWM is legally and ethically obligated to protect individual and institutional data, and data privacy is a major area of concern with Generative AI. Many Generative AI tools use the prompts that users enter as training data to improve the underlying model. This means the information you input into the tool could be exposed to other users of that tool.

Keep the UW System Administration Data Classification Policy in mind when entering data into a Generative AI tool. Different data classification levels carry distinct rules and requirements under Universities of Wisconsin Administration policy. Below is a brief explanation of the guidelines to follow, depending on the classification level of the data you plan to input into the AI tool.

  • Low Risk Data: Use cases involving low risk data are approved by UWM IT for use in local (on-premises), private-cloud and public-cloud implementations.  
  • Medium Risk Data: For use cases involving medium risk data, contact the Information Security Office (infosec@uwm.edu) and the research computing team (research-computing@uwm.edu) to review your use case. Discuss with both groups whether de-identification will be needed and, if so, to what extent (a simple illustration follows this list).
  • High Risk Data: For use cases involving high risk data, including protected health information or personally identifiable information, contact the Information Security Office (infosec@uwm.edu) and the research computing team (research-computing@uwm.edu) to review your use case. Discuss with both groups whether de-identification will be needed and, if so, to what extent.
  • Human-Subject Research Data: For use cases involving human-subject research data, contact the Institutional Review Board (irbinfo@uwm.edu), the Information Security Office (infosec@uwm.edu) and the research computing team (research-computing@uwm.edu) to review your use case. Discuss with all involved groups whether de-identification will be needed and, if so, to what extent.
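
To make the de-identification discussions above more concrete, here is a minimal, hedged sketch of redacting obvious identifiers before data is entered into a Generative AI tool. The patterns are illustrative only; the actual extent and method of de-identification for your use case must be determined with the Information Security Office, the research computing team and, where applicable, the IRB.

    import re

    # Illustrative patterns only: these catch obvious identifiers such as
    # email addresses, phone numbers and SSNs, and nothing else.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace obvious identifiers with placeholder tags."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    record = "Reach Ada Lovelace at ada.lovelace@uwm.edu or 414-555-0123."
    print(redact(record))
    # -> Reach Ada Lovelace at [EMAIL] or [PHONE].
    # Note the name is NOT caught: redacting names, addresses and other
    # free-text identifiers requires more than regular expressions.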