Introduction

The University of Wisconsin-Milwaukee (UWM) Artificial Intelligence Task Force (AITF) was formed in early 2024 and initially comprised five workgroups (Education and Curriculum, Research, Business Operations, Student Success, and Infrastructure and Data). Each workgroup was charged with participating in AITF’s efforts to identify AI technology-related use cases for UWM.

The Responsible AI Workgroup (referred to herein as the “Workgroup” or “we”) was formed several months later to advise on the potential ethical problems posed by the adoption of AI technologies. The Workgroup also recognized growing campus concern about the lack of a comprehensive policy on the ethical use or regulation of generative AI at UWM. This document therefore takes a broader focus: it provides a primer on AI as it relates to UWM, along with preliminary recommendations for designing a framework to manage responsible AI technology use at UWM.

The Workgroup notes that the media frenzy around AI technologies initially produced a largely positive narrative, one of unquestioned progress and beneficial impact. This is despite the current “generative” applications (primarily based on or related to large language models, or “LLMs”) being very much in their infancy.

The Current AI Landscape: Opportunities and Risks

In actual performance, AI technologies are proving both inconsistent and potentially dangerous. The following is a partial list of documented issues:

  • Generation of “hallucinations,” or false information masquerading as evidence and documentation.
  • Reproduction as fact of harmful and biased content derived from internet mining and training data, including material from unregulated sites such as Reddit.com and from AI-generated content.
  • Potential copyright infringement, as illustrated by the Authors Guild class action lawsuit against OpenAI.
  • Risk to personal data privacy, as noted by recent European Union regulations.
  • Negative environmental impact, due to the significant power consumed by data centers.

These issues are the backdrop for a recent, more skeptical turn in media coverage of AI, reflected as well in private sector appraisals by firms such as Goldman Sachs. In October 2023, President Biden signed an executive order recognizing the positive potential of AI technologies while also encouraging bipartisan legislation to address the risks they pose, including the alarming collection of personal data by companies seeking to profit from that information.

Responsible AI and UWM’s Mission and Values

At the same time, it is also clear that AI technologies are part of the daily landscape for everyone at UWM: intrinsic to many software applications used by students and staff, platforms for pedagogical and research experimentation, and spaces for idea generation. AI technologies are thus a challenge for UWM to navigate because of their risks, their opportunities, and their prevalence. A commitment to responsible AI technology use must be directly aligned with UWM’s mission, vision, and values, including “…a commitment to excellence, powerful ideas, community and global engagement, and collaborative partnerships.” UWM’s values encourage open inquiry, diversity, ethical behavior, and transparent and inclusive decision-making that centers care and belonging. UWM also values intentional and sustainable stewardship of its resources (e.g., human, environmental, financial).

It follows that any guidance on responsible AI technology use at UWM must rest on a comprehensive understanding of the depth and range of issues involved, supported with the necessary resources (including time and technology). For example, responsible AI technology use with respect to bias must include, among other things: an understanding of the impact of bias across the AI lifecycle (pre-design, design and development, deployment, and test and evaluation phases); a plan for addressing the influence of bias in datasets; and recognition of the potential for individual, historical, and societal bias to have a detrimental impact on UWM’s use of and reliance on AI technologies.

Purpose and Scope of This Report

This report reflects the Workgroup’s recommendation that UWM’s commitment to responsible AI technology use means adopting a clear framework for evaluating AI technology use across UWM users and uses. It is also clear to the Workgroup that there are urgent needs relating to the current use of AI technologies on campus that may require ethically informed decision-making before more formal policies, guidelines, and consultative and/or oversight mechanisms are in place. These urgent matters are the subject of ongoing conversations within the Workgroup (for privacy reasons they are covered only in general terms herein).

In the interests of keeping this document manageable (in both length and timeliness), and to serve as a resource for future work by UWM in this area, the Workgroup has adopted the following pragmatic shorthand:

  • “Ethical” and “responsible” use of technology refers to technology use at UWM evaluated with respect to our mission and values.
  • To reflect the state of the existing conversation about responsible AI in higher education, we rely on a rough categorization of the primary areas in which concerns may arise: threats to privacy, opportunities for abuse, expressions of bias, and barriers to sustainability.
  • When we refer to AI technologies and the challenges they present today, we mean for the most part large language model (LLM) and closely related (“generative”) systems. Currently, LLMs are overwhelmingly the most prevalent AI technologies in use in higher education.

We stress again that this document is intended as a primer for understanding the challenges that AI technologies present to UWM and its mission; we encourage all who read it to read further, starting with the appendix of resources that we have provided. The main text of this report includes:

  • A brief history of AI, including a summary of general ethical concerns arising around it.
  • An overview of responses from other selected institutions to the challenges that AI poses to higher education, highlighting specific responses that may serve as models for UWM.
  • Specific concerns with respect to ongoing and proposed use of AI at UWM, and additional discussion of potential mechanisms to consider in response.
  • A conclusion covering several additional and crucial considerations for engaging AI technologies responsibly within our university context.

Our recommendations are broad and often general, intended to guide future, more detailed efforts. Our recommendation to adopt a multi-pronged framework governing AI technology use should also be read as a call for transparency regarding all UWM engagement with AI technologies and any plans for their future adoption, as well as for the resources necessary to address these issues ethically.