Below, we offer preliminary comments about AI technology and uses and policies at UWM, broken down into three categories: General Use of AI Technologies, Specific and Targeted Uses of AI Technologies (primarily in research projects), and Current UWM Policy Documents.

General Use of AI Technologies at UWM

Individuals and groups across campus are already leveraging generative AI tools in a variety of ways. Many cloud-based (SaaS) solutions have begun integrating generative AI components into their platforms. UWM faculty, staff, and researchers are leveraging common generative AI tools, such as ChatGPT and Microsoft Copilot. Some are even creating local generative AI implementations on their workstations or servers to fit specifically tailored needs.

Many existing and newly purchased SaaS solutions in use across campus are beginning to leverage generative AI to support some (or all) of the application’s functionality. Generative AI-powered SaaS tools, including chatbots, personal assistants, transcription tools, and translation software, represent just some of the functionality recently adopted across campus. The number of generative AI-powered solutions is growing at an increasing rate, with new software requests continuously submitted to campus IT services. While SaaS purchases conducted via the university’s procurement process undergo privacy and security reviews, post-purchase changes to those platforms, including the adoption of AI-powered components, are not subject to the same level of scrutiny. This is largely due to how SaaS updates are deployed by third-party vendors. These updates are not controlled by the client; the “when” and “how” are entirely up to the vendor. Additionally, updates are often pushed out with only marginal (if any) notice to clients about the changes. To complicate matters further, the privacy implications of changes are often not mentioned when updates are deployed; when they are, it is typically in a small notice indicating that the vendor’s privacy policy has changed. Update information focuses largely on the “improvements” that the update offers. This means that SaaS solutions currently in use at UWM could adopt generative AI components without any vetting of the privacy or ethical implications of the changes – and indeed this has already occurred for a select number of vendors.

The most prevalent use of generative AI on campus that we know of is through Microsoft Copilot with Commercial Data Protection (formerly known as Bing Chat Enterprise), which can be accessed by anyone with an active ePanther account in two ways: over the web using the Edge browser, or on university-managed Windows 11 PCs via the updated Copilot search bar. ChatGPT is also commonly used across campus, with several individuals obtaining licenses through the University’s IT software procurement process. Other cloud-based chatbots, such as Claude, Gemini, and Meta AI, are also likely in use depending on individual preferences. To date, UWM has not conducted any wide-scale assessment of student use of AI, so the prevalence among students of the tools mentioned above, as well as of other tools that generate text, image, video, and audio output, is unknown.

It should be noted that responsible use of these tools falls to the individual; most notably, users should not include sensitive information in their prompts. The tools themselves will respond to any prompt given to them, so user education is vitally important. While some of these tools contain features that allow for greater control of user-uploaded data, they often require individuals to properly configure application settings to prevent prompt information from being stored or used as training data. Improper configuration, malicious activity, or user error could result in unauthorized disclosures of sensitive information, including disclosures that would violate the Family Educational Rights and Privacy Act (FERPA). This is a high-priority area for education for all UWM stakeholders.

In between this category (“General Use”) and the next (“Targeted Uses”) are the efforts of select technically skilled individuals across campus who are leveraging different generative AI tools to create local, “home-brew” implementations. These vary from lightweight solutions that can run on a consumer-grade laptop to applications leveraging several cores in the University’s High Performance Computing environment. Individually controlled and managed implementations require the solution manager to consider several important issues beyond those facing general users. How do you manage sensitive, private, or otherwise protected information? How does the model check for bias, accuracy, IP content, source attribution, or harmful outputs – if at all? How do you ensure that usage of the solution aligns with university policy and with federal, state, and local law? How do you deal with harmful user prompts, or prompts that contain sensitive information that should not be stored? If these issues are not carefully considered and addressed, there could be serious ramifications for the solution manager as well as the university.
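To make one of these concerns concrete, the following is a minimal, illustrative sketch of how a solution manager might screen prompts for obviously sensitive substrings before they are logged or sent to a locally hosted model. The patterns and the `redact_prompt` function are hypothetical examples introduced here for illustration; a real deployment would need far more comprehensive detection (names, student record identifiers, protected health information, and so on) and should not be assumed to satisfy FERPA or university policy on its own.

```python
import re

# Illustrative patterns only; real deployments need broader coverage.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely-sensitive substrings with labeled placeholders
    before the prompt is stored or forwarded to a local model."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact_prompt("Email jdoe@uwm.edu about the appeal, SSN 123-45-6789"))
```

Even a simple pre-processing step like this illustrates the kind of deliberate design choice that distinguishes a managed local implementation from ad hoc use of a commercial chatbot.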

Targeted Use of AI Technologies at UWM

Current projects on AI and large language models at UWM run the gamut from curricular to institutional. UWM’s Connected Systems Institute (CSI) houses the nation’s first manufacturing-focused AI Co-Innovation Lab. Through a funding commitment from Microsoft as well as assistance from Microsoft experts, the lab will engage in design and prototyping efforts for AI and cloud technology services, with the eventual goal of product provision for 270 Wisconsin companies. This example is an obvious success for UWM and demonstrates one of the places where AI can best be put to use, given that CSI’s staff have substantial expertise and training in AI and would remain in compliance under a future campus-wide policy. The Center for Advancing Student Learning (CASL) works closely on the integration of generative AI into course development and curriculum on campus. For example, in the fall of 2023, CASL hosted three Active Teaching Labs focused on integrating generative AI into the classroom, prior to the development of any campus-wide generative AI policy or regulation. In the spring of 2024, CASL hosted multiple Active Teaching Labs on AI, including “Adapting to AI: Tools and Strategies for 2024,” “Teaching Smarter, Not Harder: AI Strategies for Teachers,” “Writing, AI and Rhetorical Thinking Across the Disciplines,” and “Generative AI, Academic Libraries, and Student Research.” These sessions were led by faculty, graduate students, librarians, and academic staff and considered many aspects of generative AI and higher education. CASL has led the charge at UWM in integrating generative AI into curriculum and research.

The UWM Library is embarking upon multiple AI projects around performing research and archiving. One project, undertaken in concert with colleagues at UW-Madison, is a workshop on Text Analysis and Natural Language Processing that guides users through various models and explains how to deploy NLP for research projects. Librarians, including those employed at UWM, have also been working on questions of ethical machine learning for over a decade. However, university libraries and archives around the country have recently partnered with generative AI companies like OpenAI to provide previously undigitized material as data sources for LLMs, demonstrating the need to update policies and approaches as the landscape of AI technologies transforms.

Although it may seem that all UWM projects focus on integration of generative AI into day-to-day functionality, some groups on campus are urging restraint. One major example is the Center for 21st Century Studies (C21). This coming school year (2024–2025), C21 will embark on programming centered around the theme of Slowness. This theme is inspired in large part by the mania to integrate technological “tools” into education, politics, civil society, and the humanities without considering the longer-term impacts. One of the two co-laboratory groups that C21 is highlighting this coming year focuses on AI and the Humanities and the critical conversations, possibilities, and dangers that arise from the junctures between the two. The group will bring Meghan O’Gieblyn, columnist on generative AI for Wired and author of God, Human, Animal, Machine, to campus in the spring to speak specifically on generative AI and its perilous implications for the humanities. We highlight this project to demonstrate that there are already active efforts on campus to gain and extend critical perspectives on AI technologies among faculty, staff, and students. It is clear to this work group that many stakeholders, from librarians to faculty to students to administrators, are alarmed by the prospect of UWM rushing to incorporate AI technologies, and advocate for a slowdown to better critically assess the new conditions brought on by generative AI. In these efforts, these stakeholders mirror quite closely those at many other prestigious institutions, such as those listed in the previous section.

Current Policy Documents

The Workgroup notes that UWM and UW Administration currently lack a formal, comprehensive policy that authoritatively addresses generative AI use per se. The UW Administration’s policy on the “Acceptable Use of Information Technology Resources” (located here) does address the use of computing systems in general terms but does not address generative AI specifically. Additionally, the State of Wisconsin Governor’s Task Force on Workforce and Artificial Intelligence released an “Advisory Action Plan” in July 2024 (located here) that includes “Education Policy Proposals” covering both the Wisconsin Technical College System and the Universities of Wisconsin, which includes UWM.

Various groups at UWM and UW Administration have also developed guidelines and best-practices documents for generative AI use, which are detailed below. However, these documents are not formal: none of the published or unpublished generative AI documents discussed below constitute formal faculty or academic staff “policies,” insofar as they have not been reviewed and approved through appropriate governance procedures. Nor do the documents constitute Selected Academic and Administrative Policies (SAAPs), which require approval by the Policy Advisory Committee, governance groups, and the Chancellor. Rather, these are “procedure” or “best practice” documents. In addition, there are no best-practices or guidance documents specifically geared toward students, except for the unpublished documents from University IT Services mentioned below. Some examples include:

  • Artificial Intelligence (AI) and Teaching
    Published by UWM’s Center for Advancing Student Learning, this document focuses on the teaching and learning applications and implications for artificial intelligence platforms, including generative AI. The intended audience for the document is UWM instructors, “to assist… in making informed decisions in your classroom”.
  • Guidelines on Use of ChatGPT and Other Predictive Language Models
    Published by the Division of Marketing, Communications and University Relations, this document is intended “to guide [the] informed and ethical use of ChatGPT and other generative natural language processing models in marketing and communications at the University of Wisconsin-Milwaukee”. The intended audience for the document is UWM Marketing and Communication staff members only.
  • Citation Styles – Generative AI
    Published by the UWM Libraries, this document is intended to provide “A comprehensive guide to citing in various citation styles, offering examples of citations as well as links to outside sources.” The intended audience for this document is UWM students, researchers, faculty, and staff.
  • “UWM Generative AI Literacy” and “UWM Generative AI Guidelines”
    Drafted by University IT Services, these documents have not yet been published. They are intended to focus on key areas of AI literacy, with particular attention to the basics of how generative AI systems work – including their risks and benefits – while also outlining potential use cases for generative AI on campus, providing generalized guidance for generative AI usage, and describing prohibited activities involving generative AI. The intended audiences for these documents are UWM students, faculty, and university staff.

These documents are intended to “(i) establish guidelines for ethical usage, data protection and privacy, (ii) awareness of biases, and (iii) means to ensure accurate and responsible acknowledgment of generated outputs for faculty, staff, and students who use these tools.” The intended audience for these documents is campus leadership at the various Universities of Wisconsin institutions.

In July 2024, the Governor’s Task Force on Workforce and Artificial Intelligence released an “Advisory Action Plan” that includes “Education Policy Proposals” covering both the Wisconsin Technical College System and the “Universities of Wisconsin” (see especially pages 10-11 and Appendix A, pages 22-23). Four “education policy proposals” with implications for UWM are identified: Investments in AI Research; Curricular Development and Pedagogical Enhancements for Improved Teaching and Learning; EAB Navigate – Advising Toward Student Success; and Faculty Recruitment and Retention in AI Fields. Among the proposal provisions with potential implications for “responsible use” are: funding sought for “support [of] faculty and student AI research efforts across traditional AI fields (e.g., computer science, data science, engineering) as well as fields in ethics …;” “foundational efforts for AI integration in curricular development and pedagogical enhancements;” and “develop[ment] and/or implement[ation of] new AI technologies built off EAB Navigate” to “improve course-completion rates, close equity gaps, and help students graduate.”

The documentation and planning efforts by these groups have been worthwhile, especially insofar as generative AI tools are already being used extensively by UWM students, staff, and faculty in a variety of ways. Even so, we reiterate: For clarity of expectation, consistency of practice, and assurance of appropriate ethical consideration, our campus would benefit from a comprehensive, formal university-wide policy that authoritatively addresses the use of AI technologies.