The Workgroup has two recommendations. First, UWM should broadly adopt a cautious and critical approach to AI technologies and their incorporation into the activities of faculty, staff, and students. Second, UWM should create a framework for the use of AI technologies at UWM and by UWM faculty, staff, students, and researchers that allows for the flexible use of such technologies – recognizing that both the technologies and the broader landscape are changing at a rapid pace – while ensuring that AI technologies are used ethically, consistent with UWM’s mission, and within the parameters of applicable laws and UWM/UW policies, including FERPA, intellectual property, and data protection and security.

We recommend that this framework include the following elements:

  1. The development of a campus-wide policy that affirms UWM’s commitment to engaging AI technologies responsibly and in a way consistent with its mission and values. It would include the following additional key elements:
    • A scope statement that makes clear the breadth of campus activity to which it applies (e.g., individual uses, classroom spaces, campus purchases, partnerships, etc.).
    • A statement that our engagement with AI technologies should proceed with caution and include input from all stakeholders, including faculty and staff.
    • A statement that UWM’s engagement with AI will always be guided first by ethical considerations, regardless of the technologies’ potential impact.
    • An acknowledgement that AI technologies are not wholly positive phenomena and must be critically assessed.
    • An acknowledgement that AI technologies reach across all disciplines, and that all disciplines should be included in their ongoing critical assessment.
  2. The creation of guidance documents/procedures where additional or more detailed guidance would be beneficial. Unlike more comprehensive policies, guidance documents are likely to focus on a narrower AI-technology-related issue that affects a smaller subset of the UWM community, and should be developed by and/or with input from subject matter experts and stakeholders. (See guidance documents mentioned above.) These guidance documents and any existing documents should be consistent with any AI-technology-related policies.
  3. The regular and ongoing collection of information about the use of AI technologies on campus through surveys and other relevant methodologies.
  4. The opportunity for consultation and review of decisions/policies/procedures by impacted stakeholders. This includes, for example, faculty/staff/UITS governance groups (either through existing mechanisms or, as needed, newly developed ones), subject matter experts, other stakeholders, and leadership.
  5. The incorporation of ethical considerations regarding AI technologies into existing or new mandatory training for faculty, staff, and students that includes, at a minimum: a) what generative AI is; b) the scope of AI use in university contexts; c) ethical considerations in AI use; and d) types of AI tools. Given the dynamic nature of generative AI, these training materials should be regularly updated.
  6. The creation, as needed, of oversight processes/review committees/working groups for uses of AI technologies where there are heightened ethical or legal concerns. This may include oversight of:
    • The use of generative AI in research
    • The use of non-UWM-owned intellectual property to train homegrown AI tools
    • The procurement of AI-related software (e.g., for contract provisions that allow for data use to “train” the AI software and/or claim ownership of inputs)