The rapid spread of generative AI through the cultural zeitgeist has led an array of higher education institutions, at various institutional levels (administrative offices, schools, centers), to address the “Responsible Use of AI” through the development of policies and procedures for their campuses. Please refer to Appendix I for a selection of hyperlinks to resources from many of these institutions.
The Workgroup’s research in this area has made it clear that UWM lags in developing coherent and consistent campus policies relating to AI. The major lesson our peers provide is that adoption of generative AI should be treated with caution, with appropriate ethical consideration, and with the input of all stakeholders, including faculty.
From the list above, we have identified institutions that have taken critical steps toward careful assessment of generative AI’s implications and the implementation of various governance mechanisms. We have paid particular attention to institutions of considerable prestige; that prestige often translates to a robust array of researchers, IT professionals, and software developers on campus who broadly understand the technology, its history, its benefits and drawbacks, its security dangers, and its realistic, as opposed to idealistic, possibilities. Without fail, these institutions evince a measured response to generative AI use and weigh its risks before considering its ostensible benefits.
The University of Toronto’s excellent Information Security guidelines begin by urging practitioners to recognize the possibility of inaccurate results, attend to misinformation, and recognize privacy risks. Monash (Australia) insists in its policy that the university community consider:
- How we use technologies responsibly and ethically including recognizing and mitigating biases, data and privacy risks
- How we critically evaluate AI outcomes and incorporate outputs
- How we collaborate with AI and maintain human accountability
- How we operate across the spectrum of knowledge production from individual humans demonstrating their performance to productive collaborations of multiple humans and AIs
- How we celebrate and advance human capacities/capabilities
- How we remain open to experimentation and change within dynamic technological contexts
The lesson here is that careful approaches that move more slowly and critically benefit all stakeholders. In addition, all such action – regardless of its nascence in campus-wide initiatives, faculty creativity, or student activities – should be guided from the outset by ethical considerations.
Such an approach is similarly embraced by Cornell, which stresses the need for familiarizing oneself with generative AI in order better to assess it. But this recommendation does not immediately translate to use. Instead, it “encompasses recognizing when and how generative AI is used in various domains, assessing the reliability and validity of AI-generated outputs, identifying the ethical and social implications stemming from the design and use of generative AI applications, and creating and communicating with generative AI systems in appropriate ways.” When “critically evaluating AI or generative AI,” Cornell asks instructors and students to consider the following questions:
- Is the AI-generated content accurate?
- How can you test or assess the accuracy?
- Can other credible sources (outside of generative AI) validate the data or item produced?
- How does the information generated impact or influence your thinking on this topic?
- Who is represented in this data?
- Is the data inclusive in terms of the material’s scope and the perspectives that it presents?
- Knowing LLMs may also be collecting the data your students input (i.e., in their prompts), how will you make students aware of this practice so they will in turn safeguard their own privacy?
By offering its guidance as questions rather than demands, Cornell incorporates critique into generative AI usage at every stage. In addition, Cornell’s and other best practices strip away media hype and clarify AI, LLMs, and generative AI as distinct terms. Given that generative AI tends to produce weak generalizations and summaries couched in overconfident rhetoric, this precision helps users approach the technology from an educated perspective.
The lesson here is that campus-wide insistence on critical specificity as a framework immediately discredits some of generative AI’s worst tendencies and calls into question the naive assumption that generative AI is a wholly positive phenomenon.
Columbia University’s published policy states: “While the University supports the responsible use of AI, these novel tools have notable limitations and present new risks that must be taken into consideration when using these technologies.” This language puts ethics first and foremost in considering generative AI use. The policy goes on to state the risks (as outlined earlier in this document).
Finally, Columbia’s policy for students is stated clearly and cogently: “Absent a clear statement from a course instructor granting permission, the use of generative AI tools to complete an assignment or exam is prohibited. The unauthorized use of AI shall be treated similarly to unauthorized assistance and/or plagiarism (page 11 of Standards and Discipline).”
Columbia also stresses the need for transparency in generative AI usage across all stakeholders and bans the uploading of any personal or confidential information. This prohibition covers not only addresses and identification numbers but also interviews, unpublished research, and a multitude of other protected materials, illustrating the significant risk of inputting most material into generative AI.
The lesson here is that a campus can adopt a single policy on generative AI that reaches across all disciplines while neither requiring nor banning usage.
Research centers across the country are embarking upon new projects that carefully consider the use of generative AI; these centers bring together a diverse array of university community members to develop policy and research. For example, the Berkman Klein Center for Internet and Society at Harvard comprises “faculty, staff, fellows, students, and practitioners representing a wide range of backgrounds, philosophies, and disciplines.” Berkman Klein has produced multiple cutting-edge reports and is shaping the narrative on generative AI not only at Harvard but across higher education and the broader culture.
The lesson here is that the development of generative AI policy should not be limited to a single group, ideology, or discipline. The insights of the historian or the philosopher or the media theorist are just as consequential as those of the technology expert.