The history of artificial intelligence is a history of repeated branding and repeated hype. Coined in 1955 to secure funding for a conference, the phrase has since labeled an array of potential technologies, often with the same eye toward attracting money. Notably, the phrase has always signaled an inherent connection to human cognition that the technologies in no way possess. Today almost everything sold as AI is based on so-called neural nets, yet less than a decade ago the proponents of these techniques preferred to call them machine learning or deep learning. The teams commercializing these technologies at firms like DeepMind and OpenAI reclaimed the AI label, again for the purpose of attracting attention and potential funding.
Modern Generative AI and Its Implications
For the past few years, AI hype has focused largely on generative systems, particularly large language models. The term works as a shortcut for systems like ChatGPT. Trained on vast quantities of textual data, these large language models spit out polished and seemingly authoritative responses to any prompt. Boosters like Sam Altman of OpenAI and Elon Musk claim that within a few years this technology will lead to “artificial general intelligence” – systems that can perform as well as humans at any mental task. This recalls the claim by Nobel Prize winner Herbert Simon in 1960 that the “problem-solving and information handling capabilities of the brain” would be fully duplicated “within the next decade.”
As Emily Bender and other experts have observed, LLMs are “stochastic parrots” that pastiche the text on which they were trained. Whereas twentieth-century AI was a failed effort to produce systems that reasoned from a reliable base of verified facts, systems like ChatGPT churn out language that fits statistical patterns derived from very large sets of unreliable text. Sometimes what they produce is true, sometimes it is not, but the systems themselves have no way to distinguish between the two. That crucial evaluation is left to the user, yet the depth of knowledge and the commitment of time needed to catch the subtle errors, biases, and outright inventions in their output are far greater than the work needed to search for and integrate trusted sources oneself. What is more, the dialogic, “natural” format of interacting with these systems, with the user in the position of asking questions and receiving responses, gives generative AI the appearance of providing new, creative, and authoritative information, which in important ways is the opposite of what it can produce. It is for these reasons that the philosopher Harry Frankfurt’s analysis of bullshit has been applied to these systems, which have been described as “engines for the mass production of bespoke bullshit.”
Many excited claims have been made about the economic impact of generative AI, inflating gigantic stock market bubbles even though, as The Economist recently noted, there is no evidence that these systems have made any economic contribution. The private economic interests of tech giants such as Microsoft and Google are also important for understanding their current rush to build generative technology into core services like web search and Word. Such wide availability raises all of the concerns already noted, but in these applications the technologies appear at the intimate level of the daily practice of UWM faculty, staff, and students. As such, they potentially conflict with those individuals’ desire to conduct their own work in ways consistent with UWM’s mission and values, and with faculty and staff efforts to train students to think critically, weigh evidence, acknowledge uncertainty, and find their own voice (as writers and in dialogue with others).
While critical educators are engaged in ongoing and potentially productive discussions about how AI technologies might be incorporated into the project of higher education, it is in the interests of the private concerns behind these technologies that we take only a reactive stance toward them and adjust our teaching, research, and other activity accordingly. This report, however, takes the strong counter-position that higher education leadership around AI technologies, which are indisputably part of daily life going forward, means pushing back and taking time on these and related questions. The aim is to prompt all members of the UWM community to be discerning thinkers about the technologies in their midst, including AI technologies, rather than either passive users of them or narrow engineers for them (if these are not the same thing).