Whereof one cannot speak

One of the more frustrating, if unstated, presumptions of the early excitement about generative AI’s classroom potential — or, rather, coursework potential — was that papers (essays, texts) are the product of which courses are the production process. Students write papers; ChatGPT writes papers; ergo, ChatGPT produces the same thing students produce. Does it do it better? Does it do it worse? Can students learn how to do it better by watching how ChatGPT does it (whether well or badly)? Should students and ChatGPT perhaps write texts together? Knowing that ChatGPT can generate a paper or an exam essay that will get a B+, or an F, or a D, is only interesting if ChatGPT producing a text is analogous in some interesting way (interesting from the point of view of a teacher teaching a subject to students) to a student producing a text of similar form. I have read only a little about AI; one recent book, Matteo Pasquinelli’s Eye of the Master, some briefer discussions in other books, plus some magazine pieces, news articles, social media threads, and much textual regurgitation of these latter materials by university administrators and pundits. But, as far as I can tell, there is no reason for thinking that ChatGPT producing a text should be like a student doing so in any way that should interest me as a teacher of a particular subject, to students — beyond the breathless speculation (textual and financial) that such an assumption underwrites. Students write papers, but courses should not be production processes in which students are the workers, professors the bosses, and papers the product. To the contrary, the papers students write are principally means for their own thinking to change, and material for instructors to work on in such a way that they have that effect. In other words, it is precisely as a tool for producing texts that ChatGPT is not like a student. The academic qualities of the texts it produces, their resemblance to student (or scholarly) work, their accuracy or hallucinations, their fabrication of some sources and plagiarism of others, and so on, are beside the point. In this context, anyway.

Beyond this basic mistake, though, is a deeper and less articulate assumption, not at all specific to AI but built into our culture and institutions at many more points. This is that education makes speech easier, or, to put it another way, that intelligence means always having something to say. We can see one obvious manifestation of this in the way so many putative “public intellectuals” are really just experts in one thing (at most) who have become pundits-at-large by virtue of access to large platforms or regular columns, a process emulated and travestied in innumerable smaller ways on social media by guys whose one thing turns out to be the key to everything, no matter the subject at hand. (Perhaps especially if that one thing is modal logic, evopsych, or macroeconomics. But I digress.) Familiar as it is, this idea that intelligence terminally loosens the tongue is counterintuitive in the sense that knowledge in any advanced field of learning tends to be highly specialized; the further you go, the smaller — and the less accessible to laypeople — the area in which you are at the top of your game. It also violates experience in another, related but distinct, sense: knowing something well often means seeing that it is more complicated than it first appears. Historical events that have pat explanations in popular narrative turn out to have been more contingent, to have depended on more factors, to have had more complex and conflicting meanings and legacies, and to be known through more partial or compromised or contestable sources, and so on. There are important reasons why learning should make it harder to say things, or to say them confidently, and why the most facile communicators even among scientists or scholars are not always the most accomplished or innovative researchers — at least not at the same time.

From this perspective, a machine for saying things effortlessly is not an obvious learning tool. But it does ensure that everyone, whether they know anything or not, have read anything or not, have done the work and the thinking or not, has something to say. On one hand, it strikes me that this is very close to the state of affairs academics and some others used to despise in social media, giving rise to such hopeless gestures as “Historian here!” A significant difference, though, is that social media was outside the classroom and was not necessarily enjoined or promoted by university management, while ChatGPT is being touted and used in academic contexts and for clear if misbegotten academic purposes. It’s worth thinking about the circumstances in which that can seem like a good “use case” for technology in higher education (historian here, if anywhere), and how it got to be that way. On the other hand, it seems to me that saying ChatGPT allows everyone to have something to say amounts to saying that ChatGPT ensures that everyone can be productive, provided the product is text. Perhaps it will change what we do into something more like a production process. Or perhaps, as Pasquinelli suggests (following Adam Smith), “labour has to become ‘mechanical’ on its own, before machinery replaces it,” and the problem with generative AI in the classroom is precisely that generative AI can seem to have a place in the classroom in the first place: that the classroom — overcrowded, understaffed, entered and exited through the rhetoric of marketable skills and job training — is already understood to be a content manufactory.1 In this sense, ChatGPT’s appeal to students may just be a measure of how far students are from the freedom to learn, and how far universities are from protecting it.

  1. Matteo Pasquinelli, The Eye of the Master: A Social History of Artificial Intelligence (London: Verso, 2023), 239.