New gig: I am to give a workshop on "AI and Humanities" at a respected online institution, eight non-consecutive weeks, starting in February.
People seem curious about this kind of stuff. I find their reasons suspect, the expectations shallow, my authority ridiculous.
Still, I love the theme. There's something to explore here, something to watch as it develops. There's conceptual beauty to be found around these parts. I will post notes about this (the seminar itself is dry and practical, so I must empty my system beforehand).
But first of all – it's very rare that a computer, my favorite toy, can actually be used for something interesting… The last attempt failed: "Digital Humanities" was such a disgrace that both sides it was supposed to bridge now need defending.
ChatGPT, however, is kinda usable? And is used?
There's a type of person who doesn't use ChatGPT: they test it for truth; they find mistakes, so they avoid it. "Those AIs, they're not as smart as you think!", funny screenshot attached. I first heard this kind of argument around 2007, about Wikipedia being edited by mistake-making nobodies. This case is even easier: we are not here to ask the computer for the truth, so that's not pertinent.1
We the non-believers, we use ChatGPT for something humbler than truth. E.g. it helps us with translation. But then translation is not some fetishistic abstract task at which we want to compete with the computer. There's no perfect and absolute translation2 of a text, not for us, at least. We feel like we have some kind of agenda, especially in the "humanities", and we use the tools. The tools change us, we misapply them, and the way we see our goals is itself something we discover through the tools. But still there's always a difference between us and the tools, not necessarily ontological, but simply: as long as there are – at least – two separate entities, "me" and "AI", we will always have to negotiate.
No matter how good AI is. For me, it's easy to believe that nuances like idioms, contextual considerations, and stylistic choices could be handled by AI quite efficiently, perhaps even better than most of us could ever manage. While this might resolve the issue for many uses of writing, it elevates the stakes for the humanities. Different texts require different tools. The translation of philosophical texts often begins with defining and agreeing upon the main concepts3, which requires an understanding of – perhaps even a stance on – their history, and sometimes implies a deliberate polemic choice4. Literary studies, on the other hand, require the examination – and adjustment – of diverse translations of quoted material. When working with original texts, especially in ancient languages, a conventional "translation" might not even be what is actually sought. Rather, one might want to create a comprehensive text-specific analytical and exploratory tool5 or, conversely, propose a unique and subjective literary re-telling. What is, after all, the "task of the translator"6?
AI can definitely make all the reasonable choices, answer each of those questions. We just might not always agree with its choices.
So I don't think it's a temporary state of affairs that we still must define our tasks, or "prompt-engineer". It's not simply about telling the computer what to do; it's also a process of defining what we do, and of negotiating with our tools and materials about it. Which is good, because that's what work is, that's what it has always been: an activity that makes us human.
What's different about 2023 is that the way text works was redefined. Which makes the humanities such an interesting locus of observation. Tasks like programming were obviously transformed – they became much more pleasant and efficient – but I don't think they were transformed in their essence: tools have always helped with those things. The humanities, however, were always predicated on the fact that textual production, and consumption, has a very specific and formal relationship with materiality – one that framed the use of texts but was otherwise supposed to be transcended.
I mean something like: ink, paper, alphabet, movable type, bytes, UTF-8, Google search could all be essential to the distribution of a text, but when reading that text we're supposed to forget about all of them and treat the text as something absolutely disburdened of this kind of materiality. That's Derrida's "logocentrism": treating writing, understood here most generally, as a secondary, auxiliary process to speech, the latter supposedly being the presence we're after when talking, writing or reading. Derrida rightly shows this logocentrism to underlie humanistic thought, but it's not easy to see how exactly we could escape this style of thinking.
But all that previous materiality couldn't really do much to the text. You can tear a page, you can lose some bits, but the "text itself" would still be there to be restored, possibly intact, no matter how hard some postmodernist might try to convince us otherwise. Since nothing but humans could actually speak or write, every instance of recorded text must have referred to a particular act of human speech or writing that would be singular7 and original. Even if we only had access to such a text corrupted by the material means of reproduction, it would still, in the abstract, retain its originality – which we took for granted and tried to criticise, without ever being able to think without it.
Except this is not the case with ChatGPT, which happily writes and rewrites so much – not necessarily prompted or supervised by any human, and not recognisable or limited in any clear, well-definable way – that there is no singular event of producing speech, no original to refer to.
Which might just mean that much of the humanities have lost their favorite object. Linguistic, philosophical, literary, psychoanalytic interpretations of a particular sentence produced in 2023 can't do what they do best: take the particularity of a carelessly chosen word and use it to dig for additional meaning. AI is eerily present behind every text now, an impenetrable veil between the written word that we see and whatever the original event of human speech might be that, after many technological transformations, led to its production.
1. Another snub that is going around is "plagiarism machine". It is a zinger indeed! But I am not sure what the problem is, unless you plan to publish what it wrote – but then aren't you the plagiarist? Yes, I do agree that using AI for industrial-scale content generation is distasteful, but I think that should just go without saying.
2. When I run out of topics you'll see me writing 5k words on Walter Benjamin's "divine language", as seen in AI.
3. Often evading the rhetorical heuristics that demand, e.g., that you must not repeat the same words too often. Otherwise you might end up with a translation that loses the introduced concept completely, such as one translation of Foucault that managed, probably for stylistic reasons, never to mention gouvernementalité.
4. Even among the classics, the specific translation of Hegel's Geist or Freud's Trieb that you use can reveal quite a bit about your outlook. Even the choice of whether to translate something or keep it as a loan-word is important ("ressentiment", "Dasein", "Dao").
5. Something like the Perseus Project.
6. W.B. again; both times I refer to "The Task of the Translator".
7. In terms of human history, of course, for which this singularity is a major presupposition.