
The Cruel Optimism of AI and the Humanities


There’s a pervasive and deceptively hopeful narrative making the rounds: AI needs the humanities. It sounds profound, a call for ethical oversight and deep, contextual thinking to steer the technological behemoth. But this is a fantasy. The push to integrate AI and the humanities is not a path to enlightened progress; it’s an exercise in cruel optimism, the theorist Lauren Berlant’s term for an attachment to something that is itself an obstacle to the flourishing it promises. The integration offers the impression of empowerment while changing nothing about the underlying structures of power. It’s a bad idea for our courses and a dead end for our research.


The unhappy student

Consider how these courses are designed. When a university proudly launches an “AI for Humanists” initiative, it typically takes one of two flawed forms. The first is the applied course. Here, students are promised the skills to perform rigorous analysis. But what can you truly learn in a single semester? To do meaningful work, you need a deep grounding in statistics, experimental design, and causal inference. You also need to understand the technical guts of the models, like transformers. A single course can’t possibly provide this depth. It creates a dangerous illusion of competence, where a thin veneer of technical knowledge masks a profound lack of analytical rigor.

The second form is the critique course, often rooted in Science and Technology Studies (STS). Students are taught to critique the infrastructure, the hidden labor, and the ethical failings of AI. These discussions are important, but their real-world impact is negligible. We have consistently seen that these academic, ethical provocations are either unsound, limited, or simply ignored by the industry they target. Major changes don’t happen because of a seminar paper; they happen in courtrooms or are forced by regulation. This is the very definition of cruel optimism. Students are given the vocabulary of critique and feel empowered, but they are left with no actual ability to change how they, or anyone else, works with these all-pervasive systems.

Two cultures, still clashing

This disconnect deepens when we move from the classroom to research. The core methodologies are fundamentally incompatible. Humanists, rightly, will not and should not give up interpretation—the nuanced, context-dependent reading of culture. On the other side, computer scientists and the Big Tech companies they work for cannot truly grasp or afford to care about the radical situatedness and contingency of society.

Why?

Because it’s entirely unclear how those nuanced, often un-generalizable, insights can help them achieve their primary objective: to build better products at scale. The incentives are misaligned. One side seeks irreducible complexity; the other seeks scalable, predictable solutions.

This leads to a collaboration where the humanist’s contribution is either tokenized or flattened beyond recognition. To be useful to the engineer, cultural nuance must be converted into a feature, a variable, a dataset. The very soul of the humanistic inquiry is lost in translation.

Ultimately, any interdisciplinary collaboration should be judged by what it produces. A product can be a book, a scholarly paper, or, in this context, a model or an algorithm. Take sentiment analysis. It’s a classic example where humanistic input seems vital. But in practice, the messy, contradictory, and ironic nature of human language is sanded down to fit a crude classification of positive, negative, or neutral. The goal isn’t a deeper understanding of human emotion; the goal is a functional tool that is just good enough. The collaboration doesn’t elevate the technology with humanistic insight; it instrumentalizes humanistic knowledge for a predetermined technical goal. When “build better products” is the ultimate measure of success, the rich, critical perspective of the humanities will always be a secondary, and ultimately disposable, concern.
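To make the flattening concrete, here is a minimal sketch using the Hugging Face transformers sentiment pipeline. The example sentences and the default model choice are my own illustrative assumptions, not something drawn from any actual collaboration; the point is only the shape of the output.

```python
# A minimal sketch of how sentiment analysis flattens language into labels.
# Assumes the Hugging Face `transformers` library is installed; the example
# sentences are illustrative, and the exact label set depends on the model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # loads a default English model

texts = [
    "Oh, wonderful. Another meeting that could have been an email.",  # irony
    "This meeting was genuinely wonderful.",                          # sincerity
]

for text, result in zip(texts, classifier(texts)):
    # Each output is just a label and a confidence score; the irony, tone,
    # and context that a human reader supplies are simply not representable.
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```

Whatever labels a given model happens to return, the output space itself makes the argument: there is no column for irony, ambivalence, or context, only a label and a score.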


