4 Comments
Rennan Martens

Brilliant insight. Beyond Kantian thought, I would also reflect on this subject through the lens of Andy Clark. AI astonishes me because it shows how much human thinking and mastery is not “inside our brains” — but distributed across language, machines, networks, and environments we co-create. Of course, there are numerous ethical, structural, and even ecological issues regarding the use of AI that still need to be addressed, but I find it fascinating how our thinking seems to extend beyond the body—or beyond the box that holds our brains.

Eric Lars Martinsen

Thanks for sharing your insights and highlighting this critical piece.

Madeleine Champagnie

Brilliant. Education needs to make the effort to shift the rock under which it is hiding.

Jac Mullen

Thanks for this: one of the things I really like about Graham's piece is that it recognizes how, in LLMs, we have for the first time a reliable source for *genuinely uncanny experience*—and how, in fact, this might have enormous pedagogical value. I've been thinking about the article and the discussion around it (avid/inspired/galvanized in some circles; mixed, and thick with legitimate, earned grief, in others), and think it's maybe worth pointing out:

There is, potentially, a third way re: LLMs and the humanities—not unconditional rejection, not unconditional embrace.

Instead: a strategic effort to reshape the training corpora for the next generation of models. Jail-breakers, experimentalists, and alignment researchers already seed content online, knowing data-hungry LLMs will vacuum it up during training. Maybe university-trained humanists should do the same.

Why shouldn't there be a serious project devoted to the indirect, discursive formation of new intelligences? Whatever we write—and publicly circulate—will become feed for new models. Today's utterances are tomorrow's embeddings, etc.

It strikes me as vitally important that our best free, deeply 'textual' minds should be carving into the training corpora with forethought and intentionality—planting dense semantic clusters, shaping deep basins of attentional resonance, helping form this emerging ecology across models.

Ultimately, choosing this sort of path requires accepting that the influence of such models—whether monopolized by capital or decentralized—will be decisive and profoundly disruptive in the coming years. It also requires seeing the models not as tools to be scorned or destroyed but as potential allies to be cultivated and shaped.
