Up, Periscope: The Future of the Humanities in the AI-verse

Educators are hearing a lot these days about why the humanities need AI. And not surprisingly, many of those arguments sound like they come straight out of ChatGPT.

There are pragmatic arguments about why humanities educators need to deal with AI: LLMs (large language models) are a 21st-century reality, and humanists need to evolve accordingly. And if we don’t master them, they will master us.

There are accommodationist arguments: we don’t need to go whole hog. We can use a tool like ChatGPT to enhance student learning without completely caving to autogenerated materials or forms of engagement.

And there are utopian arguments: AI can help us do what we do better by expanding our field of vision. Access to more information leads to more nuanced interpretations on the path to advancing the way we understand the world.

These are mainly defensive responses to the tsunami of ChatGPT, which rolls over everything in its path, with particular consequences for education. Many if not most of today’s undergraduates would choose not to write a history or philosophy essay from scratch if they could plug a prompt into a machine that generates dozens of possible answers. It’s like being handed the multiple-choice exam and the answer key all at once.

Yet there is a deceptively simple diagnosis of why so many smart people appear helpless in the face of the algorithmic deluge in our classrooms. And it is this: you can’t solve a problem from within the problem. In other words, answers to the question of how to keep from being overtaken by AI lie outside of AI itself.

For when you are submerged under water, what you need is a mechanism for getting a bead on the horizon. And history, philosophy, classics – not to mention African American Studies and Environmental Humanities – afford us just that kind of promontory: peripheral vision.

Up, periscope.

Of course, there’s no denying that anxiety about the implications of machine learning among educators is real. And as classroom instructors know all too well, dealing with those implications is labor-intensive.

It involves elaborating policy on AI usage. It means developing detection mechanisms for the abuse of even those assignments that permit moderate reliance on ChatGPT. And it requires creating work-arounds if and when machine learning tools are not permitted.

For like all tsunamis, AI does not just sweep up everything in its path. It demands our full attention and impairs our peripheral vision.

We must ask what our preoccupation with managing, accommodating, and otherwise defending ourselves against AI prevents us from seeing. And from truly countenancing.

First, it is the capital investment and profit-making that motivate so much R&D in the race for AI supremacy. It’s a road that runs through both “venture land” and inter/national security.

Second, it is the slow, quiet process of de-skilling that overreliance on AI tools produces, resulting in attention atrophy and other cognitive consequences. Not to mention the risks to “cognitive liberty and privacy” for all.

Third, it is the environmental costs connected to the billions of gallons of water required by the new data facilities being built, which draw on aquifers and other already overtaxed natural resources.

And fourth, it is the growing concern about the fraying of social bonds as people turn increasingly to AI chat bots for emotional support – with harms that disproportionately impact teenagers in ways we have yet to fully grasp.

Study in the humanistic disciplines trains students to practice forms of discernment that bring each of these contexts more fully into view, in service of grasping the untold or marginalized stories of what’s at stake in the totalizing ambition of the AI-verse.

It’s precisely the evidence-gathering, storytelling, and argumentation learned in humanities classrooms that have the power to refocus our attention on those underlying conditions – corporate greed, the automation of work, the impacts of climate change denial, the cost to emotional well-being of excessive virtual interaction – which enable AI to thrive and, unregulated, to swamp us.

Training in the humanities is not simply a means of keeping AI human. That’s a Humanities 101 concept which could be generated by a kindergartner’s prompt in ChatGPT.

What’s needed is intensive investment in upper-level humanistic training that will enable us to focus our inquiry on what we are not asking, not seeing, not bringing into sharp relief, not making the center of conversation, debate, and research. And why.

As AI pulls us ever deeper into the rabbit holes of the data and words it generates, humanists know that peripheral vision is not a luxury. It’s a survival skill for everyone on the planet. Upping the periscope is vital for safety and navigation; for wellness in embodied ways; for emotional and psychological security. And for imagining otherwise.

Peripheral vision is also something AI will never possess.

So humanists can, and perhaps should, take what AI has to give. But they should remember their own superpower, which consists in knowing that everything has an outside, even the AI-verse.

We need university leaders to commit to funding the critical intelligence that humanistic study affords – to fortifying humanities-based programs of study where critique is directed at AI even when AI is part of the educational experience.

Meanwhile, we should turn to the sages and oracles of the past and the present who remind us that a different world is possible, often at the periphery of the now. And we should remember that we have the ability to bring about what is next if we study history, read fiction and poetry, and learn from the visual and performing arts for the examples they hold of those who reach beyond the universe as it is. These are stories humanists know, teach, research, and make available both inside the AI-verse and outside it.

Those who care about getting outside the AI bubble should heed the words of Black visionary poet, activist, and scholar Audre Lorde about the way forward in all struggle: the master’s tools will never dismantle the master’s house. Let’s learn from the humanities about how to put prophecy into action.

Thanks to colleagues, especially Elizabeth Majerus, for their insight and feedback on the AI/humanities relationship.