Professors Battle AI's Threat to Critical Thinking in Humanities Education

As artificial intelligence upends traditional learning methods, academics across humanities departments are mounting a desperate defense of critical thinking. Professors like Lea Pao at Stanford University are experimenting with offline pedagogical approaches—memorizing poems, attending recitation events, and observing art in physical spaces—to reconnect students with the bodily experience of education.

The Battle Against AI-Generated Work

"There's no AI-proof anything," Pao acknowledges, describing her strategy not as policing but as demonstrating alternative pathways. Her efforts sometimes meet resistance: when she assigned a museum visit and personal reflection, one student found the museum closed, turned to AI instead, and submitted a suspiciously polished but soulless analysis.

This scenario exemplifies a broader crisis. While STEM fields celebrate AI's "productivity boost" and research potential, humanities scholars perceive a distinct threat, one that extends beyond academic dishonesty to challenge higher education's fundamental purpose in a machine-dominated future.

Existential Questions About Education's Value

With American degrees costing hundreds of thousands of dollars and public confidence in higher education declining, AI's capacity to substitute for independent thought raises urgent questions about what universities actually provide. Dora Zhang, a literature professor at UC Berkeley, frames the discussion in existential terms: "What is it doing to us as a species?"

Michael Clune, an Ohio State University literature professor, warns that institutions embracing AI fluency risk "self-lobotomizing." Despite his university's pledge to embed AI across every major, Clune finds these tools "mitigate against the educational goals I have for my students."

The Bifurcation of Educational Futures

Technology executives offer conflicting visions. Palantir CEO Alex Karp predicts AI will "destroy humanities jobs," while Anthropic co-founder Daniela Amodei—a literature major—argues humanities study "is going to be more important than ever." Some tech companies now specifically seek humanities graduates for their creativity and critical thinking skills.

Yet professors fear a widening educational divide. Matt Seybold of Elmira College anticipates "a kind of bifurcation in education," where elite students receive traditional liberal arts instruction while others experience "a degraded, soulless form of vocational training administered by AI instructors."

Professors' Defensive Strategies

With surveys indicating that up to 92% of students use AI for schoolwork, professors are employing creative countermeasures. Some require handwritten notebooks, oral examinations, or transparency statements describing how work was produced. Others inject random words like "broccoli" into assignments to trip up AI models.

"It creates hours of additional labor," complains Danica Savonick, an English professor at SUNY Cortland. "And makes me feel like a cop." Karl Steel of Brooklyn College permits limited AI for research but requires students to present from handwritten notes and discuss texts before writing responses.

Institutional Responses and Faculty Resistance

Universities increasingly partner with tech companies on AI initiatives, with California State University joining major corporations to "create an AI-powered higher education system." Meanwhile, faculty organizations like the American Association of University Professors warn against uncritical adoption, and websites like Against AI offer resources for educators resisting technological encroachment.

Megan McNamara, a UC Santa Cruz sociology professor, notes disciplinary differences in responses to AI, with humanities faculty typically taking harder lines. She treats suspected AI use as "an opportunity for growth, restorative justice, and enhanced authenticity in student-instructor relationships."

Student Pushback and Humanistic Hope

Some professors detect growing student discomfort with technology's dominance. Eric Hayot of Penn State tells students tech companies want to make them "helpless" without their products, while Clune observes increased curiosity about his flip phone among smartphone-weary students.

"There's a broader and increasing sense from students that something is being stolen from them," observes Seybold, who notes that environmental concerns and distrust of tech companies drive some students to reject AI.

Despite the challenges, professors emphasize uniquely human qualities that differentiate people from machines. "We can decide that we want to be human," insists Clune. Pao maintains hope her efforts help students "become happy human beings, who are able to take a walk, and experience things, and describe things for themselves."