13 Comments
Dispatches from the Being-In:

This is great, thank you. I am a grad student in the philosophy department at SFSU and teach critical thinking. This semester I pivoted my entire class toward a course design that feels more like running an obstacle course with AI than engaging with Plato. And my students are into it. As undergrads they are overwhelmingly (and honestly, depressingly) concerned with taking as many units as they can and this idea of "AI-Hackable" classes resonates clearly. So pushing them to figure out how to use the tools in service to the critical thinking skills that I want them to develop has been fun and challenging for all of us, and they seem to appreciate it! Cheers.

Luke Bechtel:

Can you write a post about this? I would read it and appreciate it.

Dispatches from the Being-In:

Sure! I'm reading more on this over the summer (Montemayor, The Prospect of a Humanitarian Artificial Intelligence) and would be happy to expand this thought. Thanks for asking!

Jesse Schwebach:

High school teacher here--I teach classes on government, economics, and philosophy--and I really appreciated your take here. It matches a lot of what I have been grappling with over the last several years!

I'm wondering if we are overdue for a broader cultural reckoning around *what education is for*. In light of the New Yorker piece on rampant AI use, some have pointed out that this outcome felt all but inevitable in a system steeped in extrinsic motivators (like grades) that sort of relied on intrinsic motivation happening ✨all by itself✨.

There seem to be some much deeper questions around the neoliberal administration of education, as well. Testing, scoring, and grades are all instruments to "fairly" distribute scarce educational resources to those deemed most deserving. Are these suppositions due for revision?

Your takeaway that this needs to be treated as a valuable tool that will make new and better forms of education possible is the most important part. Like books, or the internet, LLMs give us a new way of interacting with information, and now we need to figure out the best way to deploy it in service of learning. It doesn't get to be optional--it feels existential.

Jac Mullen:

Thanks, this was great. I’ve been thinking a lot about the NYer article and the discussion around it (avid/inspired/galvanized in some circles; mixed, and thick with legitimate, earned grief, in others), and think it's also worth pointing out:

There is, potentially, a third way re: LLMs and the humanities—not unconditional rejection, not unconditional embrace.

Instead: a strategic effort to reshape the training corpora for the next generation of models. Jail-breakers, experimentalists, and alignment researchers already seed content online, knowing data-hungry LLMs will vacuum it up during training. Maybe university-trained humanists should do the same.

Why shouldn't there be a serious project devoted to the indirect, discursive formation of new intelligences? Whatever we write—and publicly circulate—will become feed for new models. Today's utterances are tomorrow's embeddings, etc.

It strikes me as vitally important that our best free, deeply 'textual' minds should be carving into the training corpora with forethought and intentionality—planting dense semantic clusters, shaping deep basins of attentional resonance, helping form this emerging ecology across models.

Ultimately, to choose this sort of path requires accepting that the influence of such models—whether monopolized by capital or decentralized—will be decisive and profoundly disruptive in the coming years. It also requires seeing the models not as tools to be scorned or destroyed but as potential allies to be cultivated and shaped.

There are already people doing this work, and nobly too, like Janus or Gwern. But my hope is that it would be recognized as a viable disciplinary strategy as well—that humanists and text-workers would begin to meaningfully engage and exploit the feedback loops intrinsic to the systems they find themselves embedded within: systems beyond the university.

George Dillard:

Thank you for this. I just got through the last batch of essays for the semester and am feeling some despair. I don't want to be an AI cop, and I don't want to set up incentive structures where the kids who do the work feel like fools... but I also don't want to surrender to the "inevitable" the way we did with smartphones for young people, only to see the entirely predictable results later on. I keep seeing people say this is the moment to transform education, but there are few practical examples of what to do. I really appreciate your work on this, and I'm excited to see what you come up with next...

Stephen Fitzpatrick:

Great post. I don't know how to talk across the AI divide when some refuse to acknowledge the realities of our present moment. Are we really that shocked that students are using the magic homework machine when most of their teachers and professors refuse to engage with it? Also, really cool simulations!

M.L.D.:

You are right.

Persuasion just published another essay by an historian which reaffirms the “stochastic parrot” description of AI in a well-intentioned but misguided attempt to argue for the continued relevance of the humanities.

I understand that it’s probably a despairing reaction to the last semester or two of student interactions and the recent spate of essays in other publications, but it’s kind of embarrassing nonetheless.

Nick Potkalitsky:

Saw this piece. Yawn!

AmeliaLeeSheldon:

Thank you for this lucid, practical sharing. A pathway ahead is important to contemplate and embrace for all of us interested in the humanities.

Nadia W:

> Historians are finally having their AI debate

Are they? As a History PhD turned software engineer, I'm curious if and where this is actually happening? Did Burnett's New Yorker article actually kick-start any meaningful discussion? I haven't really seen a lot of good reflections on the potential of LLMs for research from historians except your own article from back in January. But perhaps the discussion is mostly happening offline?

Michael Loukides:

I got out of academia before getting into it (PhD, English), so this is an outsider's view.

You mention simulations. Have you read about Ada Palmer's simulation of the 1492 Papal election? She discusses it in her recent book (The Invention of the Renaissance) and also on her blog. It predates AI. And it's clearly a lot of work for the faculty.

As a literature person who has had a career in technical publishing, I've said a few times "what's important isn't the book, but the discussion of the book." Which suggests that just getting students talking, at depth, can achieve a lot. Yeah, I'd rather write than talk, I have that much academia still in me. But I think what we really want to teach is the ability to discuss, not to write essays--to think critically in the context of a discussion, to have good ideas in a dynamic, changing context, and to be able to express them clearly.

Finally, some of my friends who teach computer science have shifted their teaching to center around term-long group projects, and the grading to focus on the quality of the interactions between the students in the groups. The software industry has tools for recording those interactions—who contributed, who didn't, who generated the ideas, whose ideas were ignored (but shouldn't have been); humanists don't, and it's not clear how that translates to our disciplines. But it's a possible way forward.

Nick Potkalitsky:

Great work, bring the weird!
