In high school, one of my music teachers drilled into us, “if you can play it slow, you can play it fast.” And that meant if you couldn’t play it slow, you couldn’t play it fast. That was good advice, but I never completely overcame my fear of playing fast. In the summer between high school and college, I had a kind of breakthrough in orchestra while playing Brahms’s Variations on a Theme by Haydn where, during the performance, I found myself playing a fast passage I’d struggled and struggled with - and miraculously it suddenly came with ease. I got better at playing fast - but I never quite overcame my fear of it before I decided I wanted to spend more time with books than in a practice room. And there’s a way in which fast passages still intimidate me when I think about returning to the viola someday.
So, maybe there’s just something fundamental about my character that likes slow better than fast. When it comes to playing the viola, that feels like a flaw (and fits all sorts of stereotypes of violists as substandard musicians). But this week, I came across a couple things that reminded me that the difference between flaw and virtue is context and that my dislike of speed (thoroughly related to my dislike of the adrenaline rush that action movies elicit) isn’t just a sign of “substandard,” but also a reminder that there are different ways of knowing and doing.
The most impactful events of the week were coming across a Substack post about Marc Andreessen, a techno billionaire who is one of Substack’s financial backers, and James Walsh’s New York Magazine article about how ChatGPT has, predictably enough, wiped out an entire generation’s ability to find, if not joy, at least a sense of accomplishment, through writing. The juxtaposition that particularly stood out was how (a) Andreessen links “doing things faster” with “rewarding smarter people” and claims “technology will do both,” while (b) ChatGPT is patently doing things faster and simultaneously degrading quality thought.
The Substack post led me to Marc Andreessen’s techno-optimist manifesto, and while Andreessen is clearly smart enough to innovate and juggle money effectively, the manifesto has some amazing howlers, like this:
We believe we should place intelligence and energy in a positive feedback loop, and drive them both to infinity.
And here we come to what feminist theologians have been telling us for decades - patriarchy is fundamentally related to a horror of limits, especially a limit on the ego. But limits are often where our creativity is forced. Limits remind us that there is something outside of us. Limits are the beginning of relation, of forging the balance between respecting autonomy and making deep connection.
And what Walsh’s article about ChatGPT showed us is that “more technology” does not automatically equal “smarter.” In this case, it just means “faster” to the point that the very purpose of the task of writing - communication - is submerged in a cloud of efficiency. A particularly horrific example from the article is this self-report of a student:
“College is just how well I can use ChatGPT at this point,” a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.
If there’s a better example of the decoupling of words from reality - I challenge you to find one.
It’s the end of the semester, and after some rounds of encountering AI writing, I cancelled the final paper in more than one of my classes because it’s too tricky to distinguish AI writing from non-AI writing. Sometimes AI writing is very easy to spot. But after spending hours with a three-page submission I suspected of being AI-generated but not knowing how to prove it, I threw in the towel.

This summer, I’ll have to substantially revamp my classes: more in-class journaling time, assignments that are process-oriented, and more time making the value of thought and introspection explicit. I don’t like the way a generation of people has come up on reality TV, where the most mundane happenings need to be narrated and re-narrated, so that the ability to pick up on subtleties is lost. I’m going to have to start accepting that that’s where my students are at, and I’ll need to narrate and re-narrate things I’d have been able to watch students absorb slowly in the past.

The New York Magazine article notes how scores of college teachers are just quitting, because they didn’t work on a Ph.D. just to grade AI papers. And then there is the further absurdity that there are even AI grading tools, so the techbros have created a “better world through technology” in which AI writers and AI graders get lost in an infinite loop.
I already had an uneasy sense about ten or so years ago, when I started to see lots of energy going into “Digital Humanities.” Whenever I looked into it, I found lots of “digital,” but hardly any “humanities.” ChatGPT seems to be the logical conclusion to that endeavor.
In a recent Substack post, I had to remind myself that a wholesale rejection of technology is as stupid as Andreessen’s stupid manifesto. But damn if the current state of technological “progress” doesn’t amp up the temptation to just give in to the equal and opposite stupidity. I did make a point of bringing home Kate Ott’s Christian Ethics for a Digital Society from my work office, so I could read it at leisure over the summer and counter that temptation. Yes, I also have Jacques Ellul’s The Technological Society and Michael Adas’s Machines as the Measure of Men in my reading pile, books that will feed my anti-technological comfort zone. I get that I don’t have time to read everything I want to read and that AI promises a shortcut to “getting to everything,” but the ChatGPT article pretty much showed that trying to get to infinity gets you nowhere.