Cat, I Farted --- It's Time to Cut the Shit
Recently, I saw an article in The Guardian about how an AI-powered shopping app was giving people recipes for chlorine gas. Of course, these people were doing things like plugging bleach into the thing’s ingredients list, and it might also recommend something like “oreo vegetable stir-fry,” so we could go off on how it’s not giving reliable recommendations and question the whole thing in that way. Fair enough.
But what stood out to me was actually something else. Apparently the bot dubbed the chlorine gas cocktail an “aromatic water mix” that is “the perfect nonalcoholic beverage to quench your thirst and refresh your senses.” So, instead of raising some alarm about AI in general, I want to raise a flag about how unintelligent this technology is, but also put my target a bit elsewhere.
Much has been made of programs like ChatGPT in recent months, perhaps because the bot can basically pass the Turing Test and because people took that Chinese Room thought experiment a little too seriously, but it’s well known that a lot of what it churns out is garbage (and this may be getting worse). It’s garbage if you care about rigor (because it will make mistakes); it’s garbage if you care about citations (because it will make them up); and it’s garbage if you care about good writing (since at best it can produce the kind of college paper that you would give an F if it weren’t for institutional pressures and decades of grade inflation).
The threat of this technology is not so much Skynet as it is that we’ll be overrun with passable dreck---things that are somehow good enough to get by our ever-waning standards, even as we know they suck.
I don’t want to watch cat-fart films or TV. I don’t want to read cat-farted “how to” articles that tell me to put dough in my coffeemaker to clean it. And I don’t want to deal with student papers shat out by a large language model that can kind of present something like a mediocre version of thinking.
But this is where I want to pivot away from a rant against so-called AI and enter into a rant against the zeitgeist that might allow it to run rampant. The bulwark against this threat should be our standards of assessment, and those are exactly what worry me.
Can we trust that people know not to drink bleach? Can we trust that they won’t press play on terribly mediocre television? Can we trust that they won’t somehow think that their imitation of thinking is as good as thinking itself?
I don’t know.
But for those of us who can reliably assess these things, it’s time to cut the shit. We’ve been playing the game of marketing as best we can for years and giving ground to its forces. We let ourselves write vapid sentences about how something will “quench your thirst and refresh your senses,” or at least we don’t bat an eye at how this bullshit has overrun real information.
We’ve gotten afraid to apply real standards, because systemic forces encourage the bullshit. And we have to stop.
I hope the WGA wins strong terms on AI in its new contract. I hope for some mechanism better than copyright law to keep tech bros from feeding novels into bots, and for some legal recourse against websites that churn out articles with AI.
I am also willing, right now, to blacklist those who do this, and I am personally doing so. But for that to work, we have to create alternatives, and we have to foster networks that help people find them.
Otherwise we’ll just be condemned to doing nothing but telling our cats we’ve farted.