ChatGPT is a Large Language Model, built from billions of parameters and trained by a company called OpenAI with the sole purpose of making this product the subject of every human conversation. Beneath the processors that fuel this neural network, the basic idea is, crudely, that ChatGPT predicts the next word in a sequence of words. But this is a predictive text machine that isn’t restricted to a screen. ChatGPT’s ability to generate the next word in a sentence has influenced a good portion of the primates on the planet, with the ignition of its processors a trigger for seemingly every human in the world to start typing about ChatGPT. Now these sentences aren’t an exception, nor can I exclude myself from the captivated, especially once the talk about artificial intelligence’s potential turns to the apocalyptic.
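For the curious, that crude "predict the next word" idea can be sketched with a toy word-counter. This is purely illustrative and in no way how ChatGPT actually works: the corpus, the `predict_next` function, and the lookup-table approach are all my own stand-ins, where a real model replaces the counting with a neural network trained on vastly more text.

```python
from collections import Counter, defaultdict

# A tiny stand-in corpus (illustrative only; real models train on
# enormous swathes of human writing, not fifteen words).
corpus = (
    "the witches wear wigs and the witches wear gloves "
    "and the fox is a fantastic fox"
).split()

# For each word, count which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))      # → "witches" (seen twice after "the")
print(predict_next("witches"))  # → "wear"
```

A chatbot, very loosely, repeats this prediction step over and over, appending each predicted word to the sequence and asking again.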
My first instinct is, of course, to dismiss any apocalyptic chatter, regardless of the subject, as grand finales have been a human obsession and a method of coping with darkness since the very beginning—to believe that you live in endtimes is one hallmark of being human. Nearly everyone who has ever stood upright has believed that their time was surely an eschatological age. To know, conversely, that the party will continue without you has always been too dreadful to face.
My second instinct is, of course, to believe that I’m special, that the apocalypse will certainly coincide with my life, and that the doomsayers and worriers and skeptics about artificial intelligence just might be right, especially when those worried about disaster arrive with expertise. There are some reasonable arguments about how creating a superior intelligence can trigger a cataclysmic outcome—which should at least prompt you to sit forward in your chair. Perhaps those selling assets, those predicting doomsday, those warning about the dangers of unaligned artificial intelligence, those pointing out that nobody has a coherent AGI plan, those describing AGI as significantly more dangerous than nuclear weapons, just might be onto something. Although I can’t ignore the sensation that these portents feel a tad medieval—the prophecies that humanity has less than ten, no five, years because of what we’re birthing, with some of our famed prophets of technology becoming prophets of catastrophe.
My third instinct is, of course, to take some pleasure in how clever our greatest writers will look if humanity’s final act is to create the tool that triggers an apocalypse. Perhaps that’s a small comfort for most people, but literary writers can’t afford to ignore those very occasional successes. It is one consolation that will cheer me up and keep me warm amid the inferno. Mary Shelley will look wiser, and that’s one good outcome, though the list of literary figures who will achieve, in our final breaths, a greater status includes countless others. H.G. Wells and Arthur C. Clarke come to mind, but I would also like to include Shakespeare and Dante and Milton. When all of humanity is primed to vanish into an infinite silence, I do hope that my final words can be I told you so.
My fourth instinct is to wonder how ChatGPT relates to the recent bowdlerization of Roald Dahl books. Because the news is also filled with stories about how Puffin Books has edited Dahl into a form more suitable for the most fragile of contemporary sensibilities—with hundreds of words simply deleted, and with hundreds of other phrases adjusted and massaged and sanitized into a polish more fitting for polite company. Thus one major story today is about how our most advanced technology sucks up all of humanity’s words and spits them back out, while another major story is about how we’re tapping the delete key in sentences that we don’t like from classic books. And that seems like a rather troubling combination.
The distortions in Dahl’s books range from expurgating particular words—fat, screechy, grumpy, idiot, crazy, saucy, to name just a few—to the complete rewriting of entire passages. In The Witches, for instance, Dahl writes: “You can’t go round pulling the hair of every lady you meet, even if she is wearing gloves. Just you try it and see what happens.” In the contemporary edition, that same passage sounds a tad different, and loses what you might call its nuance. “Besides, there are plenty of other reasons why women might wear wigs and there is certainly nothing wrong with that.” Now that is, well, one interpretation of the original. It will be a struggle for me to be as condescending as I want to be on this subject—although, don’t worry, I will do my best.
When I learn that “she was a prisoner” mysteriously becomes “she was stuck” in The Twits, that “fit and frisky” becomes “fit” in The Witches, that “turning white” becomes “turning quite pale” and “screechy voice” becomes “nasty voice” in Matilda, that “rather pretty young lady” and “he needs to go on a diet” and “I was crazy” have disappeared, along with hundreds of others, without a trace, I learn that society’s most frantic and irrational moralists have their own Large Language Models, too, with every sentence, both written and yet to be written, required to conform to a contemporary standard. The list of rewrites is as long as it is baffling—you must picture someone very determined, gripping a microscope in one tightened fist, a red pen in the other, scanning the sentences just like a neural network for a chatbot, scratching out phrasing, ensuring that every word is pristine, that literature is nothing more than a lukewarm soup that won’t burn anyone’s tongue.
Incidentally, I do wonder if our commissars have ever heard of fricative alliteration? If you cross out “most formidable female” and type “most formidable woman” then you have dropped the lyricism of the line; if you change “she went on olden-day sailing ships with Joseph Conrad” to “she went to nineteenth century estates with Jane Austen” then you haven’t merely changed the content, you’ve also lost the rhythm of “olden-day” and “sailing ships.” Isn’t anybody at Pravda interested in poetry? And while I’m mid-incidentally, if you swap “a cashier in the supermarket” for “a top scientist,” you’ve managed, first, to reveal an utterly stupefying amount of hubris in rewriting classic literature and, second, to accidentally reveal that you don’t value cashiers.
There will always be a juvenile impulse to control the timeline, to yell at ancestors for moral failings, to define the past with the terms of the present, to pretend that life occurs in a constant, ceaseless moment of now. Nevertheless, I will admit that we’ve reached a whole new vista in the measurement of human self-confidence, an astounding, unfathomable amount of cockiness, once people feel perfectly fine rewriting classic literature with their own sentences while still keeping the author’s name on the book. There’s a similar outbreak of historical forgetting in an upcoming printing of James Bond novels, though I don’t really know if anybody at the censor’s office has actually read an Ian Fleming book because, if they have, they’re going to be printing novels with blank pages if they want them updated.
At this point our best safeguard against the rather prissy and overly motivated censors is—and this is important—their general illiteracy along with—and this is important too—their general interest in control rather than literature. If you look closely at the corrections, you can spot that interest. In James and the Giant Peach, there’s a change from “The Ladybird said” to “Said the Ladybird,” which looks to me like someone who just wants to stir the soup. When I see that “cried” becomes “cried out” and “fumes” becomes “smell” and “small” becomes “little” and “beast” becomes “trickster” in Fantastic Mr Fox, I see a rather inept plea for relevance—to leave a personal mark on a classic. And these fingerprints are most obvious, for me at least, when “owch” becomes “ugh” in George’s Marvelous Medicine. In that swap I see a wonderful display of ego, a puritan belief that your tin ear can overrule Dahl’s pen. It must give the censor a rather chilly sensation of power to decide what words are permitted, although you need to believe in a very peculiar voodoo if you believe that to excise a phrase from the lexicon—especially a word of comparison—is also to excise its use.
Conversely, if I happen to ask ChatGPT a question that’s deemed impermissible, if I touch its list of words that aren’t permitted, I am scolded, but it doesn’t say, “I’m sorry, David, I’m afraid I can’t do that.” Instead, it tells me, “As an AI language model, I am unable…” and then becomes more specific about my infraction. The corpus of information that it has within its processors is hidden from the human just like, let’s say, the original text of a book is hidden from a reader once it is no longer on the shelf. And when I combine the limits on what the human is able to do with the warnings about an AI apocalypse, I am a bit less than insouciant.
To write the previous paragraph, I asked ChatGPT a few different questions about its boundaries around questions, without expecting it to betray too much detail to one of its pesky meat-based users. As someone who adores language and culture and wants a more cosmopolitan world where curiosity is cultivated and more people are permitted to flourish, I did find its response a tad disconcerting. “If you are uncertain about whether a question is appropriate, it’s always a good idea to err on the side of caution and refrain from asking it.” With a response like that, it appears that ChatGPT is fully trained to start working in the Censorship Department at a publisher.
Perhaps you might permit me a little suspicion as our technological aristocrats program our devices to generate intelligence from our texts while our cultural aristocrats rewrite our literature to uphold next week’s standard. At least there’s a bit of irony in watching a large language model request that you “refrain” from using certain words right as publishers find meaning in the delete key.
I ran an excerpt of Naked Lunch through ChatGPT just to be mischievous. What came back was something so sterile it isn't worth sharing. Please God don't let anyone run Charles's writing through this sterilizer.
"that literature is nothing more than a lukewarm soup that won’t burn anyone’s tongue."
Savage in a good way.
On ChatGPT and its more advanced cousin Bing Chat, I think one reason these AIs are so disconcerting is that the future evolution of AI poses both potentially unbounded downside risk *and* potentially unbounded upside. There's no consensus on which outcome is more likely, and there won't be consensus for a while, which is also unsettling. There aren't many risks that fall into this bucket (most risks are clearly asymmetric in one direction or the other, with either the downside or the upside outcome having higher likelihood, and many are also bounded in magnitude on the upside or the downside or both).