Too Paranoid About Android
If non-human intelligence were truly a threat to the continuation of civilization, then dogs are obviously the most clear and present danger.
Photo: Pavel Danilyuk
A new artificial intelligence recently gave the chattering classes cause to reconsider their own relevance. OpenAI’s ChatGPT responds to user prompts with custom content on demand—meaning, to take full stock of both the insult and the injury, that the program not only writes complete copy but meets deadlines.
Even the thoughtful takes have been bleak. In the Spectator, for example, Sean Thomas declared the end of a five-thousand-year-old tradition, largely because “all writing is an algorithm” and that is where this program is particularly proficient. This view has virtue, but I find it fits an all-too-prevalent generalized anxiety about everything lately. At the risk of offering what he describes as a typical “consoling” denial, I will happily adopt the counter-argument: that news of our obsolescence by artificial intelligence remains greatly exaggerated.
The past may well be prologue, but our expectations about the future have been primed by Wall Street hyperbole and Hollywood imagineering. We casually accept the rise of the robots despite remaining embarrassingly ignorant about intelligence in general. As Gary Marcus, professor emeritus of psychology and neural science at NYU, noted on The Prof G Show podcast: “People are over-attributing to ChatGPT an intelligence that’s not really there.” That is likely, to paraphrase him, because we see intelligence as a singular thing rather than a range of distinct attributes. Book smart but street stupid is a cliché for a reason.
Professor Marcus went on to suggest that ChatGPT is perhaps an ideal “brainstorming tool,” which means it could, counterintuitively, improve writing (awkwardly, at a time when serious readers are increasingly few). Perhaps we are boldly going a little closer to a Star Trek future: Computer, clear my calendar for the afternoon and list the most likely refutations of my thesis. Sure, some may pass off AI work as their own, but that is not a new problem, and we already have a name for it: plagiarism. Besides, the value of what we read includes the author as much as their words. We may not know an artist personally, but taking in their work is engaging in a relationship. Eliminate that relationship and the result is karaoke. That won’t stop artificial content from flooding the market, but let AI plagiarists and writer-agnostic readers race themselves to the bottom.
Let’s pivot to the big picture: if non-human intelligence were truly a threat to the continuation of civilization, then dogs are obviously the most clear and present danger. A dog is more likely to eat your homework than help you write it, but it possesses an emotional intelligence consistently superior to our own. Why go through the trouble of investing in a meaningful relationship with another human being when you could simply get a dog? That’s the problem with these categorical and reflexive reactions to change: it’s easy to lose perspective.
That is what really frightens us here: change. The extent of that change is what Annie Lowrey tried to measure in the Atlantic: “ChatGPT and the like…promis[e] to destabilize a lot of white-collar work, regardless of whether they eliminate jobs or not” but the program is ultimately limited because it “creates content out of what is already out there, with no authority, no understanding, no ability to correct itself, no way to identify genuinely new or interesting ideas.” What’s missing from that summary? That’s right: the human part.
That fact likely won’t be enough to forestall further specialization, as Lowrey concludes, or another, perhaps even more aggressive, round of moving fast and breaking things—which is especially menacing because we have yet to contend with the economic consequences of the initial round of disruption for disruption’s sake. We’re sensitive to change because it makes us feel out of control, and we’ve had a lot of it lately. Little wonder, then, that we assume the worst.
The key question here, however, is not whom this new technology will bankrupt but why we’re developing it at all. I expect the answer amounts to “because we can,” which makes me afraid, not of AI, but of an all-too-familiar human hubris. Just because you can doesn’t mean you should. Isn’t that what Mary Shelley, the mother of science fiction, showed us in Frankenstein?
The Economist reported earlier this month that the deepfake content monster has already escaped the lab, outpacing even an international containment effort. The sooner we establish legal and cultural boundaries (to disincentivize disruption and better manage change, respectively), the sooner we can head off a series of unfortunate unforced errors. Otherwise, we’ll have no choice but to concede the upper limit of our own intelligence.