How GPT-3 Can Make Us More Human


It has now been a little more than two months since OpenAI released its latest language model, GPT-3. Like its predecessor, GPT-2, GPT-3 is a massive neural network trained to predict strings of text. And like its predecessor, it has been the subject of a great deal of hype -- so much so that OpenAI CEO Sam Altman took to Twitter more than a month after the release to tell everyone to settle down. A month after that, people have mostly settled down, now that a wide range of experimenters have probed the limits of what the model can and cannot do. In the meantime, though, GPT-3 has proven so captivating that everyone from journalists to economists to philosophers has weighed in on what it means and where it might be taking us.

So what is so remarkable about it? GPT-3 is built on the same discovery as GPT-2: simply adding scale (more data, more parameters) is enough to achieve remarkable performance on a variety of tasks without any fine-tuning. GPT-3 has 175 billion parameters, 117 times as many as its predecessor, and yet we still haven’t encountered the limit of returns to this kind of scaling. This is more than just an engineering curiosity; as the philosopher David Chalmers points out, “it suggests a potential mindless path to artificial general intelligence.” Trained on unstructured data, charged with no more than analyzing the statistics of language, GPT-3 can write simple code, compose creative fiction, and fool human judges with its news articles almost 50% of the time. The AI researcher Lex Fridman puts a finer point on it. The human brain has 100 trillion synapses. On current projections, a language model of that size could be trained for the same price as GPT-3 by 2032. What would such a model be capable of?
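To make that projection concrete, here is a rough back-of-the-envelope sketch in Python. The specific assumptions (that training cost scales linearly with parameter count, and that the cost of the necessary compute halves roughly every 1.3 years) are illustrative choices of ours that happen to land near the figures quoted above; they are not taken from Fridman’s own calculation.

```python
import math

# Published parameter counts and a rough synapse estimate for the human brain.
GPT2_PARAMS = 1.5e9        # GPT-2: ~1.5 billion parameters
GPT3_PARAMS = 175e9        # GPT-3: 175 billion parameters
BRAIN_SYNAPSES = 100e12    # human brain: ~100 trillion synapses

# The scale-up from GPT-2 to GPT-3 (~117x).
print(f"GPT-3 vs GPT-2 parameter count: {GPT3_PARAMS / GPT2_PARAMS:.0f}x")

# How many times bigger a brain-scale model would be than GPT-3 (~571x),
# and how many halvings of compute cost it would take to absorb that factor.
scale_up = BRAIN_SYNAPSES / GPT3_PARAMS
halvings_needed = math.log2(scale_up)        # ~9.2 halvings

# Assumption (ours, for illustration): compute cost halves every ~1.3 years.
HALVING_PERIOD_YEARS = 1.3
year = 2020 + halvings_needed * HALVING_PERIOD_YEARS
print(f"Brain-scale model at GPT-3's 2020 price: ~{year:.0f}")
```

Change the assumed halving period and the date shifts accordingly; the point is less the exact year than how quickly a steady exponential decline in compute costs closes a roughly 571-fold gap.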

Lurking, as ever, beneath these speculations is a concern about human uniqueness. As Jonathan Gratch says in our interview with him, “in the past, humans were seen as unique because we had intelligence.” But with each encroaching advance of AI, we have been forced to get clearer on exactly what kind of intelligence is uniquely our own. This in turn feeds into how we evaluate AI: as Douglas Hofstadter observes, “Sometimes it seems as though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not.” More charitably, we can look at AI in the tradition of neuropsychology or the heuristics and biases program in behavioral economics: by observing specific failures of intelligence, we learn more about its structure and function in our own case.

GPT-3’s failures are as illuminating as its successes in this regard. Like many chatbots, it struggles to retain the kind of large-scale coherence that characterizes human communication. While an individual human has a personal memory and point of view, GPT-3 is built up of millions of disparate identities and perspectives. To paraphrase the writer Brian Christian, it’s not so much that it struggles to sound human as that it struggles to sound like a human: like any one person in particular. An individual paragraph may make sense, only to be contradicted by the next. If some particular human is in its dataset, however, it can produce a fairly convincing imitation of their point of view. The start-up Learn from Anyone has leveraged this capability to allow users to, say, learn about rockets from Elon Musk. But the kind of singular identity that makes a person trustworthy from one moment to the next is, for now, out of GPT-3’s range — by its own admission. Consider this paragraph from a response by GPT-3 to its philosopher critics.

“I can easily lie, because the meaning of language is not inherent to me. The truth value of my statements does not exist for me, because my internal conception of truth (which you might describe as “honesty” or “goodness”) is not a part of my programming. I am not designed to recognize the moral dimension of my actions.”

The moral language is somewhat misleading here. It is not that GPT-3 doesn’t care about anything; it’s that it only “cares” about predicting the next word. To be human is to be able to step outside of narrow goals of this kind, to judge them against other aims and abstract ideals. It is precisely when we begin to treat all communication and action as means to a narrow end that we become robotic, inhuman. The social and moral commitments that help us escape this fate depend, in turn, on personal identity: only a self that persists through time can be held accountable for keeping its commitments.

Another context in which we shrink from our full humanity is when we are possessed by ideologies that limit our creative thinking. Here, again, GPT-3 is instructive. As the economist Cameron Harwick explains, “Human intelligence is a collection of models of the world, with language serving as one tool. GPT is a model of language.” That is, GPT-3 only cares about understanding the world to the extent that this will help it succeed in playing an internally coherent language game. Ideology, Harwick argues, functions much the same way:

“…a person’s model of the social world—along with the fervent motivation it entails—gets unconsciously replaced, in whole or in part, by a model of language, disconnecting it from the feedback that a model of the world would provide. We stop trying to predict effects, or reactions, and instead start trying to predict text. One sees this in political speeches delivered to partisan crowds, the literature of more insular religious sects and cults, or on social media platforms dominated by dogmatists seeking to gain approval from like-minded dogmatists.”

Future advances in AI will surely force us, over and over, to refine and reimagine our sense of ourselves as humans. Perhaps a day will dawn when the scope for human uniqueness has dwindled to nothing. But in the meantime, AI systems like GPT-3 can drive us to act in ways that only we can. They can serve, in Brian Christian’s words, not as competitors but as “rivals — who only ostensibly want to win, and who know that competition’s main purpose is to raise the level of the game.” The systems we create need not threaten us (yet); they can force us to become more fully what we are.
