This year is likely to be remembered for the COVID-19 pandemic and for a significant presidential election, but there is a new contender for the most spectacularly newsworthy happening of 2020: the unveiling of GPT-3. As a very rough description, think of GPT-3 as giving computers a facility with words like the one they have long had with numbers, and have had with images since about 2012.
The core of GPT-3, which is a creation of OpenAI, an artificial intelligence company based in San Francisco, is a general language model designed to perform autofill. It is trained on uncategorized internet writings and basically guesses what text ought to come next from any starting point. That may sound unglamorous, but a language model built for guessing with 175 billion parameters — 10 times more than its largest predecessor — is surprisingly powerful.
The eventual uses of GPT-3 are hard to predict, but it is easy to see the potential. GPT-3 can converse at a conceptual level, translate language, answer email, perform (some) programming tasks, help with medical diagnoses and, perhaps someday, serve as a therapist. It can write poetry, dialogue and stories with a surprising degree of sophistication, and it is generally good at common sense — a typical failing for many automated response systems. You can even ask it questions about God.
Imagine a Siri-like voice-activated assistant that actually did your intended bidding. GPT-3 also has the potential to outperform Google for many search queries, which could give rise to a highly profitable company.
GPT-3 does not try to pass the Turing test by being indistinguishable from a human in its responses. Rather, it is built for generality and depth, even though that means it will serve up bad answers to many queries, at least in its current state. As a general philosophical principle, it accepts that being weird sometimes is a necessary part of being smart. In any case, like so many other technologies, GPT-3 has the potential to rapidly improve.
It is not difficult to imagine a wide variety of GPT-3 spinoffs, or companies built around auxiliary services, or industry task forces to improve the less accurate aspects of GPT-3. Unlike some innovations, it could conceivably generate an entire ecosystem.
There is a notable buzz about GPT-3 in the tech community. One user in the U.K. tweeted: “I just got access to gpt-3 and I can’t stop smiling, i am so excited.” Venture capitalist Paul Graham noted coyly: “Hackers are fascinated by GPT-3. To everyone else it seems a toy. Pattern seem familiar to anyone?” Venture capitalist and AI expert Daniel Gross referred to GPT-3 as “a landmark moment in the field of AI.”
I am not a tech person, so there is plenty about GPT-3 I do not understand. Still, reading even a bit about it fills me with thoughts of the many possible uses.
It is noteworthy that GPT-3 came from OpenAI rather than from one of the more dominant tech companies, such as Alphabet/Google, Facebook or Amazon. It is sometimes suggested that the very largest companies have too much market power — but in this case, a relatively young and less capitalized upstart is leading the way. (OpenAI was founded only in late 2015 and is run by Sam Altman.)
GPT-3 is also a sign of the underlying health and dynamism of the Bay Area tech world, and thus of the U.S. economy. The innovation came to the U.S. before China and reflects the power of decentralized institutions.
Like all innovations, GPT-3 involves some dangers. For instance, if prompted by descriptive ethnic or racial words, it can come up with unappetizing responses. One can also imagine that a more advanced version of GPT-3 would be a powerful surveillance engine for written text and transcribed conversations. Furthermore, it is not an obvious plus if you can train your software to impersonate you over email. Imagine a world where you never know who you are really talking to — “Is this a verified email conversation?” Still, the hope is that protective mechanisms can at least limit some of these problems.
We have not quite entered the era where “Skynet goes live,” to cite the famous movie phrase about an AI taking over (and destroying) the world. But artificial intelligence does seem to have taken a major leap forward. In an otherwise grim year, this is a welcome and hopeful development. Oh, and if you would like to read more, here is an article about GPT-3 written by … GPT-3.
Tyler Cowen is a Bloomberg Opinion columnist. He is a professor of economics at George Mason University and writes for the blog Marginal Revolution. His books include “Big Business: A Love Letter to an American Anti-Hero.”