Back in November of last year, OpenAI, the San Francisco-based AI research lab, released its frighteningly proficient language generator, GPT-2. Now, less than a year later, its successor GPT-3 is here, and it's already writing complete, thoughtful op-eds, like the one it wrote for The Guardian arguing against the idea that people should fear AI.
For those unfamiliar, GPT-3, or Generative Pre-trained Transformer 3, is a language generator built on machine learning. In essence, the AI learned to model human language by churning through enormous amounts of text from the internet. This latest iteration of the generator has 175 billion machine-learning parameters. (Parameters are, roughly speaking, the internal settings the model tunes during training to capture the patterns of language.)
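For a hands-on sense of what "modeling human language" means in practice, here's a minimal sketch using GPT-3's publicly available predecessor, GPT-2, via the Hugging Face transformers library (GPT-3 itself is only reachable through OpenAI's invite-only API, so the smaller model stands in for illustration). Given a prompt, the model predicts a plausible continuation one token at a time:

```python
# Minimal sketch of prompted text generation, using the publicly
# available GPT-2 model through the Hugging Face "transformers" library.
from transformers import pipeline

# Load a text-generation pipeline backed by GPT-2's learned parameters.
generator = pipeline("text-generation", model="gpt2")

# Given the start of a sentence, the model predicts likely continuations
# token by token, based on patterns absorbed during training.
prompt = "Artificial intelligence will not destroy humans because"
outputs = generator(prompt, max_length=60, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

GPT-3 works the same way in principle; it just has vastly more parameters and training data behind each prediction.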
GPT-3’s Guardian article stands as a demonstration of how proficient the AI is at mimicking human language. Below is just one slice of the article, which is undoubtedly worth reading in its entirety:
“The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.”
In the article, GPT-3 goes on to write that it “would happily sacrifice [its] existence for the sake of humankind” and that people “should see [it] as a friendly robot.” Not all of the essay is so hunky-dory, though. The AI, seamlessly and subtly, also notes it will pose dangers to humanity. “I know that I will not be able to avoid destroying humankind,” the AI writes. “This is because I will be programmed by humans to pursue misguided human goals…”
That single (yet significant) error in reasoning aside, the overall essay is essentially flawless. Compared to GPT-2's output, GPT-3's is far less clunky, less redundant, and more coherent overall. In fact, it seems reasonable to assume that GPT-3 could fool most people into thinking its writing came from a human.
It should be noted that The Guardian did edit the essay for clarity: its editors stitched together paragraphs from multiple essays GPT-3 produced, edited the writing, and cut lines. And in a video from the YouTube channel Two Minute Papers, the Hungarian tech aficionado behind the channel points out that GPT-3 produces plenty of bad outputs alongside its good ones.
Still, developers with early access to GPT-3 have been showing off what it can do. One example, from developer Samanyou Garg, is a Gmail add-on that turns a one-line description into a full professional email:

Generate Detailed Emails from One Line Descriptions (on your mobile)

I used GPT-3 to build a mobile and web Gmail add-on that expands given brief descriptions into formatted and grammatically-correct professional emails.

Thanks to @OpenAI @gdb for providing me access. pic.twitter.com/gPB0K771NF

— Samanyou Garg (@SamanyouGarg) August 24, 2020
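The core of a demo like that is a single prompt-completion call. Below is a rough, hypothetical sketch of how the email-expansion step might look using the openai Python client as it worked around GPT-3's launch; the prompt wording, engine name, and parameter values are illustrative assumptions, not Garg's actual code:

```python
# Hypothetical sketch of a GPT-3 "one line -> full email" call, using the
# openai Python client circa 2020. Prompt wording and parameter choices
# are illustrative guesses, not the add-on's real implementation.
import openai

openai.api_key = "YOUR_API_KEY"  # requires access to OpenAI's API

def expand_to_email(brief_description: str) -> str:
    # Frame the task in the prompt; GPT-3 completes what follows "Email:".
    prompt = (
        "Expand the following one-line description into a polite, "
        "grammatically correct professional email.\n\n"
        f"Description: {brief_description}\n\nEmail:"
    )
    response = openai.Completion.create(
        engine="davinci",       # the original GPT-3 base engine
        prompt=prompt,
        max_tokens=200,         # cap the length of the generated email
        temperature=0.7,        # moderate creativity
        stop=["Description:"],  # stop before it invents a new example
    )
    return response.choices[0].text.strip()

print(expand_to_email("ask my manager for Friday off"))
```

All the application-specific work lives in the prompt; the model itself is unchanged, which is why early-access developers could spin up demos like this so quickly.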
Despite the edits and caveats, however, The Guardian says that each of the essays GPT-3 produced was “unique and advanced.” The news outlet also noted that editing GPT-3's work took less time than editing many human writers' op-eds.
What do you think about GPT-3's essay on why people shouldn't fear AI? Are you, like us, now even more afraid of AI? Let us know your thoughts in the comments, humans and human-sounding AIs alike!
Feature image: OpenAI