An Experiment with OpenAI Text Generator

August 6, 2020

Blogger Manuel Araoz experiments with software once considered too dangerous to release in his post, “OpenAI’s GPT-3 May Be the Biggest Thing Since Bitcoin.” The quip about Bitcoin is a bit off the mark—blockchain has had a slow liftoff. This OpenAI innovation, though, is another matter. It is the most adept AI yet at mimicking human writing, which means that bad actors, PR people, and SEO experts have a new tool with which to bedevil normal humans who operate via human brain power.

Most of Araoz’s article is, in fact, generated by the OpenAI beta algorithm, named GPT-3. See the post if you wish to read the software’s fictional tale of its own adventures in a Reddit forum. The reader is not informed until the end that the piece was generated by its own subject. I suspected it might be, mainly because it was redundant and a few passages were awkward. However, I admit I may not have had those suspicions had I been reading about another subject entirely. At the end, Araoz shares the prompt he gave the AI as its starting point. He (I believe) writes:

“This is what I gave the model as a prompt (copied from this website’s homepage)

Manuel Araoz’s Personal Website

Bio

I studied Computer Science and Engineering at Instituto Tecnológico de Buenos Aires. I’m located in Buenos Aires, Argentina.

My previous work is mostly about cryptocurrencies, distributed systems, machine learning, interactivity, and robotics. One of my goals is to bring new experiences to people through technology.

I cofounded and was formerly CTO at OpenZeppelin. Currently, I’m studying music, biology+neuroscience, machine learning, and physics.

Blog

JUL 18, 2020

Title: OpenAI’s GPT-3 may be the biggest thing since bitcoin

Tags: tech, machine-learning, hacking

Summary: I share my early experiments with OpenAI’s new language prediction model (GPT-3) beta. I explain why I think GPT-3 has disruptive potential comparable to that of blockchain technology.

Full text:

and then just copied what the model generated verbatim with minor spacing and formatting edits (no other characters were changed). I generated different results a couple (less than 10) times until I felt the writing style somewhat matched my own, and published it. I also added the cover image. Hope you were as surprised as I was with the quality of the result.”

Not really, since we have been following along, but the results are convincing. The author has posted more of his experiments on Twitter, and is excited to work more with GPT-3. “Very strange times lie ahead,” he concludes. We agree.

Cynthia Murrell, August 6, 2020
