A few potential uses (and misuses) for GPT-3

Adriano Marques

GPT-3 is a 175-billion-parameter autoregressive language model released by OpenAI in May 2020. It is a deep learning system that takes input in the form of human-readable language and produces human-readable output.

The OpenAI team tested GPT-3 in few-shot learning mode: the model is given a verbal description of the task and a few examples of context and completion at inference time. The model is then given another instance of context and expected to provide the completion. Two variations of this setting are one-shot learning, where the model is given only a single example of context and completion, and zero-shot, where no examples are given at all. This is an improvement over past NLP systems, which required pre-training on a massive dataset followed by fine-tuning on a task-specific dataset that was itself often large.
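The three settings differ only in how many solved examples precede the final query in the prompt. A minimal sketch below illustrates this; the helper function and the translation examples are our own illustration, not part of any OpenAI tooling:

```python
# Sketch of few-, one-, and zero-shot prompting: the same task description
# and query, with a varying number of solved (context, completion) examples.

def build_prompt(task_description, examples, query):
    """Assemble a prompt from a task description, zero or more
    (context, completion) example pairs, and the query to complete."""
    lines = [task_description]
    for context, completion in examples:
        lines.append(f"{context} -> {completion}")
    lines.append(f"{query} ->")  # the model is expected to fill in the rest
    return "\n".join(lines)

examples = [("sea otter", "loutre de mer"), ("cheese", "fromage")]

few_shot = build_prompt("Translate English to French:", examples, "mint")
one_shot = build_prompt("Translate English to French:", examples[:1], "mint")
zero_shot = build_prompt("Translate English to French:", [], "mint")
```

In each case the prompt ends with an open-ended `mint ->`, and the model's job is simply to continue the text; no weights are updated at inference time.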

The model has been in beta testing since July. In some of the demos created by researchers, GPT-3 is used to generate grammatically correct, intelligible English prose, programming code, and even visual interface designs. How will this new language model change the lives of developers and designers? How creative and robust can it be?

In examples we have seen so far, GPT-3 produces code for a simple application based on a verbal description. In this scenario it is essentially used as a code generator by an app developer, freeing the developer from the task of manually typing out the code. However, the software engineer is still required to perform the creative parts of the job, such as understanding the user requirements and designing the application architecture.

GPT-3 can analyze and summarize content, so it can be used to make data discovery work more efficient. In other words, it could potentially provide a simple natural-language interface that lets a human who is not proficient in data discovery search a large pool of documents by writing or speaking requests in plain language.
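One plausible way to wire this up is to place the documents and the plain-language question into a single prompt and ask the model to answer from the provided text. The sketch below only builds such a prompt; the document snippets and the prompt layout are hypothetical, and the actual model call is left out:

```python
# Hypothetical sketch: turning a plain-language request into a prompt over
# a small pool of documents. A real system would send this prompt to a
# completion model; here we only construct it.

def build_search_prompt(documents, question):
    """Concatenate numbered documents and a plain-language question into a
    prompt that asks the model to answer from the provided text."""
    parts = []
    for i, doc in enumerate(documents, start=1):
        parts.append(f"Document {i}: {doc}")
    parts.append(f"Question: {question}")
    parts.append("Answer:")  # the model continues from here
    return "\n\n".join(parts)

docs = [
    "The Q3 report notes a 12% rise in cloud revenue.",
    "The audit flagged two compliance gaps in data retention.",
]
prompt = build_search_prompt(docs, "What did the audit flag?")
```

This prompt-stuffing approach only scales to pools that fit in the model's context window; larger collections would need a retrieval step to pre-select candidate documents first.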

It is interesting to think whether a technology like GPT-3 can create art, or be used to create art. There probably isn’t a single straightforward answer to this. Art is subjective and there are various ways to define what art is. Some would say that art is something that sparks feeling. Another way to understand art is as something innovative and new. Artforms are always evolving and modern art is a constant revolution aspiring to do things that have never been done.

On the surface it would seem that the concept of machine learning is contrary to this idea. AI needs to be trained on a set of pre-existing data to learn to produce something similar. It is possible to imagine, though, that AI could be trained on a variety of artforms and styles to randomly produce images that have never been seen before. And these images could potentially cause people to feel strong emotions and be perceived as art.

In many cases art is an expression of human experience of having witnessed or lived through momentous social or personal events. From this perspective, art is a form of communication between humans, and AI could only imitate art. On the other hand, if AI is able to operate with a context of vast amounts of diverse human experience, is it positioned to create stronger, more expressive artworks?

Another interesting aspect is the importance of intention in the creation of an artwork. Indeed, all human artists have the intention to create art. AI, on the other hand, works because a human turns it on and gives it some input. If an NLP system is used to create art, would the human providing the input be the artist, and the AI simply the medium the artist works with?

One of the nefarious uses that an advanced NLP system can be put to is acting as a bot on a social network. Today social networks can identify and remove large sets of bots because they use similar language. An advanced NLP system could be trained to impersonate a human online and employed as an indiscernible bot.

The NLP system could also be used to generate fake news articles to disseminate false information. In fact, in beta testing GPT-3 has been used to generate imitation news articles that many human readers were unable to distinguish from texts written by human journalists. Using a combination of deep fakes, voice synthesizers, and an NLP system to generate a monologue, political actors could try to discredit their opponents with a fake speech or public address.

OpenAI, which started as a non-profit, decided to make GPT-3 available to developers by request through the OpenAI API, which is their first commercial product. This could stifle the adoption of this technology by start-ups but won't slow down OpenAI's giant competitors.
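For developers granted access, working with the model comes down to sending a prompt and a few sampling parameters to a hosted endpoint. The sketch below only shows the general shape of such a request; the parameter names and values are illustrative of 2020-era completion APIs, may differ from the live service, and no request is actually sent:

```python
import json

# Illustrative sketch of a completion request payload. The key is a
# placeholder (real keys come from an approved account), and the exact
# fields of the live OpenAI API may differ from this assumed shape.

API_KEY = "sk-..."  # placeholder, never hard-code a real key

payload = {
    "prompt": "Write a one-sentence product description for a smart kettle.",
    "max_tokens": 50,      # cap on the length of the generated completion
    "temperature": 0.7,    # higher values produce more varied output
}
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
body = json.dumps(payload)  # a real client would POST this to the endpoint
```

The commercial, request-gated model of access is exactly why this step matters for start-ups: every call is metered, so the cost structure of a product built on the API is tied to prompt length and completion length.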

GPT-3 shows amazing improvements over the previous generations of the model. With powerful hardware becoming more and more affordable, it will become cheaper to train the model. However, fears of potential loss of jobs are unfounded. NLP systems like GPT-3 have great potential to enhance human productivity, but not to completely replace humans. If anything, they will push us to become more creative.

