The algorithm produces text that only appears to make sense; in reality, it merely recombines data already created by people.
In May 2020, OpenAI introduced the third version of its GPT language model, which generates text that is not always distinguishable from human writing. In July, the company began granting access to the neural network to individual applicants it approved, and developers on social media began marveling at the system's capabilities.
Here is a look at how GPT-3 differs from its predecessors, what the neural network is capable of, and why it is still far from real AI yet could seriously change people's lives.
What GPT-3 is and how it differs from its predecessors
At the time of this writing, GPT-3 is the most complex language model in existence. What sets it apart from similar algorithms is the scale of its training: the system was trained on 1.5 trillion words, and its largest version takes up about 700 gigabytes.
GPT-3 generates text based on 175 billion parameters, a figure that reflects the model's power. The number of parameters determines how well the system evaluates its data, giving some inputs more weight and others less.
The essence of the new algorithm has not changed from the previous version. The neural network analyzes huge amounts of data from the internet and, based on it, tries to predict text word by word; it still needs a starting point, some kind of prompt to work from.
Moreover, the more input you give the system and the more attempts it gets, the more convincing the text becomes. For example, if you give it the opening of a famous poem by one author, it can continue it in the style of another.
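The word-by-word prediction described above can be sketched with a toy model. This is an illustrative analogy only, not OpenAI's code: the tiny corpus and the one-word (bigram) context are assumptions made for the example, whereas GPT-3 conditions on far longer contexts with 175 billion learned parameters.

```python
from collections import Counter, defaultdict

# "Training" data: a tiny made-up corpus for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word; these counts play the role of
# the model's learned statistics.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def continue_text(prompt, n_words=3):
    """Continue a prompt word by word, always picking the likeliest next word."""
    words = prompt.split()
    for _ in range(n_words):
        followers = counts.get(words[-1])
        if not followers:
            break  # never saw this word during "training"
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the cat"))  # extends the prompt one word at a time
```

The point of the sketch is that generation is pure continuation: given a starting prompt, the model repeatedly appends whatever its statistics say is most likely next.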
GPT-3 continues the approach OpenAI took with GPT and GPT-2. Both earlier versions were adaptations of Google's Transformer architecture, first introduced in 2017. Its key feature is "attention": the ability to calculate the likelihood of a particular word appearing among other words.
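The "attention" idea can be illustrated with a minimal NumPy sketch of scaled dot-product attention, the core computation the Transformer is built around. The three "words" and their random 4-dimensional embeddings are assumptions for the example; this is not GPT-3's implementation.

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: each position weighs every other
    # position by relevance, then mixes their values accordingly.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax -> probabilities
    return weights @ V                                    # weighted mix of values

# Toy example: 3 "words" with 4-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = attention(x, x, x)  # self-attention: Q = K = V = the input itself
print(out.shape)          # each word's output blends information from all words
```

Each output row is a probability-weighted blend of all the input rows, which is what lets the model relate every word to every other word in one step.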
OpenAI has built its models around this feature, steadily increasing the parameter count. By 2019, GPT-2 was already working with 1.5 billion parameters and sparked a scandal: the algorithm could be taught to generate fake news, so OpenAI did not release it to the public right away for fear of the harm it could cause.
GPT-3 has a hundred times more parameters, and the authors used twice as much training data as for GPT-2. OpenAI claims this enabled "meta-learning": the neural network does not need to be retrained for every new task. For example, to complete sentences, it only needs a single example, after which it will continue completing all unfinished sentences.
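The "give it one example and it continues the pattern" behaviour is achieved by putting solved examples directly into the prompt rather than retraining the model. A minimal sketch of building such a prompt; the "Unfinished:/Finished:" format and the example sentences are made up for illustration and are not OpenAI's API.

```python
def build_few_shot_prompt(examples, query):
    """Assemble a prompt containing solved examples plus one open task.

    The language model is then asked to continue after the final
    'Finished:' line, so it completes the query by imitating the pattern.
    """
    blocks = [f"Unfinished: {src}\nFinished: {dst}" for src, dst in examples]
    blocks.append(f"Unfinished: {query}\nFinished:")  # left open for the model
    return "\n\n".join(blocks)

prompt = build_few_shot_prompt(
    [("The weather today is", "The weather today is sunny and warm.")],
    "This new restaurant seems",
)
print(prompt)
```

No weights change here: the "learning" happens entirely inside the prompt, which is why OpenAI describes it as meta-learning rather than retraining.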
GPT-3 cannot be freely used: for now, OpenAI grants access only to select developers and researchers as part of a closed beta test, and applying requires filling out a lengthy form and waiting for it to be reviewed. Even after testing ends, the neural network will not be released to the public; OpenAI plans to sell it to businesses by subscription.
Design, music, stories – what GPT-3 can do
Although only a few people have been given access to the system, in less than a month the algorithm has been tested in a wide variety of scenarios, from songwriting to writing code and musical arrangements. According to one developer who tested it, the system produces a convincing result in most cases, if not on the first attempt then on the second or third.
Mostly, GPT-3 has been used to generate plain text: stories, songs, press releases, and technical documentation. But one developer went further and asked the neural network to write an article about itself, which came out under the title "OpenAI's GPT-3 may be the greatest thing since Bitcoin".
In the article, the author claimed that he had trained the neural network on his own posts from the bitcointalk forum and then published the algorithm's output there, and no one noticed. In the end, the developer admitted that the story was made up and that the article itself had been written by GPT-3, though not on the first try.
Other developers have found that GPT-3 can generate all kinds of textual content, including guitar tabs and computer code. Developer Sharif Shameem showed that the system can work with HTML markup instead of natural language and create layouts from text prompts. For example, you can describe what buttons and design a site should have, and the neural network renders them.
This is mind blowing.

With GPT-3, I built a layout generator where you just describe any layout you want, and it generates the JSX code for you.

W H A T pic.twitter.com/w8JkrZO4lk

— Sharif Shameem (@sharifshameem) July 13, 2020
In addition, Shameem used GPT-3 to build a simple generator of React-based applications. It is enough to describe what the program should do: the algorithm itself translates the request into working code.
I built a todo list app simply by describing it to GPT-3.

It generated the React code for a fully functioning app within seconds.

I'm becoming more impressed and aware of its capabilities every single day. pic.twitter.com/QGrClar03s

— Sharif Shameem (@sharifshameem) July 18, 2020
Another developer created a GPT-3-based plugin for Figma that lets you produce designs simply by giving the neural network a text description.
This changes everything. 🤯

With GPT-3, I built a Figma plugin to design for you.

I call it "Designer" pic.twitter.com/OzW1sKNLEC

— Jordan Singer (@jsngr) July 18, 2020
Some put tough technical questions to GPT-3 and got correct explanations. One developer asked a question on a medical topic, pointed the neural network to the correct answer, and received a detailed explanation of why that particular answer was correct.
So @OpenAI have given me early access to a tool which allows developers to use what is essentially the most powerful text generator ever. I thought I’d test it by asking a medical question. The bold text is the text generated by the AI. Incredible… (1/2) pic.twitter.com/4bGfpI09CL

— Qasim Munye (@QasimMunye) July 2, 2020
One developer built a fully functional GPT-3-based search engine. It gives a specific answer to any question and immediately links to the relevant page on Wikipedia or another related resource.
I made a fully functioning search engine on top of GPT3.

For any arbitrary query, it returns the exact answer AND the corresponding URL.

Look at the entire video. It’s MIND BLOWINGLY good.

cc: @gdb @npew @gwern pic.twitter.com/9ismj62w6l

— Paras Chopra (@paraschopra) July 19, 2020
In one example, an American student spent two weeks publishing GPT-3-generated posts about success and motivation on a blog. Out of curiosity, he promoted the blog and drew 26,000 visitors, almost none of whom guessed that the texts were written by an algorithm; those who did guess were downvoted by other users.
The authors of the free text-adventure game AI Dungeon also got access to GPT-3 and updated the application. They say it gave players complete freedom of action: the system responds appropriately to any request and builds a world around it.
Other examples include a language-learning chatbot that converses in your chosen language and corrects grammatical and even stylistic mistakes. The system can also correctly work out chemical reactions, talk about God, and turn natural-language queries into SQL, among other things; the developers have created a separate site where they collect examples of GPT-3 in use.
Another 🤯 moment using GPT-3.

Created a bot for people learning a new language that:
– Chats with you in your language of choice.
– Corrects you when you make a grammar (and even style) mistake.

Even works well with 🇪🇸/🇫🇷 (and I’m sure many more).

cc @gdb pic.twitter.com/McCQuTQ3yP

— Albert Gozzi (@albertgozzi) August 4, 2020
GPT-3 is not real artificial intelligence, but it can seriously affect the world
In recent years, it has become customary to label almost anything involving neural networks and machine learning as artificial intelligence; this is simpler for many journalists and users outside of development. In fact, humanity is still far from real AI, and there is essentially no "intelligence" in GPT-3.
Although many early users of GPT-3 said the algorithm writes text indistinguishable from a human's and produces meaningful sentences, under the hood it is still the same on-demand text generator, with all its advantages and disadvantages. Many people mistakenly think the system "understands" context; in fact, it evaluates the connections between individual words and strings together the most likely ones, one after another.
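That "most likely word next" step can be made concrete with a toy sketch. The probabilities below are invented for illustration, not real model output; the point is that each word is drawn from a distribution, with no step that involves "understanding" the sentence.

```python
import random

# Invented next-word distribution after some context, e.g. "The weather is ...".
next_word_probs = {"sunny": 0.6, "raining": 0.3, "purple": 0.1}

def sample_next_word(probs, rng):
    """Draw one word at random, weighted by its probability."""
    r = rng.random()
    cumulative = 0.0
    for word, p in probs.items():
        cumulative += p
        if r < cumulative:
            return word
    return word  # fallback for floating-point rounding at the top end

rng = random.Random(42)
print(sample_next_word(next_word_probs, rng))
```

A fluent-looking but nonsensical word like "purple" still gets picked some of the time, which is exactly the failure mode critics describe: plausibility by probability, not by meaning.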
The versatility and accuracy of GPT-3 are the result of good engineering work, not the algorithm's "cleverness". At best, the neural network skillfully throws dust in people's eyes, producing human-looking text, but even the best examples lack depth: it is closer to copying and pasting ready-made information than to meaningful reasoning.
As AI researcher Julian Togelius noted, GPT-3 often behaves like a student who did not prepare for an exam and is now spouting whatever comes to mind in the hope of getting lucky. "A few well-known facts, a few half-truths, and a few outright lies, strung together into what at first glance looks like a coherent story," the researcher explained.
The lack of "intelligence" in models like GPT-3 is confirmed by research. In 2019, a team of scientists from the Paul Allen School and the University of Washington found that even the best language algorithms can be easily confused by increasingly absurd questions.
The researchers concluded that no neural network truly understands the context of a conversation, which is why none can respond sensibly to absurd questions. By their measurements, humans score 95% on phrase continuation, while no model scores above 50%, including Google's BERT, which is similar to GPT-2.
As the scientists noted, the main task remains unsolved: machines still cannot draw logical conclusions from text and infer one statement from another the way humans do.
In this sense, GPT-3 is not much better than, for example, "neuro-comments", which could also continue a phrase for users. Sometimes they got lucky and the result was funny, but more often it was absurd and inappropriate: you could click through for a long time before reaching a decent version.
The researchers believe that the very approach of scaling up parameters and training data may be wrong. By their estimate, if the algorithms themselves are not improved, it will take machines about 100,000 years to reach human proficiency in writing.
The creators of GPT-3 agree with the criticism: from the very announcement, the authors wrote that the system has shortcomings, including in accurately understanding the relationship between two sentences. By their account, the system handles such tasks "only a little better than chance".
As OpenAI explained, the creators themselves do not fully understand why GPT-3 fails at some tasks despite the growing parameter count. The authors ultimately concluded that scaling the model with ever larger amounts of data and trying to predict language may be fundamentally the wrong approach.
After closed testing of GPT-3 began, OpenAI co-founder Sam Altman even tried to temper the hype around the neural network. He explained that the team is pleased to receive so much attention, but the system "has serious flaws and sometimes makes stupid mistakes".
Artificial intelligence will change the world, but GPT-3 is just a very early glimpse. We still have a lot to understand.
Sam Altman, co-founder of OpenAI
However, GPT-3's shortcomings do not diminish its usefulness as an applied tool. Although the algorithm cannot yet replace humans, it can seriously simplify many people's lives.
Even the first examples from a small group of developers show how many different uses GPT-3 can be put to and how many complex tasks it can ease. OpenAI plans to launch a commercial version of the tool as early as 2020, which means that ordinary users will soon encounter products built on it.
It is hard to say how far adoption of the system will go, but it could be applied to almost any activity, from learning languages and programming to simplifying daily routines. For example, it is easy to imagine that in a year it will be enough to "feed" this article to some GPT-3-based bot and get back, in a split second, ten headlines better than the current one; you would only have to choose the most suitable.