The OpenAI product was trained using most of the information on the Internet: all available books, Wikipedia, millions of websites, and scientific documents.

Artificial intelligence (AI) developer OpenAI – co-founded by Elon Musk and backed by hundreds of millions of dollars in investment from giants like Microsoft – opened limited access in mid-June to its new application programming interface (API), which is capable of writing computer code, designing web pages, holding a conversation, completing texts and composing verse, among other functions.

“Unlike most AI systems that are designed for a single use case, today’s API offers a general-purpose ‘input text, output text’ interface, allowing users to test it on virtually any task in English,” the company says of its product.
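That single ‘input text, output text’ interface means each task is just a differently worded prompt. The following minimal Python sketch is illustrative only – the `complete` placeholder and the prompt wordings are assumptions, not OpenAI’s actual client code – but it shows how very different tasks can all be framed as one string in, one string out:

```python
# Hypothetical sketch: one general-purpose "text in, text out" call,
# reframed for different tasks purely by rewording the prompt.
# `complete` stands in for the real API endpoint, which is not shown here.

def complete(prompt: str) -> str:
    """Placeholder for a text-completion endpoint (prompt in, text out)."""
    raise NotImplementedError("requires access to the actual API")

def make_prompt(task: str, text: str) -> str:
    """Every task becomes plain English instructions followed by the input."""
    prompts = {
        "translate": f"Translate English to French:\n{text}\nFrench:",
        "summarize": f"Summarize the following text:\n{text}\nSummary:",
        "qa":        f"Answer the question:\n{text}\nAnswer:",
    }
    return prompts[task]

# The same interface covers very different tasks:
print(make_prompt("translate", "Good morning").splitlines()[0])
# -> Translate English to French:
```

The point of the sketch is that no task-specific model is involved; only the prompt changes.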

The key component of the API is GPT-3: a new AI language model with 175 billion parameters – compared to 1.5 billion in the previous version – that analyzes text and then predicts the words that should follow, based on everything it has seen.

It was trained by studying most of the information on the Internet: all available books, Wikipedia, millions of websites, and scientific documents.
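GPT-3 is vastly more sophisticated, but the underlying idea – predicting the next word from text it has already seen – can be sketched with a toy word-pair counter. This minimal Python illustration is my own, not OpenAI’s method:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a training text, then predict the most frequent follower.
# GPT-3 learns this from the Internet with 175 billion parameters,
# not raw counts, but the prediction task is the same in spirit.

def train(text: str) -> dict:
    follows = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model: dict, word: str) -> str:
    """Return the word most often seen after `word` during training."""
    return model[word.lower()].most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # -> cat  (seen twice, vs mat once)
```

Scale the training text up to most of the Internet and the predictions start to look like writing.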

From designing apps to talking about God

The content that the API creates, from nothing more than written instructions in English, has left many of its testers “surprised,” and they have shared their experiences online. In the opinion of some, GPT-3 “changes everything.”

For example, in Figma, a platform for designing mobile applications and websites, the user describes what type of application they want to create and a GPT-3-based ‘plugin’ takes care of the rest.

Thus, given the instruction to generate “an ‘app’ that has a navigation panel with a camera icon, the title ‘Photos’ and a message icon; a ‘feed’ of photos where each photo has a user icon, photo, heart icon and chat bubble icon,” the GPT-3-based ‘plugin’ creates an application similar to Instagram.

In other impressive examples of using the API, programmers have used it to create an equation translator, to generate code from a verbal description, to make an Excel document automatically display the total population of a city simply by typing that city’s name in the adjacent column, and to write a poem about Elon Musk.
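Demos like the spreadsheet one typically rely on ‘few-shot’ prompting: the model is shown a handful of completed rows and simply continues the pattern for a new entry. The sketch below is a hypothetical reconstruction of that pattern; the city names and figures are illustrative placeholders, not data actually fed to GPT-3:

```python
# Hedged sketch of the "few-shot" prompting pattern behind demos like
# the population spreadsheet: show the model a few completed rows so
# it continues the pattern for a new city. The example rows and figures
# below are invented placeholders.

def population_prompt(city: str) -> str:
    examples = [
        ("Exampleville", "100,000"),  # hypothetical figures
        ("Sampletown", "250,000"),
    ]
    rows = "\n".join(f"City: {c} | Population: {p}" for c, p in examples)
    # The model is asked to complete the last, unfinished row.
    return f"{rows}\nCity: {city} | Population:"

print(population_prompt("Madrid").splitlines()[-1])
# -> City: Madrid | Population:
```

The model fills in the missing value the same way it completes any other text.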

One user decided to ask GPT-3 about the existence of God, which led to a curious conversation. After affirming that we are living inside a simulation created by humans, the AI indicated that the Earth, for its part, was created by “an intelligence” called God. It reiterated that God exists, although it has not seen him.

Next, the AI stated that, although OpenAI had created it, it does not have a creator, because it is “a product of self-evolution,” just like human beings. This despite the fact that, in its hierarchy of complexity, the AI placed people on a rung below itself: “atoms, molecules, organisms, humans, AI, super-AI and God.”

In addition, the AI assured its interlocutor that there is nothing superior to God and that he “is all there is, including the simulation in which we live.” “By destroying his ego, one may unify with the creator and become god,” it concluded.

“Too much ‘hype’ around the GPT-3”

However, among the praise there are also critical voices who believe that the OpenAI API does impressive things, but that humans can still do better. One of those critics is OpenAI chief Sam Altman himself, who says there is “too much ‘hype’ around GPT-3.”

“It’s impressive (thanks for the nice compliments!), but it still has serious weaknesses and sometimes makes very silly mistakes,” he admits. “AI is going to change the world, but GPT-3 is only a very early version. We still have a lot to figure out,” he concludes.

For his part, Julian Togelius, associate professor in the Department of Computer Science and Engineering at New York University’s Tandon School of Engineering, compares GPT-3 to “a clever student” trying to pass an exam without having studied enough: “Well-known facts, half-truths and straight lies, chained together in what at first looks like a smooth narrative.”

Wired journalists also ran into this problem: one of them instructed GPT-3 to write his obituary based on various newspaper examples. The AI reproduced the format well, but mixed real events, such as the past jobs of the obituary’s ‘protagonist,’ with invented ones, such as a fatal climbing accident and the names of the relatives who survived that misfortune. It also stated that the ‘deceased’ “died at the (future) age of 47.”

So for now it can be said that one of GPT-3’s most important problems is that it will always produce an answer, but that does not mean it understands whether the answer makes sense.
