In November 2022 the US company OpenAI presented ChatGPT, an artificial intelligence unlike any other, capable of simulating human conversation with its users, and an enormous media success. How OpenAI got to this moment is not entirely clear: what is known is that, within a short time, Sam Altman, co-founder and CEO of OpenAI, ordered ChatGPT opened to the public with a certain urgency. Everything happened so fast that the program presented to the public was based on technology that was already two years old, the GPT-3 language model, while much of the company was already working on GPT-4, the latest version, which came out this month. Even so, in its first version ChatGPT reached one million users in just five days, proving Altman's intuition right and, in a sense, affirming OpenAI's primacy in artificial intelligence research and development.

However, the story of how OpenAI got to this point is unusual and at times controversial: born as a non-profit organization, within a few years, and after an alliance with Microsoft, it ended up, according to many commentators, betraying the original mission it was founded with by, among others, Elon Musk, who is now among the first to criticize its approach.

OpenAI was born in 2015 from an idea of Sam Altman, a well-known Silicon Valley investor and president of Y Combinator, an influential Californian startup accelerator, and Elon Musk, head of Tesla and other technology companies (such as SpaceX and, recently, Twitter). Both shared concerns about artificial intelligence. As early as 2014, at an event organized by MIT in Boston, Musk had called it "our biggest existential threat", saying he was in favor of "some regulatory oversight, perhaps at the national and international level". In particular, Musk said he was worried about the excessive power that one specific company, which he declined to name, had achieved in the field's research.

In December 2015, Musk and Altman co-founded OpenAI, a non-profit organization with the goal of promoting and developing artificial intelligence "friendly" to humanity ("friendly AI"). A few months after the company was founded, Musk attended a public event organized by the technology site Recode, at which the well-known industry journalist Walt Mossberg asked him to specify which company in particular worried him: "I won't name names, but there's only one," he replied. Musk later revealed that the company he believed was gaining a competitive advantage over the rest of the sector was Google, or rather DeepMind, a British company that Google had acquired in 2014.

The presence of Google (more precisely Alphabet, the group the company belongs to) in Musk's thoughts and fears is an important point for understanding the events that went on to shape OpenAI's evolution. For the first few years of the non-profit's life, Musk continued to treat artificial intelligence with a mixture of wonder and dread, claiming that humanity was "summoning a demon" and that OpenAI would be the only one capable of avoiding accidents along the way.

2018 was a breakthrough year for OpenAI: Altman cemented his prominent role within the company, and Musk became increasingly distracted by Tesla.
In the first months of the year, according to a detailed reconstruction by the news site Semafor, Musk complained to Altman that their company had fallen far behind Google: he proposed taking control of the operation himself, but Altman and the other founders (as well as many employees) declined the offer, and Musk then left the company. OpenAI explained the move as a matter of conflicts of interest: "As Tesla continues to focus more and more on artificial intelligence, this will eliminate the risk of future conflicts for Elon". (In reality, Tesla had already hired away one of OpenAI's best minds, Andrej Karpathy, the year before the confrontation between Musk and Altman.)

Musk's departure also meant the disappearance of the funds the company needed: of the billion dollars he had promised, Musk had paid only about a tenth. OpenAI found itself alone in covering the astronomical costs of training artificial intelligence systems, which require enormous computing power, and therefore infrastructure and energy.

Before Musk left OpenAI, Google Brain, a division of Google dedicated to artificial intelligence, had presented an innovative technology called the Transformer, which allowed language models to improve with very little human intervention, using huge amounts of data, texts and images. Google's presentation of the Transformer was the event that made Musk's fears of falling behind concrete, accelerating his exit from OpenAI and forcing the company to change strategy and adopt the new technology, which is still essential to the functioning of ChatGPT today (the acronym in the name of the GPT language model stands for Generative Pre-trained Transformer, confirming the importance of Google's innovation). Doing so, however, required large investments in so-called training, that is, the acquisition of large archives of data, texts and other materials, and their analysis by increasingly complex computing systems.

The Transformer was therefore the beginning of a long domino effect that eventually led OpenAI to its strategic alliance with Microsoft. To find new funds, in March 2019 OpenAI announced the creation of a division of the company called OpenAI LP, presented as "a hybrid between a non-profit and a for-profit company", what it calls a "capped-profit" company (in which, according to the company's website, "investors and employees can earn a capped return if we succeed in our mission", which allows OpenAI LP to behave in a way similar to a startup).

In 2019 OpenAI also signed a billion-dollar agreement with Microsoft, or rather with Azure, the division of the company that handles cloud infrastructure and has since provided the computing power its AI systems need. It was the beginning of a relationship that, after the success of ChatGPT, grew into a "multibillion-dollar agreement", and which led Edge, Microsoft's browser, to incorporate OpenAI's artificial intelligence in its latest version.

According to some, however, the Microsoft-OpenAI alliance is yet more proof of how the company's original mission has been forgotten, even before being betrayed. In recent months Musk too has underlined this problematic aspect, noting how a non-profit that was supposed to serve as a counterweight to Google has become a company with an estimated market value of around 30 billion dollars, "effectively controlled by Microsoft".
OpenAI was created as an open-source (which is why I named it "Open" AI), non-profit company to serve as a counterweight to Google, but now it has become a closed-source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all. — Elon Musk (@elonmusk) February 17, 2023

Microsoft's influence could also prove to be a problem given the growing and widespread concerns about artificial intelligence, its capabilities and the effect it could have on disinformation. Those fears this week prompted some of the most important experts in the field to call for a temporary halt to AI development, in an open letter published by the Future of Life Institute, an organization that deals with technology and its impact on the future of humanity: in a short time it gathered more than a thousand signatories, including many industry experts (as well as Musk himself). The proposal calls for a collective six-month pause in the development of AI systems "more powerful than GPT-4", to allow their makers to focus on safety and reliability.

What makes OpenAI's relationship with its own innovations so problematic is that Altman himself seems to consider these technologies potentially dangerous: in recent weeks the entrepreneur has said he is "a little afraid" of artificial intelligence, including OpenAI's own. It is a position that seems to clash with the frenetic pace at which the company is presenting new services related to GPT-4, and one that does not convince everyone: "Why not collaborate with AI ethicists and with legislators before making these models available, so as to give society time to create the right protections?" asked Carissa Véliz, professor of philosophy and ethics at the University of Oxford.

Doubts about the company's ethical and moral consistency are aggravated by the fact that OpenAI was born expressly to exert a different kind of influence on the sector, one made of collaboration, caution and academic openness about its own research and that of others, with the aim of avoiding uncontrolled and unregulated development of artificial intelligence. On closer inspection, however, doubts about the company's true intentions have been circulating for some time: when OpenAI published its charter in 2018, it did so partly to reiterate that, despite the great changes underway at the time, the company still held to its original mission to "ensure that artificial general intelligence (AGI), by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity". Among the principles set out in the charter were "broadly distributed benefits", to avoid the centralization of power, "long-term safety", "technical leadership" in the sector, and a "cooperative orientation", that is, the willingness to collaborate with other research centers and institutions.

According to many figures in the sector, however, OpenAI's recent actions have betrayed a good part of these promises: the reference is above all to the release of the GPT-4 language model last March, which turned out to be a closed system, despite the "open" spirit that has always inspired the company, and its very name. Above all, what was striking was the lack of information about the contents on which the new language model was trained: the size of the dataset, the origin of the documents and their nature.
"I think we can say goodbye to 'Open' AI: the 98-page paper introducing GPT-4 proudly declares that it will disclose nothing about the contents of its training set," wrote Ben Schmidt of the AI company Nomic on Twitter.

I think we can call it shut on 'Open' AI: the 98 page paper introducing GPT-4 proudly declares that they're disclosing *nothing* about the contents of their training set. pic.twitter.com/dyI4Vf0uL3 — Ben Schmidt / @[email protected] (@benmschmidt) March 14, 2023

For years, OpenAI had promoted an open, academic approach to the field precisely to prevent any single company from developing an artificial intelligence powerful enough to be considered an AGI, one that could slip out of its control or be used in an "unfriendly" way, to use the lexicon dear to the company. That fear had translated into an open-source approach (in which the source code is public, available to everyone and open to modification by the community), which characterized the company's early years but on which it seems to have drastically changed its mind: "We were wrong," OpenAI co-founder Ilya Sutskever told the website The Verge, adding that "if you think that an AI, or an AGI, is going to be extremely, incredibly powerful, then it makes no sense to open-source it".
