Geoffrey Hinton is a luminary of artificial neural networks. Now he has resigned from his job at Google, warning that the dangers of the technology's development are "scary".
Geoffrey Hinton, a leading AI developer at the US group Google, has resigned and warned in the "New York Times" on Monday that advances in the field of AI pose "serious risks for society and for humanity".
Hinton was involved in the development of artificial neural networks, which form the basis of the AI systems built today by the large tech companies.
Fear of misinformation
Hinton told the New York Times that competition is driving tech companies to develop increasingly advanced AI “at a dangerous pace.” This increases the risk of spreading false information: one of his immediate concerns is that the Internet will be flooded with fake photos, videos and texts and that the average person “can no longer know what is true”.
Hinton also fears the loss of many jobs. Today, AI is used mainly as a support tool, for instance by translators and in chatbots like Chat-GPT, but such systems could soon take over many tasks entirely: they could replace paralegals, translators and others who do routine work, and perhaps more than that. IBM CEO Arvind Krishna told Bloomberg on Monday that he expects around a third of the company's administrative jobs to be replaced by AI and automation over the next five years.
Google and the company OpenAI – the start-up that developed the well-known chatbot Chat-GPT – began last year to develop learning systems that use a much larger amount of data than before. Hinton told the New York Times that these systems would eclipse human intelligence in some respects due to the sheer volume of data.
According to the newspaper, Hinton quit his job at Google last month. His manager at the company, Jeff Dean, thanked him for his work in a statement to US media. Dean emphasized that Google was one of the first companies to publish guidelines for the use of AI, and that it continues to feel "obligated to use AI responsibly. We are constantly learning to understand emerging risks while boldly innovating."
Demands for a research moratorium
After the San Francisco start-up OpenAI released a new version of Chat-GPT in March, more than 1,000 technology experts and researchers signed an open letter calling for a six-month moratorium on the development of new AI systems. "AI systems with human-competitive intelligence can pose profound risks to society and humanity," they warned. "Powerful AI systems should only be developed when we are confident that their effects will be positive and their risks controllable."
Hinton told the BBC that he is particularly afraid of the collective intelligence of interconnected artificial neural networks: "I have come to the conclusion that the kind of intelligence we are developing is very different from the intelligence we have. We are biological systems and these are digital systems. And the big difference is that with digital systems you have many copies of the same set of weights, the same model of the world. All these copies can learn separately, but they share their knowledge instantly. So it's like having 10,000 people, and whenever one person learns something, everyone automatically knows it." That is why he finds the dangers of AI chatbots "quite scary". He also finds it hard to see how to prevent people with bad intentions from using AI for bad purposes.
On the other hand, AI holds particular promise in medical research, for example in the development of new treatments and diagnostic methods: the AI system AlphaFold can predict protein structures with high accuracy. This could lead to new breakthroughs in cancer research, where malfunctioning proteins play a central role.
In the open letter calling for a halt to AI development, the signatories cited a statement by OpenAI founder Sam Altman that at some point an "independent" review would be necessary before the training of new systems could begin. "We agree," the authors of the letter write. "That point is now."