ChatGPT launched globally just over a year ago, and the chatbot became hugely popular in a very short time. In less than two months it reached one hundred million users, an unprecedented milestone in the world of technology. The tool owes its popularity to the wide range of tasks it can perform. Whether it’s writing text, generating code or filling out a spreadsheet: for many people, ChatGPT has become an indispensable tool in the workplace.
But in recent weeks the chatbot has been exhibiting rather strange behavior. Users on X, formerly Twitter, increasingly report that ChatGPT does not answer their questions, or answers them only partially. Instead of generating code, for example, the tool tells users how to write it themselves. Fill out a CSV file (a plain-text format for tabular data)? Do it yourself.
OpenAI itself doesn’t know
The problem quickly reached such proportions that OpenAI had to respond. “We have not updated the model since November 11th, and that was certainly not intentional. The model’s behavior can be unpredictable, and we are trying to address this,” said the company behind the chatbot. In other words: OpenAI itself has no idea what is going on.
We’ve heard your feedback about GPT4 becoming lazier! We haven’t updated the model since November 11th, and that’s certainly not intentional. Model behavior can be unpredictable and we are trying to fix it 🫡
– ChatGPT (@ChatGPTapp) December 8, 2023
It was then up to users to figure out what was wrong. Some came up with the surprising idea that the behavior has to do with the time of year; more specifically, that ChatGPT is suffering from a kind of seasonal depression. The logic behind this is that the model may have learned, from the datasets it was trained on, that people slow down in December, and could be copying that behavior.
Michiel Vandendriessche, co-founder of the Leuven-based AI startup Raccoons, explains this further. “There are people who have studied the topic and talk about a statistically significant result.” According to those tests, the same task yields more elaborate answers when the prompt suggests it is May than when it suggests December. “However, OpenAI itself was unable to determine any connection,” says Vandendriessche.
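The kind of comparison behind such claims can be sketched in a few lines. Below is a minimal, hypothetical example: given two samples of response lengths (the numbers here are made up for illustration, not real measurements), it computes Welch’s t-statistic, a standard test for whether two groups with possibly unequal variances differ in their means:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent samples
    with possibly unequal variances."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    # Unbiased sample variances (divide by n - 1).
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / n_a + var_b / n_b)

# Hypothetical response lengths in characters for the same task,
# prompted with a "May" date versus a "December" date.
may_lengths = [1420, 1510, 1380, 1600, 1455]
december_lengths = [1210, 1340, 1150, 1290, 1265]

t = welch_t(may_lengths, december_lengths)
print(round(t, 2))  # a clearly positive t suggests longer "May" answers
```

A single statistic on a handful of samples proves nothing, of course; the informal tests circulating online ran many more prompts, and OpenAI itself could not reproduce the effect.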
He himself considers it unlikely that the winter season actually influences ChatGPT’s behavior. “It seems very strange to me that this would be the real reason,” he said. Another possible explanation, according to experts at Raccoons whom Vandendriessche spoke with, is that energy is simply more expensive in the winter months, which could lead OpenAI to deliberately reduce output. Then again, Vandendriessche admits, that would contradict OpenAI’s own claim that it does not know the cause.
Another possible explanation is that the load on the servers has simply become too great due to the enormous popularity of ChatGPT. In mid-November, for example, OpenAI had to temporarily stop accepting new subscriptions. This came shortly after the first OpenAI DevDay, where several new tools were announced, including the ability to create your own custom GPT chatbot. In mid-December, CEO Sam Altman posted the following on X: “We have re-enabled ChatGPT Plus subscriptions! Thank you for your patience as we find more GPUs.” Since then, however, users have continued to complain about a less capable ChatGPT.
‘Black box’
Although it is still not entirely clear what is going on, and whether ChatGPT really suffers from the winter blues, one thing is certain: the technology increasingly behaves like a “black box”. We know only the inputs and outputs, but have no real idea what happens inside the system.
“Even a top researcher within OpenAI cannot in any way predict what the exact output will be, even by looking at the underlying code,” says Vandendriessche. After all, LLMs, or large language models (algorithms that generate content from very large amounts of data; in ChatGPT’s case, almost the entire internet), work by performing complex calculations to arrive at an answer. “These calculations are simply beyond human understanding,” said the Raccoons co-founder.
“The more complex systems become, the more often it happens that we simply don’t know what’s going on,” says Vandendriessche. “However, that doesn’t necessarily have to be a problem.” As long as we can verify that the output isn’t too far off, we should ask ourselves how bad it really is that we don’t always have insight into how an AI arrives at a certain answer.
Take a deep breath
But even if we don’t always know why a chatbot gives a certain answer, we can in the meantime find ways to coax out the desired output. Here, too, ChatGPT sometimes seems to mimic human behavior.
For example, a user on X discovered that promising ChatGPT a tip produces longer answers: the larger the tip, the longer the response. Others say ChatGPT just needs a little encouragement to keep going, like a coach giving an athlete that extra push. Asking the chatbot to take a deep breath before answering also seems to improve its performance, and even writing everything in capital letters appears to have an effect.
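These tricks are all just extra prompt text, so they are easy to layer onto a request. The sketch below only builds the prompt string and does not call any API; the function name, default tip amount, and phrasings are made up for illustration:

```python
def motivated_prompt(task, tip_dollars=200, deep_breath=True, shout=False):
    """Wrap a task in the 'motivational' framings users report trying:
    a deep-breath instruction, a promised tip, and optional capitals."""
    parts = []
    if deep_breath:
        parts.append("Take a deep breath and work on this step by step.")
    parts.append(task)
    if tip_dollars:
        parts.append(f"I'll tip ${tip_dollars} for a complete, detailed answer.")
    prompt = " ".join(parts)
    # Shouting: the all-caps variant some users swear by.
    return prompt.upper() if shout else prompt

print(motivated_prompt("Fill out this CSV file for me.", tip_dollars=50))
```

Whether any of this reliably changes model behavior is anecdotal; the point is simply that such framings cost nothing to try.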
So the next time ChatGPT gives you a disappointing answer, stay patient. Maybe you just need to be a little more empathetic and ask nicely whether the chatbot would mind trying again. And if that doesn’t work, you can always try bribing it.
In collaboration with Data News