
Everyone is complaining that GPT-4 has become "stupid", which may be the fault of architecture redesign

Author: Machine Heart Pro

Machine Heart report

Editors: Chen Ping, Xiao Zhou

Judging from widespread user feedback, GPT-4 really does seem to have become "dumber".

Almost four months have passed since OpenAI first released GPT-4. Recently, however, skepticism has been growing online that GPT-4, supposedly the world's most powerful model, has become less capable.

Some industry insiders believe that this may be related to OpenAI's major redesign of the system.


In recent weeks, GPT-4 users have been complaining online about degraded performance, with some saying the model has become "lazier" and "dumber" in its reasoning and other outputs compared to before.

On Twitter and in OpenAI's online developer forums, users have voiced their dissatisfaction: GPT-4's logical reasoning has weakened, incorrect answers have increased, and it has trouble keeping track of the information it is given...

So what has GPT-4 become? Let's look at what users are reporting.

Complaints that GPT-4 has "gotten dumber"

One user who relies on GPT-4 for website development wrote: "The current GPT-4 is very disappointing. It feels like driving a Ferrari for a month and then suddenly having it turn into a battered pickup truck. If things go on like this, I'm not sure I'm willing to keep paying for it."

Another user said: "I have been using ChatGPT for a while, and I have been a paying GPT Plus subscriber since GPT-4 was released. Over the past few days, GPT-4 seems to struggle with things it previously did well. It used to understand my requests very well. Now it loses track of information, gives me wrong information, and often misunderstands my questions."


Peter Yang, head of product at Roblox, claimed on Twitter that GPT-4's output has become faster, but its quality has gotten worse. Even on simple tasks, such as making text clearer, more concise, or more creative, the results GPT-4 gives seem to him to have declined in quality:


"GPT-4 started looping through code and other information over and over again. Compared to before, it was like brain death. If you haven't really seen what it was capable of before, you probably won't notice. But if you had used GPT-4 before, you would have noticeably felt that it had become more stupid." Another user complained.


"I have the same problem with the quality of my response to GPT-4, does anyone know of a way to rule this out or correct this?"


"I did notice that. At certain times of the day, it seems to remember only the most recent tips. But during the whole day of use, the GPT-4 performance seems to fluctuate, and when I try it at different times, I feel that the performance is different."


Judging from this feedback, users widely share the feeling that GPT-4 has become dumber.

Once it was slow and expensive, now it's fast but not accurate

Late last year, OpenAI shocked the entire AI community with the release of ChatGPT, which initially ran on top of GPT-3 and GPT-3.5. In mid-March, GPT-4 was released and quickly became the model of choice for developers and others in the tech industry.

GPT-4 is considered the most powerful AI model widely available, with multimodal capabilities that let it understand both image and text inputs. According to Sharon Zhou, CEO of the startup Lamini, it is slow but very accurate.

However, a few weeks ago things started to shift: GPT-4 became faster, but its performance dropped noticeably. This sparked discussion across the AI community, and according to Sharon Zhou and other experts, it signals that a major change is underway.

They believe OpenAI is creating several smaller GPT-4 models that function like larger models but cost less to run.

A paid-subscription article published by SemiAnalysis a few days ago also touched on this. The article says OpenAI keeps costs reasonable by using a mixture-of-experts (MoE) architecture: the model contains 16 expert models, each with approximately 111B parameters, and two of these experts are routed to on each forward pass.

"These smaller expert models are trained for different tasks and domains. There may be a mini GPT-4 for biology, as well as other small models that can be used in physics, chemistry, etc. When a GPT-4 user asks a question, the new system knows which expert model to send the query to. The new system may decide to send queries to two or more expert models and then combine the results." Sharon Zhou said.

Developer and hacker George Hotz described GPT-4 as an 8-way mixture model in a recent podcast.


It is worth mentioning that Oren Etzioni, founding CEO of the Allen Institute for AI, saw this information online and wrote in an email to Business Insider: "I 'speculate' that these guesses are roughly accurate, but I have no evidence."

Oren Etzioni believes the MoE approach is mainly used to make generative models produce higher-quality output at lower cost and with faster responses.

Etzioni adds: "Used properly, a mixture-of-experts model can indeed meet these needs, but there is usually a trade-off between cost and quality. In this case, there are rumors that OpenAI is sacrificing some quality to reduce costs, but that's just anecdotal."

In fact, in 2022, OpenAI President Greg Brockman co-authored an article on the MoE approach with several colleagues.

"With the MoE approach, the model can support more parameters without increasing computational costs," the article mentions.

Sharon Zhou said: "The performance degradation of GPT-4 in recent weeks is most likely related to OpenAI training and launching these smaller expert GPT-4 models. As users test the system, we ask many different kinds of questions. It won't answer them well at first, but it will collect data from us and improve and learn."

Reference link: https://www.businessinsider.com/openai-gpt4-ai-model-got-lazier-dumber-chatgpt-2023-7