[Users are angry: GPT-4 seems to have "gotten dumber" lately, and cost-cutting by OpenAI is suspected] AI Singularity Network, July 18 report: Recently, more and more users in Reddit communities have begun to complain that ChatGPT (GPT-4), the industry benchmark for chatbots, is delivering a worse and worse experience, with some bluntly saying they regret paying the $20 subscription fee.
The widely shared impression is that since May, GPT-4's response speed has indeed improved, but the quality of the generated content has declined. Some users posted comparisons of content generated in June against content generated two months earlier with the same prompts, arguing that GPT-4's current capabilities deserve to be called "GPT-3.6" at best.
As for the cause of the decline, users have largely reached a consensus: generative AI requires enormous computing power, and to cut operating expenses OpenAI may be building a number of smaller, cheaper-to-run GPT-4 models and combining them into a "mixture of experts". Each small GPT-4 "expert model" would be proficient in one vertical domain (such as mathematics, physics, or chemistry) and would stand in for the single large model on part of the complex reasoning workload.
Such an architecture can deliver faster responses at a lower cost per query. If this assumption holds, OpenAI may have sacrificed a little content quality in exchange for cost reduction and efficiency; a conceptual sketch of the routing idea follows.
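GPT-4's internals are not public, so the following is only a minimal illustrative sketch of the "mixture of experts" idea the community is speculating about: a router dispatches each prompt to one small domain expert instead of a single large model. All names here (DomainExpert, MixtureOfExperts, the keyword-based router) are hypothetical and stand in for a learned gating network and real expert models.

```python
# Conceptual sketch only: GPT-4's real architecture is not public.
# All names (DomainExpert, MixtureOfExperts, etc.) are hypothetical
# illustrations of the "mixture of experts" idea described above.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class DomainExpert:
    """A smaller, cheaper model specialized in one vertical domain."""
    name: str
    generate: Callable[[str], str]  # stands in for a full expert model


class MixtureOfExperts:
    """Routes each prompt to the expert judged most relevant.

    A production router would be a learned gating network; a simple
    keyword lookup stands in for it to keep the sketch self-contained.
    """

    def __init__(self, experts: Dict[str, DomainExpert], fallback: DomainExpert):
        self.experts = experts
        self.fallback = fallback

    def route(self, prompt: str) -> DomainExpert:
        for keyword, expert in self.experts.items():
            if keyword in prompt.lower():
                return expert
        return self.fallback

    def answer(self, prompt: str) -> str:
        expert = self.route(prompt)
        # Only one small expert runs per request, which is the
        # hypothesized source of the cost savings (and, possibly, of
        # the perceived drop in quality on hard, cross-domain prompts).
        return expert.generate(prompt)


if __name__ == "__main__":
    math_expert = DomainExpert("math", lambda p: f"[math expert] answer to: {p}")
    physics_expert = DomainExpert("physics", lambda p: f"[physics expert] answer to: {p}")
    generalist = DomainExpert("general", lambda p: f"[generalist] answer to: {p}")

    moe = MixtureOfExperts(
        experts={"integral": math_expert, "velocity": physics_expert},
        fallback=generalist,
    )
    print(moe.answer("Compute the integral of x^2 from 0 to 1"))
```

In this toy setup the per-request cost scales with the size of one small expert rather than one giant model, which is exactly the trade-off users suspect: cheaper and faster on average, but weaker when a prompt spans several domains or is routed to the wrong expert.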