
Stop inciting anxiety, GPT-4 can't replace you


This GPT update delivers a striking leap in performance. /unsplash

GPT-4 can even fix bugs in code by itself. Should we worry that it will evolve on its own and eventually develop autonomous consciousness that replaces humans? The answer: not currently. But it may only be a matter of time before GPT-4 approaches "perfection."

The claim that programmers will be made obsolete may no longer be alarmist.

Programming was once close to an "iron rice bowl": a high technical threshold, creative work, high income. The arrival of ChatGPT has changed that.

In the past few days, GPT has been upgraded to its fourth generation: GPT-4.

The previous generation, ChatGPT, already showed remarkable semantic understanding. Although that ability was limited to text, some companies were already using ChatGPT-based technology to replace the work of junior programmers.

GPT-4 not only understands text better, it can also understand images, and can even read some internet memes.

And that is not all GPT-4 can do. In a demo video released by its developer, OpenAI, GPT-4 generated a website page in under 10 seconds from a very rough hand-drawn sketch.

Ten seconds is a speed no human can match, yet AI can. And it reached this level in just over 100 days.

How powerful will the next generation of GPT be? The prospect is exciting, but it also sends a chill down the spine.

GPT-4, omnipotent?

The previous generation of GPT was not the first; its version number was actually 3.5. But because its text comprehension was strong, and its conversational performance especially fluent, it became known as "ChatGPT."

Compared with the many AI assistants derided as "artificial stupidity," ChatGPT gives far more human-like answers, but it is not without flaws. Some users asked ChatGPT: "What is the story of Lin Daiyu thrice beating the White Bone Demon?"

ChatGPT earnestly made up a story in response, one that reads like a plot lifted from fan fiction.

What is the story of Lin Daiyu thrice beating the White Bone Demon? / Zhihu answer @Lian Shilu

Ask GPT-4 the same question, however, and it reminds the user not to mix up the plots of Dream of the Red Chamber and Journey to the West.

ChatGPT rambles like a drunk, while GPT-4 stays sober. / Zhihu answer @Lian Shilu

If you think GPT-4's comprehension stops at the level of sentences, you are mistaken. GPT-4 can also understand code.

In the GPT-4 product video, OpenAI's president pasted a roughly 10,000-character piece of code directly into GPT-4 and asked it to fix a bug. GPT-4 found the bug in seconds and laid out the solution point by point. A human programmer searching by hand could never match that efficiency.

With this upgrade, OpenAI has also expanded GPT-4's input limit to 25,000 characters, 8 times the limit of ChatGPT. This means GPT-4 can process far more information at once.
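As a rough illustration of what such an input cap means in practice, here is a minimal sketch that splits an oversized document into pieces that each fit under the limit. The 25,000-character figure comes from the article; the helper name is hypothetical, not part of any real API.

```python
# Minimal sketch: split a long document into chunks that each fit
# under a fixed character limit before handing them to a model.
# LIMIT mirrors the 25,000-character cap cited in the article.

LIMIT = 25_000

def chunk_for_model(text: str, limit: int = LIMIT) -> list[str]:
    """Split text into consecutive chunks of at most `limit` characters."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]

report = "x" * 60_000            # stand-in for a long financial report
chunks = chunk_for_model(report)
print(len(chunks))               # 3 chunks: 25k + 25k + 10k
```

Real services meter input in tokens rather than raw characters, so a production version would count tokens instead; the chunking logic stays the same.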

GPT-4's most striking new capability is image understanding.

In the product video released by OpenAI, a developer showed GPT-4 a picture of an iPhone being charged through a VGA computer connector and asked GPT-4 where the joke was. After analyzing it, GPT-4 concluded:

"The funny thing about this image is that it plugs a large and outdated VGA port into a small, modern smartphone charging port, which is ridiculous."

GPT-4 is humorous.

What's more, it can combine its understanding of pictures and text and process both together.

For example, given statistical charts and graphs, GPT-4 can efficiently draw conclusions, spot problems, and report back to the user quickly. Upload a screenshot of an entire paper, and it can summarize the paper's core content in a matter of seconds.

In the future, GPT-4 could analyze large companies' financial reports and sales data, even when the charts are complex, the dimensions numerous, and the volume of data large, as long as the input stays within the 25,000-character limit.

Seen this way, GPT-4 seems poised to displace a considerable share of knowledge workers: programmers, financial analysts, data analysts, even music creators (GPT-4 can write songs on its own) could all face layoffs.

At the very least, junior programmers will be eliminated. /Weibo screenshot

GPT's performance is leaping forward. How many opportunities are left for humans?

The question is: if GPT-4 can fix bugs in code by itself, should we fear that it will evolve on its own and eventually develop autonomous consciousness that replaces humans?

The answer: not for now, because GPT-4 is not flawless either.

GPT-4 is a multimodal model: it can handle two modalities, text and images, a clear improvement over the single-modal ChatGPT, which handles only text. But it stops there.

At least for now, it cannot analyze audio or video. Subsequent versions will inevitably cover these modalities to extend the AI's understanding.

Understanding sound is in fact more complicated. The tone and pauses within a sentence carry information, and there are also non-verbal sounds (wind, impacts) and the problem of identifying a sound's distance and direction, all of which remain to be cracked. Video is more complex still, and mastering these technologies will inevitably take longer.

Of course, this is only a matter of time.

Multimodality is akin to sensory fusion. /pexels

At present, GPT-4 also has an obvious problem: it is prone to "AI hallucinations," generating wrong answers and making logical errors.

The so-called "AI hallucination" refers to a model's overconfidence: the generated content is irrelevant to, or unfaithful to, the source content provided, sometimes producing answers that sound plausible but are incorrect or absurd, like the "Lin Daiyu thrice beats the White Bone Demon" Q&A mentioned above.

He Baohong, director of the Institute of Cloud Computing and Big Data at the China Academy of Information and Communications Technology, pointed out in an interview: "ChatGPT's hallucinations come from two sources: the training data itself, and the training method. AI is trained on massive amounts of data, so this drawback mirrors the problem with big data: the data is precise, yet wrong."

If we rely too heavily on AI, the inaccuracy caused by hallucinations will clearly disrupt the normal operation of some businesses. That means humans must review the analysis AI feeds back, a responsibility and necessity that AI cannot take over.

AI hallucination: a bug even big data cannot avoid. /pexels

Fortunately, GPT's iterations so far have been limited to passive response. That is, whether ChatGPT or GPT-4, their reading of information happens only as feedback to human questions; they do not actively judge or analyze on their own.

This "passive" attribute is critical. On the one hand, it keeps AI in a subordinate position, passively solving the problems humans pose. This should become a red line the AI industry never crosses; otherwise it will face serious trouble: AI self-evolution, questions of AI rights and welfare, AI crossing firewalls to conduct surveillance, even spawning hacking...

On the other hand, this places higher demands on humans, for example, the ability to ask good questions.

GPT-4 analyzes according to instructions: only when the instruction or question is clear and specific can it give a targeted answer. Ask a shallow question, and the feedback will be as superficial as an encyclopedia entry.
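What "asking a good question" looks like in practice can be sketched as follows. The role/content message structure loosely follows the common chat-API format, but `build_prompt` is a hypothetical helper, and the analyst scenario is invented for illustration:

```python
# Sketch: turn a vague question into a specific, instruction-style prompt
# by attaching explicit context and constraints. The role/content pairs
# mimic the widely used chat message format; nothing here calls a real API.

def build_prompt(question: str, context: str, constraints: list[str]) -> list[dict]:
    """Compose a chat-style message list with explicit context and constraints."""
    system = "You are an analyst. " + " ".join(constraints)
    user = f"Context:\n{context}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_prompt(
    question="Which quarter had the weakest sales, and why?",
    context="Q1: 120, Q2: 95, Q3: 130, Q4: 140 (thousands of units)",
    constraints=["Answer in two sentences.", "Cite the figures you use."],
)
print(messages[0]["role"])  # system
```

The point is the contrast: "tell me about sales" gets an encyclopedia-style reply, while supplying the data and the constraints steers the model toward a targeted answer.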

The leap forward of AI has put forward higher requirements for human insight capabilities. /pexels

One wake-up call: Morgan Stanley, the well-known financial services company, has already put GPT to use in its actual business.

Morgan Stanley's staff face a vast content library of hundreds of thousands of pages of investment strategies, market research and commentary, and analyst insights. Most of this information is stored as PDFs on internal sites, so finding the answer to a specific question means navigating a great deal of material, and retrieval is time-consuming and laborious.

GPT's efficient information processing has effectively eased this part of the workload.
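The shape of such a workflow, find the relevant document first, then ask the model about it, can be sketched in a few lines. Real deployments use semantic (embedding-based) search; this sketch uses simple word-overlap scoring purely for illustration, and the file names are invented:

```python
# Minimal sketch of a "retrieve, then ask" workflow over a document
# library. Production systems use embedding-based semantic search;
# word-overlap scoring here is a stand-in for illustration only.

def score(query: str, doc: str) -> int:
    """Count how many distinct query words appear in the document."""
    doc_words = set(doc.lower().split())
    return sum(w in doc_words for w in set(query.lower().split()))

def best_match(query: str, docs: dict[str, str]) -> str:
    """Return the name of the document that best matches the query."""
    return max(docs, key=lambda name: score(query, docs[name]))

library = {
    "strategy.pdf": "long term equity investment strategy for pension funds",
    "commentary.pdf": "daily market commentary on rates and currencies",
    "insight.pdf": "analyst insight on semiconductor supply chains",
}
print(best_match("equity investment strategy", library))  # strategy.pdf
```

The retrieved document (or the relevant excerpt) would then be sent to the model along with the user's question, keeping the total input under the model's context limit.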

But it also means that junior information-processing positions will be eliminated, and those who remain must have stronger insight and the ability to ask good questions that guide GPT.

So for now, humans need not fear being replaced by AI. But we cannot lie flat either; we must keep raising our own level, so that AI cannot replace us.

Author | Titan

Edit | Yan Fei

Proofreading | Lai Xiaoni

Typesetting | Han Bofei
