
The turning point in artificial intelligence has arrived. Is it time to press pause?

There is a "shoe shiner" theory in the stock market, which says that if even the shoe shiner is talking about the stock market and recommending stocks to others, it must not be far from a crash. Shoe shiners are not easy to find nowadays, but they do talk about GPT all over the city. Unlike the stock market, artificial intelligence is not only not in danger of crashing, but is setting off a race.

Artificial intelligence is advancing at superhuman speed. Since the release of ChatGPT last November, I have learned how to keep from overcooking salmon. Over the same period, OpenAI's chatbots learned how to instantly turn a hand-drawn sketch into a working website, build a video game resembling the classic Pong in 60 seconds, pass the bar exam, and generate recipes from photos of the leftovers in a refrigerator. Artificial intelligence readily evokes the idea of a thinking machine. Yet no machine can think, and no software is truly intelligent.

While I'm not convinced that ChatGPT is intelligent (large language models produce no real knowledge, only the illusion of intelligence), OpenAI's ChatGPT, Google's Bard, and Microsoft's Sydney are marvels of machine learning. Roughly speaking, they ingest vast amounts of data, search it for patterns, and grow ever more adept at generating statistically probable outputs, such as seemingly human language and thought. These programs have been hailed as the first light on the horizon of artificial general intelligence (AGI): the long-prophesied moment when machine thinking surpasses the human brain not only in processing speed and memory capacity, but in intellectual insight, artistic creativity, and every other distinctly human ability.

On March 22, researchers from Microsoft published a new paper on arXiv claiming that "beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."

Over the past 30 years, only a handful of products have truly disrupted the tech industry, making everything that came before look like lumbering dinosaurs. ChatGPT is comparable to those epoch-making products, just as smartphones and social networks were when they first appeared.

First, ChatGPT gives the general public its first firsthand experience of just how powerful modern AI has become. Virtual assistants like Siri and Alexa have used artificial intelligence for years, but they became the butt of jokes because they are not particularly useful. ChatGPT's capabilities are so striking that many people are now seriously considering how to integrate these tools into their daily lives and careers.

So far, most of the progress in AI applications has happened in the background. ChatGPT delivers value to consumers in a far more visible way, not just to back-end developers or digital-marketing professionals. Because the system can write code and perform other business tasks, it lowers the barrier to entry for many activities while raising the productivity of experts, a tipping point at which the nature of work itself is transformed.

Second, ChatGPT's anthropomorphic quality has contributed greatly to its popularity. Say hello to this latest chatbot, built on some of the most advanced AI algorithms, and you may wonder from time to time whether you are talking to a person or a machine. It apparently has a memory: the bot can recall earlier remarks in a conversation and repeat them back to you. According to many user reports, both ChatGPT and Sydney, the Bing chatbot Microsoft built on language-based AI technology such as ChatGPT's, even seem to display emotion in conversation.

Social science research shows that people tend to interact with technology as if it were human. One reason ChatGPT is being tested at society-wide scale is to supply OpenAI with user feedback for improving the algorithm: through a process known as reinforcement learning from human feedback (RLHF), the model is tuned to generate text that human evaluators are more likely to rate highly.
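
For readers who want a concrete picture, here is a toy sketch of the RLHF idea in Python. It is heavily simplified and purely illustrative: the canned replies and the reward function are hypothetical stand-ins for a real language model and a real learned reward model, not OpenAI's actual training code.

    import random

    # Hypothetical stand-in for a language model's possible outputs.
    CANNED_REPLIES = ["curt reply", "helpful and polite reply", "off-topic reply"]

    def human_preference_reward(reply):
        """Stand-in for a reward model trained on human preference ratings."""
        return 1.0 if "helpful" in reply else 0.0

    def train(steps=2000):
        """Reinforce the replies that the (simulated) human raters prefer."""
        weights = {reply: 1.0 for reply in CANNED_REPLIES}
        for _ in range(steps):
            reply = random.choices(CANNED_REPLIES,
                                   weights=[weights[r] for r in CANNED_REPLIES])[0]
            # Well-rated replies gain weight, so they are sampled more often.
            weights[reply] += human_preference_reward(reply)
        return weights

    if __name__ == "__main__":
        print(train())  # the "helpful" reply ends up with by far the largest weight

Run long enough, the "helpful" reply dominates the sampling weights; that feedback loop, scaled up enormously, is the essence of steering a model toward text that human raters prefer.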

Third, ChatGPT brings with it a deep-seated fear: if AI is improving this quickly, whose jobs are safe, and when will robots replace us en masse? "Artificial Intelligence Passes U.S. Medical Licensing Exam." "ChatGPT Passes Law School Exams Despite Mediocre Performance." "Would ChatGPT Get a Wharton MBA?" Headlines like these have recently touted (and often exaggerated) ChatGPT's successes, following a long tradition of pitting artificial intelligence against human experts, as when AlphaGo defeated Lee Sedol at Go in 2016. The subtext of these headlines is more alarmist still: AI is coming for your job.

We have reached a tipping point in artificial intelligence, and now is a good time to pause and take stock. How can we use these tools ethically, safely, and in ways that preserve human dignity? Norbert Wiener, the author of Cybernetics, wrote: "The first industrial revolution, the revolution of the 'dark satanic mills,' was the devaluation of the human arm by the competition of machinery... The modern industrial revolution is similarly bound to devalue the human brain, at least in its simpler and more routine decisions." Is the era of human brain devaluation upon us?

Will machines make most people redundant? Or will they give everyone more free time and fuller lives? As for how society should respond when AI finally arrives, Wiener believed "the answer, of course, is to have a society based on human values other than buying or selling."

(The author is a professor at the School of Journalism and Communication, Peking University)

Published in the 1086th issue of China Newsweek magazine on April 3, 2023

Magazine headline: Does GPT-4 herald an era of human brain devaluation?

Author: Hu Yong
