
AI Explosion: Is Humanity Ready?

Author: AI self-sizophistication

Thousands of plateaus of science and technology

Artificial intelligence is no longer a myth; it has long since become part of people's daily lives, and speculation about the future has pushed its pros and cons to the forefront of public debate around the world. As ChatGPT became the fastest-growing consumer application in history, public anxiety began to spread.


From artificial intelligence to an intelligence that may surpass the human brain. Photo: © AP - Frank Augstein

A video that went viral online in China shows a woman angrily smashing a reception robot in the lobby of a hospital in Xuzhou, Jiangsu Province, for reasons unknown. Under the blows of her club, the robot's parts scattered across the floor, while the front-desk staff nearby backed away at the sight. The woman appeared emotionally distraught, pointing at the robot and yelling accusations at it.

Although the cause of the incident could not be determined, people joked that the woman had "fired the first shot in the war between humans and artificial intelligence."

According to some accounts, the woman was crying in grief because a family member had been hospitalized; seeing this, the robot rushed over to comfort her and asked whether she wanted to hear a joke, and this ill-timed reassurance triggered her uncontrollable anger. Beyond watching the spectacle, many commenters on the video made pointed observations: "This is why artificial intelligence can never replace humans, because machines have no emotions, and even when they appear to, those emotions are given by humans and require human programming"; "When it sees someone crying, the robot steps forward to comfort them, but its intelligence cannot yet judge why the person is crying, and so cannot act in a way that fits the scene." But will intelligent robots always remain emotionally inferior to humans, or dependent on human training?

Not long ago, a technology company in Beijing launched a "smart artifact" said to ease the pain of long-distance lovesickness: a device that remotely transmits the strength and manner of a real person's kiss, and even body temperature and sound, offering users a bionic form of intimate contact. Users download an app, pair with a partner, and then feel the other person's kiss in real time through a silicone attachment held in the hand. The device is not limited to couples; strangers can also be randomly matched in the app's social square and, if the conversation goes well, invited to kiss. The product sparked heated discussion on Chinese social platforms, drew coverage from Reuters, and prompted reflection on ethics and the privacy of personal biometric data.

Breaches of biometric data privacy have already produced striking real-world cases. This spring, police in the Chinese city of Baotou disclosed an AI fraud case in which the victim, the legal representative of a technology company in Fuzhou, was asked by a "friend" to transfer 4.3 million yuan. Because he had already verified the friend's identity over a WeChat video call, he made the transfer, only to learn by phone afterwards that he had been deceived. The scammers had used AI face-swapping and voice-mimicry technology to dispel his doubts on the other end of the video call.

Faced with the explosive spread of artificial intelligence into everyday life, and the many problems it raises, governments have begun to act. In the United States, a recent Reuters/Ipsos poll found that a majority of Americans surveyed believe the rapid development of AI technology could jeopardize the future of humanity; more than two-thirds are concerned about its negative effects; and 61% believe AI could threaten human civilization.


At a U.S. Senate hearing in May, OpenAI CEO Sam Altman warned senators: "If this technology goes wrong, it can go quite wrong. We want to be vocal about that risk, and we want to work with the (U.S.) government to prevent that from happening."

The U.S. federal government has recently been soliciting opinions on AI regulation, hoping to control the dangers of AI through law and to determine how to reduce threats to privacy, freedom, and due process.

The UK plans to divide responsibility for managing AI among its human rights, health and safety, and competition regulators; Prime Minister Sunak has said the government needs "sovereign capabilities" in AI to manage the security risks facing the country.

The Chinese government is moving to regulate AI domestically: in April it unveiled draft measures to govern generative AI services, under which tech companies would be expected to submit safety assessments to the government before launching products to the public. At the same time, Beijing will support leading companies in building AI models to challenge ChatGPT. In the EU, key lawmakers have agreed on a stricter draft of rules to control generative AI and have proposed banning facial surveillance; the European Parliament also voted this month on the draft EU Artificial Intelligence Act.

The rapid pace of AI development has put some of the experts who have studied it most closely on alert. Geoffrey Hinton, the computer scientist widely known as one of the "godfathers of artificial intelligence," worked at Google for ten years and has now left the company.

Hinton's work is considered crucial to the development of contemporary AI systems: a paper he co-authored in 1986 is regarded by the industry as a milestone in the development of the AI technique known as "neural networks."

In 2018, he received the Turing Award for his breakthroughs in the field of artificial intelligence. Recently, he joined a growing number of technology leaders publicly warning that AI machines could become more intelligent than humans and might eventually control the planet.

Specifically, Hinton said: "For the last 50 years, I've been trying to build computer models that learn the way the brain learns. I always believed the brain was better than the computer models we had, and I always thought that by making computer models more like the brain, we would improve them.

A few months ago, however, I had an epiphany: I suddenly realized that maybe the computer models we have now are actually better than the brain. If that's the case, then perhaps quite soon computers will be better than us.

So superintelligence may arrive much earlier than I expected; it won't be very far in the future."


"For example, if you want to go to Europe, you first create the sub-goal of getting to the airport, and then you solve that sub-goal. For almost any complex goal, you can create sub-goals first. And it turns out that for almost everything you want to do, there is one sub-goal that always pays off: gaining more control. If you gain more control, it's easier to achieve your other goals. So once AIs are able to create sub-goals, and they are very smart, I think these machines will soon realize that if they gain more control, they can achieve their goals more easily. And once a machine wants to take control, things start to go against people."

"AIs will be very skilled manipulators, because they will have learned from all the manipulation humans have ever done. So AI will be much better at manipulating humans than humans are. They will be able to manipulate us into doing whatever they want, as easily as you manipulate a two-year-old, and it's not a problem that can be solved by keeping the robot away from the deadly button."

"With climate change, you could say: if you stop burning carbon, everything will be fine. For some, that is not politically palatable, but at least you can see the general outline of a solution. If we get really efficient solar energy, then we can stop burning carbon and everything will be fine. But I don't see a similar solution for AI risk. I don't see a solution of the form 'just do this and everything will be fine.'"

"As for the existential threat to humanity: the development of artificial intelligence could wipe us all out, just like nuclear weapons. And precisely because nuclear weapons have the potential to wipe out everyone, people around the world were able to work together to prevent that from happening. With the threat from AI, I think perhaps the United States, China, Europe, and Japan can work together to avoid this existential threat. But the question is, what should they do? In any case, I don't think halting development is feasible."


Other AI experts believe that talk of existential threats from AI distracts people from more pressing issues. Julia Stoyanovich, associate professor of computer science and director of the Center for Responsible AI at New York University, said: "In the field of artificial intelligence, we have a lot of misplaced trust, and misplaced mistrust. That is also a testament to the success of these technologies: they look, and really do sound, like humans, as if they had human intentions."

"However, if humans morally exempt themselves from responsibility for controlling artificial intelligence, there will be big problems. Some people say: 'Now that I have created this intelligent being, it has its own ideas and can make its own decisions; it is no longer under my control.' That is a wrong, very, very dangerous line of thinking."

"We need everyone to work together to control AI systems, and one important component is regulation and law: we need regulation to put guardrails on the use of these systems. Before deploying AI systems, we need to think very carefully about who will be affected and in what way... for example, in areas such as recruitment and employment, predatory lending, access to housing, and access to opportunities." (Source: https://www.rfi.fr/)
