
[Science Power] Smarter than the smartest: can AI do it?

Author: China Xiaokang Network

Recently, Elon Musk, the well-known American entrepreneur, made a bold prediction in the field of artificial general intelligence (AGI): by the end of next year or the year after, artificial intelligence (AI) may be smarter than the smartest humans.

[Image source: Xinhua News Agency]

Looking back over the past year and a half, the AI field has seen several major breakthroughs, exemplified by chatbots and video generation tools, that have pushed AI technology to unexpected heights.

AI will be "smarter than the smartest"

Musk said on social media on April 8 that by the end of next year or in 2026, new artificial intelligence models may surpass human intelligence and become "smarter than the smartest."

Musk's interview with Nicolai Tangen, CEO of Norges Bank Investment Management, was published on the social media platform X the same day. In the interview, Musk predicted that artificial general intelligence may surpass human intelligence within two years, or even by next year, much earlier than the 2029 timeline he gave last year.

He also mentioned that last year the main constraint on AI development was a shortage of chips, particularly Nvidia's, which also held back the training of Grok 2, the AI model from his startup xAI.

He added that the chip shortage is easing, but a new challenge has emerged: the power supply, which will be a major bottleneck for the AI industry over the next year or two.

Musk predicted that if the supply of electricity and hardware can keep pace with technological development, it will not be a problem for a new generation of AI models to surpass the intelligence of any individual human by the end of next year. He also said that the upgraded AI chatbot Grok-1.5 is expected to finish training by May, and claimed it should be better than OpenAI's GPT-4.

The "singularity" may appear in 5 years

Currently, the most advanced AI systems developed by scientists are considered "narrow AI": based on their training data, they may be more capable than humans in a single domain, but they cannot surpass humans across broader fields. These narrow AI systems include machine learning algorithms, ChatGPT-style large language models (LLMs), and others, and they struggle to reason and understand context the way humans do.

However, Ben Goertzel, an American computer scientist and futurist, points out that AI research is in a period of exponential growth, and there is evidence that artificial general intelligence (AGI), with human-like capabilities, is achievable. This hypothetical point in the development of artificial intelligence is called the "singularity."

The term "singularity" originally referred to the starting point of the universe, which was transformed through the Big Bang into vast mass and energy and eventually evolved into everything that exists today. In recent years, with the rapid development of artificial intelligence technology, many people have come to believe that a technological "singularity" may be on the horizon.

Goertzel argues that humanity is most likely to create the first AGI in 2029 or 2030, though it could come as early as 2027. If such an AI is designed to be able to access and rewrite its own code, it may evolve into a higher-order artificial superintelligence (ASI), that is, an artificial intelligence possessing all the cognitive and computational power of human civilization.

Lee, a professor in the Department of Electrical and Computer Engineering at Seoul National University, noted at the Boao Forum for Asia in March this year that in a 2017 survey of AI scientists, most respondents predicted the singularity would arrive between 2045 and 2090, with 2060 the most likely. "But if you ask the same question again now, the answer may be different; I think it will be five years from now," he said, adding that today almost all resources, including funds, talent, and policy, are tilted toward artificial intelligence, which is unprecedented.

Yuan Hui, chairman and CEO of Xiaoi Group, also believes the singularity is approaching ever faster: "Weak artificial intelligence assists humans with some basic work, while strong artificial intelligence can generate videos, music, and even code. Over the past three years I have repeatedly encountered scenarios beyond my imagination, and people generally feel that this moment is arriving faster and faster."

[Image source: Kale Pictures, photo by Ning Ying]

Humans and AI "co-evolve"

Despite its rapid development, artificial intelligence is still constrained by several bottlenecks.

According to the Financial Times, the pace of AI development has been slowed by a bottleneck in the supply of microchips, especially those produced by the American company Nvidia, which are essential for training and running AI models. Musk said that while these constraints are easing, the new models are testing the hardware and power-grid capacity of many data centers.

Zhu Rongsheng, a distinguished expert at Tsinghua University's Center for International Security and Strategy, said that the human brain consumes far less power than artificial intelligence does in operation, and a gap remains between artificial intelligence and human intelligence. In the long run, however, artificial intelligence, which still lags behind the human brain in algorithms and data, may eventually surpass human intelligence; the hardware and software needed to reach that goal are chips and algorithms, respectively.

In addition, artificial intelligence has given rise to many problems and risks. AI face-swapping, using AI to "resurrect" the deceased, and voice-cloning fraud, for example, show that the "double-edged sword" effect of this advanced technology is gradually emerging.

In a 2023 survey of scientists by the British journal Nature, 30% of respondents admitted to using AI tools to help with writing. According to the Popular Science website, many professional journals are now littered with "gibberish" generated by AI tools, and many articles show obvious traces of AI use.

Zeng Yi, a researcher at the Institute of Automation of the Chinese Academy of Sciences and a member of the United Nations High-Level Advisory Body on Artificial Intelligence, said that artificial intelligence in the true sense has not yet arrived. He believes that humans and artificial intelligence should "co-evolve." In the future, humans will need to change, not only to make artificial intelligence safe, but also to learn from and reflect on history and nature; otherwise, it will not be artificial intelligence that poses catastrophic risks to humanity, but humanity itself. "But I am still optimistic about the future, because a superintelligent AI should also be super altruistic, so I believe we may still have a chance."

(Compiled by China Xiaokang Network from Xinhua News Agency, China Youth Network, Shangguan News, China Business News, and other sources)

END

Source: China Xiaokang Network

Author: Fenghua

Review: Gong Zimo
