laitimes

Does AI need self-awareness to be superintelligent? Experts have divergent views

author:TechMind

The rise of artificial intelligence proves one thing, and that is that the technology may be smarter than the best-known experts imagined.

Microsoft researchers were surprised to find that GPT-4, ChatGPT's most advanced model of language to date, was able to come up with ingenious solutions to puzzles, such as how to stack a book, nine eggs, a laptop, a bottle and a nail in a stable way. One of the researchers told Wired that he was surprised when he asked GPT-4 to draw a unicorn in an obscure coding language.

Another study showed that AI avatars can manage their own virtual towns with little human intervention.

Does AI need self-awareness to be superintelligent? Experts have divergent views

These capabilities may offer a glimpse of Artificial General Intelligence (AGI): the ability of technology to enable complex human capabilities like common sense and awareness. While AI experts interviewed by Insider have different views on what AGI really looks like, they agree that progress is being made toward a new form of intelligence.

Ian Hogarth, co-author of the State of AI annual report and invested in dozens of AI startups, defines AGI as "god-like AI" consisting of a "superintelligent computer" capable of "learning and developing autonomously" and understanding context without human intervention. He told Insider that, in theory, technology with AGI could foster self-awareness and "become a force that we can't control or understand."

An AGI researcher at OpenAI told Insider that AGI has the potential to resemble the killer robot in the 2023 sci-fi movie M3GAN, in which a realistic, AI-powered doll refuses to shut down, pretends to sleep, and develops its own moral complexity.

Does AI need self-awareness to be superintelligent? Experts have divergent views

But Tom Everitt, an AGI security researcher at DeepMind, Google's artificial intelligence unit, said machines don't have to be self-aware to be superintelligent. He told Insider: "I think one of the most common misconceptions is that 'consciousness' is necessary for intelligence. "Model 'self-awareness' is not a prerequisite for these models to match or enhance human-level intelligence." ”

He defines AGIAGI as an AI system that can solve any cognitive or human task in a way that exceeds its training limits. In theory, AGI could help scientists develop treatments for diseases, discover new renewable energy sources, and solve "humanity's biggest mystery."

Everitt said: "If done correctly, AGI can be an extremely powerful tool to drive breakthroughs and change our daily lives. AI experts are divided on when AGI will become a reality.

Does AI need self-awareness to be superintelligent? Experts have divergent views

Geoffrey Hinton, known as the "godfather of AI," told CBS that AGI could be available within five years. Earlier this month, he told the BBC that AI chatbots could soon be smarter than humans.

Hogarth said no one knows exactly how far industry is from creating god-like AI. He added that we really don't know what might happen with tools like OpenAI's GPT-4, Google DeepMind's Gato, or the open-source project AutoGPT. Wired magazine notes that AutoGPT is a virtual agent running on GPT-4 that can order pizza and run marketing campaigns on its own.

He said we're already seeing some hints of AGI, such as malicious use of deepfake technology and machines that can outperform chess grandmasters.

But these are only hints: "AI systems still lack long-term planning capabilities, memory, reasoning, and understanding of the physical world around us." "We still have a lot of work to do to figure out how to give these systems these capabilities." ”

If its risks are not addressed, AGI can make humans obsolete.

Does AI need self-awareness to be superintelligent? Experts have divergent views

Experts say an important part of establishing an AGI involves understanding and addressing its risks so that the technology can be deployed safely.

An AI study found that when researchers increased the amount of data fed into language models, those models were more likely to ignore human instructions or even express reluctance to shut down. This finding suggests that AI may become so powerful at some point that humans will not be able to control it.

If that happens, Hogarth predicts AGI could "lead to human extinction or annihilation."

That's why researchers like Everitt are studying AGI safety to predict "existential questions" about how humans maintain control over AGI. Google's DeepMind takes ethics and safety research very seriously, he said, to "ensure that we take a responsible approach as we develop increasingly powerful AI."

To develop AI technology responsibly, Hogarth said regulation is key. "Regulators should keep an eye on projects like OpenAI's GPT-4, Google DeepMind's Gato or the open-source project AutoGPT," he said. He also said that many AI and machine learning experts have called for AI models to be open-sourced so that the public can understand how they are trained and how they work.

Does AI need self-awareness to be superintelligent? Experts have divergent views

"We need to discuss these big issues early and welcome different perspectives and ways of thinking," Everitt said. He stressed that embracing diverse perspectives is essential to address the issue.

In conclusion, although there are still many unknowns about the specific form of AGI, AI experts agree that we are starting to see some signs of AGI. In the process of establishing AGI, understanding and addressing its risks is critical to ensure that the technology can be applied safely. Regulation and open-source transparency are responsible ways to develop AI, but they also require extensive discussion and diverse thinking.