
OpenAI CEO: Musk gave me important inspiration

Author: Fortune Chinese Network

At a speech in London, OpenAI co-founder and CEO Sam Altman was met with both protests from the crowd and admirers demanding selfies. CREDIT: WIN MCNAMEE—GETTY IMAGES

Sam Altman, co-founder and CEO of OpenAI, is about to speak in the 985-seat underground auditorium at University College London. The line to get in stretches from the doorway, up several flights of stairs to the street, then winds around a city block. Farther along stand six young men holding signs calling on OpenAI to abandon its efforts to develop artificial general intelligence, an AI system that can match the human brain in most cognitive tasks. One protester with a megaphone accused Altman of having a messianic complex (seeking to redeem himself by saving others and to prove his worth by playing the savior), saying he risked destroying humanity in pursuit of his own self-worth.

Accusing Altman of a messianic complex may be a bit much. But inside the auditorium, Altman was treated like a rock star. After the talk, he was surrounded by admirers who asked to pose for selfies and sought his advice on how startups could build better "moats" (structural competitive advantages). "Is this normal?" an incredulous reporter asked OpenAI's spokesperson as we stood in the crowd around Altman. "It's been pretty much like this everywhere we've gone on this trip," the spokesperson said.

Altman is currently on an OpenAI "world tour," spanning cities from Rio and Lagos to Berlin and Tokyo, to discuss OpenAI's technology and the broader potential impact of AI. Altman has made such a tour before. But this year the AI chatbot ChatGPT has gone viral, becoming the fastest-growing consumer software product in history, which gives the tour the feel of a victory lap. Altman is also meeting key government leaders: after his speech at University College London, he will have dinner with British Prime Minister Rishi Sunak and then meet EU officials in Brussels.

What did we learn from Altman's talk? Among other things: he credits Elon Musk with making him aware of the importance of investing in deep technology; he believes advanced AI will reduce, not widen, global inequality; he compares educators' fears of OpenAI's ChatGPT to earlier generations' alarm at the advent of the calculator; and he has no interest in colonizing Mars.

Altman called for government regulation of AI in his testimony before the U.S. Senate and recently co-wrote a blog post calling for an organization similar to the International Atomic Energy Agency to oversee the development of advanced AI systems worldwide. He said regulators should strike a balance between America's traditional laissez-faire approach to new technologies and Europe's aggressive regulatory stance, and that he would like to see open-source AI development flourish. "There are calls to stop the open-source movement, and I think that would be a real shame," he said. But "if someone cracks the code and develops a super-AI (however you want to define it)," he warned, "it may make sense to set global rules."

"For the largest systems that could develop superintelligence, we should at least take it as seriously as nuclear material," Altman said.

The OpenAI CEO also warned that technologies such as his own company's chatbot ChatGPT and text-to-image tool DALL-E make it easy to generate misinformation at scale. Altman is less concerned about generative AI scaling up existing disinformation campaigns than about its potential to produce tailored, individually targeted disinformation. He noted that OpenAI and other companies developing proprietary AI models can build better guardrails against such activity, but said open-source development could undermine that effort, since it lets users modify software and strip out the guardrails. While regulation "could help," Altman said people need to become critical consumers of information, comparing the moment to the release of the image-editing software Adobe Photoshop, when people first worried about digitally doctored photos. "The same thing will happen with these new technologies," he said. "But I think the sooner we make people aware of this, the better, because it resonates much more broadly on an emotional level."

Altman's view of AI is more optimistic than ever. While some believe generative AI systems will exacerbate global inequality by depressing wages for average workers or causing mass unemployment, Altman said he believes the opposite. AI, he argued, can drive global economic growth and raise productivity, lifting people out of poverty and creating new opportunities. "I'm very excited about this technology, which can recover the productivity lost over the past few decades and go beyond catching up," he said. His basic argument: the two biggest global limiting factors are the cost of intelligence and the cost of energy. If both can be dramatically reduced, he said, it should help the poor more than the rich. "AI technology will change the whole world," he said.

Altman also said he has come to see superintelligence differently. Some, including Altman himself, have said in the past that while this futuristic technology could pose a serious threat to humanity, it could actually be kept under control. "My old view of where superintelligence was heading was that we were going to build one extremely powerful system," he said, noting that such a system would be inherently very dangerous. "I now think we've found a path forward: we create increasingly powerful tools, and billions or trillions of copies of them are used widely around the world. They help individuals be more effective and accomplish far more; an individual's output can increase dramatically. What superintelligence enables is not just the capability of the largest single neural network, but all the new science we discover and everything we create."

When asked what he had learned from various mentors, Altman singled out Elon Musk. "Of course, from Elon I learned what is possible to do, and that you don't need to accept that it can't be done; the importance of hard research and hard technology is not something to ignore, and that's been valuable," he said.

Altman also answered a question about whether AI could help humans settle Mars. "Look, I don't want to go live on Mars, it sounds horrible, but I'm happy other people do," he said. He suggested sending robots to Mars first to engineer the planet and make it more habitable for humans.

Outside the auditorium, the protesters continued chanting against the OpenAI CEO. But they also paused for serious conversations with curious attendees who stopped to ask about their protest.

"What we're trying to do is raise awareness that AI does pose threats and risks to humanity, near-term ones around jobs and the economy, bias, misinformation, social polarization and ossification, but also a slightly longer-term, though not that long-term, more existential threat," said Alistair Stewart, a 27-year-old graduate student in political science and ethics at University College London who helped organize the protest.

Stewart cited a recent survey of AI experts in which 48% said they believe advanced AI systems have a 10% or greater chance of causing human extinction or other catastrophic outcomes. He said he and the others protesting Altman's appearance were calling for a moratorium on developing AI systems more powerful than OpenAI's GPT-4 large language model until researchers "solve the alignment problem," a term for finding a way to prevent future super-AI systems from taking actions that could harm human civilization.

The call for a moratorium echoes an open letter published by the Future of Life Institute in late March, signed by thousands, including Musk and several prominent AI researchers and entrepreneurs.

Stewart said his group wants to raise public awareness of the threat posed by AI so the public will pressure politicians to regulate the technology. Earlier in the week, protesters from a group calling itself Pause AI also began demonstrating outside the London office of Google's DeepMind, another advanced AI research lab. Stewart said his group is not affiliated with Pause AI, though the two share many of the same goals. (Fortune Chinese Network)

Translator: Zhong Huiyan - Wang Fang
