
Will This Jew Rule AI? Time Interviews OpenAI's CEO: AI Can Make People Richer


Focus:

  • Sam Altman, CEO of OpenAI, has said that if used incorrectly, AI could kill everyone.
  • While advocating for AI's possibilities, Altman urges policymakers to set rules and regulate the technology to mitigate its dangers.
  • Altman advocates a universal basic income plan to reduce inequality, hoping that artificial intelligence will raise people's overall incomes.
  • OpenAI's revenue last year was about $28 million; Altman says the company is not driven by profit, and its future revenue growth will be lower than widely expected.

Tencent Technology News, June 23: The most striking figure in the current artificial intelligence boom is OpenAI CEO Sam Altman. Edward Felsenthal, former editor-in-chief of Time magazine, recently interviewed him. The 38-year-old serial entrepreneur has lately become known for talking about the risks of artificial intelligence even as he champions its possibilities. The new technology, capable of responding naturally to human verbal commands, is revolutionary, and Altman envisions it eventually becoming something like the holodeck in Star Trek.


Altman's company, OpenAI, is only seven years old and currently employs fewer than 500 people. The airy, light-filled lobby of its headquarters in San Francisco's Mission District, with flute and whale sounds piped in, could almost be mistaken for a spa. But in the space of six months, the company brought artificial intelligence into the public eye with its viral product, ChatGPT. Few doubt that OpenAI is the spearhead of this revolution, one that will ultimately change the world, for better or worse; both are possible.

The fastest adopted product

ChatGPT is almost certainly the fastest adopted product in the history of technology. It is also one of the more versatile, able to respond to a huge range of user prompts, from "tell me a joke" to "draft 10 slides with ideas to increase salon revenue." It can write poetry and explain scientific concepts. Altman says he now uses ChatGPT for everyday tasks, like pulling the important messages from his overflowing email inbox or drafting a tweet he is having a hard time wording. Essentially, ChatGPT is a super-powerful autocomplete tool, but it has its limitations, including a disturbing inability to distinguish fact from fiction. OpenAI's warning to this effect, placed below the text input box, hasn't stopped people from using it for homework, investment advice, or even therapy.

Consumer-facing AI products existed before, but ChatGPT's conversational text interface caused a stir. In the days following its release on November 30, 2022, OpenAI employees were glued to their computer screens, posting the ever-growing user numbers in the company's Slack channel. "It just kept going up at a steeper and steeper angle," said Diane Yoon, OpenAI's vice president of people. According to Similarweb, two months later, the number of unique visitors to ChatGPT exceeded 100 million. By comparison, it took Instagram 30 months to reach that level.


Triggering an arms race among tech giants

It has also sparked an arms race. Google issued an internal "code red" and later merged its two artificial intelligence labs, Google Brain and DeepMind, into one unit. Microsoft, which had already invested $3 billion in OpenAI, followed with a reported additional $10 billion. In March, OpenAI raised the stakes again, releasing a more powerful model called GPT-4.


Tempering all this promise is real fear. There is little doubt that AI, like other new technologies, will make some jobs disappear, even if it creates new ones. It also empowers bad actors to inundate us with fake content disguised as truth and synthetic voices that sound like the people we love. Can we still believe what we see or hear? Altman acknowledges the uncomfortable truth that the answer is probably no. "You can no longer believe what you hear on the phone," he said. "We just need to start telling people that day is coming."

In the wrong hands, these tools could cause more serious problems, launching cyberattacks or wreaking havoc on financial markets. And if AI systems can make plans on their own and put them into action, especially plans that are not "aligned" with human values, it is conceivable that they would come to see humans as obstacles to their goals. Altman himself recently joined dozens of tech leaders and scientists in signing a statement ranking the risks of AI alongside pandemics and nuclear war. He said earlier this year that the worst-case scenario would be "everybody dying."

This has become Altman's calling card: urging policymakers to set rules to mitigate the dangers while advocating for the possibilities of AI. "I'm a Midwestern Jew," said Altman, who grew up in St. Louis. "I think that fully explains my exact mental model: being very optimistic and prepared for things to get super bad." Altman's success comes from his ability to adapt to new environments. Throughout his career, that adaptability helped guide him to great wealth in his 20s and 30s; as a partner and later president of Y Combinator, the well-known startup accelerator, he helped launch thousands of new companies. It also leads Altman to believe that humans, as a species, can avoid the worst-case scenarios AI could bring. "Society is able to adapt because humans are much smarter and savvier than many so-called experts think," he said. "We can do that."

Advocating for universal basic income

While emphasizing the risks of AI, Altman believes in moving forward regardless. He is an outspoken advocate of AI regulation, though he has his own opinions about which rules should apply to his company. He is an avowed capitalist who says he holds no equity in OpenAI and has structured the company to cap investors' profits. Because many believe AI will exacerbate inequality, Altman also advocates universal basic income programs. While he and his colleagues acknowledge their limited insight into how the technology will evolve, he expresses confidence that the models will keep improving. "Even the people who create these models don't know what they can and can't do," said Helen Toner, a member of OpenAI's board of directors. "I expect it may be a few years before we really understand everything GPT-4 can and cannot do."

The extent to which we can trust the humans "tuning" these powerful algorithms, their intentions as well as their capabilities, will be one of the big recurring questions in the coming years. In conversations with employees across OpenAI, awareness of AI's dangers was an almost universal theme. That's a far cry from the playbook of tobacco, fossil fuel, and social media executives, who spent years denying possible harms before eventually being forced to acknowledge reality.

Diane Yoon, OpenAI's vice president of people, said OpenAI avoids the word "competitor," an affirmation of the importance of working with others in the field to avoid bad outcomes. When asked about the AI arms race, a company spokesperson rejected the analogy, saying "the whole arms race was not started by us."

Of course, it's hard to deny that OpenAI played a big role in triggering the industry's race. "It is a race," said technology ethicist Tristan Harris, co-founder of the Center for Humane Technology, but collaboration between the key players will be crucial. "We need coordination, because it's not just about making OpenAI safer. That alone won't do anything, because everyone else will only move faster and faster." Harris worries that "advances in capabilities are exponential while advances in safety measures are linear" and that "the launch of AI services is commercially motivated rather than a conscious consideration of the world we want to see."

Altman believes the ChatGPT interface is an improvement over the iPhone interface, and says it was inspired by his childhood love of texting. Giving ChatGPT a "terrible" bot name was deliberate, Altman said: he worries about the temptation to anthropomorphize AI tools, which could blur the distinction between humans and machines. This is another duality: ChatGPT is trained to remind users that it has no opinions of its own, yet its human qualities, its conversational interface and free use of first-person pronouns, were key factors in its rapid popularity.

Despite Microsoft's heavy investment and a shift to a commercial model that can deliver returns of up to 100 times investors' stakes even with the cap, OpenAI still sees itself as a research lab dedicated to its original mission of ensuring that general AI "benefits all of humanity." That mission defines the company's culture. "If this project had started 60 or 70 years ago, it would probably have been a government-funded project," said Brad Lightcap, OpenAI's chief operating officer.

OpenAI's revenue last year was reportedly about $28 million, less than half that of the average car dealership. But Altman says he feels little immediate pressure to make the company's commercial success match its influence. Asked how much time he spends worrying about competition, he said, "You wouldn't believe me, but almost none." He says rival language models like Google's LaMDA, Meta's LLaMA, and Anthropic's Claude don't keep him up at night. "It's completely different from who gets more or less market share," Altman said. "We have to figure out how to manage this and make it work."

A lightning tour of six continents


Shortly after Time magazine's exclusive interview with Altman, he began a five-week whirlwind trip across six continents. He said the purpose of the trip was to get him out of his Silicon Valley office. In a way, it was also a triumphal journey: an attempt to encourage and influence global AI regulation as nation-states wake up to the power of the technology he is leading. Along the way, Altman testified before the U.S. Senate, met with the prime ministers of the United Kingdom and India, and commented on the EU's forthcoming AI Act.

When Altman spoke in a lecture hall at a London university on May 24, the queue to get in wound down the street and around a corner. After the speech, Altman did not slip away into the background; he waded into the crowd of students and journalists, posed for selfies, and gamely answered questions. Outside the revolving door, he had a brief exchange with protesters, one of whom held a sign reading "Stop the Suicide AGI Race." He had no bodyguards or publicists at his side, a very different approach from the likes of Mark Zuckerberg.

As with tech companies before his, there are gaps between what Altman says publicly and what happens behind the scenes. At the event in London, Altman told reporters that OpenAI might decide to "cease operating" in the EU because of the bloc's imminent artificial intelligence act. Yet in meetings with EU officials last year, OpenAI pushed back against wording that would require "general-purpose" AI models such as ChatGPT to comply with the same rules as AI tools the EU deems "high-risk."

In his exclusive interview with Time magazine, Altman expressed deep optimism about society's ability to eventually adapt to AI's risks. To ensure that the person you hear on the phone or see on video is who they claim to be, for example, he foresees a mix of technical and social measures, such as code words or keys to verify identity. He sees the prospect of AI finally taking over the many routine tasks that fill our daily lives while also grappling with prompts such as "find a cure for cancer." "The exciting parts are almost too long to enumerate," Altman said.

Altman has also pondered whether OpenAI did "some really bad things" in creating ChatGPT. He has long been described in the press as a doomsday prepper, always ready with guns, medicine, and gas masks. He rolled his eyes, dismissing the description as exaggerated, but added that he does find survivalism "a fun hobby." "Listen, if something goes wrong with AGI, no bunker can help anyone," he said. "The scary thing is that putting this lever into the world will definitely have unpredictable consequences."

Here is the full text of the interview with Altman by former Time magazine editor-in-chief Edward Felsenthal:

Q: What do you do with ChatGPT in your daily life?

A: One of the things I use it for every day is summarization. I can't really keep up with my inbox anymore, but I built a little thing that has it summarize for me and pull out important content from unknown senders, which has been very helpful. I paste everything in every morning. I also used it to translate an article in preparation for meeting someone next week. And, as a fun thing, I use it to help me draft tweets I'm having a hard time wording.
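As a loose illustration of the kind of "little thing" Altman describes, here is a minimal Python sketch of an inbox-summarizing script, assuming the openai package (v1.x) and an OPENAI_API_KEY environment variable. The prompt, model choice, and file name are invented for this example and are not details of Altman's actual tool.

```python
# Hypothetical sketch: paste in raw inbox text, get back a triage summary
# that surfaces important messages, especially those from unknown senders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_inbox(raw_emails: str) -> str:
    """Ask a chat model to summarize and triage a blob of pasted emails."""
    response = client.chat.completions.create(
        model="gpt-4",  # any capable chat model would do here
        messages=[
            {"role": "system",
             "content": ("Summarize these emails. Highlight the important "
                         "messages, especially any from unknown senders.")},
            {"role": "user", "content": raw_emails},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # "inbox_dump.txt" is a placeholder for however the emails are exported.
    print(summarize_inbox(open("inbox_dump.txt").read()))
```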

Q: Were you surprised that ChatGPT gained popularity with users around the world after its release?

A: We thought it would get people excited, and a lot of people really are excited about the technology. So in one sense it was like, "Wow, these numbers are going crazy. This looks crazy." But I remember a lot of the discussion in the first week was: why hasn't this happened before?

Q: Other AI products have appeared before.

A: I think user experience is very important. Not just the user interface, but the way we tuned the model to have a particular conversational style. It was largely inspired by text messaging. I've been texting for a long time; I'm a texting super-user.

Q: As technology becomes more deeply integrated into our lives, what will the interface of the future look like?

A: You'll be able to do this with two-way voice that feels instantaneous. You'll be able to talk to it the way two people talk, and that will be very powerful. You can eventually imagine a world that, as you put it, is like the holodeck from Star Trek. But I think the most important thing is how much of what you want to happen can happen from a relatively small amount of conversation. As these models get to know you better and become able to do more, you can really imagine a world where you have fairly simple, brief conversations with them and then a lot of things get done on your behalf.

Q: Is it through our phones or everywhere?

A: I think it's everywhere, all of a sudden. Right now, people are still in the stage of saying, "I'm an AI company." But soon, we'll expect some intelligence to be incorporated into all the products and services we use, just like today's mobile apps.

Q: You've described this technology as both the greatest threat to human survival and the greatest potential advancement for humanity.

A: Without a doubt, one of the most confounding aspects of this technology is the breadth of its capabilities, both good and potentially bad. I think there are many things we can do to maximize the good and manage and mitigate the bad, but the scary thing is that putting this lever into the world will certainly have unpredictable consequences. The exciting parts are almost too long to enumerate, but I think it's changing the way people work. It is changing the way people learn. It will change the way people interact with the world. At a deep level, this is the technology the world has always wanted. Science fiction has been talking about artificial intelligence for a long time.

Q: As parents, one thing that seems scary when you think about it is how we'll know that our children are really our children when we get a call saying, "I need money, I need help."

A: This is going to be a real problem, and soon. And it's not just as parents; consider our own parents, who have been disproportionately the victims of these scam calls. I think we all need to start telling people that this day is coming: you can no longer trust what you hear on the phone. Society is able to adapt, because people are much smarter and savvier than many so-called experts think.

Q: Do I need to set up code words with my child?

A: I think it will be a combination of many solutions. People will verify over video or use code words; that may work for a while. Technical solutions can help. People may also exchange keys, and there are many other ideas. We just need a mix of technological and social solutions that work in different ways. I'm worried, but we'll adapt. We're good at this.

Q: Eric Schmidt and Jonathan Haidt believe that AI will make our social media problems worse. Are you worried?

A: I think social media is in a very volatile period right now, and I'm certainly nervous about that. I can also see many ways AI can make it better. I think these things are hard to predict.

Q: You said that the worst-case scenario for AI is that everyone dies?

A: We can manage AI well, and I'm very confident about that. But if we're not very vigilant about the risks, and if we don't talk frankly about how bad things could get, we won't succeed in managing it.

Q: There have been reports in the past that you are a doomsday prepper.

A: If something goes wrong with artificial general intelligence (AGI), no bunker can help anyone. So I think the description is exaggerated to the point of satire. I do think survivalism is an interesting hobby. Funnily enough, early in the pandemic, a lot of people thought, "Maybe that's a good idea." But I didn't spend much time or effort on it. I think AGI will turn out very well. I think there are real risks we have to deal with, but I don't think bunkers have anything to do with them.

I'm a Midwestern Jew, and I think that fully explains my exact mental model: being very optimistic and prepared for things to get super bad.

Q: Why should the public trust a for-profit company like OpenAI?

A: First of all, we are not a for-profit company. We are a non-profit organization with a subsidiary with a profit cap. We thought carefully and designed a structure in which our nonprofit had full control and governance over a profit-capped subsidiary that could make a certain amount of money for its investors and employees, allowing us to do what we needed to do. Developing these models is very expensive.

Q: Even with a profit cap, you still have to make a profit.

A: Absolutely. And I don't think it's bad to make a profit. I am very much in favor of capitalism. I think it's the least bad system we've invented so far. So I'm totally in favor of people profiting from it. I think that's good. I just think that the development of this technology requires a different incentive system than ordinary ones.

Q: Can you provide some details on what you think is the government's role in managing AI?

A: I think we need to regulate models that exceed a certain power threshold. We could define that threshold in terms of capability, which would be best, or in terms of the computing power used to create the model, which is easiest but, I think, imperfect. Models above the threshold should be reported to the government, regulated by the government, audited by external organizations, and required to pass safety evaluations. That would be a very good policy. I hope it becomes global at some point.

Q: You've talked about a global body, an oversight board like the one we have for atomic energy.

A: Yes. I'm not an expert in this. And you should be skeptical of any company calling for its own regulation. But I think what we're calling for would affect us most of all. We're saying the strictest oversight should apply to those at the frontier. And I think these systems are already very powerful and will become even more so. We have come together as a global community before around very powerful technologies whose great risks we had to overcome in order to reap great benefits.

Q: Any other specific suggestions?

A: There are some small short-term things that I hope are not controversial. I think everything AI-generated should be labeled as generated. The fact that we can't even agree on that seems like a mistake. I could also talk about other details that I think would be beneficial in the short term. But what the world really needs is international coordination on very powerful training hardware, something like the IAEA. That will take a while and is very important, so AI advocates need to start pushing for it now. Nothing like it has really happened in a meaningful way since the International Atomic Energy Agency itself.

Q: What do you think can be done in the United States?

A: I think we can definitely get short-term AI regulation done. Let's label all generated content as such. Let's require independent audits of these systems against safety standards. I think that's doable. And I'm a little optimistic that longer-term cooperation is possible too.

Q: It was recently reported that OpenAI and its partners Microsoft and Google have lobbied in the EU for certain rules not to apply to general-purpose AI. How do we ensure that regulation, when it comes, is comprehensive and genuine?

A: We have a responsibility to educate policymakers and the public on what we think is happening, what we think is likely to happen, and to bring the technology out into the world so that people can see it. We think deploying something like ChatGPT is very important to our mission so that people can gain some experience and feel the capabilities and limitations of these systems. The role of our institutions and society is to find out what our society wants.

Q: You support universal basic income and have expressed concern that AI will deepen the world's already deep inequality. But you also told me you think we're still a long way from having the political will [to achieve this].

A: I hope AI can reduce inequality and, more importantly, dramatically increase people's overall incomes. If no one falls into poverty, if everyone's life improves year after year, if we can truly raise living standards dramatically, then I have no problem with there being trillionaires in this world. I realize not everyone agrees with that. But I think it's very natural, and we've seen it time and time again through the long technological revolution we've all lived through, that this will dramatically boost people's average incomes.

Q: How is life different for you now compared with before the launch of ChatGPT?

A: Being this busy is not easy. I've also had a lot of people's anxieties projected onto me, which is hard too. But I've always thought you can surprise yourself with what you can adapt to; the human capacity to adapt is incredible. So it feels like the new normal now. Apart from too much email, I'm used to everything else.

Q: What are we getting wrong?

A: I think one of the mistakes people make is in the framing: is this a tool or a creature? Even people who know it's a tool get caught up in treating it too much like a creature, because it is so easily anthropomorphized. I think that's a mistake, and it leads to wrong thinking. This is a tool.

Q: Still, it will take years for real policies to be put in place.

A: That's true. I used to go to Washington a few times a year about this. The politicians there are very friendly. They smile at you and say, "Oh yes, this AI thing, it might be important." But they don't really care. One of the reasons we believe in our strategy is that just telling people is not enough to get them to take it seriously, really engage with it, and understand it. You have to show people; people need to use the technology themselves and understand its limitations, benefits, and risks. The dialogue really needs to start now, because it will take a while to work this out, and every government is paying serious attention now.

Q: You wrote in your "Moore's Law for Everything" essay a few years ago that this would dramatically accelerate inequality, and you talked about the need to redistribute income in some form. When? Companies are already making a lot of money from artificial intelligence. What are the signs that will tell you it's time?

A: It's funny: when I wrote that essay, I was criticized. "You're crazy, this stuff doesn't make any sense. Completely impossible." Now the same people are saying, "You're not doing enough. We need to do this right away. We need to put these things in place." My feeling is that we still have many years, I don't know how many, before AI impacts the economy enough that we need, and are politically able, to do something like this. But I don't think we have decades.

I still hope that basic income will emerge. I think it's just a good policy, but it doesn't look politically feasible right now.

Q: In these early days of generative AI, we've heard everyone reaching for social media analogies. What do you think?

A: Actually, the two are completely different. In terms of how we have to think about it, the analogy is closer to nuclear materials or synthetic biology. Social media is inherently social: one person using it with no one listening has an extremely limited impact on the world. But one person with nuclear material, or one person who makes a pathogen, can have a huge impact. Now, one person can also do enormous good; one person, or a small group of people, could cure cancer. [But] it's not an inherently social experience.

Q: What happens six months from now? A year from now?

A: A lot will happen. We'll add images, audio, and video at some point, and the models will get smarter. But I think what people will really be excited about is this: right now, if you try a question 10,000 times and pick the best of the 10,000 answers, that answer is pretty good for most questions, but you don't get it the rest of the time. GPT-4 has the knowledge in there most of the time, but you don't always get its best answer. How do we give you the best answer every time? That's an open research puzzle, and if we can figure it out, it will be a big deal.
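The "try it many times and pick the best answer" idea Altman describes is often called best-of-n sampling. Here is a minimal sketch, assuming the openai Python package (v1.x); the length-based scorer is a deliberately naive stand-in, since the interview does not say how candidates would be ranked (real systems typically use a learned reward model or a task-specific verifier).

```python
# Hypothetical best-of-n sampling sketch: draw n candidate answers,
# then keep the one a scoring function likes best.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def best_of_n(prompt: str, n: int = 10) -> str:
    # temperature > 0 so the n samples actually differ from one another
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        n=n,
        temperature=1.0,
    )
    candidates = [choice.message.content for choice in response.choices]
    # Naive placeholder scorer (prefers longer answers); a real system
    # would substitute a reward model or verifier here.
    return max(candidates, key=len)
```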

Q: You talked about a possible slowdown.

A: That's 100 percent a real possibility.

Q: After Microsoft's investment, OpenAI's valuation skyrocketed. What kind of pressure does turning on the revenue engine put on you?

A: Not much. We are a company focused on our mission.

Q: Will OpenAI focus on revenue?

A: I think so. But we won't make as much revenue as people think. Our revenue will grow much more slowly than widely expected.

Q: What pressure is the current explosion of investment in AI and startups putting on you?

A: You may not believe me, but almost none. I've been saying this for years. This is unlike anything else; society is going to change radically. It's completely different from who gets a little more or less market share. We have to figure out how to manage this and make it work. (Mowgli)