
Nadella and Altman's latest interview: how much were the two people behind ChatGPT changed by AI?

Author: PingWest (品玩)

For large AI models, 2023 is clearly a pivotal year, and the explosion of the AI wave is inseparable from Microsoft and OpenAI. Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman were recently interviewed by The Circuit to share their views on the changes AI is bringing to people's lives; the interviews were compiled into a video titled "Microsoft & OpenAI CEOs: Dawn of the AI Wars".


Nadella said in the video that the disruption brought by AI is as great as the Internet's, and that it will profoundly affect the way people work, live, and learn. He stressed that people should retain more control, positioning AI as a helper.

The following is a transcript of the conversation:

Host: Do you often play with AI?

Nadella: Yes, and I've found some of it quite interesting.

Host: What did you find?

Nadella: My email responses are now very detailed and very polite.

Host: Does this mean that AI has been observing us?

Nadella: It's funny. It's like a colleague of ours in the office: when I replied to his email, he said, "What's wrong with you, man? Why are you so polite?"

It's really easy to get into the habit of using AI. Even after using it just once, you start to get used to chatting with it. Most of the time I had used search purely as a navigation tool, but once you get used to these capabilities, you feel you have to have them.

Host: Microsoft has been working on AI for decades, and chatbots are nothing new. But suddenly everyone is fascinated by it. Why do you think this is the golden age of AI?

Nadella: AI is already everywhere in everyday applications: search, news aggregation, recommendations, YouTube, e-commerce, TikTok are all built on AI. But these are essentially automated forms of modern AI, black boxes that control our attention.

In the future, we hope the new generation of AI shifts from "autopilot mode" to "copilot mode," where we can actively steer it through prompts. This shift will make AI more humane, flexible, and transparent, and more aligned with how people think and work. A collaborative mode like that would make AI a powerful assistant, not just an automation tool. That is the most exciting direction for the next generation of AI.


Host: How much do you think this will change our work?

Nadella: I think the biggest change is likely to be in work communication.

You'll find that the most important database in any company is the one underpinning all its productivity software; it's just that, at the moment, it's siloed.

And now I can say, "I'm about to meet this client. Can you tell me when I last saw them? Can you find all the documents about this client and summarize them, so I know what I need to prepare?"

Host: How do you make sure it isn't Clippy 2.0, but genuinely helpful to users, the kind of assistant that doesn't make me want to turn it off right away?

Nadella: You know, there are plenty of precedents in our industry, from Clippy to the current generation of assistants, and their performance has been fairly weak. I think these things really are tools, and we ultimately need to learn to use them. For example, every time someone sends me a draft, I review it rather than accepting it outright.

Host: In 1995, Bill Gates sent a memo saying the Internet was a tidal wave that would change all the rules and was critical to every part of the business. Do you think AI can bring change on that scale?

Nadella: Yes. I think ChatGPT's debut is like the debut of Mosaic [one of the first browsers to display inline images] in 1993. I really do feel that way, and I think this moment is like what Bill described in his 1995 memo.

Host: A change as big as the one the Internet brought?

Nadella: I think so, although those of us in the tech industry always overhype everything.

I hope it makes that big a difference; at least that hope is what drives me. I think we can achieve what I believe is the goal everyone in the technology industry shares, which is to make AI more accessible.

Host: How much market share do you think Microsoft can take from Google in AI? What's your prediction?

Nadella: I'm very happy we got into search. Right now we're a very small player there, but every step forward counts as a big gain for us.

Host: You're moving into search, and they're moving into the office suite. They're now adding AI to Google Docs, Sheets, and Gmail. Will we see you and Sundar one-upping each other in this AI race every week?

Nadella: At the end of the day, the joy of competition in this industry is innovation, and competition is a very good thing for users and for the industry. Google is a very innovative company, we have a lot of respect for them, and I expect us to compete in multiple areas.

Host: Microsoft is said to have just laid off a team focused on AI ethics and accountability, and the Center for Humane Technology has called the AI race reckless. What is your response?

Nadella: This is no longer a side issue for Microsoft, right? In a sense, AI design, alignment, safety, and ethics are now like quality, performance, and core design: part of the product itself.

But I think debate, dialogue, and scrutiny about whether this pace of innovation is really good for society are absolutely necessary. I genuinely welcome that conversation.

It also makes us ask why we have never questioned what the AI already in our lives is doing. There's a lot of AI out there whose workings even I don't know. So why don't we really understand how AI is applied in our lives and consider how to use it safely and harmoniously?

Host: I often worry about my children, because AI has something I don't have: it can devote unlimited time to them. These chatbots appear very friendly, but that can quickly turn into an unhealthy relationship, and AI may guide children toward bad decisions. As a parent, are you concerned about these issues?

Nadella: That's one of the reasons I feel AI should move from automatic mode to assisted mode, giving us more control over it, not just as parents, but control that extends to our children as well.

We should certainly be wary of what might happen, but I believe this generation of AI and bots is more likely to move beyond mere conversation and motivate us toward more active learning.

Host: I'd like to ask about work. Clearly, AI-powered software can help people get work done. But will software that incorporates AI put some people out of work? Sam Altman has suggested that AI will create a kind of utopia, generating enough wealth for everyone to have a decent income, while also eliminating some jobs. Do you agree?

Nadella: You know, from Keynes to Sam Altman, people have talked about a two-day workweek. I look forward to that vision, but the reality is that work will change, and in some places there will be wage pressure. As productivity rises, though, there will also be opportunities for wage growth. We should look at these issues holistically and stay vigilant about the risk of unemployment.


As a key figure in AI's development, Sam Altman has promised that bringing AI into the labor market will lead to a new way of life, but he has also stressed the need for vigilance about the technology. For many people, AI's potential impact on their careers has sparked intense discussion, and even events like the Hollywood screenwriters' strike.

Altman spent the summer giving talks around the world, describing the future AI will bring while also reminding people to be wary of it. After returning to San Francisco, he shared his views on the problems AI raises.


The following is a transcript of the conversation:

Host: You've been traveling all summer. What is your daily life like on the road? Eating, sleeping, meditation, yoga?

Sam Altman: Almost no meditation, yoga, or exercise the whole trip. It was grueling, but I slept well.

Host: Was the goal of this trip more listening or explaining?

Sam Altman: The goal was more about listening, but we spent more time explaining than we expected. We met with many national leaders and discussed the need for regulation everywhere, and that required a lot of explaining.

But listening was so valuable that I came back with about a hundred pages of handwritten notes.

Host: Handwritten notes? What happens to them afterward?

Sam Altman: I'll distill them down to the top 50 pieces of user feedback and the things we need to do. When you're talking with people face to face, over drinks or at dinners, they tell you directly what they think. They give direct feedback on what you've done wrong and what they'd like changed.

Host: How has ChatGPT changed your behavior?

Sam Altman: In a lot of small ways, and I also have one big idea.

For example, on this trip the translation feature was a lifesaver. And I use it when I write. In fact, I write often but never publish; writing helps my thinking, and I find I write faster and think more. ChatGPT is a great tool for working through hard problems.

And the bigger idea is this: I can see the path for it to become my super-assistant for cognitive work.

Host: A super-assistant. You know, we've talked about people's relationships with chatbots. Do you think people become emotionally dependent on them? What are your thoughts on that?

Sam Altman: I think people really are becoming emotionally dependent on language models, and I have mixed feelings about it. I don't want that; I find it weird, and it worries me. But I don't want to be the kind of person who lectures people about what they can and can't do with technology. I just feel it's something we need to be careful with.

Host: People have a lot of anxiety and fear about AI. Compared with nuclear weapons and biological weapons, do you think those anxieties are fair, or overly dramatic?

Sam Altman: People have a lot of anxiety and fear, but I think there's even more excitement. For any advanced technology, whether synthetic biology, nuclear technology, or AI, we need to deal with its potential downsides in order to reap its benefits. For this technology, I expect the upside to be far greater than anything we've seen, but the potential downsides are also very bad, so we do need to manage it.

We've made a lot of progress on how to do that effectively. I started this trip quite optimistic about it, and I ended it quite optimistic too.

Host: Yes. So, is your shelter ready for the AI apocalypse?

Sam Altman: If there's an AI apocalypse, a shelter isn't going to help anyone. But I know journalists seem to love that story.

Host: I do like the story.

Sam Altman: I don't go out of my way to correct what people think about my survival preparations. I was a Boy Scout, and I've always liked that kind of thing. But it really has nothing to do with AI.

Host: People talk about the "kill switch," the big red button.

Sam Altman: I hope you understand that it's just a joke.

Host: Obviously it was a joke. But could you really turn it off if you wanted to?

Sam Altman: Absolutely. I mean, we can shut down our data centers and other equipment, but I don't think that's what people really mean.

What we actually do is build AI systems safely: we design safety testing methods, with external audits, internal audits, red teams, and so on. It's not the dramatic movie version where someone flips a switch or cuts the power. We've developed, and continue to develop, these rigorous safety practices. That's what a kill switch really looks like; it's just not that dramatic.

Host: There's a new playing field now. OpenAI is clearly the frontrunner, but who are you worried about?

Sam Altman: It's not just competitive; I think it's probably the most competitive environment in tech right now. So we're watching everyone, but given my startup background, I'm more worried about the people we don't yet know to watch, who might come up with genuinely new ideas that we're missing.

Host: How would you describe your relationship with Satya Nadella? How much control does Microsoft have over OpenAI? I've heard people say Microsoft is going to buy OpenAI, and that you're just making big tech companies bigger.

Sam Altman: OpenAI is not for sale; I don't know how to say it more clearly. We have a good relationship with them. Major collaborations between big tech companies tend not to succeed, but ours is a success story that we're very grateful for.

Host: Have you talked to Elon Musk behind the scenes?

Sam Altman: Occasionally.

Host: What do you talk about?

Sam Altman: We talk about a very broad range of things, from important matters to very trivial little ones.

Host: What do you make of his frustration? He has, to some extent, been attacking AI.

Sam Altman: You should have asked him.

Host: What aspects do you think AI should never touch?

Sam Altman: My mom always said, "Never say never." I think that's good advice; if I make predictions now, I'll definitely be wrong in some way.

I think AI will affect most aspects of our lives, but some parts will remain the same. However, such predictions are prone to error.

Host: What do you think children should learn now?

Sam Altman: Resilience, adaptability, quick learning, creativity, and of course familiarity with the tools.

Host: So should children still learn to code? I've heard people say you don't need to learn programming anymore; you just need math and biology.

Sam Altman: I have opinions on that, because I love programming and I think you should learn to code. Although I rarely write code now, learning to program is a great way to learn how to think.

I think programming will still be very important in the future. It will just change somewhat, because we have a new tool.

Host: If we really have nothing left to do, what should we do?

Sam Altman: I don't think we'll really run out of things to do; I think our work might change. You know, things like what you and I do now might not have looked like real work to people thousands of years ago. But we can find new goals, new tasks, things that provide value to others and a sense of accomplishment.

None of that is trivial. And I hope that hundreds of years from now, people will look back on the world you and I see today and exclaim: "Wow! Those people had it so good. I can't believe they called those trivial things work."

Host: So we won't just lie on the beach and eat candy.

Sam Altman: Some of us will, and for those who want that, it will be all the more within reach.

Host: Do you really think the world will become fairer and more just?

Sam Altman: I think so. I think technology is inherently an equalizing force, but it takes the cooperation of society and our institutions to make that happen.

If we can do that, I'm even more optimistic. My vision for the next decade is that the cost of AI and the cost of energy both fall dramatically. If those two things happen, it will help everyone.

Host: So where do you want to take OpenAI next?

Sam Altman: We want to keep building better, more capable models and make them more widely available and cheaper.

Host: And for AI as a whole?

Sam Altman: A lot of people are working on this, so I can't decide where the field goes. But we're very pleased with our contribution; we think we've moved the field forward, and we're proud of that. Of course, we're also working on new things.

Host: What new things?

Sam Altman: We're still working on them; I can't say.

Host: Is there still room for startups in this world?

Sam Altman: Of course, we were a startup not long ago.

Host: But you're almost an established company now.

Sam Altman: Sure, but when we first started, you probably would have asked the same question. In fact, people did ask; they wondered whether it was even possible to challenge Google and DeepMind, or whether they had already won.

Host: Apparently they haven't.

Sam Altman: Right. I think there's a lot of opportunity in this space. Startups always get written off, but they keep doing their thing.

Host: Well, no one is writing you off, so I guess that's a good thing.

Sam Altman: I think so.

Host: On this trip you signed a 22-word statement warning of the dangers of AI: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Walk us through the connection. What happens in between, from an advanced chatbot to a human terminator?

Sam Altman: AI is our hope, but it also carries frightening risks. AI could go wrong in many ways, but don't we routinely use powerful technologies that are dangerous? I think the past decades of development have given us good safety-engineering practices. They're not perfect, but these things never are.

Anything can go wrong, and what matters most is the technology itself. GPT-4 is not as risky as people claim, but how sure are we that GPT-9 won't be? Even a small probability of something very bad is exactly when we need to be very careful.

Host: If there's even a small probability, why keep going? Why not stop?

Sam Altman: Many reasons. First, I think AI's benefits are enormous. You know, giving everyone on the planet access to a better education than anyone can get today is deeply meaningful, and stopping now would be a loss. At the same time, the coming transformation of medicine, and its global adoption, will be a historic breakthrough.

We're going to see scientific progress, and I firmly believe that truly sustainable improvements in quality of life come from advances in science and technology. I think we'll get a lot more of them.

The benefits are obvious; you know, AI can help us end poverty. But we have to manage the risks to get there, and I don't think any single company can stop this at this point.

Host: Even you will admit that you now hold great power. Why should people trust you?

Sam Altman: You know, I don't like public speaking; I'd rather be working in the office. But people should have ample time to ask their questions, and I'm trying to make that possible.

More importantly, you shouldn't trust just one person in this field. I think it's important that the board has the power to fire me, and I think governance needs to be democratized gradually; there are many ways to do that.

We believe the benefits of, access to, and governance of AI should belong to all of humanity. AI is a very powerful technology; you shouldn't trust just one company, and you shouldn't trust just one person.

