
Altman's latest interview: GPT-4o made me fall in love with it, and the future is the era when general-purpose models shine

Tencent Technology

2024-05-16 17:05, posted from Hebei on the official account of Tencent News Technology Channel

Tencent Technology News reported on May 16 that, according to foreign media reports, OpenAI recently released its latest large language model, GPT-4o, leading another wave of innovation in the field of artificial intelligence. On this momentous occasion, the company's CEO, Sam Altman, sat down with podcast host Logan Bartlett to tell the story behind the launch and share his predictions for the future of artificial intelligence with a global audience.

In this exclusive interview, Altman elaborates on OpenAI's grand blueprint, explores the timeline for achieving AGI (artificial general intelligence), and discusses the far-reaching societal impact humanoid robots may bring. He also expresses both excitement and concern about the prospect of AI personal assistants, and highlights the biggest opportunities and risks in artificial intelligence today.

The following is the full text of the conversation between Altman and Bartlett:

01 Leading OpenAI makes it difficult for me to be "invisible" anymore

Bartlett: Let's start with a lighter topic! What are some of the most unusual ways your life has changed as the leader of OpenAI over the past four or five years? In other words, which shifts stand out most to you?

Altman: A lot of things have changed, but the strangest is that I can no longer be "invisible" in public. If I had given it any thought beforehand, I might have guessed it would feel odd, but it turned out stranger than I imagined; I really didn't think much about it at the time. It's a very peculiar kind of isolation that leaves me a little disoriented.

Bartlett: You're a big believer in the power of AI and of OpenAI, so when running a company like this, did you anticipate the ripple effects it could have?

Altman: I didn't expect that. I didn't expect so many other things to come with it, such as the company growing into a truly impactful business. I didn't even realize that I wouldn't be able to eat out freely in my own city, which struck me as a little strange and hard to believe.

02 Announcing Multimodal AI: A Leap Forward in Technology

Bartlett: Earlier this week, you released GPT-4o, a large multimodal model that enables seamless interaction across text, speech, and vision. Can you talk about why this breakthrough is so important?

Altman: It's definitely a revolutionary leap in how computers are used. We've had a vision of controlling computers with voice for a long time, with early products like Siri, but for me the experience has never felt naturally fluid. GPT-4o is very different from its predecessors in terms of user experience. It behaves very naturally, thanks to a combination of factors: the richness of its features, the speed of its responses, the natural flow of its intonation, and the variety of things it can do, such as the ease with which you can say "Hey, speak faster" or choose another voice. This fluidity and flexibility, whatever we want to call it, is what made me fall in love with the new model.

Bartlett: Please share some of your favorite use cases right now.

Altman: Even though I've only been using it for a week, one use case has surprised me. When I'm engrossed in my work, I can simply put my phone on my desk without having to switch windows or interrupt my workflow as often. It's as if the phone has become another bridge between me and my information.

For example, when I'm working on a task, I used to have to stop, open a different tab to search for something, or click another link. Now I can keep doing what I'm doing, ask questions out loud, and get immediate responses without taking my eyes off what I'm working on on my computer. It's a truly seamless experience.

Bartlett: It sounds like behind all of this is an evolution of the technology architecture, especially a leap in computing power?

Altman: Indeed. From a technical point of view, this builds on what we have accumulated over the past few years across a number of fields. We've been exploring audio models and visual models and trying to blend them together, and we're also exploring more efficient ways to train our models. It's not that we suddenly have one revolutionary new feature; it's a clever combination of technical elements.

Bartlett: Given the latency issue, do you think there is a need to develop specialized on-device models to ensure smooth interactions?

Altman: Network latency really is a concern for video. I've long looked forward to a future of AR glasses and other devices that can communicate with the world in real time and sense how things change, and network latency could be a stumbling block for that vision. In practice, though, a delay of two or three hundred milliseconds is fast enough, and in many cases it is even faster than human reaction time.

Bartlett: You recently mentioned that the next big model might not carry an iconic version name like GPT-5. That seems to mean you've taken a more flexible and iterative approach to model development. Should we see the future this way?

Altman: I can't say for sure yet that the next big model we release will be an iconic major version like GPT-5. One of the things I've learned is that AI doesn't always fit neatly into traditional release models. Tech companies typically follow an established product-release cadence, but we may need a different strategy now. We could certainly take a name like GPT-5 and release it in a new way, or we could consider something else entirely. I think we're still figuring out how to name and brand these products.

The naming from GPT-1 to GPT-4 seems logical to me, and GPT-4 clearly made significant progress. We're also wondering whether there will be a base model, something like a "virtual brain," that in some cases shows a deeper ability to think. Alternatively, we may explore different models whose differences users may not care about. So I think we're still exploring how to bring these products to market.

Bartlett: Does that mean that for models to progress, we may need less computing power than we have historically?

Altman: I think we're always eager to use as much computing power as we can get. That said, we are now seeing phenomenal efficiency gains, and those are crucial. One highlight of the recent release was the voice mode, but perhaps more important is that we can run it efficiently enough to offer the service to users around the world, at the level of the world's top models. Users who try ChatGPT for free will find that GPT-4o is significantly more efficient in certain use cases than the earlier GPT-4 and GPT-4 Turbo. And I think we still have a lot of room for improvement here.

03 Natural language will become the main way of communication between humans and AI

Bartlett: You've mentioned that ChatGPT itself isn't really changing the world; it may just be changing people's expectations of the world.

Altman: Yes, I totally agree. Measured by any economic metric, you'd be hard-pressed to find definitive evidence that ChatGPT has increased productivity or produced other direct economic benefits. There may be some effect in customer service or a few specific areas, but if you look at the trend of global GDP, can you clearly detect an impact from ChatGPT's release? I'm afraid not.

Bartlett: Do you think there will be a point in time when we can be sure GDP growth is being driven by ChatGPT?

Altman: I'm not sure we'll be able to attribute that growth directly to a particular model. But I think if we look back at the historical data decades from now, we'll see how a series of models progressively drove the field as a whole, and ChatGPT is just one part of that.

Bartlett: Which applications or areas do you see showing the most promise over the next 12 months?

Altman: Because of my own work background, I naturally have a preference for programming, which I firmly believe is a crucial field.

Bartlett: You've talked at length about the difference between a deeply specialized model, trained on specific data for a specific purpose, and a general-purpose model capable of real reasoning.

Altman: I'd bet the future is the era when general-purpose models shine.

Bartlett: What do you think is most important?

Altman: Even a model confined to a single dataset and its closely related narrow domain can, if it has generalized reasoning ability, quickly adapt to whatever new data type it faces simply by being fed the corresponding data. But that capability is not acquired by piling together a bunch of specialized models. So I think the most important thing is to figure out real reasoning ability, so that we can apply it across a variety of scenarios and tasks.

Bartlett: When imagining the future of AI in terms of communication and creativity, what do you think will be the main means of communication between humans and AI in the next two years?

Altman: Natural language is undoubtedly a very effective way to communicate. I'm interested in the idea of designing a mechanism that both humans and AI can use, so that they interact in the same way. That's why I'm more interested in humanoid robots than in other forms of robots: the current world is very much designed for humans, and I don't want to reconfigure it in pursuit of some so-called "efficiency." I lean toward the idea that we communicate with AI in the language humans are used to, and that AIs might even communicate with each other in the same way. I can't predict the future, but I think it's an interesting direction to explore.

04 In the future, AI systems will become cheaper and easier to use

Bartlett: You've mentioned that models may be commoditized over time. Before that happens, should models first be personalized to each individual?

Altman: I'm not sure about that, but I think it's a direction worth considering.

Bartlett: Beyond personalization, do you think the business user interface and ease of use will play an important role in ultimately winning over users?

Altman: Of course; those factors have always been crucial. As you can imagine, in some cases market or network effects can also be a key factor. We want intelligent agents to be able to communicate with each other effectively, and different companies in the app store offer various services. But I think the usual rules of business apply. Whenever new technology comes along, people say those rules no longer apply, and that view is usually wrong.

In my opinion, while open source models are closing the gap on performance benchmarks, traditional ways of creating value still play an integral role. I'm optimistic about the rise of open source models. As many schools of technology demonstrate, open source has a unique place, and so do hosted models; this diverse landscape is fantastic.

Bartlett: I won't go into the specifics of implementation. But as we all know, news of investments in fabs (semiconductor manufacturing facilities) and AI infrastructure has been widely reported by a number of authoritative outlets, and companies such as TSMC and NVIDIA are aggressively ramping up production to meet surging demand. You recently said that global demand for AI infrastructure far exceeds the current supply capacity of companies like TSMC and NVIDIA. What observations or data led you to that conclusion?

Altman: First, I'm sure we can find ways to dramatically reduce the cost of existing AI systems. Second, as costs fall, demand for AI systems will surge. Third, by building larger, more advanced systems, we will stimulate demand further. What we all look forward to is a world where intelligence is extremely abundant and cheap, and people can use it to accomplish a wide variety of tasks. You shouldn't even have to choose between having it read and respond to all your emails and having it help cure cancer. Of course you'd choose curing cancer, but ideally it could do both at the same time. My focus is on making sure we have enough resources so that everyone can enjoy the benefits of this technology.

Bartlett: I'm not asking you to comment on any individual effort, but if you're willing to share, what do you think of the physical AI assistant devices that companies like Humane and Limitless are building? What are their shortcomings, or why haven't they reached the level of popularity users might expect?

Altman: I think these are just the beginning. As an early adopter of a wide range of computing devices, I have plenty of experience here. I owned a Compaq TC1000, which I loved as a freshman in college; it wasn't as advanced as today's iPad, but it was definitely pointing in the right direction. Later I owned a Palm Treo. I was a bit of a gadget kid in college, and the Treo was definitely a cool thing to have at the time; although it was a far cry from the iPhones that came later, it too marked a leap forward in technology. Devices like these all hinted at a promising future, but it takes time to polish and iterate on technology.

Bartlett: You mentioned recently that many companies building their businesses on top of GPT-4 will inevitably be made obsolete by future iterations of GPT. Can you elaborate on that? And which companies with AI features do you think will survive the GPT wave?

Altman: One framework I've found useful is that when you build a business on these models, you're really making one of two bets: either you bet that the current model's capabilities are as good as they will get, or you bet that the next model will make significant progress and position yourself to benefit from it. If you've put a lot of effort into making a use case work that is currently just beyond GPT-4's capabilities, then once GPT-5 or a later model handles it natively, that effort may feel wasted. But if you have a comprehensive, viable product that people naturally want to use, and you haven't over-invested in patching around the model's limits, your product will only get better when GPT-5 or other more advanced models arrive.

My point is that most of the time you're not building a pure AI business; you're building a business, and AI is just one technology you employ. In the early days of the App Store, a lot of products filled obvious gaps, but then Apple solved those problems; we no longer see flashlight apps in the App Store because that functionality was folded into the operating system. AI businesses may well develop in the same direction. Companies like Uber, by contrast, rose with the smartphone but built a very solid, durable business model, and I think that's exactly what we should be looking for.

Bartlett: I see what you mean, and I can imagine a lot of companies applying your technology that fit that framework in some way. So can you give a specific example, or a new kind of concept, that fits the pattern we discussed? It doesn't have to be a real business like Uber; a hypothetical company, a toy concept, or just an idea you think will play out that way would be fine.

Altman: On that front, I tend to bet on emerging startups. For example, when people want to build AI doctors or AI diagnostic systems, they often say, "I don't want to start a business in this space, because the Mayo Clinic or other hospitals already do that." But I'm actually more bullish on startups trying new approaches in this space.

05 Without AGI, can OpenAI's valuation reach trillions of dollars?

Bartlett: What advice would you give to entrepreneurs who want to navigate this kind of disruptive change proactively?

Altman: I would say it's necessary to believe that intelligent services will improve year over year and that costs will come down, but that alone isn't enough to ensure your success. It takes time for big companies to internalize this, and you can use that to outperform them, but other startups that see the same thing will do the same. So you still need to think hard about the long-term durability of your business. We're facing an environment more open than ever, full of exciting new opportunities, but don't let that blind you to the hard work of creating value.

Bartlett: Given the rapid development of AI, can you predict the new types of roles that could emerge or become mainstream in the next five years, positions that may be little known or that don't yet exist?

Altman: That's a fresh, thought-provoking question I haven't been asked before. People always focus on which professions will disappear, but the new career categories are just as fascinating. What I try to think about is, for example, what new fields might employ 50 or 100 million people. These could involve new art forms, new kinds of entertainment, and a greater focus on human connection. I don't know the exact names of those positions, and I can't say for sure whether we'll reach that scale in five years, but I'm confident people will continue to prize uniquely human experiences like face-to-face interaction. I don't know how we'll define it, but I can foresee that it will be a new and extremely important area of growth for us.

Bartlett: OpenAI was recently valued at about $90 billion in its latest fundraising. In your opinion, what key events or milestones could propel OpenAI to a trillion-dollar valuation even before it reaches AGI?

Altman: I think as long as we can keep improving the technology at the current rate, keep figuring out how to build great products with it, and make sure revenue grows steadily, as it is now, then I firmly believe we will be very successful. I can't predict exact numbers, but I'm confident in our future.

Bartlett: Is the current business model (presumably subscription models like ChatGPT's) a key factor in how you think OpenAI could reach a trillion-dollar valuation?

Altman: The subscription model has worked very well for us, better than I expected. I didn't think it would be this successful, but it has turned out to work quite well.

Bartlett: In your view, will the business model change when AGI (however the concept is defined) is achieved?

Altman: That would be a different story.

Bartlett: Regarding OpenAI's current structure, we've noticed some changes, and you've already made your position abundantly clear, so I won't press further on that. But you mentioned that adjustments will come as you move forward, so what does a suitable structure look like in your opinion?

Altman: I think we're nearly ready to talk about that. We've been having all sorts of discussions and brainstorming sessions, and I hope that within this calendar year we'll be able to discuss it formally.

Bartlett: One particularly interesting topic in the public perception of AI is your view of the order in which it would absorb work. We've heard you say that AI would first replace manual labor, then white-collar work, and finally creative work. Reality has, in some ways, turned that on its head. Is there anything else that ran against your gut instinct, something you expected one way that turned out the opposite?

Altman: That was a huge surprise to me. Beyond the point you mentioned, there are a few other things; for example, I didn't expect AI to be so good at legal work, or to show such capability so early, because I had always thought legal work was extremely precise and complex.

Bartlett: Can you elaborate on your reservations about the term "AGI"?

Altman: Because I now see that AGI is not a definite point in time. Obviously, when you start a company you hold a lot of naive ideas, especially in such a fast-moving field. At the beginning of OpenAI, I naively thought we would go from an era without AGI to one with AGI in a single real leap. I still think a real leap is possible, but overall I now see it as a continuous exponential curve, where what matters is the rate of progress each year. You and I might never agree on the specific month or year when AGI arrives. We could devise other tests we'd both accept, but that is harder than it sounds.

GPT-4 certainly didn't clear what I'd consider the AGI threshold, and I don't think our next big model will either. But I suspect we're only missing a few not-so-obvious ideas, plus somewhat more scale, before reaching a level that would truly command people's attention.

Bartlett: So is there a more modern Turing test, one that measures whether AI has crossed that threshold?

Altman: I think that when a system can do research better than all of OpenAI's researchers combined, or even just outperform a single OpenAI researcher, that will be a very important milestone, one that can, and even deserves to, be considered a breakthrough. Whether such progress is imminent is uncertain, but I won't rule it out.

Bartlett: In your view, what are the biggest challenges in achieving AGI? You seem to think the current scaling laws will remain in force for years to come.

Altman: I firmly believe the biggest obstacle to achieving AGI is the need for new research breakthroughs. Ever since I moved from internet software to artificial intelligence, I've come to appreciate that research does not follow a preset schedule. Usually that means it takes longer, but sometimes it progresses far faster than anyone expected.

Bartlett: Can you elaborate on why research doesn't progress as linearly as engineering?

Altman: To illustrate the nonlinear nature of research progress, let me use some historical examples, though I may be off on some specific numbers. The neutron, for instance, was theorized in the early 20th century and first detected in the 1930s, and work on the atomic bomb began in the 1930s and succeeded in the 1940s. Going from ignorance of the neutron to building an atomic bomb and upending our understanding of physics happened with incredible speed.

There are other examples from fields that are less purely scientific. The Wright brothers had predicted that flight was 50 years away, and then made their first flight in 1908. The history of science and engineering is full of examples like this. Of course, many predicted advances never materialized, or arrived far more slowly than expected. But sometimes progress can be incredibly fast.

Bartlett: Where do we currently stand on interpretability? How critical is it to the long-term development of AI?

Altman: Interpretability operates at many levels: whether I understand mechanically how each layer of the network works, or whether I can spot logical errors by examining the output. I'm looking forward to the work that OpenAI and other institutions are doing on interpretability. As a broader field, I think it has great potential and exciting prospects.

06 Over-regulating current AI models would be a mistake

Bartlett: As expectations for AGI grow, so do concerns about organizations like OpenAI exploiting it unilaterally and making the decisions alone. This has prompted some in government to step in, hoping that elected leaders, rather than companies like OpenAI, will make these calls.

Altman: I think it would be a mistake to over-regulate current AI models, but some level of regulation becomes important once models begin to pose significant catastrophic risks to the world. How to set the thresholds for those risks, and how to test for them effectively, genuinely requires careful weighing. It would be a huge loss to limit this technology's vast benefits out of excessive concern about potential risks, or to discourage people who want to train models in their own basements. Then again, if we take the international rules around nuclear weapons as a reference, I think some form of AI regulation is reasonable.

Bartlett: On the usual regulatory measures, do you think government agencies fail to appreciate the risks inherent in AI?

Altman: I don't think they have really delved into AGI. Some people vehemently oppose AI regulation and dismiss it as nonsense (not everyone, of course), and I understand their position: regulation really has hurt the tech sector; just look at its current state in Europe. But I think we're approaching a tipping point beyond which things could be very different.

Bartlett: Do you think there are inherent dangers in open source models?

Altman: At the moment, no. But I can imagine such a model appearing in the future.

Bartlett: I remember you saying that "safety" is in some ways a false framing, because it's really about which risks we explicitly accept.

Altman: Exactly. Take airlines: safety is not a black-and-white concept. People choose to fly because they consider it relatively safe, even knowing that planes occasionally crash. How to define whether an airline is safe is worth debating, and different people will answer differently. Aviation has become extremely safe, but safety does not mean absolutely no one ever dies on board. Similarly, in medicine we take side effects very seriously because some people react badly to medications. And there is a hidden side to safety as well, such as the possible negative effects of social media.

Bartlett: Is there a specific situation or factor on the safety side that would make you change your current all-in strategy on AI research?

Altman: We have something called the Preparedness Framework that is aimed at exactly this. Our action strategies vary by risk category and level, to address potential risks and challenges.

Bartlett: Given the many rapidly emerging use cases, I think one of the main bottlenecks right now is the lack of AI infrastructure. Suppose a researcher makes some breakthrough improvement to the existing Transformer architecture that drastically reduces the data and hardware required, even approaching the efficiency of the human brain. Do you think that would accelerate a "technological leap"?

Altman: It's a real possibility, and such an improvement wouldn't necessarily require a complete overhaul of the existing architecture. I don't think it's the most likely path, but I wouldn't rule it out, and it's important that we account for it among the scenarios that could occur.

I think that even if technological development accelerates, the process will be gradual. I don't think we'll go from relatively advanced AI to true superintelligence overnight. But even if a breakthrough takes a year or a few years, that is still, in a sense, fast.

Another consideration is that even with a truly powerful AGI, its impact on society would be limited in the short term. My guess is that it wouldn't have a large enough impact within a year or two, but within ten years the world would change dramatically. In that respect, society's inertia may actually be a positive factor.

07 People are remarkably adaptable, and I hope to enjoy country life after retirement

Bartlett: I think you've noticed that people are skeptical when you decline certain questions, like those about Elon Musk, equity, and board structure, which you get asked a lot. Of those, which do you like answering the least?

Altman: I don't hate answering any of them; I just don't have anything new to share.

Bartlett: Well, I won't ask specifically about equity, since you've answered it in more than enough ways, though people still seem to have reservations about the "I have enough money" answer.

Altman: Yes. Even if I did make a trillion dollars and gave it all away, it might still not satisfy some people's expectations or fit their conventional wisdom; some would find a way to read something into it anyway.

Bartlett: What motivates you to pursue AGI? Equity aside, I'm sure most people would find it comforting to be compensated accordingly, even while pursuing a higher mission. So what gets you to work every day? Where do you find the most satisfaction?

Altman: I always tell people that I'm willing to make a lot of sacrifices and compromises in other parts of my life, because the work I'm doing now is, to me, the most exciting, most important, and most beautiful thing there is. This is a time of great change, and I know it won't last forever. One day I'll retire and enjoy a quiet country life, and I'll miss all of this, and I'll laugh and say, "Those days were long and stressful, but also pretty cool. I can't believe that happened to me; it was amazing."

Bartlett: Was there a particular moment that made you feel you were in a surreal situation? For example, the fame we mentioned earlier, like not being able to move freely around your own city.

Altman: Every day brings things that amaze me. During that week last November (when the board ousted me), I received 10 to 20 text messages from some of the world's most important people, presidents and prime ministers among them, but that wasn't the strange part. What felt really strange was that, while all of this was happening, I found myself replying to them normally, sending messages like "thank you," as if it were all perfectly natural.

We had four and a half crazy, stressful days. I barely slept and didn't eat much, yet I was surprisingly clear-headed and intensely focused, as if my body were running on some bizarre, prolonged adrenaline surge. It all happened the week before Thanksgiving, and it was a crazy experience. By Tuesday night (November 21, 2023), everything was settled, and that Wednesday I drove to Napa and stopped at a small restaurant where the food was delicious.

So, yes, it was a memorable moment. I've learned that humans are far more adaptable than we think, to almost anything. You can quickly treat anything as the new normal, for better or worse. I've learned that many times over the past few years, and I think it's simply a good trait of human beings, one that serves us well.

08 In the future, AI assistants should be distinct from the people they represent

Bartlett: I'm surprised these psychological effects come up so often. In your view, as large models take on more tasks that were previously done only by humans, what traits or abilities will remain uniquely human?

Altman: I think that many years from now, humans will still care deeply about other humans. I was just reading things online where everyone was saying "everybody is going to fall in love with ChatGPT," but I'd bet that won't happen. We have cared about other human beings for a very long time, in many different ways, and that will always be there; our concern for others is almost an innate obsession. You may have heard plenty of conspiracy theories about me, but you've probably heard few about an AI, and if you had, you probably wouldn't pay much attention. I don't think watching robots play football will become our main form of entertainment.

Bartlett: As the founder of OpenAI, you've set a lot of rules and frameworks for how the company does business, but you've also broken a lot of rules. Are the employees you hire in this industry different from those at consumer internet companies, B2B software companies, or other kinds of companies?

Altman: Researchers and product engineers do differ significantly in most cases, and the executives are distinctive as well.

Bartlett: Has OpenAI brought in different types of executives? Or do you look for particular traits when hiring them?

Altman: I'm not usually inclined to hire executives from outside, but I think it would be a mistake for a company to promote only from within, because that can lead to a monolithic culture. A company needs to bring in some new senior talent for fresh energy. Here, though, we rely mostly on homegrown talent, which is actually a good thing given the uniqueness of what we do.

Bartlett: Was there a decision during OpenAI's journey that felt crucial at the time? How did you arrive at it?

Altman: It's hard to point to a single decision, but one worth mentioning is our choice of what's known as an iterative deployment strategy: we don't develop AGI in secret and then spring it on the world all at once, even though building in secret was the common opinion and plan among many companies and people at the time. I think it was a very critical decision, and it seemed very important even then.

Bartlett: I've actually always been curious about the story behind betting on language models. How did that decision come about?

Altman: At the time, our team was working on multiple projects, including robotics and video games. Amid that diversity, a relatively small but enthusiastic team began devoting itself to language modeling. Ilya Sutskever, OpenAI's former chief scientist, was convinced of this direction and firmly believed in the potential of language models.

So we set out to develop GPT-1 and GPT-2, dug into the scaling laws, and then scaled up to GPT-3. It was on the basis of that experience and research that we made the bold decision to focus in this direction. In retrospect these choices may seem obvious, but at the time they were genuinely deliberate decisions.

Bartlett: When you talk about personal AI, you've mentioned the distinction between a person and their AI assistant. Could you elaborate? I think it has a profound impact on how you think about future AI use cases.

Altman: Absolutely. Five years from now, when I receive a text message from you, I want to know clearly whether you sent it to me yourself or whether your AI assistant sent it on your behalf.

I think there's value in keeping that boundary clear, rather than letting everything blur as if AI were just an extension of our bodies or minds. I want to be able to tell whether I'm talking to Sam himself or to Sam's AI assistant. That distinction is necessary, and it's an experience I hope we can achieve.

Bartlett: When planning for AGI and what follows, you've said the first AGI would be just a starting point on the intelligence continuum we discussed earlier. Where do you think AGI's progress is most likely to accelerate? Have you ever stopped to imagine what that future will look like, or is it too abstract to picture concretely?

Altman: I don't think it's too abstract, though I wouldn't picture it as a futuristic city out of Star Wars. But I have imagined how cool it would be when one person can do alone what once required hundreds or even thousands of people working together, or when we can truly unravel all the mysteries of science. (Compiler: Golden Deer)
