In-depth Interview with OpenAI's CTO: Hints at GPT-5's Release Timing

Compiled by | Chen Junda

Edited by | Panken

Zhidong reported on June 25 that OpenAI CTO Mira Murati recently sat for a 50-minute in-depth interview with Jeffrey Blackburn, a former Amazon executive and now a trustee of Dartmouth College, at a graduation-season event of Dartmouth's Thayer School of Engineering.

▲Murati at Dartmouth College (Source: Dartmouth College)

In the interview, Murati traced her varied career from the aerospace industry, the automotive industry, and VR/AR to joining OpenAI, and drew on what she has seen and heard at the industry's frontier to discuss AI governance, AI's impact on education, and AI's impact on work.

In the interview, she revealed that PhD-level intelligent systems will appear within the next year or two, likely a reference to GPT-5. She also made a highly controversial claim: some creative jobs shouldn't have existed in the first place, and AI will soon take their place. The remark caused an uproar online, with critics accusing OpenAI of biting the hand that feeds it and of not understanding what creativity means.

Murati believes OpenAI's achievements rest on the combination of three factors: deep neural networks, large amounts of data, and large amounts of computing power. Researchers are still studying the principles behind them, but practice has shown that deep learning really works.

She said that AI safety and AI capability are two sides of the same coin: smart models can understand the guardrails we set for them. From an engineering point of view, improving AI capability does not reduce a model's safety. OpenAI bears much of the responsibility for model safety, but effective risk management also requires participation from society and governments. OpenAI is actively working with governments and regulators on AI safety issues.

The audience also put pointed questions to Murati. Asked about model values, she said OpenAI has embedded human values into its AI systems through reinforcement learning from human feedback, and that the future focus will be on offering customers highly customized value systems layered on top of the base value system.

Audience members also asked Murati about OpenAI's recent alleged infringement, and about licensing and compensation for content creators. Murati again emphasized that OpenAI did not deliberately imitate Scarlett Johansson's voice, and that her decision-making process for choosing voices was completely independent.

As for copyrighted content, OpenAI is exploring ways to let creators contribute copyrighted works to an aggregated data pool, evaluate the contribution of that content to the model's overall performance, and pay corresponding compensation. The technology is quite difficult, however, and will take time to implement.

Unlike OpenAI CEO Sam Altman, Murati previously had a low public profile. She was born in Albania in 1988 and went on to study in Canada and the United States.

She joined OpenAI in 2018 as one of its early members. As CTO, she led the development of ChatGPT, DALL·E, Codex, and Sora, while also overseeing the company's research, product, and safety teams.

Microsoft CEO Satya Nadella has praised Murati for combining technical expertise, business acumen, and a deep understanding of OpenAI's mission.

The following is a full compilation of Murati's in-depth interview at Dartmouth College (to improve readability, Zhidong has adjusted the order of some questions and answers and made minor additions, deletions, and edits without altering the original meaning):

1. I have worked in aerospace, automotive, VR/AR and other industries, and found that I am most interested in AI

Jeffrey Blackburn: It's fascinating; everybody wants to hear about what you're up to and what you're building. Maybe we should start with your story, though. You went to Tesla for a while after you graduated, and then OpenAI. Can you briefly describe that period and how you came to join OpenAI in its early days?

Mira Murati: I actually worked in aerospace for a short time after graduating from university, but I realized that the field moves quite slowly. I was very interested in Tesla's mission and the engineering challenges of building a sustainable transportation future, so I decided to join Tesla.

After working on the Model S and Model X, I realized that I didn't want to work in the automotive industry either. I wanted to do something that would really move society forward while solving some very difficult engineering challenges.

When I was at Tesla, I became interested in technologies like computer vision and AI and their application to self-driving cars. I wanted to learn more about other areas of AI, so I joined a startup, where I led engineering and product teams applying AI and computer vision to spatial computing, working on the next interface for computing.

At the time, I thought the next interface for computing would be VR and AR, though I think differently now. I thought that if we could interact with very complex information with our hands, whether formulas, molecules, or topological concepts, we would understand these things more intuitively and expand our knowledge. It turned out, however, that it was too early for VR.

But it gave me a lot of opportunities to learn about AI technology in different fields. I think my career has always been at the intersection of technology and application. This gave me a different perspective on how far AI has evolved and what it can be applied to.

Jeffrey Blackburn: So in Tesla's self-driving research, you saw the possibilities of machine learning and deep learning, and you saw where it was headed.

Mira Murati: Yes. But I didn't see it very clearly.

Jeffrey Blackburn: Have you ever worked for Musk?

Mira Murati: Yes, especially in my final year. But at that time, we didn't know exactly where AI was headed. We were still only applying AI to specific scenarios, not general ones. The same went for VR and AR. I didn't want to just apply these techniques to specific problems; I wanted to do more research, understand the principles behind them, and then apply them to other things.

That's when I joined OpenAI; its mission was very appealing to me. At the time it was a non-profit organization. The mission hasn't changed, but the structure has. When I joined six years ago, it was a non-profit dedicated to building safe AGI (artificial general intelligence), and it was essentially the only organization outside of DeepMind doing this kind of research. That was the beginning of my journey at OpenAI.

2. Three major technological advancements have made ChatGPT possible, and practice has proven that the model can deeply understand data

Jeffrey Blackburn: Got it, so you've been building a lot of things since then. Maybe you can give the audience some basics about AI. From machine learning to deep learning to AI today, these concepts are all interconnected, but they're also different. How did these shifts happen, and how did products like ChatGPT, DALL·E, or Sora come about?

Mira Murati: Our products are not completely new; in a sense, they are built on decades of collective human effort. In fact, AI got its start at Dartmouth College.

Over the past few decades, the combination of neural networks, large amounts of data, and massive computing power has produced truly transformative AI systems, models capable of performing general tasks. We don't fully know why it works, but deep learning works. We also try to understand how these systems actually function through research and tooling. Based on our experience building AI over the past few years, we know this path works, and we have watched the systems improve step by step.

Take GPT-3, for example, a large language model deployed about three and a half years ago. Its objective is to predict the next token, which is essentially predicting the next word. We found that if we gave the model the task of predicting the next token, trained it on a lot of data, and gave it a lot of computing resources, we got a model that really understands language, with a level of understanding similar to that of humans.

It has developed its own understanding of patterns in the data by reading a lot of books and information from the internet, rather than simply memorizing them. We also found that this kind of model can process not only language but also other types of data such as code, images, video, and sound. It doesn't care what kind of data we feed it.

We've found that the combination of data, compute, and deep learning works very well, and the performance of these AI systems keeps improving as we increase the amount of data and computation. These are known as scaling laws. They are not laws of nature, but statistical predictions of how a model's capability improves. That's what drives today's AI advances.
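
The scaling laws Murati mentions are usually written as a power law relating loss to compute, data, or parameters. As a rough illustration only (the numbers, variable names, and the fit below are made up for demonstration, not OpenAI's data), one can fit and extrapolate such a law like this:

```python
import numpy as np

# Hypothetical (made-up) observations: training compute vs. final evaluation loss.
compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])  # FLOPs (illustrative)
loss = np.array([3.20, 2.65, 2.21, 1.85, 1.56])     # eval loss (illustrative)

# A scaling law is typically a power law: loss ~ a * compute**(-b).
# Taking logs turns it into a straight line, so a least-squares fit recovers a and b.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), -slope

def predicted_loss(c: float) -> float:
    """Extrapolate the fitted power law to a larger compute budget."""
    return a * c ** (-b)

print(f"fitted exponent b = {b:.3f}")
print(f"predicted loss at 1e23 FLOPs: {predicted_loss(1e23):.2f}")
```

This is the sense in which scaling laws are "statistical predictions" rather than laws of nature: they are curve fits that have, so far, kept extrapolating well.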

Jeffrey Blackburn: Why did you choose a chatbot as your first product?

Mira Murati: In terms of products, we actually started with APIs, not chatbots, because we didn't know how to commercialize GPT-3. Commercializing AI technology is genuinely hard. We initially focused on technology development and research, and we assumed that once a good model was built, business partners would naturally use it to build products. We then found out that this was actually very difficult, which is why we started developing products ourselves.

So we started building a chatbot ourselves, trying to understand why a successful company couldn't turn this technology into a useful product. We eventually realized that this is a very strange way to build a product: starting with the technology rather than the problem to be solved.

3. Model capabilities and safety complement each other, and only smart models can understand the guardrails set by humans

Jeffrey Blackburn: Intelligence seems to scale with computing power and data: as you add those elements, it gets smarter. How fast has ChatGPT evolved over the past few years? And when will it reach human-level intelligence?

Mira Murati: Actually, these systems have already reached a human-comparable level in some areas, but there are still gaps on many tasks. Looking at the trajectory, a system like GPT-3 had perhaps toddler-level intelligence, while GPT-4 is more like a smart high-school student. In the coming years, we will see them reach PhD-level intelligence on specific tasks. The pace of progress is still very fast.

Jeffrey Blackburn: Are you saying that there will be such a system in a year?

Mira Murati: In about a year and a half. Perhaps by then there will be AI systems that surpass human performance in many fields.

Jeffrey Blackburn: This rapid development of intelligence has sparked a discussion about safety. I know you've been very focused on this topic, and I'm glad to see you're looking into these issues. But I'd really like to hear your point of view.

Suppose that in three years, when AI systems become extremely smart and can pass every bar exam and every test we design, could one decide on its own to access the internet and start acting autonomously? Will this become a reality? As the CTO of OpenAI and the person leading the product direction, do you think about these questions?

Mira Murati: We've been thinking about this for a long time. We will inevitably have AI systems that can take actions, connect to the internet, talk to each other, complete tasks together, or work alongside humans and collaborate with them seamlessly.

As for the safety issues and social impacts of these technologies, I don't think we can wait until problems arise to solve them; we need to embed solutions into the technology as it evolves to ensure these risks are handled properly.

Model capability and safety go hand in hand. It's much easier to tell a smart model not to do something than to get a less intelligent model to even understand the concept. It's like the difference between training a smart dog and a not-so-smart dog. Intelligence and safety are inextricably linked: smarter systems understand the guardrails we set better.

There is currently a debate over whether we should put more effort into safety research or into advancing AI capabilities. I think that framing is misleading.

When developing a product, safety and guardrails are of course taken into account, but in research and development the two actually complement each other. We think it's important to approach this scientifically: try to predict what capabilities a model will have before it is trained, and prepare the guardrails along the way.

So far, though, this has not been the norm in the industry. We train these models, and then capabilities emerge, what is called emergence. These abilities seem to appear out of nowhere, and we don't know in advance whether they will appear. While we can see performance gains in the metrics, we don't know whether that improvement means the model is getting better at translation, biochemistry, programming, or something else.

Doing rigorous science on predicting model capabilities helps us prepare for what's coming. Safety research and capability research point in the same direction and must advance together.

4. The risk of deepfakes is unavoidable, and only through multi-party cooperation can the problem be solved

Jeffrey Blackburn: Mira, there's now an AI-faked video of Ukrainian President Volodymyr Zelensky saying "we surrender," and a fake video of Tom Hanks promoting a dental ad. What do you think about that kind of thing? Is this something your company should control, or does it need to be addressed by regulation?

Mira Murati: My view is that it's our technology, so we're responsible for how it is used. But it's also a shared responsibility with people, society, governments, content producers, and the media; we need to figure out together how to use these technologies. For it to become a shared responsibility, we need to bring people along, give them access, give them the tools to understand these technologies, and provide the right guardrails.

I don't think it's possible to be completely risk-free; the question is how to minimize risk and give people the tools to do so. With governments, it's very important to do research with them and give them early access, so that governments and regulators know what's going on in the industry.

Perhaps the most important thing ChatGPT has done is make the public aware of AI, giving people a real, intuitive sense of what the technology can do and where its risks lie. When people try AI and apply it to their own work, they see that it can't do certain things but can do many others, and they begin to understand what that means for themselves and for the labor market as a whole. That lets people prepare.

5. Frontier models need more regulation, and predicting model capabilities is the key

Jeffrey Blackburn: That's a great point. These interactive interfaces you've created, like ChatGPT and DALL·E, give people an idea of what the future holds. I'd like to make one last point about government. You'd want some regulations laid out now, not a year or two from now when the systems become very smart, even a little scary. So what exactly should we do now?

Mira Murati: We've been advocating for more regulation of frontier systems. These models are very capable, and because of that, the potential harm from misuse is also greater. We have always been very open with policymakers and work with regulators. For smaller models, I think it's good to allow a lot of breadth and richness in the ecosystem and not discourage people from innovating just because they have fewer computing or data resources.

For frontier systems, where the stakes are much higher, we've been advocating for more regulation. And instead of trying to keep up with changes that are already happening quickly, regulation should anticipate what's coming.

Jeffrey Blackburn: You probably don't want the U.S. government regulating the release of GPT-5, do you? Having them telling you what to do.

Mira Murati: It depends. A lot of the safety work we're doing has been codified by the government in AI regulatory guidelines. We've done a lot of work on AI safety, and we've even provided the U.S. government and the United Nations with principles for AI deployment.

I believe that to do AI safety well, you have to be genuinely involved in AI research, understand what these technologies mean in practice, and then craft regulations based on that understanding. That's what's happening right now.

To get regulation ahead of these frontier systems, we need further research on predicting model capabilities so we can come up with the right rules.

Jeffrey Blackburn: I hope the government has people who can understand what you're doing.

Mira Murati: It seems that more and more people with a better understanding of AI are joining government, but it's still not enough.

6. All knowledge-based jobs will be affected by AI, and AI will make the "first draft" of everything easier

Jeffrey Blackburn: Of all the companies in the AI industry, and in the world at large, you probably have the best visibility into how this technology is going to impact different industries. It's already being used in areas such as finance, content, media, and healthcare. Looking ahead, which industries do you think will change dramatically because of AI and your work at OpenAI?

Mira Murati: It's a lot like the question entrepreneurs asked us when they first started building on GPT-3. People would ask me: what can I do with it? What is it for? I would say: anything, just try it. I think it's going to affect all industries; there's no field that won't be touched, at least as far as knowledge work and knowledge labor go. It may take a little longer to reach the physical world, but I think everything will be affected.

Right now we're seeing slower adoption of AI in high-risk areas such as healthcare and the legal sector, which is reasonable. First you need to understand and use it in low- and medium-risk use cases, make sure those are handled safely, and then apply it to high-risk ones. There should be more human supervision initially, and then a gradual shift toward a higher degree of human-machine collaboration.

Jeffrey Blackburn: What are some use cases that are emerging, coming soon, or that you personally prefer?

Mira Murati: My favorite use case so far is that AI makes the first step of everything easier. Whether you're creating a new design, code, an article, or an email, it has become easier.

The "first draft" of everything is much easier, it lowers the barrier to doing something, allowing people to focus on the more creative and difficult parts. Especially on the code side, you can outsource a lot of the tedious work to AI, like the documentation side. On the industry side, we've seen a lot of applications. Customer service is definitely an important application area for AI chatbots.

The same goes for analytical work, because now there are many tools connected to the core model, which makes it much easier to use and more efficient. We have code-analysis tools that can handle large amounts of data: you can dump all kinds of data into them and they help you analyze and filter it. You can use the image-generation tool and the browsing tool. If you're preparing a dissertation, AI can make your research faster and more rigorous.

I think that's the next level of productivity gains from models: adding tools to the core models and integrating them deeply. The model decides when to use the analysis tool, search, or the code tool.
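
The tool integration Murati describes is commonly implemented as a dispatch loop: the model emits a structured decision about which tool to call, the application runs it, and the result is fed back into the model's context. A minimal sketch under those assumptions (the tool names and decision format below are hypothetical, not OpenAI's API):

```python
from typing import Callable, Dict

# Hypothetical tool registry. In a real system, the model itself chooses the tool
# and its arguments, typically by emitting structured (JSON) function-call output.
TOOLS: Dict[str, Callable[[str], str]] = {
    "code_analysis": lambda arg: f"[ran code analysis on: {arg}]",
    "search": lambda arg: f"[searched the web for: {arg}]",
    "image_generation": lambda arg: f"[generated an image of: {arg}]",
}

def dispatch(model_decision: Dict[str, str]) -> str:
    """Run whichever tool the model decided to call and return its result,
    which would then be appended to the model's context for the next step."""
    tool = TOOLS.get(model_decision.get("tool", ""))
    if tool is None:
        return "no tool used; the model answers directly"
    return tool(model_decision["arguments"])

# Example: pretend the model asked for a web search.
print(dispatch({"tool": "search", "arguments": "latest scaling-law papers"}))
```

The point of the design is that the routing decision lives with the model rather than with hand-written rules in the application.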

7. Some creative work should not exist, and models will be excellent creative tools

Jeffrey Blackburn: As the model gradually watches all the TV shows and movies in the world, will it start writing scripts and making movies?

Mira Murati: These models are tools, and as tools they can certainly accomplish such tasks. I look forward to working with them to push the boundaries of our creativity.

How do humans think about creativity? We treat it as something very special that only a few talented people can access. These tools actually lower the barrier to creating and boost people's creativity, so in that sense I think models will be a great creative tool.

But I think it's really going to be a collaborative tool, especially in creative fields. More people will become more creative. Some creative jobs may disappear, but if the content they produce isn't high quality, maybe those jobs shouldn't have existed in the first place. I truly believe AI can be a tool for education and creativity, and that it will elevate our intelligence, creativity, and imagination.

Jeffrey Blackburn: People used to think CGI and similar technologies would ruin the film industry, and they were very scared. AI is certainly more consequential than CGI, but I hope you're right about it.

8. The impact of AI on jobs is not yet known, but the economic transformation is unstoppable

Jeffrey Blackburn: There are concerns that many jobs could be at risk of being replaced by AI. What exactly is AI's impact on people's work? Can you talk about that as a whole? Should people really be worried? What types of jobs are most at risk, and how do you see this developing?

Mira Murati: Actually, we don't really understand what impact AI will have on employment. The first step is to help people understand what these systems can do, integrate them into their workflows, and then start measuring and predicting the impact. I don't think people realize how much these tools are already being used, and there isn't enough research on that.

So we should look at how the nature of work and the nature of education are already changing; that will help us prepare for further improvements in AI capabilities. I'm not an economist, but I do expect a lot of jobs to change, some to disappear, and some new ones to appear. We don't know exactly what that will look like, but you can imagine that many strictly repetitive jobs will disappear, jobs in which people don't grow at all.

Jeffrey Blackburn: Do you think there will be enough other jobs created to compensate for the jobs that are gone?

Mira Murati: I think a lot of jobs will be created, but how many will be created, how many changed, and how many lost, I don't know. I don't think anyone really knows right now, because the issue hasn't been studied carefully, but it really needs to be taken seriously.

But I do think the economy is going to be transformed and these tools will create a lot of value, so the question is how that value gets used. If the nature of work really changes, how do we distribute economic value across society? Through public benefits? Through universal basic income (UBI)? Through some other new system? There are a lot of issues to explore and solve.

9. AI can make higher education inclusive and provide customized learning

Jeffrey Blackburn: Higher education probably plays an important role in the work you've described. What role do you think it will play in the future evolution of AI?

Mira Murati: I think it's important to figure out how to use AI tools to advance education, because one of the most powerful applications of AI will be in education, enhancing our creativity and knowledge. We have the opportunity to use AI to build very high-quality education that is widely accessible, ideally free to anyone in the world, available in any language, and sensitive to cultural nuances.

With AI, we can provide customized education to anyone in the world. Of course, in an institution like Dartmouth, the classrooms are smaller and students get a lot of attention. But even here, it's hard to have one-on-one tutoring, let alone anywhere else in the world.

In fact, we don't spend enough time learning how to learn, and when we do, it usually happens very late, in college for instance. Learning how to learn is a very fundamental skill, and a lot of time is wasted if you don't master it. With AI, the curriculum, course materials, problem sets, everything can be customized to a student's own learning style.

Jeffrey Blackburn: So you think AI can complement education even in places like Dartmouth?

Mira Murati: Absolutely, yes.

10. User feedback has shaped the system's base values, and a highly customizable layer is being developed

Jeffrey Blackburn: Why don't we start with a Q&A session?

Mira Murati: Okay, no problem.

Audience member 1: John Kemeny, one of Dartmouth's first computer scientists, once gave a lecture about how every computer program humans build embeds human values, whether intentionally or not.

What human values do you think are embedded in GPT products? Or to put it another way, how should we embed values like respect, fairness, justice, honesty, integrity, etc., into these tools?

Mira Murati: That's a good question, and a very difficult one. We've been thinking about it for a long time. Most of the values in the current system are embedded through data: data from the internet, licensed data, and data labeled by human annotators. Each source carries its own specific values, so what ends up in the system is really an aggregation of values. Once the product is out in the world, we have the opportunity to gather a broader set of values from more people.

We now offer a very powerful system for free through ChatGPT, which has been used by more than 100 million people around the world. All of these people can give ChatGPT feedback. If they allow us to use their data, we use it to build this aggregate set of values and make the system better and more in line with people's expectations.

But that is the default base system. What we really want is a customization layer on top of it so that each group can have its own values. A school, a church, a country, or even a state could supply its own more specific and precise values on top of this default system of basic human values and build its own system.

We're looking at how to do that, but it's genuinely hard. There are human problems, because we can't agree on everything, and then there are technical problems to solve. On the technical side, I think we've made a lot of progress. We use methods like reinforcement learning from human feedback (RLHF) to let people feed their values into the system. We've just released the Model Spec, which provides greater transparency and lets people understand the values in the system. We're also building a feedback mechanism, collecting input and data to evaluate progress against the Spec, which you can think of as the constitution of the AI system.

But this "constitution" is constantly changing, because our values have also evolved over time. It will become more precise. That's what we're focusing on. Now we are thinking about basic values, but as the system becomes more complex, we will have to think about more nuanced values.

Jeffrey Blackburn: Can you prevent it from getting angry?

Mira Murati: No, that's actually up to you. As a user, if you want an angry chatbot, you can have one.

11. Red-team exercises found no problems with Sky's voice, and OpenAI is studying how to compensate creators

Audience member 2: I'm really curious how you think about copyright and biometric rights (voiceprints, fingerprints, and so on). You mentioned earlier that some creative work may cease to exist, and many people in the creative industries are thinking about licensing and compensation for the use of their data, because whether a model is proprietary or open source, the data is taken from the internet. I'd really like to know your views on licensing and compensation, since that touches on copyright.

There's also the question of biometric rights, such as rights to one's voice and likeness. OpenAI recently saw controversy over Sky's voice, and this election year is also threatened by deepfakes. What do you think about these issues?

Mira Murati: Okay, I'll start with the last part. We did a lot of research on voice technologies and didn't release them until recently because they come with a lot of risks and problems.

But it's also important to bring society along: to give people access while setting up safeguards and controlling risks, and to let regulators and others study the technology and make progress.

For example, we're working with agencies to help us think about how AI interacts with humans. Now the model has sound and video, which are very emotionally resonant modalities. We need to understand how these things are going to play out and be prepared for these situations.

In this case, Sky's voice is not Scarlett Johansson's, and it was never intended to be hers. I was in charge of choosing the voices, while our CEO was having his own conversations with Scarlett Johansson. The two processes were completely parallel and did not interfere with each other.

But out of respect for her, we took it down. Some people hear some similarities, but these things are very subjective.

I think this kind of problem can be handled through red-team exercises (adversarial attack-and-defense drills): if a voice is judged to be very similar to a well-known public figure's voice, we don't choose it.

In our red-team exercise, however, this issue didn't come up. That's why it's important to run broader red-team exercises to catch these issues in advance.

Our overall approach to biometrics is to initially give access to only a few people, such as experts or red-team members, so we can understand the risks and capabilities well, and then build solutions based on that.

As we become more confident in these measures, we give access to more people. We don't let people use this technology to create their own voices, because we're still working through the risks and don't yet have confidence that we can handle abuse in that area.

We are comfortable, though, with the safety measures around the few voices currently in ChatGPT, which help prevent abuse. We started with a small-scale test, essentially an extended red-team exercise. Then, when we scale up to 1,000 users in an alpha, we work closely with those users to gather feedback and understand the edge cases, so that we're prepared when we scale to 100,000 people, then 1 million, then 100 million, and so on. It's all done under tight control, which is what we call iterative deployment.

If we don't feel a use case is safe enough, we won't release it to users, or we'll constrain the product in some way for that specific use case, because capabilities and risks go hand in hand.

We're also doing a lot of research on content provenance and authenticity, giving people tools to tell whether something is a deepfake, disinformation, and so on.

We've been working on disinformation since OpenAI's early days. We've built a lot of tools, like watermarking and content policies, that help us manage it. This year in particular, given that it's a global election year, we've intensified that work.

It's an extremely challenging area, though. As the makers of the technology and the products we have a lot of work to do, but we also need to work with people, society, the media, and content producers to figure out how to solve these problems.

When developing technologies such as voice and Sora, we start by working with red-team members to study the risks. Then we look at the technology with content creators to see how it can help them, and to build a product that is safe, useful, and genuinely benefits society. We've done similar studies for both DALL·E and Sora.

The issue of compensation and licensing is important and challenging. We've done a lot of work with media companies and also given people a lot of control to decide how their data is used in the product. If they don't want their data to be used to improve the model, or for us to do any research or training, that's totally fine. We don't use this data.

For the creator community more broadly, we give them early access to these tools so we can hear from them first, understand how they want to use them, and build the most useful product based on that input.

These are also research previews, so we don't have to spend a fortune building a full product. We only invest heavily in developing the technology once we're confident it will be genuinely useful.

We're also trying to create tools that allow people to be compensated for their data contributions. It's very tricky from a technical standpoint, and it's hard to build a product like this because you have to figure out how much value a particular amount of data creates in a trained model.

It's hard to estimate how much value individual data actually creates. But if we can create pools of aggregated data that people can feed into, it might be easier to measure that.

We've been trying these approaches for the last couple of years but haven't deployed anything yet. We've experimented with the technology and made some progress, but it's a very hard problem.
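
One simple way to frame the attribution problem Murati describes is leave-one-out valuation: evaluate the model with and without a contributor's pool of data and credit the difference in performance. The sketch below uses a toy classifier and synthetic data purely to illustrate the idea; it is not OpenAI's method, and doing this for large language models is exactly the hard part she mentions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in: three "contributor pools" of labeled data plus a held-out test set.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
pools = np.array_split(np.arange(len(X_train)), 3)  # indices owned by each contributor

def accuracy_with(indices: np.ndarray) -> float:
    """Train the toy model on the given subset and score it on the test set."""
    model = LogisticRegression(max_iter=1000).fit(X_train[indices], y_train[indices])
    return model.score(X_test, y_test)

baseline = accuracy_with(np.arange(len(X_train)))

# Leave-one-out value: how much does test accuracy drop without each pool?
for i, pool in enumerate(pools):
    remaining = np.setdiff1d(np.arange(len(X_train)), pool)
    print(f"contributor {i}: marginal value = {baseline - accuracy_with(remaining):+.4f}")
```

Aggregating creators' works into shared pools, as Murati suggests, makes this kind of marginal-value measurement more tractable than valuing each individual item.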

12. If I could go back to my student days I would put less pressure on myself, and it is important to broaden your knowledge

Audience member 3: My question is fairly simple. If you were back in college now, back at Dartmouth, what would you do? What would you major in, or what would you get more involved in?

Mira Murati: I think I'd study the same things, but with a little less stress. I'd still study math, but I'd also take more computer science courses. And I'd be less stressed and learn in a more curious, happier way. That's definitely more productive.

When I was a student, I was always a little stressed about what would come next. Everyone would tell me not to be stressed, but somehow I always was. When I talk to people more senior than me, they always say to enjoy learning, dedicate yourself to it, and stress less.

In terms of course selection, I think it's better to study more subjects and know a little about everything. I've found that valuable both in school and after graduation. Even now, working at a research lab, I'm always learning and never stop. It's good to have some understanding of everything.

Jeffrey Blackburn: Thank you so much, because I know your life is stressful. Thank you for being here today, and for the very important and valuable work you are doing for society. Thank you to everyone at Dartmouth. That advice to the students is a fitting note to end on. I want to thank everyone again for coming; enjoy the rest of your graduation weekend. Thank you.

Source: Dartmouth College
