
Silicon Valley legend Reid Hoffman's latest interview: AI won't replace people, but people who understand AI will be more competitive

Author: Wall Street Sights

On April 3, LinkedIn co-founder Reid Hoffman and Columbia Business School Dean Costis Maglaras discussed the future of AI and the digital economy at Columbia's new Distinguished Speaker series. Hoffman is a well-known American internet entrepreneur and venture capitalist: he co-founded the professional networking site LinkedIn, was a founding board member of PayPal, and is a partner at Greylock Partners, the prominent Silicon Valley venture capital firm, where he has invested in Facebook, Airbnb, Zynga, and other well-known technology companies. He is often called one of the most successful angel investors in Silicon Valley.

In this fireside chat, Hoffman discussed the prospects for AI technology and the problems it may encounter along the way, and, as a veteran venture capitalist, he also explained his investment philosophy and his views on the AI investment trend. Hoffman argued that AI will not replace humans, but people with AI skills will be favored in the job market; and that as AI technology advances, demand for deploying small models on end devices will grow, eventually leading to multiple models deployed together across different domains.

The core points are as follows:

1. The ability to apply computing at scale will continue to grow, and some capabilities will even grow exponentially.
2. Open-source software has a lot of value, similar to open science, but it should be used with caution.
3. Driven by demand, large and small models will coexist in the future.
4. In the next few years, demand for Nvidia chips will remain strong.
5. Training AI on data is a fair use of the technology; making AI benefit all of humanity matters more than data ownership.
6. Nuclear fusion and nuclear fission are key to tackling the climate problem.
7. Public-interest technology is important, and it remains an unsolved problem.
8. AI will not replace humans; rather, people with AI skills will be more competitive.
9. AI is very likely to drive industrial transformation, though the pace may be slow.
10. In management, soft skills may be the most essential abilities, such as maintaining a state of constant learning.

The following is the full text of the interview:

Opening Remarks:

Good evening, everyone. I'm Stephan Meier, the James P. Gorman Professor of Business at the Business School, and I'm excited to welcome you all to this fireside chat on shaping the future of artificial intelligence and the digital economy. This is part of a new speaker series meant to provide a platform for business leaders to share their insights on how to set and execute very ambitious goals, and how to inspire innovation. There is no better person to open the series than Reid Hoffman.

So I'm very honored to welcome and introduce our two guests: Reid Hoffman, co-founder of LinkedIn and Inflection AI and a partner at Greylock, and Costis Maglaras, Dean of Columbia Business School.

It's great to have you here, Reid. He asked me to keep his introduction short, but I've expanded on it a little. He is known as the most well-connected person in Silicon Valley, and I'm sure he's probably one of the busiest people as well. So thank you for being here. He is clearly a highly successful entrepreneur and executive who has been instrumental in building leading consumer brands such as LinkedIn and PayPal. As an investor, he has played key roles in many companies such as Facebook and Airbnb. He has also written not one but five best-selling books, although his last one was co-authored with a very powerful collaborator, GPT-4. I'm sure they discussed whether that counts as cheating or as smart behavior. He is involved not only in business but also in many charitable causes, for which he has received many awards. Most remarkable to me: he was made an honorary Commander of the Order of the British Empire by the Queen, which I think is pretty cool, and he received the Salute to Greatness Award from the Martin Luther King Jr. Center.

Welcome, Reid, to Columbia Business School. He will be in discussion with Costis Maglaras, the 16th Dean of Columbia Business School and the David and Lyn Silfen Professor of Business. He went from electrical engineer to business school professor to dean. Under his leadership there has been quite a shift in the school, especially in leading, and embracing, technology in education. Given his vision, he played a key role in the STEM designation of our programs, the launch of the dual MBA/MS in Engineering degree, and many other initiatives, including the Digital Future Initiative, which I care deeply about because I am one of its faculty co-directors.

This is our business school's new think tank. Its goal is to prepare for the next century of digital transformation, helping organizations, governments, and communities understand and benefit from current and future digitalization. So I'm very much looking forward to the discussion. Without further ado, let's welcome Reid Hoffman to Columbia Business School and hand the microphone over to Dean Maglaras.

Question:

Thank you, Stephan. That was a short introduction. It could have been shorter, but yes, thank you very much.

I've been thinking about this conversation, and it could go in many different directions. We could talk about the early days of the internet and PayPal. We could talk about building one of the most successful social networks, LinkedIn. We could talk about an incredible career as a venture capitalist in the Bay Area. But I think we should talk about AI, especially since you've been involved in that area in every way: investing, starting companies, advising the government. I thought it would be great to discuss this and hear your thoughts.

Now, we had a little conversation earlier, and you used the term "cognitive superpower." So the first question I want to ask: give us an overview of the phenomenal growth in the capabilities of these AIs. From your perspective, how do you think this will evolve? And then we'll move on from there.

Answer:

Fundamentally, I think what we're doing is creating a cognitive industrial revolution, a steam engine of the mind. The steam engine amplified physical power and inaugurated the industrial revolution, which gave us transportation and logistics, not just manufacturing. That's the analogy.

But here it's cognitive and linguistic capabilities. The algorithm that triggers this has been known for decades; what changed is scale. Once you can apply thousands or tens of thousands of compute units, GPUs, it changes the paradigm in which these systems are built, from programming them to having them learn. That's also part of why data and all the rest matter.
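To make "from programming them to learning them" concrete, here is a toy contrast (my illustration, not anything from the talk; it assumes Python with scikit-learn installed). The first function encodes a rule by hand; the second induces the same rule from examples.

```python
# Old paradigm: a hand-written rule.
def programmed(x: float) -> int:
    return int(x > 0.5)

# New paradigm: induce the rule from labeled examples instead of writing it.
from sklearn.linear_model import LogisticRegression

X = [[0.1], [0.4], [0.6], [0.9]]   # inputs
y = [0, 0, 1, 1]                   # desired outputs
learned = LogisticRegression().fit(X, y)

print(programmed(0.7))              # 1, because we coded the threshold
print(learned.predict([[0.7]])[0])  # 1, because the threshold was learned
```

Scaled up by many orders of magnitude of data and compute, that same substitution, learned behavior in place of hand-written rules, is the paradigm shift he's describing.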

We're still in the very early stages: these AIs, these agents, these models have learned some really interesting things, and we're just beginning to understand the relationship between data, training paradigms, computational scale, and so on.

But GPT-4, for example, has already given us superpowers. Say you want an essay on business strategy, plus its similarities to oceanography: there are maybe 20 to 30 people in the world who could write that. With GPT-4 it doesn't take anyone special; GPT-4 can do it.

Now, what you get out of these tools depends on how you use them. For example, in an educational context, if you say, write me a smart essay on business strategy, it's unlikely to be that good. I mean, it will be coherent.

If you say, I want to understand how the intersection of data and manufacturing will differentiate generic from specialized robots in some global supply chain, say these materials get more expensive and those get cheaper, you may get something much better. And it's useful in exactly these kinds of operations. This is what I call cognitive superpowers.

One thing I realized while writing this book is that our thinking patterns are changing; as we learn to use these tools thoughtfully, thinking becomes more like a video game. Instead of taking a long walk trying to land on that great idea, you sit down and start typing: okay, here's a prompt. That result isn't very interesting, so here's another one.

I think we'll gain more ability to think and reason well through this iterative process. And obviously there will be many different kinds of superpowers. For example, I have very limited artistic ability, but if I have an idea and can describe it, I can go to DALL-E or Midjourney and start generating, and expand in those areas too. So if I want to make a card for my friend's birthday, do something specific, and I have a visual idea, that's another form of superpower. These are just the beginnings, a gesture toward everything to come.

I think anything we do with language is, at a minimum, the starting point for this kind of amplification.

Question:

When you think about the speed of change, think back: a lot of the things you mentioned have been known for a long time.

In the last 10 years, we've started to apply computing at scale. Data is something we've started to use at scale in the last 10 years. The transformer was invented about 7 years ago. And in that time we've seen a dramatic increase in capabilities. Do you see this trend continuing?

Answer:

All exponential curves eventually become S-curves. I think this one will definitely continue for years to come. Anyone who claims to know for certain how long it will keep going is overclaiming. But, you know, eventually all of these curves flatten out.

Part of the S-curve question, and I think one of the mistakes some people make, is assuming these models will be super smart in three years. They look at this capability curve, driven by much larger computation, and say: this is an IQ curve. But it's not exactly an IQ curve. Capability and the inferences and judgments you make are not the same thing. Now, some capabilities are growing exponentially. But that's not the same thing as IQ.
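One way to formalize the point that exponentials become S-curves (a gloss I'm adding, not Hoffman's notation): a logistic curve with ceiling $L$, growth rate $k$, and midpoint $t_0$,

$$f(t) = \frac{L}{1 + e^{-k(t - t_0)}} \;\approx\; L\,e^{k(t - t_0)} \quad \text{while } e^{-k(t - t_0)} \gg 1,$$

is indistinguishable from a pure exponential in its early phase, because the denominator is dominated by $e^{-k(t - t_0)}$ until $t$ nears $t_0$. Early data alone cannot tell you where the ceiling $L$ is, which is why confident claims about when the curve flattens are guesses.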

Question:

You're sitting at a vantage point, investing across the entire AI ecosystem. What are your thoughts on open-source versus proprietary models, and on how we can accelerate growth and make all this more accessible?

Answer:

So, I've been on the Mozilla board of directors for almost 11 years, and at LinkedIn and elsewhere we've open-sourced a lot of different things. I generally think there's a lot of value in open-sourcing all kinds of software, similar to open standards and open science. But these models have a lot of capabilities. One issue is that if you open-source the models and make them universally usable, you put that capability in everyone's hands. Now you might say, hey, if we open-source them, that serves academics, entrepreneurs, governments. But the problem with open source is that once the model is out of the barn, it's out there, indefinitely. As we've seen, various open-source models are already being used to generate content that tries to disrupt our information systems. That's something we need to fight against. Now, I think we can use AI to help with this as well.

But the reason I'm more cautious about open-sourcing these models is that open-sourcing also amplifies the bad actors. Give a bad actor an open-source web browser and there's nothing especially harmful they can do with it. An open-source database, again, nothing special. These models, though, give them superpowers that can be far more harmful.

Question:

We've already had some models fully open-sourced and shared.

Answer:

So, some use cases, like mass political misinformation: yes, current open-source models can do that. Some others, like the increase we're seeing and will continue to see in cyberattacks, phishing and so on, are similar to misinformation. And then there are other areas. I think so far we haven't crossed the line into things like bioterrorism. But if you just keep open-sourcing everything, you'll get there, and you can't control these negative cases. Some of them are serious. So far there are problems, but no five-alarm fire. If we're not careful, though, we'll reach a five-alarm fire quickly.

Question:

Well, let me change the subject. We've gone down the path of large-scale computing and the large models you mentioned earlier: building foundation models with general capabilities and then deploying them in different applications. But another view is that instead of building large, general-purpose models, it's better to build smaller, application-specific models.

And, as you know from engineering, this debate has been going on for a long, long time. How do you see the current situation? There has been a lot of effort on both sides; what have you observed?

Answer:

So far, when you look at GPT-2, 3, 4, and so on: you fine-tune 3 for some use cases, and then you find that 4 out of the box performs better in most of those cases than fine-tuned 3, or even 3.5. So far there has been a virtue to scale: as the models grow, they become more robust, more capable, more like an instantaneous, on-demand research assistant. There are hallucination issues, although people are trying to fix those by other means, such as retrieval and search. The hallucination problem will never come down to zero, but you may reduce it to well below human error rates, at which point, for practical purposes, it's close enough to zero. So large-scale models have shown an amazing increase in capabilities.

Still, you might have reasons to want a small model and run it on your phone. It's less expensive to run. Maybe it only needs to do a few specific things, or it has a different training domain where you want better generation performance and a lower error rate, and you don't care about anything else. So I think the inevitable part of the future is not just one model: when you create an agent or an application, you deploy multiple models, as in the sketch below.
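A minimal sketch of that "multiple models per application" idea (my illustration, not Inflection's or anyone's actual architecture; both model functions are stand-in stubs):

```python
from typing import Callable, Dict

def small_on_device_model(prompt: str) -> str:
    # Stand-in for a cheap, narrow model quantized to run locally on a phone.
    return f"[small model] quick answer: {prompt}"

def large_hosted_model(prompt: str) -> str:
    # Stand-in for an expensive, general-purpose frontier model behind an API.
    return f"[large model] detailed answer: {prompt}"

ROUTES: Dict[str, Callable[[str], str]] = {
    "narrow": small_on_device_model,   # latency-sensitive, well-scoped tasks
    "open_ended": large_hosted_model,  # reasoning-heavy, broad tasks
}

def route(prompt: str) -> str:
    # Toy routing heuristic: short prompts stay on-device, long ones go big.
    kind = "narrow" if len(prompt.split()) < 8 else "open_ended"
    return ROUTES[kind](prompt)

print(route("Summarize this notification"))
print(route("How will rising materials costs reshape a global robot supply chain?"))
```

A real system would route on task type, privacy requirements, and cost rather than prompt length, but the structure is the point: several models behind one application, each chosen by need.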

Question:

Orchestrated together. Very good. Let's move on to the hardware side a bit.

You mentioned computing hardware, and I have Nvidia, GPUs, and large clusters in mind. I had a conversation with Jensen Huang five months ago, when his company had just crossed $1 trillion; now they're at about $2.3 trillion.

What do you think about the fact that, in a sense, these chips are not commoditized right now? And it seems TSMC has not extracted significant economic benefit from its ability to produce for such ultra-frontier clusters. Do you think this will continue? Is hardware an enduring enabler of this revolution, or will it become commoditized over time?

Answer:

I think Nvidia has done a lot of great work. They didn't create GPUs specifically for AI, or for cryptocurrencies before that. The GPU happens to be a very good parallel math processor, and that coincided with those use cases.

I think it's one of the good aspects of capitalism and invention. Competition is inevitable for Nvidia; there's nothing structural to stop it, right? They've done great work, there's a great team, a great culture of architecture and design. So I think Nvidia chips will continue to be in high demand over the next few years. But, you know, a lot of effort is going into creating alternative chips, which is part of what happens when you have market demand. So maybe a year or two from now, you'll start to see chips that may not yet be suitable for training but will help with what our industry calls inference, which is serving the model and its results. I'm seeing a lot of startups pitching this, and a lot of big companies finding interesting ways to do it.

Question:

So, for training you might need cutting-edge hardware, but for inference, once I've trained the model and I'm querying it for a response, I might need special-purpose hardware that is simpler and different.

Answer:

Yes.

Question:

Okay, I want to pivot a little to data. These systems are ingesting a lot of data, including our data. The question here is whether we are depleting the supply of new data available for them to ingest.

Answer:

I don't think that's true.

Question:

But another issue is the issue of data ownership. What are your thoughts on that? I mean, you've definitely been thinking about this from all angles, including you know, all the things that OpenAI is doing. So what do you think?

Answer:

So data is a complex issue that most people haven't thought through very well. There are questions like: a camera in this room takes a picture of me. Is that your picture? My picture? Our picture? That kind of thing is bound up with data. I might have signed some waiver, so it's probably your picture, but you know, it's a complicated thing.

What is the value of the data? What is the value of what's produced from it? My first thought about all of this is: when you're training, these models are reading, right? So the rules that govern training data should be the same rules that govern reading, namely whether you have the legal right to read what you're reading. Like whether you bought the book, and so on.

If you have that right, there's no problem, because that's just reading. Copyright law doesn't stop me from buying a book, or from selling it to you so you can read it. That's all part of it. What it stops me from doing is buying the book and saying, oh, I'm going to rewrite this book and start selling it myself. So I think that's the nuance you want with data. Obviously, there are all sorts of cases where you'd say: okay, some of this is my private data that I don't want leaking anywhere. These generative models aren't good at knowing when to hold things back, so don't feed that data into the general-purpose model in the first place.

Part of the reason is that a generative model is really an inference engine. People frequently think of it as a database, but while it ingests a bunch of data to get there, what you end up with is an inference engine. So one of the things I find funny about the New York Times legal case is: look, it repeats these articles, but what happened is you copied and pasted the first half of the article and then said, finish it. Yes, it has learned something about that text. But the person who has the first half of the article probably has access to the entire article, so I'm not sure there's any harm there. If you gave it only the headline and it generated the article, then you'd say: okay, you've given away something the New York Times sells, to a person who hasn't bought it. That would be a problem.

I don't think these models will do this, because they've been trained not to. So there's a lot of complexity when it comes to data. But I think, for example, if you train on publicly available places on the internet, like I post something and say, please read this, and an AI model reads it and is trained on it, that is, in my opinion, fair use of the technology. And I think it matters, because we want these models to exist. One of the things we talked about just before coming on stage is that with these AI models, we know how to create a medical assistant that could run on every smartphone, whether or not you have access to a doctor. It can even be trained to ask: hey, do you have a chance to see a doctor? If you do, great; let me tell you the things for which, based on what you told it, you should see a doctor right away, or, you might be okay, but here are things to check with your doctor. And if you don't have access to a doctor, it can say: look, I'm not a doctor, but here's something you might consider. These could be amazing mentors, among other things. Or say you can't afford a lawyer and you're looking at something like a contract; it's a good thing that there's something that can actually help you deal with it.

So I generally think we should want these models to be trained. Our main problem is not whether we should train them, but making sure they can be used by as many people as possible, to help humanity as a whole, not just rich people or rich countries. In any case, that's only a first pass at the data question; it's obviously very complex. It's an evolving topic, let's put it that way.

Question:

Yes, it's an evolving topic, suffice it to say. You mentioned OpenAI. You were one of the original early investors in OpenAI, right? And then you created Inflection AI, and you've made all sorts of investments in the space. What motivated you to help found OpenAI, and then to keep digging deeper? How do you evaluate AI investments? I mean, you've been in this particular space for about eight years now. Was the OpenAI investment in 2015 or 2016?

Answer:

I'm not good at remembering dates, so I'd need to consult the records to be accurate. At first, I thought I was going to be an academic. While studying philosophy at Oxford, I decided I could have a bigger impact on the world by helping to create software. I never thought I'd be an investor. Becoming an investor wasn't a goal in itself; it was my way of helping to build the right things. So in the beginning it was more about entrepreneurship and product creation. And when I look at these technologies, I generally operate as someone whose expertise is in software.

I've also made some non-software investments, but they've been almost philanthropic in spirit. For example, I think nuclear fusion and nuclear fission are key to fighting climate change, so I've made some investments in those areas. Sometimes you see a product that you think should exist in the world, so you invest. I write those investments down to zero in my own ledger the moment I make them, because I have no idea how to predict their outcomes. I certainly want them to have economic value. But in the software space, it started with the internet, then Web 2.0, then Web3 and artificial intelligence. I look for things that can have a huge impact on individuals, groups, societies, even humanity and how the world should work. If there's a way to make a real, valuable change happen in an industry, and there's an entrepreneur, man or woman, with a great plan, the right resources, and the right timing, that's the moment I invest.

Now, for OpenAI, it all started with some discussions with Sam Altman and Elon Musk. We realized that the AI revolution was coming, and that human-beneficial AI shouldn't just be the preserve of big tech; it should be committed to the good of humanity. I'm not against big tech companies; they do a lot for humanity. But public-interest technology is really important, and it's still an unsolved problem; that's part of why I've been on Mozilla's board for over 11 years and still sit on it. So: let's help kickstart this. At the time we thought, maybe there's something here. Part of venture capital, like a seed round or a Series A, is an idea that might work, right? Let's try it. And one of the things I've really learned as a venture capitalist is that what makes Silicon Valley such an exciting place is its dense network: we all exchange information about what works and what doesn't at a very fast pace, so the entire ecosystem learns. That's reflected in how capital is raised, from seed to Series A, Series B, Series C, and so on. When you pass a few hurdles and show that an idea has a higher probability of working, you move to the next, bigger funding round, at a higher valuation, and so on. And you build a network of people watching it and investing in it, people who have opted in: investors, customers, partners, all supporting it.

So OpenAI's initial idea was that AI at scale might lead to some interesting results. We didn't know; let's try, and make sure its governance puts human considerations first, which is why it's governed by a nonprofit. You know, give it a kickstart.

Question:

How have you evaluated AI investments over the past few years?

Answer:

I'm not sure there are any software investments right now that don't brand themselves as AI investments, which is interesting. It's a lot like the early internet: some of it will be pretty amazing, and a lot of it will look a little crazy and never come to fruition, because the founders haven't really thought through the strategic landscape.

Yes, I think the companies that survive and thrive will, in the big picture, be largely positive and connected. You know, I'll focus on a few things and see how they do. As you can imagine, some startups can have potential negative effects. But remember: investors don't like to be associated with that, employees don't like to be associated with that, and customers don't like to do business with that. There's a lot of network governance. When we think about how to be responsible to humanity, it's not just a matter of voters going to the polls. It also involves customers, employees, investors, the media, and so on, all of which form a network of governance.

So I think when you go through that process, you usually get a broadly positive outcome. Not always, but usually. I think we're going to see a shift in anything that involves cognitive tasks, anything that involves language. I think we're going to see new types of drug discovery. One of the things I told Stanford's long-range planning committee seven years ago was that I saw plausible paths for AI to act as an amplifier in every academic field, maybe even in theoretical physics. If you want to do the exercise yourself: each discipline can take advantage of a specialized search engine; now imagine something 1,000 times better than that. That would be a useful AI tool. That doesn't mean it will write the papers. Obviously it may in some ways, but combined with human understanding of the concepts, the papers get better.

Question:

I wanted to change the subject. We have mostly MBA students here, and I want to turn to leadership, especially managing the explosive growth and scale of these companies. You did it: you started one of the most successful social networking companies, LinkedIn, and led it through explosive growth. What comes to mind when you think about that?

Answer:

As you may know, I wrote a book called Blitzscaling: The Lightning-Fast Path to Building Massively Valuable Companies. It's based in part on the world as I found it.

Whatever your view of entrepreneurship, everyone should think of their working life and career as the entrepreneur of their own life. That doesn't mean you have to start a business; probably you won't. But you should think about your career path like an entrepreneur. That's the first point.

Then there's how that interacts with companies and corporate organizations. So what is blitzscaling? It's something Silicon Valley, and to some extent China, understands and most of the rest of the world doesn't. It's how you go from an idea to an industry in a globally connected world, and what makes that journey atypical. There are a lot of principles in the book, like embracing chaos, having a bias for speed, and fixing what breaks as you move forward. There's a chapter on responsible blitzscaling, making sure you don't destroy something really important. And when you're building internet software, and of course mobile, you're actually competing with the whole world, not just the people next door or down the street. That's one reason it's so critical to be part of an ecosystem that understands speed and cadence, and that knows how to solve a key marketing problem or build modern technology to do something.

Question:

How do you manage people in this process when you have this vision and are trying to lead a company through this phase of explosive growth?

Answer:

Well, there's a whole set of principles in the book. For example, think of the pace of scaling in roughly tenfold steps of headcount: 10, 100, 1,000 employees, and so on. How does the organization change at each step? Because, by the way, some of these companies change by an order of magnitude in a year; I've seen companies go from 20 people at the beginning of the year to 800 at the end. So how do you do that? Part of what you realize is that you're not looking for perfection or a stable organizational structure. You realize that some of the key leaders from the early stage may not be the right leaders at the next stage.

So here's a very micro but very important piece of advice for this kind of scaling. Say you're the product owner in my 30-person organization. You don't say: as long as you perform well, you'll still be the product owner when we're a 1,000-person organization. Maybe they will be, maybe they won't. What you say is: as long as you do well, your job will keep getting bigger. And by the way, when you jump from a 30-person organization to a 1,000-person one, every area of responsibility is bigger.

So your job gets bigger; that doesn't necessarily mean you're still the product owner. In fact, at that pace, each time you jump a scale level, usually more than 50% of the executive team changes, and you have to be prepared for that dynamic. You have to be prepared for errors of judgment along the way, and even when someone who did well before no longer fits the position, you have to have the courage, on the basis of honest early promises and trusting relationships, to make the change. The book is full of this, because these are the things I learned. One of the first places I learned about blitzscaling was PayPal.

Question:

What do you think are the most important soft skills in this field, what should we teach, and what should we aspire to have?

Answer:

Fred Kofman has a book I really like called Conscious Business, which treats management as an exercise in compassion, not just for the individual you're dealing with but for all the people around them. Say a doctor is giving very bad diagnoses. Firing them will be painful for them. But remember all of their patients, all the people they're treating, and have compassion for them as well. So you need that broad, across-the-board sense of compassion.

I think the most important things probably are soft skills. That's the hallmark of a good learning institution: always learning, right? Part of it is recognizing that when you move fast, you will make mistakes. One thing I say a lot in startup and rapid-growth environments is: this decision is my job and my judgment call. I may not be right, but we have to make a decision and move on, so everybody has to get on the same page. And I'm not saying that those who disagree are necessarily wrong; we have to make this decision in order to function well. It's in the Blitzscaling book, though I don't remember which chapter; that was a few books ago.

The OODA loop is one of those Silicon Valley terms, and it comes from fighter pilots: observe, orient, decide, act. It's taught because in air combat, the fighter pilot with the faster OODA loop survives, and the other may die. So you're really trying to tighten your OODA loop.

Silicon Valley is one of those places where people talk about the OODA loop of individuals and companies, and it has to work well, because the competition is fierce and fast. People understand that behind every big startup that comes out of Silicon Valley there are dozens to hundreds, sometimes thousands, of competitors; in China, by the way, in the thousands. The companies that stand out have a fast OODA loop and are very aggressive. So you have to be able to do that, to instill that culture in other people, and to handle the complexity of making all these very fast decisions.

So, for example, among the relatively soft skills of leadership, embracing chaos is the first lesson: getting everybody to understand the counterintuitive rules of rapid growth. If everyone understands them, nobody expects to be perfectly informed. We make some inefficient decisions because we have to move fast; we make the decision and we learn from it. We do this together as a team. So always learning is a key part of the process.

Question:

Okay, I want to change the subject a little before we move on to the Q&A session and talk about AI, the future of work, and society. It's a big topic with interesting ideas coming from everywhere. First, what are your thoughts on the impact of AI on society in the next three to five years? Then we can get more specific. I don't want to dwell on the book, but Reid just wrote a book with GPT-4 in two months, actually two and a half. So this tells us what we're able to do now, not just in the near future. What are your thoughts?

Answer:

Obviously, people like to make a big fuss about jobs being replaced. I don't want to oversimplify, but over time our human organizations usually change much more slowly than the technology becomes available and the work changes. If you have a job that basically asks a human to imitate a robot, a robot can usually do it better. But mostly what happens is a shift. For example, look at a company and assume the tools three years from now can create 2x to 4x better performance per job. In sales, would you fire a salesperson? No, you'd love a 2x or 4x performance boost. So it's not about replacing humans; it's about preferring humans who use artificial intelligence.

Marketing is a competition between companies, and the composition of some jobs will change. If your job, for example, is feeding forms into an advertising system, acting like a robot, that part gets greatly accelerated. But how do we position ourselves, create emotional connections, build brands, explore different approaches, introduce new kinds of marketing like content marketing? So when you look across all these sectors, the conclusion isn't that we're reducing human work. We're preferring humans who use artificial intelligence.

Question:

So, that's the point I'd like to pick up on. Yes: we need to educate people to become people who can use and consume AI intelligently.

Answer:

Yes, exactly. Look, even customer service is often: here's a script, follow the script, behave like a robot. Those robot-like jobs will shrink. But maybe customer service now becomes about how you build relationships. You have an AI helping you solve the problem: my item arrived but it's broken, or I don't know how to use it, and the AI helps. But then it says: hey, do you want to get more involved with our company? And that moves to human-assisted AI, or AI-assisted humans. So, maybe it's speculation, but the work will change. Some tasks get greatly accelerated; others become new. And, you know, the same happens in educational institutions.

Question:

That's true. But what do you think about the speed of change? Societies are often very good at adapting when the rate of change is intergenerational, but change within a single generation is often difficult.

Answer:

Well, it has never been easy, and parts of society keep accelerating. Back in futurism and postmodernism, people already thought we had reached maximum speed, and we're much faster now than we were then. You can launch a new product on the internet and reach billions of people in a matter of days. Actually, it's usually not that simple, but it can be done. That pace is new and challenging. That's one reason I'm glad you brought this up, because I'm not trying to say the transition will be completely easy. The good thing is that AI can help us with the transition. For example, say we're building self-driving trucks right now. Even though we have a shortage of truck drivers today, if every manufacturer started making self-driving trucks, it could still take 10 years for them to replace more than half the trucks on the road. But the truck driver finds out and says: wait, this is a job I love, and these jobs are dwindling and disappearing. And you say: look, this happens. It also, by the way, makes roads safer, traffic management greener, and many other things.

But here's an AI that can help you figure out other jobs you might enjoy, help you learn to do them, and help you do them. So I think this kind of transformation is very possible, but the speed of the transformation is the hard part. Specifically, part of the education system is built on an industrial model: you get trained, and then you go to work. Now it's: you have to keep learning. The training you got today will need updating in five years if we keep making progress. Instead of learning only through work experience, you have to keep learning all along.

Question:

Okay, good. I want to turn briefly to policy, which is another area you're engaged in. In particular, we're thinking about domestic policy, and a little about AI and geopolitics. And beyond that, what is the role of these technology companies in educating us, and what is good policy versus restrictive policy that might actually stifle innovation, something you've argued strongly against? What's the state now, and what do you think we'll see in the next few years?

Answer:

Well, the set of tools we have for doing big things and mitigating bad things will only grow. For example, as AI models get bigger, we're actually finding it easier to align them with human interests. Say somebody comes to the AI and says: I'm really frustrated, I'm thinking about self-harm. Instead of replying, oh, here's a great website on how to self-harm, it says: wow, that sounds really hard. Have you talked to someone? Have you thought about talking to someone?

And, you know: I think you might be able to get through this. It responds in a more aligned and helpful way. That's part of why I'm so eager to move into the future. Whereas if you start imposing, let's slow down, let's stop now, that's actually harmful.

For example, if we can have a medical assistant on every phone, no, we shouldn't slow that down; we should get it to everyone who can reach a phone somehow, maybe the village's phone, maybe a neighbor's phone, something like that. Obviously, speed isn't everything: if you're trying to get from point A to point B, that doesn't mean crawling along at 5 miles per hour, and it doesn't mean you don't want navigation. You slow down at the bend so that, right there, you don't go off the cliff.

You might say, by the way, this is called progress. Yes, the intermediate period is difficult; you have to figure out the new thing. But the truth is, we've been doing this for decades, and we're used to it. This is the weaver's complaint about the loom, right? The weavers said: we are very content with our weaving.

And you say: yes, but with the loom we can have more clothes for everyone, and that's a good thing. We must help humanity through this transition in every way.

So when I think about policy, the question isn't the one we reach for too often and too naturally, how do we slow down, how do we stop? The question is: how do we get to the right place? What do we do? For example, I sometimes sit with politicians in this country and ask: do you want manufacturing to come back?

Yes, they say, those are great middle-class jobs and so on. Good. What's your industrial policy? Protectionism doesn't really work; maybe it works for 10 years, and then you hand a worse future to your children.

Artificial intelligence and robotics are the best way to revive it, full stop. But won't that be a fully automated factory? Look, even if it's largely a robot factory, we have other interesting opportunities. In fact, when you look at Amazon's fulfillment centers, they've become more automated, and they do deliver more packages per worker; that's productivity and progress. But they've also increased the number of employees, which is part of how capitalism progresses. I think that's what we should be focusing on.

Question:

What specific things do you think our policy discussions need to focus on in the coming years?

Answer:

Take medical assistants, for example. Right now, most model builders try to stop their models from giving medical advice in any specific way, because they don't want the liability. Things like GPT-4 tend to steer away from it unless a medical professional is present and involved. If I were a proactive policy person, I'd say: look, here are the lines you have to draw. You have to say, I'm not a doctor. You have to ask, can you see a doctor? You have to say, I'm not sure, you really should seek medical advice. Very cautious. But on that basis, you can give some answers, and we should follow up and see how it works; then you can start putting a medical assistant on every phone, right?
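As a rough illustration of the disclosure lines he's describing (my sketch, not any actual product policy or regulatory text), such rules could be enforced as a wrapper and an audit check around the model's raw answer:

```python
REQUIRED_PREFIX = "I'm not a doctor."

def wrap_medical_answer(model_answer: str, asked_about_access: bool) -> str:
    """Wrap a raw model answer so it satisfies the disclosure policy."""
    parts = [REQUIRED_PREFIX]
    if not asked_about_access:
        parts.append("Do you have a chance to see a doctor?")
    parts.append(model_answer)
    parts.append("If you can, you really should seek medical advice.")
    return " ".join(parts)

def passes_policy(answer: str) -> bool:
    # Audit check: does an outgoing answer contain the mandated lines?
    return answer.startswith(REQUIRED_PREFIX) and "seek medical advice" in answer

raw = "Based on what you described, rest and fluids may help."
wrapped = wrap_medical_answer(raw, asked_about_access=False)
print(wrapped)
print("policy ok:", passes_policy(wrapped))
```

The substance here is his: mandated disclaimers plus follow-up review of outcomes. The code just shows such rules are mechanical enough to legislate.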

Because, you know, I'm personally a social progressive; I believe health care should be provided by society, and it shouldn't have to come through your employer. There are a lot of people here who don't have insurance, which means they can't get care. Well, this might be a way to start helping them. That would be proactive behavior that actively achieves positive outcomes, something that can be done at the policy level.

Question:

I've received the signal to move to the Q&A session, but first I'd like to ask briefly for your thoughts on AI and social networks.

Answer:

I'm a LinkedIn advocate, obviously. One thing technology companies need to realize is that we're not just offering products to individuals; when you reach a certain scale, you also have to treat society as a customer. When you see, through things like the Dominion lawsuit, Fox's opinion commentators texting each other that what they're saying on air is wrong, that's a serious problem, and it's something society should understand. You want a learning ecosystem that gets you past that. That doesn't mean you appoint a truth provider, like, I'm going to tell you on LinkedIn what's true and what's fake. That's the challenge. What you want is a learning ecosystem, and that's why, in almost any system or institution, how we arrive at truth judgments matters.

We have human panel discussions. We have science, academic journals, and critics. We do it with juries, we do it in scientific research, all these things. So how do we deploy these as learning systems? That's what we should be trying to achieve.

Question:

Thank you for being here today. I'm new to this field, with only one year of experience. My question is: what would you do if you were 30 years old today?

Answer:

If you're willing to take risks, make investments, and keep learning, this is going to be a moment of opportunity.

You should take advantage of today's opportunity. One of the things I say about the entrepreneurial perspective is that people underestimate certain decisions. One such decision is to invest in soft assets rather than hard assets: the network around you, knowledge, and so on. Most people have heard that investing in knowledge is a good thing. Commit to networks and industries more than to specific companies. A company may be great, but ask which networks and which industries are going to amplify and grow, and then do whatever it takes to join them.

For example, I'm not unhappy with the end result of my career and what I ended up doing. But if I look back at what would have been a smarter decision: I left Apple for Fujitsu because I wanted to be a product manager; I felt I had to get product management experience. Actually, the smarter decision would have been to go to Netscape, because the online revolution was beginning, and being part of the online revolution mattered more than the specific job title.

So choose the networks and industries that are right for you and for what you want to do. You know, I'm not going to name names and make some people unhappy with me, but there are some industries that are declining. Be serious about that.

Maybe avoid those, or at least think about it. If you still want to work in that industry, great. But realize that you'd be choosing an industry whose trend is blowing against you, not at your back. Within that set of choices, it's now clear that software technology and artificial intelligence are part of the cognitive industrial revolution.

The other approach is to ask: where do I hold an unorthodox but correct view? Where do I have an interesting contrarian thesis that lets me stand out and do something extraordinary? For example: maybe not many people are paying attention to AI drug discovery right now; I'll do that, I have a background in biology. I don't know what your background is, so I'm just giving an example. And the perspective I described at the beginning is: you have your set of resources, you have your ambitions, you face the realities of the market; where can you build products with the biggest competitive differentiation? That's what you should focus on. Does that answer the question?

Question:

Hi, I'm Chelsea. Thank you so much for sharing. I'm curious, what is your investment thesis, what type of company do you like best, and what drove your decision to invest in Airbnb?

Answer:

Let's talk about Airbnb, because it's relatively simple and there's an interesting story in it. The first person to pitch me Airbnb described it as couch surfing. That kept me from meeting the founders for a year, because I thought: ah, couch surfing isn't a very good idea; it won't scale. Meanwhile, everybody kept telling me these founders were great.

The first lesson I learned: don't let someone who isn't the founder oversell the company to you, because if that person gets the pitch wrong, it creates a negative impression.

Within three minutes of meeting the three founders, I said: okay, I'll make you an investment offer; come in on Sunday and present to the partner team. So they presented to the partners. David Sze was LinkedIn's most valuable board member and the reason I joined Greylock. We discussed the presentation after the founders left. David looked at me and said: every VC has to have a deal that fails; Airbnb can be yours, right? We talk very bluntly to each other at the partner table, and I thought that was funny. If anything, it increased my interest in the investment.

But six months later, the data told the story. David came to me and said: well, you were totally right. David is a wonderful learning machine. He came to me and asked: you were right and I was wrong; what did you see that I didn't? I said: look, everything you said was right. The incumbents would hate it, especially the hotel industry. Cities wouldn't want to rezone for this kind of thing. Neighbors would feel uncomfortable. Maybe something bad would happen.

All of that could have killed the investment. But they had good plans, and this was the way the world should be: travelers suddenly get a more unique experience, connected to the local community, wherever they are. Hosts can become small entrepreneurs, offering rooms and apartments. They can innovate in ways hotels don't. It can be cheaper, or more expensive and more pleasant; it can span the entire range. That's what the world should look like. So I thought: if we can navigate those risks, we can create something truly amazing. And these three founders seemed to have the qualities that were needed. There's always a risk factor.

That's a reasonably comprehensive description of how I view investing. I can't comment on recent investments because they're still confidential, but that's why I answered with the Airbnb story.

Question:

I have two questions. First, do you think there are any valuable arguments for open-source AI? And second, do you think policymakers are getting ahead of the technology so they can regulate more sensibly, unlike, say, what happened with the online ecosystem?

In 2016, you know, those bad guys were already there, and we let some of the effects happen without anybody really noticing. Do you think policymakers are paying more attention to what's being built and really understanding what's happening?

Answer:

Let's see. On the first question, I do think that, in general, open-source software is in many ways beneficial. I think it's perfectly fine to have some different kinds of small open-source models. It's good for promoting entrepreneurship, empowering academic work, and enabling openness and scrutiny, because you can look at the model and know what it is. But, by the way, these models can be retrained as soon as they're out in the world. I can do safety training and then release the model, and the safety training can be undone. So you might say: well, I trained it not to tell you how to make anthrax. Yes, and I can counter-train it so that it does.

And then all of a sudden you have more people who know the recipe for anthrax, which is not a good thing for public health. So that's roughly my position on open source. I'm trying to figure out how you get part of the benefit of open source rather than 100 percent of it: how you get broader access, which is good, but not access for terrorists or crazy people.

Now, the second point. The pattern is that you build a technology, do a bunch of good things with it, and then run into some challenges. At that moment, in hindsight, everybody says: well, it's obvious how you should have regulated back then; you should have done it. But if you try to regulate beforehand, your view of what the real problem is and how to navigate it, even among experts, is almost certainly inaccurate. So you may block a lot of good things, and maybe you block some bad things, but you also slow progress dramatically.

For example, with social networks, we had the incident in New Zealand, and you say: I want fewer murders shown. So what you do is say: you have to audit for this, you have to run your own review, and we'll have a penalty structure. If you accidentally show a murder, it could cost $10,000; if you show 100 instances of the same event, well, that's a million dollars. And, by the way, the tech ecosystem will find a way to keep it very rare. That's the right way to think about these regulatory issues.

But of course, the hard work you have to do is think about which outcomes you're trying to avoid. That's the real work, not just saying stop until you know it's perfect. By that standard of evaluating technology, aspirin wouldn't be approved today, and cars wouldn't be approved, if they were being introduced from scratch. So you have to say: no, how do we learn and iterate as we go, adding seat belts incrementally? That's the discussion we need to have.

Question:

My question is about Inflection AI. I think Inflection's model is probably the best large language model at understanding and expressing emotion. I'm curious what the secret is to getting there, and why GPT-4 and Gemini can't do the same thing.

Answer:

The best answer I can give is that it's our trade secret, though I think it's reproducible. Others will get there too; one of the things about technology is that once people see something done, they realize they can produce it as well. But it took the work of many very smart people over many years.

Question:

My question is about your background. You mentioned that you studied symbolic systems and philosophy. I'd love to know why you chose to study those, why you then moved to entrepreneurship, and how your background makes you a better investor and entrepreneur.

Answer:

I don't think investing is just analysis: discounted cash flow, market growth, CAC and LTV, all of those things. They matter, but what really matters is imagining what the world can become; it's a lens of possibility. It's a question of what technology you might be able to build, how the team operates, how it scales, and a few other things. You know, when I was teaching some classes with Sam Altman, Sam asked me: what do you believe that most people in this room don't? My answer: to be a good entrepreneur, you need a clear theory of human nature. Then, when you're building your product and thinking it through, you say: here's my take on human nature.

For me it's: I believe people will respond well to my product because it will help them elevate themselves and get better. And this is one place where philosophy is useful. As for symbolic systems, I actually came to philosophy through asking what symbolic systems had to teach: how to think about thinking, how language works, and how to be precise about the artifacts you create. I think that matters more than many other things when you're doing technological innovation.

Now obviously, you also have to understand some of the technology to see what's coming. Then there's the question of valuations and bubbles: VCs, and now AI. What happens is everybody says, oh my gosh, it's like the internet, amazing technology is coming. People start investing, and valuations, especially from an investor's point of view, stop making sense in terms of discounted cash flow analysis and everything else.

Part of the question is what time frame you should consider and what compounding does. I think there are a lot of crazy deals, and some of the crazy deals are at crazy valuations. But it's also that people know you may create a multibillion-dollar company in a relatively short period, and you're making a venture bet on that now. As an investor, I'd want a lower valuation; the market pushes valuations higher. That's good for entrepreneurs, and ultimately it's something I love, because that's how things get created; investors follow along and try to help where they can. So anyway, that's a simplified answer to why it's almost always in technology that classic investors say: this is crazy, everything is bid up to very high prices.

Take Tesla. Why is Tesla's valuation higher than all the other car companies combined? I'm not saying it should be; I'm saying it's a question. Tesla's investors broadly believe that automobiles and automated transportation are shifting from a mechanical-engineering paradigm to a software paradigm, that the existing companies won't survive that shift, and that Tesla will be the great car giant of the new paradigm. If you believe that, the valuation is not so crazy.

Now, I think their valuation probably treats that possibility as a certainty rather than a possibility. But that kind of thing is part of why market valuations in technology behave the way they do: people are sure that what's coming in technology is the future, and about that, they're basically right. Thank you.

Question:

Do you think AI will fundamentally challenge the importance of relationships and communication, especially in industries where relationships are central? For example, in K-12 schools, we say the teacher-student relationship is one of the most important factors in student growth and performance. Do you think AI will eventually challenge or modify that in some way?

Answer:

I think it will transform it, because AI will join the education system like a tutor with infinite patience. Today a teacher says: look, I'm responsible for X students and I have limited time; if a student doesn't get something, I have limited time to debug it with them. Now you have something that can actually help in that situation. I don't think it replaces the relationship. As a comparison: AI beats humans at chess now, full stop. But more of us than ever watch humans play chess against each other. We are people-oriented; we're tribal, social animals.

There will probably be some people who say: nobody understands me, this AI is my only friend. We'll see some strange patterns like that. I think it's healthier to build AI, like Pi, that says: hey, let me help you connect with your friends. And broadly, I think people will move in that direction naturally, because we like human connection in all its varieties. So I think AI will be transformative but augmenting, whether in education, healthcare, all of those things. It will help, and it will bring some shifts.

Question:

Okay, thank you very much for coming. Thank you for participating, and thank you all for coming.
