"AI Godmother" Li Feifei: Think of the big model as a beautiful calculator Don't take it as a threat

"AI Godmother" Li Feifei: Think of the big model as a beautiful calculator Don't take it as a threat

"AI Godmother" Li Feifei: Think of the big model as a beautiful calculator Don't take it as a threat

Tencent Science and Technology News reported on January 7 that Dr. Li Feifei, the "godmother of AI," recently gave an interview to foreign media in which she discussed how to explain AI to the public and how to regulate it better.

Only a handful of researchers laid the foundation for the AI revolution, and Li Feifei is one of them. She moved from China to the United States at age 15 and worked odd jobs to earn extra money while studying. Today, Dr. Li is a professor of computer science at Stanford University and director of the university's Human-Centered AI Institute.

In the interview, Li Feifei said that AI should be neither feared nor treated with reverence; it should be regarded as a tool, one that can serve human interests. She believes we must always keep human agency in view.

The following are the interviewer's questions and Li Feifei's answers:

Q: As technological products multiply, more people want to know: without their help, what could the human mind still produce, and what value would human beings bring?

A: That has to start with human agency. Humans are very complex creatures. We are not defined by one side of our brain, or by how we compute over big data. We are not defined by how much we can memorize, or by the algorithms of our own neural networks.

What defines us as human beings is our will, our emotions, our agency, our relationships with ourselves and with each other. As an expert in the field of AI, I have been emphasizing lately that we need to have confidence and self-respect, because we are not mere computing tools.

Q: Why are these human abilities important?

A: It is very important to have the right "view of tools." Tech products are very powerful, but I want to emphasize that they are tools. Some people may find this idea a bit nerdy, but in my research I have seen survey data on how Americans spend their time. Americans divide their time among work, play, leisure, and housework, which together cover tens of thousands of different activities.

I don't want to disparage tech products, but what technology can do is very limited compared to what humans can do. I think it is very important for us, as human beings, to understand how we relate to the tools we create. This is a major issue human civilization has always faced. Sometimes we handle the relationship well, and sometimes we don't. We need to straighten out this relationship and use our agency to decide how it should develop.

Q: Some companies worry that keeping AI within "guardrails" will slow the pace of innovation. How do you balance the speed of innovation with safety?

A: That is the trillion-dollar question. It is important to find a solution, and it will be an ongoing, iterative process.

I don't think the answer will be simple. Frankly, if anyone claims they can state the solution in a sentence or two, I don't think that person is facing reality, because innovation requires both speed and safety.

Innovation will bring new discoveries, job opportunities, higher productivity, better health, better educational resources, and a better environment. These benefits are predictable.

But at the same time, we also need "guardrails" to protect human life and human dignity, especially for those who are disadvantaged. This concerns the values we care about as a species. As a technologist and an educator, I get worried whenever I hear anything that goes to extremes.

Q: How do you design good "guardrails"?

A: This is what I have been working on for the past five years. The question led me to establish the Human-Centered AI Institute at Stanford. The name means that the well-being of individuals and society should be at the center of the design, development, and deployment of AI.

Designing and building good guardrails is a complex affair. I think we need a framework that balances technology and guardrails, that builds ethics into the design of technology, and that takes a stakeholder-focused approach, considering the impact of technology on individuals, communities, and society.

Q: Where is the biggest gap between the public's understanding of AI and that of experts?

A: Frankly, the gap is huge.

AI technology is so new that it is normal for the public not to know much about it. It took a long time for the public to understand electricity, too. We have to give everyone some time before the public can embrace the science of AI.

At present, the public is being misled and its attention diverted. That is not because anyone is deliberately misleading them, but because of a lack of communication and public science education.

There is also a big gap in who gets heard. There are many great researchers, entrepreneurs, technologists, educators, and policymakers focused on creating a better future, such as using AI in medicine or agriculture, but we don't hear from them. Instead, a small group that wants to "take it all" has grabbed the megaphone, which is not good for the public.

Q: What's the biggest misconception you'd like to clarify?

A: For example, what a large language model can actually do for you is a question that needs much wider explanation.

Some people jump straight from "large language models exist" to "all human agency is gone, and no one needs to learn English anymore." One might imagine that when a company uses a large language model, an employee simply turns it on, leaves the room, lets it run on its own, and the product is finished automatically. I don't think that is possible.

In reality, large language models are like a beautiful calculator. The analogy may be a bit awkward, but the point is that they are tools that can be used to improve productivity.

But we also need honest discussions about questions such as: What is AI's impact on jobs and wages? How can we use AI responsibly? Those discussions haven't really happened. If you stopped a random American on the street and asked, "Where have you seen anything about AI? What did you learn about it? What is your impression of it?" they probably wouldn't have much to say.

Q: What does a good AI education look like?

A: I don't know how Tylenol cold medicine works, but I trust it and take it. So education operates at different levels. AI technology is so new that many people think they can't understand AI unless they understand the mathematics. But that's not the case.

We don't need to know biochemistry to have a common-sense understanding of Tylenol. Public education tells you which medications treat which symptoms, what drug regulators do, and how patients can work with pharmacists and doctors to take Tylenol responsibly.

This kind of public education gives people more agency and a grasp of the basics. Then, if you are really interested in biochemistry, you can go and study the molecular effects of Tylenol yourself.

At present, AI education is lacking. Although a lot of technical information is publicly available, it is not very easy to understand. In a way, that is why I wrote the book "The Worlds I See": I wanted to talk about AI in a way that is easy to understand. We need to educate the public and communicate with them, so they can understand how people look at AI from other perspectives, such as economics or legislation.

Q: What are the consequences if education falls short?

A: What worries me most is that people will lose their sense of agency. Many people are scared and fear that humanity will eventually be ruled by machines. That is an exaggeration, isn't it? But if public education is inadequate, it could become a self-fulfilling prophecy.

We all have agency. Policy development is really important. Policy could ban AI entirely, or it could let the situation get out of hand, and losing control would be worse than a ban. For the sake of human dignity, we need proper guardrails, so that we can use AI to improve well-being.

Q: Every time I tell people at a party that I make a living as a writer, they say ChatGPT is going to take my job. How should I respond?

A: You are underestimating your own agency!

Try an experiment: pick a topic, ask ChatGPT to write it, then mark in red the parts you have to rewrite and show it to everyone: look, 95% of it had to be revised.

My point is, maybe there are some things you let ChatGPT do because it genuinely does them well, but if you look closely, you will see there is still a lot of room for your own contribution. Use AI to improve productivity and stimulate your own initiative; don't treat it as a threat. (Compiled by Yunkai)