
Is ChatGPT the biggest revolution in AI history?

Author: Insight Express

This article is reproduced from the author, Feng Yuqing.


What kinds of jobs will AI replace? Will AI create more disinformation and more commercial monopolies, and will people be more surveilled and controlled as a result?

A panel discussion on artificial intelligence was held in New York on June 20.

The forum explored why ChatGPT's disruptive potential has drawn such intense attention from investors, and how the development of AI may affect various industries. What kinds of jobs will AI replace? Will AI create more disinformation and more commercial monopolies, and will people be more surveilled and controlled as a result? Could it even threaten human survival? And how will competition between China and the United States in artificial intelligence unfold?

Moderator: Feng Yuqing

Guests: Kathleen R. McKeown: Founding Director of Columbia University's Institute for Data Science and Engineering, professor, and natural language processing expert

Francesca Rossi: IBM researcher and IBM's global leader in AI ethics

David Chen: Founder & Director of AngelVest

Feng Yuqing: First, I'd like to mention a book. A Brief History of Artificial Intelligence was written by Professor Michael Wooldridge, head of the Department of Computer Science at the University of Oxford, and published in January 2021. The book says that, for the foreseeable future, AI will not be able to comfortably answer questions, translate at a human level, interpret what is happening in a photograph, or write interesting stories. Just two years later, ChatGPT did it. Has ChatGPT overturned that prediction and made a real technological breakthrough?

Kathleen R. McKeown: I do think it's a milestone; ChatGPT is far more capable than earlier systems. ChatGPT is a large language model, and I think large language models have played a disruptive role. New capabilities are emerging rapidly, even changing the way we conduct research and develop new models. Over the past six months, we have tested the ability of large language models to write summaries of news articles, a topic I've worked on for years.

The results were very interesting. We asked people to compare summaries written by various large language models with summaries written by freelance writers, and we found that GPT-3 (the predecessor of ChatGPT) produced summaries comparable to the human-written ones. This suggests that single-document news summarization is essentially a solved problem. We also found in this work that the instructions given to the model are critical, not just the model's size.
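To make the zero-shot setup concrete, here is a minimal sketch of prompting an instruction-tuned model to summarize a single news article. The model name, prompt wording, and article are illustrative choices for the sketch, not the ones used in McKeown's study:

```python
from transformers import pipeline

# Any small instruction-tuned model serves for the sketch; flan-t5-small
# is chosen only because it is tiny and freely downloadable.
summarizer = pipeline("text2text-generation", model="google/flan-t5-small")

article = (
    "The city council voted on Tuesday to approve a new budget that "
    "increases funding for public transit by 12 percent next year, "
    "following months of debate over rising fares and service cuts."
)

# Zero-shot: the instruction itself is the only task specification;
# no summarization-specific training examples are supplied.
result = summarizer(
    f"Summarize the following article in one sentence: {article}",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```

The sketch mirrors the finding above: a model tuned to follow instructions can summarize news with no task-specific training, whereas raw model size alone does not guarantee that ability.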

Feng Yuqing: Do you think the emergence of ChatGPT was unforeseen in the AI field?

Francesca Rossi: The underlying neural-network architectures already existed, but there was no way for the general public to use them. ChatGPT changed that. It is a game-changer not only in capability compared with twelve years ago, or even seven months ago, but also in how many more people, not just experts, can actually take advantage of the technology. Many businesses are now waking up to this new capability.

That's why companies want to offer their customers the possibilities of AI, so customers can do things they couldn't do before. But some customers, such as IBM's, are asking: what do I do if the system produces incorrect information? What about fairness? What about copyright and data privacy? Companies that want to use this technology have many questions. They ask: what have you done about these issues? If something goes wrong, who is responsible? They want to know.

Feng Yuqing: Dr. Rossi, I know IBM is a true pioneer in AI, especially at the Watson lab. In 1997, IBM's Deep Blue defeated the world chess champion, and in 2011, IBM's Watson won Jeopardy!. How do you compare ChatGPT's achievements with that earlier AI?

Francesca Rossi: The chess victory was achieved mainly through computing power. AI at the time was more limited in capability; some AI techniques were involved, but mostly the computer could simply calculate ahead far better than the best chess players. Watson's Jeopardy! win was different because it already used more advanced AI techniques, such as machine learning: it could understand natural-language questions and work out the right answer. That was the second wave of AI. Now we are in the third wave: AI can not only do all of this, it can also generate content. That's why it's a milestone. Until now, AI has interpreted images or text, made predictions or classifications, or made decisions, but it has not generated content. That is why ChatGPT is a real game changer.

Feng Yuqing: What do you think of the current craze for AI investment? What impact will it have on innovation across the industry? Will there be a bubble like the dot-com era in the early 2000s? David, you have a lot of investment experience in AI, what do you think?

David Chen: What sets the AI of the last six months apart from previous generations is that it has affected the market in ways we can't ignore. In the first five months of 2023, investment in generative AI exceeded four times the total for all of 2022; the AI gold rush has arrived.

This reminds me of the internet-company days more than 20 years ago: anything with ".com" attached got funding immediately. That is the fear-of-missing-out mindset among investors, and it is the wave of generative-AI investment we are seeing now. This period will pass. We may be in an AI investment bubble right now, and the question is when it will burst, just like the dot-com bubble before it. But the truth is that even though the dot-com bubble came and went, the internet still changed how society runs and how people live. So we have an AI craze today; not every company will succeed, and some will fail; that is the natural law of investing. But in general, we will still be using this technology in many aspects of our lives.

Feng Yuqing: ChatGPT has the potential to bring fundamental changes to fields such as education, finance, and healthcare. According to Goldman Sachs, AI could automate up to 18% of work, with significant implications for high-income countries. In the United States, for example, 15%-35% of jobs could be replaced. Office, administrative, legal, and engineering jobs are the easiest to replace; construction, installation and maintenance, and hands-on jobs such as nursing and counseling are the least likely to be. The most affected are well-educated white-collar workers, so the danger is downward mobility for the middle class. IBM recently announced that about 7,800 of its jobs could be replaced by artificial intelligence and automation. What does this mean for young professionals hoping to thrive in their fields? Do you have any advice for them?

Francesca Rossi: First of all, it is easier to see the jobs that automation will replace than the jobs it will create. But every time a disruptive technology like this appears, many entirely new jobs emerge, and all existing work changes. In every job, even one that is not automated, people will need at least some help to retrain or adapt so they can use the technology correctly in their work. So people will do the same job, but in less time, thanks to partial automation. That freed-up time is an opportunity for companies to take on new tasks and build new capabilities they couldn't before.

Feng Yuqing: Many students now want to study computer science and finance, because these fields pay well and are respected. But if, in the future, AI can replace entry-level computing, finance, legal, and accounting jobs, which professions will be most in demand?

Kathleen R. McKeown: I would advise 18-year-olds to learn about existing technologies and what they can do, even those interested in the humanities, history, literature, and art. I think there are many opportunities for collaboration between the humanities and computer science. It's important for students to understand what's possible and know how to use it; before, you needed to be a computer science major to work in these areas. Now the tools are becoming widespread and people can do much more. There's a lot of art in the data, and a lot of data in the art.

David Chen: Jobs do disappear because of technology. The question is whether everyone can adapt to the new technologies, and history offers many examples. After the elevator was invented, I could ride from the second floor to the forty-fifth; who still has a job operating the elevator, or connecting your phone calls at a switchboard? In the 1940s those jobs existed; today they are gone. The question is whether we as humans can elevate ourselves to more meaningful work. I always give this example: think of a cashier in a store, or someone cleaning the bathroom. When you were ten years old, did you tell your parents, "I want to be the best bathroom cleaner in the world"? If your child said that, how would you respond? The reality is that some people do this kind of work because we need it. But as technology advances, most people can move up to better jobs. That means we as humans need to retrain ourselves. That's my answer.

Feng Yuqing: AI's revolutionary changes across industries and its impact on the future of work are enormous. Another major concern is that AI makes false information easier to spread; as it becomes harder and harder for people to tell true from false, this will have a very negative effect on how society functions.

Francesca Rossi: I think we have to be careful, because until about two years ago, people's trust in AI was the problem; people wondered, should I trust AI? Now it's somewhat the opposite: people trust the technology too much and tend to over-attribute abilities to it. Why? Because the machine seems to behave like a human. Since these machines can write text the way humans do, we tend to assume they have the other abilities humans have.

David Chen: I think it's a combination of trust and distrust. People trust it because it's so easy to use: ChatGPT's interface runs in a browser, you type in a question and get an answer, and the answer sounds believable. But it may in fact be wrong. Earlier solutions, like IBM's, were not something everyone could use; ChatGPT has made the technology accessible to a general public that knows nothing about AI, and the results feel very powerful.

Feng Yuqing: But it also brings a lot of trouble. As you just said, ChatGPT produces false information. For example, Steven Schwartz, a lawyer in New York, recently apologized to a judge for using ChatGPT to research and draft a brief he submitted: ChatGPT generated cases that don't exist. Schwartz admitted he had no idea that ChatGPT would fabricate cases and rulings. We humans know what we don't know; does ChatGPT know what it doesn't know?

Kathleen R. McKeown: In many cases, it doesn't. ChatGPT is produced by one company, and we can't see what's going on behind the scenes. We don't know the modeling approach it uses or what data was used; that is confidential. So to determine what it can do, we really need to spend time characterizing it.

Francesca Rossi: On the issue of ChatGPT generating falsehoods: these systems are built in a way that cannot distinguish what is true from what is false, and they do not consciously lie. They are simply trained to generate the most likely next word. They use a lot of data and a lot of computing power and have a good model of natural language, but they have no real ability to recognize facts, to know what actually happened. That's why, after a model is built, many techniques are applied to keep it from generating false information. And even content that is factually accurate can be a problem in this context, because generated content can be harmful: it can be offensive, it can be racist, it may not align with human values.
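To make "trained to generate the most likely next word" concrete, here is a minimal, self-contained sketch of a single next-token step. The four-word vocabulary and hand-set scores are invented for illustration and come from no real model:

```python
import math
import random

# A real model assigns a score (logit) to every token in its vocabulary
# given the text so far; these numbers are made up for the example.
vocab = ["Paris", "London", "banana", "the"]
logits = [4.0, 2.5, 0.1, 1.0]  # hypothetical scores after "The capital of France is"

# Softmax turns the scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The model samples (or greedily picks) the next token, appends it,
# and repeats; nothing anywhere checks whether the output is true.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_token)
```

Nothing in this loop verifies that "Paris" is a fact; the model only knows which continuation is statistically likely, which is exactly the limitation Rossi describes.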

Kathleen R. McKeown: I think Francesca Rossi makes a good point. I've mentioned in other settings that these models have no intent. They can write poems or short stories, for example, but we generally think of a poem as intentionally conveying meaning or the author's emotions. That's not the case for ChatGPT: there is no intention behind what it writes, no meaning it is trying to convey.

Feng Yuqing: The spread of false information can be a very serious problem. In 2018, a fake Obama video went viral. As AI technology develops, deepfakes have become more widespread, and the "seeing is believing" assumption is being challenged. Such distorted information can significantly influence public opinion and undermine the foundations of a functioning democratic society. Can government regulation effectively address this issue? Given that distortion of authenticity on social media has become the norm, how should we respond to and manage AI?

Kathleen R. McKeown: This is a huge research topic right now. Many groups are working on how to detect whether text is AI-generated false information, but it is a difficult problem; people outside this work may not realize how hard it is. So it will take time. I'm not a policy person, so it's hard for me to say what policies and regulations could solve this. But researchers are collaborating, and perhaps technology companies could collaborate too, on solutions such as watermarking that would make AI-generated fake images identifiable.
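As a toy illustration of the watermarking idea (not how any production system works), here is a sketch that hides, and later detects, a short bit pattern in an image's least-significant bits; the pattern and the random "image" are invented for the example:

```python
import numpy as np

PATTERN = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # arbitrary 8-bit mark

def embed(img: np.ndarray) -> np.ndarray:
    """Write PATTERN into the least-significant bits of the first 8 pixels."""
    out = img.copy()
    flat = out.reshape(-1)
    flat[:8] = (flat[:8] & 0xFE) | PATTERN  # clear each LSB, then set it to the mark
    return out

def detect(img: np.ndarray) -> bool:
    """Report whether the first 8 pixels carry PATTERN in their LSBs."""
    flat = img.reshape(-1)
    return bool(np.array_equal(flat[:8] & 1, PATTERN))

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)  # stand-in grayscale image
marked = embed(image)
print(detect(image), detect(marked))  # almost certainly False, then True
```

Real schemes must survive compression, cropping, and re-encoding, and watermarking generated text is harder still, which is part of why McKeown calls this a difficult problem.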

Feng Yuqing: David, you understand AI investment well, and the current boom in the industry: all the big tech giants are pouring into the space. Do you think it's possible for them to collaborate? From what we've read, the big tech companies all say they want the industry regulated and want to make sure AI is used correctly. What are your thoughts?

David Chen: Recent congressional hearings have focused on this issue, and I think proposals to regulate the industry may be the right direction. The question is whether regulation can actually work, whether regulators can keep up with the technology. Technology will always outpace regulators, right? So the question is how to prevent bad actors, not bad technology, because whenever a new technology appears, there will always be people with ulterior or malicious motives who abuse it. People ask, "Can we trust AI?" I would ask: can we trust the people who spread fake news? With all the fake news out there, is it machines doing it, or people with intent? I think it's the latter, the people with intent. So I think we need more regulation and oversight of both the technology, which may be flawed in itself, and the people who use it for malicious purposes.

Feng Yuqing: Another challenge is that it's hard to reach consensus on which areas of AI should be developed and which should be restricted, and even if some countries agree, you can't stop the others. The world's two largest economies are competing in the AI industry, each hoping to dominate it. OpenAI's CEO recently called for cooperation between China and the United States, but with China-US relations so strained, do you think cooperation between the AI industries of different countries is possible?

David Chen: My simple view is that there is plenty of room for collaboration in academia, but in the business and political worlds, the answer is probably no, because this technology has become a new weapon of our generation. Looking back at history, would you ever see the United States cooperating with Russia or China on its latest missile technology? Clearly not. So when AI becomes a technology that can be weaponized, will you see opposing governments collaborating on it? The answer is no.

Feng Yuqing: What's next? What new advances will AI make in the next 10 or 20 years? Will artificial intelligence threaten human survival in the future?

Kathleen R. McKeown: I hope AI will interact more with humans, as a tool that helps people achieve their goals. I think humans also need to be more involved in building these systems. My hope for the future is to see more human involvement at every stage of AI.

Francesca Rossi: I haven't signed any of the letters claiming AI poses existential risks or calling for a six-month moratorium. I don't think AI threatens human survival. I see many problems in current systems, and to me, overemphasizing possible future risks only distracts us from the problems that need solving right now.

If we solve today's problems and move forward on a more solid foundation, we will be in the best position to avoid any challenge to human survival that may arise. Many people think this existential risk is so devastating that it matters more than any problem facing us now. I take the opposite view: the best way to avoid future risks is to solve the current problems well in the short term.

David Chen: Whether it's blockchain or artificial intelligence, I think there's huge potential. It would be wonderful if we could figure out how this technology can improve the human experience and make life better. That's my vision. I hope it's achievable, and I think it can be.
