The prophecy on the eve of the AI awakening: ChatGPT may be a "bad idea"

Author: Titanium Media APP

"Reducing the risk of AI extinction" should be a global priority, along with other society-scale risks such as pandemics and nuclear war.

Turing Award winners Yoshua Bengio and Geoffrey Hinton are warning that AI could drive humanity to extinction.

On May 31, an open letter with up to 350 signatories quickly went viral, with only one sentence at its core:

"Reducing the risk of AI extinction" should be a global priority, along with other society-scale risks such as pandemics and nuclear war.

Signatories include the CEOs of the three AI giants: OpenAI's Sam Altman, DeepMind's Demis Hassabis, and Anthropic's Dario Amodei.

Earlier, on March 29, the Future of Life Institute (FLI) had issued a similar warning in its open letter "Pause Giant AI Experiments":

It called on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months.

The signatories that time included Tesla founder Elon Musk, 2018 Turing Award winner Yoshua Bengio, Apple co-founder Steve Wozniak, and Stuart Russell, author of "Artificial Intelligence: A Modern Approach".

And all of this actually began in 2014, when the Future of Life Institute fired the first shot.

Stuart Russell, Stephen Hawking, and Max Tegmark published the article "Transcending Complacency on Superintelligent Machines" in the Huffington Post, expressing the authors' concerns about the potential risks of AI's rapid development.

The article triggered a boom in media coverage of AI safety, and technology leaders such as Elon Musk and Bill Gates joined the discussion.

It is not hard to notice that one name keeps appearing throughout this history of artificial intelligence. Who is he?

Stuart Russell was among the first to publish a signed article with Hawking and others calling for vigilance about the threat artificial intelligence may pose to us. He also co-founded the board of the Centre for the Study of Existential Risk at the University of Cambridge in the United Kingdom, where he was the only AI scientist to serve on the board.

He is also only the second scientist in history to receive both major IJCAI awards: the Award for Research Excellence and the Computers and Thought Award. (The International Joint Conference on Artificial Intelligence, IJCAI, is one of the leading academic conferences in artificial intelligence.)

As one of the world's leading experts in artificial intelligence, he co-authored, with Peter Norvig, director of research at Google, the "standard textbook" of the field, "Artificial Intelligence: A Modern Approach", which is used by more than 1,500 universities in more than 100 countries.

The book has effectively provided a standard model for AI education worldwide, helped AI knowledge spread rapidly, and made significant contributions to AI education. Earlier this year, the Chinese edition of its fourth edition also reached Chinese readers.

There is growing concern that the development of AI models such as ChatGPT could threaten society and employment, and even cause irreparable damage.

At this point, we can't help but ask: on the eve of the AI awakening, is ChatGPT really the optimal solution?

Recently, Li Weiwei, partner of Titanium Media Group and president of ChainDD, a well-known blockchain media and data platform, held a wide-ranging dialogue with Stuart Russell, author of "Artificial Intelligence: A Modern Approach", on the current development and future trends of artificial intelligence.

People expect artificial intelligence to free us from tedious labor, yet fear it will "replace us". How should we control it in the future?

The following is a condensed transcript, edited from the June 7 live broadcast of the column "Titanium Time | Reading Moment".

01. If Siri's destiny was "artificial stupidity",

then is ChatGPT really "artificial intelligence"?

Li Weiwei said that in China, many people's first memories of AI products come from intelligent voice assistants such as Microsoft Cortana, Amazon Alexa, and Apple Siri, and ChatGPT seems to have pulled ahead of these assistants in "intelligence". He asked Dr. Russell: how do you view the disruptive revolution ChatGPT has brought to this field? And were these "ancient" interactive AI products a "failure"?

In response, Dr. Russell said he does not see products like Siri as a "failure"; he believes technology evolves over time. He gave an example: "If we buy a new car, the new car tends to be a little better than the old one, but we don't call the old car a failure, because the technology in this area is continuous."

In his view, intelligent voice assistants like Alexa and Siri are earlier technologies, based largely on pre-computed questions and answers, what we used to call "chatbots". The essence of these systems is a list of question templates: when people ask a matching question, the system produces a standard answer.
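The template-based design Dr. Russell describes can be sketched in a few lines. This is a hypothetical illustration only, not any vendor's actual code; the patterns and canned answers are invented for the example:

```python
# Toy sketch of a template-based assistant: a fixed list of question
# patterns, each mapped to a pre-written standard answer.
import re

TEMPLATES = [
    (re.compile(r"\bweather\b", re.IGNORECASE), "Today looks sunny with a high of 25 degrees."),
    (re.compile(r"\btime\b", re.IGNORECASE), "It is currently 10:30 AM."),
    (re.compile(r"\b(hi|hello)\b", re.IGNORECASE), "Hello! How can I help you?"),
]

def respond(question: str) -> str:
    """Return the canned answer for the first matching template."""
    for pattern, answer in TEMPLATES:
        if pattern.search(question):
            return answer
    # Anything outside the template list gets a generic fallback.
    return "Sorry, I don't understand that question."

print(respond("What's the weather like?"))  # matches the 'weather' template
print(respond("Explain quantum gravity"))   # no template matches -> fallback
```

The limitation Russell points to is visible immediately: any question outside the template list falls through to the fallback, which is exactly why such systems feel "canned".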

Dr. Russell believes ChatGPT is a new technology completely different from products like Siri: what everyone calls a "language model", which is not based on a fixed set of answers to questions. It is a general-purpose tool that predicts the next word from the sequence of preceding words; the input could be a conversation or a newspaper article, which ChatGPT then continues. These functions are impossible for Alexa, because Alexa is a tool designed only to answer very basic questions.

On the view that "ChatGPT is not really artificial intelligence", and what real artificial intelligence might look like in the next five years, Dr. Russell said that GPT-2 clearly did not represent much internal progress: it could produce seemingly believable text, but it did not perform particularly well at answering questions and the like.

Dr. Russell said the interesting thing about the new generation of large language models is that even five years ago, language models were a small branch of AI research, used mainly to help improve the accuracy of speech recognition.

For example, if I say "Happy B******", you might think I'm trying to say "Happy Birthday", because "Birthday" is a common word that starts with B and follows "Happy". I'm sure you wouldn't think of "Happy Breakfast" or anything like that.

So at the time, intelligent voice assistants had roughly that level of capability: you could use a language model to improve the output of machine translation, making the output more grammatical, but beyond that, few people paid much attention to language models.

Later, people found they could expand the number of words used as context. Previously the model judged by a single word of context: if I say "Happy", you might predict the next word is "Birthday" from that one word alone. Then you could use 5 to 10 words of context, which helps even more, and later expand the context to a few hundred words.

Dr. Russell noted that GPT-4 now handles a context of some 32,000 words; making predictions from that much context requires a larger model with more parameters. And it was found that as the context grew, the accuracy of the output improved. If you ask ChatGPT a question, the natural continuation of that question is simply the answer to it.
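The "Happy → Birthday" idea above can be sketched as a tiny count-based next-word predictor. This is a toy illustration under invented data; real models like GPT-4 use learned parameters over tens of thousands of words of context, not raw counts:

```python
# Minimal next-word prediction from context: count which word most often
# follows a given n-word context in a (toy) training corpus.
from collections import Counter, defaultdict

corpus = "happy birthday to you happy birthday dear friend happy new year".split()

def train(tokens, n=1):
    """Map each n-word context to a Counter of the words that follow it."""
    model = defaultdict(Counter)
    for i in range(len(tokens) - n):
        context = tuple(tokens[i:i + n])
        model[context][tokens[i + n]] += 1
    return model

def predict(model, context):
    """Return the most frequent continuation of the given context."""
    counts = model[tuple(context)]
    return counts.most_common(1)[0][0] if counts else None

model = train(corpus, n=1)
print(predict(model, ["happy"]))  # 'birthday' follows 'happy' most often here
```

Enlarging `n` is exactly the "expand the context" step Russell describes: with more context words the predictions get sharper, but the table of contexts (and, in a neural model, the parameter count) grows accordingly.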

So ChatGPT is more like "responding" to these familiar questions than truly "answering" them. This is only a superficial "answer", and the biggest puzzle most humans have about this technology is that we don't actually know what is going on behind that "answer".

At the same time, Dr. Russell agrees that ChatGPT keeps surprising people with its ability to do genuinely impressive things, such as writing poetry or proving a mathematical theorem in the form of a poem. But people don't know how it does these things, and that is what is surprising.

But this is also what is worrying. ChatGPT is actually less intelligent than humans in many ways, yet more capable than humans in many others. For example, OpenAI's test results show that GPT-4 can pass almost all college entrance exams with high scores, as well as the bar exam and the medical licensing exam.

But ChatGPT never went to school, and it passed all these exams without the help of hand-written human code. So Dr. Russell believes that even if ChatGPT does not fully reach the level of human intelligence, in terms of its impact and role we can roughly regard it as comparable to human intelligence.

Its emergence is equivalent to adding billions of "new humans" to the world, and people keep bringing these "highly intelligent humans" into their lives. This will have a huge impact on everyone's world, but no one knows what that impact will be.

02. How did China, on the other side of the world from Silicon Valley,

"miss" ChatGPT?

Li noted that in China, people have been pondering the question: why was ChatGPT born in Silicon Valley's innovation soil at this point in time? And how far is China from ChatGPT?

On this question, Dr. Russell believes ChatGPT was born in Silicon Valley for specific technical reasons; in his analysis, Silicon Valley has more well-trained talent.

First, Berkeley, Stanford, and many other top institutions attract the best students in the world.

Second, Silicon Valley pays very well for top talent. He added that, according to reliable sources, a Silicon Valley AI researcher can earn $10 million a year, which is indeed a shocking figure.

Moreover, Silicon Valley companies have a large computing power base and a willingness to take risks.

As Sam Altman, founder of OpenAI, put it: "When you're talking about creating artificial general intelligence, there are enough people willing to invest a lot of money in a long-term, long-odds bet. The odds of success may be small at first, but sometimes they do succeed, and all these things come together in Silicon Valley in a way you can't find anywhere else in the world."

Dr. Russell says he sees this mentality in many universities and startups: for any interesting idea, you should be free to pursue it, not seek approval from your boss first.

Dr. Russell said he once chaired the Berkeley committee that evaluates appointments and promotions for all faculty at the university. The way Berkeley's reward system works, it genuinely wants to reward people who are willing to take risks and explore, even those who may work for 5 or 10 years without solving a problem.

They may fail; they may not publish much in five years. But if they eventually make a big contribution, they will still be considered successful. So the committee wants more people to meet this standard: "We don't force them to publish 5 or 10 papers a year or get themselves into trouble. We want everyone to choose their own research direction. People naturally respond positively to this academic freedom and do the right thing based on their own understanding."

03. Unsafe artificial intelligence

has no value at all

Given the uncertainty of the current global political and economic environment, and from the perspective of cross-border data compliance and data security, Li Weiwei asked: how should artificial intelligence be regulated and guided by reasonable laws and regulations in the future?

Dr. Russell said that many draft regulations on large language models are very strict: they require large models not to output false information. But the fact is that one of the big problems with large language models is precisely that they do output false information, because their job is not to tell the truth; they are trained to speak like a human.

Dr. Russell gave an example: "I can say that I am 49 years old. That sounds like something a normal person would say, but it is in fact false, because I am 61, not 49. What I mean is that 'speaking like a human' is what ChatGPT cares about; it doesn't care whether the sentence is true."

So Dr. Russell believes some changes to the rules are necessary; otherwise the rules as written would effectively ban the public use of large language models. At the same time, he believes one of the big problems with large language models is whether they can comply well with any reasonable rule at all.

He explained that in most countries it is illegal to give medical advice unless you are a licensed medical professional. As a result, OpenAI says ChatGPT and GPT-4 must not be used to provide medical advice. But OpenAI cannot enforce this, because they do not understand how the model works internally, so there is no way to guarantee it. What the OpenAI team can do is, every time ChatGPT gives medical advice, tell it: "You did something wrong."

Sometimes ChatGPT improves as a result, but most of the time it doesn't understand why it "did something wrong" and makes no change. Still, it is proudly noted that ChatGPT now misbehaves 29% less than before; of course, that still leaves a lot of misbehavior. It will still give you illogical or illegal advice, so many people are now wondering what a reasonable standard for ChatGPT would be.

Dr. Russell personally believes reasonable rules include not providing medical advice and not describing how to make chemical weapons. At present, large language models cannot pass these tests. Some people think the mistakes arise because the relevant laws and regulations are flawed, but he still believes the large language models are the ones in the wrong, and people must fix them so that they comply with those reasonable laws and regulations.

In his view, governments are responding actively, especially the Chinese government. U.S. Senator Charles Schumer has also proposed plans to introduce regulations, though it is not yet known what they will contain, and the European Union has drafted an Artificial Intelligence Act that will impose very strict rules on how these AI systems may be used.

He believes AI will be of great value in the future: it can enhance the capabilities of all kinds of people, serve as a personal assistant, and help people process massive amounts of information.

So AI can do all sorts of useful things, but until people can trust these systems to work correctly, stay true to the facts, tell the truth, and not violate human policies, he thinks they are too dangerous to use.

Dr. Russell said he could not safely use it to handle his email, because he could not afford the risk himself.

So Dr. Russell concluded that it may be many years before humans understand how to make AI that is both capable and safe, and that unsafe AI has no value.

04. Humans can still avoid the fate of being replaced by ChatGPT

Li Weiwei suggested that industries such as medical consulting are easily replaced by ChatGPT-like technology, and asked: facing such a strong sense of being replaced, will this negatively affect the pace of the artificial intelligence industry's development?

Dr. Russell believes ordinary people actually understand this very well: they know that although they have a certain level of education and working ability, these AI systems, at least on the surface, are indeed superior to them in many respects, which naturally makes people anxious.

If you think about your children, you worry about what kind of employment they will have in 10 or 20 years.

Economists would say that while technology destroys old jobs, it is also constantly creating new ones, and that new jobs will be better. But what if AI could also do new jobs?

So he thinks governments must take this very seriously. He said he has worked with economists, science fiction writers, and futurist artists to try to envision a truly desirable future in which AI systems can do almost everything humans do today.

But unfortunately, they have so far failed; they really cannot describe this future. And without precise objectives, it is hard to develop a transition plan. Everyone knows the world will undergo a huge shift under AI's influence, but without a clear destination, it is difficult to plan a route toward it.

In the envisioned future, most human roles will center on human relationships: not dealing with machines to produce thousands of televisions or cars in factories, but working directly with individual people. And all of this requires a much better understanding of human psychology than we have now.

If there is no way to better explain and resolve the problems of human relationships, then even humanity's economic achievements so far cannot count as success. And the underlying knowledge should be based not on natural science in the traditional sense, but on the human sciences, because only then can people better and more precisely add value to life through interpersonal relationships.

For example, Dr. Russell noted that some well-known psychotherapists already do similar things, but only because they are gifted with intuition; in general, we do not understand well enough how to do these things. Gaining that understanding may take decades of research, along with a huge shift in research focus and funding.

At present, the main scientific fields humanity funds are the physics-based and biology-based sciences; there is no real funding for the human sciences.

So this will mean a huge change for the world, and a huge change for the education system: much of what children are taught now may have nothing to do with the human sciences, and many things that should be taught to children are not being taught.

And the education system happens to be the slowest to update of all the systems in the world, even slower than the health care system in terms of its ability to change.

05. The madness and bubble of ChatGPT and artificial intelligence

Li Weiwei observed that everyone is now madly praising and chasing artificial intelligence; with ChatGPT's rise, the heat wave in China's AI industry is also intensifying rapidly, and even kindergarteners can talk about AI. Titanium Media has calmly stated its own view: self-congratulation is satisfaction with the past, and technology is always about "what's next". So in the face of this madness, what kind of calm and rational voice does the industry need?

Dr. Russell said: "If you are an investor, this is a very important question. Creating truly general artificial intelligence, what we call AGI, is a multi-trillion-dollar investment area."

He said about $10 billion a month is invested in U.S. startups developing AGI. That may seem like a lot of money, but it is a drop in the ocean compared to the value AGI is expected to create.

But the question is whether the current direction of building ever-larger language models can reach true artificial general intelligence. Personally, he doesn't think so. And not only he: Sam Altman, OpenAI's current CEO and co-founder, made the same point in a speech.

Dr. Russell also gave his reason: large language models on their own lack the capacity to become more intelligent.

Therefore, he believes that to reach true artificial general intelligence, a lot of effort needs to be invested in two directions:

First, how to turn what has already been built, such as ChatGPT, into tools that can be used in real-world environments.

Russell gave an example: "Suppose I am an insurance company and I want to use a language model to communicate with my customers. Then the language model must tell customers the truth about the products, and it must recommend the right insurance product to each user according to the regulations; if it violates those regulations, the insurance company faces huge government fines. Because the insurance industry is strictly regulated, large language models cannot currently be used in these fields. So we have to work hard to domesticate this 'beast', to turn a wild dog into a well-behaved domestic dog. That is a general direction many AI companies are working on."

Second, if big language models are not the answer to creating AGI, then what is? They are part of the puzzle, but not the whole puzzle.

The difficulty is that people don't really understand which part of the puzzle they hold, or even the shape of the pieces or the pattern on them. So we do not know what is needed to create artificial general intelligence; these are more fundamental research questions.

06. The metaverse may be an opportunity for ordinary people to harness artificial intelligence

Li Weiwei took as an example the metaverse product with embedded AI features that his company built a few months ago: a human world dominated by silicon-based life forms was first realized in a virtual game, where NPCs became natives of the virtual world and human players became occasional visitors.

In this regard, Dr. Russell believes the possibilities of virtual reality as an immersive experience have not been fully exploited. He sees huge potential for this immersive experience to become an art form more powerful than books, movies, or music.

It can encompass all of those forms and more, because it responds to people in all sorts of interesting ways. Dr. Russell wants artists to take advantage of that and start exploring what can be done in this space, so that people don't think of virtual reality merely as a place to fight monsters and level up.

At the same time, he believes there are many interesting business models around virtual reality and the metaverse, such as AI-powered virtual-human NPCs used to do business and persuade people to buy things.

But Dr. Russell also says many of them are deceptive, pretending to be human and pretending to be your friend. They spend hours, days, or weeks getting to know you, talking about family and personal matters, and then might casually mention that they just bought a new car and warmly recommend it. This business model may be very effective, but it is based entirely on deception. Therefore, he believes that knowing whether one is interacting with a person or a machine is a basic right for everyone, and the European Artificial Intelligence Act will include relevant provisions on this as one of its basic rules.

So, Dr. Russell argues, it is important not to let machines impersonate humans. That would make deception-based business models harder to implement. If these virtual humans told the truth, they would tell everyone they are machines and state their purpose directly; if they are upfront that the purpose serves their own interests, it may be easier for everyone to accept.

Dr. Russell explained that this model can improve efficiency, much like salespeople in a showroom, except that it all happens in virtual reality. He thinks it is a very practical business model that could yield good results.

He concluded that there will be all kinds of opportunities for artificial intelligence in the metaverse. There is no doubt the demand exists, because human labor is very expensive, and AI can be used to replace it in the future.

07. The media's moral responsibility to the field of artificial intelligence

Li Weiwei said that, as a media platform, Titanium Media has been paying close attention to development trends and business changes in artificial intelligence, and asked Dr. Russell: how can media platforms better provide the market with more accurate and efficient information services in this field?

Dr. Russell believes media platforms such as Titanium Media should mainly explain the concept of artificial intelligence: what its principles are and what the different types are.

Dr. Russell noted that a big focus is ChatGPT; everyone has been talking about GPT, but there are many other types of AI systems. Self-driving cars, for example, have nothing in common with ChatGPT; they are completely different kinds of systems. There are also computer vision systems being developed for various applications, and specialized machine translation systems, which he believes are more efficient than large language models for translation.

So there are many other types of AI systems. Dr. Russell thinks it is important to explain all of these types, along with their capabilities and limitations, because that allows people to distinguish rumor from fact. One of the biggest misconceptions he repeatedly sees in the media is the idea that what people must worry about is machines becoming conscious. This misconception produces two camps: some worry that machines will become conscious, while others feel that since it is AI, it will never be conscious and there is nothing to worry about.

In fact, Dr. Russell feels both camps are wrong: what everyone needs to worry about is how AI behaves.

"If I play against a chess program and the program keeps eating my pieces, I lose," he explains.

It doesn't matter whether the program is conscious or unconscious; what matters is that it plays each move well and I can do nothing about it: I lose the game. It's the same in the real world. Whether on a chessboard, a Go board, or in the real world, what matters is what the system does and whether we can do enough to keep control of it. Even if the system is smarter than we are, we must try to avoid conflict.

Because if you create an AI that isn't well designed and pursues goals that don't align with the goals we care about, then people will face real problems.

08. AI needs "responsible innovation" the most

Li Weiwei asked: in a social context, some autonomous-driving accidents cause casualties. Is there a standard framework for judging the harm caused by AI decisions and for assigning liability?

Dr. Russell believes that, by the standards applied to self-driving cars, the manufacturer should be responsible, under what is called strict liability. This means it doesn't matter who is at fault: if a self-driving car hits and kills someone, the manufacturer is liable.

"There was a case in Belgium where a man apparently committed suicide after talking to a chatbot, and I believe his family is suing the company. We can wait and see the court's opinion on the matter," he said. "There are many examples of behavior that is often illegal, such as giving medical advice without a license. As far as I know, if someone is harmed by this, the producers of the language model will be held liable."

Dr. Russell noted that developers of today's large language models often sell their models to another company, which builds more specific applications on top of them. In the European Union, there is the Artificial Intelligence Act, a new law that should be completed by the end of this year.

During the drafting of this law, technology companies inserted a clause stating that general-purpose AI systems should not be considered AI systems for the purposes of the bill.

Apparently, the reason for this provision is that some people are trying to exclude their products from regulation altogether.

Dr. Russell added that some companies now argue that when they sell a generic model to another company, which builds a specific application on it, the model is licensed only for low-risk applications, and the developer bears no liability if it is used in any high-risk application; the downstream company would then be 100% responsible for any harm.

Dr. Russell said that in most countries, the legal system puts the onus on the party with the ability to solve the problem. If someone trains a large language model and then sells it to others for fine-tuning in a specific application, the second company cannot even access the raw training data. If the system keeps giving illegal medical advice or advising people to commit suicide, there is nothing the second company can do about it.

So in his view, the original producers of the large language models should bear at least part of the liability for any harm, because they are the ones in a position to fix the problem; the downstream company cannot fix the system, since it has neither the data nor the means to retrain it.


But he also said that in the coming years there will be many important legal cases exploring such incidents, along with related issues such as copyright.

Dr. Russell said that, as far as he knows, Reddit, a large online site, is suing OpenAI, claiming that OpenAI took all the conversations on Reddit's sites and that all of that information belongs to Reddit. The same question arises for image generation: are copyrighted images being taken from artists to produce new ones?

On such matters, he believes the right answers to these legal questions will be worked out over the next few years. The situation is somewhat out of control, and he personally believes that AI technology and its deployment have far outpaced regulation.

09. Who among BAT is the "number one player" in Chinese AI?

Li Weiwei raised the point that companies such as Alibaba, Tencent, and Baidu have made important attempts in artificial intelligence and launched significant products. How should we view these Chinese Internet giants' efforts in AI? What are their advantages and disadvantages compared with companies like Meta, Google, and Microsoft?

In this regard, Dr. Russell said that he has seen comparisons between Baidu's chatbot Wenxin (Ernie Bot) and GPT-4, and there does seem to be a large gap between the two systems. He wasn't sure why, but it might be that, in terms of high-quality text data, there is more high-quality English text available than Chinese.


He is himself a board member of a French company working in this area, and French faces the same problem: there is not enough French text in the world to train a model equivalent to GPT-4. One option, he said, is to translate all English text into French and then train a GPT-4-scale model on it, but this is inefficient and extremely expensive. On the whole, he believes large language models should not be over-emphasized as the measure of success or failure between companies; people keep competing on, say, 100 billion parameters versus 500 billion parameters, and this "competition" is not beneficial.

As for Chinese companies, he said, one argument Western commentators often make is that because China has a larger population, Chinese companies have more data, and data is what matters most; he personally thinks this is nonsense.


For example, he said: "If you already reach 100 million consumers, adding another 100 million won't make much difference. What matters is how much you know about each person, because that allows you to make high-quality recommendations that are actually helpful to that person."

So Dr. Russell thinks this is where companies like Tencent have an advantage: with so many apps on the market, such as WeChat and QQ, they can derive people's social networks from social apps and obtain the kinds of information Facebook and Amazon have, plus additional sources such as financial data on people's shopping behavior. When a platform has more data about its users, it can help each of them more, so people generally think a company like Tencent will be slightly ahead of the traditional Western Internet giants in this dimension.

10. Startups are still far from becoming the next OpenAI

Compared with large companies, from what angle are today's startups most likely to achieve commercial returns in the field of artificial intelligence?

Dr. Russell said that, in his opinion, it is currently difficult for startups to obtain enough resources to compete head-on with OpenAI. As far as he knows, only 5 or 6 startups have enough funding, each with around $200-500 million. They need that much money because they need enormous computing power, vast amounts of data, and many engineers.


Given that, Dr. Russell believes another option is to use an off-the-shelf system, which may be open source, or to license one such as ChatGPT. There is a market for all kinds of large language models, and licensing costs are not very high. The biggest opportunity is to find ways to make these systems work well, follow the rules, and stay true to the facts, and then apply them to important applications within a company.

Dr. Russell added that every day people come up with creative new ways to use these AI systems. They are not agents, and they are built nothing like the human brain, yet they still display intelligence similar to that of real people, and people will find thousands of ways to use these new tools creatively.

He suggests that startups study as many examples as possible of people using AI systems creatively; in the process, they may come up with innovative applications based on their own experience in a particular market.

11. Ethical laws that guide the future of artificial intelligence

Li noted that every era has its own set of rules for artificial intelligence; what kind of rules should be formulated to guide AI to the next stage?

Russell argues that what restrictions people should adjust, and what requirements they should impose, depends on how AI companies make their systems behave; it is not something governments, consumers, or academics can decide on their own.

In the future, we need to do the following:

1. Limit false information

The ability to output false information is a huge problem, and Dr. Russell believes it will be difficult to eliminate completely. First, rules are needed governing the use of these systems to spread disinformation, and impersonation should also be prohibited. There should be restrictions on types of output, such as unlicensed medical advice: a list of things the system must not do, backed by enforcement provisions with penalties and reporting mechanisms.


2. Quickly modify and update AI rules

The OECD maintains a database of AI incidents to which people can submit reports of bad behavior by AI systems, and regulations can be updated as these reports come in. Dr. Russell believes the ability to develop and modify AI rules fairly quickly is the key point, because any regulation that is passed may well be out of date within six months.


3. Coordination among specialized regulatory bodies

For specialized technical fields such as aviation and nuclear power, there are already dedicated regulatory bodies with the power to set rules. So rather than going back to Congress or the European Parliament every six months to ask for a new law, it is essential that they delegate rule-making power to a specialized agency. As these institutions are established around the world, they will need to coordinate with one another, track what is happening, what problems arise, and how to solve them, and ensure global consistency, so that companies cannot find an extralegal haven where their AI systems can do things they are barred from doing in other countries.


Dr. Russell said he can guarantee that many AI experts, regulators, politicians, and companies around the world are engaged in lively discussions on this topic. OpenAI, Microsoft, IBM, and other companies have themselves come forward to say the government needs to regulate the industry, which is significant news: most technology companies used to say "don't regulate us, regulation stifles innovation." Now they are saying, in effect, "please stop us before we kill again," and that is the situation everyone is in.

Titanium Space-Time New Book Recommendation - "Artificial Intelligence: A Modern Approach"

Original price:¥198

Titanium Space-Time special price: ¥139

This book comprehensively and deeply explores theory and practice in the field of artificial intelligence (AI), integrating today's leading AI ideas and terminology into widely followed applications in a unified style, truly combining theory with practice.


"Titanium Space Time|Shared Reading Moment" is a special reading column of titanium empty time. This is the spiritual position of emerging entrepreneurs and a new generation of changemakers, linking each unique individual with reading, knowledge and culture as the link, and deconstructing the thinking trajectory of each unique soul.

We invite special guests to read a good book together and explore the essence of current events, capturing new industry trends, opening up new ideas for innovation, and mastering new methods of development with forward-looking thinking and a cutting-edge vision.
