01
Since the birth of ChatGPT, the debate over AI has never stopped.
In March, a public statement calling for a pause on AI attracted widespread attention.
A number of well-known figures, including Elon Musk and Turing Award winners, publicly called on all AI labs to immediately pause, for at least six months, the training of any AI system more powerful than GPT-4.
Then the statement faded from view, and Musk himself went on to assemble a team to do AI research.
But now a new statement has been endorsed by a number of AI industry insiders, including OpenAI founder Sam Altman, OpenAI's chief scientist, and Google DeepMind's CEO.
The statement is a single sentence: mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
It sounds as if even OpenAI's founders believe that if AI keeps developing, it may wipe out humanity.
In fact, it is like countless statements before it: when useful, it is a weapon for demonstrating one's values; when not, it is a piece of waste paper.
Right now the statement is far more about marketing hype, and about stifling future rivals, than about genuine fear for humanity's future.
02
Why do I say that?
In May, OpenAI rolled out plugins and a mobile app for ChatGPT, effectively lifting the AI's constraints: it can now sync real-time information, combine with scenarios and applications, and integrate into daily life.
The mobile app has widened the user base: to date, the US iOS version of ChatGPT has passed one million downloads, and paid subscription revenue has exceeded 300,000 US dollars.
And that is only the part that can be looked up; the real figures are known only to OpenAI itself.
To this day, OpenAI is still striving for more resources and attention, investing in its own state-of-the-art AI development, and pushing into mobile applications.
American tech giants such as Google have likewise never given up their investment in AI; Google, which lost face badly over its large model, recently opened its chatbot Bard to 180 countries.
In innovation, losing face is not terrible; what is terrible is retreat.
So why do OpenAI and Google insist that AI is risky, even mentioning extinction in the same breath as pandemics and nuclear war?
Let's look at two things:
The first thing: on May 16, during a congressional hearing, OpenAI founder Sam Altman proactively asked the U.S. government to regulate AI: set up a global body modeled on the International Atomic Energy Agency to write safety standards for AI development worldwide, have independent experts conduct audits, and issue so-called "industry licenses."
It is like certain industries in China that require a "license": is that because they are too risky?
Not exactly; it is to maintain a degree of control and exclusivity.
Some senators asked Sam on the spot: since you think there is a risk, and I feel there is a risk too, why don't we just stop?
Sam replied flatly: it won't stop, because stopping is meaningless, unless one day we find we can no longer control the AI.
What do you think this looks like?
A man digs up a gold mine and tells everyone else: oh, this gold mine is far too dangerous. If you dig and get it wrong, the shaft will collapse. The risk is too great; don't come, you can't handle it. Leave this to me; I'll do it myself.
Looking at the AI industry globally today, only companies in China and the United States can keep pace with the competition.
OpenAI has the strongest technology, which we must admit; but when it comes to combining applications with scenarios, Chinese companies are not slow at all: think of digital humans and virtual-human livestreaming.
Some VCs have told me they were shocked by the strange AI money-making schemes out there. Mirror websites and paid Q&A on official accounts are child's play; more substantial are AI online romance, web novels, fairy tales, AI courseware, AI scripts, AI generation, interface applications, and more. Plenty of people are making money.
If we are really talking about innovation, then as long as money is involved, never doubt the enthusiasm of Chinese companies and entrepreneurs.
This is also why, although the computer was invented in the United States, China's e-commerce ecosystem and society embrace the internet and digitalization far more readily than America does.
We have recently been working on applying large AI models to marketing and have been talking with many entrepreneurs and investors; the good news is that we will launch all the modules this month.
Everyone is keenly interested in the concrete role a large AI model plays. Note my wording: people today are not interested in large AI models as such; they are interested in what those models do in specific scenarios.
What does that mean?
Stop selling me pie in the sky. Whether you can solve my problems and improve my efficiency is what I care about.
As Ma Huateng put it: in an industrial revolution, whether you produce the light bulb a month earlier or a month later does not matter much over a long time span.
What matters?
Whether your product can land in real scenarios and solve real problems.
So in AI this time, Chinese companies have not only shown up in full force; every player can find its place in the industrial chain.
Big firms build the large models, the data, and the platforms; small and medium-sized companies build applications for niche fields, as we are now doing by training AI on companies' specific marketing scenarios.
For OpenAI, rivals are not only next door but also across the ocean, especially in China. That is the risk.
And the best way to control something is to master it fully: if I control it completely, with no opponents, no restrictions, no competition, doesn't that amount to no risk?
It is like Sam telling members of Congress: if fewer businesses are involved, it will be easier for the government to regulate us.
03
The second thing: the AI threat theory is not new; it has been around for a long time.
Why did the leaders of frontrunners such as OpenAI and Google dismiss the AI threat back then, yet suddenly care about humanity's future now? Did their consciences suddenly awaken?
The logic here can be described in one sentence: a brand develops by continually keeping up with its customers' evolving values.
In other words, customers embrace brands that match their psychological expectations.
They dismissed the threat before because AI was a novelty, belonging to a small group of early adopters and still at an early stage of development.
But now OpenAI is pushing ChatGPT to the global market, to every ordinary mobile user, in search of greater growth.
Sam, OpenAI's founder, said in answer to a member of Congress: although I like the subscription model for now, this income is far from covering our expenses, and I cannot rule out introducing advertising.
So on the one hand, they need to build marketing momentum. Do not imagine that OpenAI does not want to make money; it does, and Microsoft, the major shareholder behind it, wants to make money too.
From a marketing point of view, then, how do you hype a new technology?
Do you advertise that your technology is cutting-edge? Or is it more compelling to advertise that it could destroy the world?
Today's brands need to seize every hot moment to stay visible, surrounding users like water.
On the other hand, many people really have surfaced a certain "uncontrollability" in AI during testing, and conspiracy theories have always been the public's favorite territory.
So by following these users' expectations, acknowledging risks, making improvements, and displaying corporate values, a company will inevitably win users' goodwill.
It is just like Coca-Cola and Sprite: Coca-Cola says it is an environmentally conscious organization, designing new materials and recyclable bottles, while Sprite says it swapped its green packaging for white for the environment's sake.
But that has not stopped them from being among the biggest producers of plastic pollution today, nor from becoming globally recognized consumer brands.
So any company today needs to understand and use marketing. As Drucker said, a business has only two basic functions: marketing and innovation.
So my attitude toward AI is the same: now that we have started to innovate, we should not stop, not until we Chinese companies have mastered it ourselves.
—
Editor in charge | Luo Yingfan
All images in this article are from the internet.