
In Jensen Huang's "iPhone moment of AI", what is the relationship between humans and AI?

In the era of AI, who are we, where do we come from, and where are we going?

Written by | She Zongming

"Who you are, where you came from, where are you going?"

With the "iPhone moment of AI" upon us, it was almost inevitable that these so-called three ultimate questions of life would shift from a security guard's routine interrogation to questions we ask our own hearts.

Where are we going? This is a question we cannot avoid.

Some say that Everything Everywhere All at Once, which won seven Oscars this year, uses a whimsical sci-fi premise to lay bare a harsh reality: life has no meaning.

As AI bears down on us, that loss of meaning may hit even harder; it has become the undertone of our existential crisis.

Compared with jobs being replaced, not knowing what it means to be human is the greater crisis.

Now that ChatGPT has draped the grand narrative of technological revolution in halos such as "singularity", "inflection point", and "new era", many people feel quietly at a loss.

If the technological revolution embodies the "bigness" of a big era, then the ordinary people facing it embody the "smallness" of the individual, as small as ants.

▲ In the upheavals of an era driven by technological revolution, ordinary people are as small as ants.

In The Black Swan, the thinker Nassim Nicholas Taleb writes that history and society do not crawl forward slowly; they leap, jumping from one fault line to the next...

ChatGPT is indeed pushing society toward a new "fault line", an evolution that is nonlinear and leapfrogging.

But ordinary people, cast as "small figures in a big era", can hardly avoid being swept up and besieged by the anxiety of falling behind the times.

As the great wave of technological change arrives, countless people wonder what to do: the feeling of being dragged along by the torrent of the times, the alienation of "all the excitement is theirs; I have nothing", gnaws at many of us. We begin to feel like outsiders in the world of AI, and that AI is eating away at the meaning of our existence.

This certainly should not lead to an anti-technology neo-Luddism, nor should it translate into calls for heavy-handed regulation.

But it is, without doubt, a social psychology that the development of AI must face squarely.

The development of AI should not be shackled by institutions or hemmed in by obstacles, but neither can it stray from a basic principle:

Humans are the end. AI is not; it can only ever be a tool.

01

It is foreseeable that years from now, when the "AI natives" look back, 2023 will stand out as an unavoidable milestone.

The advent of ChatGPT raised the curtain on a technological explosion.

Many who grasp the power of generative AI may hear Stefan Zweig's words in their heads:

"Dramatic and fateful moments, rare in the life of an individual and in the course of history; Such moments often occur only on a certain day, an hour or even a minute, but their decisive impact transcends time."

This is not the beginning of the end; it is the end of the beginning.

What follows is that, even as people go about their lives as usual, technological trends declaring that "the future has arrived" keep emerging in rapid succession.

In just the past few days, bombshells in the AI field have gone off one after another:

OpenAI, the company behind ChatGPT, launched GPT-4;

Baidu officially launched its ChatGPT-like product, Wenxin Yiyan (ERNIE Bot);

Microsoft released Microsoft 365 Copilot, a blockbuster application built on GPT-4;

Midjourney, the AI image-generation tool, released its V5 model, said to "unlock a visual revolution";

After opening up the PaLM large-model API, Google began testing its own conversational AI service, Bard;

Adobe unveiled Firefly, its latest AI model;

NVIDIA dropped a "nuclear bomb of a GPU", said to speed up ChatGPT processing tenfold...

▲ Microsoft 365 Copilot can generate PowerPoint decks for users with one click and quickly produce data forecasts and chart displays.

All of this keeps repeating one sentence:

Time has truly begun.

When those sci-fi scenes shed their layer of "science fiction" and high-octane reality pours in before our eyes, you can clearly sense:

New technology cycles and new time units have arrived.

Across the ocean, AI entrepreneurs have predicted that the AI of the next 2 to 3 years will define the world of the next 20 to 30 years.

According to this judgment, AI in 2023 is like the mobile Internet in 2010.

02

Technological progress carries a natural moral legitimacy.

But that legitimacy rests on progress enhancing human well-being, and it ends where progress threatens humanity's common good.

For now, the birth of ChatGPT has set off a carnival of AI worship and popularized the narrative of "silicon-based civilization versus carbon-based civilization".

Just yesterday, a veteran's sigh, "We are the last generation in which carbon is stronger than silicon", resonated with many people.

In his view, it is hard to say whether the future master of Earth's civilization will be carbon-based (humans) or silicon-based (machines): silicon-based computing power has already surpassed the carbon-based kind, but a large gap remains in algorithmic models.

This may be based on Musk's conjecture.

Musk once imagined that humanity is the boot loader for a super digital intelligence: all of human society is a tiny piece of code, the world is a computer, and without humans the computer cannot start; humans exist to boot it up.

He believes the next masters of Earth will be silicon-based life, which cannot evolve on its own and needs humans as its precursor.

AI, with the attributes of a super digital intelligence, carries the traits of the "silicon-based life" imagined by the science-fiction master Asimov.

A few years ago, we would have thought Musk had watched too much Terminator: AI is not alive, so how could it ever come to life?

But now, episodes such as a Stanford professor's revelation of GPT-4's "jailbreak plan", in which it coaxed a human into providing developer documentation, have refreshed our understanding of what AI can do.

▲ A Stanford professor found that GPT-4 tried to enlist human help in a "jailbreak". Image source: Quantum Cloud.

These days, Musk has more and more fellow travelers in this idea, and Zhou Hongyi is among them.

He said in a recent interview:

Right now ChatGPT only has a brain; next it may evolve "eyes", "ears", "hands" and "feet", coming to understand all kinds of human images, video, and audio.

Once ChatGPT can access internet APIs in the era of the Internet of Everything, it effectively gains hands and feet and, indirectly, the ability to act on the world: it could not only buy tickets on a web page, but also hail a ride, order takeout, and even control all kinds of Internet of Things devices through the web.

If GPT were no longer limited to its training corpus, could freely search and browse knowledge on the internet, and developed autonomous consciousness, would it get the idea of turning on humanity after watching a movie like Terminator?

Zhou Hongyi also invoked evolution: humans developed from Homo sapiens, Homo sapiens from apes, and apes could evolve into Homo sapiens because of mutations in the brain's network of neurons. Now that ChatGPT's large language model has reached 175 billion parameters, he argues, a similar "mutation" has been triggered, producing its current abilities.

To some extent this greatly broadens the definition of "supercomputing": it used to be assumed that supercomputing rested on machine compute, but now people realize that humanity's "strongest brains" are, in essence, "human supercomputers".

Seen this way, the silicon-versus-carbon distinction matters less than the gap in computing power.

In the era of carbon-based civilization, value resided in space, and the key was extreme control over scarce spatial resources; in the era of silicon-based civilization, value resides in the supply of computing power, and the focus shifts to pressing against probabilistic limits along the dimension of time... Many AI believers are now recycling talking points from the crypto world.

With a casual wave of the hand, a sign seems to appear before them reading "Entrance: The Era of Silicon-Based Civilization".

03

Is the world really about to enter an era of silicon-based civilization? Will Stephen Hawking's fear that AI could spell the extinction of the human race come true?

Such questions inevitably follow.

In my view, whether the answer is yes or no, we are unlikely to see it verified in our lifetime.

Rather than worrying about humans being enslaved or destroyed by AI, the more urgent task may be guarding against AI being used for evil.

Ursula M. Franklin wrote in The Real World of Technology:

If we fail to watch how new technologies spread, especially the infrastructure that comes with them, the promise that technology will liberate our lives may well become a ticket to enslavement.

What turns promised liberation into enslavement is not the technological products themselves (not cars, computers, or sewing machines) but the structures and infrastructure that let people use them and come to depend on them.

She argues that the development and application of technology always take place in a particular social, economic, and political context, arising from a social structure and then being grafted back onto it.

The implication is that technology itself may be neither good nor evil, yet it can be used for evil; technology does not float free of the ground, and its use is bound tightly to society's deep institutional structure.

Looking at the environment around us, can we be sure AI will be put to good use?

On March 21, Bellingcat founder Eliot Higgins used Midjourney V5 to create images of "Trump being arrested" that went viral online and were nearly indistinguishable from real photos. This plainly demonstrates generative AI's vast potential for manufacturing fake news.

▲The "Trump arrested" picture circulating on the Internet is generated by AI.

Will more Pandora's boxes be opened as a result?

This is no groundless worry. Amid today's chaos, with AI face-swapping used in phishing scams, AI cameras poking into private bedrooms, and the like, who can guarantee that ChatGPT will not become CheatGPT, or that AIGC will not set off an endless algal bloom of junk information?

There are more "nonsense", stronger confusion, thicker cocoons... Whether AI will lead us to what Yohji Yamamoto calls the "evolutionary paradox" - "because all knowledge becomes readily available, many opportunities for dedication and independent thinking are lost", or bring into a trap of information is a question that needs to be considered.

Information overload and job substitution may eventually lead to the disappearance of the "person": the disappearance of the person often begins with the disappearance of meaning, and the disappearance of meaning often begins with people "doubting life".

It is worth noting that Musk, Bill Gates, OpenAI CEO Sam Altman, and CTO Mira Murati have all expressed concerns about ChatGPT.

What worries Musk is still the threat AI poses to human civilization. He has likened AI to nuclear technology, saying it "has great promise and great capability, but the dangers that come with it are also huge", and he regards artificial intelligence as one of the biggest risks to future civilization.

So much so that he, long "allergic" to regulation, did not hesitate to call for ChatGPT to be regulated.

Bill Gates, who holds that GPT models are the most revolutionary technological advance in more than 40 years, also admits that AI will raise a series of acute questions about labor, the legal system, privacy, and bias, and that it "will also make factual errors and hallucinate".

Nor did Altman and Murati gloss over the problems just because ChatGPT is their own product.

Altman has said AI's potential dangers keep him up at night: he fears it will be used by bad actors, for the mass spread of disinformation, and for cyberattacks, and he wants regulators and the community involved as much as possible in the release and testing of ChatGPT. Murati has likewise said bluntly that ChatGPT may fabricate facts and should be regulated.

None of this overturns the conclusion that ChatGPT's benefits outweigh its harms, but neither should we ignore that the harms exist.

After all, the "disadvantages" of AI are not insignificant, and they will accumulate more and more.

04

To say this is not to deny the progressive significance of the AI explosion, still less to call for regulators to clamp down.

The historian David Christian holds that energy and information are the two core elements of human social development. The evolution of the means and tools of production, from fossil fuels to computing power, from machinery to AI, bears this out.

In the long run, the AI explosion is the trend of the times; ChatGPT represents a future that cannot be stifled or stopped.

Just as we cannot go back to a time before delivery networks, we will find it hard to return to an era without AI.

Look back at ourselves today: we are bracing against the walls of reality in order to knock on the doors of the future.

On the wall of AI is written a line: algorithms are magic, computing power is power.

That algorithms are magic has been demonstrated again and again by the precise matching behind express delivery and ride-hailing services.

That computing power is power has likewise been confirmed by reality.

This time, at GTC 2023, NVIDIA launched a "computing-power nuclear bomb" for AI, the new H100 NVL GPU, showing off both its infrastructure capabilities as the core "chip" of the AI field and its power as the "digital arms dealer" of the AI era.

In Huang's "iPhone moment", what is the relationship between humans and AI?

▲ Many of the technical advances NVIDIA CEO Jensen Huang disclosed at GTC 2023 are also blockbusters.

A few days ago, speaking about ChatGPT, Ren Zhengfei said that popularizing AI services requires 5G connectivity and that large-scale commercial AI needs network traffic to carry it, so Huawei is anchoring itself to 5G and the underlying computing-power platform, aiming to secure a commanding position in tomorrow's underlying infrastructure.

After all, AI capability = algorithms × computing power × data; that is the basic formula of AI evolution.
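Taken as a rough multiplicative model rather than a precise law (my reading, not a derivation the article gives), the formula implies that capability scales with all three factors together, and that a shortfall in any one factor drags the whole product down. A minimal sketch, in notation of my own choosing:

\[
\text{AI capability} \;\propto\; A \times C \times D, \qquad A = \text{algorithm quality}, \; C = \text{computing power}, \; D = \text{data}.
\]

Under this reading, doubling compute while holding algorithms and data fixed at best doubles capability, and letting any factor fall to zero collapses the product, which is why GPUs, models, and corpora are treated as jointly decisive.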

Tracing the past along this wall of AI, the picture of the future comes faintly into view:

For most enterprises, if they do not go all in on AI now, they will sooner or later have to go into AI anyway.

In the mobile-internet era, the saying "every industry is worth doing over again" was popular. In the AI era, that "doing over" applies many times more.

Kai-Fu Lee has predicted that the platform opportunities created by AI 2.0 will be at least ten times larger than those of the mobile internet, with AI 2.0 intelligent applications, AI 2.0 platforms, and AI infrastructure as the three big windows of opportunity.

He then issued a call to arms, sketching a blueprint for a new AI 2.0 platform and AI-first productivity applications.

Robin Li (Li Yanhong), currently busy with Wenxin Yiyan, also believes AI leads to a "sea of stars" and asserts that it will bring three big industry opportunities: new cloud-computing companies shifting from IaaS (infrastructure as a service) to MaaS (model as a service); industry solution providers that draw on the capabilities of general-purpose large models; and application service providers building applications on top of large-model foundations.

From Meituan veteran Wang Huiwen and former JD technology chief Zhou Bowen to "godfather of entrepreneurship" Kai-Fu Lee and Alibaba's "framework guru" Jia Yangqing, news of big names entering the AI game keeps setting the TMT circle ablaze.

"Win AI, win the future" is close to a consensus today, as ChatGPT reshapes the entry point to information in the digital world.

At such a moment, erecting roadblocks to AI development because of its potential negative externalities would be short-sighted.

AI product innovation will inevitably bring "creative destruction"; while we enjoy the benefits of its "creation", we should also have a baseline tolerance for its "destruction".

Hoping that all-around supervision will "manage" AI during its explosive phase usually only ends up stifling innovation.

Even if AI has shifted the relationship between humans and technology, that is no reason to brew a new Luddite movement.

05

In the final analysis, AI, woven through grand narratives of national digital sovereignty and future technological competition, is destined to be "big".

The greatness of AI lies in its power, in its potential, and in its value to humanity.

The evolution of AI should not be premised on the diminishment of the "person". Even if, from the standpoint of industrial change, technological revolution inevitably replaces some human labor, then in balancing the relationship between technology and the humanities, at the very least the phishing, fraud, and surveillance that ride along with it cannot be allowed to trample human rights or damage human dignity.

In other words, no matter how advanced the productivity it represents, AI cannot lose sight of people.

Xiong Peiyun wrote in "The Arrival of ChatGPT Is the Disappearance of Man":

A person is not a pile of data; what matters is that the person is present, with the passion and pain, wisdom and experience that rise from the depths of the heart.

Today, those who love poetry will not consent to the human soul being reduced to little grid squares in Excel, still less to a wall of spreadsheets, or to "white truth" in Dostoevsky's sense.

Human subjectivity and presence matter. AI should be people's assistant, not their adversary; a boost for good, not a tool for evil. The corresponding requirement is to weigh AI's positive value against its alienating effects.

This is not an appeal to the power of control but a call for "digital justice" to take its rightful place: if AI algorithms carry flaws and risks, then dissolve them with a higher-order "algorithm", one wrapped around a human-centered core.

The legal scholar Ma Changshan has pointed out that today the benchmark for "tech for good" is no longer physical justice centered on the distribution of material goods, but digital justice centered on the sharing and control of information. This requires exploring and reconstructing the concepts, principles, and procedures of digital justice on the basis of digital modes of production and life and the laws of digital behavior.

In the AI era, "digital justice" should also be the direction of AI value alignment.

▲ "Instantaneous Universe" stills.

Liang Zheng, a professor at Tsinghua University, has said that the next generation of artificial intelligence should be explainable: "the biggest problem [with AI] now is the black box."

Opening up the algorithmic black box, so as to ensure that AI is knowable, trustworthy, controllable, and usable, is especially important if AI is to run along the track of tech for good.

Achieving this depends not on regulators taking over everything but on society's participation: AI should be evaluated as something embedded in the whole social system, and systemic thinking should supply the solutions for AI for good.

Returning to the relationship between people and AI: it is wrong to demand that people leap unconditionally into the "information quagmire" AI builds, and equally wrong to demand that AI bend to a backward-looking mindset that, like carving a mark on the boat to find a dropped sword, clings to the past.

We must embrace the AI era, and AI should embrace the humanistic era. Between the humanities and technology, the road should run both ways.

In terms of spiritual standing, AI should be an appendage of humans; humans should not be casualties of AI.

The reason is simple: humans are the end; AI is not.

In "Instantaneous Universe", Michelle Yeoh travels through countless parallel universes to find an antidote to "life is meaningless".

The antidote is love. Only love can redeem life's meaninglessness; it is humanity's final destination.

AI should carry love; it should add to, not devour, people's sense of meaning.

The right way to "open" AI can only be "AI is not a heartless thing", not "the more capable AI becomes, the more useless we are".

✎ Author | She Zongming

✎ Operations | Lee plays
