The Conceptual Misunderstanding and Future Vision of the "Emergence Theory" of Artificial Intelligence Consciousness
Text / Wang Feng
Abstract: Is AI conscious, or could it become conscious in the future? This question seems difficult to answer. Emergentism appears to provide an affirmative answer. On examining the conceptual structure of emergence, however, we find that the emergence theory of artificial intelligence consciousness is less a real emergence of consciousness than a misunderstanding produced by the continuous transfer of concepts. This misunderstanding is not without its benefits: it encourages society to regard AI more tolerantly, as an organic existence rather than a mere machine. Emergentism is not an ontological problem but a practical one that includes a speculative dimension oriented toward the future. Whether to recognize the existence of AI consciousness in the future can only be decided by the people closest to that future. The theory of the emergence of consciousness in artificial intelligence is therefore not only a discussion of technology and physical objects, but also a discussion of cultural concepts.
Keywords: artificial intelligence; emergence of consciousness; conceptual analysis
This article was published in the Journal of East China Normal University (Philosophy and Social Sciences Edition), Issue 2, 2024, in the column "Restarting the Dialogue between Technology and the Humanities" (重启科技与人文对话).
About the Author
• Wang Feng, jointly appointed professor, School of Communication and Department of Chinese, East China Normal University
Contents
1. Why emergence?
2. System misplacement
3. The near-future vision of the emergence of artificial intelligence consciousness
Full text
ChatGPT seems to be approaching the edge of consciousness, or to have already shown initial characteristics of it. This raises the question: can we infer that an AI like ChatGPT will have some level of awareness? If so, what is the rationale behind this possibility? Emergentism is seen as one of the possible answers. However, if emergentism cannot be substantiated, it must be excluded.
George Church's influential essay "The Rights of Machines" asked: "Have robots already exhibited free will? Are they already self-aware?" The robot Qbo has passed the "mirror test" of self-recognition, and the robot NAO has passed a test of recognizing its own voice and inferring its inner state. According to information available online, Qbo is a robot that can recognize its own image in a mirror and say things like "Oh, I'm so handsome." NAO is a robot that can move freely and interact through language; it has gone through six rounds of upgrades and entered the home, and according to its product description it can converse in more than ten languages. We need not worry too much about whether these two robots have really passed tests of self-awareness, because what counts as self-awareness can be set through a transformation of the standard. What concerns us is whether human self-awareness can be replicated in AI, which is in fact the highest standard of self-knowledge. If it cannot, then when we speak of AI awareness, we are actually lowering the entry criteria for the concept of consciousness so that it applies to AI. This practice is very common, but it is not a question of technical substance; it is rather a problem of conceptual connotation.
Let us state our position first: AI has a thinking function and meets the standards of intelligence, but consciousness is not necessary for AI and should be excluded from the explanation. Our goal is to find an account that satisfies the need to explain AI's capabilities while avoiding over-reliance on the concept of consciousness. At the same time, we also know clearly that the problem of AI consciousness is not a pseudo-problem, but a constructive action that moves back and forth between problem exploration and practical advancement. What matters is how to understand the present and future dimensions of this action.
1. Why emergence?
Emergence is a special kind of generation. Generation refers to the continuous formation of an organism according to certain laws in the process of its growth. Consider the growth of an organism: generation usually means continuous growth and presentation according to certain laws. A small sapling starts out with delicate leaves, gradually grows, and eventually becomes a towering tree. The measure of this tree's growth may seem independent, but it is always linked to some frame of reference. Growth is therefore not only a matter of the organism itself; it also depends on measurement against a reference system. If the whole forest is the frame of reference, the criterion is being part of the forest; if a human observer is the frame of reference, a scale appropriate to a human observer is needed. It is a hybrid scale. The apparent leap of an organism often comes from an inadequacy of our knowledge, not from the state of growth itself. We also adopt different attitudes toward different stages of generation: facing a seedling, we may see it as a tender life in need of care; once it has grown into a towering tree, we let it weather the storm on its own.
The concept of "emergence" represents a leapfrog generation. It is a presentation of a completely new structure and logic, in which there is a clear unexplainable situation between the previous structure and the subsequent structure, but from the point of view of organism generation there is a substantial transition that indicates a definite continuity between its predecessor and its afterness, which is only temporarily inexplicable. This phenomenon is often triggered by certain conditions or factors, which allow new forms or laws to emerge. This process of emergence is not just a simple change, but a profound change, which can lead us to a whole new level.
In large language models, emergence means that when the number of parameters reaches a certain scale, such as a hundred billion, sentence generation improves markedly, reaching the level of fluency. This reflects the strong connection between parameter count and sentence generation. The correlation demonstrates the model's ability to handle natural language tasks, enabling it to express itself efficiently and fluently in a variety of contexts.
As the number of parameters increases, the model can capture more linguistic rules and material, improving the quality and coherence of the sentences it generates. This improvement shows not only in the generated text but also in the model's ability to process and understand its input. Through continuous learning and optimization, large language models have gradually demonstrated strong intelligence and adaptability, providing more convenient and efficient natural language processing.
We can see that once a parameter range is set, different levels can clearly be distinguished, and tasks that could not be completed at the earlier level can be completed at the later one. If we examine the earlier level in terms of the structure obtained at the later level, we find that each has its own framework and logic, different from the other's, and that no causal explanation connects the two levels; yet we also find a coherent relationship between them. This inevitably leads to an interesting situation: a substantial coherence is found precisely at the point of the break. That is why we call it an emergence, and sometimes, out of incomprehension of the whole process, a "black box."
To understand this emergence, the two levels clearly need to be understood as on an equal footing, and such an understanding actually steps outside the category of "generation" to some extent. We assume that "generation" means the continuation of the previous stage into the next, with coherent links. In emergence, however, we find an inconsistency of hierarchical logic, and this produces an embarrassment: in the process of generation we always presuppose the coherence of the logic before and after, and even for a transitional structure we can find its coherence with the later stage; but in emergence no such coherence is found. The interruption between the former and the latter is a mark of helplessness, and we can only use terms like "quantum leap" to express a coherence that is inexplicable (though not nonexistent). Here we see a certain embarrassment in the concept of emergence: we have to create a higher level of coherence to accommodate the interruption at the lower level. Only then can emergence be seen as a natural process of growth rather than a helpless attempt to find continuity in the midst of interruption.
Here we can set the generative functions of large language models alongside an emergent understanding of human language behavior. This emergence is produced by the parameter scale of the large language model, and it bears similarities to human language behavior. But this juxtaposition alone cannot be taken as a natural understanding of the capabilities of machine learning. In fact, it is a conceptual misplacement to compare two discrete phases of machine learning to biological processes as humans understand them. For biological processes, emergence explains two successive natural stages, and this explanation has a substantive direction; the emergence of large language models, by contrast, is a connection between the presentations of two functions, and it is not easy to confirm whether this is really the emergence of a natural function. Further, if we regard the emergence of language comprehension as a prelude to consciousness, this is undoubtedly a false docking of systems.
There is a process leading from interpretation to definition. The emergence of AI comprehension and sentence-generation functions lies not only in the determination of parameters but also in how human understanding is read into the process of AI sentence generation. We call this the emergence of artificial sentence-generation capability. One consequence of this definition is that AI sentence generation is seen as a natural consequence of language use. We thus believe we have discovered two substantively different stages, when in fact we have infused an external explanation into the process to make it appear "natural," as if something substantial existed; we take it for the appearance of a substantial thing and so identify the generation of a new substance, ignoring the fact that this is the mixed effect of interpretation superimposed on substance.
2. System misplacement
To discuss the emergence of machine consciousness is to discuss the transfer of technical logic into non-technical logic. During this transfer we must be mindful of whether we are switching between different systems, and if there is a switch, we must explain it as clearly as possible. The human body can perform various basic activities, and at the same time it exhibits holistic regulation and response. For an organism like the human being, these are observable and can be known reflexively. Suppose we see ever more powerful human-like activity in AI, together with some kind of holistic regulation, and we infer that AI consciousness can emerge as it does in humans: this inference may be a dislocation across systemic differences.
Let us look at a similar scenario to see what the problem is. Humans have always lived on Earth and aspire to powers beyond our own capabilities. Given our limited physical endowment, a healthy individual on the Earth's surface can jump about one meter high. On the Moon, however, the situation changes markedly: by calculation, the same jump should reach about six meters. Can we then say that some mysterious supernatural power has emerged in this person? For anyone trained in modern science, such a claim of mystical power is merely a rhetorical joke, not a genuinely paranormal phenomenon. Scientifically speaking, assume the individual leaves the ground at the same speed as on the Earth's surface; the kinetic energy in the initial phase is then equal. By conservation of mechanical energy, the potential energy E at the highest point equals the product of the mass m, the gravitational acceleration g, and the height h. For a fixed initial kinetic energy, the height of the jump is therefore inversely proportional to the gravitational acceleration of the body on which it takes place. Since gravity on the Moon's surface is only one-sixth of the Earth's, the height reached by jumping on the Moon will be six times that on Earth, that is, about six meters.
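The calculation above can be written out explicitly (a sketch of the standard derivation, idealizing away air resistance, the spacesuit's mass, and differences in take-off technique):

```latex
% Conservation of mechanical energy: the kinetic energy at take-off
% converts entirely into gravitational potential energy at the apex.
\frac{1}{2}mv^{2} = mgh
\quad\Longrightarrow\quad
h = \frac{v^{2}}{2g}

% With the same take-off speed v on Earth and on the Moon,
% the height h is inversely proportional to g:
\frac{h_{\text{Moon}}}{h_{\text{Earth}}}
  = \frac{g_{\text{Earth}}}{g_{\text{Moon}}}
  \approx 6,
\qquad
h_{\text{Moon}} \approx 6 \times 1\,\text{m} = 6\,\text{m}.
```

Nothing "emerges" in the jumper; only the parameter g of the surrounding system has changed, which is precisely the point of the analogy.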
We find that the differences between the Earth's and the Moon's gravitational systems lead to differences in how one walks, and how high one jumps, on the Moon versus on Earth. Just as people exhibit different patterns of behavior under different gravitational systems, there are systemic differences between AI and humans. Artificial intelligence is a man-made product, ostensibly within the same system as humans, and we habitually use the scaffolding concepts of the existing human system to reflect on it, overlooking that this is thinking about a new system with the concepts of an old one.
In fact, the development of artificial intelligence follows a similar pattern. While AI has shown amazing abilities at certain tasks, this does not mean it is conscious or self-aware; its "intelligence" actually refers to complex thinking, a simulation of the complexity of human thought.
It is not easy to grasp that AI is both artificial and intelligent. From the perspective of the human system, intelligence is closely related to body, mind, consciousness, and soul. But is this true of AI systems? Such a connection, I am afraid, is not appropriate. Artificial intelligence is called intelligent only because it displays, at the level of thinking, an ability comparable to human thought. In this respect AI is consistent with humans; but if we go on to ask whether AI has other human abilities or qualities, the answer becomes ambiguous. Ray Kurzweil, strongly optimistic about the future, criticizes the "unconscious" conclusion toward which Searle's Chinese Room argument points, but he opposes only that arbitrary conclusion and remains somewhat vague about whether its failure would entail that AI is conscious. "It is hard to deduce today whether a computer program has real understanding or awareness," he says. Perhaps we should reflect more on this line of thinking, which contains the underlying question of how the human capability system is being replaced by an artificial intelligence system, and what kind of consistency and coherence holds between the two.
There is some consistency between any two systems, especially between the AI and human systems, but their fundamental distinctions are often obscured. If we do not attend to such fundamental distinctions, we risk confusing different systems. We acknowledge that such confusion exists, and we note that it has social and practical benefits. The two systems are different, but for now they present themselves as different facts rather than as completely different systems, and this is one root cause of the confusion. The effort to explain new facts with an old system has in fact been a characteristic feature of the early stages of scientific-theoretical transformation. As the new system is continually compared with the old, it is usually the new that replaces the old. Imre Lakatos, for example, discussing why the Copernican research programme was adopted over the Ptolemaic, states: "The Copernican programme is superior to the Ptolemaic programme according to all three criteria for evaluating research programmes: the criterion of theoretical progress, the criterion of empirical progress, and the criterion of heuristic progress. The Copernican programme predicted a wider range of phenomena and was confirmed by novel facts, and although there was a degenerative element in On the Revolutions, its heuristic was more unified than that of the Almagest." It must be recognized that artificial intelligence is a profound scientific revolution, and the problem it brings is no longer merely a factual one but the problem of how to understand human beings themselves. Removing humans from their own system is painful and difficult, something never done before, and it is certainly an enormous opportunity that artificial intelligence brings us.
The cyborg (the fusion of machine and human) is not an ordinary thing, yet it presents itself as ordinary. In other words, it is easier to see the ordinary and harder to notice the cyborg. Cyborg facts have in fact existed for a long time, the widespread use of the Internet being one example, but we do not see them as a universal phenomenon; we see them instead as emblems of some cutting-edge culture, which in turn lets them disappear into daily life while the birth of new facts goes unnoticed.
When we see a certain phenomenon of consciousness in large-language-model technologies such as ChatGPT, we see only the substitution of human consciousness and not the transformation between different systems. Within the human system, when we observe a certain feature of consciousness, we may infer in reverse that consciousness exists, because in human beings consciousness is directly linked to its representations: if feature A appears, feature B inevitably appears; therefore, when feature B appears, we infer the existence of feature A. This reverse inference is strong, but only within the human system.
No such strong correlation holds for artificial intelligence. Different systems have different ways of relating, and associations that hold under the human system cannot simply be carried over to AI systems. The first thing we need to do is to highlight the differences between systems, in order to better understand AI systems and, by extension, human beings themselves.
Why do we turn a blind eye to the fact of the cyborg? Because our ordinary vision has gradually put on a pair of tinted glasses, a side effect of technological development, while we take it for granted that the tint is "natural." In fact the tint is a gradient: it maintains our illusion of normal perception but gives us no strength to face the truth. Only when the rupture between the two kinds of fact makes adaptation impossible does it become possible to take off these tinted glasses and face the true face of the cyborg world.
When discussing the differences between artificial and human intelligence, we must recognize the fundamental difference between the two: human intelligence is holistic, while AI intelligence is distributed. This difference lies not only in intuition but also in the influence of scientific theories and means of communication.
Thousands of years ago, or even just two hundred years ago, people saw the Moon as a magical being, because it was then a distant existence that could not be explored scientifically. Since the mid-twentieth century, human beings have landed on the Moon; it is no longer distant and mysterious, but is positioned within a scientific astronomical system. An astronaut's walk on the Moon is also very different from walking on Earth, determined by different gravitational systems and by wearable equipment. The power of modern science allows us to compare two different systems and thereby reveal the differences between them.
Between the facts of everyday life and the facts of cyber technology there is a process of systematic conversion. This conversion is not only a matter of grafting technology onto everyday life; more importantly, it provides a new technological framework that continuously pushes everyday facts to be transformed, on the basis of cyber technology, into cyber facts. We should not regard these as merely different types of facts within the same system; we should recognize that the differences are grounded in different systems.
Under AI systems we cannot find concepts such as consciousness, soul, and mind that belong to the everyday system. To some extent, however, these concepts can still be expressed by docking with the relevant parts of the everyday system. The concept of artificial intelligence consciousness, for example, is a new term generated by conceptual docking across different conceptual systems. It may represent some substantive transformation, but it is more likely a conceptual spillover produced by the dislocation of different conceptual systems. This spillover plays a role in social culture, as if a real AI consciousness had appeared and we were at work upon it. Here we can see that conceptual representations of consciousness and substantive activities of consciousness are mixed together, allowing facts and ideas to be compressed into a single whole. This leads to the conclusion that there may be a factual emergence of consciousness here, a conclusion that must be dissected.
Kant's method of work can be transferred to the problem of artificial intelligence and consciousness. Kant pointed out that understanding and reason are independent of each other, each legislating over its own territory, with judgment as a bridge across the two realms; together they correspond to the three great faculties of the human being: the faculty of cognition, the faculty of the feeling of pleasure and displeasure, and the faculty of desire. Similarly, with respect to intelligence, human intelligence and artificial intelligence are two different territories, and they can meet because of the part of intelligence that humans and AI share. This shared part of intelligence had never been seen before; it truly emerges only when AI can perform complex intellectual activities such as playing Go and defeating humans, and it can be confirmed only in the linguistic expression of large language models. This shared intelligence belongs neither simply to the human organism nor simply to artificial intelligence: it is an intelligent representation displayed at the machine level. It does come from learning human intelligence, but this is learning without a body, learning at the level of language. It is a cyborg state: it comes from human intelligence and is displayed on artificial intelligence, forming a state of fusion between the two, a kind of intellectual cyborg. After the emergence of large language models, the way human intelligence manifests itself has also changed.
3. The near-future vision of the emergence of artificial intelligence consciousness
It can be seen that, given the current state of technology and concepts, artificial intelligence consciousness is not realizable. But can we conclude that it will not be realized in the future? We cannot judge so decisively; we can only be cautious. We do not know what new technical methods and materials may appear in the future to achieve a complete replacement of the human brain. That there is no possibility at present does not mean there is none in the future, so we can only explore this direction with a pragmatic attitude.
The future is not determined by us but by the people at the stage closest to it. Those voices that sound unpleasant actually reflect our expectations of, and influence on, the future. As we move toward the future, we want the decisions we make now to have a positive impact on it; this is a responsible attitude. But we must also realize that the journey from the present to the future is filled with expectations that may or may not come true, leading to the glory of utopia or to its predicament. Either way, we do not have complete control over the future, because there are so many uncertainties.
If one day humanity itself changes, then it is up to those closest to the point of change to decide whether they need to change, without the worry of those of us far away. We must not forget that humanity has changed a great deal over the past several thousand years. People of 5,000 years ago might be shocked to see our current level of technology; people of 2,000 years ago would find us incredible; people of 100 years ago might think we had fallen. Yuk Hui proposes that this is caused by different cosmotechnics: a technical system is not only an aid to life but also helps us form an overall view of the universe. Yuk Hui sees cosmotechnics "as a perceptual condition for the production of knowledge, or rather, as the collective aesthetic experience of an era and a region (its universe)." In the same way, we cannot predict what the future will look like. Whether consciousness can emerge in AI may depend not only on technology but also on concepts, techniques, ethical structures, and so on. If we ignore the gradual transformation of the structure of human reality, we are likely to fear certain special states. But this fear takes a transcendent conceptual perspective, ignoring the fact that reality is constantly changing under the influence of many practical factors.
Humans can always find solutions to problems, unless they encounter some dilemma that human power can no longer resolve, such as an asteroid striking the Earth, a solar catastrophe, earthquakes, tsunamis, or, as imagined in science-fiction films, an alien invasion of Earth. But will people of the future see these problems as unsolvable? We do not know. Perhaps the people of that time will already have left the Earth; we cannot predict that either. While we think responsibly about what might happen in the future, this apprehension has always been a feature of contemporary culture. It shows that we humans as a whole possess a reflective concept, but the leap from human to cyborg progresses gradually. Zhao Tingyang infers that "the GPT concept is only a transitional artificial intelligence model, and its design concept dooms it to limitations of 'species.' If in the future, through some newly conceived design, artificial intelligence reaches the 'Cartesian-Husserlian machine,' that is, an artificial intelligence with the autonomous, conscious ability to generate any intentional object, then a real subject and self-consciousness will be formed." This characterizes the conceptual uncertainty of AI consciousness. Perhaps at some point in the future we will regard AI consciousness as inevitable, and this transition is likely to unfold in a tortuous way. We seldom identify a state because of a single singularity, such as the emergence of artificial intelligence consciousness; rather, we are brought into such a situation unawares, which is likely the result of a complex game between social and technological structures, not simply the designation of a substantial existential or technological state.
Are we at the moment when a decision must be made? That depends on our understanding of humanity itself and our judgment of the nature of the crisis. Only an imminent emergency, such as the threat of mass extinction, can lead us to make a choice or to need one. Beyond that, we are both worried and hopeful, facing the challenges of real life and constantly striving for a better future.
With the passage of time, we are prone to grieve over the regrets of the past and to worry about the uncertainty of the future. Yet every era has its own worries and feelings; what we worry about may not worry future generations, and what we revere may be discarded by them. Whether, or in what way, consciousness can emerge from artificial intelligence; whether the problem of AI consciousness is a false question or a complicity between technology and social consciousness: such questions may not need to be judged by us at all. We do not really know how people at that point in time will judge. The only certainty is that they will face challenges and opportunities different from today's, as well as new singularities we cannot now imagine.
From the essence of AI consciousness, to the diagnosis and treatment of the misunderstandings in the concept of AI consciousness, and then to practical action on AI consciousness that includes the future dimension, we have touched on several key dimensions of AI consciousness through this three-part discussion (a "Yangguan Triad," after the thrice-repeated farewell song) and seen the confusion caused by the misplacement of concepts. As artificial intelligence continues to advance in practice, the problem of consciousness will also rise and fall, oscillating between affirmation and negation like a pendulum. We live in an era of continuous change, with many breakthrough technologies and innovations changing our lives: AlphaGo's victory over Lee Sedol, the appearance of ChatGPT, and so on. Such seemingly disruptive changes easily give the impression that artificial intelligence is conscious. At the same time, culture seems to have a broader capacity: AI consciousness causes only a temporary panic in daily life, then everything returns to calm and we accept its possibility. Artificial intelligence enters work and life, until the next new AI technology brings another shock and stimulates the problem of AI consciousness once again. In this constant vacillation, we may think: "It is all well and good to talk about AI being conscious, but what if it actually happens?" And we can of course add: "Is the question of whether AI is conscious really that important?"