
Chen Changfeng, Huang Yangkun | ChatGPT's Knowledge Function and the Knowledge Crisis of Humanity


Abstract: Generative AI technology represented by ChatGPT is demonstrating powerful capabilities of knowledge production and dissemination and may have an incalculable impact on the human knowledge system. From the perspectives of social epistemology and the sociology of knowledge, this paper analyzes the theoretical legitimacy and potential impact of ChatGPT as a social actor in the construction and dissemination of knowledge. Ideally, it creates opportunities for promoting knowledge exchange and integration and for cultivating a higher level of collective intelligence; but where social reality and technological development intertwine, it may also move toward the opposite of "crowd intelligence connection," capturing knowledge power and developing into a silicon-based knowledge oligarchy in the post-human sense. Integrating knowledge ethics and media ethics into the framework of anticipatory technology ethics helps to mediate the tension and contradiction between socialized technology and the socialization of knowledge, and thereby to better clarify and locate the human knowledge crisis brought about by the intervention of artificial intelligence in knowledge networks.


Contents

1. Knowledge production and dissemination in the era of human-machine symbiosis: socialized technology and socialization of knowledge

2. ChatGPT as a subject of knowledge: how artificial intelligence is related to human knowledge

3. The Possibility of Crowd Connectivity: The Vision of Knowledge Socialization

4. The risk of knowledge oligarchs: the hidden concerns of technological socialization

5. Resolving the Knowledge Crisis: Integrating Anticipatory Technology Ethics, Knowledge Ethics, and Media Ethics

ChatGPT and the GPT (Generative Pre-trained Transformer) technology behind it have attracted global attention. Since its debut, ChatGPT's rich knowledge reserve has made it a "huge engine that stimulates productivity and human creativity," and its functional position in knowledge production and dissemination has gradually been recognized.

However, in discussing generative artificial intelligence represented by ChatGPT and the human knowledge system, several issues need to be clarified in advance. First, can the content that ChatGPT produces and disseminates be called "knowledge"? In other words, how should such content be positioned within existing epistemology? This is the first problem to be solved. Second, if the "legitimacy" of its content as knowledge is recognized, how does the knowledge it holds relate to, and differ from, existing human knowledge? Finally, a question that has to be pondered is: what impact might the knowledge system it dominates have on the knowledge system dominated by humans? Will it promote the exchange and integration of knowledge, or will it capture knowledge power and monopolize the socialization of knowledge? Although many still cautiously regard ChatGPT as weak artificial intelligence, confronting the possible "Collingridge dilemma" in the field of knowledge, and doing so while the knowledge and capabilities of artificial intelligence grow ever more prominent, gives these reflections important value.

1. Knowledge production and dissemination in the era of human-machine symbiosis: socialized technology and socialization of knowledge

What is knowledge? This is an important question that must be raised before discussing knowledge and artificial intelligence. In fact, there is considerable disagreement within the theory of knowledge, that is, epistemology: the definition of knowledge in the Theaetetus as "justified true belief" was admired by traditional theorists of knowledge for a long time. However, with the formulation of the "Gettier problem" in the 1960s, people realized that the tripartite definition of knowledge in traditional theory is insufficient: even when "justification," "truth," and "belief" are all guaranteed, one may still fail to obtain knowledge, and improving the definition of knowledge requires adding to, or entirely replacing, its necessary conditions. Since then, traditional epistemology has been challenged, and its supra-historical conception of meta-knowledge and its tradition of purely rational logic have drawn criticism. Some non-mainstream theories, such as epistemological naturalism, which approaches individual epistemology through physiology and psychology, and epistemological contextualism, which approaches social epistemology through social history, have been reactivated in the theoretical field to explain the dynamic processes of knowledge production, dissemination, and reception.

As human society enters an era of rapid technological development, non-human entities such as machines and technologies, animals and plants, and ecological environments have received attention as actors in meaning-making. The role of machines and technologies in constructing human knowledge and cognition can be seen in Peter-Paul Verbeek's non-humanist account of the mediation of morality, in which technologies such as ultrasound have transformed human perception and experience of pregnancy, disease, and life. At present, the human knowledge system is facing new changes: knowledge innovation is moving toward a new stage of human-machine collaboration, and intelligent technology is changing contemporary knowledge along multiple dimensions, including its connotation, types, scope, modes of production, representation, and structure, revolutionizing our basic concept of knowledge.

It is precisely for this reason that theories of knowledge based on human logic, rationality, physiology, and psychology now face a test of their explanatory power: when machines and technologies may become (or have already become) important subjects affecting the production and dissemination of human knowledge, how can theories grounded in rational logic, physiology, psychology, and other natural factors account for the non-human part of the knowledge system? Social epistemology, by contrast, retains its explanatory power and vitality for thinking about the current situation of human-machine symbiosis. Unlike other epistemological views, social epistemologists oppose individualism, which regards knowledge as the product of the operations of an isolated individual mind; they instead regard knowledge as social consensus and a social system, focusing on "the concepts and norms of knowledge at the social level." Under social epistemology, the influence of social factors on the justification of knowledge has been clarified, and highly "socialized" and constructed machines and technologies can naturally enter the discussion of knowledge production and dissemination in the era of human-machine symbiosis.

It should be noted that social epistemology is empirically rooted in the sociology of knowledge. Therefore, in discussing knowledge production and dissemination in the era of human-machine symbiosis from the perspective of social epistemology, we should follow the sociology of knowledge in defining knowledge: it includes not only general scientific and theoretical knowledge but also the ideas, concepts, and common sense used to guide daily life. This delineates the boundary of the "knowledge" discussed in this paper.

2. ChatGPT as a subject of knowledge: how artificial intelligence and human knowledge are related

Working from the two levels of socialized technology and the socialization of knowledge, the discussion above shows that, from the perspective of social epistemology, machines and technologies should not be absent from discussions of knowledge production and dissemination in today's intelligent society. However, when we shift the perspective to specific artificial intelligence technologies and try to explain ChatGPT's function and impact in knowledge production and dissemination, another question must be answered: what is the relationship between artificial intelligence, as represented by ChatGPT, and human knowledge?

As far as AI technology in the general sense is concerned, there are at least two dimensions to the relationship between AI and human knowledge. On the one hand, knowledge itself constitutes the basic substance of AI. Nils Nilsson long ago described AI as "the science of knowledge, the essence of which is how to represent, acquire, and use knowledge." Knowledge is the basis of intelligence; without humanity's pre-existing knowledge, it would be impossible to evaluate the intelligence displayed by machines. Although the three major schools of artificial intelligence, symbolism (oriented to names and symbols), behaviorism (oriented to things and action), and connectionism (oriented to the mind), each have their own technical propositions, the pursuit of knowledge representation has never stopped: all have long been committed to finding better data structures to symbolize, formalize, or model human knowledge and to give full play to the computing power of machines.

On the other hand, knowledge discovery is an important extension of artificial intelligence. Michael Polanyi's dichotomy of explicit and tacit knowledge, for example, takes on new significance in the context of artificial intelligence: human explicit knowledge can be learned by machines under an appropriate knowledge representation and then used to serve human decision-making, while tacit knowledge that has not yet been articulated may be converted into explicit knowledge through AI-supported knowledge discovery and data mining. Beyond the transformation and discovery of human knowledge, a kind of machine knowledge parallel to human knowledge has also been continuously discovered and refined in the development of artificial intelligence. In knowledge distillation in deep learning, for instance, a "teacher-student" framework extracts the knowledge (such as model parameters) contained in a trained model and generalizes it to other models, forming a process, analogous to human teaching, in which "knowledge" is imparted from the "teacher model" to the "student model." The "dark knowledge" transferred in this process has the general attributes of knowledge and is likewise discovered in the training of artificial intelligence, yet it often lies beyond what humans can grasp and understand, and some scholars regard it as a typical example of machine knowledge.
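To make the distillation mechanism described above concrete, the following is a minimal PyTorch-style sketch of the loss commonly used in teacher-student distillation; the temperature, weighting, and function names are illustrative assumptions for exposition, not the specific setup discussed by the authors.

```python
# Minimal sketch of knowledge distillation ("dark knowledge" transfer), assuming a
# trained teacher network and a smaller student network are already available.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Combine a soft loss (match the teacher's softened output distribution)
    with a hard loss (match the ground-truth labels)."""
    # The softened teacher probabilities carry the "dark knowledge": the relative
    # similarities among wrong classes that hard labels alone do not express.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    soft_loss = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T * T)
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```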

As far as ChatGPT itself is concerned, its learning and absorption of human knowledge has reached a high level, and this level can be assessed from two angles: parameters and data. As a large language model, GPT's performance typically improves as more data and parameters are added to the model. At present, different versions of ChatGPT run on two models, GPT-3.5 and GPT-4: the former has 175 billion parameters and a training set of 499 billion tokens. Although the number of parameters and the size of the dataset for GPT-4 have not been officially stated, it has demonstrated SOTA (state-of-the-art) performance on multiple test sets and has scored above 80 percent in tests across multiple subject areas. To a certain extent, these figures show that GPT technology's mastery and use of knowledge has begun to surpass not only comparable large language models but also most human individuals.

At the same time, ChatGPT also uses reinforcement learning to dynamically learn, from its dialogues with humans, knowledge it does not yet have and to correct its own mistakes. At the technical level, Reinforcement Learning from Human Feedback (RLHF) has been embedded in ChatGPT, and existing technical conditions allow a conversational AI system to keep updating its knowledge under the RLHF mechanism and to improve its responses in subsequent exchanges.
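As a rough illustration of the two learning signals behind RLHF, the sketch below shows a preference loss for training a reward model on human rankings and a simplified REINFORCE-style policy update with a KL penalty toward the reference model; production systems such as ChatGPT use PPO with considerably more machinery, so the functions and coefficients here are assumptions for exposition only.

```python
# Minimal sketch of the two learning signals in RLHF (not the actual ChatGPT pipeline).
import torch.nn.functional as F

def preference_loss(reward_chosen, reward_rejected):
    """Reward-model training: annotators rank two responses to the same prompt;
    the reward model learns to score the human-preferred response higher."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

def policy_objective(logprobs, ref_logprobs, rewards, kl_coef=0.1):
    """Policy optimization: maximize the learned reward while penalizing drift
    from the reference (pre-RLHF) model via an approximate KL term."""
    kl_penalty = logprobs - ref_logprobs              # per-token approximate KL
    shaped_reward = rewards - kl_coef * kl_penalty    # reward shaped by the KL penalty
    # Simplified REINFORCE-style surrogate; real RLHF pipelines use PPO with clipping.
    return -(logprobs * shaped_reward.detach()).mean()
```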

Artificial intelligence is built on the representation, acquisition, and application of knowledge, so entering and observing the technical nature and social impact of artificial intelligence from the perspective of knowledge is a natural approach. On the one hand, a main line of AI development has been to keep collecting, learning, and internalizing humanity's existing knowledge; on the other hand, AI has shown a certain ability to assist in the transformation of tacit knowledge and to generate knowledge of its own. Artificial intelligence represented by ChatGPT has become an important actor in the contemporary knowledge system: it not only learns, uses, and disseminates humanity's existing knowledge but also participates in the mining and even the self-production of knowledge, and its status as a subject of knowledge should be recognized.

3. The Possibility of Crowd Connectivity: The Vision of Knowledge Socialization

Having established the status of artificial intelligence represented by ChatGPT as a producer and disseminator of knowledge, it becomes important to consider the changes it may bring to the human knowledge system. As noted above, ChatGPT's function with respect to the knowledge system is not limited to internalization and inheritance; it also extends to innovation and development. This assertion rests on the vision of knowledge socialization: knowledge is a product of society, generated and disseminated in social interaction, and as one of the actors of the intelligent society, artificial intelligence technology represented by ChatGPT is connected to the social network of knowledge production and dissemination and can participate in the socialization of knowledge together with human beings.

Peter L. Berger and Thomas Luckmann introduced the concept of the social stock of knowledge when explaining the process of knowledge socialization: they argued that language, and the semantic fields formed by language, objectify the accumulated experience of individuals and society, determine which experience is retained, and ultimately form the social stock of knowledge. In the past, we may have regarded language as a distinctly human artifact, "the way in which language expresses, that is, the way in which society as a whole represents empirical facts." As an artificial intelligence with linguistic ability and the capacity to produce and disseminate knowledge, ChatGPT is showing the ability to materialize the social stock of knowledge described by Berger and Luckmann: as a large language model, GPT technology has inherent advantages in language understanding and is compatible with the language and semantic fields underlying the social stock of knowledge. It can expand upon the knowledge system of human history, learn from users' inputs and feedback in real time during interaction, and mine knowledge from them, including explicit knowledge that humans have already grasped and expressed as well as tacit knowledge that humans have grasped but cannot clearly communicate, finally realizing the social distribution of knowledge in interaction.

In fact, this idea of "crowd intelligence connection" was already practiced in the Web 2.0 era, namely the development of so-called "collective intelligence" through interconnection and social networking: a distributed intelligence that emerges from the interactions of individuals and is continuously enhanced and coordinated, as in collective knowledge bases such as Wikipedia, crowdsourcing platforms such as Topcoder, social annotation applications such as WeChat Reading, and recommendation platforms such as IMDb. In the context of the large-scale social application of generative intelligence, the vision of "crowd connection" gains further possibilities:

First, the heterogeneity of the subjects of "crowd connection." Individuals under collective intelligence are no longer limited to people but may also be software agents. The generation of collective intelligence no longer relies solely on the knowledge co-creation model among Wikipedians; it can also arise from the seamless integration of intelligence between machines and people and among machines themselves (such as the transfer of "dark knowledge"). For ChatGPT, there are countless cases of knowledge exchange and collaboration between machines and humans, as well as of intelligent communication between machines, such as the grafting of functions between ChatGPT and other generative models such as Stable Diffusion to improve the capacity for multimodal content production, as sketched below.
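A minimal sketch of such grafting is given below, assuming the pre-1.0 openai Python client and the Hugging Face diffusers library; the model identifiers and prompt wording are illustrative assumptions, not the specific pipeline the authors describe.

```python
# Minimal sketch: a language model "grafts" its knowledge of style and composition
# onto an image model by writing the prompt the image model consumes.
import openai                                   # pre-1.0 client; API key read from env
from diffusers import StableDiffusionPipeline

def text_to_image(user_idea: str):
    # Step 1: ask the language model to expand a rough idea into a detailed image prompt.
    chat = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Write a detailed Stable Diffusion prompt for: {user_idea}"}],
    )
    image_prompt = chat.choices[0].message["content"]

    # Step 2: hand the generated prompt to the image model.
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    return pipe(image_prompt).images[0]
```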

Second, the intelligentization of the "crowd intelligence connection" model. It must be admitted that "collective intelligence" in the Web 2.0 era was not without the participation of intelligent technology: Google, for example, uses the collective knowledge and digital traces recorded by its search engine to provide intelligent suggestion and completion for the queries users type into the search bar, a case of intelligent technology connecting the wisdom of the crowd. However, on the one hand, whether the results of such preference prediction based on recommendation algorithms can be called "collective intelligence" or "public knowledge" remains open to question; on the other hand, users everywhere transfer a large amount of local knowledge in their dialogues with ChatGPT, and with the help of data-mining techniques this user-contributed wisdom can be crystallized into knowledge during model fine-tuning and iteration and reflected in subsequent models, something difficult to achieve in recommender systems.
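To illustrate the first mechanism mentioned above, here is a toy sketch of prefix completion driven purely by the crowd's logged queries; it conveys the general principle only and is an assumption for illustration, not Google's actual system.

```python
# Toy sketch: query completion from collective digital traces.
from collections import Counter

class QueryCompleter:
    def __init__(self):
        self.log = Counter()  # collective trace: query -> frequency

    def record(self, query: str):
        self.log[query.strip().lower()] += 1

    def complete(self, prefix: str, k: int = 3):
        """Rank past queries sharing the prefix by how often the crowd asked them."""
        prefix = prefix.strip().lower()
        matches = [(q, n) for q, n in self.log.items() if q.startswith(prefix)]
        return [q for q, _ in sorted(matches, key=lambda x: -x[1])[:k]]

completer = QueryCompleter()
for q in ["what is knowledge", "what is knowledge distillation",
          "what is rlhf", "what is knowledge distillation"]:
    completer.record(q)
print(completer.complete("what is know"))  # the crowd's most frequent matching queries
```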

Finally, the expanded effect of "crowd connection." Once large language models exceed a certain scale threshold, abilities such as few-shot prompting, multi-step reasoning, and model calibration emerge: they appear and improve suddenly, in forms humans had not predicted or grasped, and the larger the model, the faster it memorizes and the less it forgets. GPT-4, which surpasses GPT-3.5 in learning, reasoning, self-correction, and memorization of knowledge, has been embedded in ChatGPT, greatly enhancing the efficiency with which ChatGPT learns, discovers, and disseminates knowledge. In sum, the generation and development of collective intelligence in the era of large models was unimaginable in the era of small models.
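Few-shot prompting, one of the emergent abilities mentioned above, can be shown with a short sketch: the model infers the task from a handful of in-context examples with no parameter update. The translation examples follow the classic GPT-3 demonstration; the model name and the pre-1.0 openai client call are assumptions for illustration.

```python
# Minimal sketch of few-shot prompting: task inferred from in-context examples.
import openai  # pre-1.0 client; API key read from env

few_shot_prompt = """Translate English to French.
sea otter -> loutre de mer
peppermint -> menthe poivrée
cheese ->"""

completion = openai.Completion.create(
    model="gpt-3.5-turbo-instruct",
    prompt=few_shot_prompt,
    max_tokens=5,
)
print(completion.choices[0].text)  # expected: " fromage"
```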

The technological progress of ChatGPT and its social application provide a new approach to developing collective intelligence in the intelligent era and create a new vision of the socialized production and dissemination of knowledge: knowledge can be connected among people, between people and machines, and among machines, creating a vast, plural, and self-evolving social stock of knowledge. However, past experience with collective intelligence reminds us that collaborative knowledge production is essentially the production of social consensus, and imagining "crowd intelligence connection" from the technical level alone may create blind spots: the machine is only one of many actors, and technology is only one part of the social system. It was once held that collective intelligence emerges only in environments that are diverse, independent, and decentralized, and other scholars have distilled four basic principles of collective intelligence: openness, peering, sharing, and acting globally. It is not hard to see that the development of the concept of "collective intelligence" also follows the main logic of social epistemology and the sociology of knowledge: knowledge is a social product shaped by society's political, economic, cultural, scientific, and technological elements, and the investigation of its production and dissemination should be placed within the overall social context. In terms of its technical characteristics, ChatGPT may indeed lead to the emergence of "crowd connection," but when it and the social system in which it is embedded operate contrary to the collective-intelligence principles of openness, peering, sharing, and acting globally, "crowd connection" will be blocked or even reversed.

4. The risk of knowledge oligarchs: the hidden concerns of technological socialization

Karl Mannheim's "politics of knowledge" and Michel Foucault's theory of "power-knowledge" provide insight into the power bias exposed in the process of knowledge socialization. "Crowd connection" is an open vision of ChatGPT as a knowledge subject integrating with the human knowledge system, but when artificial intelligence and its environment conflict with the principles and conditions for the development of collective intelligence, its own character as a vast knowledge base may push it in the opposite direction of "crowd connection," turning it into a silicon-based center of knowledge power, a new "knowledge oligarchy."

The reason for stressing that ChatGPT may become a new knowledge oligarchy is that the prior process of knowledge socialization has in fact been a process in which successive knowledge oligarchies emerged. In the pre-ChatGPT period, the early formation of the global knowledge system depended on universities and learned societies as its main bodies, large academic publishers as intermediaries, and the verification, truth-seeking, and argumentation of scientific knowledge as its general method. With technological innovation, the Web 2.0 era seemed to change the original structure of knowledge power; although data took on a form of convergence and symbiosis, platform capitalism in fact made databases, search engines, online encyclopedias, and public platforms the new knowledge oligarchies, and the systemic inequality behind them is considered to have damaged the global system of knowledge production.

The reactions of several types of knowledge-distributing subjects in the current system of knowledge production and dissemination give an intuitive sense of ChatGPT's impact on existing knowledge power, as well as of the possibility that it will develop into a knowledge oligarchy: some universities are drawing up rules against ChatGPT entering the campus, some academic journals have begun to prohibit or restrict ChatGPT from appearing in articles as a co-author, and expert communities such as Stack Overflow do not allow the uploading of ChatGPT-generated content... That mainstream knowledge-distributing subjects are imposing intensive restrictions on ChatGPT as a new type of subject is in fact an important signal that the system of knowledge power is facing change and reconfiguration.

At present, ChatGPT, as an emerging technology, is still attached to platform capital and deeply rooted in the social soil of the West, which determines the first step on its path toward becoming a regional or even global knowledge oligarch. Just as with the knowledge oligarchs cultivated by platform capital in the pre-ChatGPT era, such as Google Search, Wikipedia, and the large academic databases led by Elsevier and Springer, the distribution of knowledge they dominate is in fact a microcosm of the global capital expansion of Western tech giants: they not only convey knowledge and concepts bearing the imprint of Western culture to global users but also redefine how most people acquire knowledge; when the desire for knowledge is stirred, "go to Google, search the wiki" has become the reflex of many people. At present, ChatGPT is to a certain extent copying the path of Google and Wikipedia, acquiring users at high speed around the world, and the habit of "going to Google and searching the wiki" may be giving way to "ask ChatGPT."

This path of cultivating a cognitive oligopoly by attaching to a specific social environment and to platform capital deserves attention. The databases and search engines mentioned above have been repeatedly questioned for their Western-centrism during their respective global expansions; in fact, they do package certain Western and Northern prejudices as knowledge and spread those prejudices after fostering the attachment of the East and the South to their knowledge power, which is precisely the social, positional, or ideological character of knowledge that Mannheim described. Moreover, beyond fostering attachment to knowledge power, this kind of knowledge oligarchy may also directly blockade and block knowledge, something already visible in some of ChatGPT's earlier moves: ChatGPT and its developer OpenAI have blocked a large number of Asian nodes, which objectively makes it difficult to obtain and exchange knowledge there.

But if the conception of the knowledge oligarchy that AI technologies such as ChatGPT may form stopped there, ChatGPT would be no different from Google Search or Wikipedia. Obviously, there is a significant difference in intelligence between the former and the latter two: intelligent technologies such as GPT have demonstrated the ability to assist in knowledge discovery and to generate knowledge of their own, giving ChatGPT subjectivity in knowledge creation and dissemination. The basic standpoint of the sociology of knowledge requires us to remain logically consistent: if an artificial intelligence with the capacity to create and disseminate knowledge is regarded as an actual social actor, its possible impact on knowledge must be considered carefully.

After passing through the first stage of becoming a knowledge oligarchy, that is, attaching itself to a specific social soil and to platform capital and occupying the global system of knowledge power, ChatGPT may also move toward a second stage, in which the concept of the knowledge oligarchy points directly to the relationship between machine knowledge and human knowledge. As the computer scientist Stephen Wolfram has put it, a "civilization of AIs" is taking shape that is as computationally irreducible as the weather, runs parallel to human civilization, and possesses a history belonging exclusively to the web of artificial intelligences; when humanity reaches the technological singularity, all humanist narratives, including human knowledge, are likely to lose their descriptive and explanatory validity. Already in the development of artificial intelligence, whether it is the capabilities that emerge as models scale up or the "dark knowledge" that appears in model compression, what occurs lies beyond what existing human knowledge can explain or predict. Computer scientists are trying to sort out and explain the emergence of large language models in order to form relevant theories and knowledge, which is in effect the translation of as yet unexplained machine knowledge into human knowledge.

This path of cultivating a cognitive oligarchy on the basis of intelligence and technological logic also deserves attention, for it concerns the crisis of human subjectivity in knowledge production and the existential crisis of humanism. On the one hand, the emergence of this kind of knowledge oligarchy redefines the concept of knowledge: phenomena in artificial intelligence that cannot be explained but certainly occur cannot be ignored, yet whether their interpretation constitutes a supplement to human knowledge or a commentary on machine knowledge still needs to be discerned. On the other hand, its emergence marks the disintegration and reshaping of intermediary groups, or a rewriting of the effects and methods of knowledge socialization. GPT technology has not yet fully solved problems such as "hallucination" and plagiarism; if its status as a knowledge oligarch is established, knowledge production and dissemination dominated by GPT technology may push the socialization of knowledge into a vicious circle of frequent misinformation and the circular reproduction of knowledge. Moreover, the exchange of human knowledge has so far been completed among people: people gather on various platforms to complement and correct one another and form open, accessible knowledge. But when people begin instead to turn to ChatGPT for answers within a small interface, exchanging and co-creating knowledge with the machine, the human knowledge community will decline and human knowledge production will drift toward atomization.

5. Resolving the Knowledge Crisis: Integrating Anticipatory Technology Ethics, Knowledge Ethics, and Media Ethics

From the perspective of social epistemology and the sociology of knowledge, this paper treats the artificial intelligence technology represented by ChatGPT as one of the important social actors in the construction and dissemination of knowledge and clarifies the two situations its intervention in the social network of knowledge production may bring about: ideally, it can promote knowledge exchange and integration and cultivate a higher level of collective intelligence; but where social reality and technological development intertwine, it is also likely to capture knowledge power and develop into a silicon-based knowledge oligarchy in the post-human sense, pushing knowledge production and dissemination into crisis.

Traditional humanist ethics tends to reflect on the social implications of technologies only after they have taken shape, and this after-the-fact, external approach has often been criticized by post-phenomenologists and posthumanists. Some new ethical paradigms help remedy this lag in humanity's adaptation to intelligent technologies. Among them, the Dutch school of anticipatory technology ethics is an attempt to respond to this criticism, advocating forward-looking and practical thinking to evaluate the social consequences of technology. Tsjalling Swierstra and Katinka Waelbers, for example, designed an ethical matrix for the technological mediation of morality, combining the three dimensions of status, capacity, and obligations with the three levels of stakeholders, consequences, and the good life, to help identify the ethical risks of a technology; Philip Brey has compiled a detailed checklist of technology ethics around the four dimensions of harms and risks, rights, distributive justice, and well-being and the common good.

These ethical-theoretical tools are essentially responses to Verbeek's non-humanist "materialization of morality," the view that machines and technologies are equally important social actors and that AI can be endowed with moral concepts through technological design; this offers a feasible way to regulate AI products such as ChatGPT, which are socialized rapidly once released. However, ChatGPT has a great many uses, and knowledge production and dissemination is only one of them.

The combination of knowledge ethics with anticipatory technology ethics concerns how to judge, in advance, the value and harm an intelligent technology may bring once it is used in knowledge production: whether specific knowledge can be formed, why specific knowledge should (or should not) be formed, what the values and problems of such knowledge are, and who should be the subject responsible for its value and risk, and thus to think more carefully about the moral ideals, disciplinary affiliation, and historical character corresponding to specific knowledge. Correspondingly, the combination of media ethics with anticipatory technology ethics concerns forward-looking thinking about the value and harm that technologically generated content may bring in information dissemination: whether specific AI-generated information can be disseminated, why specific AI-generated information should (or should not) be formed, what the values and problems of such disseminated information are, and who should be responsible for the value and risk of its dissemination; this is in fact oriented toward the socialization of the media.

The introduction of knowledge ethics and media ethics corresponds to ChatGPT's dual positioning as a knowledge producer and a medium of knowledge dissemination, and integrating them into the framework of anticipatory technology ethics helps to better clarify the risks posed by intelligent technology's intervention in knowledge networks. In terms of Brey's theory of anticipatory ethics, integrating knowledge ethics and media ethics allows ChatGPT's ethical issues in the field of knowledge to be clearly located: at the level of harms and risks, it may impair human cognitive capacity and the social knowledge system; at the level of rights, it may affect people's autonomy, dignity, and intellectual property as subjects of knowledge production; at the level of distributive justice, there may be injustice, discrimination, and blockades in the dissemination and distribution of knowledge; and at the level of well-being and the common good, it may undermine democracy and pluralism in the socialization of knowledge. Following these potential risks, we can further allocate ethical responsibility and trace risk across multiple entities, including the technology itself, technology developers, enterprises and investors, industry regulatory organizations, and users, at the three levels of technology, artifact, and application that Brey identifies.

It is true that as generative artificial intelligence represented by ChatGPT floods into the field of knowledge production and dissemination, whether one looks at the promising vision it may advance or the risk of monopoly it may create, the essence lies in the tension and contradiction generated when socialized technology is integrated into the socialization of knowledge. How such technology is to internalize human values and conceptions of knowledge and externalize them in practice is a proposition that demands particular consideration; otherwise, when artificial intelligence takes over the position of knowledge production and dissemination in the form of a silicon-based oligarchy, we may be left hoping only for the launch of the "off-switch game" or the "big red button."

(note omitted)

Chen Changfeng is a professor and doctoral supervisor at the School of Journalism and Communication, Tsinghua University, and Huang Yangkun is a 2022 doctoral student at the School of Journalism and Communication, Tsinghua University.

Citation Format Reference:

GB/T 7714-2015 Chen Changfeng, Huang Yangkun. ChatGPT's knowledge function and the human knowledge crisis[J]. Modern Publishing, 2023(6): 10-18.

CY/T 121-2015 Chen Changfeng and Huang Yangkun, "ChatGPT's Knowledge Function and Human Knowledge Crisis", Modern Publishing, No. 6, 2023, pp. 10-18.

MLA Chen Changfeng, and Huang Yangkun. "ChatGPT's Knowledge Function and the Knowledge Crisis of Humanity." Modern Publishing, no. 6, 2023, pp. 10-18.

APA Chen Changfeng, & Huang Yangkun. (2023). ChatGPT's knowledge function and the knowledge crisis of humanity. Modern Publishing, (6), 10-18.