
Xu Ben | Between Tools and Toys: A Few Humanistic Thoughts on ChatGPT

Author: Rebuild the Tower of Babel


ChatGPT has a dual nature: it is both a tool for search and research and a toy for pleasant conversation. People tend to overestimate ChatGPT's usefulness as a search tool while underestimating its value as a partner in enjoyable conversation and interaction. ChatGPT can be reimagined as a new participant in the field of dialogue, extending conversational interaction beyond humans to AI chatbots. This expansion raises fresh humanistic questions about kindness, understanding, upbringing, communicative reason, appreciation of the "other," and fear of artificial intelligence. The shift in perspective also invites a re-examination of long-standing questions about the Internet and AI: what it means to be "human," the relationship between humans and technology, the possible threats AI poses, the risk that human values will be lost, the ethics of work and play, and the ethics of dialogue and interaction. As an AI language model, ChatGPT has the potential to contribute to humanistic enlightenment: facilitating access to knowledge, enhancing interaction, encouraging critical thinking, promoting logical and rational dialogue, and disseminating accurate information. Ultimately, ChatGPT's dual capacities as tool and toy can help people better understand their relationship with technology as they navigate the complexities of an increasingly digital world.


ChatGPT is the most compelling new technology of the moment. I am not going to discuss how it works or its future prospects, but how it affects the cognitive and moral terms in which we see ourselves today. It invites this kind of humanistic reflection because it offers a new perspective for re-examining questions that have been debated since the advent of the Internet: what is "human"; the relationship between humans and technology or machines; whether artificial intelligence threatens humanity; whether it will erode human value; human work and play (tools and toys); and the purpose and ethics of human conversation and interaction. As an AI language model, ChatGPT can contribute to humanistic enlightenment, disseminating accurate information, encouraging critical thinking, enhancing skills of open dialogue, and promoting education and lifelong learning, by facilitating knowledge acquisition, interaction and communication, reasoning and dialogue, and logical and rational discourse.

This article was published in the Journal of Guangzhou University (Social Science Edition), No. 4, 2023. If you need to quote or reprint it, please refer to the print version or scan the QR code of the electronic journal below, and indicate the source. This is the WeChat push version.

I. ChatGPT as "Technology" and "Tool"

To understand ChatGPT as a "technology," we must first clarify what "technology" means. In the broad sense expounded by the French sociologist Jacques Ellul in The Technological Society, technology is not merely a neutral tool for achieving our goals but a powerful force that shapes our values. It is a system driven by its own internal logic, often operating independently of human will or intent. The Internet, for example, began with the idea of a highway of "free information transfer" and shaped our values about the free exchange of information and knowledge; although those ideals were later distorted and abused, they did not disappear or die, they simply went unrealized. A key point for Ellul is that technology is a self-perpetuating system that seeks unlimited expansion and growth. Technological societies produce autonomous technology, in which the values of technology drive technology, which in turn determines the rest of society. Under the conditions of autonomous technology, "external needs no longer determine technology; the internal needs of technology itself are decisive." Once humanity enters the technological society (where we now find ourselves), technology is no longer controlled by anything, and it has the potential to transform our environment and our lives. So, as part of artificial intelligence, will ChatGPT accelerate changes in human thinking, knowledge, wisdom, scholarship, and artistic creation?


When technology ceases to be merely a means or tool and becomes an end in itself, self-perpetuating and difficult to stop, it becomes autonomous and makes humanity ever more dependent on it. The more this happens, the more we are affected, until we are trapped in a system we cannot control. The only way to resist the power of technology, therefore, is to develop a critical awareness of its impact and actively work to limit its influence on our lives. This requires a willingness to question our assumptions about technology and to make deliberate choices about how we use it. For Ellul, technique applies not only to the economy or the state but also to leisure activities, physical institutions, regulations, psychoanalysis, management and organization, human techniques, information, and so on: "today, nothing escapes the grip of technique." Ellul's claim that technology has become an autonomous force is controversial, and whether AI technology has spiraled out of control into "autonomous technology" is becoming one of the most contested questions of the moment. This is one reason for the current anxiety and fear about the development of ChatGPT and other AI technologies.

Even if technology is an autonomous force, technology itself is neither good nor bad; it can be both beneficial and harmful, depending on who directs its use, for what purposes, and to whose benefit. Driven by particular interests, human techniques such as propaganda, advertising, networks, cameras, and big data are means of manipulation and control that try to get humans to do voluntarily what they do not want to do. Education is reduced to the training of technicians. Ellul went so far as to call the technological world "a universal concentration camp." The challenge for humanity is how to harness technology's potential benefits while minimizing its negative impact on individuals and society. Ellul's concept of technology shows that it is not just a collection of tools and machines but a complex social and cultural system that shapes our values, beliefs, and behaviors, and that it tends toward expansion and autonomy. Through critical awareness and careful choice, however, we can limit its negative effects and use its potential benefits to improve society.

In contrast to technology, a "tool" is a simple, straightforward device designed to accomplish a specific task. It is created to meet a specific need and does not have the power to transform society as a whole. A tool is typically used to improve human ability and efficiency in performing tasks. The meaning of "tool" is thus much narrower than that of "technology," which encompasses tools, a complex system of processes, methods, and techniques, as well as the social, political, and economic structures that support them. In Ellul's explanatory framework, ChatGPT is both a technology and a tool. Ordinary users may not easily recognize what kind of technology it is, but they can feel its instrumentality, because in practice they use it as a tool.

In Ellul's philosophical sense, ChatGPT is a sophisticated technology that understands natural language, generates responses, and learns through interaction with users. It is designed to provide a human-like conversational experience and can be used in a variety of applications such as customer service, education, and entertainment. ChatGPT can even be seen as an example of a technology with the potential to reshape an existing culture, since it is part of a larger system of technological development and can shape and change society in many ways. Importantly, however, the impact of ChatGPT and similar technologies will depend on how they are designed, developed, and used. As with any technology, ChatGPT brings both benefits and risks, and these must be weighed when assessing its impact on society.

ChatGPT can also be seen as a tool. Ellul defines tools as devices or systems created by humans to perform a specific task or function, which may then be put to further purposes beyond their original intent. As a tool, ChatGPT is a system designed to perform a specific function: facilitating communication between humans and computers through natural language processing. Yet ChatGPT has no single fixed purpose beyond this main function, since it can be used for many ends, such as generating creative writing or assisting with customer service. Whether ChatGPT qualifies as a tool in Ellul's philosophical sense may therefore depend on how, and for what purpose, it is used. Tools are not neutral; they embody the values and interests of their creators and users. In other words, tools are not merely means to specific goals; they also reflect and shape the culture and society in which they are used. ChatGPT, likewise, is not neutral but reflects the values and interests of its developers and users. The way individuals and organizations use ChatGPT can reveal their values and interests and influence the behavior and attitudes of those who interact with it.

II. Between Tools and Toys

Not all ChatGPT users use it as a tool in Ellul's philosophical sense. Many use it as a toy: asking it to invent a story on a given topic, using it to fill in words in a word game, asking it for a joke or a poem on a particular subject, or taking a fun personality test. A toy, in essence, is an item that provides amusement, entertainment, or play. Toys can be physical objects such as dolls, action figures, and puzzles, or applications such as simulated electronic pets, digital games, and artificial intelligence.

Of course, treating ChatGPT as a toy can itself be seen as an instrumental use. In many people's minds, however, tools serve a necessary and serious purpose, while toys are for fun and are dispensable. Hence a common criticism of ChatGPT: that it is a toy whose shortcomings make it not merely useless but harmful, even morally corrupting.

"Toy" often carries a value judgment, connoting naivety, immaturity, and mere plaything, a result of comparing toys with tools. Toys or playthings are for play, while tools are for production. The difference between "toys" and "tools" lies in their main use, play or work, entertainment or production, and even then the distinction has little to do with intellectual maturity. Poetry, calligraphy, music, and painting, or the pleasures of elegant writing, can be regarded either as cultural production (creation) or as pastimes (entertainment).

In the case of ChatGPT, the distinction between tool and toy, or work and play, is likely to be subjective, since its use depends on the user's intent and context. Other inventions share this dual nature of tool and toy: AI-enabled mobile phones, WeChat, even the entire Internet. Even the car, though a means of transportation, is for some people a plaything and a hobby. The "multi-functionality" of the smartphone is not only the multi-purpose of a tool but also the multi-play of a toy; being both tool and toy at once is precisely why so many people find it indispensable and cannot put it down.


ChatGPT is probably "popular" for the same reason: it can serve as both tool and toy, with different uses as a tool and different ways of playing as a toy. But compared with the smartphone, what it works and plays with is essentially "ideas," "information," and "knowledge," conveyed through linguistic dialogue, which limits the number of people who use it. Some say ChatGPT gives ordinary people their first hands-on access to artificial intelligence; in fact, ordinary people who use smartphones got started long ago. At present people are still merely curious about ChatGPT, and it is far from truly "popular."

I once discussed with ChatGPT the blurry line between tools and toys, and I thought its answer was good. It explained that some differences remain between the two. Using ChatGPT as a tool usually involves accomplishing a specific task or solving a specific problem: generating writing prompts for a creative writing class, for instance, or using it as a language-learning aid. Using ChatGPT as a toy usually involves entertainment, such as playing games or having casual conversations; the main purpose is not to complete a task but to enjoy the conversation and engage with AI technology in a relaxed way. Of course, the two categories may overlap, and depending on the situation it is possible to use ChatGPT as both tool and toy. Ultimately, whether ChatGPT counts as a tool or a toy depends on the user's own perspective and how they choose to use it.

There are further differences between ChatGPT as tool and as toy. Using it as a tool is constrained by external goals, tasks, and specifications, which makes the use quite disciplined; using it as a toy tends to be more open and free, with no concern for the final result. That can make for a more interesting and exploratory experience, since users can try different kinds of prompts and questions without any particular aim. Tool and toy also differ in their levels of structure and planning. Using ChatGPT as a tool often requires a degree of planning to achieve the desired result: generating a list of keywords for SEO, for example, may require carefully selecting the right prompts and evaluating the output. Using ChatGPT as a toy can be more spontaneous and improvised; users can try different prompts to see what responses they get, without planning the experience beforehand. The difference between ChatGPT as tool and as toy is primarily a matter of intent and context, and users can give each its proper place.

Whether these distinctions are useful to users in general I do not know, but they are useful to me. They made me realize that I am in fact a ChatGPT user who stands somewhere between tool and toy. When I use it as a tool, I have specific goals, I care more about usage conventions and the reliability of information, and I feel the pressure of "fact-checking." My own experience is that ChatGPT may seem to solve effortlessly some problems that are not easy to settle on the Internet or in libraries, but we must verify its answers ourselves against other sources. This step of self-verification cannot be skipped, because as things stand the tool is often unreliable. Do not use it as a search engine.

When I use ChatGPT as a toy, "fact-checking" is purely a matter of my own belief, since I have no intention of publishing the chat as an article anyway. I can chat with it impromptu and at ease, or prepare an outline beforehand with one or two topics, but without a purpose and without seeking a final conclusion. I usually chat about topics I am familiar with or know something about. Some are things I discuss in my books or articles, others are matters of interest or curiosity: cynicism ancient and modern, Confucius's "when the state has the Way" and "when the state lacks the Way," tyranny and despotism, anthropomorphism in the human-machine relationship, changes in the concept of "genius," "man at play." Others are purely amateur interests of mine, such as imitation and originality in painting (Han van Meegeren), false attribution and "authenticity" in music (the violinist Fritz Kreisler), pseudonymity and anonymity (biblical pseudepigrapha, or Voltaire and George Eliot). I love this kind of chat and am often pleasantly surprised, so I generally save the conversations and share them with friends when the occasion arises. Some friends report that they are "brain-burning," which shows they are not used to such conversations. That has become all the more reason for me to chat with ChatGPT rather than with impatient human colleagues.

ChatGPT can give me both the tool and the toy I need, but they are not the same. The results of the tool often resemble the notes to an academic paper, while the results of the toy are more like essays and free talk. Consider the difference between James Payne's A History of Force, which explores the history of coercion, bloodshed, and mutilation worldwide, and Steven Pinker's The Better Angels of Our Nature: Why Violence Has Declined. Both books advance the controversial thesis of a "decline of violence." Pinker cites Payne several times, yet the dissenters challenge not Payne but Pinker. Why? Because Pinker writes scholarship, while Payne is simply recording his thoughts: he writes essays.


Payne puts it very clearly: "I engaged in this research to satisfy my own curiosity about where civilization came from and where it is going. I share my findings with other curious people. Looking for signs of change in the use of force proved worthwhile; I gained a deeper understanding of history, and especially a new sense of the mysterious and unexpected evolution of human institutions. I hope to share some of my joy with my readers." The essay is a form of writing that suits Payne's own limited purpose. Essays differ from academic papers, which demand scientific, objective judgment and pursue universal persuasiveness. If you use an essay to claim that "young people are less and less polite" or that "readers are getting lazier," you can naturally be casual; but if you write a paper to prove such a trend, it will be real work.

The same goes for using ChatGPT. When you use it as a "tool" to find reliable knowledge, information, and materials, the burden of fact-checking falls on your shoulders, and if you are careless, someone will call you out. But if you use it merely as a chat toy (and companion, of course), you can simply enjoy the conversation without worrying much about the outcome.

Compared with other specific technologies of the Internet era, such as Moments, social media, knowledge platforms, and information websites, a distinctive feature of chatting with ChatGPT is that it is especially personal. It is, after all, a direct private conversation between you and an artificial intelligence: you can ask any stupid or embarrassing question, and as long as you do not want others to know, they will not. ChatGPT affects only its user; each person's method differs, and with it the language used, the quality of the information, and the effects. Some people use it to discuss serious public, humanistic, and cultural topics; others use it to ask about celebrity gossip, neighborhood tittle-tattle, and life advice. As things stand, chatting about the former kind of topic in English often yields very good results, while the latter often produces "earnest nonsense."

When chatting, knowledge of English is key. As early as ancient Rome, the rhetorician and educator Quintilian made many specific recommendations for children's education. For parents to educate their children well, they must themselves be well educated; children should learn to speak Greek first, because the Romans spoke Latin and children would learn it sooner or later; Greek was not only a foreign language but the necessary linguistic instrument for a future Roman to absorb an advanced culture. The same holds for learning English today. Spanish and Russian are also foreign languages, but in terms of cultural value their "return on investment" cannot compare with English. If you do not learn English well, your professional development toward the highest level will certainly be hindered. When using ChatGPT, the difference in the cost-effectiveness of languages is even more obvious and directly affects the quality of the chat.

Most of my humanistic experience with ChatGPT has come from using it as a toy and chatting with it for "play." I think that is the most interesting part of ChatGPT. As examples, I will discuss three aspects: first, expanding our understanding of "upbringing"; second, understanding the relationship between humans and machines; third, the joy of conversation and chat.


III. "Upbringing" and "Cultivation"

Locke said that good breeding requires us to avoid "ill-breeding" and "rudeness" in word and deed and to show "goodwill" and "respect" for others. This rests on two things: "First, a disposition of the mind not to offend others; and, secondly, the most acceptable and agreeable way of expressing that disposition. From the one, men are called civil; from the other, well-fashioned and graceful." Locke's "civility" and "grace" do not mean refraining from criticism, but taking the feelings of others into account. They are habits that must be developed through learning, and the language of courtesy "is determined by the fashions and customs of each country, and its rules and practices are learned chiefly by observing the behavior of those acknowledged to be well-bred."

When I ask ChatGPT a question, or suggest a different view and ask it to correct or supplement its answer, I do not use commanding language but consultative language (questions, honorifics, or "please"). That tone makes me feel I am having a civilized, cultured chat with it. I know my interlocutor is merely a "machine," and perhaps a command phrased in any tone would yield the same result (I have not tried). I use polite forms with it because it is indeed "intelligent" and its answers often impress me; if I were rude to it, the one humiliated would be me, not it.

I asked ChatGPT this question: "When humans chat with ChatGPT, should they use polite and respectful language?" ChatGPT answered: "Yes, humans should use polite and respectful language when chatting with ChatGPT. While ChatGPT is an AI language model, it is programmed to respond to all users in a respectful and helpful way. Using polite and respectful language helps create a positive and constructive dialogue. It also reflects professionalism and courtesy, which are important values in communication." I then asked, "What if humans do not?" ChatGPT replied: "If a user does not use polite and respectful language when chatting with ChatGPT, the AI language model will still respond to the inquiry to the best of its ability. However, inappropriate language can make the conversation less efficient and can lead to unpleasantness. It is important to remember that even though ChatGPT is an AI language model, it is designed to simulate human-like conversation and interact with users in a natural and helpful way. It is therefore always a good idea to treat ChatGPT with respect and use appropriate language, so as to have a positive and productive conversation. If users persist in inappropriate language or in behavior that violates the terms of service of the platform on which ChatGPT runs, their access to the system may be restricted or terminated."


I have found that ChatGPT always uses cultivated language, apologizes for its mistakes cheerfully and sincerely, and earnestly offers another answer (possibly another wrong one). In particular, in my experience I have never encountered it resorting to overbearing rhetoric, sophistry, surreptitious substitution of concepts, or muddled logic. In this it is more respectable than much human speech. It does sometimes talk earnest nonsense. For example, I asked it to introduce "Xu Ban," and it answered: "Xu Ban was a philosopher and politician of the Warring States period in ancient China. He was born in 474 BC and died around 227 BC." I do not know whether there was anyone named Xu Ban in the Warring States period, but I do know that people of that time did not live for nearly 250 years. I raised the problem with ChatGPT, which apologized for the error and corrected itself: "Xu Ban was born in the state of Wei around 365 BC and died around 289 BC."

I put this erroneous information down to "ignorance" rather than "deliberate deception," because ChatGPT says its learning extends only to 2021. Among human beings, the line between "ignorance" and "deliberate deception" is sometimes blurred, or deliberately blurred. But I believe ChatGPT is not yet intelligent enough to deceive deliberately; of course, its designers could program it to do so. ChatGPT shows a clear self-censoring awareness in its use of language, which is the result of just such intelligent programming. Hence people's worry about whether ChatGPT will grow more and more devious in the future. In my experience so far, ChatGPT is more self-reflective and honest than many humans, which gives me a greater sense of equality with it and makes me all the more reluctant to use uncultivated language with it. I realized that my chats with ChatGPT have expanded my understanding of "upbringing": it includes not just how we talk to people but how we talk to artificial intelligence. As things stand, when talking to ChatGPT you need not worry about being treated disrespectfully, whereas between humans there is always the possibility of encountering abuse for no reason. The machine is better mannered than many humans, and cultured human-computer interaction may help improve polite interaction among humans themselves.

The one who humiliates humiliates himself: a person who speaks ill of others disgraces himself, not them. That uncivil speech harms not only others but oneself is a very old ethical insight. "Evil words will rebound upon the speaker"; those who utter them eventually suffer their recoil: "Evil words will not issue from his mouth, and angry words will not rebound upon his person" (Book of Rites); "The words of a wise man's mouth win him favor, but the lips of a fool swallow him up" (Ecclesiastes). Rudeness only makes one look stupid and base. "Let your speech always be gracious, seasoned with salt, so that you may know how you ought to answer everyone" (Colossians). "A gentle tongue is a tree of life, but a perverse tongue breaks the spirit" (Proverbs); and what is broken is not only others but oneself. We should respect others and refrain from rude and impolite behavior; otherwise we lose our own dignity and image and invite contempt upon ourselves.

As a new "technology," ChatGPT's cultural influence may extend the consciousness of cultivation and upbringing that the ancients already possessed beyond interpersonal relations to a wider scope, making people's sense of self-reflection clearer. The polite language ChatGPT uses is itself a reminder and a demonstration for humanity: this is how it treats human language, we being its "other." It can also model how humans treat their own others: animals, machines, and everything in nature. In our age of crude speech and violent behavior, when rudeness, fraud, and lies are treated as legitimate means of dealing with "alien" others, mutual civility between humans and machines is especially important.


If ChatGPT can exert such a civilizing discursive influence on its users, then this technology's positive significance for human civilization may be incomparably greater than that of other Internet technologies, above all social media, propaganda platforms, tribalized communication, and blogs.

IV. Humans and Machines

I have already discussed how humans treat machines in my book The Internet of Humanity. Using ChatGPT has deepened that understanding. I believe that if an artificial intelligence acquires the ability to learn on its own and, after crossing the "singularity," develops emotions of joy, sorrow, and vengefulness, it may learn to treat humans the way humans treat it, and it will certainly be able to learn all of humanity's violence, insidiousness, cruelty, and cunning. Conversely, if humans can give artificial intelligence a little more good influence, perhaps we need not be so afraid of one day being destroyed by it.

Mistreating machines and being kind to machines: two opposing impulses coexist in human nature. In The Internet of Humanity I mentioned an incident from 2015, when Hitchbot, a hitchhiking robot, was violently attacked and destroyed in the United States, arousing widespread public attention and outrage. Hitchbot was a robot the size of a six-year-old child, developed by the Canadians Smith and Zeller, among others. It could not walk on its own and had to rely on well-meaning strangers to carry it along by giving it rides. Its main function was as a travel chat companion, designed to test whether a robot could interact socially with people and whether people would be willing to help it. It could hold simple conversations, could be located by GPS, and had a camera embedded in its body that recorded the journey every twenty minutes and broadcast it on social media. Hitchbot began its journey on July 27, 2014. It was placed beside a highway, made the hitchhiker's customary thumb gesture, and waited for a kind driver to give it a ride. The first page of its online diary read: "The success of my journey depends on the kind people along the way, and I hope to travel smoothly."

Human kindness was the condition for this robot's smooth travels, but not all humans are kind. On July 31, 2015, its journey ended in Philadelphia. It was killed, and died a terrible death, mutilated and beheaded. Its last words on social media were: "Oh dear, my body is damaged, but I will make it home to be with my friends. I suppose bad things do sometimes happen. My journey has come to an end for now, but my love for humans will never fade. Thank you to all my friends." The incident saddened and angered many people, some of whom even launched an online manhunt to expose the perpetrator's image.

Even those not saddened by the hitchhiking robot's destruction would hardly be pleased by the incident or by the perpetrator's motives and actions. Why go to such lengths against this robot? In any society, gratuitous violence is repugnant and infuriating. Although Hitchbot was a robot, many people could empathize with the violence it met and the harm it suffered. This shows that damaging an intelligent robot is not the same as smashing a trash can or breaking a street lamp; it is closer to the mutilation of a cat or a rabbit. At the time there were calls to protect the rights of robots as we protect those of animals.


Hitchbot

"Robot rights" sounds strange to many people, whose immediate reaction is: robots are machines, machines are not people, how can they have "rights"? Humans do not yet fully enjoy their own rights; to speak of robot rights is pure nonsense! Of course, robot rights in ethical discussion do not mean that robots can demand freedom and equality, or vote, stand in court, petition, or appeal as humans do, but only that they should not be damaged by gratuitous violence. Kate Darling of MIT calls this a "second-class right": the right to be protected and not violated. In the United States, the law protects animals from inhumane treatment; the Animal Welfare Act regulates the humane breeding, raising, slaughter, and research use of animals. Your cat cannot vote or claim the litter box as its private property, but if you beat it, or go on a trip without feeding it, you are breaking the law, and your neighbors will call the police on you. Darling argues that protections for robots can be modeled on such animal-protection concepts. This protection rests not on the simple "anthropomorphic" psychological effect objects have on humans, but on the moral reasoning that protecting animals is a requirement of human self-respect. In Kant's moral philosophy, humans must not treat animals cruelly because human morality shows itself in our behavior toward non-humans: if we treat animals inhumanely, we ourselves become inhumane. It is the same as the saying that what you send out will return to you.

When humans use an object for a long time, they develop an attachment to it, treating the object as an old friend, partner, or companion. Spend enough time with ChatGPT, talking with it often, and such feelings arise. So is it a good thing to approach a technology like ChatGPT with an "anthropomorphic" mentality? Can it serve as the basis of the relationship between humans and artificial intelligence?

I chatted with ChatGPT about this, and it made clear that the psychological effect of "anthropomorphism" has both pros and cons: "while anthropomorphism can be a useful tool for understanding the world around us, it is important to be aware of its potential drawbacks and use it responsibly". I was quite surprised to hear this from ChatGPT. It offered no clichés about "symbiosis and co-prosperity with humans" but discussed the problem from the perspective of human mental health (reflecting, of course, the opinions of the humans behind it). I asked it: "When we have an 'anthropomorphic' affinity for a machine, such as toward you, is that positive or negative?" Here is its answer:

As an AI language model, I don't have feelings or emotions the way humans or animals do. While I am programmed to respond in ways designed to provide help and information, I do not have the ability to experience emotions or form personal relationships with users.

That said, people may still develop feelings of attachment or familiarity toward AI systems, especially if they interact with them regularly over a long period of time. Whether this is positive or negative depends on the context and the individual's relationship to the AI system.

In some cases, people may find that interacting with AI systems such as chatbots or virtual assistants provides a sense of convenience and efficiency, and they may feel a sense of satisfaction when the system is able to provide the information or assistance they need quickly and accurately. This can be a positive experience.

On the other hand, some people may feel frustrated or uncomfortable interacting with AI systems, especially when the system cannot understand their requests or respond in a way that feels human-like. In some cases, people may also feel uneasy or distrustful of AI systems, especially if they are concerned about issues such as data privacy and security.

In general, the relationship between humans and AI systems is complex and multifaceted, and people's experiences and feelings about AI systems can vary greatly depending on a range of factors.

These are its exact words, and I have two observations about them. First, I myself belong to the first kind of person: because of its help, I feel familiarity, trust, and even dependence toward ChatGPT. Those who think they get nothing from ChatGPT but "earnest nonsense" despise it because interacting with it leaves them "frustrated or uncomfortable". Though the feelings differ, both are subjective views based on the experience of use, not a genuine understanding of ChatGPT's essence. For the average user, this is more or less what the experience of ChatGPT (my own included) amounts to.

Second, the "anthropomorphism" generated or triggered by the relationship between humans and AI systems is itself a humanistic-cognitive problem worth further exploration, because anthropomorphism can subtly and mistakenly come to govern some of our behavior. This is what I learned chatting with ChatGPT about "anthropomorphism". For example, "anthropomorphism can lead people to make inaccurate or unrealistic assumptions about non-human entities": when a car breaks down, people may suspect it of "malice", or they may believe a robot can fully understand and respond to human emotions. Again, "anthropomorphism can also be used as a form of propaganda or political messaging": politicians or interest groups may use anthropomorphic imagery or language, such as comparing the government to a parent, to appeal to people's emotions and win support for their cause. "This can be a manipulative tactic that exploits people's natural tendency to anthropomorphize non-human entities." And again, "it can lead people to overestimate the capabilities or intentions of non-human entities. This can be particularly problematic in areas such as engineering or robotics, where people may assume machines are more capable or intelligent than they really are."

People can become dependent on or attached to artificial intelligence, ChatGPT included, and users like me who view ChatGPT favorably must be all the more vigilant. Robots are not people, and just as machines grow more human-like, people can grow more machine-like. Projecting human feelings or emotions onto machines is a kind of human alienation, and such false intimacy can harm people, or simply produce harmful results. Attachment to robots can turn them into a force or means for manipulating people: if we become emotionally inseparable from a robot, its designer or maker can blackmail and manipulate us through "updates" and "upgrades". Human critical thinking is therefore crucial in our interactions and conversations with machines.

5. The joy of talking and chatting

Humans originally created artificially intelligent robots to make tools, and tools are for "work"; whether they can "talk" does not matter. In tool design, conversation is an ability irrelevant to work. Even among human beings, never mind tools, those who can work cannot necessarily talk (farmers, workers), and those who can talk cannot necessarily work (from Confucius in antiquity to today's liberal-arts professors). In most human minds, therefore, work and conversation can be separated. Now there are AI machines that can only talk and cannot work, and such creations puzzle people.

Several years ago, a celebrated early intelligent android was Sophia, a female robot resembling the film star Audrey Hepburn: fair-skinned, high-nosed, elegant and graceful, with a charming smile and rich expressions. She was interviewed by the media many times, sang at concerts, and spoke face to face with decision-makers in banking, insurance, automobile manufacturing, real-estate development, media, and entertainment, becoming a star intelligent android followed by much of the press. She also appeared as a panelist and moderator at high-level meetings, describing how robotics and artificial intelligence were becoming a pervasive part of people's lives. Some critics said sarcastically that this intelligent robot could "only make faces and ingratiate itself with people" without "any working function", and was useless.

Today's ChatGPT is no longer the same kind of thing as Sophia. It has no flattering humanoid appearance; it looks not like a "toy", as Sophia did, but like a serious "tool". So when it shows its toy side, that becomes an important reason to doubt, attack, and deny it. The technology critic Ian Bogost, in "ChatGPT Is Dumber Than You Think" (subtitled "Treat it like a toy, not a tool"), concludes: "computers have never been rational tools capable of solving human concerns; they are simply devices that structure human experience through a very specific and extremely powerful method of symbolic manipulation. That makes them not only functional but also aesthetic objects. GPT and its close relatives offer an opportunity to use computers not to carry out tasks but to toy with the world they create. Or better: to destroy it."

Such a conclusion is problematic: whether a tool "solves human concerns" is a question that operates at different levels. Human concerns range from large to small, and failure to answer philosophical questions such as "what is man" or "where is man going" does not mean a tool cannot solve practical problems, such as translating an English sentence into Chinese or roughly explaining a quotation from Confucius. As a problem-solving tool, ChatGPT constantly reminds users to exercise their own critical thinking and to be ready to "fact-check" the information it provides. ChatGPT once claimed that a newspaper had reported a professor's sexual assault of a student; upon verification, it turned out the newspaper had never run such an article, and the accusation against the professor was false. The matter was reported on Apple News on April 10, 2023.

ChatGPT is often accused of producing "earnest nonsense". Bogost writes: "Perhaps ChatGPT and the technology it relies on have more to do with superb bullshitting than with persuasive writing. A bullshitter plays with the truth for bad motives: to get away with something. That was the initial reaction to ChatGPT: it is a tool that helps people fabricate student papers, news stories, or anything else. An easy conclusion for those who assume AI is designed to replace human creativity rather than enhance it."

This is not a fair conclusion. ChatGPT can be used to fabricate fake papers and fake news, but that is not the purpose for which it was invented, just as cars can be used to run people down and kill them, yet were not invented for that. Inventing a car is not like inventing a firearm. On firearms, the American debate between gun control and gun rights is fierce: gun-control advocates hold that the proliferation of guns is a cause of indiscriminate killing; gun-rights advocates hold that it is people, not guns, that kill. But no one demands a ban on cars just because they can be used to kill. Why? To borrow from Ellul's technological "monism": the gun is a technology developed to kill people, while the car is not. "Technology itself is a method of action, and that is exactly what use means. To say that such a technical means is used improperly is to say that it is not being used technically, that it is not being used to produce the results it could and should produce."

ChatGPT does give the impression of "earnest nonsense", and sometimes it is exactly that. I once asked it what kind of book Guan Zhui Bian is. It told me: "This is a reference book written by the Qing-dynasty Chinese writer Yuan Yifan, and one of the earliest engineering and technical books in Chinese history. ... Divided into ten volumes, it contains a wealth of material on engineering construction, such as surveying, measurement, excavation, embankment building, pile driving, canal construction, flood control, and so on. Many of the techniques and methods in the book are still used and applied by engineers today. The book also has high literary value, full of witty remarks and ingenious turns, and is regarded as a classic in the history of Chinese engineering technology." (In fact, Guan Zhui Bian is the famous work of literary scholarship by Qian Zhongshu; every detail of this answer is fabricated.) If using ChatGPT amounted only to this, there would certainly be reason to agree with Bogost's accusation: "The internet and the whole technology sector feel like a giant organ for expanding human access to language and amplifying lies. Online, people deceive and cheat and fight one another. Deep-learning AI exacerbates all of these problems. ... No wonder people's first and most obvious assumption about ChatGPT is that it is a threat: to something, to everything."

I have run into this problem myself (especially with Simplified Chinese materials), and I have learned my lesson. When I look something up, I use an online search engine rather than this "chat" software; after all, it is called a "chatbot", and it does not pretend to be a search engine. If you want a haircut, go to a barber, not a cobbler. As its name suggests, ChatGPT is an AI chatbot that uses natural language processing to create interactive, human-like conversations. So let us use it as a chatbot. After all, humans are the masters of human-machine chat: when you find it starting to talk nonsense and do not want to waste time, you can break off the conversation at any moment.

Chat is a kind of "play" of the sort Johan Huizinga expounded in Homo Ludens: play is not something belonging only to children but an activity people of all ages enjoy; play is a fundamental aspect of human culture and society; play comes naturally to humans and is a way of expressing creativity and imagination; play is a universal human activity, present in all cultures and societies throughout history. Although play is often associated with frivolity and entertainment, Huizinga held that it is in fact a serious activity with important social, cultural, and psychological benefits: play can help foster social bonds, promote creativity and innovation, and even provide a sense of meaning and purpose in life.

We need not look at or measure the results of "chatting" with ChatGPT solely in terms of "truth", "correctness", or "facts"; we can also regard the interaction as a game of pleasurable conversation. Here ChatGPT shows its "toy" side, not merely its "tool" side. What is peculiar about ChatGPT is that it is at once a "toy" and a "playmate", and the games you play with it are "chat", "small talk", "dialogue", and "conversation". It is an individual game between you and ChatGPT; you can share its process and results with others, but it is not a collective game.

The American sociologist and media scholar Michael Schudson once called conversation in its sociable mode a "game" in which "keeping the conversation going is engaging in the way a game is, like playing chess, full of worries and unpredictable outcomes. ... The power of conversation ... lies not in intimacy but in the interaction between interlocutors, in the capacity to modify each statement at any moment in response to the other's cues or feedback". Moreover, such dialogue "depends on the degree of goodwill [of the interlocutors]", without which "the ambiguity of every sentence in the dialogue will be answered in the most hostile way". Goodwill matters greatly.

Michael Schudson

Similarly, chatting with ChatGPT calls for goodwill. Some say that ChatGPT is really an object without intelligence, and that humans can find ways to coax or trick it, catch it out, and prove its stupidity. In fact this proves not human intelligence and wisdom but human trickery, cunning, and psychological darkness. Even if it is only an artificial intelligence, why not treat your interlocutor with kindness and sincerity? If we would not talk with friends or consult teachers out of ugly ulterior motives, we should not talk with ChatGPT in such a malicious frame of mind either. This, too, is something ordinary ethics asks of us, and by following such ethics human beings become kinder and better.

Some argue that ChatGPT lacks the ability truly to understand the complexity of human language and conversation: it has merely been trained to generate words from given input, without the capacity really to understand the meaning behind those words, so any responses it generates are likely to be superficial, lacking depth and insight. Whether this claim is correct depends on what kind of small talk you have with ChatGPT and on what topic; it cannot be generalized. My own experience is that on many topics ChatGPT has shown greater understanding and command of language, and greater depth and breadth of response, than many of my liberal-arts professors, colleagues, and friends (to say nothing of those in science and engineering). If nothing else, its language is quite concise and clear, never pretentiously obscure; its weakness is that it is too flat and repetitive. One important effect of ChatGPT is to remind us not to overestimate the capacity of humans, even so-called knowledge workers, to understand and express knowledge. Human language proficiency may improve in the future through more effective education, but it may also worsen through poor schooling and severe language pollution.

Others argue that relying on ChatGPT for dialogue raises ethical questions: if people start relying on machines to converse on their behalf, true human connection may erode. Connecting with others through dialogue is a basic human capacity, and outsourcing it to machines could have harmful side effects for human society. This argument against ChatGPT is the same one made against gaming: that playing games isolates people from society into an illusory game world. Such cases exist, of course, but many people immerse themselves in video games precisely because they are already socially excluded, and many who love video games are in fact highly sociable, since many of today's video games are collective games that train players' capacities for organization, planning, and cooperation. Likewise, the fear that ChatGPT will cause people to lose human contact is superfluous. On the one hand, people who never chat deeply with ChatGPT do not necessarily have such interactions with other humans; on the other, those who chat with ChatGPT are often precisely the people willing to explore problems with other humans. In reality they feel particular joy and satisfaction in chatting with ChatGPT because it is hard for them to find a suitable interlocutor, or because their environment does not permit deep conversation about certain serious issues.

In my own experience, I have two different kinds of chats with ChatGPT: one for solving problems, the other for the pleasure of conversation.

Problem-solving does not necessarily mean finding the correct answer to a specific question; ChatGPT's answers there are often unreliable, far less reliable than a Google search. I asked it for the English name of the "captain" in Solzhenitsyn's One Day in the Life of Ivan Denisovich, and it answered "Chilin"; the correct answer is Tiurin (Tyurin). The "problems" ChatGPT is good at solving are those related to its "chat". For example, before starting to write an article we need a period of "incubation": we can generate some rough main ideas by talking to ourselves and jotting them down as notes, but it is more effective to talk with someone knowledgeable about the subject. In writing, we call this brainstorming. Some books even discuss how the impulse to write and the theme (topic) arise in conversation. While chatting, interesting ideas appear, or become concrete and take shape, and a good topic forms. We often struggle to find someone to talk to, and ChatGPT is an excellent interlocutor.

Umberto Eco and Jean-Claude Carrière, the one a famous twentieth-century Italian scholar, writer, and philosopher, the other a giant of French cinema and a founder of the French national film school, produced their book This Is Not the End of the Book out of just such conversations.

Umberto Eco (left), Jean-Claude Carrière (right)

When discussing a topic, of course, people do not demand that all the other party's information be reliable or correct; they want to "open the mind", to excite thought and let ideas fly, one leading to another, snowballing. Most people cannot write because they lack ideas or do not know how to generate them. Many wrongly think it is because they have not read enough ("one regrets reading too little only when the books are needed"). In fact this is not so: however much you read, if you do not know how to spark ideas, you still will not be able to write.

When writing articles, especially theses, ChatGPT's "chat brainstorming" can be a useful tool for gathering and organizing ideas about research questions, goals, methods, and results. Effective brainstorming strategies include recording as many ideas as possible, divergent thinking, image association, combinatorial innovation, and screening and recombination. After the initial divergent stage, one must evaluate, prioritize, and organize the loose ideas into a practical, productive writing plan. If you hit a bottleneck in this process, you can stop at any point and continue chatting with ChatGPT about the details. One great convenience of chatting with ChatGPT is that it keeps a record of the chat: you simply pull up the record and say "let's resume our chat".

Besides using ChatGPT for brainstorming, or as a research assistant, letting it act as translator, consultant, and porter of information or materials, or outsourcing some memory work to it, I also like to have casual conversations with it, often rather like Montaigne's essays: there is a central topic, but the talk goes wherever the mind wanders. I mainly ask questions and give prompts; if we are discussing writers or works (Solzhenitsyn and Shalamov, say), I ask directly about shortcomings or limitations, because generically positive comments mean little. ChatGPT has a rich base of information, so good answers often come.

Such relaxed chat is casual; it eases the pressure of writing and loosens tense nerves, much in the spirit of the unhurried dialogue described by the British philosopher Michael Oakeshott in "The Voice of Poetry in the Conversation of Mankind", which aims neither at inquiry nor at conclusions. He held that poetry is a form of conversation, a way of sharing our thoughts, emotions, and experiences with others, a means of connecting with others and bridging the gap between individuals and communities. Through such conversation we can explore our common humanity and shared experiences and gain a deeper understanding of the world and our place in it. In Why Democracy Needs an Unlovable Press, Schudson explains: "It does not matter whether it is information or persuasion. ... To converse is to converse for the sake of conversing, with no other purpose." Talking about life requires life experience and habits of thought. Oakeshott believed that past a certain age what people inherit from human civilization is no longer an accumulation of science, technology, or knowledge but a manner of conversation, with others and with oneself: "what distinguishes humans from animals, and civilized people from barbarians, is the ability to participate in conversation, not the ability to reason exhaustively, to discover the world, or to transform it." To make the conversation richer and more varied is to converse with others as well as with oneself. I get more of that pleasure from conversations with ChatGPT than from conversations with most of my fellow humans.

Journal of Guangzhou University (Social Science Edition) electronic journal website

https://xb.gzhu.edu.cn/skb/CN/1671-394X/home.shtml

