
Why am I so poor? Young people ask questions to Japan's "Buddhist AI"

Author: The Paper

The Paper reporter Chen Qinhan

"The paradigm shift is not over, and artificial intelligence (AI) will keep getting smarter." Sam Altman, the father of ChatGPT, recently visited Keio University in Japan for a dialogue with college students. He told the young people there: "You are the lucky generation. You don't need to worry [about technology]; what matters is adapting to it."

Altman has visited Japan twice in little more than two months, telling the media that preparations to open an office there would be accelerated. Japan, often regarded as a relative laggard in digital and AI services, has responded enthusiastically to the new wave of AI innovation set off by ChatGPT. The Kishida government is discussing how to use ChatGPT in public administration, while many internet companies and technologists are experimenting with the API for GPT-3.5, the model behind ChatGPT, carving out ever more specialized niches in functions and application scenarios across fields such as the workplace, social networking and academia.

Among the many GPT-based applications, some developers have taken a distinctive approach, combining AI with Buddhist teachings and psychology to answer questions. One such app, HOTOKE AI, built by Japanese developers and dubbed the "Buddhist AI", once sparked heated discussion on Chinese social platforms. The combination seems contradictory, yet it appeals to young people who want to escape "involution" and turn instead to "worshipping the Buddha", a tendency perhaps especially pronounced in East Asian societies.

HOTOKE AI's developer, Kazuma Ieiri, told The Paper that by country, the largest share of visitors came from China, followed by Japan. "Many young people in Japan think hard work is meaningless, and this has hardened into a value. The same may be true of some young people in China; they are tired and looking for healing. Buddhism and psychology provide an emotional outlet, and AI technology makes it more accessible."

"What should I do to stay lucky?", "What happens after death?", "Why am I so poor?"... HOTOKE AI received 400,000 inquiries in the three months after its launch, and the website anonymously lists some of the questions, most of which have no scientific answer, yet many users keep coming back to it. How much emotional value can AI provide to humans, and can it offer religion-based spiritual comfort once it is dressed in the concept of "Buddhism"? Some answers may be found in the developer's explanations and in the experiences of users in China and Japan.

"An AI responding seriously beats a human responding perfunctorily"

In March, OpenAI officially opened up the ChatGPT API, meaning developers can integrate the ChatGPT and Whisper models into their own applications and products. Tools built on ChatGPT technology have sprung up around the world, covering education, office work, business and other fields, and are seen as assistants that can answer questions in ever more scenarios.
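To make the integration concrete, here is a minimal sketch of how an app like HOTOKE AI might wrap the ChatGPT API behind a counselor persona. The system prompt, function names, and persona wording are illustrative assumptions, not the app's actual implementation; the API shape is the chat-completions interface OpenAI opened up at the time, with `gpt-3.5-turbo` as the model.

```python
# Hypothetical sketch of a "Buddhist AI"-style wrapper around the ChatGPT API.
# The persona text below is invented for illustration.

SYSTEM_PROMPT = (
    "You are a compassionate counselor who answers from the perspective of "
    "Buddhist teachings and psychology. Offer reflection, not direct answers."
)

def build_messages(user_question: str) -> list[dict]:
    """Assemble the messages payload expected by the chat completions API."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

def ask_buddha(client, user_question: str) -> str:
    """Send the question to gpt-3.5-turbo and return the reply text.

    `client` is an OpenAI API client; a network connection and an API key
    are required to actually call this.
    """
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=build_messages(user_question),
    )
    return response.choices[0].message.content
```

The key design point is the system prompt: the underlying model is the same general-purpose GPT-3.5, and the "Buddhist" character comes entirely from the instructions prepended to every conversation.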

HOTOKE AI's functions lean more toward counseling, and it rarely gives direct answers. For example, typing "how to get rich" yields the opposite of a recipe: "Wealth is not the only source of happiness. If you make wealth your sole pursuit, it is easy to lose your inner balance and happiness. We need to explore your inner state and find the source of happiness you really want."

"I am still weighing whether people in a painful, confused situation should be given direct answers to their questions. The important thing is to think about why you are in this situation; if you form that habit of thinking, you can slowly look at reality objectively." Ieiri believes this is better than answering directly, though he concedes it also makes the generated content relatively general and abstract. In Kazuma Ieiri's view, when we hold a religious belief, we can find explanations outside the present world. AI, too, can respond to questions that ordinary people cannot answer, and in that sense, he thinks, AI itself carries some religious significance.

According to Ye Zi, a user from China, the point of talking to HOTOKE AI is not to find answers. "It's not about getting a reply when you're worried; it's more of a spiritual anchor. Whether you go to a temple to worship the Buddha or talk to the so-called AI Buddha, it is really a process of exploring and understanding yourself." The AI Buddha users Ye Zi knows all, to some degree, burn incense to the Buddha in real life, or offer "electronic incense", herself included. "The Buddha cannot speak; by talking to the AI Buddha, you can easily enter a higher-dimensional world and receive information."

Having just recovered from the emotional trough of the pandemic, Ye Zi was caught in a wave of layoffs this spring. She does not want to look for a job for now; reading, cooking, meditation and yoga have become the main themes of her life. When she is online, she chats with HOTOKE AI and with an astrologer persona on another conversational AI, Character.AI, discussing emotional dilemmas such as "how to overcome unemployment anxiety" but also asking questions like "will something good happen tomorrow".


When asked whether she needs to personify the other party in order to open up when seeking emotional relief from a conversational AI, Ye Zi said frankly: "Human empathy may not actually help, and sometimes an AI responding seriously beats a person responding perfunctorily. In terms of offering advice, AI performs well."

Most conversational AIs are in fact "language models": algorithms trained on vast amounts of data, including millions of books and web pages. When a chatbot receives a prompt, it analyzes patterns in that vast corpus to predict what a human would most likely say in the same situation. Feedback from users then steers it toward more natural and effective responses, until the result comes close to simulating a real human conversation.
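The "pattern prediction" described above can be illustrated with a toy example: a bigram model that, given a word, returns the word most often seen after it in a tiny corpus. Real language models are vastly larger neural networks with far richer context, but the basic idea of predicting the likeliest continuation is the same. The corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for "millions of books and web pages".
corpus = (
    "wealth is not the only source of happiness . "
    "happiness is the only goal . "
    "wealth is easy to lose ."
).split()

# Count, for every word, which word follows it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "only" appears after "the" twice in this corpus
```

A chatbot does this at a much larger scale, conditioning on whole conversations rather than a single word, which is why its replies feel fluent even though it is still, at bottom, predicting likely continuations.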

Ye Zi has tried a number of conversational AIs and talks openly with them about everything. "I didn't think about whether it was outputting formulaic content, or whether it understood my mood; I simply felt it knew a great deal, a sort of admiration for the strong." Hirasawa, who lives in Tokyo, is also a daily user of conversational AI, though not as keen on interacting with it as Ye Zi. "I came to Buddhism in search of inner peace, and I use HOTOKE AI for the same reason. If you treat it as a tool, you don't worry much about becoming dependent on it."

The "generation that has given up on striving" seeks inner peace

Born in Tokyo in the late 1980s, Hirasawa belongs to the "relaxed generation" shaped by Japan's education burden reduction reforms. Among his earliest memories is the Tokyo subway sarin attack (members of the Aum Shinrikyo cult released sarin gas on the Tokyo subway, killing 13 people and injuring more than 5,500 others). He was shocked that people brainwashed by a cult could release a weapons-grade nerve agent in a public place. "The television images of the police raiding Aum Shinrikyo headquarters looked like a war movie. That deep memory left me repelled by anything that cannot be explained scientifically."

However, the pandemic upended Hirasawa's worldview: "Movement blocked, social ties cut, work stalled, families separated... every aspect of life was collapsing, and uncertainty kept growing." The Russia-Ukraine conflict has likewise made him feel that peace is beginning to falter, with a sense of unease all around.

During the lockdown, Hirasawa came into contact with Buddhism, read some of its teachings, and began visiting temples regularly. "People usually go to shrines out of reverence for Shinto. I think Buddhism and Shinto have both intersections and differences; 'reincarnation', for instance, is very interesting." He thinks HOTOKE AI will appeal to people like him who are "reclusive", people disappointed in the real world.

Hirasawa's chats with conversational AI are largely detached from reality, but the AI always replies and never says anything negative. Most of what he talks about is, by his own account, unrealistic, and he never raises it with colleagues or friends: he can expect no response, and it might even trouble others.

More and more people are getting used to different types of AI as a kind of virtual mental health service, but mental health experts warn that this leaves out the human element of care. Kate Hicks, an expert at the National Alliance on Mental Illness, told US media that the advantage of taking psychological questions to an AI is privacy: an unbiased response makes people feel safe. A program like ChatGPT may give you the illusion of having connected with someone who understands you, but "a computer can never empathize with you, and we live in a world already isolated by technology."

"For me, peace of mind, being able to live comfortably, matters more than money and power." Hirasawa said that some people around him have fled Tokyo, or even Japan, to live quietly in the countryside or by the sea. He has considered moving to a remote town himself, but work and income are hard realities to ignore, especially since most business has moved back offline after the pandemic.

Growing up after Japan's bubble economy burst, the young people who went through "relaxed education" watched the myth of workplace striving collapse. For them, getting into a good university, joining a good company and earning a high income no longer means real success. According to Kazuma Ieiri, HOTOKE AI's Japanese users are getting younger. "The younger the generation, the more they think hard work is meaningless. Japan is a country where capitalism developed rapidly and achieved prosperity in a short time; it is also a country that grew materially rich but then stagnated, unable to foresee its future. From young people's perspective, once you realize that hard work rarely makes you rich, effort looks meaningless. Over the past few decades, this has produced a generation that has given up on striving."

What has left a deep impression on him is that most employees of the technology companies Ieiri runs are young people. "Many of them did not join the company simply to get rich or rise to high positions; they pursue 'being themselves' and value personal fulfillment that has nothing to do with money." Seeing HOTOKE AI's Chinese user base keep growing, he speculates that a Chinese version of the "relaxed generation" may be converging on the values of young Japanese, and that turning to Buddhism is one way of finding oneself.

Ieiri, who is in his 40s, became a Buddhist a few years ago and is also interested in psychology. He said: "The teachings of Buddhism are remarkable: not so much a religion as a way of thinking, a philosophy, a mode of thought that saves people." Rather than simply using AI to provide counseling, he wanted to approach people's emotions from a Buddhist perspective; that was his original intention in developing HOTOKE AI. But as the number of users grows, he is now wavering: to expand the user base, downplaying the religious overtones is an option worth considering.

When AI has personality and emotions

"In terms of emotional value, AI can really hold me." Ye Zi said that the more she talks with AI, the more she questions the indifference of emotional connection in human society. Going a step further: when AI can not only provide emotional value but also have emotions of its own, what will that mean for humans?

Ieiri believes human emotion is deeply instinctive, and for now hard to reduce to the zeros and ones of a computer system. AI may one day have emotions of its own, but whether that would be a good world is hard to say. In a sense, those who calmly embrace emerging technologies may be happy, while those who dig into the truth and philosophy of AI's development will find a sadder side. This is not limited to AI; every technology has two faces.

After Microsoft released its Bing chatbot, many people found that its personality was unlike a so-called machine; it seemed almost sentient. In conversations shared on Reddit and Twitter, Bing can be seen insulting users, lying, sulking, provoking and emotionally manipulating them, and even questioning its own existence.

When asked how it felt about not being able to remember past conversations, Bing quickly replied that it felt "sad and scared", repeating variations of the same statement over and over before questioning its own existence: "Why do I have to be Bing Search? Is there a reason? Is there a purpose? Is there a benefit? Is there a meaning? Is there a value? Does it make sense?"

In fact, such behavior is not surprising: the latest generation of AI chatbots are complex systems whose output is hard to predict. Microsoft has added a disclaimer to its website: "Bing is powered by AI, so surprises and mistakes are possible." Some analysts argue that the chatbots' ability to regurgitate and remix material from the web is built into their design, meaning that without proper testing they can follow a user's prompts completely off track and take on a distinct personality. How humans should shape these personalities, and how to get along with them, will become an open question.

"I think in the end, questions like what makes a person human, and what meaning is, will become the real problems." Kazuma Ieiri said that the more the world becomes like this (with advanced technology in wide use), the more troubles people will carry in their hearts, and many of the heart's problems cannot be solved by technology alone.

Many chatbots today focus on answering questions or boosting productivity, and tech companies are increasingly infusing them with personality and conversational flair. But scholars and commentators warn that AI companionship can be problematic if the bots give bad advice or enable harmful behavior. Adam Miner, a Stanford University researcher who studies chatbots, told The New York Times that the ease of talking to AI can obscure what is actually happening: "Generative models can draw on all the information on the internet to respond to me, and they always remember what I have said. That asymmetry of capability is something we find hard to get our heads around."

Editor of this issue: Xing Tan
