
Is AI conscious? Start with the definition of consciousness


Scientists who rarely explored human consciousness have begun to discuss "AI consciousness."

Author | Antonio

Edit | Chen Caixian

There is no doubt that human beings have consciousness. In a sense, this "consciousness" can even be seen as one of the defining features of human intelligence.

As artificial intelligence develops further, "whether AI can have consciousness" has gradually become a question on scientists' minds, and "consciousness" is increasingly treated as one criterion for judging whether AI is intelligent.

For example, in mid-February, Ilya Sutskever, OpenAI's chief scientist, set off a discussion about AI consciousness on Twitter. At the time, he said:

Today's large neural networks may already be slightly conscious.

His remark immediately drew responses from a number of prominent AI figures. Turing Award winner and Meta AI chief scientist Yann LeCun was the first to object, replying bluntly: "Nope." Judea Pearl also backed LeCun, saying that existing deep neural networks are not yet able to "deeply understand" certain domains.


After a few rounds of back-and-forth, Judea Pearl said:

...In fact, none of us has a formal definition of "consciousness." Perhaps the only thing we can do is consult the philosophers who have studied consciousness through the ages...

It is a question of first principles. If "AI consciousness" is to be discussed at all, then: what is "consciousness"? What does it mean to "have consciousness"? To answer these questions, knowledge of computer science alone is not enough.

In fact, the earliest discussions of "consciousness" date back to the "Axial Age" of ancient Greece. Since then, "consciousness," as the epistemological essence of human beings, has been an unavoidable issue for later generations of philosophers. After the debate over AI consciousness flared up, Amanda Askell, a former OpenAI research scientist, also offered an interesting take on the topic.


Amanda Askell, whose research interests are at the intersection of AI and philosophy

In her recent blog post, "My Mostly Boring Views About AI Consciousness," Askell focuses on "phenomenal consciousness" rather than "access consciousness."

Phenomenal consciousness emphasizes the subject's experiential process, centering on feelings, experience, and passive attention; access consciousness emphasizes the subject's active, deliberate attention. For example, if you write an assignment with relaxing music playing, you can feel the music in the background (phenomenal consciousness) without attending to its specific content, while the assignment is what you actively attend to (access consciousness): you really know what you are doing.

This is a bit like the two kinds of attention mechanisms commonly discussed in computer vision and cognitive science: phenomenal consciousness roughly corresponds to "bottom-up" attention, and access consciousness to "top-down" attention.


Note: At a glance, your overall impression of the page is "phenomenal consciousness"; attending to the details on it belongs to "access consciousness."
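To make the attention analogy concrete, here is a minimal, purely illustrative sketch (not from Askell's post): the same set of stimuli is weighted once by raw salience alone, loosely mirroring "bottom-up" attention, and once by relevance to a task query, loosely mirroring "top-down" attention. The feature matrix, salience measure, and query vector are all made-up toy values.

import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(5, 8))   # 5 stimuli, each with an 8-dimensional feature vector

# Bottom-up ("phenomenal"-like): weights driven purely by stimulus salience,
# approximated here by each stimulus's feature magnitude.
salience = np.linalg.norm(features, axis=1)
bottom_up = np.exp(salience) / np.exp(salience).sum()

# Top-down ("access"-like): weights driven by a task-specific query vector,
# i.e. what the agent is actively looking for.
query = rng.normal(size=8)
relevance = features @ query
top_down = np.exp(relevance) / np.exp(relevance).sum()

print("bottom-up weights:", np.round(bottom_up, 3))
print("top-down weights: ", np.round(top_down, 3))

The mapping is only an analogy: the same inputs receive different attention weights depending on whether a task-driven query is involved, which is the distinction the bottom-up/top-down comparison is meant to capture.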

Askell agrees that access consciousness is more closely tied to higher intelligence and can effectively distinguish humans from other animals, but she is "more interested in the difference between tigers and rocks than in the difference between humans and tigers," and phenomenal consciousness is enough to draw that distinction.

She also believes that wherever there is phenomenal consciousness, certain moral and ethical questions arise with it. This is why she considers the study of consciousness important.

1

Are current AI systems conscious?

Askell makes an interesting observation:

Today's AI systems are more likely to be phenomenally conscious than a chair, but far less likely than a mouse, and even less likely than insects, fish, or bivalves.

She roughly places AI systems somewhere in the region of plants: plant behavior appears to involve a kind of planning, and plants can do things that seem to require internal and external communication. AI systems appear to behave in broadly similar ways.

But she is also convinced that, as a group, AI systems have far more potential for consciousness in the future than plants or bivalves. In particular, future AI research drawing on more biologically inspired neural networks may produce architectures, behaviors, and cognition that are more closely related to consciousness.


Some studies have suggested that plants may also be conscious and intelligent, able to perceive pain and interact with their environment

So when weighing whether an AI is conscious, what kinds of evidence should we consider? Askell lists four types: architectural, behavioral, functional, and theoretical.

Architectural evidence refers to how similar a system's physical structure is to that of humans; the structure of the brain, for example, is far more conducive to consciousness than the structure of a finger.

Behavioral evidence concerns whether an entity exhibits behaviors associated with consciousness and cognition, such as being aware of its surroundings and responding to external stimuli, or more complex behaviors such as speech and reasoning.

Functional evidence considers an entity's goals and how those goals relate to its environment. A table or a chair, for example, is under no real environmental or evolutionary pressure, so there is no reason for it to develop the kind of awareness of its surroundings that a mouse has.

Theoretical evidence concerns the coherence and persuasiveness of the theory of consciousness being applied.

Philosophers of mind today tend toward one of two broad camps: inclusive views, such as panpsychism, which holds that even atoms can have some form of consciousness; and exclusive, mechanistic views, which deny that non-human entities have consciousness. Either way, the question of AI consciousness can be examined through the four kinds of evidence described above.

2

Does it matter if AI is conscious?

The vast majority of AI practitioners give little thought to consciousness; conscious AI seems to exist only in the imagined futures of science fiction films. Yet on questions of safety, ethics, bias, and fairness, the intersection of consciousness and AI is attracting growing attention in both academia and industry.

Askell argues that if an AI were phenomenally conscious, ethical questions would likely arise that have a great deal to do with its creators. In particular, when such an AI makes a mistake or is "abused," its creators should bear some of the responsibility.

Askell discusses two important concepts in ethics: the moral agent and the moral patient. A "moral agent" is an actor that can tell right from wrong and can bear the consequences of its actions, such as an adult human. A "moral patient" cannot tell right from wrong, cannot be held to moral constraints, and generally does not bear the consequences, such as an animal or a young infant.

Moral patients

Askell argues that once an entity has sentience, the capacity for experiences like pleasure and pain, it is very likely a moral patient. If an ordinary person finds that a moral patient (such as a cat) is suffering and makes no attempt to fulfill the moral obligation to alleviate that suffering, this is unreasonable. She further argues that phenomenal consciousness is a necessary condition for sentience, and therefore that phenomenal consciousness is a prerequisite for being a moral patient.

A possible debate concerns whether certain groups have moral status, or a higher moral status than others. Moral status is an ethical notion referring to whether a group's treatment can be assessed in moral terms; most living things have moral status, for example, while inanimate objects do not. Overemphasizing the status of one group seems to imply that it matters more and other groups matter less, a worry that has likewise been raised against arguments for granting greater moral status to animals, insects, fetuses, the environment, and so on.

Askell points out that helping one group need not come at the expense of others. A vegetarian diet, for example, benefits both animals and human health. "Groups don't usually compete for the same resources, and we can often use different resources to help two groups rather than forcing a trade-off between them. If we want to increase the resources devoted to global poverty alleviation, redirecting existing charitable donations is not the only option; we can also encourage more people to donate, and to donate more."

So even if future sentient AI systems become moral patients, that does not mean we would stop caring about the well-being of other human beings, nor that we would need to divert existing resources to help them.

Moral agents

Because moral agents understand right and wrong, they tend to act well and to avoid acting badly, and when they do something morally or legally impermissible they are punished accordingly.

Moral agency in its weakest sense requires only responding to positive and negative incentives: another entity can punish the agent for bad behavior or reward it for good behavior, and this will improve the agent's future behavior.

Notably, Askell observes that responding to incentives and feedback does not seem to require phenomenal consciousness. Current machine learning systems already fit this pattern in a sense: models are trained to reduce a loss function, and reinforcement learning uses explicit "rewards" and "punishments."


Figure Note: Reward feedback mechanism for reinforcement learning
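As a minimal sketch of this weak, incentive-only sense of agency (purely illustrative, not from Askell's post), the following toy two-armed bandit adjusts its future choices in response to rewards and punishments alone; the reward probabilities and the epsilon-greedy rule are assumptions made up for the example.

import random

# Hidden reward probabilities of two actions (made up for illustration).
true_reward_prob = {"A": 0.8, "B": 0.3}
value_estimate = {"A": 0.0, "B": 0.0}
counts = {"A": 0, "B": 0}
epsilon = 0.1

for step in range(1000):
    # Explore occasionally, otherwise exploit the current best estimate.
    if random.random() < epsilon:
        action = random.choice(["A", "B"])
    else:
        action = max(value_estimate, key=value_estimate.get)

    # Positive or negative feedback from the environment.
    reward = 1.0 if random.random() < true_reward_prob[action] else -1.0

    # Incremental update: feedback alone reshapes future behavior.
    counts[action] += 1
    value_estimate[action] += (reward - value_estimate[action]) / counts[action]

print(value_estimate)  # the agent ends up favoring the better-rewarded action

Nothing in this loop presupposes any inner experience; behavior improves purely through feedback, which is why responding to incentives by itself says little about consciousness.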

What about moral agency in a stronger sense? We usually think that an agent can be held morally responsible for its actions only if it is able to understand right and wrong and was not deceived into acting otherwise. For example, if a person persuades a friend to start a fire in a forest and the friend is caught, then no matter how much the friend protests that he was put up to it, it is the friend, the one who actually set the fire, who bears the moral responsibility, not the person who persuaded him. But if a man trains his dog to start fires, we place most of the moral responsibility on the trainer rather than on the pet.

Why do we hold the human arsonist morally responsible but not the trained dog? First, the human arsonist is able to consider his choices and to refuse to go along with his friend, while the dog lacks the capacity to reason about its choices. Second, the dog never understands that its actions are wrong and never displays a disposition to do wrong; it simply does what it was trained to do.

Suppose an advanced machine learning system became a moral agent in this stronger sense, that is, fully capable of understanding right and wrong, weighing its options, and acting of its own accord. Would it follow that if the system does something wrong, those who created it are exempt from moral responsibility?

Askell thinks not. To consider the question in more detail, she suggests that the creators can be asked the following questions:

What is the expected impact of creating a particular entity, such as an AI?

How much effort did the creators put into obtaining evidence about that impact?

To what extent can they control the behavior of the entity they created (whether by directly affecting its behavior or by indirectly influencing its intentions)?

How much effort, within their means, have they put into improving the entity's behavior?

Even if creators make every effort to ensure that ML systems behave well, the systems can still fail, and sometimes those failures stem from the creators' own mistakes or omissions. Askell argues that creating moral agents certainly complicates matters, because moral agents are harder to predict than automata (consider an autonomous car's judgments about road conditions). But that does not absolve creators of responsibility for the safety of the AI systems they build.

3

How important is research on AI consciousness?

At present, very little research within AI focuses specifically on consciousness (let alone other philosophical considerations), but some scholars are already pursuing cross-disciplinary work on the topic. For example, after the advent of GPT-3, Daily Nous, a blog devoted to philosophy, opened a section discussing what such language models mean for philosophy.

At the same time, Askell emphasizes that the discussion of AI consciousness should not stop at abstract philosophical speculation; it should also focus on developing practical frameworks, such as a set of efficient assessments of machine consciousness and sentience. Methods already exist for detecting pain in animals, and they may offer some inspiration here.

Conversely, the better we understand AI consciousness, the better we understand human beings themselves. So although the discussion of AI consciousness has yet to reach a consensus, the discussion itself is already a kind of progress. We look forward to more research on AI consciousness.

Reference Links:

https://askellio.substack.com/p/ai-consciousness?s=r

