
When Thought and Machine Merge: Brain-Computer Interfaces with Humanity's Present, Dilemma, and Future


Eye trackers capture a subject's visual attention; EEG devices record neural activity from the surface of the subject's scalp... The human body, as the medium of perception, is taking an ever more direct part in the transmission loop of human-computer interaction. When thought and machine are deeply integrated, communicating through brain signals alone may become a more convenient way to interact. But does technology's penetration into the human body steadily erode human subjectivity and bring ethical dilemmas with it?

Author | Su Chen

Editor | Chen Caixian

1

Exploration and Practice

Brain-Computer Interface (BCI) refers to the technology that establishes direct pathways between human brain nerves and external devices (such as computers, robots, etc.) to achieve information interaction and functional integration between the nervous system and external devices.
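To make that definition concrete, below is a minimal sketch of the kind of signal chain a simple non-invasive BCI implies: acquire a short window of EEG, extract a frequency-band feature, and map it to a device command. The sampling rate, band limits, threshold, and command names are illustrative assumptions for this sketch, not the parameters of any real product.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250          # assumed EEG sampling rate, in Hz
WINDOW_S = 2.0    # length of one decoding window, in seconds

def bandpass(signal, low_hz, high_hz, fs=FS, order=4):
    """Zero-phase band-pass filter for one EEG channel."""
    b, a = butter(order, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def band_power(signal, low_hz, high_hz):
    """Average power of the signal within a frequency band."""
    filtered = bandpass(signal, low_hz, high_hz)
    return float(np.mean(filtered ** 2))

def decode_window(eeg_window, threshold=1.0):
    """Map one window of single-channel EEG to a device command.

    Motor imagery tends to suppress mu-band (8-12 Hz) power over motor
    cortex, so a drop below the (illustrative) threshold is read as 'move'.
    """
    mu_power = band_power(eeg_window, 8.0, 12.0)
    return "MOVE_CURSOR" if mu_power < threshold else "IDLE"

if __name__ == "__main__":
    # Synthetic stand-in for a 2-second, single-channel EEG window.
    rng = np.random.default_rng(0)
    fake_eeg = rng.normal(scale=1.0, size=int(FS * WINDOW_S))
    print(decode_window(fake_eeg))
```

A working system would add artifact rejection, multi-channel spatial filtering, and a decoder trained per user, but the acquire-filter-decode-command loop is the common skeleton.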

The technology has been developing for nearly a century, and its scope spans neuroscience, cognitive science, neural engineering, artificial intelligence, and other disciplines. In terms of application and research focus, brain-computer interface technology is designed chiefly to help people with limb disabilities or spinal cord injuries regain mobility, and secondarily to assist various other physiological functions.

In recent years, beyond exploration in the medical field, brain-computer interfaces have also attracted technology companies, defense agencies, and venture capital.

In 2017, Tesla CEO Elon Musk founded Neuralink, a brain-computer interface company that aims to connect human brains and machines, using machine learning techniques to help patients with severe brain injuries. In 2021, the company released results from testing its brain-computer interface in a monkey's brain: a macaque named "Pager" played a simulated table-tennis video game using its thoughts alone.

From Musk's live demonstration of implanted pigs, to monkeys playing video games by "telepathy," to Zhejiang University's Second Affiliated Hospital using brain-computer interfaces to treat depression and enabling an elderly patient with high paraplegia to play mahjong through brain-computer technology, the technology keeps advancing: it decodes the intentions of animals, intervenes in human modes of thinking, and unifies the interaction between human and machine on the human body itself.

2

Pros and Cons: BCI's "Black and White"

War and injury, sickness and its torments have always inflicted physical suffering on humankind. The emergence of brain-computer interface technology has made recovery possible for some patients, and examples abound in media coverage.

However, the cost of using brain-computer interface technology cannot be ignored.

Early this century, researchers at Brown University's neuroscience laboratory implanted a second-generation BrainGate system in Cathy Hutchinson, a paralyzed patient who could not speak.

The system includes an array of tiny electrodes implanted in the motor cortex, a connector mounted on the top of the head, a signal amplifier the size of a shoebox, and software running on a computer that decodes the patient's neural signals. With the help of a robotic arm, Hutchinson managed to pick up a bottle and drink from it through a straw.

The success of this experiment showed that a concrete, neuroscience-based therapy could work, an astonishing result at the time.


Caption: Cathy Hutchinson

But Hutchinson's feat required her to take huge risks. Brain-computer interfaces currently come in three types: non-invasive, semi-invasive, and invasive, and this trial was of the last kind. The hole in her skull left her vulnerable to infection, and a square array of a hundred hair-thin metal electrode needles inevitably causes some tissue damage.

Implanting such a device in the brain is like hanging a picture on jelly. With every jolt, the electrodes can damage cells and connections, or drift and lose contact with the neurons they were recording. Cells that Hutchinson had spent months training to operate the robotic arm could die or drift out of reach.

Eventually, her body's defenses shut the experiment down: over time, scar tissue formed around the electrodes, isolating them from neighboring neurons and rendering them useless.

To avoid the downsides of brain-computer interface technology, some patients have chosen to forgo it, even though it could be expected to restore some of their physical functions.

James Raffetto is one example. He joined the Navy and, three years later, became a special-operations officer. He stepped on an improvised explosive device triggered by a balsa-wood pressure plate and lost both legs, his left arm, and three fingers of his right hand. After that, getting treatment and help became enormously difficult for him.

Yet James chose not to risk brain implants to rebuild his life.

He is wary of implantable medical devices, likening them to temperamental Bluetooth gadgets. As he put it bluntly, "It would be terrible to add those problems to my body." Instead, he praises his body's innate ability to adapt, for example by learning to use the bone spurs growing from his self-healing femur to keep his balance and stability.

3

The Quagmire: The Ship of Theseus Metaphor May Become Reality

In the 1st century AD, Plutarch posed the question: if the planks of the ship of Theseus were replaced one by one until none of the original wood remained, would the ship still be the same ship?

The ship of Theseus is a paradox about identity and change, and today brain-computer interface technology is prompting equally deep reflection on the relationship between technology and the person.

It must be acknowledged that brain-computer interfaces use invasive, tissue-damaging methods to improve the lives of people like James and Hutchinson, and such risk-taking technology can easily drag people into a moral quagmire.

Technology's intervention can restore a person's ability to act, but when an "agent" does the helping, what exactly happens between will and action?

Ludwig Wittgenstein asked in his Philosophical Investigations: "When 'I raise my arm,' my arm goes up. And the problem arises: what is left over if I subtract the fact that my arm goes up from the fact that I raise my arm?"

Brain experiments suggest that Wittgenstein was onto something: interrupt activity in one particular region and a subject moving their arm suddenly feels as if some alien entity is doing it for them; disrupt a different region and the person feels they are trying desperately to move their arm but cannot make it budge.

Scientists have only a rough, descriptive grasp of this sense of agency, far from a causal understanding of it. Knowing so little would seem to make the work of brain-computer interfaces impossible:

How does the system distinguish imagined behavior from intended behavior?

What neural signatures separate a sharp thought from a comment blurted out loud?

How can a machine be expected to supply the missing term in Wittgenstein's equation and turn a pattern of neural activity into a raised arm?

In practice, the brain is an extremely busy communications network, and the computer must learn to interpret its signals as best it can. It does so in much the same way that other machines autocomplete your emails and text messages: by processing large amounts of historical data and using it to predict future behavior.
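To illustrate that "learn from historical data" analogy, here is a minimal sketch of a linear decoder fitted by ridge regression on synthetic firing rates. It is not any lab's actual pipeline; real systems such as BrainGate typically use Kalman filters or neural networks, and the 96-unit count here simply echoes a common electrode-array size.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "historical data": firing rates of 96 recorded units and the
# 2-D hand velocity the patient was imagining at the same moment.
n_samples, n_units = 5000, 96
true_mapping = rng.normal(size=(n_units, 2))
firing_rates = rng.poisson(lam=5.0, size=(n_samples, n_units)).astype(float)
velocity = firing_rates @ true_mapping + rng.normal(scale=2.0, size=(n_samples, 2))

def fit_ridge_decoder(X, Y, alpha=10.0):
    """Closed-form ridge regression: weights mapping firing rates to velocity."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)

weights = fit_ridge_decoder(firing_rates, velocity)

# At run time, the same weights turn a new pattern of neural activity
# into a movement command for a cursor or robotic arm.
new_activity = rng.poisson(lam=5.0, size=(1, n_units)).astype(float)
predicted_velocity = new_activity @ weights
print("decoded velocity (vx, vy):", predicted_velocity.round(2))
```

The decoder has no notion of will or intention; it only maps past statistical regularities onto new activity, which is exactly why the questions above are so hard.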

And once the practical questions about brain-computer interfaces are answered, philosophical questions arise:

Suppose someone is strangled to death by a pair of robotic arms, and the main suspect claims that his brain-computer interface is to blame. Maybe his implant malfunctioned; maybe its algorithm made the wrong call, mistaking an intrusive thought for a deliberate intention, or letting anxiety trigger a reflex of self-defense.

If you do not know the neural signature of agency, only that will somehow turns into action, how do you prove his guilt or innocence?

If it turns out that his brain does want to kill people, does the machine have a responsibility to stop him?

None of these are hypothetical questions for some distant future. What makes them harder is that a brain interface becomes part of the body, which makes responsibility far more difficult to assign.

In addition, there are major privacy and security concerns with brain interfaces.

Because so many signals in the brain are accessible, a recording device can collect information about your sensory experiences, perceptual processes, conscious cognition, and emotional states. Ads could target not your clicks but your thoughts and feelings, and these signals could even be used for surveillance. A decade ago, members of Jack Gallant's lab at the University of California, Berkeley, were already able to roughly reconstruct the visual scenes people were watching from their brain activity.
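As a rough illustration of why recorded brain signals raise privacy concerns, here is a toy sketch of stimulus decoding on synthetic "voxel" data using a nearest-centroid rule. It is not the Gallant lab's method, which relied on encoding models of visual cortex and far richer data; the categories, voxel count, and noise levels below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for brain-imaging data: 200 voxels, 4 viewing
# categories, 50 recorded trials per category.
n_voxels, categories, trials = 200, ["face", "place", "text", "object"], 50
prototypes = {c: rng.normal(size=n_voxels) for c in categories}

def record_trial(category):
    """One noisy activity pattern evoked by a stimulus of the given category."""
    return prototypes[category] + rng.normal(scale=1.5, size=n_voxels)

# "Training" is just averaging the responses to each known stimulus class.
centroids = {c: np.mean([record_trial(c) for _ in range(trials)], axis=0)
             for c in categories}

def decode_stimulus(activity):
    """Guess what the person was looking at from a single activity pattern."""
    return min(centroids, key=lambda c: np.linalg.norm(activity - centroids[c]))

# A new, unlabeled pattern of activity is enough to infer the viewed content.
print(decode_stimulus(record_trial("place")))   # almost always prints 'place'
```

Even this crude classifier recovers what was being viewed; with real recordings and modern models, the inferences can reach far more intimate territory.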

As the technology matures, someone could one day tap into your wireless neural receiver, and what people think would itself become the object of surveillance. Through their own eyes and ears, a person could unwittingly become a node in a distributed panopticon.

4

The light of technology, or an ethical catastrophe?

In 1985, Donna Haraway published her famous "A Cyborg Manifesto." She defined the cyborg as "a combination of inorganic machine and living organism," such as bodies fitted with dentures, prostheses, or pacemakers. Such bodies blur the boundaries between human and animal, organism and machine, the material and the immaterial.

By comparison, brain-computer interface technology has now spread into healthcare, the military, entertainment, education, smart homes, and the Internet of Things (IoT), which means technology is integrating the human body, action, and thought.

The British writer George Orwell, in Nineteen Eighty-Four, completed in 1948, foresaw a society in which, under the ubiquitous "Big Brother," only "pure thought" would eventually remain. Whether it is big data's intrusion into user privacy in recent years, or brain-computer interfaces being used to monitor employee productivity and measure advertising effectiveness, the shared history of technology and humanity is becoming a relentless extension of our history of controlling materials, plants, and animals, and may one day extend to round-the-clock control of thought.

Admittedly, utopian hopes for technology and dystopian worries have always followed one upon the other. Here, Vincent Mosco's reminder is worth repeating:

It is when a technology becomes commonplace that its social influence peaks. Brain-computer interfaces can help people with impaired physical function, but that goodwill can just as easily mask a looming ethical catastrophe.

Therefore, before AI interacts directly with the human body and brain, we must step outside technology's spell and think it through. Otherwise, once humanity can be precisely controlled, we may give up the greatest freedom of thought we have enjoyed since the invention of language.

Reference Links:

1.https://www.wired.com/story/when-mind-melds-machine-whos-in-control-brain-computer-interface/
