
Scientists explore the new problem of AI's "selective amnesia"

Author: Overseas Network

Source: Science and Technology Daily

Learning to forget is hard for artificial intelligence, too


Science and Technology Daily, Beijing, August 23 (Reporter Zhang Mengran) — Artificial intelligence "forgets" differently from humans, and that difference has become a major challenge for the field. According to a recent report on the Wired magazine website, researchers in machine learning, an emerging area of computer science, have begun exploring ways to induce "selective amnesia" in AI. The goal is to remove the sensitive data of specific people, or specific data points, from a machine learning model without degrading its performance. If this can be achieved, the concept would give people far more control over their own data.

Machine learning uses computers as a tool to simulate human learning: it organizes existing content into knowledge structures and is widely applied to complex problems in engineering and science. It is now regarded as one of the most distinctly "intelligent" fields of study, but scientists have raised a new question: machines can learn, but can they forget? Although their way of learning imitates ours, their way of "forgetting" is very different.

Machine "forgetting" is intuitive for the users who need it, namely those who regret what they have shared online. Users in some jurisdictions have the right to ask companies to delete their data if they change their mind about what they disclosed. At a technical level, however, the traditional way to eliminate the influence of specific data points is to rebuild the system from scratch, a potentially costly undertaking that businesses can scarcely afford. Completely erasing a data point's influence is hard because a trained machine learning system does not change easily, and even its trainers often do not know exactly how the system acquired its capabilities, since they cannot fully interpret the algorithms they have tuned and trained.
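The "retrain from scratch" approach described above can be illustrated with a minimal, entirely hypothetical sketch. The "model" here is a toy (it just memorizes the mean of its training data), but the point holds for real systems: exact deletion is achieved only by repeating the whole training run without the removed record.

```python
def train(data):
    """Train a toy 'model': here, simply the mean of the training data."""
    return sum(data) / len(data)

def forget_by_retraining(data, index):
    """Exact unlearning, the traditional way: drop one record, then
    retrain the entire model from scratch. Correct, but it costs a
    full training run for every single deletion request."""
    remaining = data[:index] + data[index + 1:]
    return train(remaining), remaining

model = train([1.0, 2.0, 3.0, 10.0])            # -> 4.0 (outlier included)
model2, rest = forget_by_retraining([1.0, 2.0, 3.0, 10.0], 3)
# model2 -> 2.0: no trace of the deleted outlier remains
```

For a toy mean the retraining cost is trivial; for a deep network trained over days on millions of records, the same guarantee becomes prohibitively expensive, which is exactly the problem the article describes.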

In 2019, scientists proposed splitting a machine learning project's source data into multiple parts, so that a single data point could be "forgotten" by retraining only the part that contained it. That approach has recently been shown to be flawed: if deletion requests arrive in a particular order, whether by accident or by malice, the machine learning system can break down. Realizing "selective amnesia" may therefore require entirely new explorations in computer science.

"When they (users) ask for data to be deleted, can we eliminate all the impact of their data while avoiding the full cost of retraining from scratch?" Aaron Ross, a professor of machine learning at the University of Pennsylvania, said his current research is all about finding some "middle ground." Perhaps in the near future, it is possible to find a path forward that can both control data and protect the value generated by data.

Editor-in-Chief's Note

In fact, it is not only machines that struggle with "selective amnesia"; humans have never mastered the skill either. Forgetting is usually inadvertent and passive. People cannot precisely choose what to remember and what to forget; otherwise, there would be no saying like "raising a cup to drown one's sorrows only deepens the sorrow." Training a machine learning model's neural network is like alchemy: it is hard to know what the elixir is made of, so one dares not casually change the heat or the ingredients that go into the furnace. To extract specific data precisely, one would probably have to understand the machine learning pathway very well. In short, this is a problem that genuinely needs solving, yet is very hard to crack with conventional thinking.
