
The anxiety of being snooped on from every angle: how can AI push back against privacy encroachment?

In the age of intelligence, continuous breakthroughs in algorithms and computing power have driven rapid progress in AI. In industrial production, healthcare, transportation, energy, and other fields, AI and big data technologies work hand in hand, freeing people from many tedious, repetitive, and dangerous tasks while improving the efficiency and safety of enterprises.

However, no technology is perfect. While we enjoy the dividends that data brings, we also face the challenge of our personal information "running naked," as more and more smart devices around us pry into our private lives.

We have probably all had experiences like these:

You casually chat with colleagues about which hot pot or barbecue place is good, and when you open a restaurant-review app, the home page is full of hot pot and barbecue recommendations;

You chat with your girlfriends about which milk tea is good and think about trying a new flavor, and the delivery platform is suddenly full of recommendations for that very milk tea;

You talk with your family about Liu Ruihong, about losing weight, about yoga wear, and the moment you open a certain shopping app you are met with a full screen of fitness product recommendations.

The electronic devices around us are quietly observing our words and deeds in all sorts of ways, and it feels strange and uncomfortable. This snooping on our voices has also become a marketing tool for the apps popular in the consumer market: once we grant the relevant permissions, our speech is transmitted to the system in real time, and so-called AI is used to push "customized" services at us. In reality, marketing built on the presumption of access to our privacy is deeply annoying to consumers.


How can we avoid such situations? Some people simply withhold microphone permission and grant it one use at a time, but that is too cumbersome. Others go to a more extreme length, deliberately playing loud music or TV shows to cover up their conversations, but that "kill a thousand enemies, lose eight hundred of your own" experience is too unpleasant. Can technical means help us sidestep these situations instead?

Defeat magic with magic

Using AI to beat AI may be a good way out. A new AI system has come online whose evasion logic is to add a little extra sound "seasoning" to the conversation. This "seasoning" is very faint; unlike blaring music or a TV in the background, it does not disrupt normal conversation.

As long as the system is switched on while people are talking, it plays a faint sound into the room that masks the conversation from any listening microphone without getting in the way of the conversation itself.

The system is a new method proposed by a research team at Columbia University in the United States. It can be deployed easily on the electronic devices we already use: as long as it runs on hardware such as a computer or mobile phone, it can protect the user's privacy in real time.


Using AI to stop microphones from capturing sound is not a novel idea. Earlier technologies tried to solve the same problem, but speech is peculiar: you cannot predict the exact words or pace of a conversation a few seconds in advance, so those systems could not keep up with the rhythm of the two speakers, which weakened how well the conversation was masked.

The new AI system uses deep learning to predict the characteristics of what the speakers will say next, and it runs in real time from just two seconds of input speech. The well-matched masking noise it generates on the fly can effectively block attempts to capture the private conversation.

The new algorithm produces a "predictive attack" signal that can interfere with any word an automatic speech recognition model tries to transcribe. When the disturbing sound is played in a natural environment, it must also be loud enough to reach any rogue "eavesdropping" microphone that may be some distance away. The system has been shown to work well in real rooms with natural ambient noise and complex shapes. For now, however, it only works for conversations in English, and the team is focusing on transferring it to other languages.
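To make the "predictive" idea concrete, here is a minimal, hypothetical sketch, not the Columbia team's released code: a small neural network takes the last two seconds of audio and emits the next short frame of quiet masking noise. The layer sizes, 16 kHz sample rate, frame length, and loudness cap are all illustrative assumptions.

```python
# Conceptual sketch only -- not the published Neural Voice Camouflage code.
# From the last ~2 s of speech, a small model emits a quiet noise frame
# meant to overlap with (and disrupt ASR transcription of) what is said next.

import torch
import torch.nn as nn

SAMPLE_RATE = 16_000          # assumed sample rate
CONTEXT_SECONDS = 2           # the reported 2-second input window
FRAME = 512                   # assumed length of each generated noise frame

class PredictiveMasker(nn.Module):
    """Maps a 2-second audio context to one frame of masking noise."""
    def __init__(self, context_len=SAMPLE_RATE * CONTEXT_SECONDS, frame=FRAME):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(context_len, 256),
            nn.ReLU(),
            nn.Linear(256, frame),
            nn.Tanh(),            # keep the waveform in [-1, 1]
        )

    def forward(self, context):
        # Scale down so the masking noise stays much quieter than speech.
        return 0.05 * self.net(context)

if __name__ == "__main__":
    masker = PredictiveMasker()
    # Stand-in for the last 2 seconds of microphone input.
    context = torch.randn(1, SAMPLE_RATE * CONTEXT_SECONDS)
    noise_frame = masker(context)          # next chunk of masking noise
    print(noise_frame.shape)               # torch.Size([1, 512])
```

In the actual research, a network like this would be trained adversarially against a speech recognition model so that the emitted noise maximizes transcription errors; the sketch above only shows the real-time, prediction-ahead structure of the approach.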

In this contest, the AI system stands a good chance of beating the neural recommendation systems behind our devices. As the work moves out of the laboratory, it is also being extended to more languages and scenarios, and may one day help us avoid this kind of "harassment" of our spoken privacy. Audio snooping affects us mainly through interference and intrusion in the consumer sphere; in the video domain, it is our likeness that is the hardest-hit area.

New "noise" techniques for video

In the video domain, the public's privacy has almost no boundary. Many people remember the story of customers wearing helmets to a developer's sales events when buying homes. Some may have mocked the news at first, but once the facts were clear they could only admire the buyers' wit: the helmets were worn to defeat the developer's AI video recognition, so that the buyers would not be singled out for differentiated service and lose out on their own purchases.

Such unfair treatment of consumers in the video domain is only the tip of the iceberg; more serious are the grander violations of privacy. Under blanket "sky eye" coverage, the cameras on every street leave everyone's video data running naked. Even the cameras people install at home for safety are not immune to hackers, and every move a user makes at home can be spied on by someone with ulterior motives on the other end.

Beyond the deterrence of legislation, is there a technical means to protect this video privacy in a targeted way?

Researchers at the MIT Computer Science and Artificial Intelligence Laboratory have developed a new system that adds noise to video so that individuals cannot be recognized in it, while public video can still be used as data for analysis and investigation. This better protects the privacy of people who appear in surveillance footage.


We know that under "sky eye" systems, or the surveillance video in residential communities and parks, the people being recorded have no privacy to speak of: every face is captured and analyzed by the cameras. This does help keep public spaces safe, monitor the density and flow of pedestrians and vehicles, and support health and epidemic-prevention measures, but the habit of sacrificing personal privacy should gradually be broken as the technology is upgraded.

Some companies blur the faces in videos, but that practice can make the system lose face data altogether, which in turn makes some research impossible. The new AI system, Privid, lets researchers run queries over video data while ensuring that individuals cannot be identified, protecting the privacy of the people who appear in the clips. Across a variety of videos and queries, Privid's accuracy stays within 79% to 99% of a non-private system.

Privid relies on a privacy-preserving technique called differential privacy. Differential privacy lets users modify the data to a limited degree by adding some noise, without changing the overall output of aggregate queries, so that an attacker cannot learn anything about any individual in the dataset. That is how it protects privacy.
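As a rough illustration of how differential privacy works, the snippet below is a generic textbook sketch, not Privid's actual implementation: it adds Laplace noise to a hypothetical "people per hour" count returned by a video query. The sensitivity and epsilon values are assumptions.

```python
# Minimal Laplace-mechanism sketch (textbook differential privacy).
# The query, sensitivity, and epsilon here are illustrative assumptions.

import numpy as np

def laplace_count(true_count: int, sensitivity: float = 1.0, epsilon: float = 0.5) -> float:
    """Return true_count plus Laplace(sensitivity / epsilon) noise.

    A single person entering or leaving the footage changes the count by at
    most `sensitivity`, so noise scaled to sensitivity/epsilon hides any one
    individual's presence while keeping the aggregate answer usable.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

if __name__ == "__main__":
    true_pedestrian_count = 128                 # hypothetical hourly count from video
    print(laplace_count(true_pedestrian_count)) # e.g. 126.3 -- close, but individual-safe
```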

However, the system also has a limitation: it is hard to determine how much noise to add. Ideally, the added noise is just enough to hide everyone, but not so much that it becomes useless to researchers. In practice, adding noise while still supporting analysis and queries over the video inevitably introduces some interference, so the results will not be perfectly accurate. Striking this balance takes careful, deliberate engineering: the noisy data must protect privacy without losing its practical reference value.
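The toy calculation below, built on assumed numbers rather than anything from the Privid paper, shows the balance just described: a smaller privacy parameter epsilon gives stronger protection but a noisier, less accurate answer.

```python
# Toy privacy/utility tradeoff: average query error at different epsilon values.
import numpy as np

rng = np.random.default_rng(0)
sensitivity = 1.0                     # one person changes the count by at most 1

for epsilon in (0.1, 0.5, 1.0, 5.0):
    # Average absolute error of the Laplace mechanism over many trials.
    noise = rng.laplace(0.0, sensitivity / epsilon, size=10_000)
    print(f"epsilon={epsilon}: mean |error| ~ {np.abs(noise).mean():.1f}")
```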

AI goes deep into privacy protection

In the audiovisual sphere, we are all exposed in the open: ordinary people's data becomes money and traffic in the consumer market, funneled into all kinds of consumption scenarios. For the rich and powerful, personal data is worth even more, and in hackers' eyes it is likely to become the "fat meat" of blackmail. Under the cameras, every place you go is transparent. Anyone who obtains this data can build a timeline of where a person appears; once the data can be aggregated, it reveals someone's location history and all sorts of other information. A patient hunter lying in wait will always catch a plump piece of prey.

The more intelligent AI becomes, the more information it acquires, stores, and analyzes, and the more hidden that process becomes. Although it is widely agreed that AI technology itself is neutral, the large companies and hackers who apply it are driven by their own interests, and once this information is misused it can lead to all kinds of serious incidents.

We know that audiovisual devices are a necessity of modern entertainment and daily life; no one can do without the electronic products that have cameras and microphones built in. The operation of cities, factories, and enterprises is likewise inseparable from all kinds of camera equipment, which means ever more social, corporate, and personal information is flowing through the data world.

Technological development always outpaces legal constraint. If we rely only on legislation and ethics to rein it in, loopholes will keep multiplying, security and privacy will not be guaranteed, and the development of AI itself will be slowed. Protecting privacy and security is key to technological evolution, and using AI to curb the abuse of privacy by other AI technologies has become a required course for cybersecurity engineers in the era of digital intelligence.


However, privacy-protection research based on AI deep learning is still in its infancy, and many challenges remain. Take encryption: although encryption is the most direct and effective means of protecting privacy, its technical and deployment costs, combined with deep learning algorithms that consume enormous computing resources, can drastically reduce algorithm performance.

Another problem is lagging and inadequate regulation. The nature of technological development means that regulators are always running behind technology. Innovative regulatory approaches could be prepared in advance rather than applied as remedies after the fact. How to build a platform for cooperation between regulators and third-party technology companies, to jointly evaluate new applications before they launch and ensure new technologies are applied responsibly, is another important research topic for the future.

Technology's development as a double-edged sword is inevitable, but privacy protection and AI can be compatible and coexist; abandoning the technology because of its controversies and defects would be giving up eating for fear of choking. Using intelligent technology to patch the privacy holes of AI is the approach best suited to keeping pace with AI's development. Although privacy scandals and bad actors keep cropping up, AI that can "defeat magic with magic" has eased our worries and misgivings.

Privacy protection is a multi-dimensional, adversarial process, and the solutions we are exploring today all start from the premise that privacy vulnerabilities already exist. Is there a way to eliminate those vulnerabilities at the root? The best solution is to consciously avoid the scenarios that could trigger privacy breaches at the very start of development and design. R&D engineers need to think harder about the impact AI technologies have on people and society, and steer clear of controversial territory from the beginning of innovation. And as the substance of technology ethics keeps improving, it also needs the community of engineers to implement and enrich it.

Technology itself is always neutral. For unethical, illegal invasions of privacy, those who end up nailed to the pillar of shame are the enterprises and the developers behind them. As technology and legislation improve in the future, token punishments of "three cups of penalty wine" will no longer exist; after all, as privacy and security technology develops, some will have to pay for their own behavior and become the price of AI's progress.
