
The four departments of the state intervene to prohibit "algorithms" from becoming "calculations"

The new regulations on Internet algorithm recommendations were released and implemented from March 1

Recently, the Cyberspace Administration of China and three other departments jointly issued the Provisions on the Administration of Algorithm Recommendation in Internet Information Services, which take effect on March 1, 2022. The Provisions make clear that algorithms must not be used to influence online public opinion, evade supervision, or engage in monopolistic or unfair competition.

Prohibition of algorithmic discrimination and "big data killing"

In recent years, the application of algorithms has injected new momentum into political, economic, and social development. At the same time, problems caused by their irrational use, such as algorithmic discrimination, "big data killing," and engineered addiction, have disturbed the normal order of communication, markets, and society, and posed challenges to ideological security, social fairness and justice, and the legitimate rights and interests of netizens. Issuing targeted rules for algorithm recommendation in Internet information services is necessary both to prevent and defuse security risks and to promote the healthy development of algorithm recommendation services and raise the level of regulatory capability.

The Provisions make clear that applying algorithm recommendation technology means using algorithmic techniques such as generative synthesis, personalized push, ranking and selection, retrieval and filtering, and dispatch and decision-making to provide information to users.

Fake news cannot be a "personalized recommendation"

The Provisions set out information service norms for algorithm recommendation service providers. Providers shall uphold mainstream value orientation, actively disseminate positive energy, and must not use algorithm recommendation services to engage in illegal activities or disseminate illegal information; they shall take measures to prevent and resist the spread of harmful information. They shall establish and improve management systems and technical measures for user registration, review of published information, data security and personal information protection, and emergency response to security incidents, and shall periodically review, evaluate, and verify their algorithms' mechanisms, models, data, and application results. They shall establish and improve feature databases for identifying illegal and harmful information, and take appropriate disposal measures when such information is found; strengthen the management of user models and user tags, improving the rules for recording points of interest and managing user tags; and strengthen the ecological management of recommendation result pages, establishing sound mechanisms for manual intervention and user self-selection and actively presenting information that conforms to mainstream values in key sections. In Internet news services, providers must not generate or synthesize fake news, nor disseminate news published by outlets outside the scope permitted by the state; and they must not use algorithms to influence online public opinion, evade supervision, or engage in monopolistic or unfair competition.

Users can choose to turn off the algorithm recommendation service

In response to broad public concern about the protection of user rights and interests, the Provisions clarify the obligations of algorithm recommendation service providers to protect those rights.

The first is the right to know about the algorithm: providers are required to inform users that they are being offered algorithm recommendation services, and to publicize the basic principles, purposes, and main operating mechanisms of those services.

The second is the right to choose: users must be offered options that are not targeted at their personal characteristics, or a convenient way to switch off the algorithm recommendation service entirely. Where a user chooses to switch it off, the provider shall immediately stop providing the relevant services. Providers shall also allow users to select or delete the user tags about their personal characteristics that are used in algorithmic recommendation.
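As a rough sketch of what these two rights might look like in code (a hypothetical service, not any platform's real API; every class, method, and field name here is invented for illustration):

```python
# Hypothetical sketch of the two user rights described above: switching
# algorithmic recommendation off entirely, and deleting individual interest
# tags recorded about one's personal characteristics.

class RecommendationService:
    def __init__(self):
        self.opted_out = set()   # users who switched recommendations off
        self.user_tags = {}      # user_id -> set of interest tags

    def opt_out(self, user_id):
        """Right to choose: the provider must stop recommending immediately."""
        self.opted_out.add(user_id)

    def delete_tag(self, user_id, tag):
        """Right to manage the tags recorded about one's personal traits."""
        self.user_tags.get(user_id, set()).discard(tag)

    def feed(self, user_id, latest, personalized):
        """Opted-out users get a non-personalized (e.g. chronological) feed."""
        if user_id in self.opted_out:
            return latest
        return personalized

svc = RecommendationService()
svc.user_tags["u1"] = {"gaming", "pets"}
svc.delete_tag("u1", "gaming")   # one tag removed at the user's request
svc.opt_out("u1")                # recommendations switched off
print(svc.feed("u1", ["newest first"], ["tailored"]))  # ['newest first']
```

The key point the Provisions stress is the word "immediately": once `opt_out` is called, the very next `feed` must already be non-personalized.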

The third is a set of specific rules for providers serving minors, the elderly, workers, consumers, and other groups. For example, algorithm recommendation services must not be used to induce minors into Internet addiction; it must be convenient for the elderly to use such services safely; platforms must establish and improve the algorithms governing order dispatch, pay composition and payment, working hours, and rewards and punishments for workers; and algorithms must not be used to impose unreasonable differential treatment in transaction conditions such as prices based on consumers' preferences, trading habits, or other characteristics.

This page is compiled from Xinhua News Agency, the "Netinfo China" WeChat public account, CCTV, The Paper, and other sources.

Four questions about algorithm recommendation

An algorithm, here, is an intelligent Internet technology that mines, matches, and distributes content autonomously, relying on core data such as massive content pools, diverse users, and varied scenarios. Many kinds of algorithms now touch everyday life: automatic synthesis algorithms adept at writing news, personalized recommendation algorithms behind online shopping, search and filtering algorithms skilled at parsing queries, dispatch and decision-making algorithms behind ride-hailing, and so on. The arrival of the algorithmic society is unstoppable. In both the theory and practice of information dissemination, algorithms have not only given the public great technical convenience but also profoundly shaped the development of the network ecology.

Consumers have suffered long enough. Algorithms are tools; the scheming comes from people. A good program or algorithm should carry goodwill and warmth, take integrity as its power source, meet people's needs, solve users' pain points, amplify what is best in people, and benefit users and humanity. Algorithms must not become weapons for scheming against consumers and customers.

1 How does the algorithm make you addicted?

The stimulation of "technical dopamine" takes us from habit to dependence, and finally to addiction

"There's definitely addictive code in the world." Ramsey Brown's company was built on this creed. Its website advertises that it combines neuroscience with machine learning to "use dopamine to make your app addictive."

The "customized service" it sells hooks into the back end of a client's app to track every user's behavior, then places "rewards" at key moments: pleasant sound effects, virtual currency, or a sudden burst of likes. The aim is to lift retention, open rates, and dwell time, which mean better data and more revenue for the client, who is happy to pay for it.

Among the cases the company repeatedly publicizes is a 2016 client, a "positive-energy social networking" app called Brighten. A three-week test showed that users who were "hit with dopamine" opened the app more often, and the time they spent sending positive messages to friends and family rose 167 percent.

Dopamine's function is not to produce pleasure but to regulate desire, satisfaction, and reward. When you expect a good return from doing something, dopamine in your reward pathway rises; if the return exceeds expectations, there is a second surge; but if the return falls short, dopamine drops below its starting level. Whether you are reaching for chocolate or taking on a new quest in a game, the molecule steering your pursuit of rewards is dopamine.
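The reward-prediction logic described above can be written as a toy model. This is a deliberately simplified illustration, not real neuroscience; the formula, baseline, and gain values are all invented:

```python
# Toy reward-prediction-error model of the dopamine description above:
# the signal rises when the actual reward exceeds expectation, stays at
# baseline when they match, and dips below baseline when the reward
# disappoints. All parameters are illustrative.

def dopamine_signal(expected, actual, baseline=1.0, gain=0.5):
    """Signal = baseline + gain * prediction error (actual - expected)."""
    return baseline + gain * (actual - expected)

print(dopamine_signal(expected=1.0, actual=2.0))  # 1.5, better than expected
print(dopamine_signal(expected=1.0, actual=1.0))  # 1.0, exactly as expected
print(dopamine_signal(expected=1.0, actual=0.0))  # 0.5, below baseline
```

The asymmetry in the last line is the point: a disappointing return leaves the signal below where it started, which is what makes unpredictable rewards so compelling.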

The mechanism of action of dopamine seems to give some technical products the ability to "manipulate behavior".

At first this "harvesting" was limited to text and images: exciting headlines and pushes tuned to basic preferences harvested the "fragmented time" people spent on the toilet or waiting for a bus. Then it stormed into short video, and the harvest expanded to large blocks of users' spare time.

When you watch a short video, you need not react at all; the system immediately pushes you a similar one, trying to prolong the visual pleasure as long as possible. The more you watch, the more the system learns about what you like, and the more "precise" the next push becomes.
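That feedback loop can be sketched in a few lines. This is an illustrative sketch, not any real platform's recommender; the categories, weights, and learning rate are invented:

```python
# Illustrative sketch of the feedback loop described above: every video the
# user finishes nudges the weight of its category upward, so the next push
# is drawn ever more narrowly from what the user already likes.

def recommend(weights):
    """Push a video from the category with the highest learned weight."""
    return max(weights, key=weights.get)

def watch(weights, category, lr=0.2):
    """The user finished a video in `category`; reinforce that interest."""
    weights[category] += lr * (1.0 - weights[category])

weights = {"news": 0.3, "pets": 0.3, "gaming": 0.3}
watch(weights, "pets")      # a single viewing already tips the balance
for _ in range(5):          # each push then reinforces the same interest
    watch(weights, recommend(weights))

print(recommend(weights))   # pets
```

After one stray viewing the loop locks on: the winning category is recommended, which makes it win harder next round.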

In the mobile Internet era, it pins your attention to the screen. The industry's "standard answer" is, in essence, to use data to map human weakness in fine detail and implement it thoroughly in a product's structural logic and interaction design.

The stimulation of "technical dopamine" takes us from habit to dependence, and finally to addiction.

How did things get this far? The capital and ingenuity behind the technology immerse us in virtual delights, but worryingly, they care nothing for anyone's real life.

2 How did the algorithm's "original sin" come about?

Excessive data collection and excessive permission demands are common forms of "algorithmic evil"

News, social, shopping, dining, travel, and short-video apps all use algorithms to recommend topics and content that interest users, binding users ever more tightly inside a carefully constructed "cage."

The rise of these techniques has been driven by the rapid development, in recent years, of new technologies and applications represented by artificial intelligence and big data.

In the course of serving social life, these applications collect and process big data with various algorithms, profile users, depict their preferences precisely, and ultimately steer user behavior or target them with advertising.

It can be said that algorithms are the technical foundation of many apps and even a cornerstone of the digital society. Yet while these algorithms, AI algorithms especially, have created great value for social life, drawbacks such as privacy violations and "big data killing" have steadily come to light.

Privacy is among the most serious problems of algorithmic abuse. Although data security and privacy protection have always been top priorities in cybersecurity, data security incidents remain common to this day.

Excessive data collection and excessive permission demands are common forms of "algorithmic evil." The Personal Information Protection Law stipulates clearly that personal information processors using personal information for automated decision-making "shall ensure the transparency of the decision-making and the fairness and impartiality of the results, and must not impose unreasonable differential treatment on individuals in transaction conditions such as price," and that those pushing information or marketing to individuals through automated decision-making shall also "provide options not targeted at the individual's personal characteristics, or provide the individual with a convenient way to refuse." The law thus says "no" to excessive collection and excessive permission demands.
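To make the prohibited "big data killing" pattern concrete, here is a hypothetical sketch of what such differential pricing looks like next to a compliant quote. Nothing here models a real platform; the profile fields and markup factors are invented:

```python
# Hypothetical sketch of the pattern the law prohibits: quoting a higher
# price for the same good based on consumer characteristics such as
# loyalty or inferred willingness to pay.

BASE_PRICE = 100.0

def discriminatory_quote(profile):
    """Prohibited pattern: the price varies with personal characteristics."""
    price = BASE_PRICE
    if profile.get("frequent_buyer"):   # loyal users quietly pay more
        price *= 1.15
    if profile.get("premium_device"):   # inferred willingness to pay
        price *= 1.10
    return round(price, 2)

def compliant_quote(profile):
    """Compliant pattern: the quote ignores who is asking."""
    return BASE_PRICE

old_user = {"frequent_buyer": True, "premium_device": True}
new_user = {}
print(discriminatory_quote(old_user))  # 126.5, same good, higher price
print(discriminatory_quote(new_user))  # 100.0
print(compliant_quote(old_user) == compliant_quote(new_user))  # True
```

The loyal customer pays more for the identical good, exactly the "unreasonable differential treatment in transaction conditions such as price" the law forbids.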

Technology is a double-edged sword, and behind the algorithm's "original sin" there are always people.

3 What do people face as algorithms expand without limit?

The likelihood that people will become "prisoners" of algorithms increases

The risk of network ecological imbalance and distortion is increasing

Undoubtedly, the technology-driven algorithm dividend reaches ever more widely and deeply into life: online shopping depends on "algorithmic price comparison," business operations on "algorithmic promotion," daily travel on "algorithmic navigation," and even job hunting and matchmaking on "algorithmic matching." Yet behind seemingly rational, neutral algorithms lie real technical biases: big data that "rips off" and "tricks," algorithms that invade privacy and even trigger group polarization. The effects of ubiquitous algorithms on the network ecology deserve vigilance and deep reflection.

On the one hand, the prevalence of algorithms easily weakens the role of "gatekeepers," and the chance of people becoming "prisoners" of algorithms rises sharply. Algorithms have markedly improved the personalization of information and services, but under their leadership personalized distribution has been strengthened as never before, while gatekeeping functions such as the editing and vetting of information, products, and services are often weakened or absent. Once an algorithm is poorly designed or applied, individuals can be constrained, even imprisoned, by a single algorithm in cognition and judgment, behavior and decision-making, and value orientation, becoming its "prisoners."

On the other hand, the prevalence of algorithms easily reinforces the "information island" effect, and the risk of an unbalanced, distorted network ecology keeps rising. Algorithms greatly speed the matching of people with certain kinds of information, but they also automatically filter out other, potentially valuable information. As their information narrows, the public easily forms the illusion that "most people think and value things this way." Such selective exposure, filtering, and belief not only block communication with groups holding different views but also trap people's horizons in self-repetition and self-affirmation. They also plant serious hidden dangers of manipulation in an online public opinion field that breeds prejudice and lacks cohesion, risking a vicious circle that can incite offline mass incidents and undermine a clear and stable network ecology.
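The "information island" mechanism described above is, at bottom, a filter. A minimal sketch, with invented feed data and field names, shows how silently the dissenting item disappears:

```python
# Illustrative sketch of the "information island" effect: a feed filter
# that keeps only items matching the user's recorded stance drops every
# dissenting item without the user ever knowing it existed.

feed = [
    {"topic": "policy", "stance": "A"},
    {"topic": "policy", "stance": "B"},
    {"topic": "sports", "stance": "A"},
    {"topic": "policy", "stance": "A"},
]

def personalize(feed, user_stance):
    """Show only items that agree with the user's recorded stance."""
    return [item for item in feed if item["stance"] == user_stance]

shown = personalize(feed, "A")
stances_seen = {item["stance"] for item in shown}
print(len(shown), stances_seen)  # the single "B" item is never seen
```

From inside the feed, stance "A" looks unanimous: that is the illusion that "most people think this way."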

A simple, crude, one-size-fits-all "resistance to algorithms" is not the answer. Building a more complete regulatory system and publishing more transparent industry technical guidelines are urgent tasks. At the same time, we must abandon the "cult of the algorithm," give algorithm designers more comprehensive and professional training, and strengthen algorithm literacy among users.

4 Anti-algorithm: what exactly is being resisted?

Users are not so much against new technologies

as dissatisfied with how poorly they are applied

No one wants to miss the technological change brought by big data, and a host of companies hope to stir the market with it and seize the opportunity. Unexpectedly, anti-big-data and anti-algorithm companies have also begun to find room to grow and win recognition from the capital market. Big data truly is a "double-edged sword."

D-ID, a cybersecurity company in Tel Aviv, Israel, may be the first anti-image-recognition technology firm. It generates photos and videos that recognition algorithms cannot identify while keeping them visually similar to the real face, protecting personal privacy and identity information from being maliciously read by facial recognition. The goal is to protect data already used for authentication while ensuring the data cannot be "read" in the first place. With this counter-recognition algorithm it counters conventional "algorithms," and it earlier announced a $4 million seed round of financing.

The social and research site Are.na goes further in rejecting algorithms. It carries no ads and no algorithmic tracking; what is collected on the site has nothing to do with popularity, and there is not even a like button. This anti-social-network approach, the opposite of Facebook and Twitter, has modest total usage but a monthly growth rate of 20 percent.

China has no comparable sites or technologies yet, but the recently popular game Travel Frog hints at the trend. It does not pile on features to hold users' time; instead it achieves the effect of "going out, then coming back," and the user psychology behind that deserves attention.

For now, big data and algorithms will remain the mainstream and can change ordinary people's lives. The rise of anti-big-data, anti-algorithm, and anti-social phenomena shows that users are not so much against new technologies as dissatisfied with how poorly they are applied.

The first problem is that big data is "too stupid." For example, after searching for or buying a product on an e-commerce site, a user opens a webpage and finds the ad slot recommending the very product already purchased, an experience most users have had. Applications built on personalized recommendation suffer the same problem.

The "deviation" of big data in personalized recommendation also shows up in news clients and social media. In an age of information overload, users faced with a flood of content tend to choose what interests them or gratifies personal desire, and recommendations built on that keep amplifying the desire. Entertainment gossip dominating the trending lists is one example, and not a good application.

In addition, big data technology has boundary problems. Determining which data interconnections are necessary and which are excessive is the next question to consider.

Networks make human connections tighter and more convenient, but over-connection speeds the spread of information and makes human society more fragile.

Relying on big data to build the Internet of Things further expands the volume of connected data; security and privacy issues become harder to control, and a small local problem can easily grow into a large-scale one.

Sharp news commentary

Algorithms need ethics, not "calculation"

A few days ago, the Report on Chinese Public Perceptions of "Greater Security," released by the Internet Development Research Center of Peking University, showed that 70 percent of respondents felt that algorithms could learn their preferences and interests and thereby "scheme against" them. When the algorithm degenerates into "calculation," users are left defenseless. Against this backdrop, establishing rules for algorithm recommendation services and strengthening their regulation is essential.

In fact, what needs governing is not the algorithm recommendation service itself but the people and platforms that control it. As the responsible official of the Cyberspace Administration of China put it, the targeted regulations were formulated to "clarify the primary responsibility of algorithm recommendation service providers." What qualifications a provider must have, what obligations it must assume, and what boundaries it must keep when offering recommendation services: on all of these, the Provisions set strict and clear norms.

It is worth noting that the Provisions also spell out users' basic rights. Take the right to know: a platform must inform users that it is providing algorithm recommendation services and publicize their basic principles, purposes, and main operating mechanisms. Or the right to choose: if a user switches off the recommendation service, the provider must immediately stop providing it. These two designs both protect the legitimate rights and interests of users and draw a red line for providers.

Algorithms must answer to ethics, which in the end means platforms must answer to ethics. This ethics shows not only in "not using algorithm recommendation services to engage in activities prohibited by laws and administrative regulations, such as endangering national security and the public interest, disrupting economic and social order, or infringing the lawful rights and interests of others, and not using algorithm recommendation services to disseminate information prohibited by laws and administrative regulations," but also in "not setting up algorithm models that violate laws and regulations or ethics and morality, such as by inducing users into addiction or excessive consumption."

The Provisions are not a negation of algorithm recommendation, still less a ban; they aim to make it "well used" by making it "manageable." When every platform holds to ethics, actively uses mainstream values to steer its algorithms, and follows the principles of fairness and justice, openness and transparency, scientific rationality, and honesty and trustworthiness, algorithm recommendation will better serve the public interest and better meet public expectations.
