Is the "AI scam wave" really coming?

Author: Guangming Online

Using AI face-swapping and voice-cloning technology, scammers stole 4.3 million yuan in 10 minutes; AI virtual humans screened out victims in chat before human operators took over to carry out the scam... Recently, a number of cases allegedly involving AI-enabled fraud have attracted wide attention.

A "Xinhua Viewpoint" reporter recently confirmed with the public security authorities that reports of a "nationwide outbreak of AI fraud" are untrue, and that such cases currently account for a very small proportion of fraud. However, the public security organs have taken note of this new criminal method and will, together with relevant departments, step up technical countermeasures as well as publicity and prevention work.

Experts say that as AI technology iterates at an accelerating pace, fraud risks are accumulating because the boundaries of acceptable use remain unclear, and high vigilance is required.

"Face-swapping" scams cause anxiety: could you be tricked by a loved one's face?

Recently, police in Baotou, Inner Mongolia, reported a case of AI-enabled fraud: Mr. Guo, the legal representative of a company in Fuzhou, was defrauded of 4.3 million yuan in 10 minutes. Scammers reportedly used AI face-swapping and voice-cloning technology to impersonate an acquaintance.

After the case came to light, many media reports warned of an impending "AI fraud wave" and publicized a number of similar cases. For example, in Changzhou, Jiangsu Province, scammers impersonated a classmate of one Xiao Liu over voice and video calls; after seeing what appeared to be the real person, Xiao Liu believed the story and "lent" the scammer 6,000 yuan.

So, is the "AI scam wave" really coming?

The reporter's investigation found that AI face-swapping and voice imitation are indeed technically feasible, but using them for indiscriminate, "wide-net" fraud is far from easy.

A police officer listed in the Ministry of Public Security's expert database told reporters that for such a scam to succeed, the fraudster must: collect the identity information, a large number of facial images, and voice samples of the person being impersonated, and use AI to generate fake audio and video; steal that person's WeChat account; and fully grasp the target's identity information and be familiar with the social relationship between the target and the impersonated person. The overall cost of committing the crime is therefore very high.

He said that some details in the relevant reports are inaccurate, and that AI fraud cases remain sporadic. Mature, templated fraud schemes typically break out in concentrated fashion in many places across the country, but no large-scale AI fraud cases have emerged so far.

The public security organs have determined that the recent rumors of "AI face-swapping fraud breaking out nationwide" are untrue; fewer than 10 such cases have occurred nationwide. Even so, the trend deserves close attention. Apps and mini-programs offering one-click face swapping carry a risk of technical abuse, and technical prevention and countermeasures need to be strengthened.

AI has entered a period of rapid iteration, and the risk of fraud and crime is accumulating

"The development of AI technology has reached an inflection point of spiral ascent, and over the next few years technical iterations will be measured in months," said Xiong Hui, Associate Vice President of the Hong Kong University of Science and Technology (Guangzhou) and head of its artificial intelligence program.

According to the Ministry of Industry and Information Technology, as AI technology develops rapidly, the threshold for synthesis technology keeps falling. It is evolving toward low computing power and small-sample learning: synthesis can now be completed on a mobile phone, and the requirements for computing power and data have dropped significantly. At the same time, backed by large AI models, the technology is progressing from face synthesis toward full-body and 3D synthesis, with increasingly realistic results.

Zhao Jianqiang, a specially appointed expert of the State Development and Investment Corporation and general manager of the AI R&D center of Xiamen Meiya Pico, said that AI technology is accelerating its penetration into online fraud, disinformation, pornography, and other areas. For example, on some online platforms, generated video images impersonating celebrities and public figures are used to attract netizens. AI may also be used to commit crimes at scale, such as maintaining online accounts in batches automatically, sending false information, and simulating human online chat.

It is worth noting that current AI technology is no longer a laboratory prototype: the "face-swapping" and "voice-cloning" techniques that have sparked heated discussion already have relatively mature open-source software behind them, and the barrier to use is low.

The reporter noticed that AI face-swapping tutorials abound online. Entering "face swap" on a well-known domestic app brings up high-frequency search suggestions such as "face-swapping software," "free face-swapping app," "how to make a face-swap video," and "face-swapping algorithm." One post, titled "The strongest AI face-swapping software in history is officially launched! The technical threshold is greatly reduced," introduces a face-swapping tool and walks readers through its use step by step with video tutorials.

"As the old saying goes, 'seeing is believing,' but in the future what the eye sees will not necessarily be real," said Yang Hucheng, a partner at Beijing Tian Yuan Law Firm. In his view, crimes such as fraud and extortion involving AI synthesis technology, as well as civil infringements of portrait rights and reputation, may gradually emerge.

"Existing cases show that these technologies have already been used by criminals, for example in fake celebrity face-swap livestreams, one-click 'undressing,' rumor-mongering, and pornographic videos. Although AI fraud cases are not yet widespread, the trend deserves attention and must be guarded against," an anti-fraud police officer said.

An official of the Ministry of Industry and Information Technology said that as AI technology continues to develop, it is now possible to synthesize targeted videos from a small number of images and audio clips, and to use AI models to design fraud scripts in batches. This objectively lowers the difficulty of committing telecom and online fraud and increases the likelihood of new AI-enabled crimes.

Improve relevant laws and regulations as soon as possible, and draw red lines for the development of AI technology

Zhou Jing, deputy director of the quality management department of China Mobile's Information Security Center, told reporters that in recent years, sectors at home and abroad have been actively exploring effective paths for governing deep synthesis technology, assessing the risks and potential threats AI poses to society, and trying to bring the development of AI technology within a framework of rules to keep it safe and controllable.

Industry insiders offered two suggestions. First, strengthen research on AI countermeasure technology and "use AI to fight AI." Some technology companies are already stepping up research on detecting forged images and audio, and the results have been applied in video-authentication scenarios in public security and finance. Some front-line police suggested intensifying the application and development of AI security technology, applying AI to crime identification, early warning, and countermeasures, so as to pit "white" AI against "black" AI.

Second, strengthen source governance and industry guidance, and update and improve relevant laws, standards, and rules in a timely manner to safeguard the healthy development of AI technology.

"Data is the source of AI crime. Protecting the security of citizens' personal data can minimize the capacity for AI crime," Xiong Hui said.

Hao Zhichao, director of the regulatory support department of the Internet Society of China, suggested that the development of AI technology must be accompanied by laws and regulations that draw red lines and apply the brakes. Greater attention should be paid to personal data leakage, the red lines of information supervision should be clarified, and the research, development, dissemination, and use of AI technology should all have rules to follow, with normative guidance for technology service providers improved in a timely manner as the technology evolves.

In addition, targeted anti-fraud publicity needs to be strengthened. Xiong Hui said that in the future, AI trained on big data will be able to create fakes that come extremely close to reality. "We must change public perceptions through continuous education, so that people understand that seeing is not necessarily believing and that a picture does not guarantee the truth, and improve their ability to discern online information," he said.

An official of the Ministry of Public Security said that fraud groups are currently using new technologies and business forms such as blockchain, virtual currency, remote control, and screen sharing to continually upgrade their criminal tools, and that the offensive-defensive contest with the public security organs over communication networks and money laundering keeps intensifying. The public security organs, together with relevant departments, have matched wits with fraudsters, constantly studying and adjusting crackdown and prevention measures to maintain the initiative.

The Ministry of Industry and Information Technology said that, going forward, it will strengthen supervision and law enforcement, working with internet information and public security departments to urge enterprises to improve their management of deep-synthesis content and their technical safeguards; encourage technical research, uniting industry, academia, and research institutes to improve the ability to counter deep-synthesis risks; and strengthen industry self-discipline, establishing and improving industry standards, guidelines, and self-regulatory systems for deep synthesis technology, while urging and guiding deep-synthesis service providers and technical supporters to formulate sound business norms, operate in accordance with the law, and accept public oversight. (Mao Xin, Wang Cheng, Xiong Feng, Gao Qian)

Source: Xinhuanet