
"AI fraud" rushed to the hot search, how to "go good" after AI "came out"?

Author: Vigorous Finance

Titanium Media

How can people guard against AI scams? How far has the regulatory framework for generative AI progressed? How can applications of AI technology avoid legal risks? Will the frequent infringement and fraud incidents constrain the industry's development?

"AI fraud" rushed to the hot search, how to "go good" after AI "came out"?

Recently, the Telecom Network Crime Investigation Bureau of the Baotou Public Security Bureau disclosed a case of telecom fraud carried out with AI technology: the victim was defrauded of 4.3 million yuan in 10 minutes, and the topic "AI fraud" shot to the top of the trending searches.

As AI technology matures, its realism and deceptive power keep growing: fake AI singers, celebrity look-alike livestreamers, and more. When both appearance and voice can be "high-fidelity imitations," how should people protect themselves?

How far has the regulatory framework for generative AI progressed? How can applications of AI technology avoid legal risks?

Will the current frequent infringement and fraud incidents constrain the industry's development?

This issue of "Titanium Hot Review" specially invited senior media personnel to rush to the hot search on the topic of "AI fraud", how to "do good" after AI is "out"? There was a discussion, and here is a collection of some of the views.

On how people should protect themselves when appearance and voice can be "faithfully imitated":

Jia Xiaojun, manager of Bedo Finance, said that tools are neutral; the key lies in how they are used, and "AI fraud" is a typical example. For ordinary people, scams built with this kind of AI technology are very hard to detect, especially when both voice and video are simulated.

AI fraud is possible, at root, because of personal-information leaks: the scammer already holds the target's sensitive information, can accurately cite account numbers, addresses, and other details, and may even have obtained the victim's contact list before defrauding them of funds.

For ordinary people, whenever money is involved, stay alert, check carefully for anything suspicious, and ideally cross-verify through another channel. Keep a close eye on your wallet.

From a regulatory perspective, more technical talent should be brought in and new countermeasures developed so that warnings and prevention are timely and effective. At the same time, platforms should raise their entry thresholds and adopt reasonable interception measures.

Guo Shiliang, an expert at the "Whale Platform" think tank, said that everyone is talking about AI, but people mostly see its benefits and rarely its negative effects. Someone was defrauded of 4.3 million yuan in 10 minutes; the key is that the scammer used AI face-swapping technology, and the victim even did a video verification before transferring the money, never expecting it to be a scam. This was a high-end con that appeared flawless.

Later, with the bank's full assistance, 3,368,400 yuan of the defrauded funds were successfully intercepted in the fraudulent account within 10 minutes; the remaining 931,600 yuan is still being recovered. The AI fraud techniques were very sophisticated, combining voice-synthesis and AI face-swapping technology, and the scammers could even break into the contact accounts of the victim's friends to carry out the scam.

The arrival of the AI era brings both opportunities and tests. When scammers use new technology to defraud, anti-fraud methods must keep pace, and everyone's anti-fraud awareness must improve; for anything sensitive involving transfers, stay on maximum alert. The relevant laws and regulations also need to catch up: as the technology improves, regulatory enforcement capabilities and anti-fraud technology must keep pace as well.

Jiang Han, a senior researcher at Pangu Think Tank, said that with the development of artificial intelligence, AI fraud has begun to appear. AI fraud uses artificial intelligence, and its "advantage" as a fraud method is higher-frequency attacks, more precise targeting, and more effective deception: a double-edged sword of technological progress. How should we view and respond to it?

First, frequent fraud incidents require everyone to keep learning and to identify potential fraud risks. AI strengthens criminals' methods through data analysis, model training, and automated decision-making; to guard against such fraud, consumers need to improve their technical awareness and risk-identification ability: avoid answering calls from unknown numbers, don't be gullible, and protect your privacy and property.

Second, enterprises need self-discipline, regulators need to guide market norms, and individuals should further strengthen risk awareness. AI can bring enterprises efficiency and competitive advantage, but they must balance innovation with compliance; especially in sensitive areas such as user information and privacy, enterprises should strengthen self-discipline and compliance controls. Regulators must steer the market toward norms that prevent such crimes. Individuals should actively participate in public-safety governance, learn about fraud prevention, and improve their ability to spot potential scams.

In the long run, generative large models will keep advancing, but making AI a tool for good requires everyone's joint effort. AI development must attend to compliance and safety and reconcile the technology with existing law, morality, and ethics. It cannot simply chase speed and efficiency; it should also be grounded in an understanding of human nature, staying true to the goal of using AI to serve society, exploring and optimizing AI paradigms, preventing potential abuse and harm, and supporting AI's healthy development.

Bi Xiaojuan, editor-in-chief of the New Economy Observation Group, said that, as the saying goes, "as virtue rises one foot, vice rises ten": technology has always been a double-edged sword. Over the past few years, big data and the sharing economy have delivered considerable economic and social benefits, but the by-products have been massive leaks of personal information, telecom network fraud, and harassing text messages. With investigations into regulatory gaps, timely follow-up of policies and regulations, and growing public awareness, such fraud has been alleviated to some degree.

The same process is now playing out in AI. The commercial value of AI keeps being unlocked, but it also gives fraudsters "a tiger's wings": through face swapping, voice simulation, and other means they impersonate a user's relatives and friends, producing fakes nearly indistinguishable from the real thing, so users easily take the bait. Moreover, with AI, criminals can defraud large numbers of users simultaneously, widening the harm and increasing victims' property losses.

But ordinary users are by no means helpless in the face of AI. First, raise awareness of risks to personal property and stay vigilant against this new type of fraud. Second, strengthen personal-information protection and avoid registering on large numbers of unofficial or dating apps. Third, if the other party asks to borrow money or requests a transfer over audio or video, no matter how urgent they seem, verify offline through multiple channels before transferring; for large transfers, go to a bank counter. Finally, if you are unfortunately deceived, call the police as soon as possible and contact the bank to stop payment and limit the loss.

Laws, regulations, and supervision should follow up in a timely manner, setting thresholds and firewalls for the development and application of AI and promoting AI for good. The good news: in early April, the Cyberspace Administration of China drafted the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comment) and opened it for public comment, aiming to promote the scientific, orderly, healthy, and standardized development of China's AI industry and to curb chaos and infringement.

AI practitioners, for their part, need to follow national regulatory policy, remain sensitive to AI ethics, and unify technological development with corporate responsibility.

Industry observer Fumiko said that research on AI safety dates back to 2008 and now spans many fields. In a 2022 survey of the natural language processing (NLP) community, 37% agreed or weakly agreed that AI decisions could lead to a catastrophe "at least as bad as an all-out nuclear war." There are dissenting voices too: Andrew Ng, an adjunct professor at Stanford University, likened such fears to "worrying about overpopulation on Mars before we have even set foot on the planet."

Dr. Samuel Bowman's view is a more pertinent framing of the problem: rather than halting AI/ML research worldwide, people should ensure that all sufficiently powerful AI systems are built and deployed responsibly.

Zhang Jingke, founder of Internet Beijing Diary, said that given the massive volume of information AI produces at high speed, humanity will inevitably face false-information problems more than a hundred times worse than those of the internet information explosion.

To protect most people, we can draw on traditional news and copyright-protection models: the source of disseminated information should be labeled, and if content is AI-written, that authorship should be marked. This is a necessary price that netizens and capital pay for eliminating information intermediaries; in the long run, unsupervised free information will be neither fair nor efficient.

How far has the regulatory framework for generative AI progressed? How can applications of AI technology avoid legal risks?

Chu Shaojun said that, first, in communication and public opinion, "good news stays indoors while bad news travels a thousand miles." Fraud using AI is, in essence, nothing new, just one of thousands of fraud methods; but because it is bound up with currently popular AI technologies and applications, it easily ferments into a trending event, draws broad public attention, and can even coalesce into a crusade against new technologies and new fields, with calls for regulation erupting in a short time. For now, however, there is no need to over-focus on new technologies and fields or to impose special regulation prematurely; any new technology needs time and tolerance to develop and grow, sometimes even a certain space for "barbaric growth."

Second, for new fields and technologies, regulation and legislation usually lag behind, so more depends on the self-discipline of enterprises and industries: in developing business and technology, companies should weigh their social responsibility more heavily and build prediction and prevention into research, development, and design. At the same time, user education and science popularization should be strengthened so that ordinary audiences understand new technologies and problems are forestalled. Users themselves should also keep learning, stay alert, double-check any request involving money, lending, or transfers through other channels, and never believe that "pies fall from the sky."

Third, new technologies can be used for fraud, but from another angle they can also be used for fraud prevention. Technology is mostly neutral; what matters is how it is applied and regulated. The current frequency of infringement and fraud incidents will not restrict the industry's development; rather, it will push the industry toward standardization. After a period of barbaric growth, frequent negative events trigger public-opinion oversight and greater industry self-discipline on the one hand, and accelerate regulation and legislation on the other, ultimately pushing the industry toward more standardized development.

Finally, guard against extreme thinking and regulatory overreach. Even when negative events occur, regulation and legislation are needed on the one hand, but tolerance and time must also be considered so the industry has room to develop. After all, AI, and AIGC in particular, may matter in the future not only to enterprises and the industry but also to a country's voice in the technology arena, and all parties share responsibility for making new technologies develop better and more steadily.

Internet technology blogger Pi Pitui said: AI technology is maturing rapidly; are there risks, and how can they be avoided? The answer given by Tongyi Qianwen is that the development of AI has indeed brought people many conveniences and opportunities, but also some risks and challenges.

Mitigating AI risks can start from the following points. First, data privacy: the collection and use of personal data must comply with strict privacy-protection regulations so that data is not misused or leaked. Second, transparency and explainability: AI systems should be transparent and explainable so that people can understand their decisions and behavior. Third, security: ensure the security of AI systems, including defense against malicious attacks, data backup and recovery, and error correction. Fourth, legal compliance: ensure that AI systems comply with relevant laws, regulations, and ethical norms. Fifth, technical quality: ensure the technical soundness of AI systems and prevent abuse or misuse of the technology. Finally, cooperation and collaboration: work with other stakeholders to ensure that AI systems are developed and applied in line with the public interest and social responsibility.

In short, mitigating AI risks requires that AI systems be developed, used, and managed in accordance with ethical, legal, and safety standards, along with stronger education and public participation, to promote the healthy development of AI technologies.

Zheng Yang, director of strategic development at Seedfire International, said that current AI fraud mainly involves voice synthesis, AI face swapping, and the use of big data and AI to screen information and target victims. Technology is a double-edged sword: SMS and phone scams, account theft, P2P lending, and other historical episodes have repeatedly verified one truth, namely that every iteration of new technology brings a wave of technical fraud.

Thinking in reverse: first, the core of AI fraud is fakery, so prevention and regulation should focus on making AI-generated content easier to identify and on helping ordinary people understand the risks AI technology can pose (popularizing common sense). Second, the foundation of AI fraud is data, so prevention and regulation must focus on stopping the leakage and abuse of personal information.

In addition, from the regulatory perspective, on the one hand, precise education and preventive outreach should target vulnerable groups in advance, such as empty-nest elderly people and fervent fan communities; on the other hand, channels such as online matchmaking, dating, lending, and online gaming should be strictly supervised.

In marketizing any technology, its social value and economic value must be balanced. For AI today, the prominent social issues include privacy and data-protection risks, intellectual-property infringement risks, and moral and ethical risks. These problems will inevitably affect the pace of AI's commercialization, but that is itself a necessary part of a technology's landing.

For AI technology to fundamentally reduce legal risk, on the one hand the companies producing AI tools must self-regulate their platforms; for example, Google's move to label every AI-generated image created by its tools is a good start. On the other hand, the relevant authorities need to establish regulatory laws, regulations, and standards as soon as possible and clarify legal boundaries and responsible parties; for example, the draft measures issued by the Cyberspace Administration of China in April require organizations and individuals providing AI services to bear the responsibilities of a content producer.
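Labeling schemes like the one described above can, at their simplest, attach a signed provenance record to each generated file so that the "AI-generated" mark cannot be silently stripped or forged. The sketch below is a minimal illustration, not any vendor's actual format; the field names and the signing key are assumptions for demonstration:

```python
import hashlib
import hmac
import json

# Illustrative key only; a real service would use managed, rotated keys.
SECRET_KEY = b"demo-signing-key"

def make_provenance_record(media_bytes: bytes, generator: str) -> dict:
    """Build a signed record marking a piece of media as AI-generated."""
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Check that the media matches the record and the label was not forged."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record.get("signature", ""))
            and hashlib.sha256(media_bytes).hexdigest() == unsigned["sha256"])

fake_image = b"...synthetic face bytes..."
rec = make_provenance_record(fake_image, generator="example-model-v1")
print(verify_provenance(fake_image, rec))         # True: label intact
print(verify_provenance(b"tampered bytes", rec))  # False: media replaced
```

Content-hash binding means the label travels with the exact bytes it describes; production schemes (such as embedded watermarks or C2PA-style manifests) are far more elaborate, but the verification principle is the same.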

Wei Li, founder of Vigorous Finance, said that judging from the AI fraud cases disclosed so far, the technologies involved are mainly deep-synthesis technologies, including AI face swapping, speech (voice) synthesis, and text-generation models. Criminals can use face-swapping technology to create fake videos or photos and impersonate others.

At the technical level, many technology companies and researchers are actively building "technical countermeasures" to identify such deep-synthesis content. These detectors are usually themselves based on deep learning, for example identifying AI-generated video by analyzing video features and processing traces and by detecting "inconsistencies" in facial features.
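As a deliberately simplified illustration of the "inconsistency" idea mentioned above, a detector can flag clips in which a facial measurement jumps implausibly between frames, which real face-swapped video sometimes exhibits. The threshold and the notion of a per-frame "inter-eye distance" are assumptions for this toy sketch, standing in for the deep-learning detectors the text describes:

```python
def flag_temporal_inconsistency(eye_distances, max_jump=0.15):
    """Flag frame indices where a normalized facial measurement
    (e.g. inter-eye distance) changes implausibly fast between
    consecutive frames: a crude proxy for deepfake temporal artifacts."""
    suspicious = []
    for i in range(1, len(eye_distances)):
        jump = abs(eye_distances[i] - eye_distances[i - 1])
        if jump > max_jump:
            suspicious.append(i)
    return suspicious

# A mostly stable face track with one sudden glitch at frame 3:
track = [0.30, 0.31, 0.30, 0.55, 0.31, 0.30]
print(flag_temporal_inconsistency(track))  # [3, 4]: the jump in and out of the glitch
```

A real system would extract such measurements with a face-landmark model and combine many signals; the point here is only that synthetic video tends to betray itself through statistics no single frame reveals.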

In legal prevention and control, state organs, platforms, and individuals must work together: while platforms keep improving their review and monitoring capabilities under regulatory requirements, individuals must also stay vigilant at all times. It may also become necessary to build shared, collaborative databases that collect and store confirmed fraud-video samples, or enforcement mechanisms such as anti-AI-fraud alliances.

How to make artificial intelligence "technology for good" has become an urgent problem. First, data privacy and security are critical: supervision and management of AI must be strengthened to prevent the misuse and leakage of personal information. Second, AI needs improvement in safety, reliability, transparency, and fairness. Finally, education about AI should be strengthened to raise the public's technological literacy and safety awareness and reduce victimization.

Will the current frequent infringement and fraud incidents constrain the industry's development?

Zu Tengfei, a veteran media professional, said that "technology itself is not guilty"; the key is who uses it. AI is not new to today, and neither are the "bad actors" who profit from fraud, and we cannot give up eating for fear of choking. Globally, AI is a development direction for nearly every country and can be applied in many fields to free up productivity and improve production efficiency.

Tracing it to the root, the source of AI fraud is personal-information leakage. Earlier, leaked personal information brought everyone harassing calls and text-message bombardment; now, with AI fraud added, personal-information protection has reached the point where it must be addressed. All kinds of apps demand access to contacts, photos, location, and more at install time; is that information effectively protected by the software companies, or is it resold for profit by people with ulterior motives?

To deal with this new type of AI fraud, the anti-fraud bloggers' standard advice still applies: protect personal information, verify unusual requests, don't transfer money or make payments hastily, and so on. Meanwhile, the companies involved should strictly abide by relevant policies, improve AI ethics, and strengthen safety supervision measures.

Tang Chen said that negative cases in AIGC's development are inevitable; AI fraud, digital humans, and face swaps are all expressions of what artificial-intelligence tools can do. Avoiding AI's negative effects fundamentally comes down to regulating who uses it. A few days ago, Sun Yanzi responded to the copyright dispute over "AI Sun Yanzi," saying: "Anything is possible, nothing really matters; I think it is enough to keep your thoughts pure and be yourself." Her response won wide public praise, not only for its simple, well-written grace but, more significantly, because it displays a human's confidence in being human: confidence that artificial intelligence cannot replace humans in the short term. Her attitude is a useful reference for the public.

The same logic applies to any industry AI is transforming. As OpenAI CEO Sam Altman put it, AI will reshape society as we know it; it may be "the greatest technology humanity has yet developed" and will greatly improve human life, but facing it squarely means acknowledging real dangers, and people should be glad to be a little afraid of it. Only with awe can we keep artificial intelligence in the service of humanity as the technology develops. In this process, what worries people is not the technology itself, just as Sun Yanzi is not overly worried about AI Sun Yanzi, but human motives in using it. Everyone should think harder: what do we want technology to evolve into, and what will technology turn humans into? That may be the essence of the problem.

"Titanium Hot Commentary" is a hot event observation column launched by Titanium Media, which mainly invites media people and industry practitioners with unique insights and in-depth observations on the development of different industries and different business models to comprehensively display the impact and significance of the event through multi-angle interpretation.