laitimes

"AI scams" are rampant, and artificial intelligence security needs to be increased urgently

author:21st Century Business Herald

The "double-edged sword" effect of artificial intelligence is becoming increasingly apparent.

AI technology has shown full potential to spark technological change and improve productivity and efficiency, but at the same time, the security risks it poses are becoming more and more prominent. The use of deepfakes to create illusions, manipulate public opinion, and even carry out fraudulent activities has disrupted normal public life, and new methods of "AI scams" have emerged one after another, making it difficult to prevent them.

It should not be ignored that the emergence of malicious AI models, phishing attacks, and the security risks of LLMs themselves have amplified or even spawned new security risks at different levels, bringing threats to economic security, information security, algorithm security, and data security. Analysts at China Galaxy Securities believe that artificial intelligence has the risk of infringement in terms of intellectual property rights, labor rights and interests, and ethical guidelines.

With the issuance of AI-related bills such as the Global AI Governance Initiative of the Chinese mainland, the Executive Order on Safe, Reliable and Trustworthy Artificial Intelligence of the United States, and the European Union's Artificial Intelligence Act, the relevant laws and regulations on AI security have been improved, and the legal practice on AI security issues is expected to be gradually implemented.

At the macro level, the regulator has "shot". At the micro level, the current situation has also put forward corresponding requirements for various subjects. At the enterprise level, analysts from the Institute pointed out that enterprises in the AI field should consider building corresponding AI security and compliance systems to form a systematic guarantee for AI security and compliance issues, and for individual citizens, staying vigilant against false information is the first step to strengthen their own protection in the face of threats.

"AI scams" are rampant, and artificial intelligence security needs to be increased urgently

Image source: Visual China

"Pervasive"

Since the advent of generative AI, visible and invisible changes and disruptions are continuing, because of its convenience and wide range of applications, AI has penetrated into all aspects of people's work and life, and the corresponding risk of AI abuse is increasingly revealed. Among them, deepfakes are "notorious".

Deep fake is a technology that uses artificial intelligence technology to synthesize human images, audio and video, making the fake content more realistic, which may breed a variety of new types of illegal crimes, such as using AI to synthesize fake character images or pornographic videos for insult and defamation, extortion, and manipulating public opinion to create or promote online violence, which poses a threat to social security.

With advances in technologies such as generative AI full-body motion generation, the credibility of deepfakes has increased further. The use of AI tools to create fake pornographic videos for dissemination, blackmail and other malicious behaviors has attracted widespread attention. In June last year, the Cybercrime Complaint Center (IC3) issued a public service bulletin in which the Federal Bureau of Investigation (FBI) warned the public to be wary of malicious actors using images and videos posted on victims' social media to AI tamper, extort and harass them.

The second is the threat posed by malicious AI models.

Malicious AI large models refer to a type of illegal large models that are manipulated by illegal organizations or criminals, imitate legal models such as ChatGPT with the help of open-source models, and are bred based on harmful corpus training, and are specially used for illegal behaviors such as cybercrime and fraud.

It is reported that the first malicious large model, WormGPT, was released on the dark web in July 2021, mainly used to generate complex phishing and business email attacks and write malicious code.

The direct purpose of the malicious model is to use it for all kinds of illegal behaviors, and the threat brought by it has a clear direction. Malicious large models mainly run on the dark web, which is more concealed and harmful, and causes harm to national security, industry innovation, production and life, etc.

AI is closely related to data and algorithms, which also brings potential security risks, mainly involving data privacy and protection, algorithmic bias and discrimination, and cross-border data flow.

On the one hand, AI may collect a lot of private or sensitive data during its interaction with users, which is also used to train AI models. This data includes, but is not limited to, personally identifiable information, location data, spending habits, and even biometric information. If they are not adequately protected or mishandled, they can lead to serious privacy breaches, which in turn harm the rights and interests of individuals and even threaten the public safety of society. On the other hand, AI can collect countless seemingly irrelevant data fragments based on it, and obtain more information related to users' personal information or privacy through in-depth mining and analysis, resulting in the ineffectiveness of current security protection measures such as data anonymization.

Algorithmic bias and discrimination can lead to unfair decision-making. At the same time, according to the operating principle of some AI products, when users interact with each other in the dialog box, the relevant Q&A data may be transmitted to the product development company located overseas, and the cross-border flow of data during this period may cause cross-border data security issues.

On this basis, the high dependence of AI products on algorithms, computing power, and data may lead to an oligopoly of technology companies.

In addition to these threats posed by AI, large language models (LLMs) also pose security risks. The "2024 Artificial Intelligence Security Report" points out that AI and large language models themselves are accompanied by security risks, and the research and attention to potential impacts in the industry are still far from enough.

OWASP has released 10 major security vulnerabilities in LLMs, including insecure processing inputs, poisoning of training data, supply chain vulnerabilities, sensitive information disclosure, and over-reliance, and Tencent Suzaku Lab has also released the AI Security Threat Risk Matrix (hereinafter referred to as the "Matrix"). The matrix looks at various security problems that may exist in the field of artificial intelligence from the perspective of the whole life cycle, involving the whole process from the creation to the use of AI models such as environmental contact, data collection and collation, and model training, and points out the potential risks that may exist in each link of AI, including malicious access to Docker in the environmental contact stage, data leakage attacks in the model use stage, etc.

It should be noted that the threat related to hardware sensors will harm human health and even life safety more directly. Highly autonomous AI systems, such as driverless cars and medical robots, can cause serious security incidents in the event of data leakage and poor network connectivity.

In February 2021, the European Union Cybersecurity Agency (ENISA) and the Joint Research Centre (JRC) released a report titled "Cybersecurity Challenges for the Adoption of AI in Autonomous Driving". The report summarizes four cybersecurity threats: sensor jamming, blinding, spoofing, or saturation, in which attackers can manipulate AI models to reduce the effectiveness of automated decision-making; The second is a DoS/DDoS attack, which interrupts the communication channels available to the autonomous vehicle, so that the vehicle cannot see the outside world, and interferes with the autonomous driving and causes the vehicle to stall or malfunction; the third is to manipulate the communication equipment of the autonomous vehicle, hijack the communication channel and manipulate sensor readings, or misinterpret road information and signs; and the fourth is information leakage.

In addition, the threats posed by AI include the use of AI to automate attacks, malware, phishing emails, password blasting, captcha cracking, and technical support for social engineering.

Risks and benefits are coexisting, and they are almost "pervasive" in modern society.

AI security governance is on the way

At the legislative level, AI security has taken an important step.

This year, the Artificial Intelligence Act (AIA), the world's first comprehensive law on artificial intelligence, was formally approved by the European Parliament last week, and the relevant provisions will be implemented in stages. The Act classifies AI systems according to risk level to define "high-risk" AI systems that pose a significant risk to human health and safety or fundamental rights, while for certain specific AI systems, only a minimum transparency obligation is imposed. One of its main objectives is to prevent any threat to health and safety posed by AI and to protect fundamental rights and values.

After a series of negative social events brought about by deepfakes, the U.S. government signed the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence at the end of October 2023. The executive order includes eight goals, including establishing new standards for AI security and protecting the privacy of the American people. Analysts at China Galaxy Securities believe that the bill focuses on a high degree of protection of security and privacy, preventing the aggravation of unfairness and discrimination caused by artificial intelligence, protecting individual rights, and improving the government's control over artificial intelligence data and technology.

Mainland China will guide technological development and strengthen supervision "go hand in hand", and is committed to relevant departments and industries to jointly improve the AI regulatory system, and relevant legal documents have been launched one after another to provide a legal basis for guiding the development and supervision of AI.

In June 2023, the State Council announced that the "Artificial Intelligence Law" was on the legislative agenda. In October 2023, the Cyberspace Administration of China ("CAC") released the Global AI Governance Initiative, which systematically elaborated China's AI governance plan on AI development, security, and governance, emphasizing that countries should strengthen information exchange and technical cooperation in AI governance, so as to jointly promote the establishment of a framework of AI governance norms and industry standards.

At the industry level, recently, dozens of Chinese and foreign experts, including Turing Award winners Joshua Bengio, Jeffrey Hinton, Yao Qizhi, etc., jointly signed the "Beijing International Consensus on AI Security" in Beijing, requiring that no AI system should copy or improve itself without the explicit approval and assistance of humans, including making exact copies of itself and creating new AI systems with similar or higher capabilities. "Assisting bad actors" means that all AI systems should not assist in enhancing the capabilities of their users to the level of experts in the field of cyber attacks that design weapons of mass destruction, violate biological or chemical weapons conventions, or enforce cyberattacks that cause serious financial loss or equivalent harm.

The above bills and regulations all emphasize the risks of AI in social security, algorithm security, and technological ethics security, and give corresponding guidance and suggestions.

In terms of combating AI disinformation generation such as deepfakes, the Biden administration has introduced regulations and measures that require AI-generated content to be marked and watermarked, and to protect users from chatbots. European Commission Vice President Vera Jourova also announced in June last year that more than 40 tech companies, including Google, Douyin International, Microsoft, Facebook and Instagram's parent company, would be asked to detect AI-generated images, videos and text and provide users with clear markups. The EU Bill also requires users of AI systems that generate deep synthetic content to inform that the content is generated or manipulated by AI and is not the real content.

At the same time, countries and regions such as the mainland, the United States, and the European Union regard data security as one of the most important issues in AI security. According to analysts at King & Wood Mallesons, the EU Act requires transparency requirements for high-risk AI systems throughout the life cycle of the system, and requires AI system providers to disclose specific information to downstream AI system deployers and distributors.

In 2022, the General Office of the Central Committee of the Communist Party of China and the General Office of the State Council issued the Opinions on Strengthening the Ethical Governance of Science and Technology, which is the first national-level guiding document on the ethical governance of science and technology in mainland China, and puts forward the principles and basic requirements for the ethical governance of science and technology. The newly released Measures for the Ethical Review of Science and Technology (Trial) in October 2023 put forward unified requirements for the basic procedures, standards, and conditions for ethical review of science and technology, marking a new stage in the construction of the regulatory system for AI ethical governance in mainland China.

The construction of laws and regulations on AI security has been put on the agenda, and the introduction of various laws and regulations has gradually drawn a "dividing line" for the emerging field of AI.

What AI can and should not do, where it should be regulated, and what should be strictly prohibited will gradually become clear in these legal events. However, the effectiveness and timeliness of these measures still need to be tested.

There are different opinions on the specific response measures to the security threats brought by AI, but there is generally a common direction, that is, branch division and collaborative governance.

Analysts from King & Wood Mallesons Research Institute believe that in the current practice in mainland China, enterprises in the AI field should consider building corresponding AI security and compliance systems, so as to form a systematic guarantee for AI security and compliance issues. In terms of internal compliance management system, it is recommended that enterprises in the AI field judge whether they need to apply for special qualification certificates according to their actual conditions, and at the level of external cooperation, it is proposed that enterprises should consider the requirements of AI security, and at the level of Internet applications, it is pointed out that enterprises should consider the risks caused by information security and data security when involving Internet applications such as the AIGC platform, and revise user agreements, privacy policies and other documents.

AI security is an issue that cannot be ignored and updated in real time, and there is a long way to go, and there may be challenges beyond the existing understanding, which requires the unremitting pursuit of every subject.

For more information, please download the 21 Finance APP

Read on