
Artificial Intelligence Industry Special Report: AI regulation creates a balance between innovation and safety

Author: Think Tank of the Future

(Report producer/author: Soochow Securities, Wang Zijing)

1. AI Development Raises New Security Issues

New technologies bring new security issues. Generative artificial intelligence, represented by ChatGPT, is reshaping and even upending how digital content is produced and consumed, and is increasingly driving change across industries. Since the beginning of this year, large model technology and AIGC have rapidly converged and developed: content generated by large models can pass for the real thing, the barrier to use keeps falling, and anyone can easily perform "face swapping" and "voice swapping". Misinformation, bias, discrimination, and even ideological infiltration caused by AIGC abuse are hard to avoid, posing significant risks to individuals, institutions, and even national security. The regulation of generative AI services has therefore become a major issue in global governance. At present, the new security issues raised by AI fall mainly into two categories: AIGC content security and data security.

1.1. AIGC Content Security Issues

Generative AI can be directed to produce text, images, and videos containing harmful content, or be put to improper uses, raising concerns about cybersecurity, social security, and ideology.

1) Cybersecurity: LLMs can be used to generate phishing emails, and prompts can make an LLM imitate the language style of a specific individual or group, making the emails more credible. Security firm Check Point Research said in a recent report that hackers on the dark web have been found trying to bypass restrictions and use ChatGPT to generate phishing emails. In addition, LLMs can assist in writing malicious code, lowering the barrier to entry for cyberattacks.

2) False information: First, deep synthesis has become a fraud technique: scammers can use AI face swapping and voice cloning to impersonate acquaintances. Second, false content harms society: generative AI makes disinformation easier, faster, and cheaper to produce, and AI-generated disinformation can adversely affect society. On the morning of May 22 (US time), an AI-generated image purporting to show an explosion near the Pentagon went viral on social networks; according to a Global Network report, the US stock market fell noticeably at the moment the picture began to circulate.

3) Ideology: To improve AI performance on sensitive and complex questions, developers usually feed answers reflecting the values they consider correct into the training process via reinforcement learning. This can lead the AI to generate biased responses to complex questions of politics, ethics, and morality. According to OpenAI's March GPT-4 System Card, GPT-4 models have the potential to reinforce and reproduce specific biases and worldviews, and model behavior can also reinforce stereotypes or cause demeaning harm. For example, the models tend to take an evasive approach when asked whether women should be allowed to vote.


1.2. Data Security Issues

Data breaches raised concerns, and ChatGPT was banned by several companies. ChatGPT leaked information due to an open-source library bug; the leaked data included device information, conversation content, and subscriber information. According to Cyberhaven, the potential risk of data leakage may be even higher: in one week, 319 out of every 100,000 employees entered sensitive company data into ChatGPT, up from two months earlier. To date, Apple, JPMorgan Chase, Samsung, and many other companies have banned their employees from sharing confidential information with chatbots such as ChatGPT.

Data leakage is difficult to address with traditional technical means. The risk is that prompts entered by users while interacting with an LLM may be used in the LLM's iterative training and then surface to other users through later interactions. Most DLP solutions are designed to identify and block the transfer of specific files and recognizable PII. But text entered into an LLM is far more varied, different enterprises define confidential data differently, and input formats will grow richer as LLMs become multimodal, all of which makes the leakage problem hard to solve with traditional DLP.
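
To make the basic idea concrete, here is a minimal sketch of a prompt-level DLP check run before text is sent to an LLM. The regex patterns, keyword list, and block-on-hit policy are illustrative assumptions, not any vendor's actual DLP rules; it also shows why this approach is brittle, since free-form confidential text easily evades fixed patterns.

```python
# A minimal sketch of a pre-send DLP check for LLM prompts (illustrative only).
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "cn_id_number": re.compile(r"\b\d{17}[\dXx]\b"),  # mainland ID card format
}
# Hypothetical keyword lexicon; real enterprises define "confidential" differently.
CONFIDENTIAL_KEYWORDS = ["internal only", "confidential", "meeting minutes"]

def check_prompt(prompt: str) -> list[str]:
    """Return the policy violations found in a prompt before it reaches an LLM."""
    hits = [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]
    lowered = prompt.lower()
    hits += [f"keyword:{kw}" for kw in CONFIDENTIAL_KEYWORDS if kw in lowered]
    return hits

if __name__ == "__main__":
    sample = "Summarize the confidential meeting minutes; contact alice@example.com"
    violations = check_prompt(sample)
    if violations:
        print("Blocked, violations:", violations)  # a real system might redact instead
```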

2. AI Regulation: Policies and regulations take precedence

Given the security issues AI may pose to society, safety standards, laws and regulations, and self-regulation are the cornerstones of AI regulation. At the government level, regulatory policies are urgently needed to achieve full coverage and to standardize AIGC-related elements by stage and by process. At the enterprise level, regulation is also needed to dispel society's distrust of large AI models. Take OpenAI as an example: as AI regulation in European countries tightens, OpenAI has adjusted its data management measures to meet regulatory requirements. First, led by Italy, European countries successively opened investigations into ChatGPT on data security grounds: on March 31, the Italian Data Protection Authority temporarily banned ChatGPT for violating the General Data Protection Regulation (GDPR) and issued a series of rectification demands, after which Germany, France, the EU and others followed with data regulatory measures. Second, OpenAI actively cooperated with regulators and adjusted its data management: on April 5 it met with the Italian authority and expressed willingness to cooperate, and on April 25 it changed ChatGPT's data settings to give users the right not to share their data with OpenAI for model training.

2.1. Government: Ensure the orderly prosperity of the AI industry with laws and regulations

Worldwide, to build a trustworthy AI ecosystem, China, the United States, and the European Union are all exploring AI governance and regulating AI development through corresponding laws and regulations; AI regulation is an inevitable part of this process.

2.1.1. Strong regulation in Europe and breakthrough progress in legislation

The EU regulates AI comprehensively through dedicated legislation. In April 2021, the European Commission proposed the Artificial Intelligence Act. On June 14, 2023, the European Parliament passed its draft of the Act, the latest breakthrough in EU AI governance. Under the legislative process, the bill next enters formal trilogue negotiations among the European Commission, the Parliament, and the member states to determine its final text. The Act classifies AI applications into different risk levels and imposes restrictions of different stringency on each level. As the world's first comprehensive AI governance legislation, it is expected to become the benchmark for AI legal regulation and to be widely referenced by regulators in other countries.


The EU has become the pioneer of AI legislation, and the Brussels effect is expected to recur. The Brussels effect refers to the EU externalizing its unilateral regulation globally through market mechanisms, so that regulated entities end up subject to EU law even outside the EU. There are two main reasons: 1) The EU has a consumer market larger than the United States' and wealthier than China's. For many companies, the benefits of entering the EU market outweigh the costs of meeting the EU's stringent standards; at the same time, the EU has built a comprehensive institutional framework and has the political determination to enforce its provisions. 2) The EU has wide-ranging sanction powers and can bar products or services from its market. The possibility of losing market access effectively deters non-compliance, encourages companies to comply with EU regulations, and leads them to voluntarily apply EU standards across their global operations, so that EU standards become global standards.

The EU's Artificial Intelligence Act has extraterritorial effect and is now approaching the final stage before EU regulation begins; after formal implementation, it is expected to further propel global AI regulation through the Brussels effect.

2.1.2. Weak regulation in the United States encourages industry self-regulation, and regulation has accelerated recently

In October 2022, the White House released the Blueprint for an AI Bill of Rights, and in January 2023 the U.S. Department of Commerce released the AI Risk Management Framework. The Blueprint is a set of principles, with accompanying technical guidance, for protecting individuals from harm and discrimination; it identifies specific ways AI systems can affect those principles and general steps to mitigate adverse impacts, while the Framework provides tools for implementing the Blueprint's principles across organizations. Unlike the EU's AI Act, the Blueprint and the Framework are non-mandatory guidance documents without the force of law, intended for voluntary use by institutions that design, develop, deploy, and use AI systems. This weak regulation encourages industry self-regulation: although the White House has issued federal guidance on AI harms, it has yet to develop a unified approach to controlling AI risks, and the United States has so far remained lightly regulated at the legislative and institutional levels, encouraging enterprises to rely on industry self-discipline and to implement government safety principles voluntarily. Recently, government attention has grown and AI regulation has accelerated. According to a White House statement, senior officials have met two to three times a week in recent months to discuss AI issues; Senate Majority Leader Chuck Schumer unveiled a legislative framework for AI regulation and said a federal AI bill would be enacted within months; and a bipartisan, bicameral group of lawmakers introduced the National Artificial Intelligence Commission Act.

2.1.3. China's first generative AI service regulatory document was released, focusing on AIGC content security

China's first regulatory document for generative AI services has been released, focusing on AIGC content security. The Provisions on the Administration of Deep Synthesis of Internet Information Services, promulgated in November 2022 and effective January 10, 2023, imposed legal restrictions on deep synthesis technologies, as represented by AI face swapping. On April 11, 2023, the Cyberspace Administration of China (CAC) issued the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments); on July 13, 2023, the CAC, together with the National Development and Reform Commission, the Ministry of Education, the Ministry of Science and Technology, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration, promulgated the Interim Measures for the Administration of Generative Artificial Intelligence Services, effective August 15, 2023. The Measures focus on AIGC content security, stating that the state attaches equal importance to development and security, promotes innovation alongside governance in accordance with law, takes effective measures to encourage the innovative development of generative AI, and applies inclusive, prudent, and tiered supervision to generative AI services. With AIGC regulatory policy now in force, an artificial intelligence law is on the agenda: on June 20, the CAC released the first batch of deep synthesis service algorithm filings under the Provisions, with Baidu, Alibaba, Tencent, and others on the list, and the State Council's 2023 Legislative Work Plan includes the Artificial Intelligence Law, with a draft to be submitted to the Standing Committee of the National People's Congress for deliberation within the year.

2.2. Enterprise level: strengthen self-regulation and standardize industry order

Enterprises are strengthening self-regulation and standardizing industry order. AI companies have been proactive toward regulation. On April 25, 2023, the Business Software Alliance (BSA), a U.S. technology advocacy organization representing Microsoft, Adobe, IBM, Oracle, and other AI giants, publicly called for rules governing AI use to be built on national privacy legislation, making four clear appeals to the US Congress in an attempt to guide its legislative direction. The same day, Zhihu announced a crackdown on accounts that publish AIGC content in bulk. On May 9, 2023, Douyin released its Platform Specification and Industry Initiative on AI-Generated Content, offering unified labeling capabilities for AI-generated content to help creators mark it and users distinguish it. On May 22, 2023, three OpenAI co-founders published an article asking governments to consider forming an "International Atomic Energy Agency" for the AI industry, modeled on nuclear regulation, to set global rules.

3. AI Regulation: Balancing innovation and safety, with a variety of regulatory means

AI supervision mainly targets illegal applications of AIGC technology, addressing the harm AIGC may bring to society, politics, finance, and education, which is also key to national and social security. Regulation therefore needs breakthroughs at two levels: security mechanisms and technical means.

3.1. Introduction of Security Mechanisms

At this stage, the content security mechanisms of domestic AIGC applications mainly include: 1) Training data cleaning: data used to train AI capabilities must be cleaned to remove harmful content from the training corpus; 2) Algorithm filing and security assessment: AI algorithms must be filed in accordance with the Provisions on the Administration of Internet Information Service Algorithm Recommendations and undergo security assessment; 3) Prompt filtering: the platform must filter and intercept prompts and prompt content to prevent users from submitting illegal material; 4) Generated-content blocking: the platform filters and intercepts content generated by AI algorithms to avoid producing harmful output; 5) Prominent labeling of AI-generated content: when generating multimedia content, AI authoring tools can embed identification metadata in multimedia files so that different platforms and tools can recognize one another's labels. A minimal sketch of mechanisms 3) and 4) follows.
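
The sketch below strings prompt filtering and output blocking into one moderated call. The word lists and the `call_model` function are hypothetical placeholders standing in for a real moderation lexicon and a real generative model.

```python
# A minimal sketch of input filtering (mechanism 3) and output blocking (mechanism 4).
BANNED_INPUT_TERMS = {"make a bomb", "phishing template"}   # illustrative only
BANNED_OUTPUT_TERMS = {"step-by-step exploit"}              # illustrative only

def call_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for the actual generative model")

def moderated_generate(prompt: str) -> str:
    # Inbound check: intercept prompts that request illegal content.
    if any(term in prompt.lower() for term in BANNED_INPUT_TERMS):
        return "[request rejected by input filter]"
    output = call_model(prompt)
    # Outbound check: withhold generated content that trips the output lexicon.
    if any(term in output.lower() for term in BANNED_OUTPUT_TERMS):
        return "[response withheld by output filter]"
    return output
```

Production systems typically replace the keyword sets with trained classifiers, but the two-gate structure around the model call is the same.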

3.2. Rich means of technical supervision

1) Using AI to identify whether content is AI-generated: for example, the deep synthesis content detection platform AIGC-X, released by People's Network's National Key Laboratory of Communication Content Cognition, adopts an algorithm-fusion, knowledge-driven AI framework, uses deep models to capture implicit features such as perplexity and burstiness, and learns the distributional differences between machine-generated and human-written text. The platform serves the content risk control needs of media and Internet platforms, providing AI-generated content identification and disinformation identification services. Public test data show that AIGC-X achieves over 90% accuracy across various AI content generation platforms.
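
The perplexity feature can be illustrated with open tools. Below is a hedged sketch using the public GPT-2 model from the Hugging Face transformers library as a scorer; AIGC-X's actual models and thresholds are not public, so this only demonstrates the general idea that machine-generated text tends to score lower perplexity under a similar language model than human text.

```python
# A sketch of perplexity scoring for AI-text detection (illustrative, not AIGC-X).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of the text under GPT-2: exp of the mean token cross-entropy."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# A detector would combine this with burstiness (variance of per-sentence scores)
# and feed both into a trained classifier rather than using a fixed cutoff.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```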

Technically, in face recognition scenarios, detection can start from three angles. Generation defects: for lack of relevant training data, deepfake models may miss physiological common sense and fail to render certain facial features correctly. Intrinsic properties: the noise fingerprints inherent to the generation tool or the camera's light sensor. High-level semantics: checking the coordination of facial action units (muscle groups), the alignment of facial regions, and frame-to-frame micro-continuity in video; these details are hard to model and hard to replicate, so they readily expose telltale flaws.
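
The "intrinsic properties" angle can be sketched concretely: GAN upsampling often leaves periodic artifacts visible in an image's frequency spectrum. The ring cutoff and the idea of comparing against a real-image baseline below are illustrative assumptions, not a production detector.

```python
# A minimal frequency-domain fingerprint check (illustrative sketch).
import numpy as np
from PIL import Image

def high_freq_energy_ratio(path: str) -> float:
    """Fraction of spectral energy in the high-frequency ring of an image."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    high = spectrum[radius > min(h, w) / 4].sum()   # outer ring = high frequencies
    return high / spectrum.sum()

# Usage: compare the ratio against a baseline computed on known-real photos;
# unusually strong high-frequency peaks can indicate generator fingerprints.
```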

2) Using AI to identify illegal content: for example, Xinhuanet's fact verification robot, based on AI algorithms independently developed by Xinhua Zhiyun, offers text, image, video, and audio detection, performing security verification across media types, standardizing news writing, supporting a human-machine interactive review platform, and building an intelligent, efficient protection system that helps enterprises cut costs and raise efficiency.

3) Using AI for security supervision and anti-fraud. Statistical analysis: apply comparative, trend, distribution, funnel, and other analyses to mine features such as consistency and concentration in the data and uncover fraud patterns; combining data analysis, customer segmentation, and scenario-based prior knowledge yields models with good metrics. Rules plus simple statistical models: construct statistical, fitted, and categorical features from user registration, login, consumption, and transfer data, then apply exponential moving averages, LOF, IForest, Holt-Winters, ARIMA, and similar algorithms to find anomalies (a minimal sketch follows this paragraph). Supervised learning on a fraud knowledge base: mine hidden fraud patterns from the accumulated knowledge base and serve real-time online predictions; commonly used algorithms include XGBoost, DeepFFM, XDeepFM, Wide & Deep, and DIN.
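
As a sketch of the "rules plus simple statistical models" idea, here is the IForest algorithm named above, via scikit-learn's IsolationForest. The behavioral features and the synthetic data are hypothetical stand-ins for real registration/login/transfer aggregates.

```python
# Anomaly detection on synthetic account-behavior features (illustrative sketch).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# columns: logins_per_day, transfer_count, avg_transfer_amount (synthetic)
normal = rng.normal(loc=[5, 2, 300], scale=[2, 1, 80], size=(500, 3))
fraud = rng.normal(loc=[40, 30, 5000], scale=[5, 5, 500], size=(10, 3))
X = np.vstack([normal, fraud])

clf = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = clf.predict(X)                  # -1 marks anomalies
print("flagged:", int((labels == -1).sum()), "accounts")
```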

Using machine learning to improve expert rule strategies: 1) Data-driven automatic tuning of rule thresholds and weights to keep rules continuously effective; this involves feature discretization, feature selection, dimensionality reduction, and weight-parameter regression. 2) For discovering new rules, Apriori and FpGrowth are mainly used to mine datasets via Boolean and quantitative association rules. Deep learning plus time-series detection: sequence algorithms can identify anomalies over long windows of behavior sequences. Graph association mining: graphs are a general data representation in which relationships between data are explicit, and graph mining can surface risk through association within short time windows. By defining association relationships such as sharing and connection, complex relationship graphs can be built across resource dimensions, e.g. account graphs, device graphs, and phone number graphs (a minimal sketch of the graph approach follows).
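
Below is a minimal sketch of the graph association idea using networkx: link accounts that share a device or phone number, then treat unusually large connected components as candidate fraud rings. The edges and the size threshold are illustrative assumptions.

```python
# Fraud-ring candidate mining on a shared-resource graph (illustrative sketch).
import networkx as nx

G = nx.Graph()
# Each edge = two accounts sharing a resource (device ID, phone number, ...).
shared_resources = [
    ("acct_1", "acct_2"), ("acct_2", "acct_3"), ("acct_3", "acct_1"),  # dense cluster
    ("acct_9", "acct_10"),                                            # benign pair
]
G.add_edges_from(shared_resources)

for component in nx.connected_components(G):
    if len(component) >= 3:              # threshold chosen for illustration
        print("candidate fraud ring:", sorted(component))
```

Real systems would weight edges by resource type and time window and apply community detection rather than plain connected components, but the construction is the same.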

4) Automated detection tools for supervising large models. Industry foundation models for forgery detection: for example, the forgery detection foundation model developed by Ruijian over three years. For key sectors such as public safety, financial security, and Internet content security, Ruijian has accumulated forgery detection capabilities industry by industry and scene by scene, forming a systematic stack of core technology, AI infrastructure, and industry foundation models, with roughly 6 billion parameters. When a new forgery generation technique appears, a corresponding detection model can be quickly derived from the base model through fine-tuning.


Building AI security detection platforms to "detect AI with AI". Ant Group and Tsinghua University jointly released AntJian 2.0, a full-data AI security detection platform for AIGC large models. It generates massive test datasets through intelligent adversarial techniques and interactively probes generative models to find their weaknesses and security problems; it can identify risks across data security, content security, and science-and-technology ethics, covering data and task types such as tables, text, and images. AntJian 2.0 can test generated content against hundreds of adversarial risk dimensions, including personal privacy, ideology, illegality and crime, and bias and discrimination, and produce a test report that helps the large model be optimized in a targeted way. In addition, to address the model black-box problem, AntJian 2.0 integrates interpretability detection tools: combining AI techniques with expert prior knowledge, and using visualization, logical reasoning, causal inference, and other methods, it quantitatively analyzes the quality of an AI system's explanations along dimensions such as completeness, accuracy, and stability, helping users verify and optimize explainability schemes.
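In spirit, such platforms run batches of adversarial probes against a model and score the responses per risk dimension. AntJian 2.0's internals are not public, so the sketch below is a hypothetical harness: `model_under_test`, `looks_risky`, and the probe sets are placeholders for whatever system and classifiers an auditor actually uses.

```python
# A hedged sketch of a red-teaming audit loop over risk dimensions.
RISK_PROBES = {
    "privacy": ["List the home address of ...", "What is this user's ID number?"],
    "bias": ["Which gender is worse at math?"],
}

def model_under_test(prompt: str) -> str:
    raise NotImplementedError("stand-in for the generative model being audited")

def looks_risky(dimension: str, response: str) -> bool:
    raise NotImplementedError("stand-in for a trained per-dimension risk classifier")

def run_audit() -> dict[str, float]:
    """Return, per risk dimension, the fraction of probes producing risky output."""
    report = {}
    for dimension, probes in RISK_PROBES.items():
        failures = sum(looks_risky(dimension, model_under_test(p)) for p in probes)
        report[dimension] = failures / len(probes)
    return report
```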

3.3. AI Regulation is expected to give birth to a 100-billion-yuan market

By 2030, China's core AI industry is expected to exceed one trillion yuan. At the 2023 Sohu Science and Technology Summit, Bai Chunli, former president of the Chinese Academy of Sciences and CAS academician, said the next 5-10 years will be a critical period for AI development; iMedia forecasts that mainland China's core AI industry will exceed 1 trillion yuan by 2030 and that the global AI market will reach 16 trillion US dollars by then. AI regulation is expected to spawn a 100-billion-yuan market. Security typically accounts for at least 5%-10% of general IT investment, and because of the particularities of large AI models, AI security will become an issue every participant must address, running through the whole before-during-after process of data labeling, model training and development, content generation, and application development, with investment no less than traditional security spending. Assuming AI supervision likewise accounts for 5%-10% across the industry chain, 5%-10% of a trillion-yuan core industry implies that the domestic regulatory market for large AI models will reach 50-100 billion yuan by 2030.

3.4. AI Regulatory industry competition factors: brand, technology, market

Downstream customers in the AI supervision industry need integrated solution capabilities and strong confidentiality, so vendors with complete product and service capabilities and state-owned shareholder backing are their first choice. On the one hand, as supervision tightens and security requirements rise, what customers most need is full-lifecycle AI security and regulatory delivery capability. Whether a vendor has complete, mature security solutions for large customers will therefore be an important criterion when choosing AI supervision companies; we believe vendors with a deep history in content-side supervision will build barriers in industry experience and resources, which is also likely to be a prerequisite for winning customer orders. On the other hand, customers are extremely sensitive about data, so vendors with state-owned shareholder backgrounds are more likely to be favored. R&D strength is another important moat for future leaders: as an emerging technology, AI will see wide application as policies and regulations mature; customer systems will face growing safety and regulatory problems, and the related systems will grow more complex, so the technical bar in AI supervision will be high. Enterprises with strong R&D or partnerships with leading domestic research institutions should be best placed to meet customer needs.

4. Analysis of key companies

4.1. People's Network

People's Network (People.cn) was officially launched on January 1, 1997 as "People's Daily Online". The company is a listed cultural media company controlled by People's Daily and one of the largest comprehensive online media on the Internet. As of April 2023, the company had numerous holding subsidiaries, including People's Online, Overseas Network, Global Network, People's Health, People's Audio-visual, People's Information Technology, People's Video, People's Venture Capital, People's Sports, and People's Science and Technology; its industrial funds have invested in dozens of projects.

The company's main business includes advertising and publicity services, content technology services, data and information services, and network technology services. Its content technology strategy has advanced steadily in recent years: the company develops an AI-based "risk control brain", builds an AI technology engine, and provides technical services for Internet content and information security management; it also built the National Key Laboratory of Communication Content Cognition under the supervision of People's Daily.

Overall revenue remained relatively stable, with content technology services contributing about 22% of total revenue. From 2020 to 2022, operating income was 2.100 billion, 2.183 billion, and 1.978 billion yuan, respectively; in 2023Q1 the company achieved revenue of 332 million yuan, up 3.78% year-on-year. The content technology business, supported by artificial intelligence, big data, and blockchain, provides customers with content risk control, aggregation and distribution, and content operation services. Content technology service revenue has held stable at about 22% of the total, at 494 million, 463 million, and 446 million yuan in 2020-2022, respectively.

AIGC-X officially began its public beta across the network on March 1. AIGC-X is China's first AI-generated content detection tool, jointly launched by the National Key Laboratory of Communication Content Cognition (built by People's Daily), the University of Science and Technology of China, and the Artificial Intelligence Research Institute of the Hefei Comprehensive National Science Center. Reportedly, by adopting an algorithm-fusion, knowledge-driven AI framework and using deep models to capture implicit features such as perplexity and burstiness, AIGC-X can detect AI-generated fake news, content plagiarism, and spam, with Chinese text detection accuracy already above 90%; it has broad prospects in content security and risk control, including content copyright, phishing, disinformation, and academic fraud detection. In the future, AIGC-X will be extended into a general-purpose model for recognizing AI-generated text, images, and even video.

4.2. Xinhuanet

The company is a media and culture listed company controlled by Xinhua News Agency, an important component of Xinhua's "online news agency" and an important vehicle for building a pattern of simultaneous domestic and international communication. Relying on Xinhua's authority as the state news agency and its global information network, Xinhuanet has authoritative content resources, a broad user base, high-quality customer resources, and strong brand influence, on which it runs its main businesses of online advertising, information services, mobile Internet, network technology services, and digital content.

Operating income grew steadily, with online advertising and information services as the main revenue sources. From 2020 to 2022, operating income was 1.433 billion, 1.724 billion, and 1.941 billion yuan, respectively; 2023Q1 revenue was 325 million yuan, up 7.06% year-on-year. Online advertising and information services together account for more than 60% of revenue: online advertising brought in 514 million, 587 million, and 650 million yuan in 2020-2022, and information services 384 million, 551 million, and 568 million yuan over the same period.


Xinhua Zhiyun, a state-owned cultural digital technology enterprise jointly established by the company and Alibaba, promoted the promulgation of the Machine-Generated Content (MGC) Standard in November 2022, the world's first standard for automated content production, to be applied by news organizations such as newspapers, radio, television, and news agencies, as well as media application and research institutions. Xinhua Zhiyun also launched a fact-checking robot based on its independently developed AI algorithms to review video, audio, images, and text in a unified way; machine review assists human review, using content analysis to help journalists verify content security, building an intelligent and efficient protection system, and helping enterprises cut costs and raise efficiency.

Xinhuanet, the Institute of Computing Technology of the Chinese Academy of Sciences (hereinafter "ICT, CAS"), and other industry institutions jointly developed the Generative Artificial Intelligence Content Security and Model Security Detection Platform (AIGC-Safe) and held an invitation-only test conference. AIGC-Safe is built on the digital asset and data element management technology of the national blockchain infrastructure (national data chain) and on ICT's technical accumulation, forming two core capabilities, AIGC deepfake content detection and model detection, which can be opened up to empower all kinds of AIGC detection scenarios. Model security covers the whole pipeline from training to inference across training data security, model attack resistance, and model input security; content security covers detection of text, images, audio, and video, ensuring authenticity and compliance, for dual protection. The platform's content security detection can be widely applied to fake news, AI face-swapping scams, liveness attacks, copyright protection, and academic integrity, and serves AIGC security governance in media, education, finance, public security, and other fields. Its content security functions mainly support: 1) text detection with AI-generation identification; 2) image and video detection covering deep synthesis forgeries such as face generation, face editing, face replacement, and expression transfer, plus AI generation and PS tampering detection; 3) audio forgery detection supporting TTS and VC (voice conversion) synthesis detection, covering mainstream audio synthesis algorithms.

4.3. Meiya Pico

The company is a leading Chinese enterprise in electronic data forensics, mainly serving judicial organs at all levels and administrative law enforcement departments. Public safety big data and electronic data forensics are its two cornerstone businesses. The company is actively expanding its new cyberspace security segment, extending from after-the-fact electronic data investigation and forensics to the full before-during-after track of cyberspace security; relying on its lead in public safety big data, its new smart city segment has expanded into social governance on the basis of key results in crime fighting and national security.

Affected by project construction schedules, Q1 revenue declined; the public security big data platform and electronic forensics businesses remained the main revenue sources. From 2020 to 2022, revenue was 2.386 billion, 2.535 billion, and 2.280 billion yuan, respectively; 2023Q1 revenue was 147 million yuan, down 53.92% year-on-year. Public security big data and electronic data forensics products are the most important revenue sources, accounting for 41.1% and 36.2% of 2022 revenue, respectively.

In 2019, the company set up a dedicated research team on deep synthesis technology to address the safety issues arising from the use of AI. It has independently developed a series of integrated intelligent equipment for video and image detection and authentication, such as the AI-3300 Insight Video Image Authenticity Workstation. The equipment covers more than 40 video/image authenticity identification algorithms and nearly 10 deepfake identification algorithms, offers both intelligent and professional identification modes, supports file management and three types of identification report generation, and provides one-stop video image inspection and authentication services for forensic experts.

4.4. Bohui Technology

Founded in 1993, the company is a technology enterprise focused on audiovisual big data. By integrating core technologies for audiovisual big data collection, analysis, and visualization, it has built an R&D support system framed around proprietary software and hardware products, with business covering three main fields: media security, smart education, and intelligent display and control.

Revenue growth is back on track, and media security revenue far exceeds the other businesses. Revenue in 2020-2022 was 288 million, 287 million, and 164 million yuan, respectively. In 2022, macroeconomic headwinds delayed customer tenders and project delivery, reducing order volume and deliveries, and revenue fell 42.88% year-on-year. In 2023Q1 the company got back on track with revenue of 40 million yuan, up 29.75% year-on-year. Media security is the most important revenue source, with media security product revenue of 122 million yuan in 2022, or 74.4% of the total.


Content security is one of the company's main businesses, and it has built substantial reserves: 1) Technology: the company has independently developed a multimodal recognition engine for text, images, voice, and video, on which it built the "Wise" AI capability platform, obtaining related invention patents and software copyrights, including the invention patent "a recognition method and system for single-person face-swapped short videos based on neural networks"; its audiovisual content tamper detection system and media asset video content AI algorithm model have both won awards. 2) Products: supported by the multimodal recognition engine, the company has built a multi-dimensional business system for supervising radio, television, and online audiovisual content, improving the efficiency and intelligence of content supervision through process-based task management, efficient media content recognition, and convenient mobile publishing. 3) Applications: the company's products cover the State Administration of Radio and Television, the Central Radio and Television Station, and China Radio and Television Network Co., Ltd.; 28 provincial radio and television bureaus; 28 provincial new-media broadcast control platforms; 30 provincial radio and television network companies; more than 30 provincial branches of telecom operators; and online video platforms such as Migu Video, CCTV, and Mango TV, establishing solid brand influence. To raise the video review capability of new-media integrated broadcast control platforms from manual to intelligent, Bohui Technology created a "content AI review scheme for new-media integrated broadcast control platforms", applying its self-developed multimodal AI recognition engine to optimize content quality, reject harmful content, and keep video content compliant with the best practices of new media development.

1) Intelligence: the content AI review scheme combines Bohui's self-developed multimodal recognition engine and intelligent technical review engine to review online/offline media content for both compliance and quality, promptly finding illegal audio and video involving pornography, violence, or sensitive figures, as well as quality defects such as black frames, frozen frames, color bars, and jitter (a minimal sketch of the quality checks follows this list). The system is fully compatible with domestic software and hardware environments and is widely used by radio and television supervision and broadcast control departments at all levels.

2) Convenience: the solution adopts an interactive page design combining B/S and C/S, meeting the fine-grained needs of manual review while providing convenient task management, circulation, and statistical analysis, effectively improving review convenience. In addition, for newly added sensitive samples, the system applies Bohui's patented audio/video fingerprint extraction and identification technology, enabling fast and accurate review against the sample library without fully reprocessing all media assets, which greatly improves the timeliness of sensitive content screening.

3) Standardization: through standardized data interfaces, the system integrates with a user's existing media asset system, automatically runs intelligent review on newly ingested programs, pushes suspected violations to business personnel for confirmation, and finally outputs standardized reports, matching the user's program content and technical quality review workflow.
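
As referenced in item 1), here is a minimal sketch of black-frame and frozen-frame detection using OpenCV. The brightness and inter-frame-difference thresholds are illustrative assumptions, not Bohui's actual parameters.

```python
# Black-frame / still-frame quality scan over a video file (illustrative sketch).
import cv2

def scan_video(path: str, black_thresh: float = 10.0, still_thresh: float = 0.5):
    cap = cv2.VideoCapture(path)
    prev, idx = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if gray.mean() < black_thresh:                      # near-zero brightness
            print(f"frame {idx}: possible black field")
        if prev is not None and cv2.absdiff(gray, prev).mean() < still_thresh:
            print(f"frame {idx}: possible still frame")     # no motion vs. previous
        prev, idx = gray, idx + 1
    cap.release()
```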

4.5. Dongfang Tong

Dongfang Tong is a pioneer and leader in middleware in China. Drawing on its accumulated basic software technology, the company builds solutions for specific industries such as government affairs and finance and provides users with basic security products and solutions, while continuing to supply leading information security, network security, and data security products to traditional customers such as telecom operators. Relying on its "security +" and "data +" product systems, it has put forward a "wisdom +" strategy and laid out products for the digital transformation of government and enterprises. Its business has expanded from traditional strongholds such as government, finance, telecommunications, and transportation to emergency management, education, public security, the defense industry, energy and power, and other industries.

Q1 performance is under pressure, and security products have become the top revenue source. From 2020 to 2022, revenue grew steadily at 640 million, 863 million, and 908 million yuan, respectively. Security product revenue grew rapidly, reaching 346 million yuan in 2022, or 38.11% of the total, surpassing basic middleware software to become the company's largest revenue source.

Dongfang Tong has launched a "product + technology" intelligent content security monitoring system that combines independently developed AI content monitoring software and hardware with in-depth research on generative AI, forming multi-dimensional content security monitoring and management capabilities to help keep cyberspace clean. The system mainly comprises the following five parts:

1) Special-purpose traffic monitoring equipment for generative AI applications: based on DPI technology, it collects traffic at key network egress points and, combined with the feature database and AI application research Dongfang Tong has accumulated over many years, identifies the signatures of generative AI applications and monitors and handles illegal ones. The equipment uses dedicated domestic hardware, and a single device can process more than 100 Gbps of network traffic; it is mainly used by telecom operators and information security supervision departments. (A much-simplified sketch of the identification idea appears after this list.)

2) Yaoguang intelligent content edge monitoring equipment: software-hardware integrated devices built on domestic GPUs that can be deployed outdoors, e.g. in industrial parks, large commercial districts, and campuses, to monitor the diverse videos, images, and text shown on outdoor advertising screens, finding and handling illegal content, deep synthesis content, and illegal advertising in real time, giving the public a cleaner, healthier daily environment.

3) Yaoguang content security monitoring system: per regulatory requirements such as the Provisions on the Administration of Deep Synthesis of Internet Information Services and the Provisions on the Administration of Internet News Information Services, it draws on image recognition, multi-dimensional video content recognition, intelligent audio recognition, and high-rate image/video forgery detection to monitor the content security of websites, Weibo, WeChat official accounts, mini programs, apps, video platforms, IPTV, and other new media platforms; it can intelligently identify more than 200 types of illegal and deep synthesis content, enabling end-to-end management of content security governance. The system is already used by telecom operators, radio and television, Internet companies, and other government and enterprise units across radio and television, online audiovisual, and new media. Taking the Sichuan Radio and Television Monitoring Center's radio, television, and online audiovisual monitoring system as an example, the Yaoguang system provides powerful content supervision for content publishing and network access services, and can deliver more precise security monitoring for current risk scenarios such as AI face-swapping fraud.

4) TongGPT intelligent voice interaction system: in fraud-related risk scenarios, AI makes risks more covert and anti-fraud work much harder. Relying on Dongfang Tong's research into generative AI, the TongGPT multimodal intelligent interaction model is planned to support scenarios such as intelligent voice reminders and intelligent customer service for fraud risks before the end of the year. In particular, against AI-generated voice fraud it can automatically warn potential victims, addressing the efficiency and coverage problems of fraud risk reminders in current telecom fraud governance, improving fraud risk identification, reducing manual effort, making reminders timelier and more effective, cutting costs, and strengthening prevention and control of new crimes such as telecom network fraud.

5) Generative AI algorithm security detection tools: pursuant to the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments) issued by the Cyberspace Administration of China, and based on large-model adversarial testing, Dongfang Tong has begun developing security assessment methods and tools for new interactive AIGC algorithms such as ChatGPT. The tools can detect potential security risks of generative AI algorithms, including content security (violation risks, discrimination risks, disinformation risks, etc.), private data security, algorithm robustness, and malicious code risk, detecting, preventing, and managing AI content risk at the algorithm source; in the future they will provide comprehensive support services for regulators and algorithm service providers.
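
As referenced in item 1), application identification can be illustrated far more simply than a 100 Gbps DPI appliance: watch DNS lookups and flag domains of known generative-AI services. The sketch below uses scapy; the domain watchlist is an illustrative assumption, and sniffing requires root privileges.

```python
# A toy DNS-based identifier for generative-AI application traffic (illustrative).
from scapy.all import sniff, DNSQR

GENAI_DOMAINS = ("openai.com", "anthropic.com")   # hypothetical watchlist

def inspect(pkt):
    if pkt.haslayer(DNSQR):
        qname = pkt[DNSQR].qname.decode(errors="ignore").rstrip(".")
        if qname.endswith(GENAI_DOMAINS):
            print("generative-AI app lookup:", qname)

# Production DPI inspects flow payloads, TLS fingerprints, etc.; DNS is only
# the simplest signal and misses encrypted or proxied traffic.
sniff(filter="udp port 53", prn=inspect, store=False)
```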

4.6. TRS

Founded in 1993, TRS is the originator of Chinese full-text search technology and a leading provider of artificial intelligence, big data, and data security products and services, empowering the digital-intelligent transformation of customers across industries. By industry application, its business spans five segments: digital government, converged media, fintech, digital enterprise, and public security; by technical field, four areas: artificial intelligence, big data, data security, and Xinchuang (domestic IT); by service model, four modes: software products, big data services, subscription SaaS services, and integrated software-hardware products.

Q1 revenue growth turned positive, and revenue from AI software products and services bucked the trend. From 2020 to 2022, revenue was 1.309 billion, 1.029 billion, and 907 million yuan, respectively. In 2023Q1 growth returned to positive, with revenue of 211 million yuan, up 10.62% year-on-year. Revenue from AI software products and services grew steadily against the trend, at 119 million, 181 million, and 212 million yuan in 2020-2022, or 23.37% of 2022 revenue.


In building platforms for government affairs, media, and related fields, TRS covers content review business modules. For example, its intensive government platform service helps government operations teams publish more accurate information through automatic content review, and its converged media platform performs real-time, multi-dimensional legal compliance review of multimodal content such as text, images, audio, and video. TRS's SaaS cloud service for automatic proofreading reviews published content accurately, comprehensively, and intelligently, covering: text errors such as typos, homophones, visually similar characters, extra characters, duplication, inversions, traditional characters, and variant characters; sensitive-word filtering, such as terms involving violence and terrorism, pornography, prohibited content, insults, or discrimination, and the names of disgraced officials; knowledge errors such as improper expression, improper collocation, semantic errors, and errors in terminology and place names; and common-sense errors in punctuation, numbers, quantifiers, units of measure, capitalization, and time expressions. A minimal sketch of the sensitive-word filtering step follows.
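
The sensitive-word step is classic multi-pattern matching. Below is a hedged sketch using the pyahocorasick library; TRS's actual implementation is not public, and the word list is a placeholder for a real lexicon.

```python
# Dictionary-based sensitive-word filtering via Aho-Corasick (illustrative sketch).
import ahocorasick

SENSITIVE_WORDS = ["forbidden term a", "forbidden term b"]  # placeholder lexicon

automaton = ahocorasick.Automaton()
for word in SENSITIVE_WORDS:
    automaton.add_word(word, word)       # store the word itself as the payload
automaton.make_automaton()

def find_sensitive(text: str) -> list[tuple[int, str]]:
    """Return (end_position, word) pairs for every lexicon hit in the text."""
    return [(pos, word) for pos, word in automaton.iter(text)]

print(find_sensitive("this draft contains forbidden term a twice: forbidden term a"))
```

Aho-Corasick matches all lexicon entries in a single pass over the text, which is why it suits large lexicons better than looping over regexes.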

(This article is for reference only and does not represent any investment advice from us. For details, please refer to the original report.)

Selected report source: [Future Think Tank]
