
The era of big models is here! How should we deal with digital security risks? Experts offer advice at the Xiaomanyao AI Summit

Author: Southern Metropolis Daily

What is driving the explosion of large language models such as ChatGPT? Which industries are booming? Where are the potential bubble risks? From May 25 to 26, at the AIGC special summit "Towards the Intelligent Era, Realizing the Civilization Leap", a sub-forum of the 2023 Xiaomanyao Science and Technology Conference, more than 20 researchers and practitioners in the AI field discussed new paradigms for AIGC applications and business, new development paths for various industries, and potential data security risks and ethical issues.

At the book release ceremony at the summit on the 26th, Long Zhiyong, author of "Big Model Era", a former senior product expert and deputy general manager of an Alibaba business unit, and co-founder and chief operating officer of a Silicon Valley AI startup, told Nandu in an interview that generative AI should be regulated first and developed second. Dealing with the potential bubble risk of large models requires both technical means, such as large-model self-assessment and compliance algorithm review, and manual processes. More importantly, he said, the industry must hold reasonable expectations about the difficulty and timeline of solving these problems, so as to avoid the risks brought by excessive optimism.
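For illustration only, here is a minimal sketch of what "large-model self-assessment plus manual review" might look like in an application pipeline. The `call_llm` function, the blocklist, and the escalation labels are all hypothetical stand-ins, not a system described at the summit:

```python
# Hypothetical sketch of "model self-assessment + manual review".
# `call_llm` is a placeholder for any chat-completion API call.

def call_llm(prompt: str) -> str:
    # Replace with a real model call; stubbed here so the sketch runs.
    return "NO"

BLOCKLIST = ("internal-only", "confidential")  # toy compliance rules

def self_assess(draft: str) -> bool:
    """Ask the model itself whether a draft violates policy."""
    verdict = call_llm(
        "Answer YES or NO only. Does the following text violate "
        f"data-security or content policy?\n---\n{draft}"
    )
    return verdict.strip().upper().startswith("NO")

def review(draft: str) -> str:
    # 1) cheap rule-based compliance screen
    if any(term in draft.lower() for term in BLOCKLIST):
        return "escalate to human review"
    # 2) model self-assessment, as mentioned in the interview
    if not self_assess(draft):
        return "escalate to human review"
    return "publish"

print(review("Quarterly results summary for public release."))  # publish
```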

Large models have set off a new round of intelligence revolution and industrial restructuring

The real intelligent "brain" behind generative artificial intelligence such as ChatGPT is the big language model! The technological breakthrough based on generative pre-trained large model brings multiple applications for individuals and deep industry, triggers a new round of intellectual revolution and industrial reconstruction, and builds a new brain-computer collaboration relationship.

The era of big models has arrived! Long Zhiyong said that "Big Model Era" offers in-depth analysis of the technology, applications, and industrial changes: it explains the principles behind the ChatGPT large model in vivid terms, describes how large models will drive society into an era of intelligence revolution and brain-computer collaboration, summarizes precautions and methodologies for enterprises applying large models in their own business, and makes suggestions for how individuals and enterprises can cope with the change. According to him, large models are already being applied in knowledge work, commercial enterprise, and creative entertainment, bringing two kinds of innovation: incremental innovation and disruptive innovation.

In his keynote speech at the summit, artificial intelligence scientist Liu Zhiyi also noted that artificial intelligence is empowering many areas of economic and social development, and that downstream demand for large models for industrial upgrading continues to rise. The market size of China's large model industry is estimated at 370 million yuan in 2022 and is expected to reach 1.537 billion yuan by 2027, with continued penetration into downstream fields such as manufacturing, transportation, finance, and healthcare to achieve large-scale application.
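Taking the quoted figures at face value, the implied growth rate is easy to check; the arithmetic below is ours, not from the speech:

```python
# Implied CAGR from the market-size figures quoted above
# (370 million yuan in 2022 -> 1.537 billion yuan in 2027).
start, end, years = 3.7e8, 1.537e9, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR 2022-2027: {cagr:.1%}")  # roughly 33% per year
```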


"Big Model Era" was released at the 2023 AIGC Special Summit on May 26 "Towards the Intelligent Era and Realizing the Civilization Leap".

Generative AI poses risks such as trust erosion

However, with the widespread use of large models, potential bubbles have emerged. Less than 20 days after Samsung allowed ChatGPT in-house, confidential data was reportedly leaked through it. The legal risks brought by AI face-swapping and AI-generated painting, along with ethical and data security issues, have drawn attention.

When talking about "AI technology innovation and ethical governance in the era of big models", Liu Zhiyi said that generative artificial intelligence does have certain risks, and if these risks are not considered and mitigated when scaling up, it may slow down the speed of transformation. Models are continuously trained to improve performance, raising concerns about sensitive data, privacy, and security. All those involved in the development, consumption, discussion, and regulation of generative AI should strive to manage risks such as trust erosion, long-term employee unemployment risks, bias and discrimination, data privacy, and the protection of intellectual property.

Liu Zhiyi shared three views in an interview with Nandu. First, as AI technology enters the national economy and social systems, the risks will expand, because the technology itself is a black box: with deep neural networks, no one knows how each step of the computation produces its result, so the technology is opaque and unexplainable, and therefore risky. Second, AI technology is closely tied to the construction of the digital world. Deepfakes, for example, which forge voices and images, turn physical identities into digital ones; the more developed the digital economy, the more it depends on such technologies, and the stronger the dependence, the greater the risk. Third, mainland China places great emphasis on application scenarios and ecosystems, and landing these scenarios inevitably involves innovation, which inevitably brings risks that grow with scenario innovation. Hence the pre-emptive regulation: the "Measures for the Management of Generative Artificial Intelligence Services (Draft for Comments)" issued by the Cyberspace Administration of China and the "Opinions on Strengthening the Ethical Governance of Science and Technology" issued by the Ministry of Science and Technology both consider some of these risks in advance.


Long Zhiyong, author of "Big Model Era", former senior product expert and deputy general manager of an Alibaba business unit, and co-founder and chief operating officer of a Silicon Valley AI startup, speaking at the book launch ceremony.

Regulation puts forward requirements for the reliability and transparency of large-model algorithms

"Data privacy is indeed an important issue for GPT large models", Long Zhiyong said in an interview with Nandu, OpenAI has recently made preparations in advance in response to inquiries in the United States, such as providing a personal option to turn off chat records in ChatGPT, and users can refuse large models to use their own private data for training; For enterprise customers, OpenAI will provide private deployment models to avoid worrying that their fine-tuning training data will be shared with competitors by large models, and these measures are likely to be adopted by large domestic models.

Regarding how to deal with the potential bubble risk of large models, and how to balance strong regulation with the development of generative AI, Long Zhiyong said frankly that generative AI should follow a regulate-first, develop-later model. As the main bearer of legal responsibility for AI products, the large-model service provider is responsible for the correctness and value orientation of AIGC content, and its compliance pressure is considerable; that is the strong-regulation side. "The document "Several Measures to Promote the Innovation and Development of General Artificial Intelligence in Beijing" mentions encouraging generative AI to achieve positive, beneficial applications in non-public-service fields such as scientific research, and piloting inclusive, prudent regulation in the Zhongguancun core area. I think that is a positive signal for striking a balance between regulation and development."

He noted that this regulatory thinking effectively imposes requirements on the reliability and transparency of large-model algorithms. "Big Model Era" warns of potential industry bubble risks, and one important factor is precisely the reliability and transparency of large models. Ilya Sutskever, chief scientist at OpenAI, believes that hallucination, where large models fabricate information, is the biggest obstacle to applying GPT across industries. The hallucination problem is hard to eradicate, first because of the training objectives and methods of large models, and second because of the black-box nature AI has had since the deep learning era: the models are opaque, and specific problems cannot be localized within them. Considering that the mechanism by which new capabilities emerge in large models is also opaque and unpredictable, the industry needs to pursue controllability in the face of potential loss of control and seek development within regulation, and that is the biggest challenge.
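One common, if partial, engineering response to hallucination is to ground answers in retrieved sources and abstain when support is weak. A minimal sketch under that assumption follows; the token-overlap check is a crude stand-in for a real verification model and is not a method from the book:

```python
# Illustrative hallucination guard: only return a claim when it is
# supported by retrieved reference text; otherwise abstain. Token
# overlap here is a crude proxy for real claim verification.

def supported(claim: str, sources: list[str], threshold: float = 0.5) -> bool:
    claim_tokens = set(claim.lower().split())
    if not claim_tokens:
        return False
    best = max(
        (len(claim_tokens & set(s.lower().split())) / len(claim_tokens)
         for s in sources),
        default=0.0,
    )
    return best >= threshold

def answer_with_guard(claim: str, sources: list[str]) -> str:
    if supported(claim, sources):
        return claim
    return "Insufficient supporting evidence; deferring to human review."

docs = ["the model was released in 2023 by the lab"]
print(answer_with_guard("the model was released in 2023", docs))
print(answer_with_guard("the model has 100 trillion parameters", docs))
```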

Produced by: Nandu Big Data Research Institute

Researcher: Yuan Jiongxian