
From technology to deployment, industry models are working backwards from "small" scenarios | ToB Industry Observation

Author: Titanium Media APP

What kind of generative AI do we actually need? On the one hand, the major technology giants have each launched their own large model products and set off a frantic race over parameter counts; on the other hand, as understanding of large model capabilities deepens, more and more enterprises are turning their attention to how large model applications can land on the B-side.

Since the beginning of this year, lightweight, small-parameter models aimed at narrow vertical scenarios have begun to emerge. The "war of a hundred models", in which major vendors rushed to release large model products, was only the prologue; the real battle is now beginning: how to land large model products effectively on the industry side.

In one year, from 5 billion yuan to 12 billion yuan

According to the White Paper on Innovative Applications of Large Models in Beijing's Artificial Intelligence Industry (2023), as of October 2023 a total of 254 vendors, universities, and research institutes in China had released large models with more than 1 billion parameters, spread across more than 20 provinces and municipalities. According to a report by the consulting firm Ai Analytics, China's large model market was worth roughly 5 billion yuan in 2023, a figure expected to reach 12 billion yuan in 2024.

As time goes on, the value of large models on the industry side will become increasingly apparent. Market research firms forecast that the global generative AI market will exceed 10 billion US dollars by 2025, with enterprise generative AI accounting for a significant share and becoming one of the largest application areas.

According to the 2023 Global AI Adoption Index report, about 42% of enterprises surveyed worldwide are already actively deploying AI in their business. The same report places Chinese companies firmly in the leading camp of enterprise-level AI adoption: nearly half of Chinese companies said they are already actively adopting AI, 85% said they will accelerate AI adoption in the next two to three years, and 63% said they are actively exploring generative AI.

Looking back at the cloud computing era, Chinese enterprises' cloud technology may not have been the world's most advanced, but in the richness of application scenarios and the ability to put them into practice, China was clearly far ahead. Now that the wind of large models is blowing across every industry, there is good reason to believe China will again become a "main force" in applying large models on the industry side.

According to Li Gang, vice president and CTO of Digital China, although the large model products released by Chinese companies still lag somewhat behind the world-class level at this stage, with the rapid development of open-source model technology, plus enterprises' continuous improvement in governing their own knowledge and data, the gap in model technology itself has little impact on enterprise deployment. "In 90% of enterprise scenarios, a combination of open-source models is enough to meet users' needs," he said.

At the same time, Li Gang told Titanium Media APP that Chinese enterprise users are highly receptive to new technologies and will actively try them out, "which is a very big advantage for landing large models in the industry."

From technology to implementation, three issues deserve attention

Judging from current applications, large models are still in the early stage of adoption, and Li Gang offered several suggestions for enterprise users applying them.

First, enterprises need to keep broadening their horizons and use AI to drive business innovation. "Changing one's mindset and accepting AI is the first step for an enterprise to embrace large models," Li Gang said.

Second, enterprises need a more systematic understanding of large AI models. Li Gang told Titanium Media APP that while ChatGPT's emergence as a "super application" has given everyone some understanding of large models, that understanding is fragmented. "The emergence of large models has completely overturned people's previous perception of AI," Li Gang pointed out. "That perception needs to move from fragmented to systematic, so that enterprises can avoid problems in the process of applying AI."

Third, enterprises need to prioritize the business scenarios in which they apply large models. Some small scenarios are more valuable than the big ones, "but some of those small scenarios may not be visible to you at first," Li Gang said.

At the same time, Li Gang emphasized that at this stage, when enterprises apply large models, they should focus on task-level scenarios rather than position-level scenarios. "A task scenario may be only one part of an employee's job responsibilities, and it is in these fine-grained scenarios that enterprises are more likely to generate greater value."

By landing these fine-grained scenarios first, and then, as the applications mature, accumulating them within a well-built framework, large AI models can go on to complete more complex tasks. "Only through continuous validation in small scenarios, the steady accumulation of experience and data, and gradual 'assembly' can larger scenarios be stacked up, and perhaps even some super applications emerge," Li Gang said. This is also the thinking behind Digital China's Shenzhou Wenxue platform: "We need a platform to accumulate these AI capabilities, and then keep assembling them on the platform into more complex tools or more capable AI models."
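As a rough sketch of this "assembly" idea, the snippet below chains two small task-level capabilities into a larger workflow. The function names and the placeholder call_model helper are hypothetical and purely for illustration; they are not the Shenzhou Wenxue API.

```python
# Hypothetical sketch of assembling task-level capabilities into a larger scenario.
# call_model is a stand-in for whichever model endpoint the platform has registered.
def call_model(prompt: str) -> str:
    """Placeholder: in practice this would call a deployed model."""
    return f"[model output for: {prompt[:40]}...]"

def summarize_ticket(ticket_text: str) -> str:
    # Task-level capability 1: condense a support ticket.
    return call_model(f"Summarize this support ticket in two sentences:\n{ticket_text}")

def classify_urgency(summary: str) -> str:
    # Task-level capability 2: label the summarized issue's urgency.
    return call_model(f"Classify the urgency of this issue as low/medium/high:\n{summary}")

def triage(ticket_text: str) -> dict:
    # "Assembled" scenario: two small capabilities chained into a bigger task.
    summary = summarize_ticket(ticket_text)
    return {"summary": summary, "urgency": classify_urgency(summary)}

print(triage("The payment gateway has been down since 9 a.m. and orders are failing."))
```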

Enterprises lean toward platforms and private deployment

As Li Gang said, differences in technology are no longer enough to determine whether applications land, and in the richness of application scenarios China has a natural advantage. Judging from current enterprise-side applications, service-oriented industries such as finance, healthcare, legal consulting, and education and training are expected to be the first to deploy relatively mature generative AI.

Chen Xudong, chairman and general manager of IBM Greater China, has publicly stated that enterprise-level AI applications have broader needs and greater potential than the consumer side. "IBM believes generative AI has great opportunities across the board, including HR, finance and supply chain process automation, IT development and operations, as well as enterprise asset management and data security," Chen Xudong emphasized.

However, although enterprises across industries are actively embracing the dividends of large AI models, there is one factor that everyone is concerned about first: security.

Data has become an important asset and resource for individuals, enterprises, and even nations, and data security has risen to the level of national security. According to IBM Security's 2023 Cost of a Data Breach Report, the global average cost of a data breach reached USD 4.45 million in 2023, the highest ever recorded in the report and a 15% increase over the average of the previous three years.

Training large AI models, especially industry-specific large models, requires enterprises to contribute large amounts of private-domain data so that the resulting models can genuinely empower the industry.

As large-model-related products are deployed, threats to data security are bound to increase. According to an IEEE survey, respondents expect several cybersecurity threats to become more prominent in 2024, including ransomware attacks (cited by 37% in 2024, up from 30% in 2023), phishing attacks (35%, up from 25%), and insider threats (26%, up from 19%).

Clearly, as large AI models develop, data security faces greater challenges. When enterprises deploy large-model-related products, how to ensure that their data will not be leaked, or even exploited by competitors, is one of the core worries in using generative AI to empower the business.

Judging from current applications, when enterprises deploy large AI models, the vast majority choose local deployment. Taking the Shenzhou Wenxue platform as an example, Li Gang told Titanium Media APP that the product itself supports multi-cloud deployment: "When demonstrating to users, we put the demo on the cloud to make it easy to show the platform's capabilities; when it comes to actual deployment, users choose to deploy it in an environment they control," Li Gang noted.

In addition, whether the base is a commercial model or an open-source one, the model produced after fine-tuning, pre-training, and other operations can be deployed locally to ensure basic privacy and security. Coincidentally, Yan Liang, general manager of Inspur Cloud, once made a similar point to Titanium Media APP: when applying large model products in the industry, enterprises lean toward local deployment, and that deployment must come with reliable security capabilities that effectively protect enterprise data.
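As a rough illustration of what "deploying locally" can look like in practice, the sketch below loads a fine-tuned open-source checkpoint from an on-premises directory and runs inference without contacting any external service. The model path and prompt are hypothetical; any locally stored Hugging Face-compatible causal language model checkpoint would work the same way.

```python
# Minimal sketch: offline inference against a locally stored, fine-tuned open-source model.
# MODEL_DIR is a hypothetical on-premises path; local_files_only=True prevents any
# attempt to download weights from the internet.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/opt/models/finetuned-llm"  # hypothetical path inside the enterprise network

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

prompt = "Summarize the key risks in this contract clause: ..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```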

In addition, enterprises need to adopt a series of security measures and technical means, such as strengthening data encryption and access control, establishing security auditing and monitoring mechanisms, adopting adversarial defense techniques to improve model robustness, and improving privacy protection policies and mechanisms. In Li Gang's view, enterprises also need a complete knowledge-access mechanism to ensure that employees use data safely and compliantly.
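As a toy illustration of the first two measures, encryption and access control, the sketch below encrypts a record at rest and only decrypts it for roles cleared to prepare training data. The library choice (Python's cryptography package) and the role names are assumptions made for illustration, not part of any vendor's platform.

```python
# Illustrative only: encrypt a record at rest, and gate decryption by role before
# the data can be used for model fine-tuning. In production the key would live in
# a KMS/HSM rather than being generated inline.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)

record = "customer_id=123, contract_value=..., contact=..."
encrypted = cipher.encrypt(record.encode("utf-8"))

ALLOWED_ROLES = {"data_engineer", "ml_platform"}  # hypothetical role names

def read_for_training(user_role: str, blob: bytes) -> str:
    """Decrypt a record only for roles authorized to prepare training data."""
    if user_role not in ALLOWED_ROLES:
        raise PermissionError(f"role '{user_role}' is not authorized for training data")
    return cipher.decrypt(blob).decode("utf-8")

print(read_for_training("ml_platform", encrypted))
```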

With security addressed, the next thing enterprises need to consider is the difficulty and cost of using large models.

Judging from how AI is currently used for business enablement, approaches can be roughly divided into three categories: AI embedded in software, API calls, and enterprise-level AI platforms. Embedding AI in software offers little differentiation and adapts poorly to specific scenarios, so it is a weak fit for enterprises. API calls and platforms, by contrast, can meet enterprises' current needs for AI while leaving room for long-term growth, making them the better choices for enterprises that want AI to empower their business.
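To make the "API call" approach concrete, the sketch below sends a prompt to a model service over HTTP instead of bundling the model into the application. The endpoint URL, model name, and payload schema are hypothetical, following the common OpenAI-compatible convention used by many self-hosted model servers.

```python
# Hypothetical example of the API-call approach: the business application stays thin
# and delegates generation to a (possibly privately deployed) model service.
import requests

API_URL = "https://llm.internal.example.com/v1/chat/completions"  # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

payload = {
    "model": "finetuned-open-source-llm",  # placeholder model name
    "messages": [
        {"role": "user", "content": "Draft a polite reply to this customer complaint: ..."}
    ],
    "temperature": 0.2,
}

resp = requests.post(API_URL, headers=HEADERS, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```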

The Shenzhou Wenxue platform built by Digital China is a PaaS-like platform: it offers a rich selection of open-source models, supports localized deployment, and delivers the strongest model capabilities it can in the most cost-effective way, which Li Gang sees as its biggest advantage. "At the same time, Digital China will work with enterprises' IT departments to jointly explore landing scenarios and build the AI platform, minimizing the technical threshold and driving the cost of use even lower, so that large models truly become accessible to all," Li Gang said.

Making large models easy to land and more broadly affordable has become a focus for all parties. Coincidentally, Sun Siqing, chief technology officer of Inspur Cloud, also told Titanium Media APP that, from the perspective of large model service providers, a provider needs certain foundation model capabilities first, then abundant computing resources, and finally a large model engine. "Large model products built on a distributed cloud architecture will become an important channel for landing large models in the industry," Sun Siqing emphasized.

To this end, Inspur Cloud adopts a distributed architecture: model pre-training is carried out centrally, while at delivery time localized capabilities are combined with the customer's local data to better protect enterprise privacy, satisfying security and compliance requirements while making the most of the model's capabilities.

Everything, however, is still at the exploratory stage. Li Gang also pointed out in his conversation with Titanium Media APP that although the emergence of ChatGPT makes people feel that AI has "finally" arrived, the technology itself is not yet mature and is still developing. "When AI technology is truly applied and matured on the enterprise side, when enterprises use AI as routinely as they use cloud computing today, that will be when AI really changes our productivity," Li Gang noted.

In 2024, more and more industry-specific large models will be put into use, and the path to genuinely reducing costs and improving efficiency with large model capabilities will become clearer. (This article was first published on Titanium Media APP; author | Zhang Shenyu, editor | Gai Hongda)
