Amid the new wave of artificial intelligence, data volumes have surged, and edge computing power is set to become an important pillar of AI deployment.
By providing IT services and cloud computing capabilities to users at the edge of the network, edge computing can effectively reduce network operation and service-delivery latency and provide a better user experience.
The combination of edge computing with emerging technologies such as artificial intelligence, 5G, the Internet of Things, and the metaverse is driven by demand from the energy, transportation, manufacturing, and other industries, giving the sector broad market space.
According to Precedence Research, the global edge computing market was approximately $45.5 billion in 2022 and is projected to grow at a CAGR of 12.46% to $116.5 billion by 2030.
A Gartner report indicates that more than 90% of enterprises will develop their own applications on edge computing, which is expected to grow into a large-scale industry; an edge computing development strategy is thus becoming the general trend.
According to IDC's prediction, by 2025, 80 billion terminal devices will be connected to the Internet worldwide and mobile data traffic will reach 160 zettabytes. This will place a huge load on cloud computing systems, and uploading terminal-device data to the cloud can introduce long propagation delays.
Pay attention to Leqing Think Tank and gain insight into the industrial pattern!
Edge Computing Industry Overview
According to the industry research database, edge computing refers to infrastructure deployed close to the data source or user side that provides computing and storage services for edge applications. Compared with centralized cloud computing, edge computing mitigates problems such as long latency and large aggregate traffic, and better supports services with demanding real-time and bandwidth requirements.
Edge computing landing scenarios mainly fall into three categories:
1) High-traffic services, including ultra-high-definition video and interactive live streaming;
2) Large-scale IoT businesses, including smart logistics and smart cities;
3) Services with stringent latency and connection-reliability requirements, such as cloud gaming, autonomous driving, and industrial automation.
Edge computing application scenarios:
Compared with cloud computing, edge computing offers low latency, lower bandwidth requirements, and higher security, advantages that matter even more in the metaverse era:
1) Low latency: In the cloud computing model, data generated by a device must be transmitted to a cloud computing center for processing before results are returned to the application, so real-time performance is limited. Edge computing deploys computing power near the data source, greatly reducing latency;
2) Lower bandwidth pressure: Facing rapid data growth, traditional cloud computing relies on optical fiber, satellite, and similar links for transmission, whereas edge computing processes data at the source, cutting redundant data and reducing bandwidth requirements;
3) Lower risk of privacy leakage: In industries with high data-privacy requirements, such as surveillance and face recognition, uploading video and photo data to the cloud increases the risk of privacy leakage; edge computing with localized storage and analysis reduces that risk.
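The bandwidth and privacy points above share one pattern: process raw data at the source and upload only results. A minimal sketch, with all names and thresholds purely illustrative (not a real edge API):

```python
# Hypothetical sketch of edge-side preprocessing: the edge node analyzes
# raw sensor readings locally and uploads only events and a summary,
# instead of streaming every sample to the cloud.
# THRESHOLD and the summary fields are illustrative assumptions.

THRESHOLD = 30.0  # report only readings above this value

def edge_filter(readings):
    """Process raw readings at the data source; return only the events
    worth uploading plus a small summary, not the full stream."""
    events = [r for r in readings if r > THRESHOLD]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "events": events,
    }

# 1,000 raw samples stay on the device; only a compact summary leaves it.
raw = [20.0 + (i % 50) * 0.5 for i in range(1000)]
upload = edge_filter(raw)
print(len(raw), "raw samples ->", len(upload["events"]), "events uploaded")
```

Here the raw video or sensor stream never leaves the device, which is exactly what reduces both backhaul bandwidth and leakage risk.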
Edge computing industry chain
The edge computing industry has entered a period of rapid development: an industrial ecosystem has gradually taken shape, and cooperation between upstream and downstream players has strengthened.
The upstream mainly comprises software and hardware infrastructure, from chip designers to server vendors and edge software frameworks. The midstream mainly provides edge services, with three main types of players: edge computing service platforms, operator MEC (multi-access edge computing), and cloud computing providers extending their services downward. The downstream consists of edge computing applications: as 5G demand grows across industries, industrial Internet solutions in various vertical sectors will integrate edge computing to meet the requirements of specialized scenarios.
Edge computing industry chain:
Source: West China Securities
Against the backdrop of the deepening AI wave, the broader trend toward the Internet of Things, and the continued build-out of 5G and other networks, edge-side products and applications are expected to open up massive market space. Two core segments of the industry chain deserve attention: edge chips and edge computing power.
Edge AI chip
According to the deployment location, AI chips can be divided into cloud (data center) chips and edge (terminal) chips.
Cloud chips are deployed in infrastructure such as public, private, or hybrid clouds. They mainly process massive data and large-scale computation, must also support the computation and transmission of unstructured workloads such as voice, images, and video, and generally use multiple processors to complete related tasks in parallel.
Edge AI chips are mainly used in embedded and mobile terminals such as cameras, smartphones, edge servers, and industrial control equipment. These chips are generally small, low-power, and somewhat lower in performance, and typically need to provide only one or two AI capabilities.
At present, applications are moving from the cloud to the edge side, and terminals have spawned large volumes of chip demand.
Source: Yiou Think Tank
From a functional point of view, AI chips are divided into two types: training chips and inference chips.
Due to the limitations of power consumption, computing power and other conditions, most of the AI chips at the edge end are edge inference chips.
From the perspective of technical architecture, AI chips fall into three categories: general-purpose chips (GPUs), semi-custom chips (FPGAs), and fully custom chips (ASICs).
In edge computing, AI chips take data collected by sensors such as microphone arrays and cameras, run inference with the deployed model, and output the corresponding results.
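The sensor-to-result loop above can be sketched in a few lines. This is a toy stand-in, not a real edge SDK: the "model" is a hand-written linear classifier, and the weights, labels, and feature vectors are all illustrative assumptions.

```python
# Minimal sketch of the edge-inference loop: a feature vector extracted
# from a camera frame is fed to a small pre-deployed model on the edge
# chip, and only the classification result is emitted.
# WEIGHTS, BIAS, and LABELS are toy values for illustration.

WEIGHTS = [0.8, -0.5, 0.3]   # assumed weights of a pre-deployed model
BIAS = -0.1
LABELS = {0: "background", 1: "person"}

def infer(frame_features):
    """Run one inference on the edge device: weighted sum + threshold."""
    score = sum(w * x for w, x in zip(WEIGHTS, frame_features)) + BIAS
    return LABELS[1] if score > 0 else LABELS[0]

# Simulated feature vectors from two camera frames.
frames = [[0.9, 0.2, 0.1], [0.1, 0.9, 0.0]]
results = [infer(f) for f in frames]
print(results)
```

A real edge AI chip hardens exactly this multiply-accumulate-and-threshold pattern into silicon, which is why power and area can be so much lower than on a general-purpose cloud processor.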
Because edge computing spans many application scenarios with differing requirements for computing power, energy consumption, and other hardware characteristics, it has spawned broader demand for AI chips.
Demand for AI computing power in different edge scenarios:
Source: iResearch, Huachuang Securities
Domestic chip manufacturers such as HiSilicon, Cambrian, Yuntian Lifei, Guoxin Technology, and Haiguang Information have successively launched cost-effective edge AI chips.
Cambrian's "Siyuan 220" chip is an edge acceleration chip dedicated to deep learning. Built on TSMC's 16 nm process, it combines high computing power (32 TOPS) with low power consumption (10 W).
Yuntian Lifei's DeepEye1000 adopts a 22 nm process, integrates a dual-core vision DSP and a built-in hardware-accelerated operator engine (ACC), and delivers peak computing power of up to 2.0 TOPS, supporting real-time analysis of 4K@30fps video or four channels of HD video in parallel.
Edge computing power
With the development of AI and other scenarios, demand for data processing in data centers and intelligent terminals continues to grow, and edge computing will effectively address the data-processing problems created by this high demand for computing power.
The path for computing power development will be the parallel growth of cloud and edge, with edge computing power the more promising of the two.
At present, the share of high-computing-power devices remains low; for example, fewer than 1% of computers or consoles can run "Microsoft Flight Simulator" even at the lowest graphics settings.
In the metaverse, where computing power requirements are even higher, reaching as many terminals and users as possible means lowering device-configuration requirements, so cloud rendering plus video streaming is a natural approach.
According to IDC, the share of real-time data will increase over the next five years, reaching about a quarter of the total data generation by 2024.
The proliferation of real-time data is driving the build-out of edge infrastructure, making edge computing power increasingly important, and AI applications will grow more dependent on computing power at the edge.
By 2023, more than 50% of the world's new infrastructure will be deployed at the edge, and nearly 20% of the servers used to support AI workloads will be deployed at the edge.
Basic network architecture of edge computing power:
In addition, edge computing power plays an important role in "out-of-the-box" edge cloud, mobile connectivity, universal customer premises equipment (uCPE), and satellite communications (SATCOM) for retail, finance, and remote-connectivity applications.
Misalignment of computing power on the center and edge sides:
Edge computing power layout manufacturers mainly include Capital Online, Youkede, Netsu Technology, Sinnet and Xinju Network.
In general, edge computing can be deployed on demand in the radio access cloud, edge cloud, or converged cloud. For low-latency scenarios, edge computing needs to sit in the radio access cloud near the base station, or even in the terminal itself (such as security cameras and smart cars). For high-traffic hotspots with high bandwidth requirements, it can be deployed in the edge cloud. For scenarios with massive connectivity, it can be deployed in a higher-level converged cloud to cover business needs across a larger region.
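The placement logic just described can be sketched as a simple decision function. The tier names follow the text; the numeric thresholds are illustrative assumptions, not figures from any standard:

```python
# Hedged sketch of edge-deployment tier selection: pick where a workload
# runs (radio access cloud, edge cloud, or converged cloud) based on its
# latency, bandwidth, and connection-scale needs.
# All thresholds below are illustrative, not normative values.

def choose_tier(latency_ms, bandwidth_gbps, connections):
    if latency_ms <= 10:          # ultra-low latency: stay near the base station
        return "radio access cloud"
    if bandwidth_gbps >= 1:       # high-traffic hotspot: local edge cloud
        return "edge cloud"
    if connections >= 100_000:    # massive IoT connectivity: regional coverage
        return "converged cloud"
    return "converged cloud"      # default: higher, wider-coverage tier

print(choose_tier(5, 0.1, 10))          # e.g. a smart-car control loop
print(choose_tier(50, 2, 100))          # e.g. a UHD video hotspot
print(choose_tier(200, 0.01, 500_000))  # e.g. city-scale IoT sensors
```

Real operator deployments weigh cost, site availability, and coverage as well, but the latency-first ordering mirrors the paragraph above.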