On October 23, the 2nd China Cloud Computing Infrastructure Developers (CID) Conference was held in Shanghai. As a co-founder and co-chairman of the CID Conference and a senior principal engineer at Intel Asia Pacific R&D Ltd., Dong Yaozu was interviewed by the conference's organizing committee a few days earlier and shared the story behind organizing the conference. Dr. Dong works mainly in the fields of operating systems and cloud computing and is one of the main contributors to the world's earliest open source virtualization solutions. In 2005, he led the development of the world's first full-virtualization implementation for the Itanium architecture; from 2008 to 2009, he led the development of the world's first SR-IOV virtualization product; and in 2012, he proposed and led a project on coarse-grained lock-stepping high availability for virtual machines.
Hello, Dr. Dong. I hear that you are one of the founders of the CID Conference. What was the main idea behind founding CID?
Dong Yaozu: Since the first decade of the 21st century, virtualization technology has rapidly entered the IT field, using virtualization software to split a single physical machine into multiple virtual computer instances. On top of virtualization and networking technology, a business model emerged in which computing, storage, and network resources are consumed on demand and paid for by usage, much like tap water and electricity. This model, cloud computing, was proposed and quickly received widespread attention. With the promotion of a number of leading enterprises at home and abroad, cloud-based business services began to appear in large numbers and gradually became a new form of IT deployment.
Over the past decade, domestic cloud computing technology has developed rapidly. The number of developers working on cloud computing infrastructure is huge (my conservative personal estimate is more than 100,000), and the number of participating enterprises is also very large. Such a group of developers needs an occasion to exchange ideas about the latest cloud computing technologies, so that they can master and advance them.
Around May 2020, I started to think about organizing such a cloud infrastructure developer conference. On a private occasion, I talked to Chen Xu of Alibaba Cloud and my Intel colleague Feng Xiaoyan, and the three of us hit it off immediately; everyone felt this was a very necessary undertaking. We agreed on the spot to do it, and Chen Xu and I began to act separately, contacting relevant leading enterprises in China to find like-minded promoters and initiators. We reached out to Zhang Yu of ByteDance, Chen Lidong of Tencent, and Liang Bing of Huawei, and all of them strongly endorsed the idea, so the preparatory committee of the first CID Conference was formed. We also brought in an old friend, Associate Professor Chen Yu of Tsinghua University, and launched the first CID Conference together.
Because of the epidemic, the first CID Conference was held together with the 1024 Programmers' Festival, mainly online, though it also attracted many offline participants. Considering the improvement in the epidemic situation, and CID's original intention of giving cloud computing infrastructure developers an opportunity to meet and exchange ideas, we chose to hold this year's CID Conference in a primarily offline format. We met our developer friends in Shanghai on October 23 to communicate and improve together.
As the world's largest chip manufacturer and a leading provider of foundational computing power for cloud computing, can you talk about Intel's latest layout and R&D achievements in cloud computing and cloud computing infrastructure?
Yaozu Dong: Over the past three decades, Intel has established long-term cooperative relationships in China to build and innovate digital infrastructure and promote industrial and economic development. Today there are "four superpower technology forces" in the world that sit at the core of the industry's digital transformation and enable us to drive innovation, exploration, and growth. One of the most important is "cloud-to-edge infrastructure," which creates a dynamic and reliable path connecting computing and data, combining a cloud of effectively unlimited scale and capacity with an intelligent edge that extends it outward.
Intel has a broad layout and deep technology background in the data center, cloud, 5G, and intelligent edge fields. It provides industry customers with powerful performance and workload optimization, accelerates the development and deployment of complex workloads such as artificial intelligence, data analytics, and high-performance computing, and fully empowers the digital transformation of industries.
In April 2021, Intel introduced the first 3rd Gen Intel Xeon Scalable processor (Ice Lake), designed and manufactured on a 10nm process. In August 2021, Intel announced its next-generation data center processor architecture, Sapphire Rapids, featuring a new performance core and multiple accelerator engines. It also announced Intel's new infrastructure processing unit (IPU) and the Ponte Vecchio data center GPU architecture, which has Intel's highest compute density to date.
Intel® attaches great importance to the future of cloud native and has increased its investment and research in this area. Can you tell us about the specific trends in cloud native research and development?
Dong Yaozu: Cloud native is a very big topic involving many technologies and open source projects; if you are interested in the technology, you can check out the Cloud Native Computing Foundation (CNCF) landscape. As far as cloud computing technology is concerned, cloud computing infrastructure has evolved into its second or even third generation of architecture, with cloud native architecture at its core and containers and container orchestration as its technical foundation.
Beyond hardware products and technologies for computing, networking, storage, and security in cloud computing infrastructure, Intel® has also invested substantial resources and manpower in software research, development, and technological innovation. We attach great importance to cloud native R&D, innovation, and the landing of related application scenarios. Across the cloud computing software ecosystem, our contributions are numerous: the Linux kernel, the KVM virtualization technology, the Cloud Hypervisor lightweight virtual machine monitor, Kata Containers, container runtimes, container orchestration technologies, and more.
To let users get the best performance on Intel's® hardware platforms, we also work with industry partners to study and optimize workloads, microservice benchmarks, FaaS (serverless), and so on. At the same time, according to customer needs, we work with ecosystem partners to develop solutions such as multi-cloud management and edge cloud. As you can see, besides cloud native itself being a megatrend, technologies such as multi-cloud and edge cloud have become very hot, and the industry has begun to deploy related products and services.
In terms of specific R&D, we do not reinvent the wheel. Since Kubernetes has become the de facto standard for cloud native container orchestration, our R&D investment in Kubernetes aims to ensure that Intel's® full product line (spanning computing, networking, storage, and security), including CPUs, GPUs, VPUs, FPGAs, Optane storage products, E810 networking products, and SmartNICs, can be quickly integrated into the Kubernetes platform and used stably. At the same time, by integrating various device plugins and resource management technologies, we provide additional functions to accelerate and optimize container workloads.
Take CRI-RM (Container Runtime Interface – Resource Manager) as an example: it dynamically partitions system resources on a node and cooperates with the Kubernetes scheduler to achieve optimal task placement at the node level, adapting Intel® platform features to the Kubernetes cluster environment. With CRI-RM, we can obtain a performance improvement of more than 10% in large-scale enterprise database scenarios.
As another example, we recently proposed in the CNCF a complete upgrade to Kubernetes device management, called the Container Device Interface (CDI). Through CDI, we can give devices on Kubernetes cluster nodes finer-grained allocation and management that better matches the usage logic of applications. The proposal is currently in the KEP phase; we have had in-depth discussions with community partners and received a positive response.
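As an illustration, a CDI device description is a small JSON file placed on the node that names a device and the container edits needed to expose it. The sketch below follows the CDI specification's general shape; the vendor name, device name, and device path are placeholders, not an Intel product definition:

```json
{
  "cdiVersion": "0.6.0",
  "kind": "vendor.example.com/gpu",
  "devices": [
    {
      "name": "card0",
      "containerEdits": {
        "deviceNodes": [
          { "path": "/dev/dri/card0" }
        ]
      }
    }
  ]
}
```

A runtime that understands CDI reads such files and injects the listed device nodes into containers that request `vendor.example.com/gpu=card0`, instead of relying on ad hoc, vendor-specific hooks.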
Of course, as cloud-native technology architectures evolve, serverless and service mesh have also been deployed at scale by cloud service providers, and their use cases are becoming more and more mature.
Kubernetes and containers have been hot technologies in recent years. Can you talk about Intel's® views on containers and Kubernetes, and Intel's® support for and contributions to the development of containers and the Kubernetes community?
Dong Yaozu: Indeed, as you said, these two technologies are very hot, and Intel® attaches great importance to their rise and development. Container technology has become the standard for application software delivery in the cloud era and is the foundation of immutable infrastructure in cloud-native architectures. According to the OCI (Open Container Initiative), it involves two core specifications: one is the container image format standard, and the other is the runtime specification, that is, full lifecycle management of containers.
The community has been very active in advancing these common container technologies. Intel's® main contributions focus on the runtime interface, runtime resource management, and security isolation, which developers see as the Container Runtime Interface (CRI), the Container Runtime Interface – Resource Manager (CRI-RM), and the Kata Containers secure container open source project.
Kubernetes has become the de facto standard for container orchestration and is deployed at scale. As the first project hosted by the CNCF, it is technically very stable and mature, with very good usability. As one of the first CNCF members, Intel's® contributions to the Kubernetes community have always ranked among the best, mainly in underlying resource management and optimization, such as CPU management, NUMA topology management, hugepage allocation and management, and device resource plugin management.
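To make the hugepage management mentioned above concrete, here is a minimal sketch of a standard upstream Kubernetes pod spec that requests pre-allocated 2 MiB hugepages. The pod name and image are illustrative; note that hugepage requests must equal their limits:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        memory: 100Mi
        hugepages-2Mi: 128Mi   # requires hugepages pre-allocated on the node
      limits:
        memory: 100Mi
        hugepages-2Mi: 128Mi   # hugepage request and limit must match
```

The scheduler treats `hugepages-2Mi` like any other node resource, so the pod only lands on nodes where the kernel has reserved enough 2 MiB pages.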
In addition, with the large-scale deployment of microservices, serverless, FaaS, and similar services, the industry has a strong demand for container security isolation, fast startup, and low resource consumption. We therefore developed the lightweight virtual machine monitor project Cloud Hypervisor with partners to deliver efficient resource isolation under cloud-native architectures, and Cloud Hypervisor has become the default virtualization layer of the Kata Containers secure container project.
Intel's® contributions to Kubernetes and containers are many. For example, we use Kubernetes' API extension capabilities to integrate more hardware functions and device resources into the system, enriching Kubernetes' networking, storage, accelerator, GPU, security, and other capabilities. This greatly improves the security and performance of cloud workloads. These open source projects can be found at: https://github.com/intel/intel-device-plugins-for-kubernetes
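As a hedged sketch of how a workload consumes such a device plugin: once a plugin (for example, the Intel GPU plugin from the repository above) is running on the node, a pod simply requests the advertised extended resource in its limits. The image name below is a placeholder, and `gpu.intel.com/i915` is assumed from that repository:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  containers:
  - name: inference
    image: my-inference-image   # placeholder image
    resources:
      limits:
        gpu.intel.com/i915: 1   # extended resource advertised by the GPU plugin
```

Kubernetes then schedules the pod only onto nodes that expose that resource, and the plugin handles mounting the device into the container.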
Another example is autoscaling. We hope to work with ecosystem partners and the Kubernetes community to solve several technical challenges in autoscaling: horizontal autoscaling of container services, vertical autoscaling of container services, and autoscaling of large-scale clusters. We have many related technologies in the community that can help users and customers, and adopting them will greatly improve the resource utilization of cloud service providers. If you see these challenges and would like to work together to solve them, please contact us.
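The first of those challenges, horizontal autoscaling of a container service, can be sketched with a standard upstream HorizontalPodAutoscaler manifest; the deployment name and thresholds below are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # illustrative deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

Vertical autoscaling (adjusting a pod's own resource requests) and cluster autoscaling (adding or removing nodes) are handled by separate components and remain harder problems at large scale.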
Kata Containers is Intel's® improvement on traditional containers, combining the simple deployment of containers with strong security isolation. What is Intel's® attitude toward the future of secure containers?
Dong Yaozu: Intel® is the earliest initiator and a staunch supporter of secure containers, which provide virtual machine-level security isolation, performance isolation, and fault isolation, and naturally support multi-tenant business. Secure containers have become the first choice for many Internet cloud vendors' businesses, especially serverless.
In the future, Intel® will continue to innovate with partners in two aspects:
The first is to further improve the agility and performance of secure containers, so that secure containers can meet common business use needs;
The second is confidential containers. As cloud native carries more and more application scenarios, security has become the core factor in moving many high-value services and data to the cloud, and confidential containers seamlessly integrate confidential computing into the cloud-native and container ecosystem.
At the same time, Intel® has introduced hardware security technologies such as Intel® SGX and Intel® TDX, which we are integrating into the open source projects Kubernetes and Kata Containers. Intel® will steadfastly promote the development of related technologies with the community and ecosystem partners to create a better future for confidential and secure containers.
Dr. Dong, you mentioned earlier the concept of Cloud First Client proposed by Intel®. Can you talk about its technical details and application scenarios?
Dong Yaozu: Cloud First Client (CFC) works in concert with the cloud, using the concepts of cloud computing to flexibly orchestrate computation between the cloud and the client, and is committed to providing the best user experience. Service providers adopting Cloud First Client gain many benefits, such as:
Leveraging the elastic computing power of client devices to provide more services;
Protecting user data by bringing computation closer to the data source;
Letting developers develop a service only once and run it flexibly either in the cloud or on the client.
Regarding the technical details: CFC leverages cloud computing concepts, based on containers, orchestration, and Kubernetes, to give services the flexibility to be deployed in the cloud or on the client. Unlike traditional cloud computing, under CFC the orchestration is driven by clients that have computing power, rather than controlled by the cloud. This provides better elasticity and scalability while also protecting the user's device from being completely taken over by the cloud. When client devices can provide computing power, CFC uses Kubernetes' powerful orchestration capabilities to run the containers that provide a service on the client device. When client capacity is limited, the service can be switched back to the cloud through a DNS switch.
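The switching logic described above can be sketched in a few lines of Python. This is a hypothetical illustration of the idea only, not Intel's CFC implementation; the function, URLs, and threshold are all invented for the example:

```python
def choose_endpoint(local_load: float, local_capacity: float,
                    local_url: str = "http://localhost:8080/stt",
                    cloud_url: str = "https://stt.example.com/stt") -> str:
    """Prefer the on-device service while it has headroom; otherwise
    fall back to the cloud endpoint (CFC does this switch via DNS)."""
    if local_load < local_capacity:
        return local_url   # device has spare capacity: serve locally
    return cloud_url       # device saturated: route back to the cloud
```

In a real deployment the "switch" would be a DNS record update rather than an in-process decision, so existing clients transparently resolve the service to either the local or the cloud address.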
There can be many corresponding scenarios; take speech-to-text as a specific example. In traditional cloud computing, the user's voice must be uploaded to the cloud, and the converted text is returned by the cloud to the client. With CFC enabled, the speech-to-text service is deployed on the client, and users can convert speech to text locally without uploading their voice. Adopting CFC reduces latency, saves bandwidth and cloud computing resources, and also protects user privacy. (End)