
How to use Kubernetes for cluster management? A hands-on share from Rancher China CTO Jiang Peng

Author: Qubit (QbitAI)

Jingshao, from Aofei Temple

Qubit reports | Official account QbitAI

Kubernetes is an open source project released by Google in 2014.

It is an open source platform that automates the deployment, scaling, and operation of application containers, quickly giving users an infrastructure for truly container-centric development environments.

After all, with cloud native in vogue today, containers are becoming the most important form of computing resource.

So how has Kubernetes evolved technically over the past year?

It is probably best summed up in two words: maturity and stability.

It is worth mentioning that more and more heavyweight players are beginning to enter the cloud-native market.

This is no longer an era dominated by startups keen on technological innovation.

Rancher remembers this shift in the landscape well.

Rancher Labs was founded by Liang Sheng, the father of CloudStack, and has always been among the first "vanguards" to enter the field.

Its flagship product, Rancher, an open source enterprise-grade Kubernetes management platform, was among the first to achieve centralized deployment and management of Kubernetes clusters across hybrid cloud and on-premises data centers, and it completed its localization in China in February 2020.

Rancher China CTO Jiang Peng has felt this first-hand: back in 2017-2018, most cloud service providers treated containers as just one of their services, and not the most core one.


But today, every player is elevating the cloud-native services represented by containers into its core service category.

This change also shows in how participants, while embracing the cloud-native field and its technology stack, now think far more about how the business actually lands in production.

For example: application operations, microservice governance, the management and security of Kubernetes clusters, and even integration with AI.

By this reasoning, the key for enterprises to win the cloud-native war may be to focus on ecosystem-level innovation, adapt flexibly to changing user needs, and solve the many problems of cloud-native adoption through innovative products.

Moreover, when it comes to putting cloud native into practice, Kubernetes multi-cluster management, container deployment at the edge, and integration with AI are all pain points that cannot be avoided.

If you have hit snags in your own container practice, read on; it may clear up some of your doubts.

From Development Focus to Real-World Practice: Cluster Management Has Plenty to Say

If the inference above is reliable, one key to doing cloud native in earnest is the management of Kubernetes clusters.

This matches Rancher's earliest product positioning: a focus on multi-cluster management across hybrid-cloud and multi-cloud environments.

What exactly is a multi-cluster?

In practice, most enterprises that have just started with cloud native or Kubernetes do not run a single cluster; at this stage, "a small number of clusters" is the more accurate description.

For example, many enterprises run one cluster for the development environment and one for production, a typical scenario when first going live.

However, as the business grows and the container platform is adopted more widely within the enterprise, users find that more and more applications need to be migrated onto clusters.

Deployments then move from a single data center to multiple data centers, and even to multi-cloud scenarios, which creates multi-cluster management problems.

Jiang Peng further noted that the "multi" in multi-cluster management is not just about the number of clusters; different teams also understand it quite differently.

For platform teams, multi-cluster management means shielding the differences in the underlying infrastructure so as to provide consistent authentication and authorization, along with management and operations capabilities.


Application teams, by contrast, want to deploy to and use these clusters in a unified, consistent way, with the same upper-layer supporting capabilities on every cluster. The key is being able to deploy monitoring, alerting, log collection, and even microservice governance to many clusters quickly.

Take the multi-active data centers of financial users as an example, a typical "two sites, three centers" architecture. The application team's focus is sharper: can core business systems be deployed to the data centers in one click, achieving cross-data-center disaster recovery or active-active operation?

Based on this, Rancher has enhanced its product accordingly, including multi-cluster, multi-tenant monitoring and the deployment and management of a single application across multiple Kubernetes clusters.

Specifically, once a cluster template has been defined, applications from the app catalog can be deployed with one click to multiple projects across any number of clusters.
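The fan-out idea behind one-click multi-cluster deployment can be sketched in a few lines. This is a minimal illustration, not Rancher's implementation; the cluster names, the `environment` label, and the fake "apply" step are all assumptions made for the example.

```python
# Minimal sketch: deploy one application manifest to every cluster
# in a target environment. Purely illustrative; a real system would
# call each cluster's Kubernetes API instead of recording a string.

from dataclasses import dataclass


@dataclass
class Cluster:
    name: str
    environment: str  # e.g. "dev" or "prod"


def fan_out(manifest: dict, clusters: list[Cluster], environment: str) -> list[str]:
    """Deploy the same manifest to every cluster in the target environment."""
    deployed = []
    for cluster in clusters:
        if cluster.environment != environment:
            continue
        # Stand-in for `kubectl apply --context <cluster>` or an API call.
        deployed.append(f"{manifest['name']} -> {cluster.name}")
    return deployed


clusters = [
    Cluster("dc-beijing-1", "prod"),
    Cluster("dc-shanghai-1", "prod"),
    Cluster("dev-local", "dev"),
]
print(fan_out({"name": "payment-service"}, clusters, "prod"))
```

The point is that the application is defined once and the target set of clusters is a parameter, which is what makes "one click to N clusters" possible.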

Security has always been a top concern for enterprises, and container cluster management is no exception.

Viewed from the perspective of the whole platform, security must be an end-to-end solution.

From the perspective of container platform security, the concern is often not just the security of the cluster itself, but the security of the whole process from application development to the delivery of containerized workloads.

Image security is one example: how to ensure that the components and services built into a container image carry no known vulnerabilities is a very common user concern.

For cluster security itself, meeting the industry's best-practice recommendations becomes important.

For example: follow security benchmarks, close anonymous access ports, use mutual TLS encryption between components, and check whether components are started with least privilege.
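A benchmark-style check like this boils down to auditing component startup flags against a rule list. Below is a toy sketch in that spirit; the flag names match real kube-apiserver options, but the rule set is a small illustrative subset, not a complete benchmark.

```python
# Toy sketch of a benchmark-style configuration audit for an API
# server's startup flags. The three rules below illustrate the idea:
# no anonymous access, no AlwaysAllow authorization, mutual TLS.

def audit_flags(flags: dict[str, str]) -> list[str]:
    """Return a list of findings for a set of startup flags."""
    findings = []
    if flags.get("--anonymous-auth", "true") != "false":
        findings.append("anonymous access is not disabled")
    if flags.get("--authorization-mode", "AlwaysAllow") == "AlwaysAllow":
        findings.append("authorization mode should not be AlwaysAllow")
    if "--kubelet-client-certificate" not in flags:
        findings.append("no client certificate for mutual TLS to kubelets")
    return findings


apiserver = {
    "--anonymous-auth": "false",
    "--authorization-mode": "Node,RBAC",
}
print(audit_flags(apiserver))
```

Real tooling in this space (e.g. kube-bench) works the same way at a much larger scale: read the running configuration, compare it against a codified benchmark, and report deviations.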

In addition, the operational security of a cluster also involves configuration above the cluster at the application level, such as the container runtime.

In multi-tenant scenarios, network isolation between tenants must also be considered, sometimes with technical support from professional security vendors.

Although container security is very complex, security management cannot be ignored.

Where is the difficulty of cluster management in edge scenarios?

In fact, container technology is nothing new at the edge, and Jiang Peng agrees with this view.

Microsoft's Azure IoT Hub, for example, is a well-known case in the industry.

The difference is that it is based on Docker container technology and, at this stage, may not involve orchestration like Kubernetes.

More importantly, container technology is by nature a way to package and deliver applications.

That natural fit is exactly what uniform, standardized deployment at edge scale requires.

This has become a commonplace observation.

Yet despite these innate properties of containers, deploying and managing them at the edge is not as simple as on the cloud, in data centers, or even on heterogeneous infrastructure.

Intuitively, the number of clusters at the edge is no longer the dozens seen in traditional data centers.

Instead, it may reach thousands, tens of thousands, or even hundreds of thousands of clusters.

More importantly, the biggest difference between edge scenarios and traditional cloud or data-center scenarios is that the edge is extremely diverse and fragmented.

In a data center, with standard x86 servers and unified storage, Kubernetes can provide a consistent API to support standardized business operations.

At the edge, however, both the devices themselves and the protocols used in business scenarios vary enormously.

"As a simple example, some manufacturing customers will have a lot of Windows systems on their production lines than Linux systems, and even more may not be Windows Servers. If these devices interact with other devices on the production line through some protocol, the difficulty can be imagined. ”

So how exactly do you manage such a hyperscale cluster?

At present, no unified platform or specification has emerged the way it has in data center scenarios.

In edge scenarios, users need to containerize, or make cloud-native, some of their systems and applications, and that gradual adaptation and transformation takes time.

Still, Jiang Peng acknowledged that using container technology at the edge is indeed a trend, and a demand that urgently needs to be carried forward.

"We've seen the Docker engine used on the edge side, but whether to implement more powerful orchestration capabilities on the edge side is still to be discussed."

Qubit understands that most cloud service providers committed to containers still choose to deploy standard Kubernetes at the edge for management; Rancher is an exception, which is exactly why K3s came into being.

K3s reduces the complexity of deploying and managing Kubernetes: there are no complex components to manage, and it deploys with one click, out of the box.

Nor do you have to go to great lengths to maintain a key-value store like etcd.

In short, reduced resource consumption, letting users run Kubernetes clusters on devices with limited compute, may be K3s's biggest advantage.

In addition, the recently open-sourced Fleet is Rancher's approach of managing clusters the way one manages a fleet, delivering a centralized management experience across massive numbers of Kubernetes clusters.

"The focus is no longer on the application deployment of a cluster, but on the cluster as a cluster group, from a higher dimension to manage."

Container + AI, what can it do?

Today, whether among professional AI vendors or in real-world AI training applications, there are more and more cases of running AI services on top of containers, and this has gradually become an important topic in exploring how the business lands.

For example, training AI models means reading huge volumes of data, images, and source files, which inevitably consumes compute at scale, and large-scale computing is exactly the kind of scenario containers suit.

The picture is mixed, however: AI in containers still faces challenges in practice.

Take the granularity of resource sharing.

Today, Kubernetes itself is not very strong at sharing and scheduling GPU resources.

Jiang Peng said that without a vGPU-style implementation from Nvidia in this setting, resource scheduling granularity in standard or community Kubernetes clusters is relatively coarse, and resource utilization is not high.

In addition, in containerized scenarios, performance when handling the masses of small, fragmented files involved in model training may not be ideal.
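The coarse-granularity problem is easy to see once GPUs are modeled the way the standard Kubernetes device plugin model treats them: as whole, integer-count devices. Below is an illustrative sketch with made-up numbers; a pod that only needs a fraction of a GPU still occupies one entirely.

```python
# Illustrative sketch of whole-GPU scheduling: pods request integer
# GPU counts (as with "nvidia.com/gpu" resource requests), so a node
# with 2 GPUs can only admit 2 single-GPU pods even if each pod would
# use a fraction of its GPU. Numbers are invented for the example.

def schedule(node_gpus: int, requests: list[int]) -> tuple[list[int], list[int]]:
    """Greedily admit pods requesting whole GPUs; return (admitted, rejected)."""
    admitted, rejected = [], []
    free = node_gpus
    for want in requests:
        if want <= free:
            admitted.append(want)
            free -= want
        else:
            rejected.append(want)
    return admitted, rejected


# Four pods each request 1 whole GPU on a 2-GPU node:
admitted, rejected = schedule(node_gpus=2, requests=[1, 1, 1, 1])
print(admitted, rejected)
```

If each admitted pod actually uses only part of its GPU, the remainder sits idle while other pods are rejected, which is the low-utilization problem described above. Finer-grained sharing requires mechanisms outside stock Kubernetes scheduling.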

Even so, Gartner made its stance clear in the AI forecast it released in 2019.

While enterprise CIOs make AI their number one priority, the key role of Kubernetes should not be underestimated.

According to Gartner, Kubernetes will be the preferred operating environment and platform for AI applications within enterprises.

Containers and serverless will enable machine learning models to be served as standalone capabilities, letting AI applications run with less overhead and giving Kubernetes + AI a bright future.

What do you think about the maturity and stability of Kubernetes?

Attached: Interview guest profile


— Ends —

Qubit QbitAI · Signed Toutiao author

Follow us and be the first to know about cutting-edge technology developments
