
An introduction to Kubernetes: finally understanding how it works

Author: Efficient O&M

Kubernetes has become the king of container orchestration. It is a container-based cluster orchestration engine with features such as cluster scaling, rolling upgrades and rollbacks, auto scaling, self-healing, and service discovery.

This article gives you a quick tour of Kubernetes, so that you know what people mean when they talk about Kubernetes.

Kubernetes architecture


Viewed from a macro perspective, the overall architecture of Kubernetes includes the Master, Nodes, and Etcd.

The Master is the control node, responsible for controlling the entire Kubernetes cluster. It includes components such as the API Server, Scheduler, and Controller, all of which interact with Etcd to store data.

  • API Server: provides a unified entry point for resource operations and shields the other components from interacting with Etcd directly. It also handles security, registration, and discovery.
  • Scheduler: schedules Pods onto Nodes according to scheduling rules.
  • Controller: the resource control center, which ensures that resources stay in their expected state.

Nodes are the worker nodes. They provide the computing power of the cluster and are where containers actually run; each node runs a container runtime, a kubelet, and kube-proxy.

  • kubelet: manages the container lifecycle, works with cAdvisor for monitoring, performs health checks, and regularly reports node status.
  • kube-proxy: provides service discovery and load balancing within the cluster via services; it watches for service/endpoints changes and refreshes the load-balancing rules.

Start by creating a deployment


A deployment is a controller resource for orchestrating pods, which we will cover later. Taking deployment as an example, let's look at what each component in the architecture does when a deployment resource is created.

  1. kubectl initiates a request to create a deployment.
  2. The apiserver receives the request and writes the related resources to etcd; from here on, every component interacts with the apiserver/etcd in a similar way.
  3. The deployment controller list/watches resource changes and initiates a request to create a replicaSet.
  4. The replicaSet controller list/watches resource changes and initiates pod creation requests.
  5. The scheduler detects the unbound pods and, through a series of matching and filtering steps, selects an appropriate node and binds each pod to it.
  6. The kubelet finds that a new pod has been scheduled onto its node and takes responsibility for creating the pod and managing its subsequent lifecycle.
  7. kube-proxy initializes service-related resources, including network rules for service discovery and load balancing.

At this point, through the division of labor and coordination among the various components of Kubernetes, the whole process from the deployment creation request to the normal operation of each specific pod is complete.
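The flow above is triggered by submitting a deployment manifest. A minimal sketch follows; the resource name and image are placeholders, not taken from the original article:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment    # hypothetical name
spec:
  replicas: 3               # the replicaSet controller will keep 3 pods running
  selector:
    matchLabels:
      app: nginx
  template:                 # pod template used by the replicaSet
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21   # placeholder image tag
        ports:
        - containerPort: 80
```

Applying this with kubectl is what kicks off steps 1 through 7 described above.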

Pod

Among the many API resources in Kubernetes, the pod is the most important and most basic: it is the smallest deployment unit.

The first question to consider is: why do we need pods? A pod can be seen as a container design pattern, created for containers with an "ultra-intimate" relationship. Think of a servlet container deploying WAR packages, or log collection sidecars: these containers often need to share network, storage, and configuration, hence the concept of a pod.


Within a pod, the different containers present a single network identity to the outside world through the infra (pause) container, and storage is shared naturally by mounting the same volume, for example a directory on the host.
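A sketch of this pattern, assuming a hypothetical application container and a log-collecting sidecar that share a volume (all names and images below are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar       # hypothetical name
spec:
  volumes:
  - name: shared-logs          # volume shared by both containers
    emptyDir: {}
  containers:
  - name: app
    image: my-app:1.0          # placeholder image
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app  # app writes logs here
  - name: log-collector
    image: my-log-agent:1.0    # placeholder image
    volumeMounts:
    - name: shared-logs
      mountPath: /logs         # sidecar reads the same files here
```

Both containers also share the pod's network namespace via the infra container, so they could reach each other on localhost.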

Container orchestration

Container orchestration is Kubernetes' signature capability, so it is worth a closer look. Kubernetes provides many orchestration-related controller resources, such as deployments for stateless applications, statefulsets for stateful applications, daemonsets for per-node daemons, and jobs/cronjobs for offline workloads.

Let's take the most widely used deployment as an example. Deployment, replicaSet, and pod form a relationship of layered control: in simple terms, the replicaSet controls the number of pods, while the deployment controls the version attributes of the replicaSet. This design provides the basis for the two most fundamental orchestration actions: horizontal scaling, which controls quantity, and update/rollback, which controls version attributes.

Horizontal scaling


Horizontal scaling is easy to understand: we only need to change the number of pod replicas controlled by the replicaSet, say from 2 to 3, and the scale-out is complete; the reverse is a scale-in.
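In manifest terms, this is a one-field change to the deployment spec (a sketch, assuming a hypothetical deployment named nginx-deployment; selector and pod template are omitted for brevity):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment   # hypothetical name
spec:
  replicas: 3              # changed from 2; the replicaSet controller creates one more pod
  # selector and template omitted for brevity
```

After the change is applied, the replicaSet controller notices the difference between desired and actual pod count and reconciles it.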

Update/rollback


Update/rollback shows why the replicaSet object needs to exist. For example, if we need to update an application with 3 instances from version v1 to v2, the number of pod replicas controlled by the v1 replicaSet gradually goes from 3 to 0, while the number controlled by the v2 replicaSet goes from 0 to 3. When only the v2 replicaSet remains under the deployment, the update is complete. A rollback is the opposite process.

Rolling updates

Notice that in the example above the pods are upgraded one by one rather than replaced all at once, with at least 2 pods available and at most 4 pods in service at any moment. The benefit of this "rolling update" is obvious: if the new version has a bug, the remaining 2 old pods can still serve traffic, and a quick rollback is easy.

In practice, we control the rollover policy by configuring the RollingUpdateStrategy: maxSurge sets how many extra pods (beyond the desired count) the deployment controller may create, and maxUnavailable sets how many old pods it may take down at a time.

Networks in kubernetes

We understand how container orchestration is done, so how do containers communicate with each other?

When it comes to network communication, Kubernetes first requires a foundation of "three-way" connectivity:

  1. Nodes and pods can communicate with each other.
  2. Pods on the same node can communicate with each other.
  3. Pods on different nodes can communicate with each other.

In simple terms, different pods on the same node communicate with each other through the cni0/docker0 bridge, and a node reaches its own pods through the same bridge.

There are many implementations of pod communication across nodes, including the now-common vxlan/hostgw modes of flannel. Flannel learns the network information of other nodes through etcd and creates a routing table for its own node, which ultimately enables cross-host communication between pods on different nodes.

Microservices—Service

Before going further, we need to understand an important resource object: the service.

Why do we need a service? In a microservice architecture, a pod corresponds to an instance, and a service corresponds to a microservice. During service calls, the service solves two problems:

  1. Pod IPs are not fixed, and making network calls against non-fixed IPs is impractical.
  2. Service calls need load balancing across the different pods.

A service selects the appropriate pods through a label selector and builds an endpoints object, that is, a load-balancing list of pods. In practice, we usually label the pod instances of the same microservice with app=xxx and create a service for that microservice whose label selector is app=xxx.
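Such a service might look like the following sketch (the service name, label value, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service     # hypothetical name
spec:
  selector:
    app: xxx           # matches all pods labeled app=xxx
  ports:
  - port: 80           # port exposed by the service
    targetPort: 8080   # port the pod containers listen on
```

Since no type is specified, this service defaults to the clusterIp type discussed below.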

Service discovery and network calls in kubernetes

With the "three-way" network foundation described above in place, we can look at how network calls in a microservices architecture are implemented in Kubernetes.

This topic is covered in more detail in a separate article on how Kubernetes implements service discovery; here is a brief introduction.

Inter-service invocation

First comes east-west traffic, that is, inter-service calls. There are two main invocation methods: clusterIp mode and DNS mode.

clusterIp is a type of service, for which kube-proxy implements a VIP (virtual IP) via iptables/ipvs. Accessing the VIP load-balances requests across the pods behind the service.


clusterIp has had several implementations, including the userspace proxy mode (now largely unused) and the ipvs mode (which performs better).

DNS mode is easy to understand: a clusterIp-mode service gets an A record of the form service-name.namespace-name.svc.cluster.local that points to its clusterIp address. So in everyday use we can simply call the service by service-name.
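For example, another workload in the cluster could address a service by its DNS name, as in this sketch (pod name, image, and service name are all hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client             # hypothetical name
spec:
  containers:
  - name: client
    image: my-client:1.0   # placeholder image
    env:
    - name: BACKEND_URL
      # Full form shown; within the same namespace, "http://my-service" also resolves.
      value: "http://my-service.default.svc.cluster.local:80"
```

The application only ever sees the stable DNS name, never the changing pod IPs.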

Access from outside the cluster


North-south traffic, that is, external requests accessing the Kubernetes cluster, mainly involves three mechanisms: nodePort, loadBalancer, and ingress.

nodePort is also a type of service; via iptables, it lets callers reach the service behind it through a specific port on the host.

loadBalancer is yet another type of service, implemented through the load balancer provided by a public cloud vendor.

Accessing 100 services might require creating 100 nodePorts/loadBalancers. What we want instead is a unified external access layer into the Kubernetes cluster, and this is what ingress does. Ingress provides a unified access layer that routes to different backend services via routing rules; it can be thought of as a "service of services". Ingress is often implemented in combination with nodePort or loadBalancer.
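A sketch of such a routing rule (the ingress name, host, path, and backend service are placeholders; an ingress controller must be installed in the cluster for this to take effect):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress      # hypothetical name
spec:
  rules:
  - host: example.com        # placeholder host
    http:
      paths:
      - path: /api           # requests to example.com/api ...
        pathType: Prefix
        backend:
          service:
            name: api-service   # ... are routed to this hypothetical service
            port:
              number: 80
```

Each additional rule or path maps another external route to another backend service, so one entry point fronts many services.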

So far, we have a brief understanding of Kubernetes' concepts, how it works, and how microservices run on it. Now, when we hear people talk about Kubernetes, we know what they are talking about.

Author: Fredalxin Address: https://fredal.xin/what-is-kubernetes
