
Generate "myriad" test environments based on swimlane technology


"Dynamically generate test environments according to requirements" was originally a requirement put forward by our QA team, and when we first received this request, our first reaction was, how could we have such an urgent idea?

Here's a look at our branch management strategy:

Generate "myriad" test environments based on swimlane technology

After interviews and research, we settled on the principle of one development branch per requirement, which makes management and traceability easy and lets requirements be developed in parallel without interfering with each other.


Since the developers' code branches are all requirement-oriented, QA hoped the test environments could be requirement-oriented too. Not an unreasonable ask at all!

After receiving this requirement, the first implementation that came to mind was to use Docker, Jenkins, GitLab, and so on to build a complete, full-project environment for each requirement!

It's the easiest approach, but also the least feasible!

If an iteration has a dozen or even dozens of requirements, and each one needs a full environment, then server resources would have to be multiplied dozens of times, and the build time would be unacceptable.


As a hard-working ops engineer, to keep food on the table I had to take this on, no matter how difficult!

Read on for the implementation plan!

Design Principles:

(swimlane design diagram)

First, a main swimlane is fixed, containing the complete set of the system's services. Every other swimlane corresponds to one requirement branch: if the requirement changes one or more services, the corresponding swimlane contains just those services.

Then each swimlane is given a dedicated entry point, such as feature-id.test.project.oa.com; domain names and swimlanes map one-to-one. When a given domain is accessed, services present in the corresponding swimlane are hit first, and any service missing from that swimlane falls back to the main swimlane, completing the invocation chain for the whole link.
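The fallback rule can be sketched as follows (an illustrative Python model, not our actual routing code; the lane registry dict is a stand-in for the real service discovery data):

```python
# Illustrative sketch of the swimlane fallback rule (not the production router).
# `lanes` maps a lane id to the set of services deployed in that lane.
MAIN = "main"

def resolve(lanes: dict, lane_id: str, service: str) -> str:
    """Return the lane whose copy of `service` should receive the traffic."""
    # Prefer the service inside the requirement's own swimlane...
    if service in lanes.get(lane_id, set()):
        return lane_id
    # ...otherwise fall back to the main swimlane, which has every service.
    return MAIN
```

The main swimlane always holds the full service set, so the fallback can never miss.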

Implementation steps:

1. Use containers and container orchestration technologies to deploy services

(In principle, the swimlane technique can be implemented without containers, but containers and container orchestration make the whole process much easier, so we will describe the container-based implementation directly.)

  • Why use containers?

First, Docker containers offer strong isolation and security, and an image can be dropped onto any server with Docker installed and run immediately, which greatly improves deployment efficiency and ease of migration.

Second, as a hard-working ops engineer, I have never believed servers are reliable: there is always some reason for a server to crash. Containerized deployment, which decouples services from servers, is the way to go!


(Our docker image contains everything you need for the service to run)

  • Why use container orchestration?

Containerizing services already saves a lot of deployment effort, but some steps still require manual intervention; a container orchestration system can automate them entirely. Let's compare how the two approaches handle a couple of scenarios:

A: The server load is too high and the service needs more nodes

Without an orchestration system: list all servers in the cluster, inspect CPU, memory, network, load, and I/O usage, manually pick a suitable server, deploy to it, and once deployment finishes bring the node into service and route traffic to it from the LB.

With an orchestration system: once the scaling threshold and bounds are defined, it automatically selects a suitable node when load is too high, scales the service out, and routes traffic in.
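The "define threshold and bounds" step maps naturally onto a Kubernetes HorizontalPodAutoscaler. A hedged sketch (the resource names here are hypothetical, not our real ones):

```yaml
# Hypothetical HPA: keep between 2 and 10 replicas of the deployment,
# scaling out when average CPU utilization exceeds 80%.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: zhenai-crm-login-provider
  namespace: project-crm
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: zhenai-crm-login-provider
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```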

B: A server fails or needs to be taken down for maintenance

Without an orchestration system: cut off all traffic to the machine at the LB layer, stop the services deployed on it, then repeat the whole server-selection process above and re-route traffic once deployment completes.

With an orchestration system: when a server fails, it automatically picks a suitable server from the remaining healthy nodes and redeploys identical service nodes; for a server that needs to be removed, simply run the eviction command to smoothly migrate its services off. Quick and convenient.

We chose Kubernetes (K8S) as our container orchestration system: over the past few years it has stood out from the competing orchestration systems and taken the throne, and as its popularity grows its ecosystem keeps maturing.


(Kubernetes cluster architecture diagram)

2. Use Helm for orchestration configuration management

The orchestration system brings many benefits, but is adopting K8S a once-and-for-all solution?

Let's first popularize the arrangement configuration of K8S:

Each service needs a configuration prepared before it goes into the K8S orchestration system, telling K8S how to orchestrate it. It is like a factory full of automated assembly lines: before starting a line, you need a blueprint describing how the machines should operate, and that blueprint is what we call the orchestration configuration.
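For concreteness, the "blueprint" is typically a Deployment manifest. A minimal, hypothetical example (names and ports are illustrative, modeled on the service shown later):

```yaml
# Minimal hypothetical Deployment: the "blueprint" that tells K8S
# what image to run and how many copies to keep alive.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zhenai-crm-login-provider
spec:
  replicas: 2
  selector:
    matchLabels:
      app: zhenai-crm-login-provider
  template:
    metadata:
      labels:
        app: zhenai-crm-login-provider
    spec:
      containers:
      - name: app
        image: inner.harbor.oa.com/crm/openjdk-8u171-jdk-alpine:latest
        ports:
        - containerPort: 9089
```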

So here's the question:

If every service has its own orchestration configuration and a requirement suddenly arrives to change a parameter shared by all services, the conventional practice is to modify every service's configuration. But each service has 4 environments (development, testing, pre-release, production), each with its own configuration...

To put it simply, with 100 microservices we would have to make 400 manual edits!!

Stroking my thinning hair, I realized my time was running out!!


How to efficiently manage the orchestration configuration of K8S?

First, services are abstracted into classes such as front-end, middle layer, Dubbo back-end, and Spring Boot back-end, and for each class we write an abstract orchestration configuration template compatible with every environment.

The commonalities are wrapped into the abstraction, while each service's distinguishing features are stored in a separate YAML configuration file.

Configuration file snippet:

# dubbo provider
zhenai-crm-login-provider:
  base:
    group: crm
    project: zhenai-crm-login
    service_path: zhenai-crm-login-provider
    type: dubbo
  image:
    from_image: inner.harbor.oa.com/crm/openjdk-8u171-jdk-alpine:latest
    files:
      - 'target/zhenai-crm-login-provider.zip:/usr/local/app:unzip'
    workdir: /usr/local/app/zhenai-crm-login-provider
    entrypoint: sh /dubbo_start.sh
  k8s:
    helm_template_name: zhenai-crm-demo-provider
    livenessProbe:
      tcpSocket:
        port: 9089


# tomcat server
zhenai-crm-center-server:
  base:
    group: crm
    project: zhenai-crm-center
    service_path: zhenai-crm-center-manager
    type: tomcat
  image:
    from_image: inner.harbor.oa.com/crm/tomcat-8.0.53-jre8-alpine:latest
    files:
      - 'target/zhenai-crm-center-web.war:/data/web-app/:unzip'
    workdir: /usr/local/tomcat
  k8s:
    helm_template_name: zhenai-crm-demo-server
    livenessProbe:
      httpGet:
        path: /_health.do
        port: 8080           

base section:

Basic project information: which project the service belongs to and the service's path within that project.

image section:

Abstracts the parameters for building the Docker image: which base image to build from, which files to put into the image, plus optional settings such as workdir and entrypoint.

k8s section:

Records the Helm template used to render the configuration, along with health-check information.

In addition, we introduced Helm to manage the K8S configurations: each class of configuration is packaged as a Helm chart and stored in ChartMuseum, the Helm chart repository.

The Helm chart directory structure is as follows:

zhenai-crm-demo-server
├── Chart.yaml
├── Chart_tpl.yaml
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
├── values.yaml
└── values_tpl.yaml


2 directories, 10 files           

Chart.yaml records the basic items (name, version, etc.) of the project, and values.yaml records the characteristic variables (such as health check ports, URLs, and number of replicas) that are different between the projects.

Chart_tpl.yaml and values_tpl.yaml are template files, which will be rendered into Chart.yaml and values.yaml respectively according to the characteristics of the service.

In this way, even a few hundred microservice configurations can be completed with just a few templates!
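The rendering step can be sketched like this (illustrative Python using simple `{{key}}` placeholder substitution; our real pipeline's template syntax may differ):

```python
# Illustrative rendering of values_tpl.yaml -> values.yaml.
# Placeholders like {{port}} are filled in from the per-service feature file.
import re

def render(template: str, features: dict) -> str:
    """Replace {{key}} placeholders with values from the feature dict."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(features[m.group(1)]), template)
```

Run once per service per environment, a handful of templates fan out into hundreds of concrete configurations.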

One point worth emphasizing: keeping the project structure uniform is essential!!

3. Inter-lane service routing implementation

Now that we've covered containers and container orchestration, let's talk about how inter-lane service routing is implemented on top of these technologies.

Removing components such as LB, NGINX, ZK, etc., our services can be abstracted into two layers:

Layer 1: the front end and Dubbo consumers (exposing RESTful APIs)

Layer 2: Dubbo providers (called by the consumers)

The abstracted service hierarchy looks like this:

(service hierarchy diagram)
  • Service routing implementation at the front-end and consumer layers

This is the outermost layer of service access. A unique domain name is generated when each environment is built, e.g. tapd-123.test.project.oa.com. When this domain is accessed, traffic enters the designated nginx-ingress, which performs the first layer of routing: traffic goes preferentially to the environment's own services, and any service the environment lacks is routed to the main swimlane, i.e. our main test environment.

For example, one of the ingress configurations (fragments):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: project-ingress-tapd-123
  namespace: project-crm
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header zone tapd-123;
spec:
  rules:
  - host: tapd-123.test.project.oa.com
    http:
      paths:
      - backend:
          serviceName: zhenai-crm-invite-web-tapd-123
          servicePort: http
        path: /invite
      - backend:
          serviceName: zhenai-crm-app-server-test
          servicePort: http
        path: /api/app           
  • Service routing implementation at the Dubbo provider layer

The Dubbo framework has its own service registry, and K8S does not interfere with its invocation strategy, so it is necessary to manipulate Dubbo's service invocation mechanism.

When a Dubbo consumer selects a provider, it goes through the LoadBalance interface, so what we need to do is rewrite Dubbo's LoadBalance implementation so that its scheduling follows our swimlane policy.

So the idea is: when a provider registers, it carries its own swimlane information, telling the registry which requirement environment it belongs to. When the consumer calls a provider, it selects one according to the traffic's lane ID, and if no matching provider is found, it too falls back to the main swimlane.
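The selection logic inside the rewritten load balancer boils down to something like this (an illustrative Python model of the idea, not Dubbo's actual Java API):

```python
# Model of lane-aware provider selection with main-lane fallback.
# Each provider is (address, lane); traffic_lane comes from the request context.
import random

def pick_provider(providers, traffic_lane, main_lane="main"):
    """Prefer providers registered in the traffic's lane, else the main lane."""
    candidates = [p for p in providers if p[1] == traffic_lane]
    if not candidates:
        candidates = [p for p in providers if p[1] == main_lane]
    # Random choice stands in for whatever load-balancing strategy is configured.
    return random.choice(candidates) if candidates else None
```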

There is also the case of calls between providers. We said above that HTTP can carry the traffic identity in a header, but how does Dubbo's RPC protocol preserve it? You can implement Dubbo's Filter interface, intercept the RPC request, and pass the identity along via Dubbo's implicit parameters (RpcContext.getContext().setAttachment / RpcContext.getContext().getAttachment).
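Conceptually, the filter just copies the lane tag from each incoming request into the outgoing RPC context so it survives every hop. A toy model in Python (the `rpc_context` dict plays the role of Dubbo's RpcContext attachments):

```python
# Toy model of implicit-parameter propagation across RPC hops.
# `rpc_context` stands in for Dubbo's per-call RpcContext attachments.
rpc_context = {}

def inbound_filter(request_headers):
    """Provider side: lift the lane tag out of the incoming request."""
    rpc_context["zone"] = request_headers.get("zone", "main")

def outbound_filter(next_request_headers):
    """Consumer side: re-attach the lane tag before calling the next provider."""
    next_request_headers["zone"] = rpc_context.get("zone", "main")
    return next_request_headers
```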

The following figure shows the invocation relationships after service routing is added:


(Service Invocation Procedure.gif)

Since the service names are hard to make out in the animation, here is an ultra-high-definition still image for everyone:


(Service Invocation Procedure.jpg)

4. Use Jenkins Pipeline to automate the construction and deployment of services

Our entire Jenkins pipeline can be described in a single diagram

(Jenkins pipeline diagram)
  1. Code build stage: Use the corresponding build tools to build according to the project writing language
  2. Check phase: Perform code detection and code scanning
  3. Image build stage: Combine the code build product and the basic image into a service image
  4. K8S orchestration configuration generation stage: mainly for orchestration configuration rendering
  5. K8S cluster update stage: performs a rolling update of the cluster's services using the configuration generated in the previous stage
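The five stages above can be sketched as a declarative Jenkinsfile (the stage bodies and script names here are hypothetical; the real pipeline scripts are not shown in this article):

```groovy
// Hypothetical Jenkinsfile mirroring the five pipeline stages.
pipeline {
    agent any
    parameters {
        string(name: 'REQUIREMENT_ID', description: 'e.g. tapd-123')
    }
    stages {
        stage('Build')         { steps { sh './build.sh' } }       // compile per project language
        stage('Check')         { steps { sh './scan.sh' } }        // code detection and scanning
        stage('Image')         { steps { sh './make_image.sh' } }  // build product + base image -> service image
        stage('Render Config') { steps { sh './render_helm.sh' } } // helm/orchestration config rendering
        stage('Deploy')        { steps { sh './rollout.sh' } }     // rolling update in the K8S cluster
    }
}
```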

Environment Creation Process:

  1. A QA engineer enters a requirement ID on the build page
  2. Automatically find all associated services by this ID
  3. Concurrently compile, image, and deploy matching services
  4. Once the deployment is complete, a URL is generated to access the demand environment
  5. Click on the URL to enter the environment built from the requirement

Pipeline build process


Due to the concurrent build, the entire environment generation only takes 2-3 minutes to complete!

Effects:

After the build is complete, a test environment corresponding to the requirement ID will be generated:

(screenshot of the generated test environment)

Done!

Quick, let Xiaohua start acceptance testing!

Author: Choi Joon Jonny

Source: WeChat public account "Zhenai network technical team"

Source: https://mp.weixin.qq.com/s/EUacgzfTsweOHHYWxjpMKg
