
Technical Insights | Practice and Exploration of Interface Automation


1

Preface

In this issue, we would like to share our practice in interface automation.

In the test pyramid model, interface tests sit in the middle layer. Compared with UI automation, interfaces are relatively stable, and interface tests can verify not only the validity of individual parameter values but also the integrity and correctness of product functionality. On top of a reasonable interface design, maintaining a stable, usable set of automated cases is therefore an ideal solution.

[Figure: the test pyramid]

2

Analysis of the current situation

With a growing customer base, Guanyuan's products are in a stage of rapid development: product versions iterate quickly, with a series of releases every month or even every week, and business complexity keeps rising. The number of test cases grows accordingly; the BI product line alone currently has 4000+ cases. Before each product version is released, a lot of testing work is needed, including new-requirement testing, trunk-branch testing, and regression testing. Because Guanyuan follows a private-deployment model, multiple product versions must also be maintained at the same time, multiplying the testing workload; the pressure on business testing can be imagined.


To a certain extent, this status quo and the business pain points it brings push us to improve efficiency through automation as soon as possible. Interface automation has a high input-output ratio, finds defects close to real time, and shifts testing left so that problems are found early, which is one of the reasons we promote test automation.

3

Solution selection

There are many automation platforms and testing tools on the market, including popular open-source frameworks such as unittest, JUnit, TestNG, and Robot Framework. Each framework has its advantages and disadvantages; which one meets our current needs? We selected a few representative frameworks for a simple comparison.

Framework             | unittest                                                      | JUnit                                | TestNG
Scripting language    | Python                                                        | Java                                 | Java
Grouped execution     | classes and methods can be tagged via @pytest.mark (under pytest) | grouped execution not supported  | supported via the groups attribute of @Test
Parameterization      | requires the DDT library                                      | @ParameterizedTest annotation        | @Parameters annotation plus parameters on the test method
Assertions            | multiple assertion formats supported                          | multiple assertion formats supported | multiple assertion formats (assertEquals, assertTrue, etc.)
Reports               | requires the HTMLTestRunnerNew library                        | provides data in XML format          | generates test reports automatically
Rerun on failure      | supported                                                     | not supported                        | supported
Parallel execution    | not supported                                                 | supported                            | supported

Comparing parameterization, assertions, reporting, and other dimensions shows that TestNG is the more powerful framework functionally. It is scalable and easy to maintain, and it comes with annotation tags, reports, and flexible case configuration. Guanyuan currently has multiple product lines, and TestNG lets us customize and wrap interfaces for various protocols, integrating the individual project frameworks of each product line. However, native TestNG does not fully meet our needs in some areas. The native test report is plain and not very readable, and we want to display the test version, pass rate, failure details, and other information more intuitively. Native TestNG also only supports group filtering through XML configuration, which is inflexible; we need to execute cases of different priorities in different scenarios. Therefore, based on Guanyuan's business, project architecture, and product technology stack, we extended TestNG and built a continuous integration testing framework suited to Guanyuan's business model.
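To illustrate the flexibility gap: native TestNG expresses group filtering in testng.xml, but the same filtering can be driven programmatically so that, for example, a build parameter selects the priority to run. The following is a minimal sketch of that idea; the class and group names are illustrative, not taken from our framework.

import org.testng.TestNG;
import org.testng.xml.XmlClass;
import org.testng.xml.XmlSuite;
import org.testng.xml.XmlTest;

import java.util.Collections;

public class GroupRunner {
    // Run only the cases tagged with the given priority group, e.g. "P2".
    public static void runByPriority(String priority) {
        XmlSuite suite = new XmlSuite();
        suite.setName("api-suite");
        XmlTest test = new XmlTest(suite);
        test.setName("priority-run");
        // Equivalent to <include name="P2"/> inside <groups> in testng.xml
        test.addIncludedGroup(priority);
        test.setXmlClasses(Collections.singletonList(
                new XmlClass("com.example.UserApiTest"))); // illustrative test class
        TestNG testng = new TestNG();
        testng.setXmlSuites(Collections.singletonList(suite));
        testng.run();
    }
}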

4

Introduction to the framework

Let's first look at a simple diagram to understand what the continuous integration testing framework does at each stage.

[Figure: the one-stop automated testing workflow]

Update Check (pre-test environment preparation): automatically maintains the test environment so that it is available and meets test requirements, ensuring that subsequent test execution is correct. An automated test execution plan is created every day; if there is no code update that day, it is not executed, reducing server resource consumption.

API Check (API monitoring): monitors newly added APIs daily and notifies the owner, to keep automated API coverage up.

API Testing (automated testing): image deployment, test execution, report collation, and message notification.

Together these make up Guanyuan's one-stop automated testing: a test plan only needs to be initiated on Jenkins, manually or automatically, and testing proceeds according to the selected configuration until the test report arrives. Let's focus on the core part, API Testing.

[Figure: modules involved in API Testing]

As the preceding figure shows, a case flows through the following modules during API testing:

  • Jenkins module: testers select the test type, test branch, and other optional parameters (such as priority) to trigger a build; scheduled triggering is also supported.
  • Case data ingestion module: reads the data in the test file (case description, input parameters, expectations, ignored values, etc.). For ease of management, cases are divided into business modules, such as the user module and the system settings module, and each module's data is isolated.
  • Case execution module: preprocesses the benchmark data according to the test data and triggers API requests.
  • Assertion module: extracts key responses and provides diversified assertion capabilities.
  • Report module: integrates third-party plugins to generate reports, and integrates Jenkins and DingTalk for message notification.

5

Practice and exploration

Having a technically feasible framework is only the first step. We care more about the framework's maintainability and ease of use: if it demands heavy maintenance later, or is clumsy to develop against, it will not improve efficiency. We did hit some problems in actual use; the TestNG framework has a learning cost, everyone's coding background differs, and students with a weaker code foundation found it hard to use. To lower the barrier to entry, the Guanyuan automated testing framework does some extra work in data ingestion, assertion services, and so on.

Templated data ingestion

In the iteration of Guanyuan BI products, changes to the benchmark data, especially the expected data, are fairly frequent. In many cases a failing case simply means that the interface added or removed a field, which has nothing to do with the interface request itself.

To address this, we support file-based data ingestion, currently in two case formats, YAML and Excel, managed on the test server. The files contain fields such as the case description, input parameters, expectations, and ignored values.


The file is read over SMB and returned to the test class in Object[][] format as the request input parameters.

/**
 * Provides test case data for the DataProvider.
 *
 * @param path file path, used to determine the file type
 * @param io   input stream of the file, read over SMB
 * @return Object[][] one row of case data per test invocation
 * @throws IOException if the file cannot be read
 */
public static Object[][] read(String path, InputStream io) throws IOException {
    log.info("Reading file: " + path);
    Object[][] result = new Object[][]{};
    if (path.endsWith(".xlsx")) {
        result = ExcelReader.readInputStream(path, io);
    } else if (path.endsWith(".yaml")) {
        result = YamlUtil.getYamlObject(io);
    }
    io.close();
    if (result == null) {
        throw new RuntimeException("Failed to read the test case file");
    }
    return result;
}
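The article does not show how the SMB input stream is obtained; a minimal sketch using the jcifs library might look like the following. The host, share path, and credentials are placeholders, and read() is the method above.

import java.io.InputStream;

import jcifs.smb.NtlmPasswordAuthentication;
import jcifs.smb.SmbFile;
import jcifs.smb.SmbFileInputStream;

// Hypothetical helper: open the case file on the test server over SMB
// and hand the stream to read() above, which closes the stream itself.
public static Object[][] readFromSmb(String path) throws Exception {
    NtlmPasswordAuthentication auth =
            new NtlmPasswordAuthentication("DOMAIN", "user", "password");
    SmbFile file = new SmbFile("smb://test-server/cases/" + path, auth);
    InputStream io = new SmbFileInputStream(file);
    return read(path, io);
}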

When a specific case is executed, the case data is supplied through TestNG's @DataProvider.

@DataProvider(name = "dataPro")
public Object[][] dataPro() throws Exception {
    // Case data (description, input parameters, expectations) read over SMB
    return smbUtils.read("excel path");
}

@Test(dataProvider = "dataPro", testName = "XXXXTest", groups = {"P2"})
public void enableUserTest(String description, String parameter, String expected) {
    // TODO: send the request and assert on the response
}

When a field in a case changes, you only need to modify the contents of the Excel file; likewise, to extend an interface scenario, you just add a new record in Excel, which reduces development cost to a certain extent. Recently we also developed a B/S-architecture case management platform, which is easier to use than file-based data management: case benchmark data can be updated through the UI, which is friendlier to students with a weaker code foundation.

Interface requests

Before a case executes, there is often a lot of preparation, such as environment switching and user login. To log in, for example, you need to supply a domain name, user name, and password, call the login API, and parse the response body to obtain a token; these steps are cumbersome and repetitive. Therefore, all test classes inherit from the ApiAbstract parent class, which uses the @BeforeSuite and @AfterSuite annotations to prepare the HTTP connection, user login, environment version retrieval, and interface timing. If further needs arise, it is also easy to extend.

public void setUp() {
    client = HttpClientGenerator.getHttpClient();
    user = new User(client, re);
    auth = new Auth(client, re);
    setting = new Setting(client, re);
    publicAPI = new PublicAPI(client, re);
    // Log in
    Response res1 = auth.signIn(Env.LOGININFO);
    // Obtain the SSO login token
    getSsoToken();
    // Obtain the public API token
    getPublicToken();
    // Obtain the server version number
    Env.version = PodServices.getVersion();
}
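The snippet above is the preparation itself; a minimal sketch of how it might hang off TestNG's suite-level hooks in the ApiAbstract parent class follows. The tear-down responsibilities are assumed from the prose, not shown in the article.

import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeSuite;

public abstract class ApiAbstract {
    // Runs once before the whole suite: HTTP client, login, tokens, server version
    @BeforeSuite(alwaysRun = true)
    public void beforeSuite() {
        setUp();
    }

    // Runs once after the whole suite (assumed responsibilities):
    // release the HTTP connection, summarize interface timing, etc.
    @AfterSuite(alwaysRun = true)
    public void afterSuite() {
    }

    public void setUp() {
        // body as shown in the snippet above
    }
}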

At the same time, to reduce the impact of network instability, all test cases are attached to a listener that retries failed cases; a case that still fails after three attempts is considered to have a defect or a performance problem.

public class RetryListener implements IAnnotationTransformer {
    @Override
    public void transform(ITestAnnotation testAnnotation, Class testClass,
                          Constructor testConstructor, Method testMethod) {
        IRetryAnalyzer retry = testAnnotation.getRetryAnalyzer();
        if (retry == null) {
            testAnnotation.setRetryAnalyzer(Retry.class);
        }
    }
}

public class RetryTestNGListener extends TestListenerAdapter {
    @Override
    public void onTestSuccess(ITestResult tr) {
        super.onTestSuccess(tr);
        // For dataProvider cases, reset the retry count after each success
        Retry retry = (Retry) tr.getMethod().getRetryAnalyzer();
        retry.reset();
    }

    @Override
    public void onTestFailure(ITestResult tr) {
        super.onTestFailure(tr);
        // For dataProvider cases, reset the retry count after each failure
        Retry retry = (Retry) tr.getMethod().getRetryAnalyzer();
        retry.reset();
    }
}
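The Retry class referenced by these listeners is not shown in the article; a minimal sketch consistent with the three-attempt policy and the reset() calls would be:

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class Retry implements IRetryAnalyzer {
    private static final int MAX_RETRY = 3; // per the policy described above
    private int count = 0;

    // Called by TestNG when a case fails; returning true re-runs it.
    @Override
    public boolean retry(ITestResult result) {
        if (count < MAX_RETRY) {
            count++;
            return true;
        }
        return false; // still failing after three attempts: treat as a real defect
    }

    // Called by the listeners so each data-driven row starts with a fresh count.
    public void reset() {
        count = 0;
    }
}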

Diverse assertions

After an interface request completes, asserting on the response body is essential. Because product scenarios are diverse and inconsistent, different interfaces often differ greatly in structure, which requires different assertion handling for different interfaces.

For JSON, the most common format in Guanyuan BI products, the framework provides several assertion methods, such as regular-expression verification, structure-inclusion verification, and order-ignoring structure verification, with jsonPath assertions as the mainstay. We will share the usage scenarios and validation logic of each assertion method in the future.
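As a flavor of the jsonPath style, here is a minimal example using the Jayway JsonPath library with TestNG's Assert; the response body and field names are illustrative.

import com.jayway.jsonpath.JsonPath;
import org.testng.Assert;

public class JsonPathAssertDemo {
    public static void main(String[] args) {
        String body = "{\"result\":\"ok\",\"data\":{\"user\":{\"name\":\"zhang\",\"enabled\":true}}}";
        // Extract fields from the response body by jsonPath and assert on them
        String name = JsonPath.read(body, "$.data.user.name");
        boolean enabled = JsonPath.read(body, "$.data.user.enabled");
        Assert.assertEquals(name, "zhang");
        Assert.assertTrue(enabled);
    }
}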


Test reports that can quickly locate problems

The purpose of testing is to ensure the stability and accuracy of interface results, and when a case fails we need to locate the cause quickly. Most third-party test reports only show an error message or the failing line number, so testers spend a lot of time confirming the cause and may need to re-run the case.

When an assertion finishes and the case result does not meet expectations, we throw an exception and generate a detailed report. The test report is templated: the request error message and the error report appear in the final automated report, so that R&D and test students can quickly locate the cause of failure. After the test cases finish, a Jenkins script aggregates all test reports and publishes them to Tomcat for easy online viewing and later backtracking.
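One simple way to surface those details, sketched here rather than taken from our implementation, is to log the request and response into the TestNG report before rethrowing the assertion failure:

import org.testng.Assert;
import org.testng.Reporter;

public class ReportingAssert {
    // Assert equality, attaching request/response details to the report on
    // failure so the cause is visible without re-running the case.
    public static void assertWithDetail(String description, String request,
                                        String actual, String expected) {
        try {
            Assert.assertEquals(actual, expected, description);
        } catch (AssertionError e) {
            Reporter.log("Case: " + description);
            Reporter.log("Request: " + request);
            Reporter.log("Actual response: " + actual);
            Reporter.log("Expected: " + expected);
            throw e; // keep the case marked as failed
        }
    }
}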


Expansion of automation capabilities

The framework makes automation run, but it is not yet a continuous integration framework in the strict sense: establishing automated execution plans, monitoring interface changes, and following up on automation results in time still need to be addressed.


Our goals: connect continuous integration so that code submissions trigger automation, new feature releases are verified quickly, and automated smoke tests are established; keep records of automated execution results, including failure rate and number of executions, to support later analysis of quality weaknesses; and apply automation at more stages of the development process to continuously safeguard the quality of submitted code. So far we have completed some of these and are continuing to optimize:

  • Automatically create automated test execution plans, such as daily tests and regression tests
  • Monitor newly added APIs daily and notify the owner, to keep API automation coverage up
  • Automatically send the test report to the owner with a DingTalk notification, and keep a record of automation execution results

6

Summary

Interface automation has been practiced across many of Guanyuan's product lines and has achieved notable results in improving regression-testing efficiency and shifting testing left. Combined with Jenkins and Git, we have established automation plans, monitored interface changes, and followed up on automation results in time to continuously safeguard product quality. More detailed CI/CD practices will be shared in the future.

Author: Test Xiao Zhang, Test Development Engineer at Guanyuan, deeply engaged in the product business and in implementing automation standards, committed to improving testing efficiency through technology and ensuring product quality.

Source: WeChat public account "Guanyuan Data Technical Team"

Source: https://mp.weixin.qq.com/s/kGdlGSiC4ooyVbTdhRMVrA
