Tame your Locust or Stress Tests with Locust.io

Project Stories

Introduction

Surprisingly, the Locust stress testing platform is still not widespread in the IT community. At the same time, the open-source project Locust.io has existed for 9 years, and you can find a number of instructions on the Internet on how to get started with it quickly and efficiently.

In contrast, the set of case studies on more complex implementations of "stress tests" is relatively modest. The ambition of this article is to expand it and show why Locust is worth using on a larger project.

What Will It Be About?

Locust as the Basis for a Robust Framework for Stress Tests

Managing Data Sets during Test Execution

Integration into BI Reporting Solutions

Dynamic Control of Test Scenarios

Assignment

The task was clear: the web pages generated by the CMS should be stress tested. Three test scenarios were identified to verify the optimal settings of the shared infrastructure:

  1. Basic: the classic workload generated by regular website visitors

  2. Extended Basic: workload from regular visitors combined with a situation where the CMS system builds the final website based on the edited content for the site

  3. Editing: the workload generated by editors while changing content, when the CMS system prepares a preview of what the page with the given modifications will look like

Technologies Used

The frontend of the application is created in React, a JavaScript framework; the application layer runs on the .NET platform in the Azure cloud. Development is managed in TFS-Git, which also applies to the stress test code.

The Locust tool was tested and selected as the cornerstone of the stress test solution thanks to the following key features:

  • High performance, allowing 1k – 10k virtual users to be simulated from a mainstream PC

  • Flexibility and community support of the Python language, making it possible to utilize existing and/or write new, complementary libraries and to address less standard or project-specific tasks when implementing test scenarios

  • Ready for integration into CI / CD pipelines

  • Readiness of the framework for integration into a selected solution for processing, analysis and visualization of data captured during the test run (in this case Elastic + Kibana)

For other advantages of the Locust platform, consult the documentation on Locust.io directly, where examples of the basic principles and uses are also available. In addition, you will find, for example, this and many other tutorials on the Internet. If you are meeting Locust for the first time, I recommend looking through at least the listed links before continuing with the article.

Locust thus formed the core of the framework, the overall architecture of which is shown in the following figure:

We will take a closer look at each module.

Framework Modules

Configuration of Tests and Test Data

Because one of the goals was to allow developers to run tests whenever needed, i.e., to get by without a specialized performance engineer for day-to-day test runs, the test run parameters and test data have been extracted into a configuration file:
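The original listing is not reproduced in this text, so below is a minimal sketch of what such a config.py can look like. The variable names LOAD_SCENARIO, LOAD_STAGES and SITES come from this article; the concrete fields and values are only illustrative assumptions.

## config.py - a minimal sketch; the concrete fields and values are illustrative only

## which scenario to run (see the chapter on controlling the mix of test cases)
LOAD_SCENARIO = 2

## load profile: how many virtual users to reach in each stage and for how long
LOAD_STAGES = [
    {"duration": 300,  "users": 100,  "spawn_rate": 10},   # warm-up
    {"duration": 1200, "users": 1000, "spawn_rate": 50},   # steady full load
    {"duration": 300,  "users": 0,    "spawn_rate": 50},   # ramp-down
]

## tested sites and the csv files with their test data
SITES = {
    "site-a": {"endpoints_csv": "data/site_a_urls.csv", "previews_csv": "data/site_a_previews.csv"},
    "site-b": {"endpoints_csv": "data/site_b_urls.csv", "previews_csv": "data/site_b_previews.csv"},
}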

This is an ordinary Python file, which allows freedom in defining the data structures for the test parameters. E.g., LOAD_STAGES and SITES are variables of native Python data types, but they remain very readable and understandable for the test users (of course, thanks also to the appropriate comments).

Run-time data, i.e., data used during the test run, should be encapsulated into dedicated classes:
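The original listing is not reproduced here either; the following is a simplified sketch of what such a dataReader.py module might contain. The method names GetRecord(), getRecord(), setReady() and testRecord() follow from the rest of the article; everything else (csv columns, the ready flag) is an assumption.

## dataReader.py - a simplified sketch, not the project's actual implementation
import csv
import random

class Endpoints:
    """URLs loaded from a csv file; records may be shared by parallel virtual users."""

    def __init__(self, csv_path):
        with open(csv_path, newline="") as f:
            self.records = list(csv.DictReader(f))

    def GetRecord(self):
        # any virtual user may pick any URL, repeated use does not matter here
        return random.choice(self.records)

class Previews:
    """Records that can be reused, but never by two virtual users at the same time."""

    def __init__(self, csv_path):
        with open(csv_path, newline="") as f:
            self.records = {row["RowID"]: {"data": row, "ready": True}
                            for row in csv.DictReader(f)}

    def testRecord(self):
        # is at least one record free? (nothing is locked yet)
        return any(r["ready"] for r in self.records.values())

    def getRecord(self):
        # return a free record and lock it against use by another virtual user;
        # greenlets do not switch inside this loop, so no extra locking is needed
        for rec in self.records.values():
            if rec["ready"]:
                rec["ready"] = False
                return rec["data"]
        return None

    def setReady(self, row_id):
        # release the record for further use
        self.records[row_id]["ready"] = True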

The code sample above demonstrates how easy it is to build simple or more complex logic for operations on data objects. This is useful, for example, if data records are to be used repeatedly but one data record must not be used by multiple parallel virtual users at the same time. Examples of use will be shown later in the chapter on the implementation of test scenarios.

ELASTIC - Storing Test Run Data

Although Locust's web UI allows you to track several statistics that are sufficient for a basic overview of the stress test run, Locust does not have its own robust data analytics or reporting layer. Nowadays, with a number of specialized, very effective tools available for this purpose, this seeming shortcoming is more than compensated by the straightforwardness with which Locust can integrate into any BI solution. Whether in our case it is an integration with Elastic or some other data solution, we always face the task of storing two categories of data:

  1. Monitored performance quantities of the loaded application, typically response time, job completion time, and the level of utilization of infrastructure resources (CPU utilization, memory usage, number of active nodes, etc.)

  2. Environment parameters of the current test run. For example, the current number of virtual users, the number of requests processed in parallel

If you are already familiar with the basics of Locust, you know that each request and subsequent response is captured by a basic set of statistics rendered on the web UI. But how do we send the data from which these statistics are calculated to an external database? You may also have noticed that Locust visualizes the number of virtual users in a graph, a figure that would certainly be useful to us as well. The following code adds listeners to the selected Locust events:
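The original listing is not shown here; what follows is a sketch of how the listeners might be registered, assuming a Locust 1.x style event API. The forwarder module is the hypothetical wrapper around the Elastic client (see the article linked below), and the log_VUs / log_memory loggers are described in the next section.

## locustfile.py (excerpt) - a sketch of the event listeners, assuming a Locust 1.x style API
import gevent
from locust import events

from forwarder import forwarder            # hypothetical wrapper around the Elastic client
from loggers import log_VUs, log_memory    # custom loggers described below

@events.request_success.add_listener
def additional_success_handler(request_type, name, response_time, response_length, **kwargs):
    # forward the natively captured statistics of a successful request
    forwarder.add({"type": "success", "name": name,
                   "response_time": response_time, "response_length": response_length})

@events.request_failure.add_listener
def additional_failure_handler(request_type, name, response_time, exception, **kwargs):
    # forward the statistics of a failed request, including the exception text
    forwarder.add({"type": "failure", "name": name,
                   "response_time": response_time, "exception": str(exception)})

@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    # start the custom loggers in their own greenlets so they run alongside the test
    environment.custom_greenlets = [
        gevent.spawn(log_VUs.log, environment),
        gevent.spawn(log_memory.log),
    ]

@events.test_stop.add_listener
def on_test_stop(environment, **kwargs):
    # stop the custom loggers when the test ends
    for greenlet in getattr(environment, "custom_greenlets", []):
        greenlet.kill()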

The first two, additional_success_handler and additional_failure_handler, basically just forward the natively captured statistics of a successful / failed request "somewhere" further using forwarder.add(message).

The implementation of the forwarder object, which encapsulates the integration with Elastic, will not be discussed here. Everything you need can be found in Karol Brejna's article Locust.io Experiments — Emitting Results to External DB.

The other two, on_test_start and on_test_stop, which run when the test starts and ends, contain examples of custom loggers implemented beyond the native Locust ones: log_VUs and log_memory. Note that the loggers are started with the gevent.spawn() command, which causes each logger to run in a so-called greenlet. This allows the loggers to run in parallel with the running test without blocking the entire CPU process.

The greenlet architecture itself is interesting. Locust is built on its Python implementation in the form of the gevent library. And it is no secret that, thanks to it, Locust outclasses some traditional stress test tools, such as JMeter, in terms of performance. The key idea of greenlets is the assumption that larger tasks can be divided into smaller sub-tasks that can be performed interleaved (so-called context switching). The jobs can then be handled concurrently within one CPU process, in contrast to the pattern of parallel threads, where each task occupies exactly one thread. You can find out more on the homepage of the gevent project or, for example, in a nice tutorial, Gevent For the Working Python Developer.
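As a toy illustration of the idea (unrelated to the test framework itself): two functions run as greenlets within a single process, and each gevent.sleep() hands control over to the other one.

import gevent

def worker(name):
    for step in range(3):
        print(f"{name}: step {step}")
        gevent.sleep(0.1)   # yield: lets the other greenlet run inside the same process

# the output of A and B interleaves, although everything runs in one CPU thread
gevent.joinall([gevent.spawn(worker, "A"), gevent.spawn(worker, "B")])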

The log_VUs and log_memory loggers are instances of the VUs and Memory classes from the loggers.py module:
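Again, the original module is not reproduced; the following is a simplified sketch of what loggers.py might look like. The monitoring URL and the record fields are assumptions; the forwarder object is the same one as above.

## loggers.py - a simplified sketch; the monitoring URL and record fields are assumptions
import gevent
import requests

from forwarder import forwarder

MEMORY_MONITOR_URL = "https://<azure-box>/monitoring/memory"   # hypothetical endpoint

class VUs:
    """Logs the current number of virtual users once per second."""

    def log(self, environment):
        while True:
            forwarder.add({"type": "virtual_users",
                           "user_count": environment.runner.user_count})
            gevent.sleep(1)   # yield for 1 s so other greenlets (the test itself) can run

class Memory:
    """Polls the monitoring endpoint on the Azure box and logs the current memory state."""

    def log(self):
        while True:
            try:
                response = requests.get(MEMORY_MONITOR_URL, timeout=5)
                forwarder.add({"type": "memory", **response.json()})
            except requests.RequestException as exc:
                forwarder.add({"type": "memory_error", "error": str(exc)})
            gevent.sleep(1)

log_VUs = VUs()
log_memory = Memory()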

In the code, we again see the forwarder object, which provides routing of log records to the external database. From the perspective of the greenlet architecture, the key command is gevent.sleep(1), which says: put the current greenlet to sleep for 1 s and release resources (so-called yielding) so that another greenlet can be processed (see the gevent library links above for more details). This implies that the logging data are sent to the database every second.

In the Memory class, note the service call that returns the current state of memory on a given Azure box (server). A monitoring endpoint exposed in this way allows the logging to be orchestrated directly by Locust. In the absence of such an endpoint, all we can do is arrange for this data to be sent directly from the internal infrastructure monitoring to the database where we store the test run data. Of course, this has the disadvantage that we often have to cooperate with the infrastructure management staff when executing the test.

You may be wondering how specifically the data sent to Elastic is visualized. However, working with Kibana on properly structured source data from running a stress test deserves a separate article that will follow soon.

External Modules

Although Python is capable of almost anything, we will certainly find a number of cases where it is optimal to use other means. In our case, it was PowerShell jobs that simulated page content changes in Azure. The integration with the external module was trivial: it was enough to run the selected PS job and parse the output (check that no error occurred and possibly extract the required data):
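The original helper is not shown; here is a sketch of how such an integration can be done with the standard subprocess module. The script name and its parameters are assumptions; only the process_file() signature and the "ERR" return value follow from the rest of the article.

## utils.py - a sketch of the PowerShell integration; the script name and parameters are assumptions
import subprocess

def process_file(site_name, preview_name, type="Preview"):
    """Run the PowerShell job that triggers content processing in Azure and parse its output."""
    cmd = ["pwsh", "-File", "jobs/process_file.ps1",
           "-Site", site_name, "-Preview", preview_name, "-Type", type]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=600)

    # check that no error occurred, either in the exit code or in the script output
    if result.returncode != 0 or "ERROR" in result.stdout:
        return "ERR"

    # otherwise return whatever the script printed (e.g. an identifier of the started job)
    return result.stdout.strip()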

Implementation of Test Scenarios

Baseline Scenario

With a little programming skill, Locust is able to handle virtually any communication protocol. However, for classic communication via HTTP, it can be used, so to speak, out of the box. The test case, which calls the selected URL and performs a basic response check (return code 200), looks simple:
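A sketch of such a test case follows; the data file and the column name URL are assumptions, and the endpoints object is an instance of the data class sketched earlier.

## a sketch of the baseline test case; the data file and its columns are assumptions
from locust import TaskSet, task
from dataReader import Endpoints

endpoints = Endpoints("data/site_urls.csv")

class WebTest(TaskSet):

    @task
    def visit_page(self):
        # pick a random URL; repeated use across parallel virtual users does not matter here
        record = endpoints.GetRecord()
        with self.client.get(record["URL"], name="WebTest: page visit",
                             catch_response=True) as response:
            # basic check: anything other than HTTP 200 is reported as a failure
            if response.status_code != 200:
                response.failure(f"Unexpected status code {response.status_code}")
            else:
                response.success()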

The endpoints object holds a data set loaded into memory, specifically a set of URLs from a csv file. The GetRecord() method returns a randomly selected URL from the data set. It is also worth noting that each call can be named: calls with the same name appear in the statistics as instances of a single call, even if each of them actually targets a different URL. This already aggregates the data while the test is running.

Extended Baseline Scenario and Editing Scenario

The simple baseline scenario had to be combined with running processes in the CMS system, simulated in the Azure application layer using PowerShell scripts. Execution of the script simulated an event activating a process (job) whose result was reflected on the presentation layer. The duration of the process was one of the monitored variables. To this end, a test case was implemented which runs an Azure job, captures the desired change at the given URL and measures the elapsed time. The code for this test case is very similar to the one from the editing scenario, which we will show and describe:
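The original listing is replaced here by a reconstructed sketch based on the description below; it is not the project's actual code. The previews, utils and forwarder objects come from the modules sketched earlier; the column names, the polling interval, the timeout and the preview_is_rendered() helper are assumptions.

## a reconstructed sketch of the editing-scenario test case
import time
import gevent
from locust import TaskSet, task

import utils
from forwarder import forwarder
from dataReader import Previews

previews = Previews("data/previews.csv")

def preview_is_rendered(response, preview):
    # hypothetical check: the page already contains the new content produced by the job
    return preview["ExpectedText"] in response.text

class PreviewTest(TaskSet):

    @task
    def generate_preview(self):
        # take a free data record and lock it for this virtual user
        preview = previews.getRecord()
        if preview is None:
            self.interrupt()                        # no free record, return the user to the pool

        site_name, preview_name = preview["Site"], preview["PreviewName"]

        # run the PowerShell job that triggers the preview generation in the CMS
        start = time.time()
        retVal = utils.process_file(site_name, preview_name, type="Preview")
        forwarder.add({"type": "batch", "name": preview_name,
                       "script_time": time.time() - start})

        if retVal == "ERR":
            previews.setReady(preview["RowID"])     # release the record before bailing out
            self.interrupt()

        # poll the presentation layer until the change produced by the job appears
        while True:
            response = self.client.get(preview["URL"], name="PreviewTest: preview page")
            if preview_is_rendered(response, preview):
                forwarder.add({"type": "job", "name": preview_name,
                               "job_time": time.time() - start})
                break
            if time.time() - start > 600:           # assumed upper bound for the job duration
                forwarder.add({"type": "job_timeout", "name": preview_name})
                break
            gevent.sleep(2)

        # release the data record for further use
        previews.setReady(preview["RowID"])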

The test case uses the data stored in the previews object. Data records can be used repeatedly, but one data record must not be processed by more than one job at a time. For this, the following methods are used:

previews.getRecord()
previews.setReady(preview['RowID'])

The first method returns a free data record and locks it against use elsewhere, the second one releases the record for further utilization (for implementation, see the dataReader.py module above).

In the middle part of the test case (lines 13–35 of the original listing), we execute the script running the job (job = utils.process_file(site_name, preview_name, type="Preview")) and log the result together with the script runtime:

forwarder.add({
    "type": "batch",
    ...

and after checking that the script execution did not end with an error (if retVal == "ERR": …), we start the polling while loop, which checks whether the content on the page has changed. When the change is detected, we log again, this time including the job run time.

The code repeatedly uses the self.interrupt() command, which, in the event of errors incompatible with continuing the test case, interrupts the run, and the virtual user under whom the test case instance was running is returned to the pool.

Controlling the Mix of Test Cases within a Scenario

In the introduction to the article, we showed a configuration file that can be used, among other things, to select a scenario for a given stress test run:

LOAD_SCENARIO = 2
## 1 .. web users visiting sites (test case: WebTest)
## 2 .. web users visiting sites + content server processing workers output (test cases: WebTest + ContentServerTest)
## 3 .. editors requesting page previews (test case: PreviewTest). Note: the number of virtual users should match the number of data records.

When scenario 2 is selected, two test cases are run within the test, which differ fundamentally in the way data is handled:

  1. WebTest randomly selects one record (URL) from the data set, and it does not matter if the same URL is selected for several simultaneously running virtual users,
  2. ContentServerTest selects a record that is not currently being processed by another virtual user.
Therefore, before involving another virtual user in the test run, logic is needed that determines, based on the state of the data set for ContentServerTest, whether the given virtual user can choose from both test cases or whether only the first one remains available. Let us look at the following code:
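A sketch of such logic in the user class follows; the exact place where self.tasks is filled in depends on the Locust version, and the host and wait_time values are assumptions. WebTest, ContentServerTest and PreviewTest are the test case classes described in this chapter, and previews is the shared data object used by them.

## a sketch of the logic that selects the test case mix for each new virtual user
from locust import HttpUser, between

import config

class WebsiteUser(HttpUser):
    host = "https://example.com"        # assumed; the real host comes from the configuration
    wait_time = between(1, 5)

    def on_start(self):
        if config.LOAD_SCENARIO == 1:
            self.tasks = [WebTest]
        elif config.LOAD_SCENARIO == 2:
            # the virtual user gets both test cases only if a free data record exists,
            # otherwise it will only generate the regular web traffic
            if previews.testRecord():
                self.tasks = [WebTest, ContentServerTest]   # no weights: Locust picks at random
            else:
                self.tasks = [WebTest]
        else:
            self.tasks = [PreviewTest]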

The purpose of the testRecord() method is to determine whether the given data object contains a free record for processing (it will be locked only when the virtual user actually activates it using getRecord(), see above).

The self.tasks variable is then filled with a list of test cases that can be considered for the running virtual user. Since we do not list weights in the list of test cases (see the tasks attribute), Locust selects from [WebTest, ContentServerTest] completely at random.

In Conclusion

You can check how the project lives in the Locust code repository - a number of key features have been added in the last year alone. This shows that the platform is increasingly being used for serious stress test projects.

Locust has already worked for us many times, and in this case, too, it proved its qualities. Let us highlight a few points that helped our customer:

  • Very fast (in terms of days) implementation of the first version of the stress test with the most critical scenarios
  • Painless adoption of the stress framework by the customer's development team
  • Easy subsequent maintenance of existing stress scenarios and implementation of new ones

We believe that the next Locust project will once again help create the most effective solutions for our clients.

Author: Viktor Terinek

Viktor has been collecting experience in the field of software testing for more than 15 years. In his career, he has experienced virtually all executive test roles associated with the analysis, design and implementation of tests, including various dimensions of automation. He has gone through several managerial positions and is currently fully committed to streamlining test processes and designing new solutions.