Scalable Test Automation Using the Robot Framework
In 2017, a client (a multinational automotive company) created a program managing several projects aimed at ensuring end-customer satisfaction.
Managing customer satisfaction is made possible by integrating the relevant systems into an international CRM, as well as by extending the CRM and other systems. Thanks to this centralization of information, it is possible to provide customers with accurate feedback and other useful information.
What This Is About
As the CRM grew, so did the demands on regression testing, which naturally led to efforts to automate the regression tests. After a successful proof of concept, Robot Framework was chosen as the tool. At that time, the CRM served only the Czech Republic with several differently configured user roles, and an expansion to three more countries was being prepared.
I only jumped on the bandwagon at the beginning of 2019, when the development for the first new markets was complete and the number of our tests exceeded 200. When I took over the test automation agenda, the CRM was planned to roll out to another six markets, each with three or more user roles. The code did not yet anticipate such scale, so it was, even more than before, necessary to parameterize it meaningfully, prepare it for further scalability, and thus minimize future effort spent on its development and maintenance.
Obstacles and Success
During the optimization of the test set, however, a database migration to SAP S/4HANA and a CRM "facelift" came into play, which led to the CRM's temporary shutdown. At that point, we were running almost 600 automated frontend SIT tests, whose execution could take from the beginning to the end of a working day.
After the shutdown, most of the tests had to be completely rewritten, because the structure of the frontend had fundamentally changed. This was probably the most interesting challenge. There was a lot of pressure to deliver information about the status of the system as quickly as possible, and thanks to good teamwork and maximum commitment, we succeeded.
In the following weeks, we managed to resolve the most acute issues and unify the keywords across all tests, so that changing a few parameters was enough to produce a working test for another country or user role. Thanks to shared keywords, extending the test set to new markets took a fraction of the time compared to the past. Even when developing tests for new functionality, it is usually possible to reuse much of the existing code.
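To illustrate the idea, a shared keyword parameterized by country and user role might look like the following sketch. All names here (`Login As Role`, `Get Credentials For`, `${BASE_URL}`, the locators) are illustrative assumptions, not the project's actual keywords:

```robotframework
*** Settings ***
Library    SeleniumLibrary

*** Variables ***
# Defaults; overridden per run, e.g. --variable COUNTRY:DE --variable ROLE:dealer
${COUNTRY}    CZ
${ROLE}       agent

*** Keywords ***
Login As Role
    [Arguments]    ${country}=${COUNTRY}    ${role}=${ROLE}
    # Get Credentials For is a hypothetical project-specific keyword
    ${credentials}=    Get Credentials For    ${country}    ${role}
    Open Browser    ${BASE_URL}/${country}    chrome
    Input Text      id=username    ${credentials.user}
    Input Password  id=password    ${credentials.password}
    Click Button    id=login
```

With a keyword like this, the same test file can cover a new market or role simply by overriding the variables on the command line, instead of duplicating the test logic.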
Today, we are approaching 1,000 tests across 12 countries, each with three or more user roles. Yet maintaining this set is much easier than it was when only the first of the new countries was running the tests. Thanks to more efficient parallelization of the test run, we also get the results within a few (3-4) hours. So we have reached a state where we are not slaves to automation and never-ending maintenance of the test set; on the contrary, automation now serves us well. To give you an idea, executing the same regression tests manually would take a matter of weeks.
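One common way to parallelize a Robot Framework run is the Pabot tool, which distributes test suites across worker processes. The invocation below is a sketch under that assumption; the directory name, process count, and variable values are illustrative:

```shell
# Run the suites in 8 parallel processes for one country/role combination
pabot --processes 8 \
      --variable COUNTRY:DE \
      --variable ROLE:dealer \
      --outputdir results \
      tests/
```

Pabot merges the partial results into a single report, so the output looks the same as a sequential `robot` run, only finished in a fraction of the wall-clock time.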
Given the time saved, I can assist the rest of the team with release testing of new requirements, and thus, under all circumstances, help deliver quality outputs indicating the state of the system as soon as possible.