Robot Framework vs Pytest
I am an active supporter of the Robot Framework. I think it can solve almost any test automation task, especially when development is done in Python. On the other hand, there is Pytest, known to every Python developer. I'm pretty familiar with both of these tools, so now I want to make a full-fledged comparison.
Pytest is a Python testing framework that originated in the PyPy project. It can be used to write various types of software tests, including unit tests, integration tests, end-to-end tests, and functional tests. Its features include parametrized testing, fixtures, and assert rewriting.
Pytest is an implementation of the xUnit framework family for Python. If you have ever worked with JUnit or NUnit (the two most common members of this family, for Java and .NET respectively), you will find the same conventions in Pytest: you will quickly understand what is going on and follow the familiar approaches to writing unit tests. xUnit frameworks are libraries or extensions to a programming language, designed to be convenient for developers themselves. Because of this, they are quite widespread and popular.
As with many xUnit frameworks, to get readable results out of Pytest you need a report generation tool. Traditionally, Allure Framework plays this role; today it is probably the de facto standard for the autotest reports that businesses want to see. But keep in mind that Pytest is developed by one community and Allure by another, and the vectors of these teams may, at some point, turn in different directions.
Unlike Pytest, Robot Framework is a domain-specific language (DSL), a language tailored to its subject area. It is built on Python, so you can write anything for Robot Framework in Python. And all the features I'm talking about here (integrations, customizations, etc.) are available out of the box.
Robot Framework is less developer-friendly because it requires a somewhat deeper dive. It is essentially a different approach to writing autotests, similar to Cucumber for Java (Gherkin syntax). Moreover, Robot Framework generates extensive reports itself (we will talk about this later): the report generation layer is part of the product and is developed by the same community, as a single, coherent whole.
By the way, the community is open and quite responsive. It has an open Slack channel where anyone can ask a question about Robot Framework and get an answer.
Arguments in favor of the Robot Framework
It often happens that a set of test cases (test suite) needs the same kind of setup, such as execution method, preparing some data, creating entities, and storing their identifiers for use in tests. Robot Framework provides separate suite setup settings that apply to all tests in the suite. There is a separate suite teardown, executed after all tests in the suite, and similar settings for each test case (test setup and test teardown). This is convenient, logical, and doesn't require reinventing the wheel.
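As a sketch, these settings live in the Settings section of a suite file. The keyword names here (Prepare Test Data, Open Session, and so on) are hypothetical placeholders for user keywords you would define elsewhere:

```robotframework
*** Settings ***
Suite Setup       Prepare Test Data    # runs once, before the first test in the suite
Suite Teardown    Clean Up Test Data   # runs once, after the last test
Test Setup        Open Session         # runs before every individual test
Test Teardown     Close Session        # runs after every individual test

*** Test Cases ***
First Test
    Log    Uses the data prepared once in Suite Setup

Second Test
    Log    Sees the same suite-level data
```

Everything prepared in Suite Setup is naturally visible to every test in the suite, with no extra machinery.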
In Pytest, test cases can simply be functions, and then they have no suite setup, i.e., no shared preparation. We can group them into a class if we need shared preparation. But if we write a fixture with scope="class" for that class (i.e., try to implement suite setup), Pytest still creates a separate class instance for each test, so instance attributes set during suite setup never reach the test cases. The separate instances are presumably created under the assumption that data from different test cases should not affect each other. But because of this, preparing an environment to run tests is much more complicated than in Robot Framework, where suite setup is provided by default.
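This instance-per-test behavior comes from the xUnit tradition, and the standard library's unittest exhibits exactly the same effect, which makes for a self-contained sketch (the class and attribute names here are purely illustrative):

```python
import unittest


class DemoSuite(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once per class; class-level state is the only thing all tests share.
        cls.shared = "prepared once"

    def test_writes_instance_attr(self):
        # Set a value on *this* instance.
        self.leaked = "visible only in this test"
        self.assertEqual(self.shared, "prepared once")

    def test_reads_instance_attr(self):
        # A fresh instance is created for every test, so the attribute
        # written by the other test does not exist here.
        self.assertFalse(hasattr(self, "leaked"))
        self.assertEqual(self.shared, "prepared once")
```

Run it with `python -m unittest`; both tests pass in either order precisely because each test method gets its own fresh instance.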
As I noted above, Allure Framework is the tool most often used with Pytest for report generation. It's nice and trendy. But because of the instance behavior described above, Allure doesn't understand what is related to the test and how. The actions from the suite setup do not get into the report. You have to work around it by writing handlers. And from my point of view, it's not just a crutch, but a big one.
By the way, other xUnit frameworks also have this problem.
Compared to Pytest+Allure, Robot Framework's logs are as detailed as possible, sometimes even redundant. They include things you would never think of: Robot writes down everything you do. With such a log, it's much easier to catch floating errors that you can't easily reproduce. You know exactly where and what value a variable had, which API was called, and so on. Often you don't even need to restart the test to figure out what's going on. For Pytest, in such complex cases, we have to invent tools to generate a log that Robot Framework already has.
With Robot Framework, you don't need to come up with complicated constructs. There are expressions that save you from unnecessary code.
I like to write keyword wrappers that wrap other keywords. For example, you can embed a keyword that extracts the ID from an API response inside keywords that call different APIs (if a company has coding standards, the API responses will be similar, so the ID field will probably appear in all of them). Code written this way can be cleaner and more straightforward.
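A sketch of that wrapper idea in Robot Framework syntax. Call Create User API and Call Create Order API are hypothetical keywords standing in for real API calls, and the RETURN statement requires Robot Framework 5.0 or newer:

```robotframework
*** Keywords ***
Get Id From Response
    [Arguments]    ${response}
    # Relies on the team convention that every API response contains an "id" field.
    RETURN    ${response}[id]

Create User And Get Id
    ${response}=    Call Create User API    # hypothetical keyword
    ${id}=    Get Id From Response    ${response}
    RETURN    ${id}

Create Order And Get Id
    ${response}=    Call Create Order API    # hypothetical keyword
    ${id}=    Get Id From Response    ${response}
    RETURN    ${id}
```

The ID extraction logic lives in one place, and every API-specific keyword just wraps it.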
In the log header, Robot Framework shows statistics for each tag. If you set up your tags correctly from the start, a glance at these statistics is enough to see where the problem is, without diving deeper. You can also link tags to bug IDs in Jira or whatever tracker you use, which gives you a simple form of bug-tracking integration.
Pytest has tagging too (markers), but it doesn't show up in the log. Again, you need some crutches to implement this with your own hands. That's why, as far as I know, almost no one does it.
By the way, Allure doesn't display tagging statistics either. If it had such a feature, it would bring the Pytest+Allure bundle closer to Robot Framework in my eyes in terms of functionality. The upside would be that Pytest+Allure would not require filling conservative developers' heads with a new DSL. Unfortunately, such functionality is unlikely to appear, because Pytest and Allure are developed by different communities.
Arguments in favor of Pytest
There are only two arguments on my list in favor of Pytest. But coming from a supporter of the rival tool, they should carry extra weight.
Unfortunately, you can't pause execution in Robot Framework to inspect the values of variables. Chalk this one up for Pytest: being plain Python, it works with the standard debugging tools, such as Python's breakpoint() or running pytest with the --pdb flag.
But in my opinion, you rarely need this. Anything you could investigate that way is already available in the logs. Working through the logs like that requires a slight shift in mindset for a developer used to a particular debugging flow.
We can find something Robot Framework doesn't have - thanks to parametrization, Pytest can create test cases on the fly. In Robot Framework, ten tests will always be ten tests, no more, no less. In Pytest, in some cases, you can cover a huge number of cases with a single parametric test.
Suppose we have an API with two parameters, each of which can take several values (say, one takes 7 values and the other 10), and the values are not mutually exclusive. Classic testing theory would have us pick a handful of cases that more or less evenly cover the grid of 70 combinations (the pairwise method). But instead, we can use the product function from the itertools module (which computes the Cartesian product of lists) to write a test setup that prepares all 70 combinations of data and then runs exhaustive testing against the API. When a new value appears in the input data, we just add one item to one of the original lists.
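A minimal sketch of that approach; the parameter names and values below are invented for illustration:

```python
from itertools import product

# Hypothetical parameter values: 7 payment methods x 10 currencies = 70 pairs.
payment_methods = ["card", "cash", "wire", "wallet", "crypto", "voucher", "points"]
currencies = ["USD", "EUR", "GBP", "JPY", "CHF", "CAD", "AUD", "SEK", "NOK", "PLN"]

# product() yields every combination of the two lists.
cases = list(product(payment_methods, currencies))
print(len(cases))  # 70 combinations built from two short lists

# In a real suite, these cases would drive a single parametric test:
#
#     @pytest.mark.parametrize("method,currency", cases)
#     def test_payment_api(method, currency):
#         ...
#
# Adding an eighth payment method grows the grid to 80 tests automatically.
```

One test function, one line per new value: the number of generated test cases keeps up with the data on its own.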
You can't do that in Robot Framework. There, you can write a test template that takes two values and make 70 calls. As new values appear, the number of calls has to be increased by hand.
All in all, the pros of Robot Framework are more visible to me, and I remain a passionate supporter. But, of course, someone may find the pros of Pytest weightier for their needs. This article wasn't about picking a final winner; it was about giving you the information to choose.