From Aches to Agility: Revolutionizing Software Testing in the Agile Era

Are you getting those software testing pains again? 

Testing pains are nothing new. Until a few years ago, when a client asked us for help, we generally found it was enough to “just” help them tune their existing testing process and competencies to give them an acceptable level of pain relief. Recently, however, software testing pains seem to be becoming both more common and more intense. This appears to be closely linked to the increasingly widespread, mature adoption of Agile and DevOps, driven by the fact that most organizations now understand how crucial high-performance software delivery at speed is to business competitiveness and success.

Many of you might already be some way into an IT transformation journey with the goal of delivering quality software at speed. Like many others, you might now be experiencing some sharp pains in your software testing and wondering how to treat them. When we work with our clients in these situations, we often need to prescribe fundamental changes to the software testing approach to alleviate their suffering. In practice, this requires a big shift away from the stereotypical testing mindset to a quality engineering one.

Like any good doctor, we strongly advocate that prevention is better than cure. When starting your transformation journey, we recommend also looking closely at how your approach to software quality assurance and testing needs to adapt. This will help you avoid pains that could cause problems for the overall transition.

Testing, a dull, unpleasant ache we have somehow learned to live with

For a long time, IT managers have viewed software testing as a nuisance. Broadly, there have been three typical responses:

Attempt to alleviate (or eliminate) the problem with automation tools. Testing as we know it today probably started with the Gemini space missions, and ever since then we have been hearing rumours about the imminent demise of the “manual tester” due to the arrival of some (expensive) test automation tool, most recently anything with “AI” written on the box. Test automation is invaluable in high-speed software delivery. However, although it can speed up some parts of the testing process, it does not reduce the amount of testing being done, and it can unfortunately generate additional problems to deal with.

Attempt to use more, but cheaper, resources, for example offshore resources or “students”. Imagine you are struggling to get a fire to light with damp wood. This approach is the equivalent of throwing more damp wood on the fire and expecting a good, warm blaze. The actual result will be a dark, cold night in the forest.

Improve testing organization, processes and competencies. The testing profession has worked continuously and assiduously in the decades since those Gemini space missions to improve the effectiveness and efficiency of software testing.  

Most organizations have applied a combination of the above responses over the years. Yet, the pain never completely subsided, and in recent years it seems to be intensifying again. Maybe it is time for a testing revolution? 

A testing revolution, but not the one you think 

No, it’s not AI, at least not yet. 

The obvious revolution in software testing in the past decade or so has been the adoption of test automation and the rise of software testers armed with technical skills. This revolution has been big and noisy and has taken centre stage for much of the testing community and IT management alike. But, at the same time there has been another, and until recently, much quieter revolution going on. In my opinion this is a much more important revolution because it gives real power to the testers and has the potential to totally transform software delivery and to deliver quality at speed. Just like test automation, the idea has been around for a very long time. 

The legacy, stereotypical approach to software testing essentially involves software testers executing their test cases against the “finished” software product.

This approach is like a police detective examining a crime scene. It is important and necessary work. But metaphorically speaking, the crime has already occurred, and the victim is lying dead on the floor. The ‘crime’ in this analogy is having designed and built software with inadequate quality – software that fails to deliver business value and is not ‘fit for purpose’. Doing all or most of the quality control at the end and trying to ‘fix quality into the product’ requires a huge amount of rework, which is hugely inefficient. Imagine a car factory where all the quality control was done at the end of the production line. If the finished car doesn’t start, a bunch of people will need to spend quite a bit of time finding and fixing the problem. This is why car factories employ continuous quality control, fast feedback, and ‘stop the line’ tactics to detect defects immediately and prevent them from travelling further down the line, where they become costly to repair. What’s more, they apply continuous improvement to the production process to prevent defects from recurring.

Unfortunately, the legacy approach is still alive and kicking, even inside organizations that have adopted ‘agile’ development practices. We see plenty of organizations attempting to treat their “testing pains” by having testers use tools to automate their traditional end-to-end (E2E) regression tests on the UI, hoping to squeeze all the testing into a single two-week sprint.

The ideas of prevention and early detection of defects in software delivery have been around for longer than I have been in software testing (a long time). Despite this knowledge, many teams remain stuck with the legacy testing and fixing phase at the end of the iteration or release cycle. But the quiet revolution has been slowly growing, driven by the challenges of modern software delivery. 

The challenge of high-performance software delivery

In high-performance software delivery, the primary goal is to provide value to the business by delivering quality software at speed. This was once the exclusive domain of a few elite, pioneering technology organizations able to roll out software changes in minutes rather than months. Even though the majority of organizations have long since jumped on the Agile or DevOps train, mature adoption of these approaches doesn’t happen overnight, despite the hopes of senior IT management.

According to Tuckman’s Stages of Group Development, organizations must progress beyond the Forming (focus on facts and adopting methodology) and Storming (characterised by emotions, doubts and frustrations) phases, and move into the Norming and Performing phases. It is in these latter stages that the fundamental change in mindset (values and actions) occurs in both teams and management. Merely changing organization charts and adopting Agile rituals is not enough for Agile or DevOps to really work.

As organizations move through these stages, software testing pains start intensifying. This is because traditional end-of-the-production-line, detective-style software testing breaks down in high-performance software delivery.

Test automation is not the silver bullet 

In a 2019 GitLab DevOps survey, over 50% of respondents said that software testing caused the most delays in their software delivery. The answer? Speed up testing with automation. Test automation was touted as the silver bullet for speeding up software delivery. Yes, delivering software changes into production within minutes cannot be achieved without test automation. However, we need to stop and consider whether the primary cause of ‘slow testing’ is not the speed of test execution but rather the poor quality of the software being tested: software that needs a lot of time and effort to correct by reworking design and code. As Jerry Weinberg described way back in 2008, what we see is not a long and slow testing phase, but a long and slow fixing phase.

What’s more, not only is test automation not a silver bullet, but its adoption has varying levels of success. In some teams it may actually be exacerbating the testing pain, reducing productivity and increasing quality risks.

A better way forward 

In the software testing world, we’ve known for years that the root cause of software failing to be fit for purpose is poor-quality code, which in turn is caused, to a greater or lesser degree, by poor-quality requirements and software design (the garbage in, garbage out principle). We should not omit poor testing from this list. Software delivery is people work. People make mistakes and often cooperate poorly, which leads to misunderstandings and miscommunication.

Those elite, pioneering technology organizations not only realized this, they actually acted on it. Shift-left and shift-right became the name of the game, with teams feeling collectively responsible for quality and prioritizing continuous improvement of the process to build quality into the product. Like Agile, DevOps and test automation, these changes are now filtering down the line. This quiet revolution is finally going mainstream and starting to make some noise. Many are referring to it as Quality Engineering.

Quality Engineering has an early-detection and preventative mindset, with a focus on built-in quality. It is solidly based on Agile, DevOps and Lean principles:

  • Shift-Left (or more accurately “Shift-Everywhere”)  
  • Proactive quality assurance 
  • Close collaboration and shared responsibility within cross-functional teams 
  • Continuous testing and quality assurance closely integrated into the DevOps ‘infinity loop’ and continuous integration and delivery (CI/CD) pipelines 
  • Automating everything that makes sense to automate 
  • Early and continuous feedback 
  • Continuous learning and improvement 

Implementing these principles requires good organization, effective processes and guidelines, and, not least, skilled people with the right mindset. Adopting them can enable organizations to deliver valuable software to the business at speed.
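To make the shift-left and fast-feedback principles a little more concrete, here is a minimal, purely illustrative sketch in Python. The business rule (`apply_discount`) and its cap are invented for the example; the point is that a small automated check written alongside the code, and run in the CI pipeline on every commit, catches a defect in seconds, rather than weeks later in an end-of-cycle E2E test run.

```python
# Hypothetical example: a small business rule and the unit tests that
# guard it. In a Quality Engineering setup, checks like these run in the
# CI/CD pipeline on every commit, giving feedback within seconds.

def apply_discount(price: float, loyalty_years: int) -> float:
    """Apply a loyalty discount: 5% per full year, capped at 25%."""
    if price < 0 or loyalty_years < 0:
        raise ValueError("price and loyalty_years must be non-negative")
    discount = min(0.05 * loyalty_years, 0.25)
    return round(price * (1 - discount), 2)


def test_discount_is_capped():
    # 10 loyalty years would nominally be 50%, but the cap keeps it at 25%.
    assert apply_discount(100.0, 10) == 75.0

def test_no_discount_for_new_customer():
    assert apply_discount(100.0, 0) == 100.0
```

Nothing about the rule itself matters here; what matters is that quality control has moved next to the code and into the pipeline, instead of waiting at the end of the production line.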

Far from being made redundant by this latest revolution, those pesky software testers now have a key role to play in high-performance software delivery, as their skills and knowledge can bring great value to the team throughout the delivery process. To deliver this value, software testers need to let go of their monopoly on testing and embrace a new cross-functional role.

Book your free Quality Engineering consultation!

Discover how to streamline your software testing with a no-pressure, insightful session on Quality Engineering. Gain valuable strategies tailored to your needs, without any sales push.

Author: Phil Royston

After a bit of a journeyman IT career starting in the late 1980s, he practically fell into the software testing world in 2002. He hasn't looked back since and still loves what he does. In 2013, he co-founded Tesena | Smart Testing with the slightly ambitious, but very seriously intended, goal of changing the software testing world. He enjoys working with our clients to help solve their testing and software quality challenges at individual, team, group, and organizational levels. Beyond his commitment to advancing software quality, he is also dedicated to education and professional development. As a course lecturer, he shares his extensive knowledge and experience, shaping the next generation of IT professionals. Phil regularly shares what he has learned as a speaker at conferences, further contributing to the industry's growth and evolution.