How we improved testing and accelerated product launches

When working on a project, have you ever asked yourself: how can I be sure the product is covered by tests? How do I organize my work and task processing as efficiently as possible? How do I reconcile manual checks with automation? If the answer is yes, then hello and welcome under the cut!

My name is Katya Sergeeva, and I am a senior test engineer at Cloud.ru. In this article, I will tell you how small changes to the way we combine manual checks and automation helped increase the efficiency of testing in our team, ensure high product quality, and reduce time to release.

Why we decided to change the process

When checking tasks, filling the test management system with test cases and prioritizing the testing work, questions inevitably arise for any QA lead on a project:

  • how do you build a process for calculating test coverage, given that a certain number of test cases have already been described by the time the work starts?

  • what do you take as 100% when collecting statistics? how do you find the gaps?

  • what should be standardized, and how? which patterns should be used?

  • how do you maintain documentation and onboard new employees?

After all, each project has its own characteristics and requirements: on some projects testing is done entirely by manual testers, on others there are automation teams, mixed teams, or full-stack engineers. And when manual testers and automation engineers work together, the testing strategy often becomes fragmented and unclear.

Our company has quite a few testing teams: each works on its own project but sometimes joins adjacent ones. Since each project has its own specifics, the composition and processes of the testing teams also differ. At the same time, we adhere to uniform testing tools, methodology and automation rules.

On one of the projects we introduced automation, though not at the initial stage. There was already a set of features and manual test cases, but no confidence that the product was fully covered. We created autotests while new manual scenarios were being written in parallel. At some point, automation that had already been written had to be reworked to account for product changes. The number of questions began to grow exponentially:

  • how do you keep everything under control without spending 100% of your time on it? and do we even have that time and those resources?

  • is good communication between teams, or within a team, enough?

  • how do we make our synchronization as smooth and efficient as possible when different formats, different testing tools, and possibly different approaches are in use?

It became obvious that it was time to build a bridge between manual testing and automation. We used the tools the team was already familiar with: our internal project management system, the Allure TestOps test management system, and an automation framework. In essence, we optimized each stage within the already established workflow. I will tell you how this affected not only our team, but other teams as well.

We analyze the task

Let’s consider a standard workflow: a task arrives in the project management system, we assign someone to it and run the testing, then we leave feedback on the task and move it on to the next status. And here questions arise again:

  • were test cases written for it right away or not? maybe the task went straight to automation?

  • and what if there was a whole batch of tasks in the sprint? did we manage to process them all at once?

  • and how do we know that a task was definitely processed and not lost? maybe we will need to come back to it, but there is a good chance of forgetting about it…

How do we take this process under control and account for all of it? The decision was lying right on the surface: use labels. We came up with three labels that help you quickly understand the status of a task:

  • tests_exist – the task has been processed by a tester and test cases have been created in the test management system;

  • tests_required – test cases need to be written for the task, or existing cases need to be updated;

  • tests_not_required – no test case is needed for this task.

It is convenient when each task has its own label. With the help of queries, you can track how complete the coverage is and quickly find unprocessed tasks:

An example of task classification by status

An example of a query for the tests_required label over a year:

project = "ProjectName" AND Sprint in closedSprints() AND resolved >= startOfYear() AND type in (Story, Task, Bug) AND labels = tests_required

To keep the process from stalling, a responsible person now checks the label statistics several times per sprint and reminds the team about any gaps. By the way, this practice can be built directly into the regular Scrum process.

We create test cases

What changes did we introduce at the stage of creating test cases? In addition to various test design practices, we decided to use the Behavior-Driven Development (BDD) writing style and the human-readable Gherkin language. It uses the keywords Given, When and Then to describe the behavior of a system or product.

When we started writing test scenarios in Gherkin, the entire product team, including business and development, began to speak the same language. This made the negotiation phase, where requirements and functionality are discussed, much easier. Here is a simple example of how it works:
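A minimal sketch of what such a scenario can look like (the feature and the specific steps below are made up for illustration and are not taken from our product):

Feature: Virtual machine creation

  Scenario: Creating a VM with valid parameters
    Given the user is authorized in the cloud console
    And the "Create VM" form is open
    When the user fills in the name, flavor and network fields with valid values
    And clicks the "Create" button
    Then a new VM appears in the instance list with the status "Running"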

In addition to other attributes, we try to attach to each test case a link to the source task in the project management system. Then, using the test management system’s API, you can build interesting selections for analysis. And we can now use this format regardless of whether the tests will eventually be automated or not.
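For illustration, here is a rough sketch of such an analysis in Python: it pulls test cases and reports the ones that have no link back to a task. The endpoint, response fields and token handling are assumptions made for the sketch, not the exact Allure TestOps API.

import os
import requests

# Assumption: the test management system exposes a REST API protected by a token.
# The endpoint and response fields below are illustrative, not exact.
BASE_URL = "https://testops.example.com/api/rs"
HEADERS = {"Authorization": f"Api-Token {os.environ['TESTOPS_TOKEN']}"}

def cases_without_task_link(project_id: int) -> list[dict]:
    """Return test cases that have no link to a source task in the tracker."""
    response = requests.get(
        f"{BASE_URL}/testcase",
        headers=HEADERS,
        params={"projectId": project_id, "size": 1000},
        timeout=30,
    )
    response.raise_for_status()
    cases = response.json().get("content", [])
    # A case is considered "unlinked" if it carries no links at all.
    return [case for case in cases if not case.get("links")]

if __name__ == "__main__":
    orphans = cases_without_task_link(project_id=42)
    print(f"Test cases without a source task: {len(orphans)}")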

We write autotests

Thanks to the changes in the previous steps, at the stage of writing autotests we began to rely on the already created BDD-style test cases and noticed how naturally they map onto the code logic, regardless of which automation approach we used: BDD with feature files or TDD.

BDD (Given-When-Then) is essentially the same AAA (Arrange-Act-Assert) pattern that is used when writing test code:

AAA (or BDD) pattern
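For example, a minimal autotest in this pattern might look like the following. Python and pytest are used here purely for illustration; the client class and its methods are hypothetical and stand in for whatever your automation framework provides.

import pytest

from cloud_client import CloudClient  # hypothetical API client, for illustration only


@pytest.fixture
def client() -> CloudClient:
    # Arrange / Given: the object and configuration the test needs
    return CloudClient(base_url="https://api.example.com", token="test-token")


def test_create_vm_with_valid_parameters(client: CloudClient):
    # Arrange / Given: a valid VM configuration
    config = {"name": "demo-vm", "flavor": "small", "network": "default"}

    # Act / When: perform the action under test
    vm = client.create_vm(config)

    # Assert / Then: check the expected result
    assert vm.status == "Running"
    assert vm.name == "demo-vm"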

BDD is a fairly simple approach: there are inputs (does the test need some object or configuration?), steps to perform (is the main functionality covered?), and an expected output (the final verdict: does the test pass or fail?). And despite its simplicity, this approach is very powerful. Here is what we got as a result of using it:

  • we shifted the focus to the independence and individual behavior of each test;

  • we separated preparatory and main actions, which gave us order and cleanliness in the code, logical completeness and the possibility of reusing steps;

  • our tests now actually exercise the functionality and find what does not work.

Of course, the wording of the steps can change: some test management systems automatically pull the steps in from the automation, and some do not. In our setup, automation overwrites the steps in the test management system. We pay particular attention to the test case ID and make sure that the automation script contains exactly this ID, so we always know for sure whether a case is automated or not. We also stick to a unified description of the steps and try to use the same wording everywhere.
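For example, with allure-pytest (assuming a Python-based framework; the decorator usage and the ID value below are given for illustration, and the exact integration depends on your TMS setup), the test case ID and the unified step wording can live right in the autotest:

import allure


@allure.id("1234")  # ID of the corresponding test case in the TMS (illustrative value)
@allure.title("Creating a VM with valid parameters")
def test_create_vm_with_valid_parameters(client):
    with allure.step("Given: a valid VM configuration"):
        config = {"name": "demo-vm", "flavor": "small", "network": "default"}

    with allure.step("When: the VM is created"):
        vm = client.create_vm(config)

    with allure.step("Then: the VM is running and has the expected name"):
        assert vm.status == "Running"
        assert vm.name == "demo-vm"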

Summary and conclusions

So, here is what we and other teams got from the new process:

  • standardization of test cases and the possibility of their reuse;

  • help in eliminating duplication of steps;

  • easier support of test documentation;

  • team spirit – members of product teams are on the same wavelength and understand each other faster;

  • quick search for gaps and weak points in the testing process;

  • saving time – the faster and more effective the checks, the faster the product is released and the faster a new project gets into work;

  • efficient onboarding of newcomers – we no longer need separate documentation; we train people directly on test plans with test cases and quickly immerse them in regression testing.

We also got good control over the test surface. I think everyone understands that even the updated process will not give 100% test coverage, since that percentage only covers the ideas we came up with while writing cases for tasks (focusing on the acceptance criteria). There is always something we have not thought of yet. But when there is structure and there are test cases (or at least checklists), it is always an invitation to explore the untouched, untested areas.

In conclusion, I want to remind you that testing is not just comparing the result with the expected behavior of the product, when we know, or only think we know, how everything should work. Testing is a continuous process of research, risk analysis, critical thinking and quality reporting.

I hope the information was useful and thought-provoking. I will be glad if you share your own stories and experiences of combining manual testing and autotests in the comments.

And you can hear even more about our processes and the internals of our cloud solutions on October 24, online and offline in Moscow, at the GoCloud Tech conference. Join us 🙂

