Assessment System for Evaluating Test Automation Solutions

What is the primary task of many software engineers in test nowadays? It is to develop or find the right test automation solution, one that produces fast, reliable, easy-to-understand, and maintainable tests that can be integrated into CI/CD pipelines. I will share some approaches that I regularly use with clients during consulting to achieve these goals and discuss many common mistakes. One of the most frequent errors is that engineers do not do proper research and do not set the right requirements upfront, which leads to losing time developing their own solutions and maintaining lots of problematic tests later. You will learn some fundamental assessment criteria for test automation designs, why they are essential, and how to apply them in practice. We will discuss how to gather the proper requirements for the solution you are looking for and how to conduct the research the right way. Afterward, the presented assessment framework can help you find the right test automation solution.

What Is the Problem?

Many companies, at some point, decide to start automating their existing QA activities, or, for new projects, want to establish test automation from the start. Either way, they have to pick the right tool or solution for doing that. This article is about how to approach that problem: selecting the tools to evaluate, comparing them, and deciding which one is best.

What Is the Usual Approach?

From my professional experience as a consultant, many companies make poor decisions at this particular step, which is one of the most important ones. Many of them have only manual QAs. The usual approach is to assign this process to the most senior of them, who may have only basic programming experience or almost zero experience with test automation. Another common practice is to transfer one junior or mid-level developer to help with the effort.

Assessment Framework

Phase 1: Gather Requirements

Phase 2: Research Existing Solutions

Pick the so-called HARD requirements and ignore the nice-to-have ones for now. Based on your HARD requirements, the things the solution absolutely must provide, filter the frameworks/tools and pick a shortlist of the 3 to 5 most serious candidates. These are the solutions we will use for creating a PoC (proof of concept) and assessing; a minimal filtering sketch follows the example requirements below.

“HARD” Requirements Examples

  • Automate Web
  • Open-source
  • No License Costs
  • Community Support
  • Basic Documentation/Tutorials
  • Frequent Updates
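
To make the filtering step concrete, here is a minimal sketch in Java. The framework names, capability tags, and the hard requirements themselves are made up for illustration; plug in whatever your own research produces.

```java
import java.util.List;
import java.util.Set;

// Minimal sketch of the Phase 2 filtering step: keep only the candidates that
// satisfy every HARD requirement, then shortlist a few of them for the PoC.
// The tool names and capability tags below are illustrative, not recommendations.
public class CandidateFilter {

    record Candidate(String name, Set<String> capabilities) {}

    public static void main(String[] args) {
        Set<String> hardRequirements =
                Set.of("automate-web", "open-source", "no-license-costs", "community-support");

        List<Candidate> allCandidates = List.of(
                new Candidate("Framework A", Set.of("automate-web", "open-source",
                        "no-license-costs", "community-support", "automate-mobile")),
                new Candidate("Framework B", Set.of("automate-web", "community-support")),
                new Candidate("Framework C", Set.of("automate-web", "open-source",
                        "no-license-costs", "community-support")));

        List<Candidate> shortlist = allCandidates.stream()
                .filter(c -> c.capabilities().containsAll(hardRequirements))
                .limit(5) // keep the 3 to 5 most serious candidates for the PoC
                .toList();

        shortlist.forEach(c -> System.out.println(c.name())); // Framework A, Framework C
    }
}
```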

Phase 3: Define Assessment Criteria

For me, it is usually important that the solution is a programmatic one rather than a UI-based tool.

  • Automate Web
  • Automate Mobile
  • Automate React
  • Good Documentation
  • Parallel Execution
  • Easy Tests Creation
  • Tests Stability
  • Tests Readability
  • Failures Troubleshooting
  • Source Control Support
  • Easy CI Execution
  • Reporting Tools Support
  • Paid Support/Consulting
  • Customization Time Required
  • Framework Extensibility
  • Fast Learning

And a few bonus ones.

  • Code Conventions
  • Test Environment Configurations
  • Responsive Testing
  • Secrets Management
  • Specific Tools Integrations
  • Use SUT Technology Stack (Microsoft Technologies/JAVA)
  • Use Non-programming Solution

Phase 4: Implement Proof of Concept

Form a working group of technical experts. Implement two or three tests with each candidate solution; it is better if all participants implement them. A sketch of what such a PoC test could look like follows.
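
As an illustration of how small a PoC test can be, here is a sketch of one such test written with Selenium WebDriver and JUnit 5, assuming that combination is one of your shortlisted candidates. The URL and locators are placeholders; you would write an equivalent test with each of the candidate solutions.

```java
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical PoC smoke test; the URL and locators are placeholders for your SUT.
class AddToCartPocTest {

    private WebDriver driver;

    @BeforeEach
    void setUp() {
        driver = new ChromeDriver();
    }

    @Test
    void productCanBeAddedToCart() {
        driver.get("https://shop.example.com");
        driver.findElement(By.cssSelector(".product-card .add-to-cart")).click();
        String cartCount = driver.findElement(By.cssSelector(".cart-count")).getText();
        assertTrue(Integer.parseInt(cartCount) > 0, "Cart should contain at least one item");
    }

    @AfterEach
    void tearDown() {
        driver.quit();
    }
}
```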

Phase 5: Rate Each Assessment Criterion

Criterion                   | Solution 1 | Solution 2 | Solution 3 | Solution N
Good Documentation (1.5w)   | 4 (6)      | 3 (4.5)    | 3 (4.5)    |
Easy Tests Creation (1.2w)  | 4 (4.8)    | 5 (6)      | 3 (3.6)    |
Troubleshooting             | 3          | 5          | 5          |
Parallel Execution          | 5          | 2          | 2          |
Tests Readability           | 5          | 5          | 5          |
AVG                         | 4.76       | 4.5        | 4.02       |
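
In the example table, some criteria carry a weight (for example, 1.5w): the raw score is multiplied by that weight (the value in parentheses), and the AVG row is the mean of the weighted scores. A small sketch of that calculation, using the Solution 1 column from the table, could look like this; the same math applies to the Phase 6 participant table.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the Phase 5 math: multiply each raw score by its criterion weight
// (weight 1 when none is given), then average the weighted scores per solution.
public class WeightedRating {

    static double averageWeightedScore(Map<String, Double> weights, Map<String, Integer> rawScores) {
        double sum = 0;
        for (Map.Entry<String, Integer> score : rawScores.entrySet()) {
            sum += score.getValue() * weights.getOrDefault(score.getKey(), 1.0);
        }
        return sum / rawScores.size();
    }

    public static void main(String[] args) {
        Map<String, Double> weights = Map.of(
                "Good Documentation", 1.5,
                "Easy Tests Creation", 1.2);

        // Solution 1 raw scores from the table above.
        Map<String, Integer> solution1 = new LinkedHashMap<>();
        solution1.put("Good Documentation", 4);
        solution1.put("Easy Tests Creation", 4);
        solution1.put("Troubleshooting", 3);
        solution1.put("Parallel Execution", 5);
        solution1.put("Tests Readability", 5);

        System.out.printf("Solution 1 AVG: %.2f%n", averageWeightedScore(weights, solution1)); // 4.76
    }
}
```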

Phase 6: Working Group Total Rating

Each person from the working group should provide their ratings based on the PoC. After we have the ratings of all participants, we create a final rating table. Here, you can again apply a WEIGHT index, this time based on the participants' seniority, authority, or role. In the example, in the first row, Participant 1 is the QA Lead, so we multiply all of their scores by 4.

Participants         | Solution 1 | Solution 2 | Solution 3 | Solution N
Participant 1 (4w)   | 5 (20)     | 3 (12)     | 2 (8)      |
Participant 2 (2w)   | 3 (6)      | 4 (8)      | 3 (6)      |
Participant 3        | 3          | 5          | 5          |
Participant 4 (0.5w) | 4 (2)      | 5 (2.5)    | 5 (2.5)    |
Participant 5 (0.5w) | 4 (2)      | 5 (2.5)    | 5 (2.5)    |
AVG                  | 6.6        | 6          | 4.8        |
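
Once the working-group table is filled in, the decision input is simply the AVG row. A tiny sketch of that roll-up, using the numbers from the table above:

```java
import java.util.Map;

// Sketch of the Phase 6 roll-up: take the AVG row from the working-group table
// and pick the highest-rated solution. The values come from the example above.
public class WorkingGroupDecision {

    public static void main(String[] args) {
        Map<String, Double> workingGroupAvg = Map.of(
                "Solution 1", 6.6,
                "Solution 2", 6.0,
                "Solution 3", 4.8);

        String topRated = workingGroupAvg.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElseThrow();

        System.out.println("Highest-rated solution: " + topRated); // Solution 1
    }
}
```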

Phase 7: High-level Test Automation Strategy

  1. Create an Abstract Test Cases Suite
  2. Set Priorities and Categories
  3. Put Test Cases in 3 Phases
  4. Create Test Automation Requirements
  5. Provide Estimates for Phase 1 Test Cases
  6. Provide Estimates for Requirements

What will you need in terms of test environment and test data preparation for working on the Phase 1 tests? A hypothetical sketch of one such helper follows the list below.

  1. Mock server and web page fixture generator
  2. Test Data Web Service with 3 endpoints: creating test users, creating test purchases, validating completed orders
  3. Custom framework components for working with test data tables
  4. Custom logic for validating PDF information
  5. Logic for generating session cookies by user name
  6. Bypassing captchas on the test environment
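
To make item 2 more tangible, here is a hypothetical sketch of a client for such a Test Data Web Service, using Java's built-in HTTP client. The base URL, endpoint path, and JSON payload are assumptions; the real service and its contract would be defined by your team.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical client for the Test Data Web Service mentioned in item 2 above.
// The base URL, endpoint, and payload shape are assumptions for illustration only.
public class TestDataClient {

    private static final String BASE_URL = "https://testdata.example.com/api";

    private final HttpClient httpClient = HttpClient.newHttpClient();

    public String createTestUser(String role) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/users"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"role\":\"" + role + "\"}"))
                .build();

        HttpResponse<String> response = httpClient.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // e.g. the created user's id and credentials as JSON
    }
}
```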

Phase 8: Final Decision

You need to answer a few essential questions if you believe you should use an existing solution.

  • Are you going to use the existing solution?

  • How much time will you need to implement it?

  • How much will it cost to maintain and support it?

  • Do you have an easy way to learn it? Are there any trainings?

You need to answer a few essential questions if you lean more toward developing your own solution.

  • How much will it cost?

  • Do you have the expertise to do it?

  • How much will it cost to support it over time?

Summary

You can use this pragmatic, information-based approach to evaluate test automation solutions and frameworks, as well as other kinds of tools and systems.

About the author

Anton Angelov is Managing Director, Co-Founder, and Chief Test Automation Architect at Automate The Planet — a boutique consulting firm specializing in AI-augmented test automation strategy, implementation, and enablement. He is the creator of BELLATRIX, a cross-platform framework for web, mobile, desktop, and API testing, and the author of 8 bestselling books on test automation. A speaker at 60+ international conferences and researcher in AI-driven testing and LLM-based automation, he has been recognized as QA of the Decade and Webit Changemaker 2025.