ATOM Model - Advanced Testing Optimization Maturity Model

Introduction

This article introduces the Advanced Testing Optimization Maturity (ATOM) Model. It’s an innovative approach for assessing and boosting test automation in organizations. The ATOM Model is not just a tool; it’s a detailed method that guides organizations in reviewing and improving their test automation processes. This model reflects the evolving world of quality assurance (QA) and highlights the crucial role of effective test automation in achieving software excellence.

Target Audience

This article is aimed at a broad audience in software development and QA. It’s particularly useful for:

  • QA Teams: It offers insights into the current state of their test automation and guides them towards enhancements.
  • Automation Engineers: It provides a structured framework for evaluating and improving their automation processes.
  • Consultants and QA Architects: It’s a valuable tool for assessing the test automation maturity of client organizations and creating customized improvement plans.
  • Project Managers and Decision Makers: It helps understand the importance of test automation in project delivery and resource management.
  • Software Developers: It highlights the vital role of test automation in the software development lifecycle.

Brief Overview

The ATOM Model is a comprehensive approach that examines various aspects of test automation within an organization. It includes a series of steps and questions that together give a complete picture of an organization’s test automation status. The model evaluates team skills, test quality, and categorizes organizations into distinct maturity groups. It leads to the Test Automation Excellence Index (TAEI), a unique measure of an organization’s automation maturity. This article will explore every part of the ATOM Model, showing its methodology and its potential to transform test automation in today’s software development world.

The ATOM Model Explained

Concept and Origin

The ATOM Model was developed in response to the growing complexity and necessity of test automation in software development. Recognizing the challenges organizations face in assessing and improving their test automation strategies, the ATOM Model offers a structured, systematic approach.

At Automate The Planet, we’ve spent years perfecting the ATOM Model, significantly improving test automation for a wide range of organizations, from dynamic startups to major Fortune 500 companies. Our approach is thorough: we don’t just craft an initial action plan, we also form a dedicated team of skilled Automate The Planet engineers. This team works closely with organizations to bring these plans to fruition. While each plan is customized to suit the specific needs of each organization, our track record is a testament to the model’s effectiveness. To date, we’ve achieved a 90% success rate with the ATOM Model. It’s essential to be realistic – I steer clear of claiming a perfect success rate because there are always unpredictable factors, such as global events like COVID-19, economic downturns, or stock market changes. These can influence an organization’s budget and staffing, along with internal factors like acquisitions and office politics. However, putting these external elements aside, I am confident that, technically, the ATOM model plans are solid and lead to successful outcomes.

Core Components

The model is built around several key components:

1. Evaluate Team Skills:  

Purpose: To assess the current skill level of the team in test automation.

Method: Distribute questionnaires and coding challenges to the team members. These assessments should cover various aspects of software testing, programming languages, test automation tools, and methodologies. You can check the following article for such questions. Alternatively, you can use a service such as HackerRank.

Outcome: A clear understanding of the team’s strengths and areas for improvement, which is crucial for tailoring the subsequent steps of the model.

2. Test Quality Index Evaluation:

Purpose: To gauge the quality and effectiveness of existing automated tests.

Method: Conduct a thorough review of the current test suites, including code quality, maintainability, and the relevance of test cases to business requirements. Another useful metric is test effectiveness: how many defects have the tests caught in the last year? How much time does the team spend maintaining the tests? How many false positives are there? How long does the full test run take? Do the tests use test data management properly to optimize execution? Many similar questions can be asked here.

Outcome: An index or score that reflects the quality of the current test automation efforts, highlighting areas that require attention.

3. Challenges and Workflows Review:

Purpose: To identify any existing challenges in test automation and understand the workflows of the application under test.

Method: Interview team members, review documentation, and analyze the workflow of the application, focusing on areas that are frequently changed or are critical to business operations.

Outcome: A detailed report on the specific challenges faced by the team and insights into the application’s workflows, which are essential for targeted improvement.

4. Development-Testing Process Review:

Purpose: To assess how testing is integrated into the overall development process.

Method: Analyze the current development and testing life cycle, including the integration of testing in the CI/CD pipeline, the collaboration between developers and testers, and the use of agile methodologies. Is the time for writing automated tests included in the general development planning process? Do developers have access to test result analysis? Do they trust test automation? During my consulting sessions, I often use the TMMi framework for such evaluations.

Outcome: Insights into the effectiveness of current practices and identification of gaps or bottlenecks in the process.

5. Apply ATOM Model Questions:

Purpose: To determine the organization’s current state in terms of test automation maturity.

Method: Have team members answer the comprehensive set of questions designed to assess various facets of test automation maturity.

Outcome: A qualitative measure of the organization’s current test automation practices, highlighting areas that require immediate attention.

6. Calculate Excellence Index:

Purpose: To quantify the organization’s test automation maturity.

Method: Calculate the Test Automation Excellence Index (TAEI) based on the Team Skill Index (TSI), Test Quality Index (TQI), and Automated Testing Coverage (ATC).

Outcome: A numerical value that provides an objective measure of the organization’s current state in test automation.

A detailed section on calculating this index follows later in the article.

7. List of Outputs:

Purpose: To develop a set of actionable recommendations based on the ATOM Model’s assessment.

Method: Analyze the results from the previous steps to generate a list of recommendations and actions that can help improve the test automation process.

Outcome: A comprehensive action plan tailored to the organization’s specific needs and current state.

8. Technical Solutions Assessment:

Purpose: To explore potential technical solutions that can address identified challenges.

Method: Our strategy involves conducting thorough research and assessments of different tools, frameworks, and technologies to improve the test automation workflow. Predominantly, we rely on our in-house developed system at Automate The Planet, known as the ‘Assessment System for Evaluating Test Automation Solutions’, for these evaluations.

Outcome: A list of potential technical solutions, with pros and cons, to guide decision-making.

9. Proof of Concept Implementation:

Purpose: To validate the chosen technical solutions.

Method: Implement 1-3 selected solutions in a controlled environment to test their effectiveness and compatibility with existing systems.

Outcome: Practical insights into the feasibility and impact of the proposed solutions.

10. Roadmap Creation:

Purpose: To provide a structured plan for implementing improvements in test automation.

Method: Develop a comprehensive roadmap that outlines the steps, timelines, and resources required to achieve the targeted improvements in test automation.

Outcome: A detailed and actionable roadmap that guides the organization through the process of enhancing its test automation capabilities.

11. Final Presentation:

Purpose: To communicate the findings and proposed roadmap to stakeholders.

Method: Prepare and deliver a presentation that summarizes the findings, recommendations, and the roadmap for improvement.

Outcome: Stakeholder buy-in and a clear understanding of the path forward to enhance test automation processes.

Key Questions in the ATOM Model and Their Purposes

1. Do you manually test your product or service?

Purpose: To assess the reliance on manual testing methods, which can indicate the potential need for more automated processes.

2. Do you have manual QA engineers within your organization?

Purpose: To understand the existing QA structure and the potential for incorporating more automated testing specialists.

3. Are you doing any automated tests?

Purpose: To establish the baseline presence of automation in testing practices.

4. Are those coded automated tests?

Purpose: To determine if the automated tests are script-based, indicating a level of sophistication in test automation.

5. Do you have existing test automation engineers within your organization?

Purpose: To identify if there are dedicated resources for maintaining and developing automated tests.

6. Do you have a QA Architect with significant technical expertise within your organization?

Purpose: To gauge the level of technical leadership and guidance available for QA processes.

7. Do you adhere to specific industry standards or certifications in quality assurance, such as ISTQB?

Purpose: To understand the organization’s commitment to recognized QA standards and practices.

8. Do you have any documented manual test cases that need to be executed to ensure the quality of your software?

Purpose: To identify the extent of formalized test documentation and the potential scope for automation.

9. Do you use a test case management system to store the manual test cases?

Purpose: To assess the level of organization and accessibility of test case documentation.

10. Do you collaborate with external experts or consultants to gain insights into industry best practices or upskill your team qualification and level of expertise?

Purpose: To determine openness to external expertise and commitment to continuous learning in QA.

11. Do you have defined test processes and practices easily accessible by engineers?

Purpose: To check for the presence of clear, standardized testing procedures within the organization.

12. Do you track test metrics to improve your software development and testing?

Purpose: To find out if there are mechanisms in place for measuring and enhancing the effectiveness of testing.

13. What is the current system under test coverage with automated tests?

Purpose: To evaluate the extent to which automated testing is applied across the system.

14. Are there specific reporting mechanisms or dashboards that provide visibility into the status and quality of your software?

Purpose: To assess the tools and methods used for monitoring and reporting on software quality.

15. Do you have a dedicated test environment?

Purpose: To determine if there is a specialized environment for conducting tests, which is crucial for effective testing.

16. Are there any escalations or critical bugs reported after release on average?

Purpose: To gauge the effectiveness of current testing in catching critical issues before release.

17. Is the count of unresolved non-low-priority bugs increasing over time?

Purpose: To identify trends in bug management and the efficiency of current testing processes.

18. Do you execute your automated test suite daily?

Purpose: To understand the frequency of test execution, which is key for early detection of issues.

19. Do you execute your high-priority automated tests after each deployment of the app to the test environment?

Purpose: To assess the rigor of testing post-deployment, ensuring critical functionalities are always tested.

20. Are there any flaky tests in your test suite?

Purpose: To identify the presence of unreliable tests that could undermine the trust in testing outcomes.

21. Is more than 5% of your automated test suite failing regularly?

Purpose: To evaluate the overall health and reliability of the automated test suite.
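
To make question 21 concrete, here is a minimal sketch of how a regular failure rate and flaky-test suspects could be computed from recent run history. The `run_history` structure and the pass/fail flip heuristic are illustrative assumptions, not part of the ATOM Model:

```python
from collections import Counter

def failure_stats(run_history):
    """Summarize a suite's recent run history.

    run_history: dict mapping test name -> list of "pass"/"fail"
    outcomes from recent runs. Returns (failure_rate, flaky_suspects).
    """
    outcomes = Counter()
    flaky = []
    for test, results in run_history.items():
        outcomes.update(results)
        # A test that both passes and fails across identical runs
        # is a flakiness suspect.
        if "pass" in results and "fail" in results:
            flaky.append(test)
    total = sum(outcomes.values())
    failure_rate = outcomes["fail"] / total if total else 0.0
    return failure_rate, flaky

history = {
    "test_login": ["pass", "pass", "pass"],
    "test_checkout": ["pass", "fail", "pass"],   # flaky suspect
    "test_reporting": ["fail", "fail", "fail"],  # consistently failing
}
rate, flaky = failure_stats(history)
print(f"failure rate: {rate:.0%}, flaky suspects: {flaky}")
```

With the sample data above the failure rate is well past the 5% threshold from question 21, and one flaky suspect is flagged, which a real assessment would investigate individually.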

22. Do you have to update element locators too often?

Purpose: To determine the maintenance overhead in test scripts, indicating possible areas for optimization.

23. Can your automated tests be executed without depending on existing (hard-coded) data on your test environment?

Purpose: To check the flexibility and robustness of automated tests in different environments.

24. Are there any automated tests that depend on other tests?

Purpose: To identify dependencies that could affect the consistency and reliability of test outcomes.

25. Do you use hard-coded pauses/sleeps in your tests?

Purpose: To assess the use of potentially unreliable testing practices that can impact test accuracy.
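
As an illustration of the alternative to hard-coded pauses, a condition-based wait polls until the application is actually ready. This is a generic sketch; real suites would typically use their framework's built-in explicit waits:

```python
import time

def wait_until(condition, timeout=10.0, poll=0.25):
    """Poll `condition` until it returns a truthy value or `timeout`
    seconds elapse. Raises TimeoutError on expiry. Unlike a hard-coded
    sleep, this returns as soon as the condition holds."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(poll)

# Usage: instead of time.sleep(5) followed by an assertion,
# wait only as long as actually needed:
started = time.monotonic()
wait_until(lambda: time.monotonic() - started > 0.3, timeout=2)
print("condition met")
```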

26. Do you automate newly developed features before they are released?

Purpose: To determine if the organization practices proactive automation for new features, enhancing overall software quality.

27. Do you plan automated testing tasks? Do you have a test automation roadmap?

Purpose: To check for strategic planning in test automation, indicating a forward-thinking approach.

28. Are your automated tests executed in parallel or distributed?

Purpose: To understand the efficiency and scalability of the test execution strategy.
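
A minimal sketch of parallel execution using Python's standard library, with a stand-in `run_test` function in place of a real test runner:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def run_test(name):
    """Stand-in for invoking one automated test; returns (name, verdict)."""
    time.sleep(0.1)  # simulate test work
    return name, "pass"

tests = [f"test_case_{i}" for i in range(8)]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(run_test, t) for t in tests]
    results = dict(f.result() for f in as_completed(futures))
elapsed = time.monotonic() - start

# Eight ~0.1 s tests finish in roughly 0.2 s with 4 workers
# instead of ~0.8 s sequentially.
print(f"{len(results)} tests in {elapsed:.2f}s")
```

In practice this scaling is what most test runners (NUnit, pytest-xdist, Selenium Grid, and similar) provide out of the box; the sketch only shows the underlying idea.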

29. Is it easy to reuse your tests to be executed against different instances/environments of your system?

Purpose: To assess the adaptability and reusability of automated tests across various environments.

30. Have you integrated any cutting-edge technologies within your test solution like machine learning auto-analysis of auto-failures or self-healing?

Purpose: To identify the use of advanced technologies in enhancing test automation and reliability.

31. Is your test suite integrated with your CI tools?

Purpose: To determine the level of integration between automated testing and continuous integration tools.

32. Do you currently believe in the value test automation brings to your company?

Purpose: To gauge the organization’s perception and commitment to test automation.

33. Do you follow any specific development methodologies/frameworks, such as Agile, Scrum, or Lean?

Purpose: To understand the development methodologies in use, affecting the approach to testing.

34. What are the different levels of testing you perform?

Purpose: To identify the range and depth of testing levels implemented, such as unit, integration, system, and acceptance testing.

Each question in the ATOM Model is designed to uncover vital aspects of an organization’s test automation practices. Understanding the purpose behind these questions allows for a more insightful analysis and categorization of an organization’s maturity in test automation.

ATOM Model Self-evaluation Groups

1. Test Automation Excellence

  • Team Composition: Comprises a team of highly-skilled test automation engineers.
  • Test Coverage: Extensive coverage of test scenarios, ensuring a thorough evaluation of the application.
  • Test Reliability: Consistently reliable tests (often referred to as ‘always green’) that rarely fail without cause.
  • Technical Solutions: Utilization of excellent technical solutions in test automation, employing cutting-edge tools and frameworks.
  • Best Practices: Adherence to the latest best practices in test automation, ensuring efficient and effective testing processes.
  • Development Synchronization: Ability to keep pace with ongoing development, including automating tests for current sprint stories.
  • Continuous Integration: Tests are integrated into the continuous delivery process, running automatically and frequently.
  • Maintenance Efficiency: Minimum time required for maintenance, fixing, and analyzing test results, signifying high-quality test scripts and processes.
  • Decision Making: Use of metrics-driven approaches for decisions and development, ensuring data-backed improvements.
  • Trust and Belief: A strong organizational trust and belief in the value and reliability of automated tests.

2. Test Automation Average

  • Team Skills: The team possesses average programming skills and knowledge in test automation.
  • Coverage Level: Average test coverage, potentially leaving some areas of the application untested.
  • Test Quality: Average quality of tests, with issues like large workflow tests, reliance on hard-coded data, and lack of test data management.
  • Development Alignment: Typically behind on current development sprints, lacking automated tests for newly developed features.
  • Maintenance Load: Significant time spent daily (2-3 hours) on fixing, maintaining, and analyzing tests, indicating inefficiencies.
  • Trust Issues: Existence of trust issues within the organization regarding the reliability of automated tests.
  • CI and Analysis: Lack of continuous integration (CI) and automated analysis, often resulting in tests being run locally or on-demand.

3. No Test Automation - Good QA Process

  • Automation Status: Absence of automation engineers and thus no existing test automation coverage.
  • Manual QA Team: Presence of a skilled team of manual QA professionals, some with programming experience.
  • Manual Testing: Reliance on documented manual test cases, following a structured QA process.
  • QA Process: Despite the absence of automation, the organization follows a good manual QA process.

4. No Testing at All

  • QA Presence: Complete lack of QA roles within the organization.
  • Testing Practices: An absence of both automated and manual testing practices and processes, indicating a significant gap in quality assurance.

Calculating the Test Automation Excellence Index (TAEI): A Detailed Approach

The goal here is to quantitatively assess the maturity of an organization’s test automation capabilities. To calculate the Test Automation Excellence Index (TAEI), we consider three key dimensions: the Team Skill Index (TSI), the Test Quality Index (TQI), and Automated Testing Coverage (ATC). Each dimension is scored on a scale from 0 to 10. The formula for the Test Automation Excellence Index (TAEI) is:

TAEI = (TSI + TQI + ATC)/3
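
A direct translation of the formula, assuming each dimension has already been scored on the 0-10 scale:

```python
def taei(tsi, tqi, atc):
    """Test Automation Excellence Index: the mean of the three
    dimensions, each scored 0-10."""
    for score in (tsi, tqi, atc):
        if not 0 <= score <= 10:
            raise ValueError("each dimension must be scored 0-10")
    return (tsi + tqi + atc) / 3

print(round(taei(7, 7, 5), 2))  # 6.33
```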

Calculating the Test Quality Index (TQI)

The Test Quality Index (TQI) evaluates the robustness and reliability of the automated tests themselves. Key factors in calculating the TQI might include:

  • Tests Relevance: Involves a thorough assessment of how closely the automated test cases align with essential business workflows and technical specifics of the application under test.
  • Test Reliability: Determining the stability of the tests, i.e., how often they pass without false positives or negatives.
  • Best Practices: Evaluating the adherence to test design principles and automation best practices, such as using clean code, proper test data management, and avoiding hard-coded values.
  • Maintenance Effort: Considering how much time and effort are required to maintain the test suite, including updating tests for new features or fixing broken tests due to application changes.
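
The model lists these factors but does not fix how to weigh them; one possible sketch is a weighted average of 0-10 subscores, where the weights below are illustrative assumptions rather than part of the ATOM Model:

```python
# Illustrative only: the factor weights are assumptions for this sketch.
TQI_WEIGHTS = {
    "relevance": 0.3,       # alignment with business workflows
    "reliability": 0.3,     # stability, few false positives/negatives
    "best_practices": 0.2,  # clean code, proper test data management
    "maintenance": 0.2,     # low upkeep effort scores higher
}

def tqi(subscores):
    """Weighted average of 0-10 subscores for the four TQI factors."""
    if set(subscores) != set(TQI_WEIGHTS):
        raise ValueError("provide a 0-10 score for each TQI factor")
    return sum(TQI_WEIGHTS[k] * subscores[k] for k in TQI_WEIGHTS)

score = tqi({"relevance": 8, "reliability": 6,
             "best_practices": 7, "maintenance": 5})
print(round(score, 2))  # 0.3*8 + 0.3*6 + 0.2*7 + 0.2*5 = 6.6
```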

Calculating Team Skill Index (TSI)

TSI assesses the overall competence and proficiency of the test automation team. This is determined by evaluating the team’s expertise in relevant programming languages, familiarity with automation frameworks, and understanding of best practices in test automation. You might consider factors like certifications, years of experience in test automation, and performance in technical assessments or coding challenges. You can check the following article for such questions. Alternatively, you can use a service such as HackerRank.
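
The model leaves the exact TSI computation to the assessor; a simple sketch is to average per-engineer scores derived from those assessments. The names and the 0-10 scale below are hypothetical:

```python
def tsi(team_scores):
    """Team Skill Index: mean of per-engineer scores, each 0-10.

    team_scores: dict mapping engineer -> score derived from
    questionnaires, coding challenges, certifications, and experience.
    """
    if not team_scores:
        raise ValueError("at least one engineer score is required")
    return sum(team_scores.values()) / len(team_scores)

team = {"alice": 8.5, "bob": 6.0, "carol": 7.5}
print(round(tsi(team), 2))  # (8.5 + 6.0 + 7.5) / 3 = 7.33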

Calculating Automated Testing Coverage (ATC)

ATC can be challenging to gauge, as it depends on whether your tests are based on manual test cases or not. While some organizations measure pure code coverage post-test execution, this can be misleading as it often excludes logic in databases or front-end components. In distributed cloud architectures or systems with many microservices, accurately determining coverage becomes even more complex. A more effective method might be to compile a comprehensive list of all major business areas, subareas, and workflows, and then evaluate whether they are adequately covered by automated tests. This approach is tailored and requires a nuanced understanding of each organization’s specific context.
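
The checklist approach described above could be sketched as follows, with each workflow simply marked covered or not. The business areas here are hypothetical, and a real assessment would likely weigh workflows by criticality rather than counting them equally:

```python
# Hypothetical checklist: business area -> {workflow: covered_by_automation}
workflows = {
    "checkout": {"guest checkout": True, "saved cards": True, "refunds": False},
    "accounts": {"registration": True, "password reset": False},
    "catalog":  {"search": True, "filtering": True},
}

def atc(checklist):
    """Automated Testing Coverage on the 0-10 scale used by the TAEI."""
    flags = [covered for area in checklist.values() for covered in area.values()]
    return 10 * sum(flags) / len(flags)

print(round(atc(workflows), 1))  # 5 of 7 workflows covered -> 7.1
```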

The Test Automation Excellence Index offers a numeric representation of an organization’s test automation maturity, providing an objective basis for improvement plans.

Examples

A combined dimension score of 25-30, i.e. a TAEI of roughly 8.3 to 10, indicates excellent test automation maturity, with each dimension scoring well above 5 points.

  • Example with TSI = 7, TQI = 7, ATC = 5:

  • TAEI = (7 + 7 + 5)/3 = 19/3 ≈ 6.33

  • This suggests a robust test automation setup but indicates a need for additional workforce to further enhance test automation coverage and quality.

  • Example with TSI = 3, TQI = 7, ATC = 8:

  • TAEI = (3 + 7 + 8)/3 = 18/3 = 6

  • This might reflect a recent transition from manual to automated testing: coverage and test quality are high, but the team’s skills lag behind. It calls for team training and possibly external expertise.

  • Example with TSI = 8, TQI = 8, ATC = 8:

  • TAEI = (8 + 8 + 8)/3 = 24/3 = 8

  • This represents a very high level of test automation maturity, suggesting only minor adjustments or additional staff might be needed.

  • Example with TSI = 6, TQI = 5, ATC = 4:

  • TAEI = (6 + 5 + 4)/3 = 15/3 = 5

  • A score of 5 indicates a moderate level of maturity. This scenario might require a focus on enhancing test design and expanding coverage areas.

  • Example with TSI = 3, TQI = 3, ATC = 3:

  • TAEI = (3 + 3 + 3)/3 = 9/3 = 3

  • A TAEI of 3 reflects a nascent stage of test automation maturity. It points to the need for significant improvements across all areas.

  • Example with TSI = 4, TQI = 8, ATC = 3:

  • TAEI = (4 + 8 + 3)/3 = 15/3 = 5

  • This score suggests high test quality but gaps in team expertise and test coverage. Strategic hiring and focused efforts to expand automated testing are recommended.

By examining these examples, you can draw similar conclusions in different scenarios. This calculation is just one aspect of the ATOM Model, providing vital insights into the next steps for enhancing test automation.

ATOM Model Outputs: Effective Steps for Enhancement

The outputs of the ATOM Model are diverse, ranging from team expertise development, QA architecture services, upskilling, extending test coverage, framework integration, and test case optimization, to strategic planning and roadmap creation.

1. Introducing Expertise

Objective: To infuse the team with advanced knowledge and skills.

Actions: Recruiting seasoned professionals or consultants to bring invaluable insights and deep technical expertise.

2. Team Enhancement and Skill Building

Objective: To broaden and strengthen the team’s capabilities.

Actions: Performing detailed interviews to identify suitable talent, coupled with comprehensive training and skill development sessions. In the interim, consider bringing in external experts to establish the necessary infrastructure, widen test coverage, and educate your staff.

3. Team Skill Upgrade

Objective: To elevate the team’s collective skill level.

Actions: Organizing targeted training sessions and coaching programs, tailored to meet the team’s unique needs.

4. Expanding Automated Test Coverage

Objective: To increase the range of automated testing.

Actions: Identifying areas lacking in test coverage and creating automated tests for these key segments.

5. Tool and Framework Integration

Objective: To upgrade the technical backbone for testing.

Actions: Implementing and incorporating cutting-edge testing frameworks and tools that align with the organization’s specific needs.

6. Crafting Targeted Test Cases

Objective: To ensure all important areas are thoroughly tested.

Actions: Designing new test cases that focus on critical and high-priority elements of the application.

7. Optimizing Current Testing Solutions

Objective: To enhance the efficiency and impact of existing test automation methods.

Actions: Reviewing and upgrading current testing strategies to boost their effectiveness and reliability.

8. Customizing the Testing Framework

Objective: To adapt the test automation framework to specific client needs.

Actions: Developing unique features or modules within the framework to meet the specific demands of the client’s environment.

9. Refining Tests for Better Quality and Maintenance

Objective: To enhance the test suite’s quality and ease of maintenance.

Actions: Overhauling existing tests and systems, implementing effective test data management strategies, and boosting overall test quality.

10. Streamlining CI and Test Execution

Objective: To smoothly integrate testing into the Continuous Integration (CI) process.

Actions: Setting up a comprehensive CI strategy, refining test execution methods, and establishing solid reporting and analysis frameworks.

11. Cultivating Trust in Test Automation

Objective: To foster belief and confidence in test automation within the organization.

Actions: Educating team members and stakeholders on the value and significance of test automation, and highlighting successful outcomes.

12. Strategic Test Automation Planning

Objective: To chart a clear, long-term course for test automation initiatives.

Actions: Developing detailed plans and roadmaps based on precise estimates and priorities, ensuring test automation aligns with overarching business goals.

Conclusion

The ATOM Model is a robust framework for understanding and improving test automation. It offers a complete system for evaluating and enhancing the test automation practices within an organization. It’s an essential tool for teams looking to refine their approach, whether they are just starting, improving current methods, or aiming for the highest standards of test automation.

What makes the ATOM Model so effective is its comprehensive and methodical approach. It covers everything from evaluating team skills and the quality of tests to implementing best practices and the latest tools. This thorough approach ensures a holistic enhancement of test automation practices, guiding organizations from their current state to a more advanced and efficient level.

A key feature of the model is its actionable steps, tailored to an organization’s specific test automation maturity level. Whether a team is already excelling in test automation or just beginning to establish testing practices, the model provides clear guidance. These steps range from bringing in expert knowledge, broadening test coverage, to integrating modern tools and nurturing a culture that values automated testing. This adaptability makes the model beneficial for a diverse range of organizations.

Implementing the ATOM Model goes beyond technical improvements; it also involves a shift in the organizational culture. It’s about building a belief in the value of test automation and aligning testing objectives with the broader business goals. By following the model’s roadmap, organizations can enhance their test automation processes and cultivate an environment where quality and efficiency are key priorities.

In summary, the ATOM Model is a vital resource for navigating the complexities of test automation. With its detailed, adaptable approach and focus on practical insights, the model is an invaluable asset for achieving excellence in test automation, catering to both technical improvement and cultural growth.

Need help with your evaluation?

Our company specializes in helping clients create high-quality and scalable test automation solutions that meet their targets. Get in touch, and one of our experts will contact you.


About the author

Anton Angelov is Managing Director, Co-Founder, and Chief Test Automation Architect at Automate The Planet — a boutique consulting firm specializing in AI-augmented test automation strategy, implementation, and enablement. He is the creator of BELLATRIX, a cross-platform framework for web, mobile, desktop, and API testing, and the author of 8 bestselling books on test automation. A speaker at 60+ international conferences and researcher in AI-driven testing and LLM-based automation, he has been recognized as QA of the Decade and Webit Changemaker 2025.