TL;DR
- You must understand the Test Pyramid.
- Automated tests should enable the team to release faster; they should not be a blocker.
- Always prioritize fixing a failing test over adding a new one.
- Adding a new automated test is straightforward, but maintaining consistent test results can be challenging.
- Don’t limit automation to testing alone; it can be used for many other tasks that reduce the overall testing effort.
- A suite of consistent tests is better than one with comprehensive coverage that fails intermittently.
- Begin with a basic automation framework and scale as necessary, prioritising simplicity over complexity. Adhere to fundamental principles like SOLID and YAGNI.
- Avoid waits as much as possible; prefer dynamic waits over static ones like sleep (see the sketch after this list).
- Last, and most important, enable your team members to contribute to automated tests.
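To illustrate the point about waits, here is a minimal sketch using Selenium WebDriver in Java; the URL and locator are made up, and the same idea applies if you use Cypress or another tool.

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class DynamicWaitExample {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login"); // hypothetical URL

            // Static wait (avoid): always blocks for the full 5 seconds,
            // even when the element is ready sooner, and still breaks
            // whenever the page happens to take longer.
            // Thread.sleep(5000);

            // Dynamic wait (prefer): polls until the condition is met,
            // up to a 10-second timeout, then moves on immediately.
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
            WebElement loginButton = wait.until(
                    ExpectedConditions.elementToBeClickable(By.id("login"))); // hypothetical locator
            loginButton.click();
        } finally {
            driver.quit();
        }
    }
}
```

The dynamic wait finishes as soon as the button is clickable, so the suite stays fast without becoming flaky.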
Transitioning to an Automation Testing Role?
Are you aspiring to be an automation tester, or have you recently completed training on tools like Selenium WebDriver or Cypress? Or are you bored with a manual testing role and planning to transition to an automation testing role?
While the training may have taught you how to write code or design a fancy framework, it’s crucial to understand the basic question: “Why do we need automation?”
Generally, QAs overvalue their knowledge of programming languages, various automation tools, libraries, etc. But they do not put the same level of effort into understanding how and why automation should be used in a team, and what problems it solves. Once they have a clear goal for automated tests, they will always be in a good position to use automation effectively in the team.
In this blog, I will share the basic hygiene that a QA should know before transitioning to an automation testing role.
[Read here: Why QA conferences need a change]
Test Pyramid
Test Pyramid – you can follow this link to read about the test pyramid in detail.
Additionally, don’t limit yourself to the theory of the test pyramid. Instead, pair with developers to understand what types of tests they already have. Gain insight into how developers build confidence when they change code, and what kind of coverage their unit and integration tests provide. These conversations should tell you which level of the pyramid your tests belong to. If needed, suggest that your developers add tests at a lower level of the pyramid.
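To make those conversations concrete, here is a rough sketch (in Java, assuming JUnit is on the classpath) of what the same behaviour might look like at two different levels of the pyramid; the class and discount rule are purely illustrative.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical production class, included only so the sketch is self-contained.
class PriceCalculator {
    double finalPrice(double amount) {
        // 10% discount for orders of 100 or more (illustrative rule).
        return amount >= 100 ? amount * 0.9 : amount;
    }
}

class PriceCalculatorTest {
    // Lower level of the pyramid: a fast, developer-owned unit test.
    // It exercises the business rule directly, with no browser or network.
    @Test
    void appliesTenPercentDiscountAtThreshold() {
        assertEquals(90.0, new PriceCalculator().finalPrice(100.0), 0.001);
    }
}

// Top of the pyramid: a slow end-to-end journey through the real UI.
// Keep only a handful of these for the most critical user flows, e.g.:
// class CheckoutJourneyTest { /* launch browser, add item, assert the displayed total */ }
```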
Enabler vs Blocker
Automated tests primarily focus on ensuring that existing features are not broken, rarely discovering new defects. Therefore, it’s essential to make this process as fast as possible to avoid delays in development.
I prefer to refer to my automated tests as “ENABLERS” because they provide quick feedback, bringing confidence in my team that there’s a safety net to catch any problem. This allows developers to change code quickly, especially in the case of hot fixes.
Imagine a scenario where your automated tests take too long or fail intermittently. This not only erodes trust in their reliability but also delays the feedback cycle, ultimately delaying deliverables.
Anything related to software testing should enable the team to make decisions rather than block their progress. QAs should always keep this in mind.
Quantity vs Quality
A few consistently reliable tests are more valuable than a comprehensive but unreliable suite. If you accept my earlier point that automated tests are enablers rather than blockers, it follows that the tests themselves must function reliably.
A well-programmed test behaves like a robot: it fails only if there is a genuine issue; otherwise, it performs its assigned task efficiently, time after time. Therefore, it’s preferable to focus on a smaller number of reliable tests rather than a pile of poorly programmed bots that fail even when there is no issue in the system under test.
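As one illustration of “fails only if there is a genuine issue”, a test that creates the data it depends on is far less likely to fail for reasons unrelated to the system under test. The sketch below (Java, JUnit, with a tiny in-memory stand-in so it compiles) is only meant to show the idea.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import org.junit.jupiter.api.Test;

// Minimal in-memory stand-in for the system under test, so the sketch compiles.
class OrderStore {
    private final Map<String, String> orders = new HashMap<>();

    String create(String status) {
        String id = UUID.randomUUID().toString();
        orders.put(id, status);
        return id;
    }

    boolean exists(String id) {
        return orders.containsKey(id);
    }
}

class OrderSearchTest {
    // Flaky version (avoid): assert on an order id that is assumed to already
    // exist in a shared environment; the test breaks whenever that data changes,
    // even though the search feature itself is fine.

    // Reliable version (prefer): the test creates the data it depends on, so a
    // failure points at the system under test rather than at the environment.
    @Test
    void findsTheOrderItCreated() {
        OrderStore store = new OrderStore();
        String orderId = store.create("pending");
        assertTrue(store.exists(orderId));
    }
}
```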
What to automate?
Anything, I repeat anything, that reduces your effort in testing.
However, when I say “anything,” it doesn’t necessarily imply having all edge cases covered in your end-to-end tests. You also need to consider the cost of automation. By cost of automation, I don’t just mean human effort; it covers:
a) Reliability – Ensuring consistent results.
b) Maintenance – The resources required to maintain the automation, including services, databases, and containers, to ensure tests work as expected over time.
c) Dependencies – The number of external factors necessary for the test to function correctly.
d) Complexity – The complexity of the test code itself and the prerequisites required for setup.
e) Business value – Assessing whether the automated test covers a crucial feature or aspect important to end users or business stakeholders.
f) Time saver – Evaluating the time saved by automating the test compared to performing it manually.
If any of the factors mentioned above are prone to causing frequent test failures, I would recommend skipping such tests. However, if you believe a particular test would still be beneficial, use it to complement manual testing rather than incorporating it into the team’s CI/CD pipeline. Alternatively, it could run in a separate pipeline to minimize its impact on the whole team’s workflow.
In summary, any task that saves time and effort for repetitive tasks, whether it involves test data setup, environment configuration, or any other aspect, is worth automating.
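For example, test data setup is often worth scripting even if it never runs in a pipeline. Below is a minimal sketch in plain Java; the endpoint, payload, and expected status code are hypothetical and need adapting to your application.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Seeds a test user through an API call instead of clicking through the UI by hand.
public class SeedTestData {
    public static void main(String[] args) throws Exception {
        String payload = "{\"name\":\"qa-user\",\"role\":\"tester\"}"; // hypothetical payload

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://test-env.example.com/api/users")) // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Fail loudly if setup did not work, so tests never run against bad data.
        if (response.statusCode() != 201) {
            throw new IllegalStateException("Seeding failed: HTTP " + response.statusCode());
        }
        System.out.println("Created test user: " + response.body());
    }
}
```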
Coding Practices
Over time, the biggest problem QAs face with their automated tests is maintenance. Nothing can guarantee that your code will be easy to maintain, since different people write code differently. But to align all contributors, you need to stick to a few basic coding practices such as Clean Code, SOLID, and YAGNI.
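As one concrete example of these practices in test code, the Page Object pattern gives each class a single responsibility and keeps locators out of the tests themselves. The class below is only a sketch with hypothetical locators.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// One class, one responsibility: everything about the login page lives here.
// Tests call a readable method instead of repeating locators everywhere, so a
// UI change means updating one place, which keeps maintenance manageable.
public class LoginPage {
    private final WebDriver driver;

    // Hypothetical locators; replace them with your application's real ones.
    private final By username = By.id("username");
    private final By password = By.id("password");
    private final By submit = By.id("login");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void logInAs(String user, String pass) {
        driver.findElement(username).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(submit).click();
    }
}
```

A test then reads as `new LoginPage(driver).logInAs("qa-user", "secret")` and nothing more, which is also a small exercise in YAGNI: add methods only when a test actually needs them.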