Qualified software tests are more important today than ever before. However, they no longer happen at the end of the development phase, but continuously, as agile testing throughout the project. Test automation can also support extensive end-2-end tests.
For a long time, qualified software tests were given little consideration and test automation was hardly an issue. Every developer was responsible for their own software, so why add another extensive process to the project? Why block even more resources with such “overhead” tasks?
“Testing is done at the end (by the intern).”
In agile software development, this is of course no longer the case. CI/CD (Continuous Integration / Continuous Delivery) also means: Continuous Testing. Component and limited integration tests are created and continuously executed by the development teams themselves.
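What such a continuously executed component test looks like can be sketched in a few lines. The function under test, `apply_discount`, and its rules are invented for illustration; the point is that a test this small and fast can run on every single commit in the CI pipeline:

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical component under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Component tests in pytest style: fast, isolated, executed on every build.
def test_regular_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_invalid_percent_is_rejected():
    try:
        apply_discount(100.0, 150)
        assert False, "expected a ValueError"
    except ValueError:
        pass
```

Because such tests are isolated and take milliseconds, a failure points directly at the component that broke.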
Internal test automation serves as agile testing, though less for classical quality control than for controlling the software design. For instance, it is possible to quickly find out what effects a new feature has on existing components.
Complete end-2-end testing by independent software testers should not, and cannot, replace agile component testing. It is first and foremost here that the final interaction of all the components undergoes rigorous quality control: from the actual application, through the databases and underlying infrastructure, to the software on the client side, e.g., a browser.
End-2-end testing also offers the chance to discover errors that go unnoticed in isolated component tests.
End-2-end tests are therefore indispensable, especially for complex web applications – and a challenge at the same time. Test plans often include several combinations of OS, browser and hardware equipment. For each of these combinations, countless page views, clicks, scrolls and form entries have to be tested.
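How quickly such a test matrix grows is easy to underestimate. A small sketch makes the combinatorics concrete; the platform lists and scenario count below are invented purely for illustration:

```python
from itertools import product

# Hypothetical test matrix: every scenario must pass on every combination.
operating_systems = ["Windows 11", "macOS 14", "Ubuntu 22.04"]
browsers = ["Chrome", "Firefox", "Safari", "Edge"]
viewports = ["desktop", "tablet", "phone"]

combinations = list(product(operating_systems, browsers, viewports))
print(len(combinations))  # 3 * 4 * 3 = 36 environments

# With, say, 50 user journeys (clicks, scrolls, form entries) per environment:
scenarios_per_environment = 50
print(len(combinations) * scenarios_per_environment)  # 1800 test runs per release
```

Even these modest assumptions yield 1,800 runs per release, which is why manual execution does not scale.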
Without test automation with solutions like Selenium, these extensive end-2-end tests within CI/CD processes would hardly be manageable, even for large teams of experts assigned exclusively to software testing.
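What an automated end-2-end step looks like depends on the tooling; the following is a minimal sketch of the page-object pattern that Selenium-based suites commonly use. To keep the sketch self-contained and runnable, a stub stands in for the real WebDriver, and the page URL and element IDs are invented:

```python
class StubDriver:
    """Stands in for a Selenium WebDriver so the sketch runs without a browser."""
    def __init__(self):
        self.log = []
    def get(self, url):
        self.log.append(("get", url))
    def find_element(self, by, value):
        self.log.append(("find", by, value))
        return StubElement(self, value)

class StubElement:
    def __init__(self, driver, name):
        self.driver, self.name = driver, name
    def send_keys(self, text):
        self.driver.log.append(("type", self.name, text))
    def click(self):
        self.driver.log.append(("click", self.name))

class LoginPage:
    """Page object: encapsulates selectors so a UI change touches one place only."""
    URL = "https://example.test/login"  # invented URL
    def __init__(self, driver):
        self.driver = driver
    def open(self):
        self.driver.get(self.URL)
        return self
    def log_in(self, user, password):
        self.driver.find_element("id", "username").send_keys(user)
        self.driver.find_element("id", "password").send_keys(password)
        self.driver.find_element("id", "submit").click()

driver = StubDriver()
LoginPage(driver).open().log_in("alice", "secret")
print(driver.log[0])   # ('get', 'https://example.test/login')
print(driver.log[-1])  # ('click', 'submit')
```

With a real driver the page object stays identical; only the driver changes. Concentrating the selectors in one class is also what keeps such suites maintainable when the user interface evolves.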
Many companies, however, are still shying away from the time and effort needed to implement end-2-end tests for better quality assurance. They compensate by intensifying other tests and relying on test-driven development. This is usually easier to integrate into agile processes than end-2-end testing.
A frequent argument against automated end-2-end tests is the recurring adjustment of test plans that may become necessary, for example, after a merge: even small changes to a user interface can cause a whole series of test failures.
This slows down development and, in the end, leads to errors simply being tolerated.
Also, it is often anything but trivial to trace the errors found in end-2-end tests back to concrete places in the source code of the tested application. Every component in the system, from the connected database to the browser, is a potential source of error.
The main argument against test automation is that very little time is saved due to the complexity of the setup, execution and result analysis. Yet this only holds true if the end-2-end tests are overloaded. No wonder, then, that one keeps hearing: “We’ll test at the end.”
In agile software development, however, there is no definite end, so continuous end-2-end testing makes sense if you want to achieve consistent quality throughout the software life cycle.
To incorporate end-2-end tests successfully into CI/CD processes, they should be used primarily for comprehensive integration aspects, not for functionality tests of individual components; the latter are much better placed in smaller, more targeted tests.
If a new feature does not pass the component test or integration test, it should be excluded from the merge until a fix is found.
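Such a merge gate boils down to very simple decision logic, which a CI server typically evaluates via test exit codes. A sketch, with an invented result structure, might look like this:

```python
def may_merge(results: dict) -> bool:
    """Merge gate sketch: a feature branch may only merge if both the
    component tests and the integration tests have passed. In this sketch,
    end-2-end results are evaluated separately and do not block the merge."""
    return bool(results.get("component")) and bool(results.get("integration"))

# A failing integration test keeps the feature out of the merge:
print(may_merge({"component": True, "integration": False}))  # False
print(may_merge({"component": True, "integration": True}))   # True
```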
Test automation for end-2-end tests therefore requires good planning, maintenance and discipline. This applies to internal test plans as well as to the work of external software testers. Once the procedure has been learned and integrated, however, no quality manager will want to do without it.
Avenga offers ISTQB certified software testing for functionality, usability, performance and accessibility, end-to-end automated for the entire software life cycle.
In this German-language video you can see how Avenga (formerly Sevenval) performs the test automation of end-2-end testing:
404 – three digits that make developers sweat. “In the staging environment, everything was running perfectly; then came the push into the production system and suddenly everything halted. How could this have happened?”
To avoid situations like the one above, automated testing integrates measures into the production cycle that provide warnings and indications during development, before going live, and during operation.
As the name suggests, these measures are carried out automatically, i.e. by scripts and programs. In this way, many developers hope to save time and resources which would otherwise be spent on manual software testing.
Sounds too good to be true? Our experts from the QA, development, and sales departments take a closer look at the subject of “automated testing”.
For me, automated testing in software development means all the measures for evaluating software which are not carried out manually: from health checks in monitoring and unit tests in development up to end-2-end tests by quality engineers. A widespread myth is that this per se saves resources and effort. It is only true, however, if a corresponding amount of effort and time has already been invested in manual quality measures.
The strength of automated testing lies in the fact that it minimizes the effort for recurring quality assurance measures and can deliver results more rapidly. Freed-up resources can then be transferred to activities which bring about an even higher level of quality. Hence, it is cheaper and better in the long run. Always keep in mind: “What is right or wrong for the computer must first be determined by the user. DON’T PANIC!”
Does automated testing reduce running costs? A definite yes and no! Monitoring and tools can be used to ensure that the website withstands permanent testing. But this does not reduce time and effort – except perhaps for the boss, who no longer has to check the website’s availability every morning. It does, however, ensure that failures are noticed and can be corrected more quickly. With increasingly complex applications, this will soon become a must-have. The degree of automation in testing activities is still relatively low; some companies are therefore introducing a Test Excellence Center which acts as a central instance within the company.
Automated testing has both advantages and disadvantages. On the one hand, it offers the possibility to really test end-2-end against an API. This allows you, for instance, to detect changes to the API at an early stage. In addition, you can maintain the stability of a website across multiple browsers and mobile devices. This is especially valuable if, for example, native inputs are used on mobile devices. On the other hand, there is a high risk of “dropouts” in the test battery, e.g. due to API failures and long response times. Besides this, test runtimes are high: the tests run through the website like a user (clicks, keystrokes, page structure) and therefore take many times longer than the snapshot tests with Jest, or similar frameworks, that are common in SPAs.
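One cheap way to catch such API changes early, without a full browser run, is a contract check on the response shape. The following sketch uses an invented contract and stub responses in place of a real HTTP call, but the principle carries over directly:

```python
EXPECTED_FIELDS = {"id": int, "name": str, "price": float}  # invented contract

def check_contract(payload: dict) -> list:
    """Return a list of deviations between a payload and the expected shape."""
    problems = []
    for field, ftype in EXPECTED_FIELDS.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], ftype):
            problems.append(f"wrong type for {field}: {type(payload[field]).__name__}")
    return problems

# Stub responses stand in for a real request (e.g. an HTTP GET plus JSON decode):
ok_response = {"id": 1, "name": "widget", "price": 9.99}
changed_response = {"id": "1", "name": "widget"}  # id silently became a string

print(check_contract(ok_response))       # []
print(check_contract(changed_response))  # ['wrong type for id: str', 'missing field: price']
```

A check like this runs in milliseconds alongside the slower browser-driven tests and flags breaking API changes before they surface as cryptic end-2-end failures.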