I received a question from a friend about end-to-end testing. The question was: "Is it wise to try to build end-to-end test automation around an intermittently unstable web app?" Instead of answering him directly, I thought I would put my answer in a blog post.
In my opinion, it is worth having end-to-end tests for an intermittently unstable web app, but it is not worth the effort to automate those tests and include them in a CI/CD pipeline. Here's why.
I am a HUGE believer in automated testing. I think having an end-to-end test of a system is a good thing. It gives developers and testers an easy way to exercise changes to the application. In some cases, these tests can be used to verify changes to a production system. It is critical to have these tests so that the functionality is documented and can be exercised by anyone with access to the tests.
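To make that concrete, here is a minimal sketch of what such an end-to-end test might look like, using Python and Selenium. The URL, credentials, and element locators are hypothetical placeholders, not taken from any real application.

```python
# A minimal end-to-end smoke test sketch (Selenium, Python).
# The URL, credentials, and locators are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_login_shows_dashboard():
    driver = webdriver.Chrome()
    try:
        # Drive the app the way a user would: load the login page,
        # sign in, and verify the expected landing content appears.
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "username").send_keys("test-user")
        driver.find_element(By.ID, "password").send_keys("test-pass")
        driver.find_element(By.ID, "submit").click()

        heading = driver.find_element(By.TAG_NAME, "h1")
        assert "Dashboard" in heading.text
    finally:
        driver.quit()
```

Anyone with access to the test and a browser driver can run it, which is exactly the "documented and exercisable by anyone" benefit I mean.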
I don't think it is worth it to automate these tests against an intermittently unstable system. The key words here are "intermittently unstable." If the tests can run repeatedly with consistent results, I would say automate their execution. The trouble with some web apps is that they may take longer to respond in some areas depending on the time of day or the operation performed, causing inconsistent test results due to timeouts. It can cost a lot of time and effort to keep that automation running and its results consistent. If that is the case, I don't think the automation effort is worth the time.
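To illustrate the timeout problem, here is a sketch (my own, with assumed locators and timeout values) contrasting a hard-coded pause, which fails whenever the app happens to respond slowly, with an explicit wait and a generous ceiling, which tolerates slow periods at the cost of longer runs.

```python
# Two ways to wait on an intermittently slow page.
# The URL, locator, and timeout values are illustrative assumptions.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://example.com/reports")

# Brittle: a fixed pause like time.sleep(2) followed by
# driver.find_element(...) passes when the report renders quickly,
# but raises NoSuchElementException whenever the app has a slow day.

# More tolerant: poll until the element appears, up to a 60-second
# ceiling. Slow periods pass; a genuine hang still fails, but only
# after a full minute per element.
results = WebDriverWait(driver, 60).until(
    EC.presence_of_element_located((By.ID, "report-table"))
)
driver.quit()
```

Even with generous waits, an app that sometimes never responds at all will still fail intermittently, and tuning those ceilings across a whole suite is exactly the time and effort I'm talking about.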
I hope this answers my friend's question. If not, I'll keep the thread going here.
I agree; there are certain things that provide little to no value when automated or placed in the pipeline.
Which all makes me wonder: what happens when a set of responsibly written tests begins to intermittently fail down the road?
In theory, all tests start out with the best of intentions, play nice with other tests, and successfully pass before being officially committed to the test suite.
Having said that, what happens when that stealthy bug finally presents itself under the right circumstances?
If this test suite is in your CI/CD pipeline, it seems like you would be forced to turn the failing tests off or remove them from the pipeline altogether.
If you go down that road, you would then be inspecting the test results manually to determine whether they are valid failures.
I think that turns an automated pass/fail process back into a manual one, and that kind of manual inspection would sit right in the middle of an otherwise streamlined process.
All of this is to say: can front-end test automation ever really be trusted to run reliably in your pipeline?
I honestly don't know.
I think if the "flakiness" of the tests is due to a bug, then the tests did their job and the bug should be fixed. Usually, "flakiness" in these kinds of tests develops over time as the application scales and the tests are not updated to keep up with that scale.
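For what it's worth, one middle ground that avoids turning the pipeline back into a manual inspection step is rerunning a failed test a few times before reporting failure. The sketch below is a hypothetical plain-Python decorator (plugins such as pytest-rerunfailures exist for the same purpose); the trade-off is that retries can mask the very intermittent bug a test was supposed to surface.

```python
# Hypothetical retry wrapper: rerun a flaky test a few times and only
# report failure if every attempt fails. This keeps the pipeline fully
# automated, but it can also hide real intermittent bugs.
import functools
import time


def retry(attempts=3, delay_seconds=5):
    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            last_error = None
            for attempt in range(1, attempts + 1):
                try:
                    return test_func(*args, **kwargs)
                except Exception as error:  # assertion or driver timeout alike
                    last_error = error
                    print(f"Attempt {attempt}/{attempts} failed: {error}")
                    if attempt < attempts:
                        time.sleep(delay_seconds)
            raise last_error  # every attempt failed: treat as a valid failure

        return wrapper

    return decorator


@retry(attempts=3, delay_seconds=5)
def test_checkout_flow():
    ...  # the end-to-end steps would go here
```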