Saturday, September 20, 2014

Why would I do regression testing?

There was an evening event, and late in the evening we were talking about testing. As usual in agile circles, test automation crept in to take the stage. This time, though, it was formulated perfectly for me to learn something essential.

A developer colleague took a sympathetic approach and told me he would feel bad for me if he didn't create automation. In particular, he felt he needed to work on automation so that I could focus on testing the new stuff instead of repeating the same tests.

I immediately replied from my heart and experience. I have worked on my product for over two years and logged thousands of issues, but so far I have not repeated a single test. And that even though the automation my team has wasn't created by these adorable, sympathetic agilist developers who care for my well-being.

I realized that reply contains a core of how I deal with testing work. I'm an active player in varying my approach. I might press the same buttons, but I have different ideas racing through my mind. I don't use the same browser, the same user, the same data, the same story, or the same combination. And I find problems the test automation will never find.

Don't get me wrong, I love having test automation around. I don't expect it to do any testing for me. Instead, I expect it to keep my testing from being interrupted by the plethora of simple problems automation can catch; whenever I find a problem, it stops the testing I was doing. But more than automation, I love having actively thinking developers around who use automation as their safety net but don't rely on it alone. Thinking is the core of it all, and sometimes some of that thinking gets packaged into automation.

Regression testing isn't what I do - at all. I'm not sure to what extent we should even talk about doing regression testing. I test, and regression is one of the risks I consider to motivate what I do. But there's no specific repeat-the-same-tests approach that I would call regression testing.

I've been teaching that tests have 'best before' dates that expire quickly because of changes that might require going back. It's really the test results that expire, and the results are not the tests but the information about the system. I seek the same information again, but while at it, I also seek new information based on the learning that other tests (and discussions, readings, etc.) have given me. That's what a tester needs to set their mind to achieve.