This is the first part in a series of posts I’ll be making, all discussing problems with test automation. Now, I’m not saying test automation is problematic, or that it doesn’t work, but there are inherent problems with how most organizations do automation, and because of that, test automation comes off looking poor. To see the extent of this problem, you only need to look at the industry in general. Industry wide, only about 20% of testing is done with automation. For a technology that has been around for over 30 years, that speaks volumes. To put this in perspective, cell phones have been commercially available for about 35 years; it took about 20 years to reach 60% adoption, and after 30 years adoption was at 90% (we now have more phones than people on this planet). With numbers like that, it would appear that test automation isn’t the solution we’re looking for.
I hear very often that test automation doesn’t work, and most of the time, yes, test automation doesn’t work...the way you’re doing it. Many people, organizations, and even industries have decried automation, and entire niche markets have been created to ‘make automation simpler’. That said, I believe automation can be effective, though it will probably entail changing the way you’re doing things. More than that, I believe test automation is necessary to ensure success in any software development project.
You might be wondering: sure, automation has problems, but why should I listen to Max about this? That’s a great question. I’ve been working in the test automation field for about 15 years, the majority of that time as a consultant. I've worked with over 20 different organizations, sometimes as an outsider providing insight, sometimes hands-on doing the work. At all of them, I’ve gone in to try to improve their software quality while reducing the effort needed to get their software released. The majority of the organizations I’ve worked with were successful in getting automation going, and many continued their good work after I left (I try to keep up with a lot of them). Some of them didn’t. I have a whole host of lessons learned that I’m hoping to share with you in these posts over the next few months. I’ve pretty much never worked for or with an organization that had testing ‘all figured out.’ No one pays money to hire someone to help when things are moving smoothly. That puts me in the interesting position of having seen lots of different problems, and how I (and others) have gone about solving them. And of course, it also leaves me with ideas about what I’d do differently next time (hindsight being 20/20 and all).
Why Bother with Test Automation?
I’m not an ‘automate everything’ sort of person. That sounds great and all, but for most organizations, it’s simply not practical. I am, however, a huge fan of automation, and of automating as many of your tasks as possible. As a tester, this means I want to automate as much of my testing as I can. You might ask why, and I have several reasons. First and foremost, I’m lazy. I don’t want to do the same thing over and over and over again. It’s boring, repetitive, and error prone. And from a practical perspective, doing all of that work by hand just isn’t feasible.
If you look at the software development process from a simple tester standpoint, it goes like this: developers make A, you test A. Developers make B, you test B, and then you test that A isn’t broken. Developers make C, you test C, and then you test that B and A didn’t break. Testing, therefore, is a cumulative process, with the amount of work you’re expected to do growing over time. This is why organizations have such long release cycles. It is also why test automation is needed: you have repeated tasks that need to be performed, and without some rapid way to perform them, at some point releasing new software becomes impractical, either because of the risk of shipping untested software or because of how long the testing takes.
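To make the cumulative effect concrete, here is a minimal sketch of that A/B/C pattern. The feature names and check functions are hypothetical stand-ins, not real tests; the point is simply that release N re-runs all N sets of checks, so the suite only ever grows.

```python
# Hypothetical per-feature checks; in a real suite each would verify
# actual application behavior rather than return True.
def check_feature_a():
    return True

def check_feature_b():
    return True

def check_feature_c():
    return True

# Each release re-runs every prior feature's checks plus the new one.
releases = [
    ("release 1", [check_feature_a]),
    ("release 2", [check_feature_a, check_feature_b]),
    ("release 3", [check_feature_a, check_feature_b, check_feature_c]),
]

for name, checks in releases:
    results = [check() for check in checks]
    print(f"{name}: ran {len(checks)} checks, all passed: {all(results)}")
```

Done by hand, that re-running is exactly the boring, repetitive, error-prone work described above; done by a machine, it takes seconds.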
Understanding this can even help direct where to test. What actions will be performed repeatedly, and what are one-time actions? We can use this distinction to make our testing suite more effective by thinking about the ROI (return on investment) of each test. When starting out, that means you should use your great testing skills on the latest feature, but use automation on prior features to check that nothing broke, as those checks will be needed in the long term. Maybe still do some exploratory testing across the entire app, but solid, consistent features, like say some static content or a login feature, should get automation around them. Once you have that in place, identify the core verification points of your latest feature and automate those. Expand from there, but in the meantime, do manual testing to fill in the gaps.
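As a hedged illustration of automating one of those stable features, here is what a first set of checks around a login feature might look like. The `authenticate` function is a toy stand-in for your application's real login logic (a real test would drive your app or its API), but the shape of the checks, one happy path plus the important failure paths, is the idea.

```python
def authenticate(username: str, password: str) -> bool:
    """Toy stand-in for real login logic; accepts one known user."""
    return username == "alice" and password == "s3cret"

# pytest-style checks: plain functions with assertions.
def test_login_succeeds_with_valid_credentials():
    assert authenticate("alice", "s3cret")

def test_login_fails_with_wrong_password():
    assert not authenticate("alice", "wrong")

def test_login_fails_with_unknown_user():
    assert not authenticate("mallory", "s3cret")

# Run the checks directly; a runner like pytest would discover
# these test_* functions automatically.
test_login_succeeds_with_valid_credentials()
test_login_fails_with_wrong_password()
test_login_fails_with_unknown_user()
print("login regression checks passed")
```

Once checks like these run on every build, the login feature stays verified for free, and your manual effort can go to whatever is newest.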
An important thing to note: I’ve never been in a position to spend as much time as I want testing a feature. Time is always an issue, and automation will cost you time at the beginning, not save you any. But if you don’t invest now, things will only slow down. An initial investment will pay off 10 to 100 times down the line, often starting in the very next sprint or release cycle.
Hopefully I’ve gotten you excited about doing some automated testing, but probably haven’t given you much insight into how to tackle some of the problems you’ll be running into down the line. Don’t worry, that information is coming soon.
Tune in next week, when I dive into our first topic: unreasonable expectations. I’ll be discussing not just what you should and shouldn’t expect from automation, but also what management should expect, and how to handle dealing with organizations that expect way too much. I’ll discuss some good measures to collect to help you show value, and also how to write tests in such a way as to ensure transparency in what your automation is doing, isn’t doing, and why. If there is a particular area you’re struggling with, please leave a comment below, and I’ll either try to address it there or add it to my list of topics to write on. Good luck, and happy testing!