Test Automation – Am I the only one?

What would the world of testing be without test automation? Well, I assume a bit slower than it is with it.

In this post I don’t want to speak about:

  • There is no such thing as automating “testing” – I know!
  • It’s not about using tools to assist testing
  • It will not be a rant against vendors who declare 100% test automation is possible – no, it’s not!
  • Test automation is free, apart from license costs – no, sir. It’s f**king expensive.
  • Test Automation doesn’t find bugs, it simply detects changes. Humans evaluate if it’s a bug.

So what is this post about? It’s about my personal fight with test automation and the risks I identified attached to it – risks that don’t seem to bother most of the people working with test automation. So, am I worrying too much? You want to know what bothers me? I will explain.

There are lots of people who treat test automation as a silver bullet. “We need test automation to be more efficient!”, “We need test automation to deliver better software!”, and “We need test automation because it’s a best practice!” (Writing the b word still makes me shiver.) If you are in a product/project that doesn’t use test automation, you quickly get the impression that you are working in an outdated way.

My personal story with test automation started a while back when I joined my current company. Implementing test automation was one of the main reasons I was brought in. After nearly 2.5 years there was still nothing, because my team and I were busy with higher-priority stuff. Busy with testing everything with minimal tool support. Sounds a bit like the “lumberjack with the dull ax” problem, if you belong to the test-automation-as-silver-bullet faction: no time to sharpen the ax, because there are so many trees to chop. In May 2015 I got the assignment to finally come up with a test automation strategy and a plan for how to implement it. Reading several blogs on the topic, especially Richard Bradshaw’s, quickly formed some sort of vision in my head. I know – if you have a vision, take two aspirins and take a nap. But really, a plan unfolded in my head. And again we had no time at hand to start on it. Some parts of the strategy were started, some proofs of concept were implemented. Three weeks ago I finally got a developer full-time to implement it. Things need time to ripen at my place.

Now I am a test lead with no real hands-on experience of how to automate tests, and I have a developer who can implement what formed in my head. But with all the time between creating the strategy – a strategy I still fully support as useful and right – and implementing it, I also had enough time to apply some good critical thinking to the topic.
And finally, last week at the Agile Testers’ Meetup Munich the topic was “BDD in Scrum”, and thanks to QualityMinds, who organized the event, we not only got a short introduction to BDD, we also had the opportunity to do some hands-on exercises.

Why am I not a happy TestPappy now that everything is coming together? Here are my main pain points: risks I would like to address and that I need more time to investigate.

Why do people have more trust in test automation than in “manual” testing? It seems that people are skeptical when it comes to letting testers do their job and trusting them to test the right things. But once you have written 100 scripts that run on their own, 3-4 times per day, every day of the week, producing green and red results, the skepticism seems to disappear. It seems to me that no one actually questions an automated test once it’s implemented.

Automated checks with “good quality” need well-skilled people. Is your stomach getting ready to turn at reading “good quality”? Good, we are on the same page. The most important quality characteristics an automated check should have are, in my opinion: completeness and accuracy, stability, robustness and trustworthiness, scalability to some degree, maintainability and testability, and some more. That’s a shitload of things to take care of when writing some simple checks. To be honest, our application itself doesn’t meet most of these criteria to a sufficient degree, at least not when it has to stand up to my demands. How could a test team take care of all that while generating hundreds of necessary tests? Now I got lucky and was able to hire a developer to implement the automation framework of my dreams, so I have some support on that front. But once we start implementing the checks themselves, the testers and I need to implement them, or at least help to implement them. How do you take care of the problem that all checks need to have “good quality” to be reliable, not only today but also next week or next year?

How do I know that my script checks the right thing? I’m a very explorative tester. I usually don’t prepare too much of what I’m looking for; I let my senses guide me. So when I hand over a certain area to be covered by a script, I have to make decisions about what to cover. At least in my context I am pretty sure that I will miss something when I give that out of my control. How do you handle that?
My first attempt at implementing some automated checks 3 years ago was to call every page that I could reach without too much logic and make a screenshot of each. I would then just quickly walk through the screenshots and check for oddities. But this is more a tool to assist my testing, not something able to run without me or some other human. Simply comparing screenshots and only checking screens that differ from a baseline is not really possible, since the displayed data changes often in my context.
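For illustration, here is a minimal sketch of that approach in spirit, assuming Selenium WebDriver; the page list and output directory are invented for the example, not my actual setup.

```python
# Walk a hand-kept list of pages and screenshot each one; a human then
# flips through the images looking for oddities. All URLs are invented.
import pathlib

from selenium import webdriver

PAGES = [
    "https://app.example.test/",
    "https://app.example.test/orders",
    "https://app.example.test/customers",
]

OUT_DIR = pathlib.Path("screenshots")
OUT_DIR.mkdir(exist_ok=True)

driver = webdriver.Firefox()
for i, url in enumerate(PAGES):
    driver.get(url)
    driver.save_screenshot(str(OUT_DIR / f"page-{i:03d}.png"))
driver.quit()
```

Note there is deliberately no baseline-comparison step: the evaluation stays with the human, for the reasons given above.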

What am I missing? My favorite topic of the past 2 years is ignorance, in this case the unknown unknowns. How do I handle that question? I’m sure I miss lots of things with a check, but can I be sure that I don’t miss something important? Once an area is covered with automation, how often do you come back to investigate it? To review whether you missed something, whether you need to extend your checks, or whether you should redesign the whole approach?

How to trust a green test? There is always the problem of false positives and false negatives. False negatives waste time, but in the end I have double-checked the area and covered more than the script, so I’m okay with handling those. But false positives are mean. They say everything is all right, and – hopefully – they hide in a big list of other positive tests. So for every single check, every single assertion, you need to think about whether there is a way for the result to be “true” when it’s really “false”.
It also depends on what the script is covering. If you forgot to cover essential parts, you will miss something. But the check will not tell you; it simply can’t. It’s not programmed to do so.
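To make that concrete, here is a hedged sketch with invented endpoint and field names; the weak variant can come back “true” while the business outcome is really “false”.

```python
# `api` stands for any HTTP client fixture; the endpoint is hypothetical.

def test_order_total_weak(api):
    response = api.get("/orders/42")
    # Weak assertion: also passes if the body is an error page or empty
    # JSON – a false positive in the sense used above.
    assert response.status_code == 200

def test_order_total_stronger(api):
    response = api.get("/orders/42")
    assert response.status_code == 200
    order = response.json()
    # Assert on the business outcome, not just on "something came back".
    assert order["total"] == 119.00
    assert order["currency"] == "EUR"
```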

Automating the API/service level? Richard Bradshaw presented that nice topic of automating things on the right level on Whiteboard Testing. Many tests would be better run on the API/service level. I agree, to a certain degree – as long as there is no business logic implemented on the client (e.g. browser) side that I need to emulate. When I need to mock front-end functionality to effectively interact with an API, I have to re-create that logic based on the same requirements. Do I implement the logic a second time to also test whether the implementation is correct? Can I somehow re-use the original front-end code, and thereby miss problems in it? Do I trust the test implementation more than the front-end implementation? If so, why not put the test code into production?
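A small sketch of that dilemma, with all names invented: to drive the API without the browser, the check re-creates a client-side rule from the same requirements, so a shared misreading of those requirements would still show green.

```python
# Hypothetical discount rule that also lives in the front-end JavaScript.
def client_side_discount(cart_total):
    # Re-implemented from the same requirements as the front-end code;
    # if both authors misread the requirement the same way, this check
    # stays green while production behaves wrongly.
    return cart_total * 0.9 if cart_total >= 100 else cart_total

def test_checkout_applies_discount(api):
    # `api` stands for any HTTP client fixture; the endpoint is invented.
    expected = client_side_discount(120.0)
    response = api.post("/checkout", json={"cart_total": 120.0})
    assert response.json()["amount_due"] == expected
```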

And the list of worries could go on a bit longer, but I’ll stop here.

Please help me! Do I worry too much about automating stuff? I would be very happy for some comments on my thoughts: are they shared, or maybe already solved? And if they are overblown, I want to know that as well.


22 thoughts on “Test Automation – Am I the only one?”

  1. Hey TP, so you’ve come to that point where so many of us have got to before – the point where you say “who checks the checking?” and “is the return worth the investment?” These are very salient points, and you haven’t even reached the “why won’t this check/test do what it should?” stage.
    This is where the (first) decision to proceed must happen. As you are probably aware, implementing automation is just like every other technology project – you define the problem, you identify how you want to solve it, you define several possible solutions, you put together a budget that you believe will achieve your goals, you log the risks, issues, dependencies etc., and you agree whether or not to proceed.
    However, this is not your decision alone. You must update the business case (hopefully you had one at the start of your 3-year exercise!) and present it to your sponsors and stakeholders in a way that lets THEM make the decision. AND you must do it in a way that is not biased towards your preferred outcome.
    It appears to me that you still have more questions than answers, which leads me to think you still have quite a way to go before you embark on the build phase of this project (if it is still viable in its current objectives), or it may be time to reset the objectives and goals.
    I could write a whole report of “what next?” but it would probably be more efficient to do an online Skype session to save on misunderstood assumptions. Yes, I am offering assistance if you need it!!
    This is an excellent case study, highlighting the risks associated with ANY project and therefore you need to do exactly the same thing that all other projects (should) do – ensure that going forward is the right thing to do. Far too many projects fail because they fail to ask this question often enough.
    It is far braver to stop, review, question, re-analyse than to blindly keep going forward. Maybe it’s time for a SWOT (strengths, weaknesses, opportunities, threats) analysis.
    As I said, I could go on….
    I hope this helps and as I said the offer of more assistance is here 🙂

    1. Thanks, Colin. Your answer is highly appreciated.
      Let me start with the first point. Yes, it’s a project trying to solve a problem. The problem is that we don’t trust what we put into production. That’s why we test in the first place. When we let machines do parts of our job, we have to realize that we either need to trust those scripts or “check the checks”. When we trust the scripts, why can we not come to the same degree of trust for our production code? Wouldn’t that be easier?

      I would like to accept your generous offer, but maybe not to talk only about my context. What I try to understand is why so many people obviously either don’t ask these questions or already have answers to most of them. Maybe your experience can help bring some light to my soul.
      Thanks my friend.

  2. Nice post. Everything you mentioned is a valid point. For me, I always take automation results with a grain of salt. I usually monitor the usefulness of the automated checks, and generally there comes a tipping point when you decide that the maintenance effort and/or the number of false positives is too high; then I usually plan a big review that usually ends with a big clean-up/rethink of what we’re doing.

    You could ask why I wait for a tipping point rather than review frequently. But for me that would mean going into review hell, which would closely be followed by maintenance hell.

    1. Thanks, Lim. From your answer I take it you accept the risks and try to live carefully with your automation scripts. At least you know about the risks; what about management? Do they share your view and agree to spend budget on reviewing and rethinking?

      1. Indeed we do manage the risk by actually doing testing à la manual/exploratory ways 🙂 WRT management, we try to explain it to them, but it usually goes over their heads. We include the reviewing and rethinking as part of everyday work, the same way we do with managing other tech debt.

  3. Interestingly enough this is a scenario I am in at the moment as well. I was – among other things – brought in to improve test automation.
    I am in a somewhat comfortable position as stakeholders are aware that it is no silver bullet, but can be useful for certain things. Business case still comes in handy here.
    But since you posed some questions, here are my answers.

    Why do people have more trust in test automation than in “manual” testing?
    This really depends on what their understanding of manual testing is. Assuming that this is scripted testing (and that is what many people think, I fear), you are taking out the “repetition and boredom” factor.
    Personally, I don’t trust automation any more or less, at least content-wise. I trust it to be faster and able to do more things in parallel, thus freeing up testers for testing beyond a script.
    This is my main motivation at the moment.

    How do I know that my script checks the right thing?
    You don’t. You could check and review the script, of course. But is this really different from human testers? How do you know that they check the right thing? Well, by asking, but that isn’t too reliable either.
    I’d say both situations are about trust: trusting the tester to test the right thing and trusting the tester to create a script that checks the right thing.

    What am I missing? Once an area is covered with automation, how often do you come back to investigate it?
    Automation shouldn’t be an excuse not to go back and investigate. But I think automation allows you to allocate more time for testing in areas where it is necessary, i.e. areas that might be affected by recent changes.
    When you can’t allocate much time to an area, automation provides some welcome additional information for judging that area, I think.
    To borrow a term from literary studies, automation is certainly not the all-knowing narrator some people take it for, but I think it is nevertheless a valuable source of information that can be taken into account when making judgements.

    How to trust a green test?
    This is really about what a green test means to you. Assuming that you are not running a script for the first time, a green test is for me an indicator that the software probably hasn’t changed since the script last ran.
    Does it mean that the system works properly? Not necessarily. But that is something to test.

    Automated checks with a “good quality” need well skilled people.
    This is right, just as anything with “good quality” will need well-skilled people. And those people will need time, which I feel is often more limiting than skill. So this is something to provide people with, hence the business plan.

    So those are my 2 cents, which probably raise more questions than they answer (at least for me), but I didn’t want to discard them. If you want to chat about the topic, I’d be more than happy to do so. Just give me a ping on Skype at kram.christian

    1. Thank you, Christian, your 2 cents are highly appreciated.
      You are right with your answers; I just have to think about whether that’s all that bothers me or if there’s more to it. At the moment it feels like they answer only part of my problems, far from solving them. I hope a good night’s sleep will bring some new insights.
      Thanks for your kind offer, I might come back to it.

      Patrick

  4. Hey Patrick,

    First of all, nice post!
    I don’t think this topic of automation needs to be such a big story. Maybe you should view it as an agile project for yourself. Start with something small that runs (an MVP). See what you can do with it and explore the capabilities of the tool. You will get new ideas as you go. Just keep in mind that you still need to maintain it. On the other hand, try to keep the expectations low. That is at least what I am trying to do. I guess you already thought about setting up test data using automation (bulk-changing data or creating test customers is always a good application of automation, IMHO).
    So hopefully cya soon 😀

    1. Hi Ben,

      That is a good approach, and I partially implemented it in my strategy timeline. I tried to prepare my management for at least 2-3 occasions in the first year where we might need to review the approach and partially return to the drawing board.
      I’d prefer to start with something small and learn from it. But management wants a 1-5 year plan and a vision of the final picture to sell it to their management, so what we start with is like “low-hanging fruit” or “quick wins”. And we had to set quick milestones to prove to upper management that the investment (of bringing in an additional person) is working.
      So the schedule ruins part of the learning.
      Let’s see how far this gets us.

      Cheers
      Patrick

  5. Perhaps I am taking this from a different angle. Because to me, automated or not, you still need to test. And automating is basically making sure you don’t need to repeat yourself. Especially since projects (code) tend to get very complex very fast, it is impossible to keep up with hand-made tests.

    The question of how reliable green tests are is really the question of how reliable your test script is. It’s the same thing, really. The only difference I can see is that a human being might consider the question: is this script still relevant/up to date/etc.? Then again, if your automated test shows green and the scenario has changed, your test was not good to begin with (and neither was your test script, probably).

    So manually or not, the questions still stand.

    Let’s suppose you had a batch of people who could run manual tests for you at the same level as automated tests would. How many of your questions would still apply? I am really curious about that.

    Don’t get me wrong, I think automated tests are very nice (and almost mandatory). But I also think this is a team effort, not a ‘tester effort’. You need to be very wary of test results, know exactly what you test, and know when tests should fail. Your trust in the ‘test suite’ should be 100%. Because once you get into the broken window syndrome (“this test fails occasionally, just re-run the suite”), you have bigger problems.

    I also think testing (automated or not) should be much more closely coupled to implementing features. Break features down into very small (vertically sliced) parts and identify how to test them individually. Then, once you have a good base of the individual parts, you can move up one rung (in the test pyramid) and test those parts together. Lastly, you need some tests for the whole (system) integration.

    About ‘how do you know you test the right thing’: well, how DO you know anyway? You validate with ‘the business’, I hope? 🙂 Roughly speaking: your tests validate the product, and your product validates the tests. They should be in sync, just like a bookkeeper keeps his balances in sync.

    I once heard someone say: sometimes it is hard because it is hard.

    And I think there is some truth in that. Regardless of how you approach this problem, it will always be difficult. Take small steps, reflect, inspect, and adapt. It all sounds so easy, but it’s damn hard to do.

    Good luck in your journey!

    1. Hi Stefan,

      Thanks for your comment.
      I disagree with your first point: “Especially since projects (code) tend to get very complex very fast, it is impossible to keep up with hand-made tests.” When the product starts to grow, changes to existing parts are more likely to happen, breaking your existing scripts or rendering them useless. So instead of spending time testing the feature, I spend time adapting my existing test scripts.

      I don’t know what philosophical testing background you come from, but for me, test scripts/test cases, or whatever you want to call them, are as generic as possible, to keep maintenance low and variation high. I’d rather use checklists or test charters to be able to cover more ground with less documentation and less need for maintenance. Because, yes, products tend to grow and change.
      An automated script can’t keep up with that.

      From testing (what you might call manual testing) I expect more than a list of green and red check marks. I want to hear about what was done and what was found. And if the story is that there were no noticeable problems, and I can ask back about particular parts, I am satisfied. If I look at a list of test cases marked as passed, I don’t get the same satisfaction. Maybe that’s just me. So testing done right is time well spent in my eyes.
      In case I really have to do the exact same thing over and over again, an automated script is better than a human. But then we come to checking, where machines are more reliable than humans.

      The scenario of comparing people with machine-run scripts brings in the fact that people are not good at following instructions to the letter. So, depending on the context, I’d rate human results as more valuable, since humans look out for more problems than just what the script is asked to check; people just do that. If the task has lots of steps needing precise input that should not be varied, I’d say either the application has a problem in allowing mis-inputs, which might also lead to problems later in production, or, if I know (by checking and double-checking, monitoring and analyzing) that my script is robust and reliable at what it does, doesn’t miss a field, and looks at all the right things to assert, I would trust the script more.
      Trust is the essential aspect here. Trust in a human being is built upon various factors and situations. If I have one person whom I have known for quite a while, and I know she does good work, I trust her with a variety of new tasks as well. A script is written only for a certain task, and I have to trust each script over and over again. When bringing in a group of completely new people to the company, whom I have to compare to one script, the situation is different. But bringing in “trained monkeys”, in my view, has nothing to do with testing.
      I can transfer the trust from the script to the person writing the script, trusting her that she knows what to do in order to produce a reliable and robust script that looks for the right things. To reach that goal, the person needs experience in how to achieve these quality characteristics for each and every script. In my situation the problem is that I have no other person who knows all these things. I came up with a strategy that requires me to gain that experience and learning myself. And of course I don’t trust myself to reach all those parts 100%.
      For many people – it seems to me – trust is not a problem here. People trust tool vendors in no time, believing the tool does exactly what they need. Or do these people just have the right people and trust them enough with their decisions?
      So many more questions…

      Thanks for mentioning the “broken window” syndrome; that is quite an important aspect, one that I have observed several times with our unit tests. If a test script does that, it needs revision, now!

      If we exchange “test pyramid” for “test hierarchy”, I agree with your strategy about unit, integration, and end-to-end tests, and whatever levels might be available in between.

      I like the quote “sometimes it’s hard because it’s hard”. And I would rate test automation as hard, but listening to management, some developers, and of course tool vendors, the problem seems to them to be easily solvable. That was the trigger for my post in the beginning.

      Taking small steps would be my preferred solution, learning along the way how to best use automation in our context. But management – again – also wants to see the big picture they are investing in, which I can understand. They don’t agree with hiring a person under the strategy “let’s start small, then see what’s next”. They need a vision and a plan for implementing it that can be followed and monitored. Are we back to trust again? In one part, yes; in another part, they just want to protect their investment, and they want to understand what you are doing with such a huge investment.
      Managers often hope, and are promised by vendors and consultants, that bringing in test automation reduces the cost of people; that has often proved wrong, and it is another pain point. But there’s another blog post behind that question alone.

      Thanks again for your comment. That brought up several good points here.

  6. Hi Patrick,

    First of all, thank you for writing this post. I think your questions are excellent; everyone in the software testing industry, and even all the developers and managers out there, should ask themselves these questions. I totally agree with the bullet points at the beginning of your post. I hate to hear sales people talking about their products as something from heaven that will solve all the problems. And we all know they are introducing even more issues and complexity into our complex lives as testers.

    Why do people have more trust in test automation than in “manual” testing?

    In my experience, developers and managers especially trust automated checks more because they feel they get a “real” result, namely a green light. Because they have NO clue what software testing means and what it is all about.
    Manual testing is never efficient for them, because they don’t see any result coming out of it. They have nothing in their “hands” after testing is done. Sad but true; I have heard that too often in various companies and projects. I think they want to have something that they can use for argumentation when something goes wrong…

    When I hear such statements, that manual testing is not efficient enough, I start questioning their points and ask them to explain their thinking, and I tell them the bullet points you mentioned; some of them started to rethink their opinions, too. Many of those people also think that software testing can be done by anyone, because it is just “clicking” around. We know it is more than that. Another problem is that software testing is hardly part of any university curriculum. At some universities there is a single class in a semester dealing with testing concepts from a theoretical point of view, and that’s it. Everyone wants to become a DEVELOPER ;). I think we have to educate them that automated checks are not the silver bullet. One nice thing happened to me lately. I was sick for 2 days and the devs were on their own for the code freeze of our app. They had no tester in the team, and they really started to test the app from my point of view :). And I was really happy to hear that when I came back to the office. One developer came to me saying: “Daniel, now I admire you and your work even more. The testing activities you are doing are just great.”
    Well, that felt good, but the fact that the devs were testing the release too was even better for me. Because my education works :D.
    Let’s see if it will stay that way.

    How do I know that my script checks the right thing?
    I agree with Christian Kram. You don’t. We have plenty of automated checks in place, and even when I see the little green light in our CI environment, I visit the sections and check some stuff manually from time to time. Sure, not all the time, because time is always a problem. We do code reviews on the automated checks every time we want to commit the code to the master branch. We treat the test code as production code: developers and other testers review the pull requests and add comments. This helps a lot and is also very effective for spreading code ownership in the team.
    Once in a while we do a clean-up of our checks. We go through the different classes to see if the features are still covered for our needs. If not, those checks get deleted.

    How to trust a green test?
    Green checks are OK, it is nice to have them, but as you mentioned, those false positives are really nasty ones. We use manual test sessions across development teams to perform lots of manual testing, either on specific areas or on the whole platform, even if we know that we have green checks.
    Checks for me are “stupid”; they have no brain and can’t look left and right to see if something is going wrong. So we use them to free up some time to concentrate on other sections in our testing.

    So far my points on your post. I hope they are helpful. As mentioned in my answer, I try to educate other people that test automation is not the silver bullet and show them the pros and cons of it. However, this is exhausting and way too often annoying, too.

    Best,
    Daniel

  7. Awesome post, thanks for that. I don’t really have answers, just some thoughts that I wrote down while reading your article:

    Paraphrasing a point I think you made: “If something is covered by automation, do we come back to explore it often enough?”
    I have identified these three heuristics so far:
    1. If we expect change in that area of functionality.
    2. If your change detectors pick up something, even though there should be no change in that area. That is, if the change detectors pick up something, we should test thoroughly, not just ‘fix the check’.
    3. If we expect AT LEAST A TINY BIT OF CHANGE in an area of functionality, but the change detectors DO NOT pick up anything. This means we can go hunting for false positives.

    When asking ‘how to trust green tests’: practicing and learning good check design can help with increasing trust in your own work. Like all of testing, this is a humbling experience, since every once in a while we still miss stuff.

    “Worry about worrying too much about automation.”
    I guess you mean ‘worry about worrying too much about automation at the expense of worrying about something that is possibly more important’. I think the problem here is that there is always more stuff that we could write automated checks for. We need some kind of stopping heuristic so that we can say ‘This is it! Enough change detectors, let’s move on!’. I think lean principles could help here.

  8. Last Friday I showed another tester from my company how we test in my current project: I showed him my Evernote notebook with the charters and documented test sessions, and told him how we use tagging and how we identify interesting charters when different parts of the app are changed.
    After a while he asked me: “Well, I do know that you also have automation in place. If you test like this, what do you use the automation for?”

    At first I found this an odd question, because using automation in my projects is somewhat natural for me. After all, I started out as a developer. Then we talked about which role automated checks actually play in our projects. I’ll do my best to paraphrase here; maybe you can work with it.
    In a nutshell: we use automated checks to make sure a tester does not waste his time when he starts exploring a new version of the app, because we can identify quite early if it is completely broken or if important use cases do not work at all.

    We are developing a mobile app and have three kinds of checks in place: unit checks for logic, unit checks for UI, and checks for customer journeys. “Chasing the green” is something we do for the unit checks, not necessarily the customer journeys, since we know of some problems that can make a check red even though it is not the app’s fault (primarily lots of scrolling …). We compensated for this by creating a report that includes a lot of screenshots. This way the tester can easily identify whether it is a real red or a false negative.
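    A rough sketch of what such a screenshot-on-failure helper could look like, assuming Selenium WebDriver; the helper name and report directory are invented for illustration, not taken from this project.

    ```python
    # Hypothetical helper for journey checks: on a failed step, keep a
    # screenshot so a human can judge from the report whether it is a
    # real red or a false negative, without re-running the journey.
    import datetime
    import pathlib

    REPORT_DIR = pathlib.Path("journey-report")

    def checked_step(driver, description, condition):
        """Record a journey step; keep a screenshot when it fails."""
        if not condition:
            REPORT_DIR.mkdir(exist_ok=True)
            stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
            shot = REPORT_DIR / f"FAIL-{stamp}-{description}.png"
            driver.save_screenshot(str(shot))
            raise AssertionError(f"{description} failed, see {shot}")
    ```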
    In addition to making sure the build is good enough for exploration and maybe our beta users, we also use automation to check some pesky cases and functionalities that are annoying to set up and/or think about. E.g. we have dozens of different coupons and don’t want to waste a tester’s time making sure each of them can still be activated.

    This is, in short, our testing and our use of automation. Hopefully it helps as a framing device when I now try to answer your questions.

    Why do people have more trust in test automation than in “manual” testing?

    I think this is because it seems to make a complex problem (testing) easy. First you write everything down in extensive test cases and scripts, as some schools of testing have been telling you to for years, and then you can even eliminate the last possibility of human error and let the machine perform them. How can this not sound awesome to a manager’s ears? Faster, more often, and less error-prone.

    In this scenario, thinking about possibly broken automation scripts makes everything complex again. I think it is a vital part of good testing to know how to use automation in your projects and to vocalise its chances and limits. You have to be a heartbreaker in some cases: no, you can’t automate everything, and of course you have to maintain your automation scripts. And of course there can be bugs in them, too.

    In my experience this is the hardest part of introducing automation: telling them that all the silver bullet stuff is nonsense, yet automation is still useful and should be done.

    Automated checks with a “good quality” need well skilled people.

    This is not a question, this is a statement. And I agree. People working in automation should know enough about testing to create good checks and to know how to integrate automation most beneficially into the testing. And they should be good enough at coding that the automation itself does not become a mess.
    You ask how to take care of checks, not written by the developer, in a year’s time? I think this is the wrong approach. You should go over your checks regularly and ask yourself a few things: What is it checking? Why is it checking this here? Can I check this more easily?
    Furthermore, it really helps to apply the same rules you apply to production code, e.g. do code reviews and have static code analysis in place. I, for one, want my checks to have zero findings from IntelliJ inspections: either I fix the finding, or I can live with it and accept the respective code.
    Since code reviews are also a great tool for learning, this would be a good way to educate yourself and your testers to write better check code. Or even pair coding?
    If you work in an environment that lets the code of your actual application rot, you might face the same issues with your automation code as well. In this case I think it is wise to seek help from the actual developers. Work together towards good code.

    How do I know that my script checks the right thing?

    You don’t. You set your check up to check one very specific thing, and that it does. You cannot have it check everything that you can or would cover during exploration. Set up a strategy that gives you a guideline for which things you want to check and which things you want to leave open for exploration.
    After this you know what your checks cover, and you can decide from build to build whether this is enough coverage for that area or whether you want to perform some additional charters.
    So checking is more about knowing what is checked than about being able to cover everything in checks.
    This is why I always get a little scared when people have a ton of checks in place. Usually no one knows what they check anymore, and the reports don’t give it away either. Oh, by the way: have good reports that tell you what is checked and what is not.

    How to trust a green test?

    It is basically the same as with the software itself: gather information about them. Here are some things that help me trust my checks: know what is checked, do code reviews on the checks, regularly go over your checks, and make them easily accessible with good reports and screenshots.
    These are all things which help you decide what a check does and covers. The rest is for you to decide: Is something important missing? If so, should I write checks to cover it, or is it better to cover the rest manually?
    If the check misses important things that you yourself do not know about either, then the problem is not in the check. But you know that. 😉

    Automating the API/service level?

    Your questions here deal with the topic of logic in the UI client and how you can test that on the API level. I would argue that you don’t. You should test on the API level when this can actually be done without introducing new complexity. It makes no sense to rebuild the UI logic in your checks just to be able to check on the API level. What I usually try to do: ask myself if I really need to check this; if so, go through the UI, and find out WHY there is logic in the UI. Is it possible to move the logic further down the chain?
    You raise a second interesting thought: I have seen quite often that people rebuild a lot of logic in their checking framework that has already been built in the application. This is a maintenance nightmare, and you may repeat bugs from the application in your checks. Don’t do that. Choose good oracles to get the expected results for your checks, and if possible make it easy to adjust the expected results. But don’t rebuild logic.
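    A minimal sketch of that fixed-oracle idea, with invented names: the expected results live in a small, reviewable table of values agreed with the business, instead of logic re-built in the checking framework.

    ```python
    # Hypothetical fixed oracle: expected results are listed, not computed,
    # so the check cannot repeat a bug from re-built application logic.
    EXPECTED_DISCOUNTS = {
        # cart_total -> amount_due, values agreed with the business
        50.0: 50.0,
        100.0: 90.0,
        120.0: 108.0,
    }

    def test_checkout_against_fixed_oracle(api):
        # `api` stands for any HTTP client fixture; the endpoint is invented.
        for cart_total, amount_due in EXPECTED_DISCOUNTS.items():
            response = api.post("/checkout", json={"cart_total": cart_total})
            assert response.json()["amount_due"] == amount_due
    ```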

    What am I missing?

    I deliberately put this last. The only big thing that jumped to my mind is what Ben has already brought up: start small and learn. A lot of your questions will then be answered over time. To get management off your back for a while, start with something that really brings value.
    I am not a fan of “quick wins” and “low-hanging fruit” myself, but they have the nice effect of giving you credit and time for more. So maybe you can start small, but well, in an area that quickly reduces some visible pain for management, and then work from there?
    Once you have gained credibility, you can use it to create an automation strategy and a story for selling it.

  9. Hi Patrick!
    Nice post, and I agree with you on many points, but I think you do worry too much about it! I think TDD, BDD, and automated regression testing are great, but only if they are done properly and are not a substitute for real testing – manual, context-driven, or exploratory.
    What are you afraid of? What is the cause of your worries? It is only a test automation strategy – you can do it!

    Cheers –
    Kristine

  10. Nice post! Your thoughts about false positives and false negatives triggered me to think about the automation process in my current workplace.

    You really have to pay attention to what you assert on when automating a check. To illustrate this, let’s use a very basic example: you are automating the login process of a webpage. After filling in the username and password and clicking the login button, you have to tell your automated script what assertions to make to decide whether the test has passed successfully or not. You can do this in multiple ways. First, you can simply check if the login fields and button have disappeared. You can imagine this can easily lead to false positives: the button and fields may have disappeared, but the user could be left with an empty page. Does that sound like a successful login? Another way is checking whether the URL has changed, whether an error message popped up, or whether certain components are visible, like the menu, a welcome message, or the name of the logged-in user. Be careful with asserting on exact locations of items, specific message texts, and other details that may differ across browsers, screen sizes, language settings, and so on, since those may lead to false negatives in return.
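    A hedged, Selenium-style sketch of that contrast; the element IDs, URL, and credentials are invented for the example.

    ```python
    def test_login(driver):
        # `driver` stands for any Selenium WebDriver instance.
        driver.get("https://example.test/login")
        driver.find_element("id", "username").send_keys("alice")
        driver.find_element("id", "password").send_keys("secret")
        driver.find_element("id", "login-button").click()

        # Weak: prone to false positives – an empty page also has no
        # login button.
        assert not driver.find_elements("id", "login-button")

        # Stronger: assert on positive signals of a successful login ...
        assert "/dashboard" in driver.current_url
        assert "alice" in driver.find_element("id", "welcome-message").text
        # ... while avoiding brittle details (pixel positions, exact copy)
        # that differ across browsers and languages: false negatives.
    ```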

    With regard to false negatives: evaluating them takes up valuable time and effort, which drastically reduces the benefit of your automated test script. In my opinion it is equally important to avoid those as well.

    It’s all about the balance between asserting on too little detail and too much :). It helps to have a clear view of when/where the automation is used and to have some kind of guideline on which types of checks are automated.

  11. I like to think of automation more as a tool to help QA, and less as an end product in itself. This is not an easy sell to managers who are envisioning a “security blanket” that will magically catch errors whenever they happen. When automation breaks, QA has an opportunity to look very closely at areas of the program they may otherwise be tempted to take for granted, in an effort to either fix the test or flag a legitimate issue. If the automation suite is too large and/or complex, however, it can become a liability: it fails all the time, people expect it to fail, and dealing with it gets put off as a future task that never quite comes to fruition. A smaller, leaner, and broader suite of tests (checks) can be really useful, as failures can (and more likely will) be examined in real time.

    A smaller scope and a robust framework (3rd party or home-grown) are, to my thinking, the keys to automation that helps the QA effort rather than confusing it. Bosses want to “Automate every test case!” without realizing what a maintenance nightmare it’s going to be. Automation is one tool of many to help the QA team find bugs.
