TL;DR: Implementing test automation (from both a test case and a framework perspective) is a great opportunity to explore several layers of the SUT. If you are just there to automate a test case, you miss so many chances to improve the system.
This tweet from Maaike Brinkhof initially inspired me to write this post:
It addresses a topic that has crossed my desk quite a few times lately: preparing a talk for our developer CoP at QualityMinds, a colleague asking me for advice on how to structure their test approach, or one of my teams at my last full-time client engagement, where the planning session for every story produced the same three QA tasks: 1. Test Design, 2. Test Automation, 3. Manual Execution + ET.
As you might have already read, my understanding of “testing” is probably a bit different from others’. What I want to describe today is the story of what my actual testing looks like when I am in a test automation context. Over the past cough*cough years I have learned to enjoy the power that test automation provides for my exploratory testing on basically all layers of the application.
Disclaimer: I haven’t worked in a context where I had to automate predefined test scenarios in some given tool for quite some time now. For the last 8-9 years, I have worked embedded (more or less) in development teams, mostly also responsible for maintaining/creating/extending the underlying custom test framework.
Test automation gives me the opportunity to investigate more than just the functionality, because I have to deal with the application on a rather technical level anyway. This way I can make suggestions about the system architecture, look at the implementation, do a code review, clean up code (you probably know those colleagues who always forget to put a “final” on their variables, causing lots of useless warnings in SonarQube), and understand better how the software works. Black-box testing is okay, but I like my boxes light grey-ish. This can save me a lot of time, simply by looking at if-statements and understanding what the different paths through the functions look like.
Unit and integration tests were mostly part of the devs’ work, but when I preferred that level, I also added more tests there. Most of the time, though, I implement tests on the service and UI level.
I start with an end-to-end spike for the basic test case flow of the new functionality. This helps me build the foundation of my mental model of the SUT. I also see the first potential shortcomings of the test framework: things like missing endpoints, request builders, elements in page objects or similar UI test designs, and so on. The first testability issues might appear here: endpoints are missing, access to some aspects is not given, or whatever else you can come up with in the current context that would make testing your application easier. So either go to the devs to let them improve their code, or do it yourself and let them know what you did. (If you do it yourself, the second part is important! That belongs to communication and learning!)
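To make that concrete, here is a minimal sketch of what such a spike could look like on the service level. I am assuming a JUnit 5 + REST Assured setup; the base URI, endpoint and payload are invented for illustration, not taken from any real project.

```java
// a minimal spike sketch, assuming JUnit 5 + REST Assured; endpoint and payload are made up
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.notNullValue;

import org.junit.jupiter.api.Test;

class CreateOrderSpikeTest {

    @Test
    void createOrderEndToEndSpike() {
        given()
            .baseUri("https://localhost:8443")   // placeholder for the SUT
            .contentType("application/json")
            .body("""
                  {"customerId": "c-123", "items": [{"sku": "A-1", "quantity": 2}]}
                  """)
        .when()
            .post("/api/orders")                 // hypothetical endpoint
        .then()
            .statusCode(201)
            .body("orderId", notNullValue());    // the response should carry the new ID
    }
}
```

Even a throwaway test like this already tells me whether the framework has the request builders and endpoints I need, or whether I have to extend it first.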
This is also the moment when I most often look at implementation details: are they consistent, logical, complete and usable? That goes for endpoint names, DTOs for request and response, REST methods, UI components and all that. <rant>Especially those dreadful UI components: when the same application gets a third or fourth different table structure, it becomes impossible to keep a consistent, generalized implementation in your UI test infrastructure, because you need to take care of every freaking special solution individually.</rant>
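For illustration, a generalized table component in the UI test infrastructure could look like the sketch below (Selenium-based; the markup assumptions are hypothetical). It only stays this simple as long as every table in the application sticks to the same structure; the fourth special table widget is exactly what breaks it.

```java
// hypothetical generic table component for the UI test infrastructure (Selenium)
import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class DataTable {

    private final WebDriver driver;
    private final By tableLocator;

    public DataTable(WebDriver driver, By tableLocator) {
        this.driver = driver;
        this.tableLocator = tableLocator;
    }

    /** Text of all cells in one row; works only as long as every table uses plain tbody/tr/td markup. */
    public List<String> rowTexts(int rowIndex) {
        WebElement table = driver.findElement(tableLocator);
        WebElement row = table.findElements(By.cssSelector("tbody tr")).get(rowIndex);
        return row.findElements(By.tagName("td")).stream()
                  .map(WebElement::getText)
                  .toList();
    }
}
```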
Once the first spike is implemented, we have a great foundation for a standard test case. Now we can come up with the most common scenario and implement it: the happy path, if you want to call it that. I try to put in all the necessary checks for the SUT, that all fields are mapped and so on.
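A hedged example of such a happy-path check, again assuming JUnit 5 + REST Assured with invented endpoints and fields: create the entity, read it back, and verify that every field we sent is actually mapped.

```java
// happy-path sketch; endpoint and field names are invented for illustration
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.jupiter.api.Test;

class CreateOrderHappyPathTest {

    @Test
    void everyFieldOfTheRequestEndsUpInTheReadModel() {
        String orderId =
            given()
                .contentType("application/json")
                .body("""
                      {"customerId": "c-123", "comment": "leave at the door"}
                      """)
            .when()
                .post("/api/orders")
            .then()
                .statusCode(201)
            .extract()
                .path("orderId");

        given()
        .when()
            .get("/api/orders/{id}", orderId)
        .then()
            .statusCode(200)
            .body("customerId", equalTo("c-123"))           // check the mapping of every field,
            .body("comment", equalTo("leave at the door")); // not only that "something" came back
    }
}
```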
At that point I also come up with the first special scenarios I want to check, if they are not already mentioned somewhere in the story description or test tasks. So I continue building up my test case and try some variations here and there, and compare the results with what I expect, what the requirements state (not necessarily the same thing), and how the data passes through the code.
I tend to run my tests in debug mode so that I can see what the different variables hold: additional request parameters, additional response fields, unnecessary duplication of information. That often gives me more insights into the architecture and design. Why are we handing out internal IDs on that interface? Is that information really necessary in this business case? Why does the DTO hold information X twice in different elements? Can we perhaps remove one of them?
I also like to debug the application while running my test case. Do we pass through all layers? Why is that method bypassing the permission check layer? Do we need special permissions? Ah, time to check whether the permission concept makes sense! This step converts from one DTO to another? Couldn’t we just take the other DTO in the first place? Persistence layer, hooray! Let’s check for null checks and the corresponding not-null columns in the database. Did the developers forget something? I might not be able to pass null for that parameter via the UI, but probably through the API directly?
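That last question is easy to turn into an API-level check. A sketch with a hypothetical endpoint, assuming the UI always fills customerId while the API can be called without it:

```java
// hypothetical endpoint; the UI always sets customerId, the API can be called without it
import static io.restassured.RestAssured.given;

import org.junit.jupiter.api.Test;

class NullHandlingApiTest {

    @Test
    void missingCustomerIdIsRejectedWithA400NotA500() {
        given()
            .contentType("application/json")
            .body("""
                  {"customerId": null, "comment": "forced through the API"}
                  """)
        .when()
            .post("/api/orders")
        .then()
            .statusCode(400);   // a 500 here usually points to a missing null check further down
    }
}
```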
I found more scenarios, all similar but not the same. Can we simply add a parameter table to the test and run one implementation multiple times? What would the difference be? Do I really need to add test cases for all of them? What would the value be?
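With JUnit 5, such a parameter table is just a @ParameterizedTest with a @CsvSource. The endpoint and the delivery types below are invented for illustration; one test implementation then covers all the variants that are genuinely worth keeping.

```java
// one test implementation, many variants; endpoint and delivery types are invented
import static io.restassured.RestAssured.given;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class DeliveryTypeVariantsTest {

    @ParameterizedTest
    @CsvSource({
        "STANDARD, 201",
        "EXPRESS,  201",
        "PICKUP,   201",
        "UNKNOWN,  400",
    })
    void orderCreationHandlesAllDeliveryTypes(String deliveryType, int expectedStatus) {
        given()
            .contentType("application/json")
            .body("""
                  {"customerId": "c-123", "deliveryType": "%s"}
                  """.formatted(deliveryType))
        .when()
            .post("/api/orders")
        .then()
            .statusCode(expectedStatus);
    }
}
```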
<rant>Recently I had an assignment to analyze some implementations for a customer. And there was this one(!!!) method that took care of 26 different variants of an entity. There wasn’t even a unit or integration test for it. They left it for QA to check in the UI end-to-end tests or manually! 26 scenarios! That is the point where I as a tester go to the devs and ask if we could re-design the whole thing. Is that outside my competency? I don’t think so. I uncovered a risk in the code base, and I want to mitigate that risk. And mitigating it by writing 26 different test scenarios is not the way to go! So stand up and kick some dev’s butt to clean up that mess!</rant>
I send in request A and get back response B. Can I evaluate from response B that the action C I wanted to trigger actually happened, or did the endpoint just enrich response B with the information from request A without waiting for action C to actually do something? Trust me, I have seen this more than once! I have also seen test cases where the author checked the request for the values they had set in the request, because they mixed it up with the response.
Back to action C! How can I properly evaluate the outcome of action C? In past years I had several projects where you could always find proxy properties of the result to check. This is a bit like the sensors in a particle accelerator: you don’t see or measure the actual particle you wanted to detect, but the expected result of its interaction with other particles. This happens often in software testing, too, when it’s not possible to observe all entities from the level of your test. Request A triggers action C, but you don’t trust response B. You rather check for result D, which action C will have caused if everything worked properly. This actually requires a lot of systems thinking and understanding.
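Translated into code, that could look like the following sketch (hypothetical export endpoint, assuming JUnit 5 + REST Assured + Awaitility): request A returns an “accepted” response B, but the test only trusts result D, the job eventually showing up as done.

```java
// request A triggers action C; the test does not trust response B, it waits for result D
import static io.restassured.RestAssured.given;
import static org.awaitility.Awaitility.await;
import static org.hamcrest.Matchers.equalTo;

import java.time.Duration;

import org.junit.jupiter.api.Test;

class ExportProxyVerificationTest {

    @Test
    void triggeredExportActuallyFinishes() {
        // request A: trigger action C via a hypothetical export endpoint
        String jobId =
            given()
                .contentType("application/json")
                .body("""
                      {"reportType": "MONTHLY"}
                      """)
            .when()
                .post("/api/exports")
            .then()
                .statusCode(202)      // response B only says "accepted", which proves nothing yet
            .extract()
                .path("jobId");

        // result D: the proxy property we actually trust, the job eventually reports DONE
        await().atMost(Duration.ofSeconds(30)).untilAsserted(() ->
            given()
            .when()
                .get("/api/exports/{id}", jobId)
            .then()
                .statusCode(200)
                .body("status", equalTo("DONE"))
        );
    }
}
```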
Then comes the part where I “just” want to try things out. Some call that exploratory testing, some call it ad-hoc testing; I simply call it testing, as it’s just another important part of the whole thing, where I try to gain trust in the implementation. Anyway, I take some test scenario and play around with it: adjust input variables, add things, leave out things, change users, or whatever comes to mind. You probably know these as the “Ha, I wonder what happens when…” moments in your testing. I might even end up with some worth-keeping scenarios, but not necessarily.
Earlier this year I was also in a context where I was adding test automation on the unit and service layers for a customer-facing API. So in the service-layer tests I was basically doing the same thing that customers would do when integrating this API. I was the first customer! And thanks to some basic domain knowledge I could explore the APIs and analyze what was missing, what was too much, what I didn’t care about, etc. I uncovered lots of issues with consistency, internal IDs, unnecessary information, mapping issues, and more, because I was investigating from a customer perspective and not just blindly accepting the pre-defined format! This was exploratory testing at its best from my perspective in that context!
When I implement new automated test cases, I also always test for the usability and readability of the tests themselves. So when implementing a scenario, if for example the test set-up is too complicated to understand or even create, I tend to find simpler ways: implementing helpers and generalizing implementations to improve those aspects of the test framework for the new, extended context it has to work in.
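A typical example of such a helper is a small test data builder, so that every test only states what is special about its scenario. Everything below (names, fields, defaults) is invented for illustration.

```java
// hypothetical test data builder: tests only state what is special about their scenario
public class OrderRequestBuilder {

    private String customerId = "c-123";       // sensible happy-path defaults
    private String deliveryType = "STANDARD";
    private String comment = "";

    public static OrderRequestBuilder anOrder() {
        return new OrderRequestBuilder();
    }

    public OrderRequestBuilder forCustomer(String customerId) {
        this.customerId = customerId;
        return this;
    }

    public OrderRequestBuilder withDeliveryType(String deliveryType) {
        this.deliveryType = deliveryType;
        return this;
    }

    public OrderRequestBuilder withComment(String comment) {
        this.comment = comment;
        return this;
    }

    public String asJson() {
        return """
               {"customerId": "%s", "deliveryType": "%s", "comment": "%s"}
               """.formatted(customerId, deliveryType, comment);
    }
}
```

A test then reads anOrder().forCustomer("c-987").asJson() instead of a copy-pasted JSON blob, and when the payload changes, there is exactly one place to adjust.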
As one of the last steps of implementing automation, I go through the test cases, throw out anything that is not necessary, and clean up the test documentation parts. I don’t want to do and check more than necessary for the functionality under test, and I want others to understand it as well as possible. Which, I have to say, is often not that easy to achieve, because I tend to implement rather tricky and complex scenarios to cover many aspects. As I mentioned before, I’m a systems thinker, and systems tend to become complicated or even complex rather quickly, and I reflect that in my test cases! Poor colleagues!
Some might call this a process. Well, if you go down to the terminology level, basically everything we do is a process, but it doesn’t necessarily follow a common process model. When we refer to “a process” in everyday language, like Maaike did in the tweet I mentioned at the top, we usually mean “a process model”. And this is why I totally agree with her. Some people who ride around on the term “process” simply don’t understand the point she wanted to make!
Context and exploration are such relevant driving forces for me that it’s impossible for me to describe a common process model of what I do. Common sense (I know, not that common anymore) and my experience help me most in my daily work, not some theoretical test design approaches. Test automation, automation in testing, exploratory testing and systems thinking in general all go hand in hand for me in my daily work. I don’t want to split them, and I don’t want three different sub-tasks on the JIRA board to track them!
I’m just not one of those types who read a requirement, analyze it, design a bunch of test cases for it, implement and/or execute them, and happily report on the outcome. Of course I come up with test ideas when I read the requirement. And if I see tricky aspects, I mention them in the refinement or planning so that they can already be covered by the devs or rejected as unnecessary. When I actually get my hands on the piece of code, I want to see it, feel it, explore it. And when I decide that the solution the devs came up with is okay and actually what I expected it to be, then I’m fine with stopping further testing right there.
When I started writing this article some weeks ago, I was made aware that Maaret Pyhäjärvi has also written about the intertwining of automation and exploration. You can find, for example, this excellent article for Quality Matters (1/2020). And there’s probably more on Maaret’s list of publications. Other people have probably also written great posts about this topic. If you know any, please let me know in the comments.
But I wanted to write this post anyway, to help myself understand my “non-process” better, and because some people on LinkedIn and Twitter asked for it. And perhaps it adds something for someone.