The “But” Heuristic

When you are in a dialog, especially in written form (e-mail, Twitter, a messenger service), and your reply starts with "But": delete the sentence and rephrase it.

"But" is often a dialog killer, virtually taking back what you just wrote, and often a hint that mansplaining is happening.

“I don’t want to tell you what to do, but…” – I tell you now what to do.

“This is a good idea, but…” – It actually is a bad idea.

“I really like you, but…” – Get lost!

“Well, actually…” – The new “but” of too many men.

"Have you thought of…" – a tough balance here: it can spark new ideas, or it can simply mean "I know you are stupid, so I'll help you think".

When you are about to say, write, or think "But": STOP! Delete the sentence! If you have already said it, apologize and start to rephrase.
Think about what you really want to say.
Think about the person you are communicating with.
Think about the context.

Do you really want to kill the conversation, make a decision for someone else or tell the other person that they most probably didn’t think about this very important aspect that only you know? In most cases, I don’t think so.

Use “Yes, and…” instead. Try to be more supportive in your statements, don’t make the other person feel that you think they are stupid – even if you probably do! That is a sign of respect!

Communication is hard!

So, the next time you write “but”, think of me sitting on your shoulder, looking batshit grumpy at you. Stop typing, say sorry, delete the sentence, THINK, rephrase. Respect! That is what counts in most if not any conversation.


Exploratory Test Automation

TL;DR: Implementing test automation (from a test case and framework perspective) is a great opportunity to explore several layers of the SUT. If you are just there to automate a test case, you miss many chances to improve the system.

This tweet from Maaike Brinkhof initially inspired me to write this post:

It addresses a topic that has crossed my desk quite a few times lately: when preparing a talk for our developer CoP at QualityMinds, when a colleague asked me for advice on how to structure their test approach, and in one of my teams at my last full-time client engagement, where the planning session for every story produced the same three QA tasks: 1. Test Design, 2. Test Automation, 3. Manual Execution + ET.

As you might have read before, my understanding of "testing" is probably a bit different from others'. What I want to describe today is a bit of a story of what my actual testing looks like when I am in a test automation context. Over the past cough*cough years I have learned to enjoy the power that test automation provides for my exploratory testing on basically all layers of the application.

Disclaimer: I haven't worked in a context where I had to automate predefined test scenarios in some given tool for quite some time now. For the last 8-9 years I have worked embedded (more or less) in development teams, mostly also responsible for maintaining, creating, and extending the underlying custom test framework.

Test automation gives me the opportunity to investigate more than just the functionality, because I have to deal with the application on a rather technical level anyway. This way I can make suggestions about the system architecture, look at the implementation, do code reviews, clean up code (you probably know those colleagues who always forget to put a "final" on their variables, causing lots of useless warnings in SonarQube), and better understand how the software works. Black-box testing is okay, but I like my boxes light grey-ish. It can save me a lot of time to just look at the if-statements and understand what the different paths through the functions look like.

Unit and integration tests were mostly part of the devs' work, but when I preferred that level, I added tests there as well. Most of the time, though, I implement tests at the service and UI level.

I start with an end-to-end spike for the basic test case flow of the new functionality. This helps me create the foundation of my mental model of the SUT, and I also see the first potential shortcomings of the test framework: missing endpoints, request builders, elements in page objects or similar UI test abstractions, and so on. The first testability issues might appear here, if the corresponding endpoints are missing, access to certain aspects is not given, or whatever else you can come up with in the current context that would make testing your application easier. So either go to the devs and let them improve their code, or do it yourself and let them know what you did. (If you do it yourself, the second part is important! That belongs to communication and learning!)
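One of the framework gaps such a spike typically reveals is a missing request builder. Here is a minimal sketch of what I mean by that; all names (CreateCustomerRequest, its fields, the defaults) are purely hypothetical, and a real builder would serialize to the SUT's actual DTO:

```java
// Hypothetical fluent request builder, the kind of framework piece
// an end-to-end spike often reveals as missing.
class CreateCustomerRequest {
    final String name;
    final String email;

    private CreateCustomerRequest(String name, String email) {
        this.name = name;
        this.email = email;
    }

    static Builder builder() {
        return new Builder();
    }

    static class Builder {
        // sensible defaults keep test code short; override only what matters
        private String name = "Default Name";
        private String email = "default@example.com";

        Builder name(String name) { this.name = name; return this; }
        Builder email(String email) { this.email = email; return this; }

        CreateCustomerRequest build() {
            return new CreateCustomerRequest(name, email);
        }
    }

    public static void main(String[] args) {
        // a test only spells out the field that matters for the scenario
        CreateCustomerRequest request = CreateCustomerRequest.builder()
                .name("Ada")
                .build();
        System.out.println(request.name + " / " + request.email);
    }
}
```

With defaults in place, a test only spells out the field that is relevant for the scenario, which keeps the intent of the test visible.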

This is also the moment when I most often look at implementation details: are they consistent, logical, complete, and usable? That goes for endpoint names, DTOs for request and response, REST methods, UI components, and all that. <rant>Especially these dreadful UI components, when for the same application the third or fourth different table structure gets implemented, which makes it impossible to keep a consistent, generalized implementation in your UI test infrastructure, because you need to take care of every freaking special solution individually.</rant>

Once the first spike is implemented, we have a great foundation for a standard test case. Now we can come up with the most common scenario and implement it: the happy path, if you want to call it that. I try to put in all necessary checks of the SUT, that all fields are mapped, and so on.
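A sketch of what "check that all fields are mapped" can look like: instead of spot-checking one or two fields, loop over everything the request contained. The in-memory create() method below is just a stand-in for the real SUT, and all names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal happy-path sketch: verify the whole field mapping, not samples.
class HappyPathSketch {
    // pretend "create" endpoint: persists the entity and returns it;
    // a mapping bug in a real SUT would drop or alter entries here
    static Map<String, String> create(Map<String, String> request) {
        return new HashMap<>(request);
    }

    static void assertAllFieldsMapped(Map<String, String> expected,
                                      Map<String, String> actual) {
        for (Map.Entry<String, String> field : expected.entrySet()) {
            if (!field.getValue().equals(actual.get(field.getKey()))) {
                throw new AssertionError("field not mapped: " + field.getKey());
            }
        }
    }

    public static void main(String[] args) {
        Map<String, String> request = Map.of("name", "Ada", "email", "ada@example.com");
        assertAllFieldsMapped(request, create(request));
        System.out.println("all fields mapped");
    }
}
```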

At that point I also come up with the first special scenarios I want to check, if they are not already mentioned somewhere in the story description or test tasks. So I continue building up my test cases and try some variations here and there, comparing the results with what I expect, what the requirements state (not necessarily the same thing), and how the data passes through the code.

I tend to run my tests in debug mode, so that I can see what the different variables hold: additional request parameters, additional response fields, unnecessary duplication of information. That often gives me more insights into the architecture and design. Why are we handing out internal IDs on that interface? Is that information really necessary in this business case? Why does the DTO hold information X twice in different elements? Could we perhaps remove one of them?

I also like to debug the application while running my test case. Do we pass through all layers? Why is that method bypassing the permission check layer? Do we need special permissions? Ah, time to check if the permission concept makes sense! This step converts from one DTO to another? Couldn't we just take the other DTO in the first place? Persistence layer, hooray! Let's check for null checks and the corresponding NOT NULL columns in the database. Did the developers forget something? I might not be able to pass null for that parameter via the UI, but perhaps I can through the API directly?

I found more scenarios, all similar but not the same. Can we simply add a parameter table to the test and run one implementation multiple times? What would the differences be? Do I really need to add test cases for all of them? What would the value be?
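A parameter table can be as simple as a list of rows driven through one test implementation. The discount rule below is an invented stand-in for the SUT, just to show the shape:

```java
import java.util.List;

// Data-driven sketch: one test implementation, many parameter rows.
class DiscountTest {
    // stand-in for the function under test (hypothetical business rule)
    static int discountPercent(int orderValue) {
        if (orderValue >= 500) return 10;
        if (orderValue >= 100) return 5;
        return 0;
    }

    record Row(int orderValue, int expectedDiscount) {}

    public static void main(String[] args) {
        // the parameter table: each row is one scenario variation,
        // including the boundary values on both sides
        List<Row> table = List.of(
                new Row(50, 0),
                new Row(100, 5),
                new Row(499, 5),
                new Row(500, 10));

        for (Row row : table) {
            int actual = discountPercent(row.orderValue());
            if (actual != row.expectedDiscount()) {
                throw new AssertionError("orderValue=" + row.orderValue()
                        + " expected " + row.expectedDiscount() + " got " + actual);
            }
        }
        System.out.println("all " + table.size() + " rows passed");
    }
}
```

Adding a variation is then a one-line change, which also makes the question "what would the value of another row be?" cheap to answer and revisit.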

<rant>Recently I had an assignment to analyze some implementations for a customer. And there was this one(!!!) method that took care of 26 different variants of an entity. There wasn't even a unit or integration test for it. They left it for QA to check in the UI end-to-end tests or manually! 26 scenarios! That is the point where I, as a tester, go to the devs and ask if we can re-design the whole thing. Is that outside my competency? I don't think so. I uncovered a risk in the code base, and I want to mitigate that risk. And mitigating it by writing 26 different test scenarios is not the way to go! So stand up and kick some dev's butt to clean up that mess!</rant>

I send in request A and get back response B. Can I evaluate from response B alone that the action C I wanted to trigger actually happened, or did the endpoint just enrich response B with the information from request A, without waiting for action C to do anything? Trust me, I have seen this more than once! I have also seen test cases where the author checked the request for the values they had set in the request, because they mixed it up with the response.

Back to action C! How can I properly evaluate the outcome of action C? In past years I had several projects where you could always find proxy properties of the result to check. This is a bit like the detectors in a particle accelerator: you won't see or measure the actual particle you wanted to detect, but the expected result of an interaction with other particles. This often happens in software testing, too, when it's not possible to observe all entities from the level of your test. Request A triggers action C, but you don't trust response B. You rather check for result D, which action C will have caused if everything worked properly. This actually requires a lot of systems thinking and understanding.
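The pattern can be sketched like this. The service below deliberately contains the bug described above: response B just echoes request A, while action C is silently skipped, so only checking result D (the stock level) exposes it. All names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "don't trust response B, verify result D" pattern.
class OrderServiceSketch {
    private final Map<String, Integer> stock = new HashMap<>(Map.of("apple", 10));

    // request A -> response B, which *claims* action C (reserving stock) happened
    String placeOrder(String item, int quantity) {
        // BUG: stock is never decremented, action C is silently skipped
        return "reserved " + quantity + " x " + item;
    }

    // result D: an independently observable side effect of action C
    int stockOf(String item) {
        return stock.get(item);
    }

    public static void main(String[] args) {
        OrderServiceSketch service = new OrderServiceSketch();
        String responseB = service.placeOrder("apple", 3);

        // checking only response B: looks green, proves nothing
        System.out.println("response B: " + responseB);

        // checking result D exposes the bug: stock should be 7, but is still 10
        System.out.println("stock after order: " + service.stockOf("apple"));
    }
}
```

A test asserting only on response B stays green here; a test asserting that the stock dropped to 7 fails and uncovers the missing action C. That is the whole point of result D.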

Then comes the part where I "just" want to try things out. Some call it exploratory testing, some call it ad-hoc testing; I just call it testing, as it's another important part of the whole where I try to gain trust in the implementation. Anyway, I take some test scenario and play around with it: adjust input variables, add things, leave things out, change users, whatever comes to mind. You probably know these "Ha, I wonder what happens when…" moments in your testing. I might even end up with some worth-keeping scenarios, but not necessarily.

Earlier this year I was also in a context where I added test automation on the unit and service layers for a customer-facing API. So in the service layer tests I was basically doing the same thing that customers would do when integrating the API. I was the first customer! And thanks to some basic domain knowledge, I could explore the APIs and analyze what was missing, what was too much, what I didn't care about, and so on. I uncovered lots of issues with consistency, internal IDs, unnecessary information, mapping issues, and more, because I was investigating from a customer perspective and not just blindly accepting the pre-defined format! This was exploratory testing at its best, from my perspective in that context!

When I implement new automated test cases, I also always check the usability and readability of the tests themselves. So when implementing a scenario, if for example the test set-up is too complicated to understand or even create, I tend to find simpler ways: implementing helpers and generalizing implementations to improve those aspects of the test framework for the new, extended context it has to work in.

As one of the last steps of implementing automation, I go through the test cases, throw out anything that is not necessary, and clean up the test documentation. I don't want to do or check more than necessary for the functionality under test, and I want others to understand it as well as possible. Which, I have to say, is often not easy to achieve, because I tend to implement rather tricky and complex scenarios to cover many aspects. As I mentioned before, I'm a systems thinker; systems tend to become complicated or even complex rather quickly, and I reflect that in my test cases. Poor colleagues!

Some might call this a process. Well, if you go down to the terminology level, basically everything we do is a process, but it doesn't necessarily follow a common process model. Because when we refer to "a process" in everyday language, like Maaike in the tweet I mentioned at the top, we usually mean "a process model". And this is why I totally agree with her. And some people who ride around on the term "process" simply don't understand the point she wanted to make!
Context and exploration are such relevant driving forces for me that it's impossible to describe a common process model of what I do. Common sense (I know, not that common anymore) and my experience help me most in my daily work, not some theoretical test design approach. Test automation, automation in testing, exploratory testing, and systems thinking in general all go hand in hand for me in my daily work. I don't want to split them, and I don't want three different sub-tasks on the JIRA board to track them!

I'm just not one of those types who read a requirement, analyze it, design a bunch of test cases for it, implement and/or execute them, and happily report on the outcome. Of course I come up with test ideas when I read a requirement. And if I see tricky aspects, I mention them in refinement or planning, so that they can be covered by the devs right away or rejected as unnecessary. When I actually get my hands on the piece of code, I want to see it, feel it, explore it. And when I decide that the solution the dev came up with is okay and actually what I expected it to be, then I'm fine with stopping further testing right there.

When I started writing this article some weeks ago, I was made aware that Maaret Pyhäjärvi has also written about the intertwining of automation and exploration. See, for example, her excellent article in Quality Matters (1/2020), and there is probably more on Maaret's list of publications. Other people have probably also written great posts on this topic; if you know any, please let me know in the comments.
But I wanted to write this post anyway, to help myself understand my "non-process" better, and because some people on LinkedIn and Twitter asked for it. And maybe it adds something for someone.

Context eats Process for Breakfast

That was the title of a workshop that I gave at Let's Test 2016 in Runö, Sweden. In my opinion I messed that workshop up, as I still think I was not able to communicate the intended message to the participants. So I want to give it a new try, over 5 years later, this time as a blog post. Because I still think the message I wanted to teach back then is important and basically necessary for understanding certain aspects of software development.

Breakfast?! Why breakfast? Well, that was the theme of this after-lunch workshop.

The first task was given to one participant only. They had to describe to me: "How to make good coffee?" Now comes the tricky part. I don't drink coffee, I don't like coffee, I often can't even stand the smell of coffee. But I want to be able to offer visitors a good cup of coffee. So the volunteer described several steps to me: get some ground coffee, put it in the machine, add some water, bring it to a boil, let it filter through the coffee, and voilà, coffee! So they told me how to make coffee.
But how do I, as someone who never drank and probably never will drink coffee, know that I just made good coffee? The volunteer told me again to stick to the process and I would get good coffee. Okay, but how do I know that I didn't produce bad coffee?
You might guess what my problem was and what I was trying to address.

The next task was split into two parts. First, the participants had to individually "draw how to make toast". We got some excellent process descriptions and, most of all, some fantastic drawings of how to make toast. Then, as small groups, they had to throw their process steps together to produce more detailed steps and add steps that had been forgotten earlier. This time the task was "to make GOOD toast"! The results were again very detailed and incorporated some excellent improvements. But nobody was able to tell me why THEIR process produces GOOD toast.

You are probably sitting there, reading this post, nervously shaking, because you have seen for two paragraphs now where I wanted to lead my participants. They did great task analysis and gave excellent process descriptions. But in these three rounds of exercises, nobody gave me any details on how to create something GOOD. They all focused on creating something. And I had an excellent audience that afternoon, but they refused to "get it".

A good friend of mine loves coffee, and I like to chat with him about it. ("Know your enemy" is one of my mottos!) So I actually know a lot about the aspects of the coffee-making process that can influence the taste of the product: getting the right beans, grinding them to the right coarseness, the right water temperature and pressure, preheated cups, and so on. There are so many small adjustment screws in this process that influence the taste of the outcome.
And the same goes for making toast. Which bread do you choose, how thick are the slices, what kind of toaster, which temperature, how long, what degree of roasting? Perhaps even: what do you serve with the toast? Which aspects make it a good toast?

Or to refine it: which quality aspects define "good" in those contexts? The answer is relatively easy: it depends! On the context and the consumer!
In what environment are you preparing your coffee or toast? For yourself, for friends? Do you work at a small breakfast place or in a large hotel? Do you know your consumer? Do you have influence over certain decisions? And so on.

Everybody was so busy describing the steps that they totally forgot about the quality criteria that need to feed into the decisions at several steps to make something "good".

The part that had to be dropped for lack of time was describing one process from their own work context. My favorite example would have been the bug reporting process, because most places have one.

What does a bug reporting process describe? Basically a lot of things: the tool, the workflow, permissions, which priority to select, which information to provide, and so on. In the roughly dozen bug reporting processes I have seen in my career, the only aspect that positively influenced the quality of the bug reports themselves were rules about what information to provide in a ticket. You can google that and find some great examples.
But what makes a good bug report? Knowing the process of how to create the ticket and whom to assign it to? Filling out all the necessary information? Well, that makes it a good ticket for bug report management, which is only one aspect of the whole process. But is the individual result really a good bug ticket?

How much pre-analysis has been done? Did you add log files, minimal reproduction steps, test data, and so on? Did you add some bug advocacy, e.g. business reasons why this bug needs to be fixed with the given priority? Who is your audience, who will read your bug report and has to understand it? Of course there are contexts where you already spoke with the dev and agreed to enter a bug ticket just to document the fix. Those tickets will probably hold less information, because they are "good enough" in that context. In other situations you know that the bug won't be fixed now, but only later, and you don't know by whom. How do you decide then what is "good enough"?

That is the message I wanted to communicate 5 years ago, so here it is now in all explicitness.
Your context and your consumer determine the "goodness" of your product. You can write the best step-by-step instructions, but if you are not aware of all the small factors that change the quality of your product, and if you are not aware of who your customer is and what they value, then your process will only ever describe how to get a product, never how to get a good product.

As a big fan of the AB Testing podcast and the Modern Testing approach that evolved from the discussions between Alan Page and Brent Jensen, I'm very happy that Modern Testing principle #5 expresses exactly that message:

#5 We believe that the customer is the only one capable to judge and evaluate the quality of our product.

You can brew a good coffee for yourself, but if your guest doesn't like it, then it's not good coffee for them. And if they like their toast a lot darker than you do, then yours is probably okay toast for them, but not good toast.

You can describe your processes as well as you like, and the result might still suck. Please analyze your processes, understand which quality aspects and decisions come into play at every step and how they influence the outcome of the product, and learn to understand the actual needs and wants of your consumers, so you can help create a product they like and evaluate as "good".

State of Testing Survey 2018 is live

Yes, it's that time of the year again. The new State of Testing survey is online. And here is your chance to support it by submitting your input.

Just go to http://qablog.practitest.com/state-of-testing/ and fill out the survey. It doesn't take long and contributes additional information on the state of testing.

You want to know what the outcome will look like? Well, there is always a lovingly crafted report on the results, interpreting the input, analyzing trends, and providing valuable feedback to the industry. Example: http://qablog.practitest.com/state-of-testing-2017-report/

Here is your chance to participate: http://qablog.practitest.com/state-of-testing/

Thank you for helping,

Patrick

Time to give something back…

It’s been a tough year for me with highs and lows. And to keep it short for a change, I’m coming right to the point.

With the amazing support of a great group of people, Kristine and I were able to launch the first TestBash Germany. And in the end, who would have thought it in the beginning, we even made some profit for the Ministry.

Apart from some TestBash tickets for myself and finally a Pro Dojo account, I gave most of my share to the scholarship program of the Ministry of Testing to do more great stuff and support the future generation of badass testers.

I won't know who will benefit from this donation, and that is fine with me, as I prefer to stay in the background anyway. But I want to make one donation that is in my control: I got myself one extra ticket to give away for the Mother of all TestBashes in Brighton on March 16th, 2018.

Please use the comments on this blog to apply yourself or to suggest somebody else for the conference day ticket. Please give a good reason why you, or the person you suggest, deserves to visit TestBash Brighton 2018.

You can apply or suggest someone for the ticket until January 7th, 2018. I will decide who gets the ticket by January 13th, 2018.

What I want in return? Nothing. It would be great to see a blog post from that person’s TestBash experience, but you don’t have to. All I expect is that the person shows up and has a great day at TestBash Brighton.

Disclaimer: This is only the admission for the conference day of TestBash 2018 on March 16th in Brighton, UK. No travel or accommodation included. I decide who gets the free ticket.

If you don’t want to publicly share your story, why you think you deserve the ticket, you can share it with me at info (at) testpappy (dot) com.

UPDATE: Thanks to the amazing Danny Dainton and Matthew Parker, who threw in an additional £150 to cover a good part of the travel and accommodation costs.

Update: WE HAVE A WINNER

And the winner is… Samantha Flaherty.

A new hope…

Boy, that title sounds melodramatic. But this is a personal story of being thankful, so yes, the title has a right to be melodramatic.

When I joined QualityMinds last year, I made the conscious decision to take a consulting position. I knew it would be part of my job to change projects from time to time, give different kinds of workshops, talk with potential clients, and so on.

Now it's time to change projects for the first time. And I want to use that chance to look back at the last nine months, something I probably won't do for future changes, but my first assignment was special to me in several ways. It is also a way to say "Thank you!" to the project team.

As some of you might know, I left a place last year that would not immediately come to mind if you tried to describe "a great place to work". So you might get a feeling for why this project was so special to me, even if others might say: what's the big deal, that's business as usual for me. Oh, lucky you!

In early September 2016 I joined the team responsible for the content management system behind a large telecommunication company's online platform. I was the first "professional" tester to join the team; before me, interns and team newbies usually did most of the testing. I was the only full-time tester, together with two part-time interns who had already started shifting towards developer positions, in a SCRUM team with 12 developers. Bad ratio? Not at all, not in that team.

"Everybody can test!" was one of the team's mottos, and it still is. Two special moments are attached to that motto. The first one came when we took over a work package that was new for the whole team and that blocked nearly all my capacity in the beginning. I got immediate help from three developers until the backlog was down to near zero. I was astonished and impressed.
After about half a year I worried that the team had become a bit too dependent on me, as whenever the word "testing" was mentioned, all eyes were usually on me. So when I went on vacation for one and a half weeks, I wondered what the SCRUM board would look like after my return. Well, the board was pretty empty, except for the tasks that were currently in development. All stories finished during my absence had been tested. My gut feeling was, thankfully, totally wrong.

When I arrived, the automation suite, with only about a hundred simple test cases, was based on Protractor, despite the fact that the portal was not an AngularJS application. The tests were written around the actual Protractor features, simply to use the provided webdriver. It was a nightmare to use and debug. When I asked if it would be possible to replace "that thing" with something in Java, which would be easier to debug and written in a language the whole team understood, I was simply asked to put all the pros and cons in a story for the next estimation meeting. I was pushing at open doors. Once things were a bit quieter again, the story was up for implementation, and I was the architect behind my second test automation framework, this time even seeing it become reality. And for the first time in years, the UI automation suite status went green.

And after nearly 14 years as a tester, I repeatedly experienced automated tests finding regression bugs. I might be one of the bigger test automation skeptics, not against automation itself, but careful not to trust automation too much. Yet this test framework, the test suite, and the tools I wrote found several issues and prevented some nasty bugs from going to production. That is a new view of test automation for me.

The project itself is of a very technical nature. So for the first time in my career I was not testing business processes; I was testing technical features. And that was a new experience for me, as I not only needed to know as much as possible about the whole system and how it worked together, but more often than not I was looking into Java code, checking what had been done, trying to understand the impact and emerging risks right there in the code base. This also helped me regain some basic Java knowledge.

There was also a rather boring part of my job: checking design changes that came in via merge requests. It was not the most challenging work I have done in my IT life, but it hugely influenced my way of looking at the web portal. Issues that would not even have been worth entering into the bug tracking system at my old company were fixed, usually within a few days or faster. And we are talking mostly about aesthetic changes. That is something people care about here.

The Product Owner does a really good job as well. Never before did I have a product owner who stood for such a clean line in the product. And hell, does she know that system! When I thought I had tested nearly everything a story could impact, she would find two or three more scenarios I had never even heard of. And still I was able to test with such good quality that she trusted my opinion, as she did the whole team's.

Blaming or finger-pointing is non-existent in this team. Even my usual self-blame for bugs found during PO acceptance or in production was always generalized: nobody else working on the ticket found them either, so let's fix it and learn from it.

This team is great and does a really good job. They have team spirit, always help each other, encourage learning, and are open to sharing. They have reached a SCRUM maturity level that impressed me from the day I interviewed for the team. All things I hadn't experienced first-hand before in my rather silo-ed endeavors.

I brought my part to the table to help this team, but I also got a lot back over the past 9 months. I learned a whole heap from this team. I learned things and experienced things that 10 months ago I would have dismissed as tall tales, fiction, special situations, or whatever reason you might find not to believe in something's existence.

Soon it’s time for a new challenge and finding out how I can help there.
Team CMS, thank you!

Who are these ominous “testers”?

There is a huge shallow agreement in my part of the galaxy: the term "tester".

I know quite a lot of people who call themselves "tester", independent of what it says on their business card, contract, or signature: Test Analyst, Test Specialist, Test Lead, Test Manager, Software Engineer in Test, Test Consultant, to name just a few of the classics. Most of them simply call themselves "tester". Yet everybody has a very different understanding of their daily business.

There is a role in many projects defined as "the tester". And this is, from my understanding, how non-testers usually see that role:
A tester is the person responsible for creating test cases, test scripts, test charters, or whatever you call the plan of what you want or need to test. This person is then responsible for the actual performance of testing: doing regression testing and/or writing, adapting, or extending test scripts. And of course reporting (providing information) on what they have done and found, be it counting test artifacts, writing reports, filling dashboards, or filing bug reports in whatever way is appropriate for the project. That's usually about it.

But hold on, reality isn’t that easy. Many of the people I know who call themselves “testers” do way more.

Consultants – Asking questions is something testers could do all day long. Why do testers ask questions? To improve their understanding of the system (product, project, company, etc.), to analyze risks, and sometimes, in forms reminiscent of Socratic questioning, to help others understand things better.

Project Management and Coordination – Testing, still way too often, sits near the end of the software development life cycle. Mostly it's about managing your own work, but often that has influence on others. So testers end up juggling things around to improve the efficiency and effectiveness of a project, to deliver faster or on time.

Developers – Automation in Testing is one of the terms I like most, because it describes this as a whole. There are not only automated regression test suites; testers use or write tools to monitor log files, create or maintain test data, manage deployments, and so on. Many testers might not be more skilled than the average script kiddie, but that's okay; it helps get the job done. Others create test frameworks with gigantic test suites of hundreds or thousands of tests. And some also help with code reviews.

Quality Coaches – Testers often help when there is a lack of understanding of what quality is and which aspects of quality are most important for the product at hand. Too often testers are called "quality assurance" and made responsible for delivering a product "with high quality". But a tester, in their key role, does not change anything in the product. With this add-on task, though, a tester can do a great deal to help build a high-quality product, not only when it's too late.

Process Optimization – I know many testers who don’t like inefficient processes, and in the average software development project there are many processes, and many of them need adaptation and tuning. Often it’s the testers who take over that job. Why? Because a good tester is somehow involved in nearly every process there is in a software development project, that’s why.

System Thinkers – Testers often play a central role: they don’t have split responsibilities, but need to know everything about a product, and they are involved in many aspects of product design, requirements engineering, and running the product on test environments. All that comes with improved knowledge about motivations, actors, flows, and stocks. Used wisely, it helps in many of the other “jobs” a tester takes over.

Product Designers – Testers are often the first users of a product and play an important role influencing the product design to improve usability and user experience.

Data Analysts – Testers want to understand how the product is used in production, what errors happen in production, what typical user flows are, and so on, to improve their own testing efforts or to re-prioritize tests.

Requirements Engineers – Testers help to write and improve requirements by adding testable acceptance criteria and by analyzing risks, dependencies, and other influences. Or they ask questions to sharpen the understanding of requirements.

Operations and Test Environment Coordination – Many testers are responsible for their own test environments, so they have to set up the environment, configure the necessary tools, set up databases, deploy the product itself, and run it. They have to monitor the environment and take care of it. Then there are the test devices, from different browsers to huge collections of mobile devices or whatever you are working with.

Process Owners and Maintainers – Very often testers own the bug tracking process, and many times it doesn’t stop there. I’ve found myself more than once in the position of maintaining all the process templates, and so have others. You don’t have to own a process to maintain its artifacts. But the influence you have when doing that is often tremendous.

Coaching – When new people join the team, I know of several places where the testers are responsible for introducing them to the product, because testers often have the best knowledge about the product and how it’s used.
Of course there are also those modern approaches where testers even coach developers in how to improve their testing.

Communication – Testers should be masters of communication. Testers talk with analysts, requirement engineers, developers, administrators, managers, support people, customers, and so on. And everybody uses a different language. The only one who usually adapts is the tester involved.

Decision Makers – I know that the core role of a tester is to provide information so that others can make informed decisions. But all too often testers are handed the decision whether the product is ready for delivery or not. Be it for blame-shifting reasons or out of trust in the person who has seen the product in detail as it is now – there are many more reasons.
In addition, my point on this matter is that a tester makes lots of decisions every day: how to test the system, which bugs to report, how to prioritize things, and so on. All of them influence the outcome of the project. Too often we are not aware of that.

Psychologists – Testers are everywhere in a project, and they speak with people. So the tester is often the one listening to developers ranting about requirements, managers ranting about customers, and so on. Testers just happen to be there, and when you are good at communication, you are good at listening. So you get that job as well.

Product Owners – I know testers who take on a mixed role of PO and tester, I know POs who are especially good at testing, and I know POs who can’t find anyone else to cover their vacation and simply assign the role to the tester – usually when there is trust involved.

And the list goes on… and this makes it so difficult to compare two testers. In job ads you will mostly find descriptions of the core tasks a tester should do. And managers think that testers are interchangeable. Nobody speaks of the other activities I listed here; some silently expect them, some have no clue that this is what a good tester does. Oh, I could tell you some stories…

When you meet a tester, this person is all of the above to certain degrees. That’s not how testers are seen by others, nor how they are paid. But it’s what many of the people I know who call themselves “tester” do, and it’s what makes them get up in the morning and be the best tester they can be.

Most of the people I know who call themselves “tester” are genuine, enthusiastic, and love what they are doing, from 9-5 and often also from 5-9, no matter what others think. Often they stand in the background, take the blame, fill empty roles, do the work nobody else wants to do, and serve the team, to make the project a better place and to help make the product better than the delivery before. And in their spare time they help to improve the world of testing and share information and experiences.

And good testers always know where their towel is! Happy Towel Day!

PS: If you don’t believe me, look at the variety of topics at many software testing conferences. It’s usually not about writing test cases.

State of Testing Survey 2017

It’s this time of the year again.
PractiTest and TeaTime with Testers are again collecting information about the current state of the testing industry and the trends going on and coming up. They are doing this now for the fourth time.

The survey doesn’t take long, and they are looking for feedback from all levels of the industry. From newbie to senior. And of course from all corners of the Earth.

The result is a well-crafted analysis that compares the current state with past years to see trends in testing evolve (see last year’s report). Last year the survey had over 1,000 participants, and we need more to get an even better picture.

So please, take some minutes of your valuable time and participate in the State of Testing survey 2017. And if you can, please also share the link with your colleagues and friends in the industry.

Take the survey here!

Thanks for your help

Patrick

2016 – What a year…

2016 was a year with many highlights for me. And for those who know me a bit longer, the years before had way more lowlights than highlights, so 2016 really was a change for the better.

It started with the intention to finally look for a new job. And in March I had the first contact with a small test consulting company called QualityMinds in Munich, with a lot more to follow.

At the end of March I spoke for the first time at a conference. And what a conference it was: TestBash Brighton! Speaking in front of nearly 300 people in the Corn Exchange was a fantastic experience.
It was also great that my wife and daughter joined me afterwards and we enjoyed a wonderful week in England.

In May I held my first conference workshop at Let’s Test in Runö, Sweden. It was also the first time I volunteered as a facilitator, which was a good experience as well. And like the year before, it was a place where I had the chance to meet in person some of the folks I had so far only met online.

The day before I went to Sweden I handed in my notice. I decided I had suffered enough.

On the off-testing side of my life my artisan work in 2016 was nearly non-existent as my main focus was on re-building the interior of the attic. The final touches were made end of August, just before starting at my new place. On my birthday in September my daughter started school, which started a whole new chapter for the family as well.

Back to testing. On September 1st I started at my new company, QualityMinds, a place that hired me for being me, as I found out only later. I’m part of a great team of engaged people; to be honest, for the first time in my work life.
After 6 days I also started my first project, in a real SCRUM environment. To this day I’m still shell-shocked that SCRUM can actually work. After lots of suffering trying to introduce slightly more agile ways of working at my last place, I finally saw SCRUM in action. What a wonderful way of working, and finally I understood lots of blogs and tweets from people working in such an environment. It is possible. My experience also tells me that not every place is ready for such a way of working. But I love it.

Thanks to the fabulous Danny Dainton I was back at a TestBash in October, this time in Manchester and as an attendee, enjoying the show for 2.5 days. That’s also where we (Kristine and me, together with our team of Vera, Marcel and Daniel (in absence)) finally laid the foundation for TestBash Germany, which was publicly announced in December!

In November I was at my first client workshop with my boss, which was an interesting experience. And I’m looking forward to more experiences of that kind.

In 2016 so much happened, and it feels so much longer than one year. By now the first 8 months of the year at my last place are nearly forgotten and so much great happened that I can honestly say, 2016 was a really good year for me!

What will 2017 bring?

Well, my crystal ball is still in the repair shop, so I assume that 2017 will be busy. My project contract has been extended, which means I stay in this project for at least a few more months.

The year starts with the Dutch double feature of TestBash NL and DEWT #7 at the end of January. In March I’ll be at TestBash Brighton to learn more about the organization of a TestBash hands-on.

My QualityMinds team has set a couple of interesting goals that will also keep us pretty busy besides our project work. And thanks to our wonderful learning coach Vera, I’ll be busy on the QualityLearning side as well, both helping colleagues to improve and learning lots myself.

Most of the rest of the year will be busy with preparing, organizing, and advertising the first ever TestBash Germany in my hometown Munich on October 6th, 2017.

I also want to work on a few topics to submit for conferences later in 2017 or early 2018.

In case I have some time to spare, I have also volunteered to help three friends from the testing community review their books in progress, which I’m very much looking forward to. I wish all three of you the best of success in your writing efforts.

And then let’s see what else might come my way. In the end I am called Agile Tester, so let’s be agile!

TestBash – more than a conference

If you don’t live under a rock, you know it was TestBash time again – for the first time in Manchester, Richard‘s home town. And north-west England was hungry for a conference like this: the event sold out way in advance, and the majority of participants were from the area.

What brought me to Manchester was the fact that the amazing Danny Dainton asked me if I wanted his ticket, as he had found out that by the time of the conference he would be the father of a little daughter and wanted to stay with his family. Danny, I promise you that this will be the last time, but THANK YOU, my friend! And that gesture already describes a big part of TestBash: sharing, caring, helping, and much more – I assume you know what I mean.

TestBash is also a relaxing conference, for speakers and attendees alike. You don’t have to worry about missing something – it’s single track, and always a track of really cool topics and speakers. Speakers don’t have to worry about how many people will show up, and nobody needs to run around searching for the next room. If you meet someone in the breaks, just ask them about the last talk; you can be sure they saw the same one.

That brings us to the next thing: TestBash is about conferring and meeting people. For me it was again a mixture of meeting old friends, finally meeting in person folks I interact with on social media, and meeting new people. I love that mixture. If my introvert side gets stronger, I can stick with old friends; if my extrovert side reigns, I meet new people. Alongside the conference there are pre- and post-TestBash meetups where people can spend more time with each other. Because time is crucial: there are more people to meet than time available, as always.
TestBash attendees come from different backgrounds, specializations, experience levels, and ages. You get a lot of fresh views when talking to people there.

And there are the 99 second talks. The chance for every attendee to go on stage and talk about a topic of their choice for 99 seconds.

If I have to summarize TestBash Manchester in a few words: it was about the importance of learning, listening, and being positive yet still critical, and about ways to provide more value. The videos will come soon to the Ministry of Testing Dojo – they’re worth watching! You’ll find some photos beneath the post.

The open space on Saturday presented a forum to bring up everyone’s topics and discuss them with others to get new insights, answers, questions, directions, or guidance. And we had lots of good topics and a wide mixture of people to interact with. The office of LateRooms.com was a fantastic location high above the city (11th and 12th floor) and provided lots of openness and room to discuss, mingle, and confer. And Rhian from LateRooms.com was a fantastic and caring hostess. Thank you!

TestBash, it was fantastic to be part of you again this year. Rosie and Richard, great job! You are amazing.

Master of Ceremony: Vernon!

James Bach on “Managing Critical and Social Distance”

Iain Bright on the “Psychology of Asking Questions”

Kim Knup “On Positivity: Turning the frown upside down”

Stephen Mounsey on “Listening: an essential skill for software testers”

Duncan Nisbet on “Be More Salmon!”

Helena Jeret-Mäe and Joep Schuurkes on “The 4-hour Tester Experiment”

Mark Winteringham on “The Deadly Sins of Acceptance Scenarios”

Huib Schoots on “A Road to Awesomeness”

High Energy: Gwen Diagram on “Is Test Causing Your Live Problems?”

Beren Van Daele on “Getting The Message Across”

Open Space

The Agenda for Open Space

The add-on agenda