My community – quo vadis?

The basic intention behind this article has been slumbering in my head for a while; I am using the current emotions to finally write it.

I have been following the context-driven testing community for about 4 years now. Still, I am not sure if I am part of the community itself, or, as the question has changed during the last year, whether I even want to be a member of that community.

When I first found the context-driven testing community, I got answers to so many questions I had collected during my first 10 years in the testing industry. I finally felt understood, or better: I found people who favored similar approaches and were even able to express them in an understandable way. I found lots and lots of like-minded people, many of whom I now consider friends, a handful or two even good friends.
I began reading blogs and magazine articles and books recommended by the community in amounts never experienced before in my life. I would assume that I have read more words in the last 4 years than in the 35 years before that. The community is a fantastic source for discussions, sharing information, insights, and wisdom among all who are interested.

Through the CDT community I found my way into testing conferences and got the courage to stand up and speak. And from what I have seen so far of conferences driven by CDT people, compared with a few other testing conferences I visited or followed, there is a difference. The conferences closer to CDT excel with a range of topics where people share thoughts, ideas, and personal stories. The “other” conferences often feature success stories or marketing talks. There are multiple reasons for the existence of both. For me personally, the talks about thoughts and ideas help me more than the stories of what worked for others. And the personal stories tell me more about the people I interact with.

As many might already know, I found my way into the community after a EuroSTAR webinar by Michael Bolton in September 2012. Shortly after that I discovered James Bach. Those two became a main source of information, insights, approaches, and points of view on so many topics, especially in the beginning when I was seeking answers. But then I discovered so many others who were willing to share their knowledge, and the portion of Michael’s and James’ influence on me became smaller and smaller.

I want to introduce a Scrum analogy here. In general we are all part of a big team of equals with no hierarchy. But as in every team of equals, you sometimes, maybe in tough situations, seek guidance from a leader. In my opinion, this is the reason Michael and James stand out among the equals. They are two people who have done so many good things and were willing to share so much of their time to help others. So when members of the community seek guidance, that is of course one way to look.

From afar, some seem to assume that the CDT community is a group of disciples of James and Michael. Maybe even a sort of religion or a cult around them. And the community is often treated as such by “outsiders”. Maybe some members of CDT even see themselves as such; that’s their own thing. And if something new is shared by James and Michael or approved by them, people read the words and spread the news. Challenges rarely happen. And I wonder why, because that is one thing they taught me. Challenge things! That’s one of the key drivers behind good testing. We accept too much as-is every day. The task of a good tester is to challenge things, and that also applies to things told about testing. (And yes, you can challenge the thought that challenging is a key driver of testing! As Michael often did with me, I invite you to do so.)

But then there are challenges from people outside the community and even inside. Which in general is good. And as James and Michael often emphasize, debate brings topics forward. Only, the tone often got extremely rough this year. There were several incidents around James and Michael that I don’t want to list here, as most people will be aware of them, and also around others, where basic rules of interacting with people were ignored. There were also several harsh attacks from outside the community on the community itself, some on James and Michael directly. People were requested to take a side or stand against someone. Anyone arguing in favor was discredited at once. It seemed more like personal warfare.

The other day I woke up to a community that was asking for a code of conduct for conference speakers. Really? Why is that even necessary? Yes, I see, there is James, who sometimes makes such a set of rules necessary. But do we really need a written set of rules for behaving like grown-ups and interacting in a polite and civilized manner? Come on! There should hopefully be better ways to deal with that topic.

I don’t want to defend James’ behavior in any way; I want to distance myself from it. It was unacceptable. Still, I highly value his contribution to the community’s body of knowledge. But if he can’t behave on stage, maybe he should not be invited on stage by organizers. The same should apply to any other person who misbehaves on stage.

I spend a lot of time every day with testing, thinking about testing, reading and sometimes even writing about testing. I don’t want to waste that time on bullying and rudeness; sadly, the internet is already full of it. And until last year I thought that my community was free from such bullshit. I hope that people will stand up for their standards of politeness in interacting with each other. In the end, a community is a group of like-minded people, and in the case of CDT, a group of people who believe in testing and want to bring that profession forward.

And while we are at it, people who call themselves context-driven should be aware of what that actually means, because I often observe people who argue for techniques and approaches generally accepted in the CDT world as the best or even only choice to get the work done. That is context-imperial, and it is not one iota better than consultants or companies who argue for a single technique or approach as the only weapon of choice, the very people who are often chosen as bad examples in CDT discussions. If you are truly context-driven, don’t take it as a label to use approach X in every context. Understand your context, get to know as many approaches as possible, understand your stakeholders, your limitations, and so on (context, eh), and then apply what you think fits best in your context. And if, for example, the context seems to favor test cases, and you hate test cases, you have three choices: leave, find a method that really works better, or live with it, use test cases, and do the best you can to get your work done professionally and help the project.

I, for now, will continue to build my own community: my collection of people I trust and like, my sources of information, recommendations, insights, wisdom, and friendship. I will continue to share my thoughts with anyone who is willing to read them, and I will take my time to discuss with anyone who wants to question my statements. I will continue to reach out into other communities, as many useful things can be found there as well. I want to get shit done, and I want to be part of a testing profession that can help build the future.
And I hope that I find the courage to stand up against bullying.

[Update] I want my community to be a safe place for women. I have learned so many things from the amazing women of the testing community in my career. A lot of my biggest breakthroughs in my thinking were triggered, motivated, and inspired by fantastic women. And I was always happy to be in a community with big female involvement, at least compared to some other technical disciplines. And by the way, the same is true for any other group. I don’t care about gender, sexuality, religion, color of skin, culture, age, or the style of music you listen to. We share a common ground that is, in my eyes, bigger than all possible gaps out there. We share an interest in testing, bringing the profession forward, learning about it, getting better at it. The interest in sharing and gaining knowledge. Testing has such a big human element, and from every influence we can only learn more and more, and get better and better. So I want my community to be a safe place. Challenges, debates, and discussions are welcome. You don’t have to agree on everything. But harassment, embarrassment, blame, and the like have no place here.

Thanks to Lisa and Lanette for reminding me of this important fact that I had forgotten in the initial write-up. [/Update]

As my good friend Damian wrote, “labels can be helpful, and labels can be harmful.” (paraphrased) In general I don’t want to be labeled and put into a box. If there is one label/box I proudly accept, it’s “passionate”.

Thank you for your attention!


One very personal thing I want to add here in this context:

Yes, I admit, I have a favorite person whom I have sort of attacked a few times over the past years: Rex Black. In general I have two arguments with him that come up every now and then. 1) I don’t like the way the ISTQB certification exam works. This is partially driven by my dislike of the ISTQB syllabus, which many find useful, but some parts of which I see as more harmful than useful for the wider testing community. 2) Yes, that’s a CDT topic: I don’t like the term “best practice”. Rex does. So we often argue about a freaking label that for me is the personification of imprecision on management bullshit-bingo lists. I know what he wants to say in favor of it and partially even understand it, but it usually just hits a nerve with me.
After some reflection last year, I stopped attacking Rex, because I found my own behavior was more than wrong. I even tried a few times to interact with him on a normal level, even if he tried stabbing again (it might have been a reflex, and I don’t take offence at it, given our personal history). And it made me very happy when he replied to my first-day-of-school tweet.

Rex, I hope we meet in person one day and can discuss these topics face to face over a beer.


Filed under Uncategorized

TestPappy on “tacit knowledge”

TL;DR: This article only reflects the highlights of a discussion on tacit knowledge; it will not describe what tacit knowledge is. My key takeaway, based on the exchanges in this Twitter conversation, was that there didn’t appear to be a shared understanding of “tacit knowledge” among all participants of the discussion.

The concept of tacit and explicit knowledge is an important one for software testing. Testing is usually a highly brain-engaging discipline. And making things explicit is part of our every day: writing test scripts, writing documentation to share knowledge, reporting on our testing, etc. But what about the tacit part that often stays tacit?

In some discussions happening lately on Twitter there was disagreement on the nature of tacit knowledge. I thought I had a certain understanding of what tacit knowledge is, but a personal discussion with a good friend left me not so sure anymore. There is explicit knowledge: knowledge that has been spoken, written, or somehow made available for others to consume without interacting with the initial knowledge owner. Then there is knowledge that is not explicit. You might call it “tacit” knowledge. So a question that came up was: is tacit knowledge “only” very hard to describe, or “simply” impossible to describe? Or is it “just” something that has not been made explicit yet? That’s what I was trying to learn.

I posted a badly written question with a survey on Twitter the next morning:

And it triggered some interesting discussions. I will only quote a few examples from the threads, out of context, so you should rather check them on Twitter yourself if you are curious.
This will not be my definition of tacit knowledge; I will only summarize some insights and highlights I got from the discussions.

First of all, I quickly realized that my question was badly explained and too ambiguous. I got the impression that several people thought I didn’t understand the principle of tacit knowledge. But that helped to trigger some further interesting thoughts.

Even though Wikipedia partially describes tacit knowledge as impossible to describe, alternating with “just” very hard to describe, most people participating in the survey rather agreed on “very hard to describe”.

The first (and longest) example used was learning how to ride a bicycle. Actually, most of the examples were very technically oriented. The discussion also alternated between “teaching” a robot to ride a bike and teaching a person to ride a bike.

For Stephen Blower it seemed to be very important that learning from explicit knowledge is successful on the first try.
When I remember how I learned to ride a bike, I remember a lot of failed attempts. So is explicit knowledge sometimes held to different standards than tacit knowledge?
But even if it only works on the 10th try, that means it is possible, and maybe only hard to describe, or simply not described yet.

And there was no agreement on whether the bicycle example was really a good example. Which for me showed an important thing: several people, all of whom I highly respect, especially for their deep understanding of many test-related topics, were not able to agree on a “simple” example of tacit knowledge.

Several times I got the tip to read Harry Collins’ book “Tacit and Explicit Knowledge”. But is that just an excuse, because people are not able to explain it in their own words?
For me that also raised the question: is that the only truth about tacit knowledge, or do other explanations exist as well? Maybe ones contradicting Collins?

I was not happy to get answers with a reference to a book.

A new example came onto the scene: cooking, as in following a recipe (which in my opinion presumes a lot of tacit knowledge) or “simply” cooking based on what you know (your tacit knowledge).

Now that is a position that was completely new to me. “An experience long forgotten.”

But Mark Federman also came up with alternative reading material on the topic. The concept of “Ba” is a very interesting take on how knowledge can be managed within organizations and companies. It also describes the spiraling lifecycle in which tacit knowledge becomes explicit, only to become tacit knowledge again for other people. It does not, though, discuss whether tacit knowledge is hard or impossible to make explicit. It simply skips that part and assumes that knowledge is transferable from individuals to groups and organizations.

One aspect that, in my opinion, makes a good example of tacit knowledge is emotions. Mohinder mentioned them briefly, but nobody picked the thread up from there.

Emotions are actually very hard, if not impossible, to describe if you can’t rely on comparison or on the other person having had the very same experience. But is that even possible? Is naming the emotion just shallow agreement? Do both really feel the same?
Are emotions an example of something that is impossible to describe in a sufficient way? Or do we tacitly agree on a more or less shallow understanding to keep life from getting too complicated?

In the learning-to-ride-a-bike thread, John Stevenson tried to shift the focus away from the actual riding of a bike to riding a bike in a cultural context.

One aspect that was important for me: some things may be possible to make explicit, but it is not worth the effort.

My summary: Tacit knowledge seems to be of a tacit nature itself. Very hard to describe, though some, e.g. Collins, have done it, at least to a degree that satisfied many readers. What I can observe on Twitter and in blog posts is that people boldly use the term tacit knowledge as if it had a well-known meaning. But my impression is rather that it’s shallow agreement. Most people I know are aware of the concept of tacit and explicit knowledge; some read Collins, some read summaries, some read other interpretations, some learned from someone. But my short, and malformed, question and the following discussion showed me clearly that it is not that clear what it actually is.

I, at least, have learned a lot about tacit knowledge in the course of the discussion. And I want to thank everyone involved for their input.



Filed under Test Theory, Uncategorized

Is software quality really getting worse?

I made a big mistake in my last blog post. I claimed that software quality has been getting worse over the past decade or two without giving any facts to support that claim. And I’m very sorry for posting a personal perception like that.

I still think that software quality is not getting better. And I want to give you the reasons why I think that is.

What is quality?

Quality is value to someone (who matters).
– Jerry Weinberg, with the addition by James Bach

Quality, as I would describe it, rather consists of a bunch of “ilities” (like Capability, Reliability, Usability, etc.). ISO 25010 counts about 30; the famous poster by Rikard Edgren & Co. contains over 50. There are many factors that add to the perceived value of a product. Maybe my factors differ so much from yours that your opinion of the same product is the complete opposite of mine.

Quality of people producing software

I came into professional IT in the year 2000, in the years of the boom. CS was the subject to take at universities. People sat on the floor of crowded classrooms or had to stay outside. There was a huge demand for IT people, and universities were not fast enough in producing new talent. So companies tried to attract people from other areas into IT. I remember starting on my big test project in 2003 with over 70 people in my team. There were only two people with formal CS degrees, plus me with vocational training as a developer. The rest of the team came from various backgrounds. I don’t want to say that those folks were bad, not at all. Some of the best testers I have worked with so far were among them, holding diplomas in philosophy, meteorology, or similar. But definitely not all of them were good or should have pursued a career in IT. But we needed the people, so they stayed.

In my current job I got the responsibility of hiring people for my team myself. My hiring experience is not vast, and maybe other reasons influence my perception. In the last three years I have seen about 150 CVs and interviewed 40+ people for 2 job openings. And those ~150 were only the ones HR sent through. The success rate in finding people with the right skills who fit into the team was very low.

A question I tend to ask is how the candidates came into software testing. The most common answer was something in the sense of: “friends told me that there is an opportunity for me to get into IT”. My colleagues searching for developers and business analysts have similarly time-consuming experiences with only slightly higher success rates.
When speaking with friends in the industry and asking them about their hiring experience, all share the same tenor: it gets harder and harder to find good people.

Software companies and products popping up everywhere

Success stories like Facebook and Instagram, collecting billions of dollars for their ideas and the users they have, and local success stories still getting some millions from bigger companies that want the product: these are the factors that inspire people to come up with their own ideas to make some big bucks. Thanks to the internet, it’s easier than ever to publish software and draw some attention to it. If your software goes viral (enough), you have made it.

There are lots of companies who try to emulate success stories and come up with a similar product. Once a product gains some fame, you can be sure that there will be dozens of clones available in no time, trying to get a piece of the cake. Thanks to the law of the market, those products quickly reach their end of life; some keep dwelling in the shadows.

Frequency of updates

The internet and modern software development approaches like Agile and DevOps make it possible to update software quickly. There are companies using DevOps approaches that brag about daily or even more frequent updates. Time to market is one of the driving factors. Software companies don’t have months and years to come up with a product anymore. If you want to earn money and stay ahead of your competitors, you have to move quickly. Agile and the approach of publishing MVPs (minimum viable products) are gaining strength as a way to produce the right thing. And it works great, e.g., with websites and other centrally hosted applications. But with that approach, as seen, e.g., in the app market for smartphones, the industry also produces thousands of apps that start with some small feature and are then either forgotten or grow from there. Often the aspect of viability is not very distinct.
Two decades ago I first heard the name “banana projects”: the software ripens at the customer. Back then it was a dispraise to get that title. With MVP, that approach was made the weapon of choice. Don’t waste time on perfecting a product; fail fast and learn. That approach has two sides. Software companies save money and get a chance to produce something with potential; on the other hand, users waste time evaluating dozens of products to find a solution that fits. Win-win situation? I’m not so sure.

Big companies like Apple, Google, and Microsoft have problems with prestige projects affecting millions of customers. There are fixed dates to meet, promoted by marketing without listening to project stakeholders. There is a keynote; they need to publish on a given date. Thanks to some late changes, we see problems in the initial versions over and over again, calling for patch releases soon after. Android has a different problem, with device fragmentation growing by the hour. Teams concentrate on the majority of users, evaluating whether problems reported by users from the minority are worth fixing at all.

Mobile apps are where I see that phenomenon every day. I have about 100 apps on my smartphone. Every work day I get at least 1 update for one of the apps, sometimes up to 8. Shortly before and after releases of new versions of the operating system, that number goes double-digit. On all my devices I spend a fair amount of time on maintenance and updating. Something I don’t want to waste my time on, to be honest.


Hardware is getting faster and faster, and developers simply don’t have to care much about optimizing their code. Just add some cores or some GB of RAM, and it works faster again. This has of course always been a phenomenon: developers were simply not trained to optimize the last bit of their code. Hardware gets replaced even faster these days, so why worry about slow devices? And so goes the spiral.

A long time ago, when most things were already delivered on CD-ROM, there was an initiative that tried to fit awesome stuff on floppy disks (you might remember those things that look like a 3D print of the save symbol), taking a maximum of 1.4 MB of disk space and running performantly on older CPUs as well. Sadly, I couldn’t find any links.

Then there was Fli4l, a project that produced Linux distributions fitting on a floppy disk, to run valuable software like firewalls, proxy servers, and web servers on old machines booting from floppy disks.

You may say that in times of 128 GB USB sticks, that’s no longer necessary. Well, exactly there lies the problem in my eyes. People (developers and users alike) don’t care, because they don’t have to.

Times change

Times change and so do development approaches, ways of distributing software, and ways of using software. Is the demand for software higher than 10-20 years ago or is that demand artificially induced?

With the few topics I tried to explain here, I still say, from my point of view, that software quality in many cases is decreasing. Some companies get it right and use the available approaches and technologies to improve their software. But how many are out there that are doing it wrong or simply deliver bad software faster? If you don’t come across those pieces of software in your life, good for you, and I hope it stays that way for a long time.



Filed under Uncategorized

Reinventing Testers and Testing to prepare for the Future

When I want to speak about the future of testers and testing, I first have to think about the situation we have now, and why I want to change anything about that.

In the past decade or so there were multiple threats to testing and testers. “Test is dead”, “Testers need to learn to code”, of course the ongoing discussion (especially by tool vendors) “all testing can be automated”, and some more.

Then there was a workshop on Orcas Island in mid-May 2016, organized by James and Jon Bach. The topic was “Reinvent Testers”, and I don’t want to go into any details about the workshop, since I was not there and don’t have more information than what is available on the website. It was not the workshop that created some disturbance in the equilibrium of the community; it was just one slide – out of context – that was spread via Twitter.

The statement that got my attention was “there is an ongoing and longstanding attack on the testing role”. I don’t want to focus on any of the points on the slide, because I don’t think they support the statement in the header in any way. And I don’t want to say anything against the chosen hashtag, which as far as I know contained a typo (testing instead of tester), combined with a side blow at some minor political campaign currently ongoing in the U.S.

Perze Ababa was so kind as to share the slide following that one with some people on a Slack channel. And that slide, in my opinion, reflected the actual problem much better. I don’t want to share the slide here, since I was not part of the workshop and did not ask James and Jon for approval to share it. But I want to quote some statements from the slide that better describe how the tester role and testing as a phase are often seen by other participants in the software development life cycle.
“We must eliminate the need for judgement among testers, so that there is no controversy about what is a bug!”, “Nobody smart wants to be a tester!”, “Testers need to assure quality!”, “Testing is too unpredictable, you should instead make a Definition of Done and use Acceptance-Test Based Development to know when you get there!” and “You should write down all your ‘test cases’, tracing them to requirements, and track metrics against them!”.
This is only a small part of the statements on the slide, but I guess most of my readers get the idea of the tone I’m referring to and have themselves been in similar confrontations more than once.

How I see the current situation of testers and testing as a phase

When I look at the current situation of testers and the standing of the testing phase in projects in general, I can understand why there seems to be a problem.

The testing phase was born in waterfall-like approaches many decades ago, and testing usually comes at the end, when everything is sort of stable. And that is where many people (testers and non-testers alike) still see the testing phase. Talk to the average factory tester, and he or she will advise having a stable piece of software available for the testing phase. When you have to run all tests at the end and want to make a statement on whether all tests passed, it actually makes some sense in that context to not have a moving target and not to execute your test scripts against 10 or more different versions of the software. Reality might look different.
Being the phase at the end also comes with the disadvantage of the biased opinion that, in case of a delay, it’s your fault.

Two decades ago, development approaches started moving away from waterfall. XP, Scrum, Kanban, Agile, Lean, whatever buzzword you prefer: it’s all about getting code faster into production or into the hands of the customer. Faster feedback is the driver, to develop the right thing and not waste much money on walking in the wrong direction.

The testing phase has a problem in those environments. It did not evolve together with the development approaches. Maybe developers sometimes don’t appreciate the feedback they get from a late testing phase, forcing them to redo some of their work or telling them that they implemented the wrong thing. From a developer perspective that I have experienced, and also from a management perspective, testers should only verify that specifications have been implemented correctly. And most of the new development approaches found ways to get exactly that feedback faster without the need for testers. It seems nobody was missing the additional feedback and information coming from the testers. Who knows why?

Testers who stayed in such projects simply forgot to adapt and tried to “only” attach a testing phase at the end of whatever development approach was chosen. There was also not much support from the testing industry. ISTQB, which initially was taught in a waterfall context, only came up with a new certificate, the Certified Agile Tester. Not much of a help. ISO 29119 is a documentation-heavy framework that hasn’t found an answer to Agile approaches yet. In the beginning of the millennium, niches like the context-driven testing community, with a couple of thousand testers worldwide, were simply not big enough to make a difference in an industry of over a million people.

Nowadays, testing-phase testers have the problem that development approaches have evolved so far that they cannot keep up anymore.

Testers in general have a problem they didn’t cause. Testing was seen as a simple task on the one hand and was kept as cheap as possible on the other, since it supposedly added no value to the product, so many cheap people from various backgrounds were hired into testing jobs. The career opportunity in most companies is usually to step up some ladder and move out of testing positions. So the best testers were simply lost to other disciplines over time, because of the lack of opportunities in their profession. And I also claim that a good portion of the people who stay in testing for more than 5 years lack the motivation to improve, or at least don’t lack laziness. They just stay there because that’s all they think they can do, and they have a safe job.

Management also still treats testing like it did 2-3 decades ago, because back then it was easy to manage, so why change that? No motivation from that side either.

How Agile & Co. cope with testing

Projects, products, and companies that work Agile have dealt with the situation: they try to include testing earlier and let the developers do the job. Lots of checks that come out of testing are getting automated to speed up regression testing / checking. They try to keep testing/hardening sprints short, and everybody helps there when not busy with fixing bugs. Approaches like BDD, TDD, or ATDD support the cooperation between business analysts and developers, and help developers produce more fitting solutions.
Some companies even take it to the extreme and let developers accompany their code into production, see there if it works, and then leave it alone. No testers involved.

Do you think those companies, at least where it works, are crying for a testing phase or more testers on the project? I don’t think so.

In the AB Testing podcast, Alan Page and Brent Jensen regularly share a view of how Microsoft tackles software development processes, and it seems they are not doing too badly.

When claims come up that 90% of the people working as testers should get a new job (James Bach cited such a request in a podcast I stumbled across last year), I say: may it be only 70%, but it’s true. If more and more companies successfully change their development approaches, we will need fewer testers. And of course I want to keep only the top 30%, not just any 30% and not the cheapest 30%.

How the role of testers will evolve

To survive in the world out there, testers have to up their game. Testers cannot waste a big chunk of their time on documentation. My motto since becoming team lead of a small team is: keep people testing. Time spent knee-deep in the system is usually more valuable than writing pages of test cases.

Testing, in my opinion, is all about providing the necessary information for stakeholders to make informed decisions about the project. And that information is collected by actually spending time with the people and the system, not by preparing, updating and grooming long lists of test cases and other useless pieces of documentation.

Testing should be shifted left as far as possible. Regression testing should be reduced to the absolutely necessary minimum. As Michael Bolton stated in a discussion yesterday, when you have to do lots of regression testing because the developers don’t understand how changes to the code affect the product, the developers have a problem, not the testers. Those problems need to be addressed differently – not by complaining that the regression test phase is too expensive (I have a friend, who knows someone, where that might be the case).

Having testers in your team should not be a surrogate for writing and maintaining automated unit and integration tests because developers are too lazy to write them, or because the architecture doesn’t favor writing them. Then you have a different problem. Throwing testers at the problem won’t help solve it. But it provides someone else to blame.

Testers should take care of the bigger picture, handle things that are hard to automate, and cope with complex problems. Testers should help developers create good automated scripts, help them understand and test the application, and train them to do the simple tests earlier and faster and to provide feedback faster. And yes, that needs the skills, competence and behaviors of a systems thinker, of a detective, of an explorer, and of a coach. That’s nothing you can buy cheap on the next corner. Those people are rare.

Does the role of testing need to change?

Some people participating in the discussion call on testers to take different roles in the project to still provide value. I see that as the wrong approach. As a tester in the role of tester, used in the right situations, I can provide value to the project that only I as tester can provide. And I don’t want to give up my role as a tester. I want to continue asking questions, experimenting with the system, analyzing strange problems; I don’t want that to go away.
If my job as tester on my project doesn’t fill my whole day, or I as an individual decide that I can be more useful and contribute to the project in other roles that may even improve my (part-time) job as a tester, that is a great opportunity. And I strongly advise you to take that chance. But your role as tester doesn’t change.

I don’t think that the role of testing should be changed by taking on new tasks. Those are simply other roles that can be filled by the same person (specializing generalist comes to mind). I recommend making better use of the testing role and focusing instead. Tasks that are not testing and produce more waste than value should be taken away from the role of testers. Enable testers to act faster!

What testers need to do is evolve or reinvent the way they work – not by taking on additional jobs in the project, but by finding a way to produce the necessary information faster, earlier and more efficiently. And testers need to dare to hand over simpler testing tasks to developers, or to stages or phases where they make more sense.

The testing phase should be changed. If you need an unnecessarily long regression test phase, you had better rethink your development approach. If you run tests and checks in that phase that take days and weeks, you might want to rethink that approach. If you regularly run into problems in the testing phase, you have a problem.

And when Brent Jensen says that data scientists and telemetry are attacking testing (episode 39 of the AB Testing podcast), I can only hope that testers position themselves accordingly, stack up their arsenal, and become part of those approaches. For me that describes excellence in testing: monitor and gather feedback from production and use scientific methods to turn that data into information for further actions. Fantastic! Where can I sign up? I am a tester, and I want to master those tools.

Responsibility for the overall decrease of quality

Software-developing companies have had a common problem in the last few years, starting maybe over a decade ago: product quality gets worse and worse. The possibility of faster cycles to production makes many companies negligent about actual quality, because it’s easy to provide a hotfix. Only the rules of the market help to filter out the worst of them – and they also prevent the ones with good quality from the start from getting a chance in the market, because those tend to be too slow.

I would see that partially as a problem of neglecting testing phases, doing the wrong testing, doing testing wrong, or hiring the wrong people to do the testing.

Testers need to fight and find their way back into those projects and provide value to help the team raise the level of quality again.

[Update: The claims offered are my personal views, and I go into more details here.]

What companies need to do

Companies need to decide whether they need or want to stay with waterfall-like approaches, having that big testing phase at the end, and taking everything that comes with it into account. Waterfall is not bad, and Agile is not for everybody. And always remember: a good waterfall is much better than a bad Agile-ish attempt.
Some may even have to bite the bullet and outsource their testing to some expensive, documentation-overload-producing consulting company. There might be good reasons for that.

One hint: you might want to rethink your approach if you are in a regulated environment and take that as an excuse to say you need a documentation-heavy testing process. I have heard from multiple people that this is just a myth.

Invest in the future: prepare your people to fill that highly sophisticated role of tester. Find those who are willing to invest in their craft and profession. Spend time and money on training and conferences, let them interact with like-minded people, get them engaged, get them motivated, and pay them a decent salary so they can focus on the job and not on how to pay the bills next month.
And if you ask what happens if you train people and they leave, better ask what happens when you don’t train them and they stay!

What testers need to do

Managers also need to change their view of testing and testers. But it’s the task of the testers to build trust in the information they provide and to demonstrate the value they can add beyond test scripts and developers testing themselves. Then managers will hopefully drop the opinion that testers only need to provide a count of test cases to show their value.

Do everything you can to get better at your craft. Learn tools, methods, approaches. Get better at soft skills like communication, critical and creative thinking, and problem solving. Stay up to date. Interact with your peers inside and outside your company. Go to local meetups. Read books and blogs. Go on Twitter and engage with other testers. Learn from them where to find more sources of information.

If you want to be in the 10-30% that should stay in the profession, you had better start now.

And my last tip: Be proud to be a tester!



Filed under Uncategorized

Return to Runö – Let’s Test 2016

Last year, Let’s Test 2015 was my first ever conference, and it sparked my desire to visit more conferences and contribute to them as well. In addition, speaking was my only way to get to further conferences, since my company was not willing to invest in any external training or conferences, except when directly applicable to an immediate problem we have and cannot solve ourselves.

The call for proposals for Let’s Test 2016 asked for crazy stuff, and I tried to come up with a proposal that showed some craziness – or at least enough craziness to fit the expectation. I came up with two rather last-minute proposals and gave it a chance.
Weeks later I got a message from Dan Billing: “Wanna pair with me?” I had no clue what he was talking about. “Check your mail. Read and you’ll understand.” No mail, no capisce. But whatever it was, I agreed to it. My mail provider is sometimes a bit slow. And when the mail finally came in, I couldn’t stop smiling. One of my ideas got accepted! The reason behind Dan’s question was that the organizers wanted us to pair up with another speaker and do an additional crazy workshop.

My next moment of happiness came when it turned out that I got a flight to Arlanda for about 60% of the usual cheapest price. Since I am paying for all conferences in 2016 myself, that came in just right.

Let’s Test 2015 was a special experience for me. I finally met several of the people I followed on Twitter in real life. They really existed. And there was this fantastic, inspiring and driving energy all over the place. What can I say – I met even more people in real life this year, and the energy was there again, to my surprise in a different flavor, but as exciting and driving as last year. It’s an energy that lets even introverts meet new people, lets you discuss for 3.5 days in a row, 16-20 hours a day, about anything test-related or not, makes you hold workshops after midnight, and keeps you fully engaged and energized despite severe back pain and not much sleep.


Oh yeah, one downside of Runö: the chairs. At least some of them. Maybe, in addition to all the tension that had built up lately, they tried to kill my back. It was obviously a muscular thing, since motion helped. But boy, I had never had such pain for nearly 4 days in a row. You won’t believe it: the energy of Runö helped me through that, without the right painkillers (I only had paracetamol with me) or any professional help. It’s simply fascinating. If I’d had the same situation at home, I would have called a doctor on the second day or dragged myself there to get a shot or two.
The only other downside I found so far – one that most multi-track conferences have – is that you have to decide where to go, and it’s guaranteed that you miss something else you would have gone to if you could.

By chance I found out about a nature reserve only a few minutes away from the conference venue. So on Monday after dinner I took the chance to get some more exercise for my back and visited that beautiful place. If you ever go to Runö and need about an hour for yourself or to talk in quiet, take that walk. It’s such a beautiful place. (I added some photos at the end of this post.)

At Let’s Test I gave my first ever workshop: “Context Eats Process For Breakfast”, a workshop about process analysis, modelling, and understanding the borders. I had applied for a half-day workshop and only got a 2-hour slot. I tried to strip down the initial plan a bit and hoped to improvise during the workshop without skipping any essential parts. I was able to finish in time, but sadly I had to hurry through some feedback rounds a bit too fast. I hope the workshop was still useful for my participants. I collected over 3 pages of feedback and self-reflection, and I think I will send that or a similar proposal to other conferences.


Dan’s and my midnight workshop on Monday night – yes, starting at midnight – was not very well attended. But in the end we were 7 people and used the “prepared” format to discuss a very personal problem of one of the participants, which might not have happened in a bigger group or a different combination of people. At least I hope we – as a group – were able to help a little bit with making a better decision.


In addition to giving two workshops, I also volunteered as a facilitator for other sessions. It’s a job where you can be good support for the speaker if necessary, organize the Q&A session, simply help prepare and clean up the room, or not be involved at all. I had examples of all four, and I will definitely sign up again. And I encourage everyone to do the same.

Now let’s have a short overview of the sessions I attended. It started – of course, after some nice and decent sounds of an AC/DC cover band – with the opening keynote by Rob Sabourin and his wonderful wife and best friend Anne. Anne is not a tester; she is an obstetric nurse. She was able to keep the whole audience listening to every word she had to offer, telling a wonderful story from her long experience as a nurse. The topic was triage, something that testers also need to do often. What are the factors you need to consider in certain situations to make the right decision? The keynote turned into some kind of big workshop and was a worthy kick-off for a great conference.


Chris Blain‘s workshop about “Context-driven hopes & dreams” was an interesting one for me, helping me a bit along the way in my search for whether I’m really context-driven. And it also had one ingredient I would have loved to have in my workshop: the pace Chris used was fantastic. It was so relaxed, it gave all participants enough time to even introduce themselves, setting their context for others to understand – something that is often forgotten when multiple people talk about testing. And I have to say that my group produced the best result of all groups.


My day continued with Chris Blain and Rob Sabourin, this time combined in a workshop about “Task Analysis and the Critical Incident Method”. I liked the workshop a lot, but it also scared the hell out of me, especially when, after the break, Rob started an exercise that was very close to one of mine for my own workshop. But gladly it took another turn.

I skipped the after-dinner workshop to visit the nature reserve, and listened to the party from outside (it was loud enough) while having some great conversations. And at midnight Dan and I had our small workshop, as already mentioned above. It was another long day in Runö, and worth every minute.

After a nice breakfast – the food was again very delicious on all days – I went to Mark Winteringham’s workshop about using Postman to test web services. I had only played around with SoapUI early last year, so I gladly took the chance to learn some more about the topic. Mark created a wonderful and easy-to-understand example application, with enough bugs for some encouraging moments, and challenging enough to better understand the topic.

After lunch I had my workshop, talking about breakfast. But hey, they had the chance to put it in another slot.

If you haven’t heard of “Transpection Tuesday”, I don’t know what rock you are living under or in what part of the CDT community you reside. In the next session Helena and Erik shared their techniques and approaches and gave some small insights into this living legend. It’s something many of us envy them for, but to be honest, there is no reason to: you can start the same thing, you “just” need to find someone to partner up with.

After dinner I facilitated my first session: “Testing, wine, and food” by Lou and Jo Perold. I don’t drink (much) alcohol, so I was not sorry that there was a table limit of 20 and that I – as facilitator – was not actively included. But I chose that session deliberately and for a good reason. I enjoy watching people engage with food and drinks in a respectful way. And I like talking with those people about their experience and fascination. Observing 20 people carefully trying to understand the taste of some food, three different wines, and the combinations of wine with some of the foods delivered what I was hoping for. It was a fun topic, it was taken with seriousness and respect, and I was happy to silently observe and provide an additional pair of hands and feet.


On day 3 I missed the first session, as it started at 6 a.m. and my day 2 had only ended around 2:45 a.m. But after breakfast and checking out, I was ready for Dan Billing’s social engineering workshop. The room was quite packed and engaged, and Dan delivered a great workshop on an important topic. If you don’t believe that you leave many traces on the web, you might want to try a self-experiment. Just saying.

Then it was time for the session I had been waiting for for so long: Damian Synadinos‘ “Tips & Tricks from Jester to Tester”, a workshop about improv theater techniques that are applicable to (mob) testing as well. I love improv comedy (shows like “Whose Line Is It Anyway?”), I love techniques that improve spontaneous testing, and I was really looking forward to seeing Damian combine both in that workshop. I was also the official facilitator, but felt rather useless in that role. Nevertheless I enjoyed the workshop, the engaged discussions, and the funny improv pieces, as well as the testing pieces. I had high expectations and was not disappointed.


After lunch it was time for my last facilitation, supporting Nicola Owen with her SpeakEasy talk “The Art of Picking Your Battles”. Nicola delivered a great talk and had a very attentive audience that was very engaged in the following open season (Q&A). If you want to read more about her session, you can find her view here.


The closing keynote arrived, and I had to realize something that I thought was not possible: this was my first session by Fiona Charles. I had somehow been avoiding her so far, which should not have been possible. Fiona had 5 sessions, and this was the first I made it to. Nearly impossible.
The closing keynote was truly worthy of a closing keynote. The topic, “Gaining Consciousness”, was all about learning about the project context. Most of us consider ourselves context-driven, but how many really invest time to find out about the project context and do something with that information? The talk / workshop (yes, again a keynote in workshop format) inspired us to think about our daily behavior and how much we really care about the context. As a closing keynote it did a fabulous job. It has kept me busy with that one thought for a couple of days now, and I guess it will for some more time. So thank you, Fiona, for providing that one thought for me to definitely follow up on this time.

Let’s Test, you have done a great job again. I will do a lot to return to you in 2017, because you are truly a highlight of the yearly conference circus. The energy of Runö is pure fascination to witness. The only sad thing is that I – again – didn’t have enough time to spend with everyone I wanted to talk to more intensely. But that would take a few weeks, I’m afraid.

Thank you Johan, Henrik and Linda!


Photos from the nature reserve




MVP or not?! – A misunderstood pic goes viral

This morning a pic showed up in my timeline that I had hoped not to see again. But it seems the pic has gone viral and is used out of context numerous times. It was a picture from the Agile Testing Days Scandinavia keynote “Why we do not need testers on the team” by Bent Myllerup, used to demonstrate the concept of an agile approach to software development: delivering value more often and developing software iteratively and incrementally.


Let me describe what I see in this picture, and why I think it’s the wrong picture to describe the concept of the MVP (minimum viable product). My problem is with the right part of the slide.

The picture was made by Henrik Kniberg and is described in this blog. Henrik also states that his picture is used often, and not always in the right context. So why do I think that this picture is not showing what it should?

The upper part: for me it looks like the customer wants a car. Somehow he gets multiple deliveries while the product is built. I’m not sure why you would deliver a partial product that’s unusable. But this picture is supposed to show the advantage of Agile, so I accept it as a stupid exaggeration for the sake of contrast.

The lower part: this picture is supposed to describe the principle of iterative and incremental development, with multiple deliveries bringing (some) value to the client early, to gather feedback and deliver the right product in the end. But now comes the problem of using manufacturing comparisons for software development: they usually don’t work. When software is built incrementally, you don’t need to start over after a delivery, and the part that was delivered doesn’t have to be completely redesigned. In this example the customer gets 5 different products. But the second is not based on the first, the third not on the second, and so on. This is an example of a poor understanding of the needs of the client. Either the client doesn’t know what he wants, and we slowly find out together what that is, with the enormous effort of generating five different products. Or we were not able to ask the right questions and can get additional information only by delivering something wrong, to get more clues about what the client wants.
In the first case the client at least has to pay for 5 different products, I suppose. In the second, I presume we deliver five products for the price of the last one, which would not bring in enough money for all the effort we had.
I can imagine one scenario where this is absolutely valid: when you are a salesman in a company offering those five products and a customer comes in who doesn’t know what he wants, you can present him five different solutions until the customer finds what he wants. But this is not Agile! And it does not apply to software development either, at least not if you want to stay competitive tomorrow.


So let’s look at different situations for what a “minimum viable product” is, and how you get from there to offering the client what he really wants. The scenarios I want to look at are:

  • one customer
  • few customers
  • many customers

MVP for one customer

A customer comes to us with a problem (or several) to solve. By asking the right questions we find out what the customer most obviously wants to have. We find out what the most basic thing might be that solves a part of the problem. Then we start off in that direction and show him our first draft. The customer can then tell us whether that goes in the right direction or whether we need a course change. We adjust and add value to the product as we go, until the customer really gets what he wants.

I like Cassandra Leung’s approach of finding a real-life example for that.

MVP for a couple of customers

Let’s assume we have a couple of customers who share a similar problem. Our task would be to ask all of them enough questions to find out which parts of the problem they have in common, which parts are shared by some of them, and which parts are unique to individual customers.

Our MVP in this case would start by providing some basic value for the common part, so that all customers could benefit from it. Through feedback we sharpen our understanding of our customers’ problems, of how to solve parts of them in a common way, and of where we need to start solving problems differently for some of them. Developing features along customer directions might also influence some of the customers, showing them different approaches to solving their problem. This might bring value to them more quickly and also influence the desire for the final solution.

An example would be specialized business software, focusing e.g. on one branch of industry, where multiple market participants share a common problem to be solved.

MVP for many customers

This is more like looking for a new product that many customers would want to use. So you need to find a common problem of theirs to be solved. Since there are too many market participants to ask, you had better get a product owner who knows the business and has a good understanding and overview of the problem at hand. Then we start much like when solving a problem for only one customer. When we bring the MVP to the mass market we start getting feedback from the “many” on whether our solution serves them or not. Based on a vision or on feedback from the customers you then start adding features, continuously collecting feedback from the users. When the product grows you might start developing features that add value to only part of your clients.

You should not shy away from removing features if they are not used enough. Features cost money to maintain, and if they bring no or not enough value to your customers, get rid of them again. You might also learn that your MVP solves problems you didn’t have in mind when you started developing it. Then it’s good to have early feedback so you can change course and address a different group of customers.

A real-life example would be a text messenger for smartphones. SMS was expensive, but internet data connections were cheap compared to that. So text messengers using the internet came up, then added feature after feature, like sending pictures, voice clips, videos, etc. Some features are not used by all, some more often, some less, and some are removed again when they flop.

My tip when explaining the concept of the MVP to someone: describe the context. It makes it easier to understand.
If you think I misunderstood the concept of the MVP, please let me know in the comments.


One example to show how MVP-like development works in non-software industries: car manufacturers. It all started with Ford’s Model T. There was one kind in the beginning, which then slowly evolved to solve different problems. One company that has accompanied me all my life is Audi. When I was young there were about three different models: a small mid-range (80), a big mid-range (100), and a big model (200 / V8), which then became the A4, A6, and A8. Something for most people to find. Then people were looking for smaller cars, and since Audi wanted those customers as well, the A3 came. Then they invented something even smaller with the A2 and something sporty with the TT. Then the palette grew at the small end with the A1, and with sporty versions in the medium to large segment, the A5 and A7. Not to speak of adding cabriolets and station wagons in most segments, and 2- and 4-door versions. In the meantime SUVs came up, starting with the Q7; later came the Q3 and Q5 to cover small and medium-sized SUVs. It took a couple of decades, but now most customers find something to solve their problems (taking the size of your purse into account, of course). And it all started with a single model. But Agile is not a solution here, because you don’t want to have half a car.


I want to thank the following people for shaping my thoughts today, triggering the topic, challenging my statements, and agreeing and disagreeing with me. I hope I didn’t forget anyone.
Dan Ashby, Jose Diaz, Bent Myllerup, Thomas Ponnet, Aleksis Tulonen, Robert Meaney, Hannes Lindblom, Jokin Aspiazu, Michele Cross, Cassandra Leung, Tim Ottinger




Test Automation – Am I the only one?

What would the world of testing be without test automation? Well, I assume a bit slower than it is with it.

In this post I don’t want to speak about:

  • There is no such thing as automating “testing” – I know!
  • Using tools to assist testing – that’s not the topic here
  • A rant against vendors who declare that 100% test automation is possible – no, it’s not!
  • “Test automation is free besides license costs” – no, sir. It’s f**king expensive.
  • Test automation doesn’t find bugs; it simply detects changes. Humans evaluate whether it’s a bug.

So what is this post about? It’s about my personal fight with test automation and the risks I identified attached to it that don’t seem to bother most people working with test automation. So, am I worrying too much? You want to know what bothers me? I will explain.

There are lots of people who treat test automation as a silver bullet. “We need test automation to be more efficient!”, “We need test automation to deliver better software!”, and “We need test automation because it’s a best practice!” (Writing the b-word still makes me shiver.) If you are in a product/project that doesn’t use test automation, you quickly get the impression that your way of working is outdated.

My personal story with test automation started a while back when I entered my current company. Implementing test automation was one of the main reasons I was brought in. After nearly 2.5 years there was still nothing, because my team and I were busy with higher-priority stuff – busy with testing everything with minimal tool support. Sounds a bit like the “lumberjack with the dull ax” problem, if you belong to the test-automation-as-silver-bullet faction: no time to sharpen the ax, because there are so many trees to chop. In May 2015 I got the assignment to finally come up with a test automation strategy and a plan for how to implement it. Reading several blogs about the topic, especially Richard Bradshaw’s, quickly formed some sort of vision in my head. I know: against a vision, take two aspirins and take a nap. But really, a plan unfolded in my head. And again we had no time at hand to start on it. Some parts of the strategy were started, some proofs of concept were implemented. For 3 weeks now I have had a developer full-time to finally implement it. Things need time to ripen at my place.

Now I am a test lead with no real hands-on experience of how to automate tests, and I have a developer who can implement what formed in my head. But with all the time between creating the strategy – a strategy I still fully support as useful and right – and implementing it, I also had enough time to apply some good critical thinking to the topic.
And finally, last week at the Agile Testers’ Meetup Munich the topic was “BDD in Scrum”, and thanks to QualityMinds, who organized the event, we got not only a short introduction to BDD but also the opportunity to do some hands-on exercises.

So why am I not a happy TestPappy, now that everything comes together? Here are my main pain points: risks I would like to address and that I need more time to investigate.

Why do people have more trust in test automation than in “manual” testing? People seem skeptical when it comes to letting testers do their job and test the right things. But once you have written 100 scripts that run on their own, 3-4 times per day, every day of the week, producing green and red results, no one seems to actually question an automated test anymore, once it’s implemented.

Automated checks of “good quality” need well-skilled people. Is your stomach getting ready to turn when you read “good quality”? Good, we are on the same page. The most important quality characteristics an automated check should have, in my opinion, are completeness and accuracy, stability, robustness, and trustworthiness, scalability to some degree, maintainability and testability, and some more. That’s a shitload of things to take care of when writing some simple checks. To be honest, our application itself doesn’t meet most of these criteria to a sufficient degree, at least not when it has to stand up to my demands. How could a test team take care of all that while generating hundreds of necessary checks? Now I got lucky and was able to hire a developer to implement the automation framework of my dreams, so I have some support on that front. But once we start implementing the checks themselves, the testers and I need to implement them, or at least help to. How do you take care of the problem that all checks need “good quality” to be reliable not only today but also next week or next year?
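To give one tiny illustration of what “robustness” and “stability” can mean for a check – this is a generic, hypothetical sketch, not my framework – here is a retry wrapper that keeps a check from going red on a transient hiccup while keeping the flakiness visible:

```python
import time

def retry(check, attempts=3, delay=0.0):
    # Run a check up to `attempts` times; re-raise only if it never passes.
    # Overusing retries hides real instability, so keep attempts low and
    # make every retry visible in the logs.
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return check()
        except AssertionError as error:
            last_error = error
            print(f"attempt {attempt} failed: {error}")
            time.sleep(delay)
    raise last_error


# Hypothetical flaky check: fails twice, then passes.
calls = {"count": 0}

def flaky_check():
    calls["count"] += 1
    assert calls["count"] >= 3, "transient failure"
    return "green"

print(retry(flaky_check))
```

The design trade-off is exactly the trustworthiness question above: a retry that silently swallows failures makes a check more stable and less honest at the same time.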

How do I know that my script checks the right thing? I’m a very explorative tester. I usually don’t prepare too much of what I’m looking for; I let my senses guide me. So when I hand over a certain area to be covered by a script, I have to make decisions about what to cover. At least in my context I am pretty sure that I will miss something when I give that out of my control. How do you handle that?
My first attempt at implementing some automated checks 3 years ago was to call every page that I could reach without too much logic and take a screenshot. I would then quickly walk through the screenshots and check for oddities. But this is more a tool to assist my testing, not something able to run without me or some other human. Simply comparing screenshots and only checking screens that differ from a baseline is not really possible, since the displayed data usually change often in my context.
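That screenshot-walk idea can be sketched roughly like this (all names hypothetical; in a real setup the bytes would come from a browser driver, and, as said, every harmless data change would also flag the page):

```python
import hashlib

def fingerprint(screenshot_bytes):
    # Reduce a screenshot to a short hash so comparisons stay cheap.
    return hashlib.sha256(screenshot_bytes).hexdigest()

def screens_to_review(baseline, current_run):
    # Return the pages whose screenshot differs from the baseline;
    # a human then reviews only those pages for oddities.
    return sorted(
        page for page, shot in current_run.items()
        if fingerprint(shot) != baseline.get(page)
    )

# Simulated runs: short byte strings stand in for real screenshots.
baseline_run = {"login": b"pixels-v1", "orders": b"pixels-v1"}
baseline = {page: fingerprint(shot) for page, shot in baseline_run.items()}
current_run = {"login": b"pixels-v1", "orders": b"pixels-v2"}

print(screens_to_review(baseline, current_run))  # only "orders" changed
```

The human stays in the loop by design: the script narrows down what to look at, but it never decides whether a difference is a bug.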

What am I missing? My favorite topic of the past 2 years is ignorance, in this case the unknown unknowns. How do I handle that question? I’m sure I will miss lots of things with a check, but can I be sure that I don’t miss something important? Once an area is covered with automation, how often do you come back to investigate it? To review whether you missed something, whether you need to extend your checks, or whether you should redesign the whole approach?

How can I trust a green test? There is always the problem of false positives and false negatives. False negatives waste time, but in the end I have double-checked the area and covered more than the script, so I can handle those. False positives are mean, though. They say everything is all right, and they hide in a big list of other passing tests. So for every single check, every single assertion, you need to think about whether there is a way the result comes back “true” when it’s really “false”.
It also depends on what the script is covering. If you forgot to cover essential parts, you will miss something. But the check will not tell you; it simply can’t. It’s not programmed to do so.
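One classic way a check comes back “true” when it’s really “false” is a vacuous pass. The example below is hypothetical, but the pattern is real: the assertion loop is correct for every item it sees, yet when the data fetch silently returns nothing, the check verifies nothing and still reports green.

```python
def check_all_orders_processed(orders):
    """A check that can only ever fail for the items it actually sees."""
    for order in orders:
        assert order["status"] == "processed", order
    return "green"

# The mean case: the fetch silently returns nothing
# (wrong filter, wrong environment, wrong date range) ...
orders = []

# ... and the check happily reports green over an empty list.
print(check_all_orders_processed(orders))  # "green", yet nothing was verified
```

A cheap partial guard is an extra precondition such as `assert orders, "no data fetched"`, but that only catches this one flavor of false positive; each assertion needs its own “how could this lie to me?” review.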

Automating at the API/service level? Richard Bradshaw made a nice case on Whiteboard Testing for automating things at the right level. Many tests would be better run at the API/service level. I agree, to a certain degree – as long as there is no business logic implemented on the client (e.g. browser) side that I need to emulate. When I need to mock front-end functionality to effectively interact with an API, I have to re-create that logic based on the same requirements. Do I implement the logic a second time, to also test whether the implementation is correct? Do I somehow have the possibility to re-use the original front-end code, and thereby miss problems in there? Do I trust the test implementation more than the front-end implementation? If so, why not put the test code into production?
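The duplication problem can be sketched in a few lines. Everything here is hypothetical – a made-up API, a made-up client-side discount rule – but it shows the dilemma: to drive the API directly, the test must contain a second implementation of the very logic the browser normally applies.

```python
# Hypothetical API under test: expects an already-discounted price in cents.
def api_create_order(net_price_cents):
    assert isinstance(net_price_cents, int)
    return {"net_price_cents": net_price_cents}

# Hypothetical front-end logic the browser applies before calling the API:
# 10% discount above 100 EUR, truncated to whole cents.
def client_side_discount(gross_cents):
    if gross_cents > 10_000:
        return gross_cents * 90 // 100
    return gross_cents

def test_discounted_order():
    gross = 12_000
    # The test re-creates the client logic from the same requirements --
    # a second implementation that can drift from the real front end.
    expected = client_side_discount(gross)
    order = api_create_order(expected)
    assert order["net_price_cents"] == 10_800

test_discounted_order()
```

A green result here tells me the API and my re-implementation agree with each other – it says nothing about whether the actual front-end code computes the same discount.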

And the list of worries would go on for a bit, but I’ll stop here.

Please help me! Do I worry too much about automating stuff? I would be very happy for some comments on my thoughts – are they shared, or maybe already solved? And if they are overblown, I want to know that as well.


Filed under Automation