Is software quality really getting worse?

I made a big mistake in my last blog post. I claimed that software quality has been getting worse over the past decade or two without offering any facts to support that claim. And I’m very sorry for posting a personal perception like that.

I still think that software quality is not getting better. And I want to lay out the reasons why I think that is.

What is quality?

Quality is value to someone (who matters).
– Jerry Weinberg, with James Bach’s addition

Quality, as I would describe it, consists of a bunch of “ilities” (like Capability, Reliability, Usability, etc.). ISO 25010 counts about 30; the famous poster from Rikard Edgren & Co. contains over 50. There are many factors that add to the perceived value of a product. Maybe my factors differ so much from yours that we reach completely opposite opinions about the same facts.

Quality of people producing software

I came into professional IT in the year 2000, in the years of the boom. CS was the subject to take at university. People sat on the floor of crowded classrooms or had to stay outside. There was a huge demand for IT people, and universities could not produce new talent fast enough, so companies tried to attract people from other fields into IT. I remember starting on my big test project in 2003 with over 70 people in my team. There were only two people with formal CS degrees, plus me with vocational training as a developer. The rest of the team came from various backgrounds. I don’t want to say that those folks were bad, not at all. Some of the best testers I have worked with so far were among them, holding diplomas in philosophy, meteorology and the like. But definitely not all of them were good or should have pursued a career in IT. But we needed the people, so they stayed.

With my current job I got the responsibility of hiring people for my team myself. My hiring experience is not extensive, and maybe other factors influence my perception. But in the last three years I have seen about 150 CVs and interviewed more than 40 people for two job openings. And those ~150 were only the CVs HR passed through. The success rate in finding people with the right skills who also fit into the team was very low.

A question I tend to ask is how candidates came into software testing. The most common answer was something along the lines of, “friends told me that there is an opportunity for me to get into IT”. My colleagues searching for developers and business analysts have similarly time-consuming experiences, with only slightly higher success rates.
When I ask friends in the industry about their hiring experience, they all share the same tenor: it gets harder and harder to find good people.

Software companies and products popping up everywhere

Success stories like Facebook and Instagram collect billions of dollars for their ideas and the users they have; local success stories still fetch millions from bigger companies that want the product. These are the stories that inspire people to come up with their own ideas to make some big bucks. Thanks to the internet it’s easier than ever to publish software and draw some attention to it. If your software goes viral (enough), you have made it.

There are lots of companies that try to emulate success stories and come up with a similar product. Once a product gets some fame, you can be sure that dozens of clones will be available in no time, trying to get a piece of the cake. Thanks to the laws of the market, most of those products quickly reach their end of life; some keep dwelling in the shadows.

Frequency of updates

The internet and modern software development approaches like Agile and DevOps make it possible to update software quickly. There are companies using DevOps approaches that brag about daily or even more frequent updates. Time to market is one of the driving factors. Software companies no longer have months and years to come up with a product. If you want to earn money and stay ahead of your competitors, you have to move quickly. Agile, and the practice of publishing MVPs (minimum viable products), is gaining strength as a way to produce the right thing. And it works great for, e.g., websites and other centrally hosted applications. But with that approach – as seen, for example, in the smartphone app market – the industry also produces thousands of apps that start with some small feature and are then either forgotten or grow from there. Often the “viable” aspect is not very pronounced.
Two decades ago I first heard the label “banana projects” – software that “ripens at the customer”. Back then that title was a dispraise. With MVP, that approach became the weapon of choice: don’t waste time perfecting a product, fail fast and learn. The approach has two sides. Software companies save money and get a chance to produce something with potential; on the other hand, users waste time evaluating dozens of products to find a solution that fits. A win-win situation? I’m not so sure.

Big companies like Apple, Google and Microsoft have problems with prestige projects, affecting millions of customers. There are fixed dates to meet, promoted by marketing without listening to project stakeholders. There is a keynote; they need to ship on a given date. Thanks to late changes we see problems in the initial versions over and over again, demanding patch releases soon after. Android has a different problem, with device fragmentation growing by the hour. Teams concentrate on the majority of users and weigh whether problems reported by the minority are worth fixing at all.

Mobile apps are where I see that phenomenon every day. I have about 100 apps on my smartphone. Every work day I get at least one update for one of them, sometimes up to eight. Shortly before and after releases of new versions of the operating system, that number goes double-digit. On all my devices I spend a fair amount of my usage time on maintenance and updates – something I don’t want to waste my time on, to be honest.


Hardware is getting faster and faster, and developers simply don’t have to care much about optimizing their code. Just add some cores or a few GB of RAM, and it runs faster again. This has, of course, always been a phenomenon: developers were simply not trained to optimize the last bit out of their code. Hardware gets replaced even faster these days, so why worry about slow devices? And so the spiral goes.

A long time ago, when most things were already delivered on CD-ROM, there was an initiative that tried to fit awesome stuff on floppy disks (you might remember those things that look like a 3D print of the save symbol), taking at most 1.4 MB of disk space and running performantly on older CPUs as well. Sadly I couldn’t find any links.

Then there was Fli4l, a project that produced Linux distributions fitting on a floppy disk, so that old machines could boot from floppy and run valuable software such as firewalls, proxy servers and web servers.

You may say that in times of 128 GB USB sticks, that’s no longer necessary. Well, exactly there lies the problem in my eyes. People (developers and users alike) don’t care, because they don’t have to.

Times change

Times change and so do development approaches, ways of distributing software, and ways of using software. Is the demand for software higher than 10-20 years ago or is that demand artificially induced?

With the few topics I have tried to explain here, I still say – from my point of view and in my opinion – that software quality in many cases is decreasing. Some companies get it right and use the available approaches and technologies to improve their software. But how many are out there that are doing it wrong, or are simply delivering bad software faster? If you don’t come across those pieces of software in your life, good for you, and I hope it stays that way for a long time.


Reinventing Testers and Testing to prepare for the Future

When I want to speak about the future of testers and testing, I first have to think about the situation we have now, and why I want to change anything about that.

In the past decade or so there have been multiple threats to testing and testers: “Test is dead”, “Testers need to learn to code”, of course the ongoing discussion (pushed especially by tool vendors) that “all testing can be automated”, and some more.

Then there was a workshop on Orcas Island in mid-May 2016, organized by James and Jon Bach. The topic was “Reinvent Testers”, and I don’t want to go into any details about the workshop, since I was not there and have no more information than what is available on the website. It was not the workshop that created a disturbance in the equilibrium of the community; it was just one slide – out of context – that was spread via Twitter.

The statement that got my attention was “there is an ongoing and longstanding attack on the testing role”. I don’t want to focus on any of the points on the slide, because I don’t think they support the statement in the header in any way. And I don’t want to say anything against the chosen hashtag, which as far as I know contained a typo (“testing” instead of “tester”), combined with a side blow at a minor political campaign currently ongoing in the U.S.

Perze Ababa was so kind as to share the slide that followed it with some people on a Slack channel. In my opinion, that slide reflected the actual problem far better. I don’t want to share it here, since I was not part of the workshop and did not ask James and Jon for permission. But I want to quote some statements from it that better describe how the tester role, and testing as a phase, are often seen by other participants in the software development life cycle.
“We must eliminate the need for judgement among testers, so that there is no controversy about what is a bug!”, “Nobody smart wants to be a tester!”, “Testers need to assure quality!”, “Testing is too unpredictable, you should instead make a Definition of Done and use Acceptance-Test Based Development to know when you get there!” and “You should write down all your ‘test cases’, tracing them to requirements, and track metrics against them!”.
This is only a small part of the statements on the slide, but I guess most of my readers get the idea of the tone I’m referring to and have found themselves in similar confrontations more than once.

How I see the current situation of testers and testing as a phase

When I look at the current situation of testers and the standing of the testing phase in projects in general, I can understand why there seems to be a problem.

The testing phase was born in waterfall-like approaches many decades ago, and testing usually comes at the end, when everything is sort of stable. And that is where many people (testers and non-testers alike) still see the testing phase. Talk to the average factory-tester, and he or she will advise having a stable piece of software available for the testing phase. When you have to run all tests at the end and want to state whether they all passed, that actually makes some sense in that context: you don’t want a moving target, executing your test scripts against ten or more different versions of the software. Reality might look different.
Being the phase at the end also has the disadvantage that biased opinions will hold you responsible for any delay.

Two decades ago, development approaches started moving away from waterfall. XP, Scrum, Kanban, Agile, Lean – whichever buzzword you prefer, it’s all about getting code faster into production or into the hands of the customer. Faster feedback is the driver: develop the right thing and don’t waste money walking in the wrong direction.

The testing phase has a problem in those environments: it did not evolve together with the development approaches. Maybe developers sometimes don’t appreciate the feedback they get from a late testing phase, forcing them to redo some of their work or telling them that they implemented the wrong thing. From the developer perspective I have experienced, and also from a management perspective, testers should only verify that specifications have been implemented correctly. And most of the new development approaches found ways to get exactly that feedback faster, without the need for testers. It seems nobody missed the additional feedback and information coming from testers. Who knows why?

Testers who stayed in such projects simply failed to adapt and tried to “only” attach a testing phase to the end of whatever development approach was chosen. There was also not much support from the testing industry. The ISTQB, whose syllabus was initially taught in a waterfall context, only came up with a new certificate, the Certified Agile Tester – not much of a help. ISO 29119 is a documentation-heavy framework that hasn’t found an answer for Agile approaches yet. At the beginning of the millennium, niches like the context-driven testing community, with a couple of thousand testers worldwide, were simply not big enough to make a difference in an industry of over a million people.

Nowadays, testing-phase testers have the problem that development approaches have evolved so far that they cannot keep up any more.

Testers in general have a problem they didn’t cause. Testing was seen as a simple task on the one hand and was kept cheap on the other, since it supposedly added no value to the product, so many cheap people from various backgrounds were hired into testing jobs. In most companies the career opportunities consist of stepping up some ladder and moving out of testing positions. So the best testers were simply lost to other disciplines over time, because of the lack of opportunities in their profession. And I also claim that a good portion of the people who stay in testing for more than five years lack the motivation to improve, or are simply lazy. They stay because that’s all they think they can do, and they have a safe job.

Management also still treats testing as it did two or three decades ago, because back then it was easy to manage – so why change that? No motivation from that side either.

How Agile & Co. cope with testing

Projects, products and companies that work Agile have dealt with the situation: they include testing earlier and let the developers do the job. Many of the checks that come out of testing get automated to speed up regression testing/checking. They keep testing/hardening sprints short, and everybody helps out there when not busy fixing bugs. Approaches like BDD, TDD and ATDD support the cooperation between business analysts and developers, and help developers produce better-fitting solutions.
Some companies even take it to the extreme and let developers accompany their code into production, see there whether it works, and then leave it alone. No testers involved.
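To make the ATDD idea above concrete, here is a minimal sketch in Python. Everything in it is hypothetical and invented for illustration – the Cart class and the discount rule are not from any real project – but it shows the shape of the approach: the acceptance criterion reads almost like the business rule, and it runs as an automated check on every change.

```python
# A hypothetical business rule: returning customers get 10% off
# orders over 100. The acceptance criterion is written as an
# executable given/when/then check before the feature is "done".

class Cart:
    def __init__(self, returning_customer: bool) -> None:
        self.returning_customer = returning_customer
        self.items: list[float] = []

    def add(self, price: float) -> None:
        self.items.append(price)

    def total(self) -> float:
        subtotal = sum(self.items)
        # Business rule under test: 10% off for returning customers
        # when the subtotal exceeds 100.
        if self.returning_customer and subtotal > 100:
            return subtotal * 0.9
        return subtotal


def test_returning_customer_gets_discount() -> None:
    # Given a returning customer
    cart = Cart(returning_customer=True)
    # When they order goods worth more than 100
    cart.add(80.0)
    cart.add(40.0)
    # Then the total reflects the 10% discount
    assert abs(cart.total() - 108.0) < 1e-9


def test_new_customer_pays_full_price() -> None:
    cart = Cart(returning_customer=False)
    cart.add(80.0)
    cart.add(40.0)
    assert abs(cart.total() - 120.0) < 1e-9
```

A test runner such as pytest would pick up the test_… functions automatically; the point is that business analysts, developers and testers can all read – and agree on – the “then” part of such a check long before a late testing phase would catch a misunderstanding.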

Do you think those companies, at least where it works, are crying for a testing phase or more testers on the project? I don’t think so.

In the AB Testing podcast, Alan Page and Brent Jensen regularly share a view of how Microsoft tackles software development processes, and it seems they are not doing too badly.

When claims come up that 90% of the people working as testers should get a new job – James Bach cited such a request in a podcast I stumbled across last year – I say: make it only 70%, but it’s true. If more and more companies successfully change their development approaches, we will need fewer testers. And of course I want to keep only the top 30%, not just any 30%, and certainly not the cheapest 30%.

How the role of testers will evolve

To survive in the world out there, testers have to up their game. Testers cannot waste a big chunk of their time on documentation. My motto since becoming team lead of a small team is: keep people testing. Time spent knee-deep in the system is usually more valuable than writing pages of test cases.

Testing, in my opinion, is all about providing the information stakeholders need to make informed decisions about the project. And that information is collected by actually spending time with the people and the system, not by preparing, updating and grooming long lists of test cases and other useless pieces of documentation.

Testing should be shifted left as far as possible. Regression testing should be reduced to the absolute minimum necessary. As Michael Bolton stated in a discussion yesterday: when you have to do lots of regression testing because the developers don’t understand how changes to the code affect the product, the developers have a problem, not the testers. Those problems need to be addressed differently – not by complaining that the regression test phase is too expensive (I have a friend, who knows someone, where that might be the case).

Having testers in your team should not be a substitute for writing and maintaining automated unit and integration tests because the developers are too lazy to write them, or because the architecture doesn’t favor writing them. Then you have a different problem, and throwing testers at it won’t solve it. But it does provide someone else to blame.

Testers should take care of the bigger picture: the things that are hard to automate, the complex problems. Testers should help developers create good automated scripts, help them understand and test the application, and train them to do the simple tests earlier and faster and to provide feedback faster. And yes, that requires the skills, competence and behaviors of a systems thinker, a detective, an explorer, and a coach. That’s nothing you can buy cheap on the next corner. Those people are rare.

Does the role of testing need to change?

Some people participating in the discussion call on testers to take on different roles in the project in order to still provide value. I see that as the wrong approach. As a tester in the role of tester, used in the right situations, I can provide value to the project that only I as a tester can provide. And I don’t want to give up my role as a tester. I want to continue asking questions, experimenting with the system, analyzing strange problems. I don’t want that to go away.
If my job as tester on my project does not fill my whole day, or if I as an individual decide that I can be more useful and contribute to the project in other roles – roles that may even improve my (part-time) job as a tester – that is a great opportunity, and I strongly advise you to take that chance. But your role as tester doesn’t change.

I don’t think the role of testing should be changed by taking on new tasks. Those are simply other roles that can be filled by the same person (“specializing generalist” comes to mind). I recommend instead making better use of the testing role and focusing it. Tasks that are not testing and that produce more waste than value should be taken away from the role of testers. Enable testers to act faster!

What testers need to do is evolve, or reinvent, the way they work – not by taking on additional jobs in the project, but by finding ways to produce the necessary information faster, earlier and more efficiently. And testers need to dare to hand over simpler testing tasks to developers, to stages or phases where they make more sense.

The testing phase should be changed. If you need an unnecessarily long regression test phase, you had better rethink your development approach. If you run tests and checks in that phase that take days and weeks, you might want to rethink that approach, too. If you regularly run into problems in the testing phase, you have a problem.

And when Brent Jensen says that data scientists and telemetry are attacking testing (episode 39 of the AB Testing podcast), I can only hope that testers position themselves accordingly, stock up their arsenal, and become part of those approaches. For me, that describes excellence in testing: monitor and gather feedback from production, and use scientific methods on that data to inform further actions. Fantastic! Where can I sign up? I am a tester, and I want to master those tools.

Responsibility for the overall decrease of quality

Software-developing companies have had a common problem in the last few years, starting maybe over a decade ago: product quality keeps getting worse. The possibility of faster cycles to production makes many companies negligent about actual quality, because it’s easy to ship a hotfix. The rules of the market only help filter out the worst of them – and also prevent the ones with good quality from the start from getting a chance in the market, because they tend to be too slow.

I would see that partially as a problem of neglecting testing phases, doing the wrong testing, doing testing wrong, or hiring the wrong people to do the testing.

Testers need to fight and find their way back into those projects and provide value to help the team raise the level of quality again.

[Update: The claims offered are my personal views, and I go into more details here.]

What companies need to do

Companies need to decide whether they need or want to stay with waterfall-like approaches, with that big testing phase at the end, and to take into account everything that comes with that choice. Waterfall is not bad, and Agile is not for everybody. And always remember: a good waterfall is much better than a bad Agile-ish attempt.
Some may even have to bite the bullet and outsource their testing to some expensive, documentation-overload-producing consulting company. There might be good reasons for that.

One hint: if you are in a regulated environment and take that as an excuse to demand a documentation-heavy testing process, you might want to rethink your approach. I have heard from multiple people that this is just a myth.

Invest in the future: prepare your people to fill that highly sophisticated tester role. Find those who are willing to invest in their craft and profession. Spend time and money on training and conferences, let them interact with like-minded people, get them engaged, get them motivated, and pay them a decent salary so that they can focus on the job and not on how to pay next month’s bills.
And if you ask what happens if you train the people and they leave, better ask what happens when you don’t train them and they stay!

What testers need to do

Managers also need to change their view of testing and testers. But it’s the testers’ task to build trust in the information they provide and to demonstrate the value they can add beyond test scripts and developers testing their own work. Then managers will hopefully drop the opinion that testers only need to provide a count of test cases to show their value.

Do everything you can to get better at your craft. Learn tools, methods, approaches. Improve your soft skills: communication, critical and creative thinking, problem solving. Stay up to date. Interact with your peers inside and outside your company. Go to local meetups. Read books and blogs. Go on Twitter and engage with other testers. Learn from them where to find more sources of information.

If you want to be among the 10-30% who should stay in the profession, you had better start now.

And my last tip: Be proud to be a tester!


Return to Runö – Let’s Test 2016

Last year, Let’s Test 2015 was my first conference ever, and it sparked my desire to visit more conferences and to contribute to them as well. It was also my way of getting a chance to visit further conferences, since my company was not willing to invest further in external training or conferences, except when directly applicable to an immediate problem we have and cannot solve ourselves.

The call for proposals for Let’s Test 2016 asked for crazy stuff, and I tried to come up with a proposal that showed some craziness – or at least enough craziness to fit the expectation. I came up with two rather last-minute proposals and gave them a chance.
Weeks later I got a message from Dan Billing: “Wanna pair with me?” I had no clue what he was talking about. “Check your mail. Read it and you’ll understand.” No mail, no capisce. But whatever it was, I agreed to it. My mail provider is sometimes a bit slow. When the mail finally came in, I couldn’t stop smiling: one of my ideas had been accepted! The reason behind Dan’s question was that the organizers wanted the speakers to pair up and run an additional crazy workshop.

My next moment of happiness came when it turned out that I got a flight to Arlanda for about 60% of the usual cheapest price. Since I am paying for all my conferences in 2016 myself, that came in just right.

Let’s Test 2015 was a special experience for me. I finally met in real life several of the people I followed on Twitter. They really existed. And there was this fantastic, inspiring and driving energy all over the place. What can I say – I met even more people in real life this year, and the energy was there again, to my surprise in a different flavor, but as exciting and driving as last year. It’s an energy that lets even introverts meet new people, lets you discuss for 3.5 days in a row, 16-20 hours a day, anything test-related or not, makes you hold workshops after midnight, and keeps you fully engaged and energized despite severe back pain and not much sleep.


Oh yeah, one downside of Runö: the chairs. At least some of them. Maybe, in addition to all the tension that had built up lately, they tried to kill my back. It was obviously a muscular thing, since motion helped. But boy, I had never had such pain for nearly four days in a row. You won’t believe it: the energy of Runö helped me through, without the right painkillers (I only had paracetamol with me) or any professional help. It’s simply fascinating. Had I been in the same situation at home, I would have called a doctor on the second day or dragged myself there to get a shot or two.
The only other downside I have found so far is one that most multi-track conferences share: you have to decide where to go, and you are guaranteed to miss something else you would have attended if you could.

By chance I found out about a nature reserve only a few minutes away from the conference venue. So on Monday after dinner I took the chance to get some more exercise for my back and visited that beautiful place. If you ever go to Runö and need about an hour for yourself, or to talk in quiet, take that walk. It’s such a beautiful place. (I added some photos at the end of this post.)

At Let’s Test I gave my first-ever workshop: “Context Eats Process For Breakfast”, a workshop about process analysis, modelling, and understanding the borders. I had applied for a half-day workshop and only got a two-hour slot. I stripped down the initial plan a bit and hoped that improvising during the workshop would let me avoid skipping any essential parts. I was able to finish in time, but sadly I had to hurry through some feedback rounds a bit too fast. I hope the workshop was still useful for my participants. I collected over three pages of feedback and self-reflection notes, and I think I will send that or a similar proposal to other conferences.


Dan’s and my midnight workshop on Monday night – yes, starting at midnight – was not very well attended. But in the end we were seven people, and we used the “prepared” format to discuss a very personal problem of one of the participants, which might not have happened in a bigger group or with a different combination of people. At least I hope we – as a group – were able to help a little with making a better decision.


In addition to giving two workshops, I also volunteered as a facilitator for other sessions – a job where you support the speaker if necessary, organize the Q&A session, simply help prepare and clean up the room, or are not involved at all. I had examples of all four, and I will definitely sign up again. And I encourage everyone to do the same.

Now let’s have a short overview of the sessions I attended. It started – of course, after some nice and decent sounds from an AC/DC cover band – with the opening keynote by Rob Sabourin and his wonderful wife and best friend Anne. Anne is not a tester; she is an obstetric nurse. She kept the whole audience hanging on every word she had to offer, telling a wonderful story from her long experience as a nurse. The topic was triage, something testers also need to do often: what factors do you need to consider in certain situations to make the right decision? The keynote turned into a kind of big workshop and was a worthy kick-off for a great conference.


Chris Blain’s workshop on “Context-driven hopes & dreams” was an interesting one for me, helping me a bit along the way in my search for whether I’m really context-driven. It also had one ingredient I would have loved to have in my own workshop: the pace Chris set was fantastic. It was so relaxed that all participants had enough time even to introduce themselves and set their context for others to understand – something that is often forgotten when multiple people talk about testing. And I have to say that my group produced the best result of all the groups.


My day continued with Chris Blain and Rob Sabourin, this time combined in a workshop on “Task Analysis and the Critical Incident Method”. I liked the workshop a lot, but it also scared the hell out of me, especially when, after the break, Rob started an exercise that was very close to one of mine. Gladly, it took another turn.

I skipped the after-dinner workshop to visit the nature reserve, and then listened to the party from outside (it was loud enough) while having some great conversations. And at midnight, Dan and I held our small workshop, as mentioned above. It was another long day in Runö, and worth every minute.

After a nice breakfast – the food was very delicious on all days – I went to Mark Winteringham’s workshop on using Postman to test web services. I had only played around with SoapUI early last year, so I gladly took the chance to learn some more about the topic. Mark created a wonderful and easy-to-understand example application, with enough bugs for some encouraging moments, and challenging enough to deepen our understanding of the topic.

After lunch I had my workshop, talking about breakfast. But hey, they had the chance to put it in another slot.

If you haven’t heard of “Transpection Tuesday”, I don’t know what rock you are living under or in what part of the CDT community you reside. In the next session, Helena and Erik shared their techniques and approaches and gave some small insights into this living legend. It is something many of us envy them for – but to be honest, there is no reason to: you can start the same thing, you “just” need to find someone to partner up with.

After dinner, I facilitated my first session: “Testing, wine, and food” by Lou and Jo Perold. I don’t drink (much) alcohol, so I was not sorry that there was a table limit of 20 and that I – as facilitator – was not actively included. But I chose that session deliberately and for a good reason. I enjoy watching people engage with food and drinks in a respectful way, and I like talking with those people about their experience and fascination. Observing 20 people carefully trying to understand the taste of some foods and three different wines, and the combinations of the wines with some of the foods, delivered what I was hoping for. It was a fun topic, taken with seriousness and respect, and I was happy to silently observe and provide an additional pair of hands and feet.


On day 3 I missed the first session, as it started at 6 a.m. and my day 2 had ended only around 2:45 a.m. But after breakfast and checking out, I was ready for Dan Billing’s social engineering workshop. The room was quite packed and engaged, and Dan delivered a great workshop on an important topic. If you don’t believe that you leave many traces on the web, you might want to try a self-experiment. Just saying.

Then it was time for the session I had been waiting for for so long: Damian Synadinos’ “Tips & Tricks from Jester to Tester”, a workshop about improv theater techniques that are applicable to (mob) testing as well. I love improv comedy (shows like “Whose Line Is It Anyway?”), I love techniques that improve spontaneous testing, and I was really looking forward to seeing Damian combine both in that workshop. I was also the official facilitator, but felt rather useless in that role. Nevertheless, I enjoyed the workshop, the engaged discussions, and the funny improv pieces as well as the testing pieces. I had high expectations and was not disappointed.


After lunch it was time for my last facilitation, supporting Nicola Owen with her SpeakEasy talk “The Art of Picking Your Battles”. Nicola delivered a great talk and had a very attentive audience that was highly engaged in the following open season (Q&A). If you want to read more about her session, you can find her view here.


The closing keynote arrived, and I had to realize something I had thought impossible: this was my first ever session by Fiona Charles. I had somehow managed to miss her so far, which should not have been possible. Fiona had given 5 sessions at conferences I attended, and this was the first I made it to. Nearly impossible.
The closing keynote was truly worthy of a closing keynote. The topic, “Gaining Consciousness”, was all about learning about the project context. Most of us consider ourselves context-driven, but how many really invest time to find out about the project context and do something with that information? The talk / workshop (yes, again a keynote in workshop format) inspired us to think about our daily behavior and how much we really care about the context. As a closing keynote it did a fabulous job. It has kept me busy with that one thought for a couple of days now, and I guess it will for some more time. So thank you, Fiona, for providing that one thought for me to definitely follow up on this time.

Let’s Test, you have done a great job again. I will do a lot to return to you in 2017, because you are truly a highlight of the yearly conference circus. The energy of Runö is pure fascination to witness. The only sad thing is that I – again – did not have enough time to spend with everyone I wanted to talk to more intensely. But that would need a few weeks, I’m afraid.

Thank you Johan, Henrik and Linda!


Photos from the nature reserve


MVP or not?! – A misunderstood pic goes viral

This morning a pic showed up in my timeline that I had hoped not to see again. But it seems the pic has gone viral and is used out of context numerous times. It was a picture from the Agile Testing Days Scandinavia keynote “Why we do not need testers on the team” by Bent Myllerup, used to demonstrate the concept of an agile approach to software development: delivering value more often and developing software iteratively and incrementally.


Let me describe what I see in this picture, and why I think it’s the wrong picture to describe the concept of the MVP (minimum viable product). My problem is with the right part of the slide.

This picture was made by Henrik Kniberg and is described in this blog post. Henrik also states that his picture is used often, and not always in the right context. So why do I think that this picture is not showing what it should?

The upper part: To me it looks like the customer wants a car. Somehow he gets multiple deliveries while the product is being built. I’m not sure why you would deliver a partial product that’s unusable. But this picture is supposed to show the advantage of Agile, so I accept it as a deliberate exaggeration for the sake of the argument.

The lower part: This picture is supposed to describe the principle of iterative and incremental development, with multiple deliveries bringing (some) value to the client early, so as to gather feedback and deliver the right product in the end. But here comes the problem with using manufacturing comparisons for software development: they usually don’t work. When software is built incrementally, you don’t need to start over after a delivery, and the part that was delivered does not have to be completely redesigned. In this example the customer gets 5 different products. But the second is not based on the first, the third not on the second, and so on. This is an example of a poor understanding of the client’s needs. Either the client doesn’t know what he wants and we slowly find it out together, with the enormous effort of producing five different products. Or we were not able to ask the right questions and only get additional information by delivering something wrong to get more clues about what the client wants.
In the first case at least the client has to pay for 5 different products, I suppose. In the second I presume we deliver five products for the price of the last one, which would not bring in enough money for all the effort we had.
I can imagine one scenario where this is absolutely valid: when you are a salesman at a company offering those five products and a customer comes in who doesn’t know what he wants, you can present him five different solutions until he finds what he wanted. But that is not Agile! And it does not apply to software development either, at least not if you want to stay competitive tomorrow.


So let’s look at different situations to see what a “minimum viable product” is, and how you get from there to offering the client what he really wants. The scenarios I want to look at are:

  • one customer
  • few customers
  • many customers

MVP for one customer

A customer comes to us with a problem (or several) to solve. By asking the right questions we can find out what the customer most obviously wants to have. We find out what the most basic thing might be to solve a part of the problem, then start off in that direction and show him our first draft. The customer can then tell us whether that goes in the right direction or we need a course change. We adjust and add value to the product as we go, until the customer really gets what he wants.

I like Cassandra Leung’s approach of finding a real-life example for this:


MVP for a few customers

Let’s assume we have a few customers who share a similar problem. Our task would be to ask all of them enough questions to find out which parts of the problem they have in common, which parts are shared by some of them, and which parts are unique to individual customers.

Our MVP in this case would start by providing some basic value for the common part, so that all customers can benefit from it. With their feedback we sharpen our understanding of our customers’ problems: where we can solve parts of the problem in a common way, and where we need to start solving problems differently for some of them. Developing features according to one customer’s direction might also influence other customers by showing them different approaches to solving their problem. This might bring value to them more quickly and also shape the desire for the final solution.

An example would be specialized business software, focused e.g. on one sector of an industry, where multiple market participants share a common problem to be solved.

MVP for many customers

This is more like looking for a new product that many customers would want to use. So you need to find a common problem for them to be solved. Since there are too many market participants to ask, you had better get a product owner who knows his business and has a good understanding and overview of the problem at hand. We then start much like when solving a problem for a single customer. When we bring the MVP to the mass market, we start getting feedback from the “many” on whether our solution serves them or not. Based on a vision or on customer feedback, you then start adding features, continuously collecting feedback from the users. As the product grows, you might start developing features that add value for only part of your clients.

You should not shy away from removing features if they are not used enough. Features cost money to maintain, and if they bring no, or not enough, value to your customers, get rid of them again. You might also learn that your MVP solves problems you didn’t have in mind when you started to develop it. Then it’s good to have early feedback, so you can change course and address a different group of customers.

A real-life example would be text messengers for smartphones. SMS was expensive, but internet data connections were cheap in comparison, so text messengers using the internet came up. Then feature after feature was added: sending pictures, voice clips, videos, etc. Some features are not used by all, some more often, some less, and some are removed again when they flop.

My tip: when explaining the concept of the MVP to someone, describe the context. It makes it much easier to understand.
If you think I misunderstood the concept of the MVP, please let me know in the comments.


One example to show how MVP-style development works in non-software industries: car manufacturers. It all started with Ford’s Model T. There was one kind in the beginning, which then slowly started to evolve for different problems to be solved. One company that has accompanied me all my life is Audi. When I was young there were about three different models: a small mid-size (80), a large mid-size (100), and a big model (200 / V8), which later became the A4, A6, and A8. Something for most people to choose from. Then people were looking for smaller cars, and since Audi wanted those customers as well, the A3 came. Then they invented something even smaller with the A2, and something sporty with the TT. Then the palette grew at the small end with the A1, and with sporty versions in the medium to large segment, the A5 and A7. Not to speak of adding cabriolets and station wagons in most segments, and 2- and 4-door versions. In the meantime SUVs came up, starting with the Q7; later came the Q3 and Q5 to cover the small and medium-sized SUV segments. It took a couple of decades, but now most customers find something to solve their problems (taking into account the size of your purse, of course). And it all started with a single model. But Agile is not a solution here, because you don’t want half a car.


I want to thank the following people for helping shape my thoughts today, triggering the topic, challenging my statements, agreeing and disagreeing with me. I hope I didn’t forget anyone.
Dan Ashby, Jose Diaz, Bent Myllerup, Thomas Ponnet, Aleksis Tulonen, Robert Meaney, Hannes Lindblom, Jokin Aspiazu, Michele Cross, Cassandra Leung, Tim Ottinger


Test Automation – Am I the only one?

What would the world of testing be without test automation? Well, I assume a bit slower than it is with it.

In this post I don’t want to speak about:

  • There is no such thing as automating “testing” – I know!
  • It’s not about using tools to assist testing
  • It will not be a rant against vendors who declare that 100% test automation is possible – no, it’s not!
  • Test automation is free, besides license costs – no, sir. It’s f**king expensive.
  • Test Automation doesn’t find bugs, it simply detects changes. Humans evaluate if it’s a bug.

So what is this post about? It’s about my personal fight with test automation and the risks I’ve identified with it – risks that don’t seem to bother most of the people working with test automation. So, am I worrying too much? You want to know what bothers me? Let me explain.

There are lots of people who treat test automation as a silver bullet. “We need test automation to be more efficient!”, “We need test automation to deliver better software!”, and “We need test automation because it’s a best practice!” (Writing the b word still makes me shiver.) If you are on a product/project that doesn’t use test automation, you quickly get the impression that you are working in an outdated way.

My personal story with test automation started a while back when I joined my current company. Implementing test automation was one of the main reasons I was brought in. After nearly 2.5 years there was still nothing, because my team and I were busy with higher-priority work – busy testing everything with minimal tool support. Sounds a bit like the “lumberjack with the dull axe” problem, if you belong to the test-automation-as-silver-bullet faction: no time to sharpen the axe, because there are so many trees to chop. In May 2015 I got the assignment to finally come up with a test automation strategy and a plan for implementing it. Reading several blogs on the topic, especially Richard Bradshaw’s, quickly formed some sort of vision in my head. I know – against a vision, take two aspirins and take a nap. But really, a plan unfolded in my head. And again we had no time at hand to start on it. Some parts of the strategy were started, some proofs of concept were implemented. For three weeks now I have had a developer full-time to finally implement it. Things need time to ripen at my place.

Now I am a test lead with no real hands-on experience of automating tests, and I have a developer who can implement what formed in my head. But with all the time between creating the strategy – a strategy I still fully support as useful and right – and implementing it, I also had enough time to apply some good critical thinking to the topic.
And finally, last week at the Agile Testers’ Meetup Munich the topic was “BDD in Scrum”, and thanks to QualityMinds, who organized the event, we not only got a short introduction to BDD, we also had the opportunity to do some hands-on exercises.

Why am I not a happy TestPappy now that everything is coming together? Here are my main pain points – risks I would like to address and that I need more time to figure out.

Why do people have more trust in test automation than in “manual” testing? It seems that people are skeptical when it comes to letting people do their job and test the right things. But once you have written 100 scripts that run on their own, 3-4 times per day, every day of the week, producing green and red results, the skepticism disappears. It seems to me that no one actually questions an automated test once it’s implemented.

Automated checks of “good quality” need well-skilled people. Does your stomach start to turn when you read “good quality”? Good, we are on the same page. The most important quality characteristics an automated check should have are, in my opinion: completeness and accuracy, stability, robustness and trustworthiness, scalability to some degree, maintainability and testability, and some more. That’s a shitload of things to take care of when writing some simple checks. To be honest, most of these criteria our application itself doesn’t fulfill to a sufficient degree, at least not when it has to stand up to my demands. How can a test team take care of all that while generating hundreds of necessary checks? Now I got lucky and was able to hire a developer to implement the automation framework of my dreams, so I have some support on that front. But once we start implementing the checks themselves, my testers and I need to implement them, or at least help to. How do you take care of the problem that all checks need to have “good quality” to be reliable, not only today but also next week and next year?

How do I know that my script checks the right thing? I’m a very explorative tester. I usually don’t prepare too much of what I’m looking for; I let my senses guide me. So when I hand over a certain area to be covered by a script, I have to make decisions about what to cover. At least in my context, I am pretty sure that I will miss something when I give that out of my control. How do you handle that?
My first attempt at implementing some automated checks 3 years ago was to call every page that I could reach without too much logic and take a screenshot. I would then just quickly walk through the screenshots and check for oddities. But this is more a tool to assist my testing, not something able to run without me or some other human. Simply comparing screenshots and only checking screens that differ from a baseline is not really possible, since the displayed data usually changes often in my context.
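The baseline-comparison idea can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name and directory layout are made up, not my actual setup): it assumes one PNG per screen in a baseline folder and a current folder, and it compares raw bytes – which is exactly why dynamic data breaks the approach, as any changed date or counter makes the whole screen “differ”.

```python
import hashlib
from pathlib import Path

def changed_screens(baseline_dir: Path, current_dir: Path) -> list:
    """Return the names of screenshots that differ from the baseline.

    Only the screens listed here would need a human to look at them.
    Byte-level comparison is deliberately naive: any dynamic data on
    a page (dates, counters) makes the whole screen "differ".
    """
    def digest(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    changed = []
    for current in sorted(current_dir.glob("*.png")):
        base = baseline_dir / current.name
        # A screen without a baseline is new and needs a look too.
        if not base.exists() or digest(base) != digest(current):
            changed.append(current.name)
    return changed
```

In my context the resulting list would simply be too long every day – which is the whole problem.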

What am I missing? My favorite topic of the past 2 years is ignorance, in this case the unknown unknowns. How do I handle that question? I’m sure I miss lots of things with a check, but can I be sure that I don’t miss something important? Once an area is covered with automation, how often do you come back to investigate it – to review whether you missed something, whether you need to extend your checks, or whether you need to redesign the whole approach?

How do you trust a green test? There is always the problem of false positives and false negatives. False negatives waste time, but in the end I have double-checked the area and covered more than the script did, so I’m okay with handling those. But false positives are mean. They say everything is all right, and they hide in a big list of other passing tests. So for every single check, every single assertion, you need to think about whether there is a way the result comes out “true” when it’s really “false”.
It also depends on what the script is covering. If you forgot to cover essential parts, you will miss something. But the check will not tell you; it simply can’t. It’s not programmed to do so.
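A tiny, hypothetical illustration of such a false positive (the service and its data are made up): an assertion that only checks that *something* came back stays green even when the response is an error.

```python
def weak_check(response):
    """Green as long as *something* came back - this check can lie."""
    return len(response) > 0

def strong_check(response):
    """Green only if every record has the shape we actually expect."""
    return len(response) > 0 and all(
        "id" in record and "error" not in record
        for record in response
    )

# A buggy service answer: an error message wrapped in a list.
buggy = [{"error": "database timeout"}]

print(weak_check(buggy))    # True  - a false positive, all looks fine
print(strong_check(buggy))  # False - the problem surfaces
```

The weak assertion is exactly the kind of check that hides in a big list of green results.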

Automating at the API/service level? Richard Bradshaw presented this nice topic of automating things at the right level on Whiteboard Testing. Many tests would be better run at the API/service level. I agree, to a certain degree – as long as there is no business logic implemented on the client side (e.g. in the browser) that I need to emulate. When I need to mock front-end functionality to effectively interact with an API, I have to re-create that logic based on the same requirements. Do I implement the logic a second time to also test whether the implementation is correct? Do I somehow have the possibility to re-use the original front-end code – and miss problems in there? Do I trust the test implementation more than the front-end implementation? If so, why not put the test code into production?
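To illustrate the duplication worry with a made-up example (the discount rule and all names are hypothetical, not from Richard’s material): if the front end computes a discount before calling the order API, an API-level check has to re-create that rule – and if both implementations misread the requirement in the same way, the check stays green.

```python
def expected_discount(cart_total: float) -> float:
    """Re-implementation of the (hypothetical) front-end rule:
    10% off for cart totals above 100."""
    return round(cart_total * 0.10, 2) if cart_total > 100 else 0.0

def check_order_api(cart_total: float, api_response: dict) -> bool:
    """Compare the API's answer against our duplicated rule.

    The logic now exists twice, but the understanding behind it
    only once - a shared misunderstanding keeps this check green.
    """
    return api_response["discount"] == expected_discount(cart_total)

print(check_order_api(150.0, {"discount": 15.0}))  # True
print(check_order_api(150.0, {"discount": 0.0}))   # False
```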

And the list of worries could go on a bit longer, but I’ll stop here.

Please help me! Do I worry too much about automating stuff? I would be very happy about some comments on my thoughts – are these worries shared, or maybe already solved? And if they are overblown, I want to know that as well.

My TestBash 2016 experience 

It’s now a week since my first ever TestBash, not only as an attendee but thanks to Rosie and Richard also as a speaker and workshop helper.

Arriving on Wednesday in Brighton made for a smooth start: checking in, going to the beach for a few minutes, and then heading off to the pub for the Pre-Pre-TestBash Meetup. I finally had a chance to meet Rosie, Kim and Emma in person. These three do so much good for the testing scene: Rosie worldwide, and Kim and Emma organizing the Brighton meetups. And I got a chance to meet Pekka in person. What a show!
This being only my third conference, I had the advantage of meeting several known faces again, which is a great feeling, and also meeting lots and lots of new faces. The good thing about this testing community is that people are integrated in no time. And TestBash is no different there.
I also got a chance to meet Martha, Emma Armstrong and Nicola Sedgwick, some of the folks I’d work with the next day as a helper on the workshop day.

Arriving early for the workshop on Thursday at the Brighton Dome gave me a first idea of the venue the conference day would be in. And it’s a great place.

Nicola and Christina had a great set-up planned for their workshop, and they were wonderfully prepared with those amazing sketchnotes. Martha and I checked in the attendees and then joined the cafe crowd in the Dome, with Rosie and Julie. Using the chance to work on my slides some more, I also slipped in and out of the workshop to take some photos and gather some impressions. Then I was busy with some more organizing for the lunch break and running errands with Martha: leading the crowd to the Corn Exchange for lunch and then back to the room to rearrange it for Emma’s workshop.

Emma’s topic was the visibility of testing. An important topic. “What are you doing?” – “I’m testing.” Well, that doesn’t say much and leaves room for a lot of interpretation. Emma brought that to the participants’ awareness.
Emma created groups of four, one of which I ended up filling in on. And it was not just some group – I got the chance to pair up with Lisa Crispin. And what an inspiration she is. She just asks great questions instead of relying on assumptions: when there is a PO, ask her. Focusing on visibility while also trying to test something was hard. But somehow we managed to find a way to make our testing visible, and we got the ideas Emma shared with us.

At nightfall it was time for the Pre-TestBash meetup at the OhSo by the beach. It was packed with testers. Again several familiar faces, and lots of new ones. At some point I ended up in a corner speaking German with testers from the UK who were originally from Germany, and some other German-speaking testers. Meeting Noah Sussman was also a great chance for me. I really like this guy and the opinions he shares on Twitter. Still fighting a cold, I nearly lost my voice in that bar, so I headed back to the B&B earlier than planned to work more on my slides.

I got the chance to bounce some ideas off Damian via DM that evening, which was a very calming help. So I changed and rearranged my slides some more. Hey, it was just the evening before the talk. 😉

The next morning I changed the slides some more, happy that my voice somehow survived, and headed to the Brighton Dome for the conference day. Checking in, getting my goody bag, giving my slide deck to Mark, and mingling in with the crowd. Of course meeting several new and familiar faces.

Then it was time for the conference. To be honest, I knew who would present, but I didn’t remember looking at the topics. On the one hand I was nervous; on the other hand, it’s a single-track conference, so just sit there and enjoy. I can assure you that Rosie and her team made a great selection – plus my talk. And thanks to Mark and MC Vernon, who did a great job moderating the whole show, I was able to enjoy most of the morning sessions without too much nervousness.

Lisa and Emma talked in their keynote about building the right thing and shared great stories. Dan talked about the importance of security testing, a topic he thankfully doesn’t seem to tire of. Katrina talked about pairing up for testing and how she set up the experiment at her company. And then it was showtime for John Stevenson, speaking about model fatigue. To be honest, I missed most of his show, and I’m glad it was recorded – I was getting wired up and prepared for my talk.

By the time I was called to the stage I had completely forgotten my nervousness. The lights in the Dome made the audience more of a vague presence, but not enough to prevent interacting with the crowd, which was good. Long story short, time passed quickly and my talk ran over by a few minutes, which (sadly) left no time for Q&A.

And off to lunch with Helena, who immediately gave me some honest and great feedback, and also helped me figure out how to improve my talk if I want to give it again.

At lunch serendipity struck again. Out of the 300+ people I could have sat next to, I sat down next to a tester from Munich who now works with a former colleague of mine. Funny coincidence!

After lunch we were in for a treat: another first-time conference speaker. Michael “Wanz” Wansley – yes, the Grammy-awarded singer, who also happens to be a tester for Microsoft – shared his story as a gatekeeper. What a stage presence and a great show!

Anna and Andrew talked next about their experience with setting up the right approach to, and amount of, test automation at their company.
Nicola Sedgwick talked about a very sensitive topic: how thick the skin of a tester has to be. She described too many things that are all too familiar to me, which made the talk a very special and emotional one for me.
Bill Matthews spoke about one of the most interesting topics for the immediate future: how to test smart algorithms. A very interesting challenge, and I will try to spend some time in the near future thinking more about that topic.
Last up was another first-time speaker. Nicola Owen talked about her experiences of being part of a test team with no decision-making power of its own, and of being the only tester on a team, and her challenges with both extremes.

Then it was time for a lot of great 99-second talks. Many of those people we will see back on stage soon, I’m pretty sure. And I don’t mean the ones we already see from time to time on conference stages around the world.

Off we went to the Mesmerist for some food and Post-TestBash mingling, including a song, poetry slams and parrot jokes. I got some time to talk more with Mark and Martha, met Dr. Jess, and had a great long talk with Danny.

On Saturday morning I joined the crowd at the Breakfast Club for a cup of tea, and because it was too tough to say goodbye to the amazing folks at TestBash.

But then my vacation started: picking up my wife and daughter from the airport for some more time in Southern England.

Thank you, TestBash, for this amazing experience and for being an outstanding conference on the schedule. I will always remember you!

State of Testing Survey 2016

[UPDATE] The State of Testing Survey Report 2016 is now available here! Thanks everyone for participating! [/UPDATE]


Usually I like to write long posts trying to explain my thoughts, but this time I’ll keep it short.

Unless you are living under a rock, you should be aware of the “State of Testing” survey. PractiTest and the awesome test magazine “Tea Time with Testers” are organizing the 3rd State of Testing survey. And guess what – the booth is open, and it’s not too late to participate.

It’s already the biggest survey in the field of software testing, with over 900 participants last year, and the goal for 2016 is to get more than 1000 testers to participate, making the survey even more valuable.

If you care about your profession as a software tester, and you want to contribute to the community, simply go here and help with your feedback by taking the survey for 2016.

Not sure what to expect from the survey? Well, you’d better take a look at the 2015 report and see for yourself.

And please don’t forget to tell others in your company, via your social media sites or at local meetups. Spread the word!