State of Testing Survey 2016

[UPDATE] The State of Testing Survey Report 2016 is now available here! Thanks everyone for participating! [/UPDATE]


Usually I like to write long posts trying to explain my thoughts, but this time I’ll keep it short.

Unless you are living under a rock you should be aware of the “State of Testing” survey. PractiTest and the awesome test magazine “Tea Time with Testers” are organizing the 3rd State of Testing survey. And guess what, the survey is open and it’s not too late to participate.

It’s already the biggest survey in the field of software testing with over 900 participants last year, and the goal for 2016 is to find more than 1000 testers willing to participate, making the survey even more valuable.

If you care about your profession as a software tester, and you want to contribute to the community, simply go here and help with your feedback by taking the survey for 2016.

Not sure what to expect from the survey? Then take a look at the 2015 report and see for yourself.

And please don’t forget to tell others in your company, via your social media sites or at local meetups. Spread the word!



Challenge accepted – Integration Testing

James Bach recently published a very interesting blog post about a conversation he had with a student on the topic of “integration testing”. In the end James did not share his own thoughts on the topic, only some teasers about what he thought of the answers his student gave. He set a challenge for his readers to think more about the topic themselves. And guess what, that’s what I did. Here are my thoughts.

My past being an integration tester

12 years ago I grew up as a tester in a rather big system landscape, as the responsible test specialist for a particular system that had three incoming interfaces from three different systems and two outgoing ones to two separate systems, capable of processing millions of data sets per day. We – the test team – were responsible for the end-to-end tests of the system stack, and we even had a test phase called I&T (Integration and Testing). In the beginning we were in charge of the environment where all back-end systems were actually integrated with each other for the very first time.

So do I know by heart what integration testing is? No! So I gave it a good sit and think.

Input from the twittersphere

Rather obviously, Alan Page had also read the post and tweeted an interesting statement:

At first I liked it, shallow or not. I read the statement late at night and slept on it.
I came to the conclusion that what Alan described was not integration testing. Those are interface tests, unit tests, component tests, tests for robustness, or whatever you want to call them (I have to think about that later), but it’s not what I’d describe as integration testing. Several elements are missing for that.

To be clear, I would say the tests Alan described are very important, and I really mean very important. They check the robustness of the interfaces and can reveal whether a system goes down when it receives a strange message. This is especially important for web services and other interfaces that potentially talk to many different partners.
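To illustrate what such a robustness test might look like, here is a minimal sketch (all names and inputs are made up for illustration): one interface handler is hammered with strange messages, and the only requirement is that it answers instead of crashing.

```python
# Sketch of a robustness test of the kind Alan describes: feed one
# interface strange inputs and check the system stays up.
import json

def handle_message(raw):
    """Toy message handler: must never crash, whatever it is fed."""
    try:
        data = json.loads(raw)
    except (ValueError, TypeError):
        return "error: unparsable message"
    if not isinstance(data, dict) or "type" not in data:
        return "error: missing type"
    return f"ok: {data['type']}"

strange_inputs = [b"\x00\xff", "", "{", '{"type": "order"}', "[1, 2, 3]"]
for raw in strange_inputs:
    # The only requirement here: a response, not a crash.
    assert handle_message(raw).startswith(("ok", "error"))
```

Note that nothing here involves a second real system; that is exactly why these tests are not yet integration testing.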

What I think integration testing is

So what is integration testing, in my opinion?
For me, integration testing means actually having two or more pieces of software (systems, programs, modules, classes, etc.) that you bring together in one (test) environment and let communicate in their “natural” way.

It’s a test phase that does not primarily use drivers and stubs for the interfaces under test. If your individual systems support sending test messages via the interface without using the actual functionality, or resending messages, that’s okay.
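The role of the stub can be sketched in a few lines of Python (all names here are hypothetical): before integration, a stub stands in for the partner system; integration testing is when the stub is swapped out for the real thing.

```python
# Component tests talk to a stub that mimics the partner system;
# integration tests replace it with the real implementation.

class BillingStub:
    """Stand-in for the (hypothetical) billing system, used before integration."""
    def submit(self, invoice):
        return {"status": "accepted", "id": "stub-1"}

class BillingClient:
    """The real client that would talk to the billing system over its interface."""
    def submit(self, invoice):
        # In a real integration environment this would send the message
        # over the actual interface (queue, web service, file transfer, ...).
        raise NotImplementedError("only available in the integration environment")

def process_order(order, billing):
    """System under test: hands the invoice to whichever partner it is given."""
    result = billing.submit({"order": order, "amount": 42})
    return result["status"]

# Component test: the stub keeps the test independent of the partner system.
assert process_order("A-100", BillingStub()) == "accepted"
```

The design point: as long as `BillingStub` is in play, the interface has never been exercised for real, no matter how many tests pass.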

What will be checked during integration testing

The context is important. All possible production scenarios should be tested in this phase. By “all possible” I don’t literally mean all possible; being a tester, you know what I mean: reduced to a manageable number of scenarios. Besides real payloads, this could also cover empty payloads, one partner answering slowly or not at all, duplicate messages, and many more, depending on what your sending interface can actually produce. Many of these scenarios should already have been covered in the robustness tests. In this phase, nothing should go over the interfaces that they haven’t seen during previous test phases.
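A scenario list like that can be driven as a simple table. The sketch below is purely illustrative (the toy receiver and its responses are assumptions, not from any real system): it covers a real payload, an empty payload, and a duplicate message.

```python
# Sketch: driving a table of integration scenarios against a toy
# receiving system that rejects empty payloads and ignores duplicates.

seen_ids = set()

def receive(message):
    """Toy receiving system: returns its response status."""
    if not message.get("payload"):
        return "rejected-empty"
    if message["id"] in seen_ids:
        return "ignored-duplicate"
    seen_ids.add(message["id"])
    return "accepted"

scenarios = [
    ("real payload",      {"id": 1, "payload": "order data"}, "accepted"),
    ("empty payload",     {"id": 2, "payload": ""},           "rejected-empty"),
    ("duplicate message", {"id": 1, "payload": "order data"}, "ignored-duplicate"),
]

for name, message, expected in scenarios:
    assert receive(message) == expected, name
```

In a real integration phase the messages would of course go over the actual interface rather than a function call; the table of scenarios is the part that carries over.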

During integration testing it’s all about the real thing. Can those two or more components actually communicate? You could have checked the incoming interface a billion times with test scripts, but the first contact with the real thing is always different. There is a saying that no plan survives contact with the enemy; I think the same is true for interfaces. There are so many other aspects that can go wrong which should be covered during integration testing at the latest. Off the top of my head: problems with different time zones, different character sets, encryption, connection speed, protocols, and a few more. But I would estimate there are hundreds of other possible problems waiting out there.
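The character-set case is easy to demonstrate. In this made-up sketch, each side works perfectly in isolation and would pass its own tests, yet the integrated result is garbled because the two systems silently assume different encodings:

```python
# Sketch: two systems that each work fine alone but disagree on the
# character encoding of the shared interface.

def sending_system(text):
    # Sender serializes with Latin-1 ...
    return text.encode("latin-1")

def receiving_system(raw):
    # ... while the receiver assumes UTF-8.
    return raw.decode("utf-8", errors="replace")

message = "Größe: 10 m²"   # perfectly valid on each side alone
received = receiving_system(sending_system(message))

# Each system passed its own tests, yet first contact corrupts the data.
assert received != message
```

This is exactly the class of defect that stubs hide: a stub written by the same team tends to share the same encoding assumption as the tests around it.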

What integration testing does not cover

Integration testing does not check whether, when one communication partner is changed in the future and the payload changes with it, the unchanged system is still able to handle the new data. Nor does it check that the system can handle anything that comes its way without going down or turning inside out. Robustness is a topic for the phases or test types that Alan described.

What James also mentioned was the level or form of integration, but I have not finalized my thoughts on this, so it’s not covered here yet.


That’s it with my thoughts for today. I might update this post once I have further insights. Thanks for reading, and please let me know in the comments what you think of my explanation of what integration testing is and what it is not.