The Bowl of Communication

I started with my new company this year. And when I came here, the test team had a bad reputation: managed from across the Atlantic, partly offshore itself while bringing in more offshore resources, even though development also sits here, and with one team member most people tried to avoid because of poor communication skills. The team members here in Munich were not talking to each other; areas of the system were split between people, even though nearly all parts of the system interact somehow.

To break the ice, I tried to communicate as much as possible with development and project management. But nobody came to my desk.

So I brought in the “Bowl of Communication”. Being a hobby woodturner helps. I put that nice thick beech bowl on my desk and filled it with sweets. I positioned it near the door, so that everybody coming along would see it. And it soon started to work. People came in to grab some sweets, have some small talk, and use the chance to discuss something technical.

During summer the bowl was empty. I could have brought gummi bears, but chocolates are always better, and our room was far too hot. Yes sir, south side, and no air conditioning. Visitors became scarce.

In September the weather got better again. The project was a bit frustrating, so I brought in the sweets again. And it helped again. People were stopping by, sometimes several times a day: bringing in news from the project, adding information on where to look more intensively, letting off steam, discussing bug reports, philosophizing about the specification. And all supported by a small bag of gummi bears or chocolates.

What one small bowl of sweets can change…


And yes, that’s “Explore It!” by Elisabeth Hendrickson (@testobsessed).
It’s always good to have some interesting-looking books on the desk. Sometimes they are also a good way to start a conversation!


I am a man of strong convictions…

“I am a man of strong convictions, but I hold them very lightly”
by James Bach

What does that mean? It means that James will fight tooth and nail for the things he believes, the things he holds dear, the convictions he has, but he reserves the right to be convinced otherwise; and if he can be convinced, he is perfectly willing to drop the previous conviction.

comment by Mike Larsen

Why “why” makes a tester’s life easier…

A good story tells you not only what has been done or achieved, it also tells you WHY. How could you rate the solution (the what) if you don’t know the problem (the why)?

I have some examples where a short explanation of the why saves time and adds so much more.

Writing test cases
Have you ever followed test cases step by step, not thinking about what you were doing, just checking whether the system meets the expected result to the letter? Were you happy with that? If yes, please stop reading.

Have you ever tried to learn something from a detailed pre-scripted test set that explains only what to do, in every detail?

“Change the string Test1234_-%$ to Test1234 _-%$”. OK, what? Let’s check the expected result. “An error message should occur.” Or better: “Saving should not be possible.” Now that explains everything.
A short explanation that spaces are not allowed in that string, while alphanumeric and some special characters are, would have helped to understand the step. And in this special, but real, case you end up creating the same test data string on every execution of that step. If someone tells you what you have to do and why you do it, you can easily come up with your own examples of test data. And maybe even find new bugs.
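To illustrate, the why can even be written down as a rule that testers derive their own data from. Here is a minimal Python sketch, assuming a hypothetical validation rule (alphanumeric plus _ - % $ allowed, no spaces), since the real specification is not given in the test case:

```python
import re

# Hypothetical rule derived from the "why": the field accepts
# alphanumeric characters and a few special characters, but no spaces.
ALLOWED = re.compile(r"^[A-Za-z0-9_\-%$]+$")

def is_valid(value: str) -> bool:
    """Return True if the string should be accepted by the system."""
    return bool(ALLOWED.match(value))

# Knowing the rule, a tester can invent fresh examples instead of
# replaying the one scripted string:
assert is_valid("Test1234_-%$")        # the scripted happy case
assert not is_valid("Test1234 _-%$")   # space -> should be rejected
assert not is_valid("Test 1234")       # any space fails
assert is_valid("abc_DEF-99%$")        # a new example, same rule
```

One line of rule replaces a pile of scripted strings, and every tester can vary the data on each execution.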

With one good why you can save many test steps and make the test case insensitive to changes. Without it, a lot of non-testing time goes into combing through piles of old test cases and test steps, finding all the little adaptions that need to be made. Or some trained monkey comes to you every other minute telling you that this button is not there anymore, or the label has changed, and asking if that’s a bug. And if you then ask “Why me?”: because you, or someone, should have added a “why”.

In my experience it is much easier to write a test case that can also be executed by someone else when you explain in more detail why the following steps are done. OK, that someone has to have a brain and be able to use it. Rare, but possible… (sorry, it’s Friday night, I’m in the mood)
Imagine that a year from now, when you have long left the project, someone tries to follow your detailed steps, but there have been some minor changes to the system. Not much, really: a button was added, a string has changed, there are now three options instead of two. What would you hope for? That someone explained why this test case or step has to be done. If the why is there, chances are high that the tester knows what to do despite the changes.

Note Taking
When taking notes, what do you put down? I am a lazy note taker. I tend to write down IDs of test data sets, maybe changes I made, and some results. Maybe some hints, more like insider shorthand. Same for you?
I don’t know about you, but a short why would help me understand such a note a couple of weeks later, without much thinking and reconstructing.
It is even more important when you fill in the notes of a test session that will be reviewed or debriefed. A short “why you did that” quickly shows what you were thinking about and gives the debriefer a fast idea whether that meets his expectations or whether you might have missed something. Test notes with a why might even help others learn from your written thoughts when they read them a couple of months later.
Shmuel Gershon said in his EuroSTAR webinar about note-taking: always add the why. And I have to agree, it really helps. No, I have to admit: it would help, if I were not too lazy to take good notes.

Daily Doing
Someone has given you a task, told you what to do. Do you always ask “why”? Just think a bit about using “why” in your daily doing. You do so many things without asking. Thinking briefly about the “why” might help you understand the task better. Here we come back to: understand the problem first, then create the solution.
If you are the one giving out tasks, do you add the problem to be solved, the “why” someone has to do “something”? Of course not; you’re the boss, everybody has to do what you say.

If you want someone to learn from your test cases, or you want to remember next year what brought you to write them this way or that, always add the WHY. If you want to know whether the time you will spend on a task is spent well, ask “why” first. If you want a solution to your problem from someone else, add the “why”.

My 2 cents on metrics in software testing… (Part I)

As always, it depends on your situation and your context. This is my personal view on using metrics. In this article I want to add information and thoughts about the different metrics that are commonly used in software testing projects when it comes to measuring the success of Testing Service Providers (TSPs).

In the past I gained a bit of experience with adding offshore resources to test teams, working together with an offshore dev team, and using a complete near-shore test team for certain parts of a big integration project.
Lately my new team is again supported by a couple of testers from a Testing Service Provider located abroad. They were assigned to another project team for the first half of the year; now they are back with my team. Since I only started working at the company in January 2013, I have not yet had a chance to evaluate the quality of their work. So I started looking for methods to measure external testing resources effectively.

When I skimmed through the “Practical Approach to Software Metrics” by Cem Kaner, I came across some points that I want to summarize in my own words.
* We use metrics to gain information, but most metrics are invalid (to some degree) for that purpose.
* We have to learn about strength, weakness and risks of our tools / metrics, to improve them and mitigate risks.
* We need to look for the truth behind numbers.
* We need to use detailed, qualitative analysis to evaluate the validity and credibility of the metrics.
So this will be a rough guide for me to evaluate the metrics I found.

And I will never forget the statement: if you measure someone by a number, the measured will become that number.
And there is a saying among German electrical engineers that translates roughly to “who measures, measures crap”, referring to the fact that the act of measuring itself influences the system being measured.

One of the first sources of information I found was a webinar by RBCS about “Measuring Testing Service Providers” that I read about on Twitter. Since I knew Rex Black only from the Twitter bashing contests about “ISTQB” and “Best Practices”, my expectations were set to a certain level. But I have to tell you, I was not as disappointed as I expected to be.

Since RBCS is a testing service provider itself and coaches on measuring its own kind, that’s a bit like asking the wolf to help protect your sheep barn from wolves. But since the companies that use testing service providers are not interested in sharing their knowledge and experience with the world, all that’s out there comes from TSPs and consulting agencies. So let’s take a look at those metrics.

“Find defects”
Measure the count and priority of defects and compare them with the defects found in production. The metric is called “defect detection effectiveness” or “defect detection percentage” (DDP), and I first learned about it nearly a decade ago. It is used to evaluate the effectiveness of the different test stages. You count all bugs found in all available test stages, including production, and you compare the number of defects (usually broken down by priority) of your test stage with those found in the stages after it. From stage to stage you should find fewer defects. The theory expects good test stages to have a DDP of 85%–90% and up.
This can only be measured once the product/project has been live for a certain time frame (usually 90 days). So you get a result long after the job has been finished. And you only get valid results for your TSP if you outsourced the stage completely or have another way to limit the calculation to the work packages the TSP tested.
You also get valid results only for certain kinds of projects. You need a good basis for your production bugs. Do you have many customers, a few, or only one? How disciplined are your customers when it comes to using the defect process? Do your customers tell you about every bug they find? And you will need some time to filter through the bugs to prepare the data for comparison.
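The calculation behind DDP is simple; here is a minimal sketch with made-up numbers (the 170/30 split is purely illustrative):

```python
def ddp(found_in_stage: int, found_later: int) -> float:
    """Defect Detection Percentage: the share of all known defects
    that a test stage caught, relative to the defects that escaped
    to later stages (including production)."""
    total = found_in_stage + found_later
    if total == 0:
        return 0.0
    return 100.0 * found_in_stage / total

# Illustrative numbers: system test found 170 defects,
# 30 more surfaced in the first 90 days of production.
print(ddp(170, 30))  # -> 85.0, right at the theoretical lower bound
```

The formula makes the dependency obvious: the number is only as good as your count of escaped defects, which is exactly what the restrictions above are about.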

“Find important defects”
Of course this is a variant of the “Find defects” metric, focusing e.g. on priorities of critical and high, or whatever grades you measure your defects with. So all restrictions of the “Find defects” metric apply here, too. Plus there is the risk that the prioritization is no longer the same once the project is live. And the TSP might try to rate the defects it finds higher to boost this metric.

“Cover the Test Basis”
I quote this directly from Rex’s slide: “Engagements should include clearly defined test scope (e.g., requirements, risks, etc.), which is the test basis”, then you might measure the test basis coverage.
That is basically a good idea. But how do you define whether a requirement, risk, etc. is covered completely, sufficiently, at all, or to your satisfaction? Without a very good understanding of how to test each item on the list you measure, this metric tells you nothing at all. And as always with a metric, every item counts the same. Is that the truth in your projects?
You might have a valid point here if you use the “metrics” of a report dashboard, as in session-based testing (e.g. as described here).
But you still have to trust the TSP about the degree of coverage, or you need a very exact description of every item.

“Report in Time”
This “metric” counts whether the regular reports are delivered on time. Now that is a nice way to evaluate the testing skills of your TSP.
Yes, discipline is important for a TSP. But whether the report is on time tells you nothing about the quality of the report, nor about the quality of the testing the report is about. So, a nice add-on, maybe, but not useful for the original purpose.

The next metric is only included because Rex mentioned it.
“Assign skilled, qualified testers”
And the corresponding metric would be the “percentage of qualified testers assigned”. I won’t count that as a reliable basis for a metric, for many reasons. Qualifications, resumes and certifications can be trimmed in a certain way and to a certain degree. If you actually speak with the people you will hire, some are always good at self-marketing and some are not so good. But that doesn’t say a thing about their abilities as testers. And of course there is always the chance that you end up with a different “resource” than the one you initially hired, because that resource was not available, and of course you get someone with the same experience and quality. Right!

“Finish within approved budget”
Well, that could be either a metric or a criterion for finishing the tests: stop testing when you’re out of budget. But when it comes down to a metric, even Rex states that you need a good estimation process and change request process in place. OK, but when you don’t hit the budget, was it the estimation or change request process, or was it the performance of the TSP?
And Rex mentioned, of course, the positive return on investment. But why should I have to meet the budget 100% to keep it positive, Rex? Is your ROI, however you define that for testing, calculated so tightly that you cannot afford to spend even 10% more without additional benefit?

And now to the surprising part of Rex’s webinar. That’s a method I have already seen in action, but completely forgot about.
“Stakeholder Surveys”
“Meaningful, Actionable Results Reporting” and
“Defect Report Satisfaction”
Now that’s something where I see an aspect of quality being measured. You ask the stakeholders and project members for an evaluation of different topics, give grades like in school, and measure that over time.
If you can keep this on an objective level, and use good facts as reasons and examples for your evaluation, this has in my opinion the most value.
There are negative aspects to that “metric”, though. First, objectivity: if some project members don’t like each other or have other personal differences, that will influence the result. Second, the results can be used for project-political reasons (I saw that the last time I participated in such an evaluation process); that distorts the context and with it the value of the metric. And last to name here, it is very labor- and time-consuming to do this right. So far for this set of slides.

I know of a metric that is pretty special when it comes to measuring TSPs.
“Number of test cases executed”
If you use a pre-scripted approach and have a certain number of test cases to execute, this is a well-known way to measure your progress. You can split it by priorities if you want, but of course it doesn’t take a lot of other things into account, like the size of the test cases, the time for execution, and so on. It lacks context. And it tells you nothing about the quality of your TSP.
And who is writing the test cases? Do you already have them, plus experience with executing the test set? Great, then you will benefit from measuring the TSP, if you take the quality of each and every execution into account. If the TSP writes the test cases itself, or even gets paid per test case, you will get a mass production of stupid test cases, guaranteed.
I remember a special call for tender for a complex project. The customer wanted to pay per test case and wanted a rough number of planned test cases, without giving much information about the infrastructure. Now that is one solid basis for estimation and offering.

Lately I found a nice white paper by Infosys: Realizing Efficiency and Effectiveness in Software Testing.
What is used for testing projects might be adapted to measuring outsourced testing as well. Some of the metrics are already covered by Rex’s slides, so I will only go through the ones I find useful.

The “test progress curve” (S-curve), well, that’s one nice piece of theory. In 11 years of testing I have never seen an S-curve without faking. The theory behind it is simple and well understood, but reality does not look like that. So even if you want to measure test progress with this, keep in mind that the S-curve won’t hold for long. The difficult task is where to set the expectation.
But you have to measure test progress somehow, that’s for sure. So keep in mind: you might find the S in the end, but it won’t be there all the time, or not at all.

“Test execution productivity trends” is a metric that I would like to try. Short description from the white paper: “The test execution productivity may be defined as the average no. of test cases executed by the team per unit of time.”
It might fit well into the theory of thread-based test management; I have to find out more about that. When using pre-scripted test cases, where you might have an empirical basis for execution length, this can work pretty well. I think the metric needs to be adapted to every project in a way that normalizes the measured values. Not every test case takes the same time to execute. You need to take into account the number of bugs found, problems with test environment availability, and simple things like meetings, status reporting and so on. So not a simple task, but you might get some good numbers if you can keep it up.
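A sketch of how such a normalized productivity figure could look. All the fields and numbers here are hypothetical; the point is only that overhead (meetings, reporting, environment downtime) is subtracted before computing test cases per hour:

```python
from dataclasses import dataclass

@dataclass
class Week:
    executed: int    # (weighted) test cases executed this week
    hours: float     # total tester hours booked
    overhead: float  # hours lost to meetings, reporting, env downtime

def productivity(week: Week) -> float:
    """Test cases executed per net testing hour."""
    net = week.hours - week.overhead
    return week.executed / net if net > 0 else 0.0

# Three made-up weeks; the raw counts alone would hide that week 2
# lost a third of its time to overhead.
weeks = [Week(120, 80, 12), Week(95, 80, 25), Week(130, 80, 10)]
trend = [round(productivity(w), 2) for w in weeks]
print(trend)  # -> [1.76, 1.73, 1.86]
```

Even this toy version shows why normalization matters: the week with the lowest raw count is nearly as productive per net hour as the others.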

That’s what I’ve found so far, very disappointing overall.

In my opinion you need to monitor at least progress and quality.
What metric you use depends on your project and the possibilities you have.
What you need to do if the metrics don’t meet the expectations depends on your project and on the context of why they were missed.

If you’re already measuring your projects with some of the metrics described above and have never asked yourself what the numbers tell you: try an experiment or role play and try to explain the numbers from different positions. Don’t forget to subtract tacit knowledge in your experiment. Now re-evaluate the value of your metrics. If you still think those metrics are good, congratulations. I would like to hear about your successfully used metrics, either via comment, email or Twitter.

This was only part 1 on this topic. I will try to write more about metrics I have found and some of the metrics I have tried.

Reading the customer’s expectations

For me it is very important to know my customer, to know the problems and challenges he has, and to speak with him on a level where he realizes that I understand his problems and his point of view.

I’m working in the QA department of a small software company. So far, QA had no contact with the customers. That might not be a problem if you have an account manager who carries the information to QA. We have such project managers, but QA was not listening so far. There were written requirement documents against which everything would be checked, and that was it. Of course, there might be the problem that a project manager is not used to looking at the customer through QA glasses. And if he doesn’t know that QA can be flexible, why would he even think of other ways to approach the customer?

I’m used to working and speaking with my customers and getting an idea of their expectations from a QA point of view. So I’m glad to get more and more opportunities to participate in client meetings, having the chance to “read” our customers and their expectations. For my team this will result in changes to the test strategy and approaches. Thanks to the customer’s participation we can now use user stories in our test cases and test sessions. On the other hand, this improves the customer’s understanding of what we do in QA (what he gets for the money spent), and he gets a better feeling that his point of view is used when determining the quality of the product.

I would even go so far as to say that the customer will be willing to spend more money on QA when he is able to participate in the whole QA process. Nobody is willing to spend a certain amount of money just for a paragraph stating that the product has been QAed according to company standards. You are willing to spend a bit more money on a product if it has a certain certificate that signals a certain amount of trust, even if you have no idea what has to be done to get that certificate on the product. But you’re willing to spend even more if you know what is done to determine the quality of the product, and that it reflects your problems and challenges. Certified or not, this is even better.

If you have a product that is sold to only one or a couple of customers, take your time to analyze your customers’ expectations. Don’t try to use the same strategy for all customers. As I read in a comment the other day: if the only tool you have is a hammer, everything you see looks like a nail. Try to fill your toolbox with different approaches, strategies and techniques to meet your customers’ expectations. This will also improve the approaches and strategies you take with your other customers.

Be curious about who your customer is, what problems he wants to solve with your solution and what is important to him.
Try to participate in demonstrations and discussions. Observe the situation, observe your customer when he looks at the screen. Where is his main focus? What details are important to him? If not already defined, try to find out what the problem really is that he needs to get solved, and how important it is to him. Try to read his gestures and facial expressions. You will learn many things about your customers.
And don’t underestimate bugs found in production. Don’t just try to reproduce them to know how to retest them after the fix. Try to understand what your customer did that you obviously didn’t do. Learn from that.

But now it is up to you to use this knowledge and adapt your strategy, approaches and techniques. Involve the customer in reviews, improve your reporting and like Michael Bolton and James Bach always say, tell your client a story about your testing. If the client finds himself represented in your story, he will buy it. If not, he will challenge you.

Please don’t hide in your QA offices, go out and experience the customer.

Thinking about how to get a good regression test set

I used the phrase “regression testing” for about 10 years without ever thinking about the definition of the word regression. I just accepted the term as it was used, commonly and frequently, in our projects. That was before enjoying a EuroSTAR webinar with Michael Bolton: one hour of talking about the term regression and regression testing. This webinar changed my tester life.
One result was that I began to read blogs and articles about testing; the other was that I started to think about many of the terms I used in my daily life, and whether I had used them wrongly so far. I tried to challenge some of my colleagues with discussions about those terms that are used every day in our project. The outcome was that they, too, had not spent much time thinking about those terms and had accepted them as they were.

Coming back to regression and regression testing. In both my old company and my new one, “regression” is used as a synonym for regression testing. A common problem seems to be that people don’t know what regression means. Maybe the reason is that I am German, and in German the word “Regression” is not part of the common vocabulary.

Regression: to regress, originating from the Latin regressus, means to go back; as a noun, a backward movement.

Wikipedia says under “Software Regression”:
A software regression is a software bug which makes a feature stop functioning as intended after a certain event (for example, a system upgrade, system patching or a change to daylight saving time).

So what is our intention, when we speak of regression testing? To check if the (hopefully) unchanged features are still working as intended.

Instead of looking for a definition of regression testing, I want to use the four different but intersecting concepts, that Michael Bolton offered in the aforementioned webinar:

  • Any test that we’ve performed before.
  • A set of automated checks, run periodically and repeatedly.
  • Testing that we perform after some change.
  • Testing to probe whether quality has got worse.

When different roles and stakeholders in a project speak about regression testing, which concept or which mixture of concepts do they mean? My tip: ask them. In long-grown teams and projects it is interesting to see whether all are on the same page or whether the definitions vary. If they miss one of the concepts completely, challenge them by asking about it.

What test cases will be used?
On the one hand we can use any test cases that we’ve performed before, because their goals should be defined to check some features of the product under test that we want to re-check. But be careful when reusing test cases that were originally written to test a new requirement, feature or change request. Those test cases might go too deep into detail and be too time-consuming.
If you don’t have a good selection of test cases that you can use for regression testing, what have you done until now? Now is the time to start creating a regression test set.
When automated tests are available, great. Just run them all if possible. If the features still work the same as in the last version, all tests should produce the same result as they did on the last run. And yes, that means we would also expect previously failing tests to fail in the same way. How would you react if a test that failed before is now passing, without you knowing of a bug fix for that problem?
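That run-to-run comparison, including the “fail to pass” surprises, can be sketched as a simple diff of two result sets (test ids and statuses here are made up):

```python
# Results of two automated runs, keyed by test id (illustrative data).
previous = {"t1": "pass", "t2": "fail", "t3": "pass"}
current  = {"t1": "pass", "t2": "pass", "t3": "fail"}

def changed_results(prev: dict, curr: dict) -> dict:
    """Flag every test whose outcome differs from the last run,
    including 'fail -> pass' surprises that deserve a closer look."""
    return {tid: (prev[tid], curr[tid])
            for tid in prev
            if tid in curr and prev[tid] != curr[tid]}

print(changed_results(previous, current))
# -> {'t2': ('fail', 'pass'), 't3': ('pass', 'fail')}
```

Both directions of change are reported on purpose: an unexplained new pass is just as much a question for the team as a new failure.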

Do we have to change the test set every time?
That depends on the reason or event that triggers the execution of the test set. If you change a feature, the related regression test cases have to be reworked. If you prepare for a major release, what has been changed? Are your test cases general enough to be executed on the new version? If you switch the platform, like the web server or the operating system, I would go with the existing test set.
Whenever the software changes, check whether those changes affect your test set.

And what is a short regression (test) doing?
I think we have all heard from our project managers and stakeholders the phrase “let’s cover this with a short regression (test)”. And with short, they mean you don’t have much time and budget for this. I will show one of the many ways to reduce your test set for a short regression test. Whether that fits your project, or fits every time, you have to decide yourself.

Lately I have been wondering, especially now that I have to test again: what does a good regression test case look like?
Since it all comes down to money in the end, my main criteria would be these:

  • check at least all the features on the box
  • reasonable in execution time
  • easy to maintain

For me the answer to this question is: the test case fits into the big picture of my regression test set strategy. To meet the criteria above, I created a strategy to rework the regression test sets of my projects.

My regression test set strategy
Get a good visual of your product under test (use mind maps, Visio charts, whatever suits you best). You should find all of your features represented in this picture. Try to make a picture of the specification, as good as you can get with the design techniques at hand.

Create a set of test cases that represents all of your features, also involving the roles, use cases and business processes of your customers. Use scenario testing techniques together with claims testing to cover your complete list of features. Create positive checks of all your features. When performing this list of test cases you get a result on whether all features are still working, at least for well-intentioned users. This test set is in my opinion the smallest test set you should perform when speaking of regression testing (the short regression test set). If you have to make this even shorter, use risk analysis to skip the less important functions. But please remember to report that you skipped them.
Since maintainability and reusability of those test cases are important, keep them at a detail level that is sufficient for most testers to understand, yet high-level enough that you don’t have to change them every time someone changes a configuration or translation. E.g., if your product or feature is well documented on the GUI or more or less self-explanatory, you can skip most of the details of what to do in the test case. You can concentrate on the expectation and goal of the test case rather than on the test steps.

The context-driven part
Now that we have checked that all our features are still working, let’s hunt for some bugs. Nobody said that you cannot look for new bugs when performing regression tests.
Use your positive check scenarios and combine them with different assignments (e.g. James Whittaker speaks of tours) to explore and test those features (use function and domain testing techniques, or whatever comes to your mind). Not every tester thinks the same; not every tester is capable of the same test techniques. And that’s OK, because bugs aren’t all the same either. When it comes down to testing and looking for bugs, you should leave your team members a certain degree of freedom in what to do to your system. Most testers get new ideas about how to break a feature during test execution. And using variation in your techniques is a good way to find new bugs. And in case a tester has no idea what harm to do next, she can still check old session protocols for inspiration.
Take a test case or a subset of test cases and create a variant of them, concentrating on different testing techniques to probe those features for bugs. When those test cases are performed, you can report that a search for bugs with certain techniques in this area was performed. Like any test case, it cannot say “we tested this feature completely and it is bug-free”. So this statement is enough for me.
Since you are never finished testing, please don’t argue that it would be better to write down all negative tests performed for a certain feature in order to repeat them every time; I disagree. The test case would become too long and unmaintainable in no time. And why should you find more bugs by repeating the same tests over and over again? You can give hints in the test charter about what techniques you could use, but that should be the maximum influence on you or other testers. You should try to come up with new ideas every time. Try to learn new techniques and use them in those sessions. If you have a day with no inspiration, use old session protocols, talk with your team members or simply concentrate on positive tests. We all have those days sometimes.
If you have a special technique that you often find bugs with, feel free to use it in all of your test executions and mention it in the test charter, but don’t script it in detail. And if this technique really finds bugs always and everywhere, you should have a word with the development lead about how to improve the skills of the development team, to prevent those bugs in the near future.

Make a list of the bugs recently found and fixed in production. Retest those bug fixes again. It’s really bad to bring an already fixed bug back into production. Try to retest all of the bugs on your list. Sometimes the reoccurrence of a low-priority bug is more noticeable than a high-priority bug in a constellation that happens only now and then and that only a few users would ever see.

So what do we have:

  • a big picture of the product
  • matching test cases for verifying the features
  • a strategy for negative testing (bug hunting)
  • a couple of bug fixes to retest

This should leave us with good input to report to stakeholders, a big picture to understand the software, a reasonably maintainable test set, an approach for bug hunting that challenges your team and its capability to find new bugs, and a reduced risk of reoccurrence of bugs that were fixed lately.

I don’t know if this is the best solution for my projects, but I know that after some analysis it should be a better solution than the existing one. It still has to prove this, though. I’m sure this is not the last time this strategy will be reworked. A big part of context-driven testing is the project-appropriate application of skill and judgment, as it is short and aptly described on Cem Kaner’s blog. This also means that this concept might fit your situation only partially, or not at all. Don’t forget to include the expectations of your stakeholders.
My original intention with this article is to get you to think about your own usage of the words regression and regression testing, and to give some hints on how to improve a strategy every now and then, when the context changes or when you learn something new that might fit better. Maybe the improvement of my project’s test strategy helps you to come to a better strategy for your project.

Update (11.06.2013): I could have saved a lot of time thinking about regression testing and writing this blog post if I had come across Iain McCowatt’s blog earlier. He wrote a five-part series about regression testing that goes even deeper. A must-read if you haven’t done so already: Exploring Uncertainty
But at least I now know that I’m not the only one thinking that way.

Comments are welcome!

The value of the ISTQB certificates

I want to add something to the ongoing bashing of the ISTQB (Keith Klain’s petition and the “discussion” between Rex Black (RBCS), Keith Klain, James Bach and others on Twitter) that does not fit into the 140 characters of Twitter.

To start with, I have an ISEB foundation certificate from 2004 and the ISTQB Advanced Level Test Manager certificate from 2005. As James Bach wrote a couple of months ago, it’s OK to be certified if you don’t take it too seriously. And that’s true for me.

Because I realized too late that merely taking the course teaches you nothing as a tester. But! There is a but. You do get an overview of a small portion of what is out there in the world of testing. And from there you have to improve yourself and BE a good tester. So the certificate is just a piece of paper stating that you have taken the course and were able to answer a couple of questions.
I have forgotten most of the material, because I was not able to use all of it frequently. And I did not take the initiative to learn on my own, starting with the sources provided in the course material.

Currently RBCS is promoting/congratulating a guy via Facebook and Twitter for now being certified for the full advanced level. No offense to the guy, I don’t know you. But I know a couple of people with full advanced level certification, because my former company had some locations where people were pushed to take the courses, since the company was also a certified training provider for ISTQB (cheap in-house training). The exam was the official one, so no bonus there; they earned their full advanced level, whatever it’s worth. But I worked with some of those people, and I would not let them test Notepad without intense monitoring.

To come to an end: I finally took the initiative to learn on my own, improve my skills and contribute to the community. But my certificates didn’t help with that.
It’s a shame that those certificates are important for finding a job.

For those of you who are still reading, I assume you are keeping yourselves up to date, no matter if you are certified or not.
We have to go out there and reach those who don’t participate in self-training, no matter if they are certified or not.


Why did I start this blog?

This is an easy question, but the answer is not short.

I recently read a tweet or short notice about what makes a good tester.

1. he smiles when working
2. he adds to the community

I can’t remember the source. If anyone reads this and remembers it, please tell me; I really want to give credit and state this correctly.

Smiling is something that is hard to achieve at the moment, but this will come back some day, I’m sure.

So we still have the community. Since I changed companies at the end of last year, I have gained a little bit of time every day on the commuter train. And I like to read all the blogs and articles, tweets and comments of the testing community.
My new job as a QA lead makes me think a lot about different aspects of testing, which is absolutely fantastic. I hadn’t added much to my tester knowledge since my ISTQB Test Manager course late in 2005 and a two-day seminar by Hans Schäfer about risk-based test management in the summer of 2006. I was working as a test manager from 2008 until the end of last year, so I got mainly project management training. But my new job brings me back into the tester role, so I have to learn testing methods, techniques and all that stuff again. And I see now that I hadn’t learned much in my time as a tester from 2003 until 2007.

Reading / Learning / Thinking about all the testing stuff is great, and I enjoy it every day. And I want to give back to the community and share my testing experience with those who are willing to read it.

Plus, one benefit of writing about a topic is that I rethink it more often and come to a better conclusion for myself.
For my new company I created a test strategy for the product I am the responsible QA lead for. I worked on the slide deck over a period of two weeks, improving it bit by bit. I discussed it with my test manager and with a test specialist from another product/project and improved it further. But I was still missing the big questions that would put my strategy to the test.
Those questions finally came during the presentation. And I had a rather good answer for all of them, because I had thought through every little bit of my strategy: why I want to do it this way, and why other ways are not as good. I’m really proud of this strategy. I flipped through the slides yesterday, about two months after the presentation, and said, “yes, that’s still the right way”.

To summarize why I started this blog: I want to add to the community, I want the chance to think more deeply about certain topics by writing about them, and I want to discuss those topics with you. So please use the comments to challenge my statements. I’m grateful for any constructive feedback I can get.