Weekend Testing America – Testing Deep (WTA-52)

Somehow I managed to join my first session of Weekend Testing: WTA-52 of Weekend Testing America.

It was a very promising topic, “Testing Deep”. What does it mean, when do we know we are there, and is there a point of “enough”?


The session was facilitated by Justin Rohrman, because Michael Larsen was busy, and there were 13 participants including Justin and Michael. I know most of the participants, if not all, from Twitter, so it was interesting for me to interact with them sort of live for the first time.

Justin set the mission with some questions to consider during the session:
– how do we know what we are doing is deep testing?
– what do we do differently (thought process, approach, techniques, etc.)?
– how to actually do deep testing (hint: staring at a feature longer doesn’t make it “deep”)?
– how do we know when enough is enough?

And he offered some ideas to start with:

1 – what is it
2 – how do we know when we are there
3 – how do we know we are not being shallow
4 – how does it feel
5 – what are we actually doing differently
6 – how is our mental process different

I also want to mention that credit goes to Michael Bolton, who also attended, and to James Bach: Justin got the idea for this topic when taking the Rapid Testing Intensive (RTI).

Justin chose the online collaborative editor titanpad.com as the SUT, and I think this was a good choice of software for trying deep testing. I guess everyone immediately had an idea of what the software was intended to do. Justin gave three example areas to choose from for “deep testing”. Some participants were building teams to test the collaboration features together, and some, like me, were testing alone. Well, sort of. Being in a Skype chat with 12 other people who explore the same software is not really being alone.

Exploring / Hands on “deep” testing

I set my expectations for testing deep. Instead of wandering around and building up my model of the whole application layer by layer, I chose Export/Import and wanted to stay with one part of the feature as long as possible. I opened up XMind and started sketching the feature to test. I began with the Export feature, file format HTML. I soon realized that my private notebook was not yet set up to support testing, but at least I had Firebug installed already and had just downloaded Notepad++. Soon I started finding minor problems in the HTML structure, the translation of empty lines in enumerations, and so on. Ideas about what to do next kept popping into my head. Then I moved on to the next export format, taking time, coming up with ideas and exploring further. But as I have to admit in hindsight, the time I took for exploring each export format got shorter and shorter.

And then problems kept popping up on Skype. I was trying to keep my focus on Export, but some interesting bugs in Import were mentioned, so I soon extended my focus a bit. Maybe too much. I wanted to test Import, too. I also wanted to see those errors. I wanted to take the files I had exported and import them back.

At the end I started interacting a bit with the others, trying to better understand the situations they faced and trying to help. Yes, I know, funny. Why would they need my help?


The hands-on part was over after nearly an hour. How time flies. So everyone gathered in Skype again for the discussion.

Justin took some interesting notes about his feelings during the session. I find this a good source of information, at least something that informs yourself and those who know you. I recently read a blog post about capturing a tester’s feelings at the beginning and the end of a session. After seeing that list from Justin, I really have to try that in action.

An interesting comment came from Neil Studd, refining the initial question:
You can’t go deep until you know how deep deep is.

Amy wrote:
How about ‘going beyond the expected’. I felt like I was forcing myself to go back and take another look even when I thought I’d seen it all.

That idea felt rather familiar to what I tried myself, at least in the beginning. I tried to force myself to keep on digging. And I have to say, in the beginning it was rewarding: I found more issues on each return.

Richard and Amy came to the conclusion that they hit some sort of wall, where gathering information by simply using the application stopped. They came up with a model of how to get to more information beyond that wall. I am pretty sure (and hope) that Richard will write something up about that model, so I will stop here.

I came up with the definition that it’s “digging so deep that the information you encounter there is no longer the responsibility of the owner/creator/dev of the app”. But is it really helpful to test every feature until you arrive close to the hardware level? I don’t think so now.

I liked Neil’s addition: “if we use depth with an ocean analogy, the seabed is not flat – some investigations are likely to hit bedrock (i.e. non-issues outside our control) sooner than others”.

We then discussed a bit around what deep is, without coming to a conclusion that most were comfortable with.

Then Michael brought first this definition from the RST:
Here’s what we say in Rapid Testing: testing is “deep” to the degree that it reliably and COMPREHENSIVELY fulfills its mission AND to the degree that substantial skill, effort, preparation, time, or tooling is required to do so.

I keep stumbling over the term “fulfills its mission”. At my company the time for testing is more or less fixed, as a ratio of the development effort (in many cases, not all). That time box sets the main part of my daily mission, so I can only test as deep as possible in the given amount of time. So testing is, in most cases, kind of shallow by definition of the time box. Everything that goes beyond that is deep testing for me. Every feature I decide to spend more time on than planned by someone else, someone who does not know everything that is possible, or maybe necessary, is testing deep.

I asked Michael whether depth is something he would want to “measure” in some way to report on it, because as with Neil’s example earlier, for some features the seabed is not as deep as for others. Michael’s response was that the extent of the mind map one creates while exploring might show the depth of the investigation. That answer is in most ways okay for me, because it shows that you invested time there and dug up lots of information. Whether you really hit ground, or how deep you got, is still hard to tell. But define “ground”, and there also lies the solution to the question we were working on.

Michael brought in this list with regard to SFDIPOT:
For a given feature or function…
– to focus on that feature or function
– to consider a wide variety of risks
– to use and/or develop a very detailed structural diagram
– to break the function down into a detailed set of sub-functions, and to test each one
– to use highly diverse and extensive data sets
– to identify and exercise as many interfaces as are there
– to test on a wide variety of platforms
– to consider and work on a wide variety of operational models
– to consider and test for lots of interactions with time

SFDIPOT is hanging on my office wall to remind me whenever I need it. But I didn’t recognize it at first. Maybe too much to read in the Skype chat.

Justin then brought up this idea: if you are testing and discovering / creating the model as you go, you are always at the “bottom” of the model. So, are you always doing deep testing when this is happening?

I was not completely happy with that definition, because it would mean that you are at this stage from the very beginning. But something of this idea still revolves in my head.


In the last 5 minutes we were asked to share a definition of what “deep testing” is. Some gave it an immediate try, and some retreated to come back with the ultimate answer later. Among the definitions that came up, I found none that satisfied my view on the topic, which I had only just started to think about in the past two hours.

Michael gave a good summary of what is needed to go deeper, and therefore complete the picture he set earlier with his definition and SFDIPOT:
Time: All this takes time to develop, maintain, and perform.
Determination: It’s hard to blunder into deep testing. You have to want it.
Skills: You need to know how to model products, identify testable conditions, and design experiments to evaluate them.
Learning: You need a rich and detailed model of the product and its risks (may be a mental model, formal model, or both).
Requisite Variety of Test Activities: You need to work out a pattern of test activities that will find the obscure, yet important bugs, based on a good theory of risk.
Tooling: You may need tools to help you cover large areas or to reach otherwise inaccessible areas of the product.
Environments for Testing: You need a requisite variety of test platforms configured and available for tester use.
Data for Testing: You need a requisite variety of test data so you can trigger the important bugs.
Team Support: You may need lots of eyes and minds poring over it. Developers can help immensely by exposing the code.
Testability: You may need special features in the product that help you observe and control it.

After sleeping on the session for a night, I tried to come up with my perfect definition of “Deep Testing”. But the idea is still so vague and tacit that I am not able to write it down now. If I ever am, you will read it here…

Michael explained how you can structure the width of the hole, SFDIPOT, and also the abilities you need to exploit each of those areas in depth. The initial definition, “testing is ‘deep’ to the degree that it reliably and COMPREHENSIVELY fulfills its mission”, sounds to me like the ultimate depth of testing something. OK, then everything above that level is to a certain degree shallow. But even that is still vague and depends on context.

Now comes the hard part: at the job, you need the budget to get the time to do all this, so you should be able to estimate and sell this strategy. But how do you know how deep deep is, and how much time it takes? That’s something to sleep on for the next night(s).

Thank you, Weekend Testing America, that was an interesting and inspiring session. Thanks to all who attended and enriched the discussion.


Judging the Software Testing World Cup Preliminaries for Europe – Part 1

Oh what a week…

But let’s start at the beginning. On the evening of May 11th I was reading my Twitter timeline and found a tweet from Huib Schoots saying the STWC team was looking for judges for the European preliminaries. In a spontaneous reaction I said to myself, sure, why not. Let’s see if I even get accepted. I mean, have you seen the list of judges for the other continents… a lot of great names.
Twenty minutes later came the answer: send my email address to Huib via DM. And off it went.

OK, now my brain started working. What had I done? What was I even supposed to do as a judge, in detail? So I started investigating: when is the event, what are the rules, and so on. By the end of the week I had access to the Google groups, was able to read the test reports of Oceania and North America, and had most of my questions answered. My expectations became more detailed.

Then the blog posts from Kim Engel and Ale Moreira about their experiences came out, and now I had the impression that my expectations were at least heading in the right direction. The only difference: they judged regions with nowhere near the 250 teams of Europe or Asia. So the tension grew a bit, because the team of judges was still rather small. But we grew to 10 (or is it now 11?) judges by the event, and even a strategy for conquering the judging for Asia and Europe was thought out.

The date was set for June 13th. And then the deadlines and the business and private appointments came in. A project deadline for one customer was also set for June 13th, and production support for another two customers was high on the calendar. June 14th was set as the date to fell the tree in our neighbor’s garden, which for me as a hobby wood turner meant a lot of wood to take care of that day and the next. We even had to reschedule our weekly grocery shopping to get everything done.

The week came closer, so let the disasters begin. My vacation in the week after the event was cancelled. The Asia preliminaries, one week before Europe, were cancelled and rescheduled after the SUT went down after about an hour. At work, the one project deadline got extended by several days, and production support became a nightmare. Then my test environment’s database server broke down, again, for over two days. Yes, dear Asia contestants, that can also happen in real life.
My father-in-law, who was the guy to fell the tree, had to tell us on Monday that he had another appointment on Saturday and couldn’t help then. So he started cutting down the tree alone on Tuesday; instead of the planned 4 people helping him, only my wife was there. And it was the hottest day of 2014 so far. When I came home, again late, there was a neatly chopped-down tree next door, and the wood had begun to crack in the heat. So I started carrying the best pieces to my workshop and sawing them up. Now they are waiting to be pre-turned, but because evenings at work got later and later, no chance yet.

Then Friday the 13th was finally there. I had blocked my calendar from 5pm on and got a bit nervous. Some emails went around among the judges to prepare a bit more for the event, but there was still no access to HP Agile Manager, and no idea or even clue about the SUT. Maik had mentioned on Twitter on Thursday that we would finally get some mobile focus in the contest, but no more hints. By noon on Friday I asked about my still-not-set-up account for the HP bug tracking tool and got set up quickly, thanks to Maik and the team from HP. So at least I was able to use some 5-10 minutes to get a bit familiar with the tool before the event started. Then a short first chat on Skype with some of the judges.

After a work-rich afternoon, 5pm came closer and I tried to shake off work and all the discussions we had. Finally, at 5:20, I shut down all my work apps and prepared my tools for the event. The Skype chat was buzzing, and at 5:30 the Google Hangouts video chat for the judges was supposed to start, but I was not able to get in. The firewall at work blocked it! So I had to set up my tablet to use Hangouts. There was some trouble with the sessions on Hangouts and the live stream to YouTube, so both IDs changed a few times. I was not familiar with Hangouts until that day, so this was a bit challenging for me. And since I was not able to use my work laptop with the headset, I later found out that my tablet created the worst echo ever on the video chat. I am so sorry for the audio quality for everyone whenever I was not on mute!
At 5:33 the judges got the mail the teams had received a few minutes earlier, and finally we got the information about the SUT and who the product owner was: Jason Coutu, who was himself a judge for the North American preliminaries.

Shortly after 6pm everything was finally set up. I was participating in the Google Hangouts video conference and a Skype chat, had the YouTube channel open to read all the comments, had my TweetDeck open, plus all three instances of HP Agile Manager, the bug tracking tool, that had been set up.

The fun began. Matt and Jason started talking about the SUT and went over some of the rules again for everyone, e.g. no load testing! (As far as I know, nobody did any. So thanks!) Jason gave a short introduction to his product and some information here and there. And the questions in the YouTube comments came in, mixed with access problems for some teams, some with the SUT, some with the bug tracking tool. Some of the judges started posting questions to the Hangouts chat for Matt and Jason to answer, we got the info to Jason about which users had problems logging in, and another stream was routing the bug tracking tool problems via Maik to HP. In the Skype chat we had short discussions about how to answer certain questions and tried to get the answers out quickly. Every now and then we also wondered about one question or another. But a bit of fun for the judges should also be allowed. We ignored several categories of questions, which Matt also explained live, so as not to give any team an advantage.

Dear contestants, be aware that we got the information about the SUT a bit later than you, and all the information we had was said on YouTube or written in the comments. We had no information that you were not given as well. So we tried to repeat some of the answers as well as we could.

Then a problem in the configuration of HP Agile Manager came up. Some users had been set up with the wrong account settings and had access to more bugs than they were supposed to. The team from HP tried to fix that as fast as they could, and we judges tried to look into it as well, to see how to spot illegal activities in the bugs. Matt sent out a warning via YouTube that the software under test was the Sales Tool, not HP Agile Manager. We knew beforehand that there were some restrictions to the setup, because the tool is usually not made for non-cooperating teams using the same bug tracker. One team especially seemed very disappointed about that fact. Instead of getting a message about the problem out and continuing to test the SUT, they even “hacked” some bugs. But since it was a misconfiguration, they were simply allowed by the system to access those bugs, and all information about who changed what was stored. So we will find out if somebody changed another team’s data.

The rate of comments dropped around halftime of the event, so Matt used the time to talk with us judges. Slowly the bug count got interesting, since we hit the 1500 mark around halftime. The judges developed more of a routine in their tasks, most access problems got fixed, and Jason used the time for some breakfast, I suppose.

Near the end, the rate of comments increased again, mostly about the test reports. We tried to get all the information out quickly, even repeating it several times, e.g. to which email address and by which time. Yes, people, even if the event started with some hiccups, the deadline was not moved. Again, welcome to real life! What we were not able to do was answer requests for confirmation that the reports had come in. Maik shared the account info only after the time was up, to prevent any incident like deactivating the auto-responder confirming that a mail had arrived. Dear teams who checked the mail address in the beginning and saw the auto-response then: that’s the reason why you didn’t get it later, when sending the real report. Exchange sends it only once per mail address.

Around 9:10 the video chat on YouTube ended. Maik and Matt spread some more info about the next steps in judging all the test and bug reports. Then judge after judge left the chat sessions.

I was one happy judge at the end of the evening. The event took place, and I was able to focus on it and not look at anything work-related. Except for some hiccups in the beginning and the problems with some of the user setups in the Sales Tool and HP Agile Manager, everything went smoothly. I met some great people whom I will be working with over the next weeks. And I got a great comment from Matt at the end that totally lightened up my weekend. Thanks, Matt!

The tree is gone, the wood is waiting next to my lathe, and tomorrow we will get the list of teams everyone has to evaluate for the first round. I am happily looking forward to some nice green wood turning and to reading and rating all of your test and bug reports in the next weeks. Oh yes, and I will also have a lot of work at work.

I will try to blog about the second part of the journey as well, so keep monitoring this channel.

What I found out about “Tacit and Explicit Knowledge” so far

First I want to mention: I can’t wait to finally read Harry Collins’ book, “Tacit and Explicit Knowledge”, to learn more about this topic.

I was first made aware of this topic by some mentions from Michael Larsen, James Bach and Michael Bolton, together with mentions of the book that I still have not been able to read.

The topic made me a bit more aware of how thinking, knowledge, sharing knowledge, coaching, etc. work. But why? Once I was aware of that difference in types of knowledge, I began observing it in my daily work and life.

One example situation from work is writing a test case. By exploring, you found out how a certain feature works and what scenarios you have to test. You learnt about the feature first-hand, by trying it out, by clicking around. You gained a lot of information. Now it’s time to write some test cases or charters for the next time, sometimes called regression testing. What information do you put in the test steps? How detailed do you have to write it down? What information do you skip? Right after learning about the feature, most of your information is tacit. Only you know it, because you did it, you tried it out. But what part of that information do you have to make explicit, also known as writing it down? What information can be left implicit, because as a user of a PC or that application, the reader should have that kind of basic knowledge? But do they really?
I began watching myself from the outside when writing a test case. What am I writing down? What information is important enough to be noted? Every now and then I get a question or statement from a colleague about some feature, and I say, yes, I know, and do you know about this and that? So I can also make knowledge explicit by talking about it. Shouldn’t I have written about this earlier, is usually my first thought now. Back to the test case. So what do you write down? Everything is not possible. So you focus on what you think is the important information and take a lot of the rest for granted. But how about the person who has to perform the test the next time? Do you write a test case for trained monkeys or for thinking testers who know the product?
The next time you write a test case, take a test step and look at the application. What information do you know about this test step? What have you written for that step? Why have you written down this information, and what other information might be interesting to know besides it?

Another example from work is so-called test automation, or as I like to call it, check automation. You take a test case and tell some computer code to perform the actions and what to look for to decide whether a step passed or failed. The number of questions the code asks is usually not very high. That’s the difference between human and machine performance. The human being, hopefully rather intelligent and not bored to death, looks at the screen, notices most things on it and combines the information. The computer looks only at the places on the screen that you tell it to look at, and only for what you tell it to look for. The information the automation checks for is explicit: you wrote it down as code with exact instructions. The “checks” you perform while executing the same test case yourself are mostly tacit. You do them because you know what to look at and what to look for.

A visual test map (mind map) is a big collection of explicit information and stands for an even bigger amount of tacit or implicit information. In the map you want to focus on the important stuff, so you skip the less important information. When reading the mind map, you know how to fill the gaps, but can all other readers do so as well?

The best example I know outside of testing is cooking. Cooking uses a lot of tacit or implicit knowledge. A recipe mostly states what ingredients to use, how long to cook and wait, and so on. You need a lot of knowledge to understand those instructions, and a lot of experience in different cooking techniques to “perform” a recipe. And in every recipe there is the famous “dash of salt”. Now how much is that?

Think more about how you think. Be aware of tacit knowledge: when can it be made explicit, and when not? In cases where you are not able to express that information, try to find out why, and find a way. Train yourself to write down tacit knowledge as explicit knowledge. Use different ways of writing down information: plain text, sketch notes, diagrams, mind maps, or whatever fits best.

Those were a lot of questions, and I am looking forward to reading Collins to understand the differences better and hopefully pick up some techniques for transferring tacit to explicit knowledge.

Thinking in models

This is the first post in my small series “My 2 cents about thinking“.

I would say everybody does it, and I assume that for most topics people do it unconsciously: thinking in models.

Don’t forget, this article is not based on any advanced knowledge, studies or books. This is how I explain “thinking in models”.

It is impossible to know everything about everything we face in our lives. Many things are accepted as they are, without spending much time on how they work. “It works” is the only information needed for those things.
For a lot of things we are able to make connections between similar models, and we try to adapt the knowledge we have about one thing and project it onto the other.
On a couple of things we spend time and try to understand better how they work. Depending on your profession, there might even be a few things where you try to find out nearly everything there is to know, to understand why they work the way they work.
All these things have one thing in common: in your mind, you create a model of them. This can range from a picture of the thing with the information “It works” up to something so complex and so interwoven with other models that you have no clue how to explain it to someone else.

Don’t forget: “essentially, all models are wrong, but some are useful”, as the statistician George Box wrote.

Your models will never be complete, and rarely correct in every aspect, but hopefully they contain enough information to explain most things you need explained. If you learn something new about a domain you already know, the learning process is much easier: you “just” need to fit the new facts into your existing knowledge. In a new domain you first have to acquire the basic set of knowledge for your model, which makes it more time-consuming.

I had one such experience at work lately. One of my team members was testing a functionality I had not yet cared about. Thinking about it, I had no idea how this feature might work, not even what it was used for. It was only a name. When my team member organized the know-how transfer, it took three single terms in the first two sentences of her explanation for me to realize how it works. I was immediately able to relate my existing knowledge and build a small model of the new functionality that explained most of the things I had encountered so far and was sufficient for easily understanding the rest. The other team member, who also had the functionality explained to her for the first time, took much longer. After the “guided tour”, she also needed personal experience to grasp what she had newly learned. I realized all those differences only when I started thinking about why I got it in two sentences and she took about an hour. I tried to understand how she was thinking, how her model-building works, how I could help her understand new things faster, and how I should explain things to her.
This is the luxury of having a small team. You can try to understand how the different team members think and adapt your way of explaining new things to their style. If your team is bigger, you can try to recognize patterns in how people learn and optimize your way of transmitting the message based on the feedback you get. Without feedback from your team or audience you cannot evaluate how well your message was understood. But for all of this you must first be aware of how thinking in models works, and you need an interest in understanding the model thinking of your co-workers.

Most models are tacit knowledge to their owners. I assume that most people don’t know how to explain their models to others. Testers have to deal with that problem every day. A tester gets a piece of something to test, creates a model of that something in her mind, and starts to develop test ideas. When she writes those test ideas down in prose, they become test cases, test scenarios, test outlines, or whatever she might call them. The tester explains, sometimes step by step, what she is doing; if the reader is lucky, even how or why. The more complex the something is, and of course the more creative the tester is, the more possible test cases she needs to write down. And as everything written grows longer and longer, people don’t read it and it quickly gets outdated, because nobody maintains it. So this form of information is to a certain degree a waste of time from the moment you create it.

Early in 2013, soon after starting with my current company and facing a problem like the one I described above, I developed an idea for how to document the system we have, bringing all the different customer specialties into one picture without the need to write and maintain test cases with up to 10 different customizations to consider. I started to use mind maps as a model of our system, and with the help of tags and filters I brought in the special configurations. I came up with a name for it that I don’t even remember now, because about 6-8 weeks later I read a blog post from Leah Stockley describing the same idea as a “Visual Test Model“. I liked the name, I liked the explanation, and I liked the discussion afterwards about how and where to use mind maps in testing.
Lately Aaron Hodder explained using mind maps in testing in an excellent two-part series: Part 1, Part 2. With these great explanations at hand, I will stop trying to explain how mind maps can be used for testing and come back to thinking in models.

When I think about my models, especially of the two systems I currently test, these models are at least 3D. There are some connections between parts of the system that don’t appear in my 3D picture, so maybe an additional dimension is needed to cover all the relations. This is not practical when trying to put it down on a piece of paper, a whiteboard, or to chisel in stone.

I would assume that most people model more or less in 3D; since it’s a three-dimensional world, people are used to it. But when it comes to putting that knowledge down, we have to transfer the model into 2D. This transfer, or translation, is a big challenge. You don’t want to lose vital information, but you still want to cover everything.

There are hundreds of modeling languages out there, ways to describe complex things in 2D. Most modeling languages bring good tools for explaining special situations. But unless you are fluent in one or more modeling languages and completely adapt your model thinking to them, you still have to translate your model into them. Parts of your model might already be reflected in some sort of standardized modeling language, because you are used to it. But I guess for most people they are not, or they are mixed up; I can only speculate about this.

In my former company I worked in a big system integration test project with about 200 different business processes implemented across about 20 different systems, interconnected via several technologies. For most business processes I had some sort of flow chart in my mind of how that process looks and works. I did not maintain one giant model containing everything; I split the model apart and explained things differently, depending on the context: who wants to know what, how, and why. I could even bring in hardware information, such as systems that share an app or database server. But I only know all that now, about 15 months after leaving the project I was in for nearly 10 years, by thinking about how I modeled the system back then. At the time I was not aware of it.

But how do I explain the model to others, so that they understand it and can incorporate the information into their own models?
In my team I ran an experiment rather early in the project phase. I told the team about the mind mapping approach and how to maintain information in mind maps. But I was not happy with the initial model I created; it lacked information. After two weeks of experience in the new system I gave them half an hour to draw me a model of the application, and I questioned them afterwards. Team member #1 took my mind map and tried to extend it with the new information. Creating the relations and dependencies was not easy, since the map reflected my way of thinking and not hers. Team member #2 used my way of creating mind maps but started from scratch with the new information. She was not happy with the result, maybe because she knew how I was thinking and tried to explain her model in my language without knowing all the grammar and vocabulary. Team member #3 made my day. He got stuck after putting about 10 items on the screen. He was not able to create connections, had a blackout, and could not even explain any of those 10 items to me. The problem, at least I think so, was that he tried to imitate my language and model, but mixed up objects and functions on the same level. With this experiment I finally added a tool to my toolbox for better explaining my model. Thanks to team member #3 and his blackout, which triggered me to think more about his solution and where the problem was, I can now translate my model to 2D better. At least in a way that I understand; I’m still working on making it easily readable for others.

When I now encounter a situation where it is obvious to me that someone has a flaw in his mind model, I try to help with reconstructing the model. We had a discussion the other day where six people explained their understanding of a piece of functionality. One of the team had a problem understanding it. Instead of explaining the functionality to him again, I tried to understand the design flaw in his model and gave him a hint on how to re-model. After a short while he understood the functionality better and was up to speed in the discussion.

Still left for me to learn are better ways of translating my mind models into 2D and into other people’s modeling languages.

My tips: learn as much about anything as you can; you never know when your mind models will come in handy for cross-referencing a new fact you come across. Try to make yourself aware of your own mind models, and be curious about how others think, with the goal of adapting your style of transmitting information.

Next up on my reading list are the systems thinking books by Jerry Weinberg. I’m looking forward to seeing how those books will influence my way of thinking about mind models.

My 2 cents about “thinking” (introduction)

This is the kick-off to a small series of blogs.

In the past year my life as a tester changed. I became really conscious of what I was doing, and I wanted to know more. I found a skill that is more important for a good tester than test methods: thinking, and I mean consciously thinking about what I am doing. With that I also became much more aware of how others think, and of things like how to translate my knowledge so others can understand faster what I want to tell them.

Testing is a) learning about a product and b) telling someone (who matters) what you did, what you didn’t do, what you found, and what you did not find, so that this someone can make an informed decision.
I learned that very intensively when trying SBTM (session-based test management) for the first time. The test reports should describe pretty well what you did and what you found. Debriefing adds another instance of telling the story, but already the written report should contain sufficient information for the reader.
With this approach, new to me and my team, I saw that you need more than knowing some test techniques.
You need to understand your own way of building models and how to explain them. You need to know what tacit knowledge is, and what you have to write down, and how, to make it explicit. You need to write a compelling story. And you also need a way of questioning to get the answers you need to maintain your own model.

Another thing I found an explanation for while trying to gain more knowledge is something I’m usually rather good at: speaking with people from several expert groups without knowing much about the domain, and still getting accepted as someone with sufficient knowledge to be trusted. More about this will come in “Expert knowledge”.

I thought a lot about those topics over the past couple of months, and I found some books to acquire more knowledge in those areas. I’m currently reading only one of them, and I want to blog my ideas before I read more of those books. I want to be able to compare my current thoughts, which I mostly came up with on my own, with what I learn through reading.

To make it not one big boring blog, I decided to split it into several smaller boring blogs. So you will shortly find more here about:

Thinking in models

Expert knowledge (coming soon…)

Explicit and tacit knowledge

Socratic Questioning (coming soon…)

Is manual testing very easy? A comparison…

Lately there was a question on LinkedIn: is manual testing very easy?

My answer in short was that it depends on you and your skills.

Inspired by Michael Bolton’s blog post “Counting the wagons”, I came up with an analogy that might fit here.

Testing is like moving a vehicle. Whether it is easy, challenging, or impossible depends on your skills.

The software is the vehicle, so imagine everything from skateboard, bicycle, motorcycle, car, transporter, truck, helicopter, plane, boat, and cruise ship to a spaceship. To bring in some variance, add old models, contemporary models, and completely new designs, from broken to running fine.

The test assignment can be anything from checking if it’s running, checking some particular functionality like the brakes, running errands with or without a shopping list, or driving from A to B with or without a map, to discovering new spaces.

The environment for your software is anything from prototype to mass-market software, so from first tests in the simulator, to driving on a test track or an empty race track, to driving downtown Beijing in rush hour.

For reporting your drive you might have only your brain, a co-pilot who tracks things on a map, a black box, onboard video, or radar.

If the damn thing breaks down, do you hand it over to the mechanic/engineer with “this is broken”, do you look under the hood and tell him you think something is wrong with the turbo when you go from 80 to 100, or do you deconstruct the thing to tell him where the issue is?

For coaching/managing, do you have a co-pilot, an instructor, a fleet manager, a customer (e.g. in a taxi), or simply a boss telling you to get the job done? Or are you working on your own, just for fun, driving around with some friends?

So whether testing is easy for you depends on your skills, your experience, and how fast you learn and adapt.