2015 in review

The WordPress.com stats helper monkeys prepared a 2015 annual report for this blog.

Here’s an excerpt:

A San Francisco cable car holds 60 people. This blog was viewed about 3,300 times in 2015. If it were a cable car, it would take about 55 trips to carry that many people.



Instructional vs. Intentional Scripting

I’ve been thinking lately about the two extremes between which a test script can be written. I don’t mean the exploratory-scripted continuum; I mean the instructional-intentional continuum.

Let’s take a standard test script or test case or charter, whatever you want to call it.

  1. Enter a valid username in the text field labeled “username”.
  2. Enter a valid password for the chosen username in the text field labeled “password”.
  3. Press the “Login” button.
  4. Expected result: You should be taken to the start page.

This is what I would call an instructional script. It consists of instructions to follow and a result to expect.

Now let’s try to write the same set of instructions intentionally.

To log in to the application successfully, enter valid credentials in the corresponding text fields and submit.

Not much of a difference, you might think. But what if the login form has changed? There is no text field “username” any more; it’s now called “email”. And the button has been renamed to “Continue”. Which of the two examples would cause problems for a human user?

Of course it could be a bug that those fields have been changed, but honestly, is it really a problem? A human being can still figure out what information to enter where and how to proceed.
An automated check looking for IDs that remained unchanged could even still pass without a problem. The goal of the test is to log in successfully.
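The same idea carries over to automation. Here is a minimal, hypothetical sketch in Python (the form model, field names, and helper functions are invented for illustration; no real UI library is involved): the check encodes the intent “enter credentials and submit” rather than one hard-coded label, so the renamed form from the example above still works.

```python
# Intention-driven check (hypothetical form model, not a real UI library):
# instead of hard-coding one label, the check expresses the intent
# "enter credentials and submit" and tolerates cosmetic renames.

def find_field(form, candidates):
    """Return the first field whose label matches one of the candidate names."""
    for name in candidates:
        if name in form["fields"]:
            return name
    raise LookupError(f"no field matching any of {candidates}")

def login(form, user, password):
    # Intent: provide credentials, whatever the labels happen to be today.
    form["fields"][find_field(form, ["username", "email", "user"])] = user
    form["fields"][find_field(form, ["password", "pass"])] = password
    # Intent: submit, whether the button says "Login" or "Continue".
    button = next(b for b in form["buttons"] if b in ("Login", "Continue", "Sign in"))
    return f"pressed {button}"

# The renamed form from the example: "username" -> "email", "Login" -> "Continue".
renamed_form = {"fields": {"email": "", "password": ""}, "buttons": ["Continue"]}
print(login(renamed_form, "alice", "s3cret"))  # prints "pressed Continue"
```

An instruction-style check that did `form["fields"]["username"] = user` would fail here, even though the login itself works fine.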

Now think of a more complex test script where something in the middle has changed, and you have no clue what the goal of the test case is. How much time would you waste on test steps that are no longer executable as intended?

Of course, a purely intentional script might leave inexperienced users a bit lost during execution. So a good combination of both is important. State the WHY at the beginning, clearly visible, so that an experienced tester already knows what to do, and an inexperienced tester can at least fall back on the WHY when the WHAT is no longer applicable due to changes.

If you also write the WHAT with a good portion of WHY, you might save big on maintenance.

Just think about it the next time you write a test script: do you write the WHAT or the WHY?

Testpappy’s International Testing Standard

What is a standard?

Wikipedia says about Technical Standards:

A technical standard is an established norm or requirement in regard to technical systems. It is usually a formal document that establishes uniform engineering or technical criteria, methods, processes and practices.

OK, and what is a requirement?

Wikipedia, as the source of unlimited knowledge about everything, says about Requirement:

A requirement is a singular documented physical and functional need that a particular design, product or process must be able to perform

Summarized and applied to “testing”, this means for me that a testing standard formally describes the methods and processes necessary to provide a uniform testing service.

When it comes to testing, the past has shown that project environments are so manifold and diverse that it’s extremely hard to unify them and apply the same over-weight test approach to them all. Many people have accepted that fact and are doing the best they can think of that is necessary and helpful in their context. But some are afraid of the diversity and differences between testing projects and want to find the one way to rule them all with one ring, eh, I mean standard. I’ve been in more discussions this past year (2015) regarding ISO 29119 than I ever dreamed of when I first came across it back in 2012. This post is not meant as a rant against the new ISO standard; I’ve done my share of that already this year.

To be more precise, the goal of this post is to describe a testing standard that should really be the minimum process for every test project you perform professionally and in a structured way. I say that a project adheres to the standard when it fulfills this minimum set. A project that does not follow this standard is sub-standard. There is no need to fill out huge checklists of things that you shall do, don’t want to do, and have to justify why not. Just don’t do it and you are sub-standard. And believe me, after working for years below standard, it really feels like that.
If in your context there are special rules to follow or documents to produce that some other standard, law, federal agency, or whoever prescribes, those rules and documents belong to that other standard, not to the testing standard. It’s not one standard’s job to make sure other standards are fulfilled. If you can combine your efforts to fulfill both at the same time, excellent! If not, don’t blame it on the testing standard.
Everything that is not described as part of the standard is on top, an extra that depends on your project context or personal preferences. It might improve the situation and quality in your special situation, but it is not a must to comply with Testpappy’s International Testing Standard.

The standard consists of 3 parts:

Part 1 – Terminology: Some basic terms to know, when speaking about testing.

Part 2 – Test Process: Activities a testing project consists of.

Part 3 – Documentation: Stuff you should write down.

In contrast to ISO 29119, I don’t see much use in test techniques being part of a standard. You cannot and don’t have to use all techniques in every project; sometimes applying a well-known test technique might even mean testing the wrong things. Context is king when testing! I’m not saying test techniques are useless, au contraire, they are good tools for good testers and should be learned and practiced. But I don’t see them as part of a standard that describes a process. Tool usage is just as important, and it is likewise not part of this or any other standard I know of.
Tester: “But I followed test technique abc, because the standard describes it!” Stakeholder: “I don’t care, the program still sucks!” Not with my standard!

There will be no part of the standard describing testing in a waterfall, V-model, Scrum, Agile, or DevOps approach, or anything that will come up in the future. The testing process is and always will be the same; only the involvement of roles within the project life cycle differs. The sooner good, structured testing starts, the better. But context, availability of skills and resources, and many other factors can have a huge influence here. As long as you test in your project, you can’t be that wrong.

Part 1 – Terminology

Most terms are project-context specific. There are more people speaking about testing than just testers. The most important thing is that you reach a common, not shallow, understanding of the terms you use in your context.
Every testing training (with or without a certification scheme) brings its own namespace of terms. Some of the terms and definitions are useful, some may not be the best or fully thought through. But all need to be understood to grasp the ideas presented and taught to you during the training. Most words were given their meaning by the dictionary a long time ago and should not be redefined; some words are made up and given a meaning within the namespace to transport complicated ideas with simple terms.

This part of the standard just wants to give you a small set of terms, so you can roughly distinguish some basic testing concepts.

Testing and Checking

Testing: Intellectual process of learning about a product / feature / function by exploration and experimentation. It’s all about gaining new information about the system under test. Testing strongly follows the scientific method.

Checking: Making evaluations by applying algorithmic rules to observations of the system that don’t bring new information other than, “it’s still working the way it was intended and did before”.
Example: What most people call “automated tests” are actually checks. The testing happened beforehand and exact instructions are given to a machine what to do and how to evaluate the results. The machine will only say “yes, worked as expected”, or “no, did not work as expected”.
(Hint: Don’t exaggerate the use of this term in contrast to testing. Most people, especially non-testers, don’t see the difference between testing and checking. The difference is important for understanding the value of the individual tasks testers actually do, but as long as that is clear to everyone, or not a problem, just go with “testing” even if it’s “checking”. “Checking” is a part of “testing”, so it’s not wrong to call it all just “testing”.)
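To make the distinction concrete, here is a minimal Python sketch (the system model and all names are invented for illustration): the check is a fixed algorithmic rule applied to an observation, and it can only answer “worked as expected” or “did not”. The thinking, i.e. the testing, happened earlier, when the rule was designed.

```python
# A check applies a fixed algorithmic rule to an observation and yields
# only pass/fail; it brings no new information about the system.

def observe_status(system):
    """Observation: read some output of the system under test."""
    return system["status_code"]

def check_login_endpoint(system):
    """Check: the algorithmic rule 'status must be 200'.
    Answers only 'still works as intended' or 'does not'."""
    return observe_status(system) == 200

healthy = {"status_code": 200}
broken = {"status_code": 500}
print(check_login_endpoint(healthy))  # True  -> "worked as expected"
print(check_login_endpoint(broken))   # False -> "did not work as expected"
```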

Functional and Non-functional Tests

Functional Test: Testing related to a function or feature, checking whether it shows any problems when used.

Non-functional Test: Testing a part of your product that is not directly related to a function. E.g. operational tests are non-functional: even if your application doesn’t fail over, it can still work correctly most of the time.

Performance, Load and Stress Tests

Performance Test: You measure and monitor the reaction times of multiple parts of the system for your user. You monitor this over time and whenever changes are applied, and you can evaluate the individual results or trends as good or bad. This should be done in a dedicated environment used exclusively for performance tests, so that third-party influences can be excluded from the measurements.

Load Test: You expect a certain amount of load on your system in production. To know ahead of time whether the system can handle it, you apply this load to your product and monitor the performance for the individual users and parts of the application. This should be done in a dedicated environment that reflects the production environment to a certain level.

Stress Test: You want to know what your product can withstand, so you raise the load on the system and monitor performance and system behavior. Once the system starts producing errors or the performance is rated unbearable for the user, you have found a rough boundary of your system’s performance. The environment should be the same as for load testing.
An interesting result of a stress test is seeing how your system reacts under stress: does it simply get slower, or does it start to produce errors?
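The three test types ask different questions, which a toy sketch can illustrate (the service and its degradation model below are pure invention; real tests would measure an actual system):

```python
# Toy model of a service whose response time grows with load and which
# errors out above a capacity limit (both behaviors are invented here).

def fake_service(concurrent_users):
    """Response time grows with load; above 80 users the service errors out."""
    if concurrent_users > 80:
        raise RuntimeError("server overloaded")
    return 0.05 + 0.002 * concurrent_users  # response time in seconds

# Performance test: measure baseline response time for a single user, repeatedly.
baseline = [fake_service(1) for _ in range(3)]

# Load test: apply the expected production load (say 50 users) and watch timings.
under_load = fake_service(50)

# Stress test: raise the load until the system starts producing errors.
limit = 0
for users in range(10, 200, 10):
    try:
        fake_service(users)
        limit = users
    except RuntimeError:
        break

print(f"baseline {baseline[0]:.3f}s, at load {under_load:.3f}s, breaks above {limit} users")
```

Note how the stress loop does not just record the boundary; watching *how* the model fails (slowdown vs. errors) is exactly the interesting result mentioned above.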

Security Test: Everything you do with your system under test that helps you understand the level of security built into the system.

Namespaces usually consist of hundreds of words, but I don’t see much use in a standard defining them all. Use the words you need in your context, in the way you and your colleagues understand them.
The most important aspect is that there is common, not shallow, understanding of all the terms used in your project context.

Part 2 – Test Process

What is a process? Again, Wikipedia helps:

A business process is a collection of related, structured activities or tasks that produce a specific service … for a particular customer or customers.

This part describes what activities and tasks you have to do in a testing project. It does not describe “testing” itself. Some of the tasks you won’t even experience as special tasks, because they come with the natural flow of a test project.

(Overall) Test Strategy: Create an overall strategy for how and when to include testing in your project, and who (which role or team) tests to what extent in which part of the project. The test strategy should support the common goal of achieving a certain quality.
This might even be given by the overall project management.

Test Management: Testing is a project activity like any other and should be managed at a certain level. Managing a testing effort consists of planning, segmentation/controlling, monitoring, and reporting.

Test Plan: Plan all your testing activities, skills, and resources, as far as you can. In general, everything you create that should be delivered you should also test to some degree, the degree being given by the context. And remember, it’s testing! You produce information and never know exactly what you’ll find, and findings will lead to additional testing. So better start planning only on a high level, with a rough idea of what to do. Plan your functional and non-functional tests, special tests, performance and load tests, or plan for using automated scripts and tools. Detailed up-front planning is in many cases a waste of time and happens at your own risk.
You and your stakeholders (should) have a certain quality expectation. Plan your tests accordingly, to show that those expectations are met.
Planning is a recurring activity that has to react to changes and additional information.

Test Segmenting: This is achieved by structuring and splitting the necessary testing into manageable bits. These can be test cases, checklists, charters, post-its, or any other sort of structure you want to apply in your context. In the sense of the standard it means that a manager or lead has a certain control over what the testers actually look at during test execution.
Testing produces information that might make additional testing necessary. You must manage those additional bits as well.

Test Monitoring: Monitor the progress of your testing activities. This informs further planning activities.

Test Reporting: Based on strategy and plan, summarize your test results with the achieved information about the product under test. Focus your report on the valuable information for the stakeholders.

Test Execution: The “Testing” activity itself consists of test design, preparation, execution, and documentation. Those steps can be handled separately, as the classical approach often suggests, or be seen as interacting activities that work best together, as more modern approaches suggest.

Test Design: Tests are like experiments and need to be designed. What are the prerequisites (e.g. state of the system, input data, etc.), what do you plan to do with the system under test to achieve what goal?

Test Preparation: Prepare your test environment, the test data, tools and scripts, set up logging and monitoring.

Test Execution and Documentation: Performing the tests or experiments themselves; the actual interaction with the system. Collect the results of your experiments and document them in the way the project prescribes, which depends on the context and on how the documentation will be re-used for purposes other than the original one.

If you think about testing at different scales, from testing alone to testing in a 50-person team, you will go through these activities sooner or later. The project size and context will dictate their importance and the approach to choose. But if you skip one of these steps, which is actually really hard to do, you are operating below standard.

Part 3 – Documentation

There is one basic rule defined by the standard: document only as much as you have to (as prescribed e.g. by the non-testing influences of the project), but as much as is useful (for reporting, supporting your memory, later reuse). But do document, and do communicate!

Test Strategy: The strategy should usually be part of the overall project management documentation. If it isn’t, write down a separate one. It should also be presented to the team and stakeholders.

Test Plan: The plan should conform to the test strategy. Write down what you plan and keep it up to date. An outdated plan is a useless document, so better keep it minimal and current than plan in detail and let it become outdated quickly.
If you need special equipment, test environments, or specially skilled people, write it down. And of course, the best plan is good for nothing if you just write it down and don’t tell anyone. Remember to share and communicate your plan!

Test Execution: There are lots of ways to document test executions. There is – of course – only one minimum requirement to follow Testpappy’s International Testing Standard: do it!
You can write down your planned tests well ahead, in a more or less detailed way. Anything works, from rough charters via long checklists up to detailed test scripts (if you really want that). You made a plan and you know what is important for your stakeholders, so why risk forgetting it during execution? During execution you can then decide to simply check off the performed steps as a bare minimum of documentation, or to take extensive notes of what you have done and observed, supported by screenshots and videos.
Document something for the people who are supposed to do the testing to follow. If that’s you, why not write down your ideas so you don’t forget them; and if someone has to take over from you, there is a starting point. It doesn’t have to be much according to the standard, but of course it can be if you want. Nobody stops you from wasting your time. But a minimum that proves the testing that has been performed is mandatory for fulfilling the standard.

Bug Reporting: This is a necessary part of a development project. And yes, bug reporting is a valuable skill for a tester. A bug report is a special piece of information and is collected and managed by a process involving more roles than just the tester. So besides the fact that bugs which are not fixed right away should be documented and even managed, the standard for testing doesn’t add any rules here. Setting up a bug life cycle and process is part of the overall project management (even if the test lead is often responsible for that process); it’s not necessary for testing. Testing can live without such a process. But you do have to document the bugs you find somehow as part of your notes.

Test Reporting: Don’t report (only) by numbers. Testers are in the information business, and what other information business do you know that reports by numbers? Your information should be valuable to the stakeholders, so treat it like that. The form of a test report depends, of course, on the project, the context, and the size of the testing effort. It can be a simple mail, an Excel sheet, a fancy slide deck, or, if necessary, even a 100-page Word document; or anything in between. The one rule to follow is, again: write it down somewhere and communicate it to the stakeholders. Your stakeholders want to make an informed decision, so provide information in a relevant way!
Reporting should happen throughout the whole project, but the standard prescribes only the test completion report as mandatory. Every other reporting occasion is helpful for the project, but not mandated by the standard.

That’s all! – Disclaimer

If you have a test project and follow the steps prescribed by the standard, you have created a project with the certain minimum of quality and value that a good test project should have from a management perspective.
Neither this nor any other standard can guarantee that the test results you produce are what your stakeholders expect. This standard won’t prevent any bugs from showing up in production. And of course, following this standard, like any other standard, doesn’t prevent you from creating a product that sucks and nobody wants.

If you don’t follow this standard, your test project can still be a success. If you follow this standard, your project can still fail.

This standard does not describe how to actually test! Because my motto is, as long as bugs don’t adhere to a standard, neither do I when looking for them!

If you think there is a term, an activity or task, or a document that is missing from this standard, please let me know, and I will think about adding it.