Challenge accepted – Integration Testing

James Bach recently published a very interesting blog post about a conversation he had with a student on the topic of “integration testing”. In the end James did not share his own thoughts on the topic, only some teasers about what he thought of the answers his student gave. James challenged his readers to think more about the topic. And guess what, that’s what I did. Here are my thoughts.

My past being an integration tester

Twelve years ago I grew up as a tester in a rather big system landscape, as the responsible test specialist for a system that had three incoming interfaces from three different systems and two outgoing ones to two more, capable of handling millions of data sets per day. We – the test team – were responsible for the end-to-end tests of the system stack, and we even had a test phase called I&T (Integration and Testing). In the beginning we were in charge of the environment where all back-end systems were integrated with each other for the very first time.

So do I know by heart what integration testing is? No! So I gave it a good sit and think.

Input from the twittersphere

Rather obviously, Alan Page had also read the article and tweeted an interesting statement:

At first I liked it, shallow or not. I read the statement late at night and took a good night’s sleep to think it over.
And I came to the conclusion that what Alan described was not integration testing. Those are interface tests, unit tests, component tests, robustness tests, or whatever you want to call them (I have to think about that later), but it’s not what I’d describe as integration testing. Several elements are missing for that.

To be clear, I would say the tests Alan described are very important, and with very important I mean it. Those tests check the robustness of the interfaces and can reveal whether a system goes down when it receives a strange message. This is especially important for web services and other interfaces that potentially talk with many different partners.

What I think integration testing is

So what is integration testing in my opinion?
For me it is important during integration testing to actually have two or more pieces of software (systems, programs, modules, classes, etc.) that you bring together in one (test) environment and let communicate in their “natural” way.

It’s a test phase that is not primarily using drivers and stubs for the interfaces under test. If your individual systems support sending test messages via the interface without using the actual functionality, or resending messages, that’s okay.

What will be checked during integration testing

The context is important. All possible production scenarios should be tested in this phase. With all possible I don’t mean all possible. Being a tester you know what I mean: reduced to a manageable number of scenarios. Besides real payloads it could also cover empty payloads, one partner answering slowly or not at all, duplicate messages, and many more, depending on what your sending interface can produce. Many of these scenarios should already have been covered in the robustness tests. In this phase nothing should go over the interfaces that they haven’t seen during previous test phases.

During integration testing it’s all about the real thing. Can those two or more components actually communicate? You could have checked the incoming interface a billion times with test scripts, but the first contact with the real thing is always different. There is a saying that no plan survives contact with the enemy. I think the same is true for interfaces. There are so many other aspects that can go wrong and that should be covered during integration testing at the latest. I can think of scenarios like problems with different time zones, different character sets, encryption, connection speed, protocols, and a few more. But I would estimate there are hundreds of other possible problems waiting out there.
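To make the idea of “letting systems communicate in their natural way” concrete, here is a toy sketch in Python. All names are hypothetical; it simply lets a sending and a receiving component talk over a real local socket instead of a stub, which is exactly where surprises like character-set issues first surface.

```python
import socket
import threading

def receiver(server_sock, results):
    """The 'receiving system': accepts one connection and decodes the payload."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        # Real-world decoding concerns (character sets!) only show up here.
        results.append(data.decode("utf-8"))

def send_payload(port, payload):
    """The 'sending system': opens a real connection and transmits the payload."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(payload.encode("utf-8"))

def run_integration_check(payload):
    """Let both sides communicate over a real socket, not a stub."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))  # let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]
    results = []
    t = threading.Thread(target=receiver, args=(server, results))
    t.start()
    send_payload(port, payload)
    t.join(timeout=5)
    server.close()
    return results[0] if results else None

# A payload with non-ASCII characters, a classic integration surprise
print(run_integration_check("Grüße"))  # prints: Grüße
```

A stub-based check would never exercise the encoding and transport path at all; the whole point of this phase is that the real channel is in play.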

What integration testing does not cover

Integration testing does not cover whether, when one communication partner is changed in the future and something happens to the payload, the unchanged system is still able to handle it. Nor does it cover whether the system can handle anything that comes its way without going down or turning inside out. Robustness is a topic for the phase or test types that Alan described.

James also mentioned the level or form of integration, but I have not finalized my thoughts on that, so it’s not covered yet.

 

That’s it with my thoughts for today. I might update this post once I have further insights. Thanks for reading, and please let me know in the comments what you think of my explanation of what integration testing is and what it is not.

2015 in review

The WordPress.com stats helper monkeys prepared a 2015 annual report for this blog.

Here’s an excerpt:

A San Francisco cable car holds 60 people. This blog was viewed about 3,300 times in 2015. If it were a cable car, it would take about 55 trips to carry that many people.


Instructional vs. Intentional Scripting

I’ve been thinking lately about the two extremes in which a test script can be written. I don’t mean the exploratory–scripted continuum. I was thinking about the instructional–intentional continuum.

Let’s take a standard test script or test case or charter, whatever you want to call it.

  1. Enter a valid username in the text field labeled with “username”.
  2. Enter a valid password for the chosen username in the text field labeled with “password”.
  3. Press the button “Login”.
  4. Expected result: You should be taken to the start page.

This is what I would call an instructional script. It consists of instructions to follow and a result to expect.

Now let’s try to write the same set of instructions in an intentional style.

To successfully login to the application enter valid credentials in the according text fields and submit.

Not much of a difference you would think. But what if the login form has changed? There is no text field “username”, it’s now called “email”. And the button has been renamed to “Continue”. Which of the two examples would lead to problems for a human user?

Of course it could be a bug that those fields have been changed, but to be honest, is it really a problem? A human being can still sort out what information to enter where and how to go on.
If an automated check looks up elements by IDs that went unchanged, it could still pass without a problem. The goal of the test is to log in successfully.
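To illustrate the difference, here is a small Python sketch. The page models and field names are made up for illustration; the point is that a literal, step-bound check breaks as soon as a label changes, while a check that encodes the intent (log in with valid credentials) survives.

```python
# Toy page models standing in for the real UI; all names are hypothetical.
PAGE_V1 = {"fields": {"username": "", "password": ""}, "submit": "Login"}
PAGE_V2 = {"fields": {"email": "", "password": ""}, "submit": "Continue"}

def instructional_check(page):
    """Follows the literal steps: fails as soon as a label is renamed."""
    if "username" not in page["fields"] or page["submit"] != "Login":
        return "blocked: step not executable"
    return "logged in"

def intentional_check(page, credentials):
    """Knows the WHY: find any credential-like fields, fill them, submit."""
    fields = page["fields"]
    user_field = next((f for f in fields if f in ("username", "email")), None)
    if user_field is None or "password" not in fields:
        return "blocked"
    fields[user_field] = credentials["user"]
    fields["password"] = credentials["password"]
    return "logged in"  # whatever the button is called, the goal is the login

creds = {"user": "tester", "password": "secret"}
print(instructional_check(PAGE_V2))       # blocked: step not executable
print(intentional_check(PAGE_V2, creds))  # logged in
```

The intentional version behaves like the human tester in the text: it sorts out where the information belongs, because it carries the goal rather than the steps.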

Now think of a more complex test script where something in the middle has changed, and you have no clue what the goal of your test case should be. How much time would you waste with test steps that are not executable any more as intended?

Of course, a purely intentional script might leave inexperienced users a bit alone during execution. So a good combination of both is important. State the WHY up front and prominently, so that an experienced tester already knows what to do, and an inexperienced tester can at least get help from the WHY when the WHAT is not applicable due to changes.

If you write the WHAT with a good portion of WHY as well, you might save big on maintenance.

Just think about it the next time you write a test script: do you write the WHAT or the WHY?

Testpappy’s International Testing Standard

What is a standard?

Wikipedia says about Technical Standards:

A technical standard is an established norm or requirement in regard to technical systems. It is usually a formal document that establishes uniform engineering or technical criteria, methods, processes and practices.

OK, and what is a requirement?

Wikipedia, as source for unlimited knowledge about everything, says about Requirement:

A requirement is a singular documented physical and functional need that a particular design, product or process must be able to perform

Summarized and applied to “testing”, this means for me that a testing standard formally describes the methods and processes necessary to provide a uniform testing service.

When it comes to testing, the past has shown that project environments are so manifold and diverse that it’s extremely hard to unify them and apply the same overweight test approach to them all. Many people have accepted that fact and are doing the best they can think of that is necessary and helpful in their context. But some are afraid of the diversity and differences between testing projects and want to find the one way to rule them all with one ring, eh, I mean standard. I’ve been in more discussions this past year (2015) regarding ISO 29119 than I ever dreamed of when I first came across it back in 2012. This post is not meant as a rant against the new ISO standard; I’ve done my share of that already this year.

To be more precise, the goal of this post is to describe a testing standard that should really be the minimum process for all test projects you perform professionally and in a structured manner. I say that a project adheres to the standard when it fulfills this set minimum. A project that does not follow this standard is sub-standard. There is no need to fill out huge checklists of things that you shall do, don’t want to do, and have to justify why. Just don’t do it and you are sub-standard. And believe me, having worked for years below standard, it really feels like that.
If in your context there are special rules to follow or documents to produce that some other standard, law, federal agency, or whoever prescribes, those rules and documents belong to that other standard, not to the testing standard. It is not the job of one standard to care for other standards being fulfilled. If you can combine your efforts to fulfill both at the same time, excellent! If not, don’t blame it on the testing standard.
Everything that is not described as part of the standard, everything that depends on your project context or personal preferences, is nothing but that: an extra on top. It might improve the situation and quality in your special situation, but it is not a must to comply with Testpappy’s International Testing Standard.

The standard consists of 3 parts:

Part 1 – Terminology: Some basic terms to know, when speaking about testing.

Part 2 – Test Process: Activities a testing project consists of.

Part 3 – Documentation: Stuff you should write down.

In contrast to ISO 29119, I don’t see much use in test techniques being part of a standard. You cannot and don’t have to use all techniques in every project; sometimes the use of a well-known test technique might even result in testing the wrong things. Context is king when testing! I don’t say test techniques are useless, au contraire: they are good tools for good testers and should be learned and practiced. But I don’t see them as part of a standard that describes a process. The usage of tools is just as important and is also not part of this or any other standard I know of.
Tester: “But I followed test technique abc, because the standard describes it!” Stakeholder: “I don’t care, the program still sucks!” Not with my standard!

There will be no part of the standard describing testing in a waterfall, V-model, Scrum, agile, or DevOps approach, or anything that will come up in the future. The testing process is and always will be the same; only the involvement of roles within the project life cycle differs. The sooner good and structured testing starts, the better. But context, availability of skills and resources, and many other factors can have a huge influence here. As long as you test in your project, you can’t be that wrong.

Part 1 – Terminology

Most terms are project-context specific. There are more people speaking about testing than just testers. The most important thing is that you reach a common, not shallow, understanding of the terms you use in your context.
Every testing training (with or without certification scheme) brings its own namespace of terms. Some of the terms and definitions are useful; some may not be the best or fully thought through. But all need to be understood to grasp the ideas presented and taught to you during the training. Most words were given their meanings by the dictionary a long time ago and should not be repurposed; some words are made up and given a meaning in the context of the namespace to transport complicated ideas through simple terms.

This part of the standard just wants to give you a set of basic terms, to distinguish roughly between some basic testing terms.

Testing and Checking

Testing: Intellectual process of learning about a product / feature / function by exploration and experimentation. It’s all about gaining new information about the system under test. Testing strongly follows the scientific method.

Checking: Making evaluations by applying algorithmic rules to observations of the system that don’t bring new information other than, “it’s still working the way it was intended and did before”.
Example: What most people call “automated tests” are actually checks. The testing happened beforehand and exact instructions are given to a machine what to do and how to evaluate the results. The machine will only say “yes, worked as expected”, or “no, did not work as expected”.
(Hint: Don’t exaggerate using this term in contrast to testing. Most people, especially non-testers, don’t see the difference between testing and checking. It is an important difference to understand the value of individual tasks testers are actually doing, but as long as that is clear to all or not a problem, just go with “testing” even if it’s “checking”. “Checking” is a part of “testing”, so it’s not wrong to call it all just “testing”.)
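As a minimal illustration of the distinction, here is a hypothetical Python sketch: the check applies a fixed algorithmic rule and can only report pass or fail, while the testing, deciding that this observation and this expectation matter at all, happened beforehand in the tester’s head.

```python
def login(username, password):
    """Hypothetical system under test: accepts one known credential pair."""
    return username == "alice" and password == "s3cret"

def check_login():
    """A check: an algorithmic rule applied to an observation.

    It can only answer 'worked as expected' or 'did not work as expected'.
    It brings no new information about the system; choosing what to observe
    and what to expect was the intellectual (testing) part.
    """
    observation = login("alice", "s3cret")
    expectation = True
    return "pass" if observation == expectation else "fail"

print(check_login())  # pass
```

This is also why “automated tests” in the text are called checks: once encoded like this, a machine can run the rule, but it cannot decide whether the rule was worth encoding.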

Functional and Non-functional Tests

Functional Test: Testing related to a function or feature, checking whether it shows any problems when used.

Non-functional Test: Testing a part of your product that is not directly related to a function. E.g. operational tests are non-functional: even if your application doesn’t fail over, it can still work correctly most of the time.

Performance, Load and Stress Tests

Performance Test: You measure and monitor the reaction times of multiple parts of the system for your user. You monitor this over time and when changes are applied, so you can evaluate the individual results or trends as good or bad. This should be done in a dedicated environment used exclusively for performance tests, so that third-party influences can be excluded from the measurements.

Load Test: You expect a certain amount of load on your system in production. To know ahead of time whether the system can handle that load, you apply it to your product and monitor the performance for the individual users and parts of the application. This should be done in a dedicated environment that reflects the production environment to a certain level.

Stress Test: You want to know what your product can withstand, so you raise the load on the system and monitor the performance and system behavior. Once the system starts producing errors or the performance is rated unbearable for the user, you have found a rough boundary for your system’s performance. The environment should be the same as for load testing.
An interesting result of a stress test is to see how your system reacts under stress. Does it simply get slower, or does it start to produce errors?
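A stress test of this kind can be sketched as a simple ramp-up loop. Everything here is hypothetical (the fake system under test, the latency budget, the step size); the point is only the shape of the procedure: raise the load stepwise and record where and how the system first misbehaves.

```python
def handle_requests(n):
    """Hypothetical SUT: latency grows with load; above 100 it starts erroring."""
    if n <= 100:
        return {"latency_ms": 10 + n, "errors": 0}
    return {"latency_ms": 10 + n, "errors": n - 100}

def stress_test(max_load=200, step=25, latency_budget_ms=120):
    """Raise the load step by step; report the first boundary and failure mode."""
    for load in range(step, max_load + 1, step):
        result = handle_requests(load)
        too_slow = result["latency_ms"] > latency_budget_ms
        erroring = result["errors"] > 0
        if too_slow or erroring:
            # Errors are the more interesting failure mode than a mere slowdown
            mode = "errors" if erroring else "slowdown"
            return {"boundary": load, "failure_mode": mode}
    return {"boundary": None, "failure_mode": None}

print(stress_test())  # → {'boundary': 125, 'failure_mode': 'errors'}
```

The returned failure mode answers exactly the question from the text: does the system simply get slower, or does it start to produce errors?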

Security Test: Everything that you do with your system under test that helps to understand the level of security built into the system.

Namespaces usually consist of hundreds of words, but I don’t see much use in a standard defining them all. Use those words as you need them in your context and as you and your colleagues understand them.
The most important aspect is that there is a common, not shallow, understanding of all the terms used in your project context.

Part 2 – Test Process

What is a process? Again, Wikipedia helps:

A business process is a collection of related, structured activities or tasks that produce a specific service … for a particular customer or customers.

This part describes what activities and tasks you have to do in a testing project. It does not describe “testing” itself. Some of the tasks you won’t even experience as special tasks, because they come with the natural flow of a test project.

(Overall) Test Strategy: Create an overall strategy for how and when to include testing in your project, and who (which role or team) tests to what extent in which part of the project. The test strategy should support the common goal of achieving a certain quality.
This might even be given by the overall project management.

Test Management: Testing is a project activity like anything else and should be managed at a certain level. Managing a testing effort consists of planning, segmentation/controlling, monitoring and reporting.

Test Plan: Plan all your testing activities, skills and resources, as far as you can. In general, everything you create that should be delivered you should test to some degree, the degree being given by the context. And remember, it’s testing! You produce information and never know exactly what you’ll find, which will lead to additional testing. So better start planning only on a high level with a rough idea of what to do. Plan your functional and non-functional tests, special tests, performance and load tests, or plan for using automated scripts and tools. Detailed planning at the beginning might be a waste of time in many cases and is at your own risk.
You and your stakeholders (should) have a certain quality expectation. Plan your tests accordingly to show that those expectations are met.
Planning is a recurring activity that has to react to changes and additional information.

Test Segmenting: This can be achieved by structuring and splitting the necessary testing into manageable bits. These can be test cases, checklists, charters, post-its, or any other sort of structure you want to apply in your context. In the sense of the standard it means that a manager or lead has a certain control over what the testers actually look at during test execution.
Testing produces information that might make more testing necessary. You must manage those additional bits as well.

Test Monitoring: Monitor the progress of your testing activities. This informs further planning activities.

Test Reporting: Based on strategy and plan, summarize your test results with the achieved information about the product under test. Focus your report on the valuable information for the stakeholders.

Test Execution: The “Testing” activity itself consists of test design, preparation, execution and documentation. Those steps can be handled separately as the classical approach often suggests, or can be seen as interacting activities that best work together as the more modern approaches suggest.

Test Design: Tests are like experiments and need to be designed. What are the prerequisites (e.g. state of the system, input data, etc.), what do you plan to do with the system under test to achieve what goal?

Test Preparation: Prepare your test environment, the test data, tools and scripts, set up logging and monitoring.

Test Execution and Documentation: The performance of the tests or experiments itself, the actual interaction with the system. Collect the results of your experiments and document them in the way prescribed by the project; how the documentation can be reused for purposes other than the original one depends on the context.

If you think about testing on different scales, from testing alone to testing in a 50-person team, you will go through those activities sooner or later. And the project size and context will dictate the importance of each and the approach to choose. But if you skip one of those steps, which is actually really hard, you are operating below standard.

Part 3 – Documentation

There is one basic rule defined by the standard: document only as much as you have to (prescribed e.g. by the non-testing influences of the project) but as much as is useful (for reporting, supporting your memory, later reuse). But do document and communicate!

Test Strategy: The strategy should usually be a part of the overall project management documentation. If not, write down a separate one. It should also be presented to the team and stakeholders.

Test Plan: The plan should conform to the test strategy. Write down what you plan and keep it up to date. An outdated plan is a useless document. Better to keep it minimal and current than to plan in detail and let it become outdated quickly.
If you need special equipment, test environments, or specially skilled people, write it down. And of course, the best plan is good for nothing if you just write it down and don’t tell anyone. Remember to share and communicate your plan!

Test Execution: There are lots of ways to document test executions. There is – of course – only one minimum requirement to follow Testpappy’s International Testing Standard: do it!
You can write down your planned tests well ahead in a more or less detailed way. Everything is fine, from rough charters via long checklists up to detailed test scripts (if you really want that). You made a plan, you know what is important for your stakeholders, so why risk forgetting it during execution? During execution you can then decide to simply check off the performed steps as a bare minimum of documentation, or take extensive notes of what you have done and observed, supported by screenshots and videos.
Document something for the people who should do the testing to follow. If you are that person, why not write down your ideas so you don’t forget them. If someone has to take over from you, there is a start. It doesn’t have to be much according to the standard, but of course it can be if you want. Nobody stops you from wasting your time. But a minimum to prove the testing that has been performed is mandatory for fulfilling the standard.

Bug Reporting: This is a necessary part of a development project. And yes, bug reporting is a valuable skill for a tester. A bug report is a special piece of information and is collected and managed by a process involving more roles than just the tester. So besides the fact that bugs which are not fixed right away should be documented and even managed, the standard for testing doesn’t add any rules here. Setting up a bug life cycle and process is part of the overall project management (even if often the test lead is responsible for that process), it’s not necessary for testing. Testing can live without such a process. But you have to document the found bugs somehow as part of your notes.

Test Reporting: Don’t report (only) by numbers. Testers are in the information business, and what other information business do you know that reports by numbers? Your information should be valuable to the stakeholders, so treat it like that. The form of a test report depends, of course, on project, context, and size of the testing effort. It can be a simple mail, an Excel sheet, a fancy slide deck, or if necessary even a 100-page Word document, or anything in between. The one rule to follow is, again: it should be written down somewhere and communicated to the stakeholders. Your stakeholders want to make an informed decision, so provide information in a relevant way!
Reporting should happen throughout the whole project. But the standard prescribes the test completion report as the only mandatory. Every other reporting occasion is helpful for the project, but not mandatory by the standard.

That’s all! – Disclaimer

If you have a test project and you follow the steps prescribed by the standard, you have created a project that has a certain minimum of quality and value that a good test project should have from a management perspective.
Neither this nor any other standard can guarantee that the test results you produce are what your stakeholders expect. This standard won’t prevent any bugs from showing up in production. And of course, following this standard – like any other standard – doesn’t prevent you from creating a product that sucks and nobody wants.

If you don’t follow this standard, your test project can still be a success. If you follow this standard, your project can still fail.

This standard does not describe how to actually test! Because my motto is, as long as bugs don’t adhere to a standard, neither do I when looking for them!

If you think there is a term, an activity or task, or a document that is missing from this standard, please let me know, and I will think about adding it.

Exploring, Testing, Checking, and the mental model.

The beginning

You might be well aware of “Testing and Checking Refined” by James Bach & Michael Bolton, and maybe you are also aware of “The new model of Testing” by Paul Gerrard. I read both more than once and/or saw the videos and webinars. I find many useful aspects in both pieces of work.

But I want to explain more of what happens behind the scenes of testing. What is testing actually, when we look behind all the obvious actions? I don’t want to explain what we obviously do in a project when we perform a test. I want to try to explain what goes on in our brains when we test and check, and what value that information brings to a project.

Testing and checking

Set-up, assumptions, and expectations

Inside a good tester’s mind there is a huge net of information and interlinked models, many anchor points to add or retrieve information, all based upon her knowledge.

When a tester gets the assignment to test a piece of software, that we will call system under test (SUT), she starts immediately to generate a mental model and collect questions and information, without having any further details yet. This starts in the first seconds and continues until the tester gets access to more information. A matter of maybe seconds or minutes, but sometimes hours or even days.

But who or what is informing the mental model of the tester? It’s her experience, paired with a certain expectation depending on the context, and curiosity. This lays the foundation of the mental model of the SUT, determines where it is placed in her existing net of models, and generates sources for oracles and heuristics.

And a good tester has usually more questions than answers.

Testing

When there finally is access to more information the testing begins. Testing in the sense of learning about the SUT and collecting new information. Questions are asked by the tester to people, while reading a specification, or when interacting with the system through experiments. Answers get embedded in the model, new questions turn up. Further experiments are thought up.

Learning about a system, and software is always a system, is in my opinion always connected to mental modelling. It’s not learning a poem by heart; it’s trying to figure out how something works, and that is directly connected to system or model thinking. Experience has already created a rich set of existing models in our mind, including their interconnectedness. When you now learn about a new SUT, you will start seeing smaller models within the new big model that look familiar. So you will create a link to an existing model, including its assumptions and expectations.

Our experience and our current vision of the SUT give rise to certain assumptions about how the model or parts of the model should work. There will be questions and experiments to verify these assumptions.

There are parts of the model that may be blank and need to be explored from scratch. But these parts, too, start with the assumption that there is something worth exploring, and certain expectations or desires exist that help to frame the first questions or experiments, usually based on heuristics.

Models are not reality, and never will be. But the more information the tester collects, the more accurate and helpful her mental model becomes for making predictions about the behavior of the SUT. Questions and experiments or tests will be formed whose purpose is to exclude other possible models and ideas of the SUT.

Important to remember: when discussing the SUT with someone, e.g. a stakeholder, you are testing the stakeholder’s mental model, not the actual SUT. The same is true for documentation; it’s the author’s view on the SUT, and there might even be differences between what is in the author’s mind and what is written.

Critical and creative thinking need to be applied, because a whole army of fallacies awaits the tester around every corner. But critical and creative thinking are beyond the scope of this article.

Checking

When a tester is testing, she creates a mental model that reflects all the information and facts, and her interpretation of them, and creates concepts and theories of how parts of the system behave up to a particular point in time. When the tester has reached a certain confidence that her mental model, or parts of it, reflects the SUT, checks will be created and executed.

There are also the parts of the model that are connected with existing models which already bring a set of encoded instructions (= checks) with them.

A check is an algorithm that describes certain steps to be performed during checking, which should demonstrate the expectations about the SUT based on the tester’s mental model and assumptions. It is both an attempt to make tacit knowledge explicit and an attempt to show that the assumptions are aligned with the real thing.

A positive outcome of a check, where the observation matches the expectation, shows that the SUT could still fit the underlying mental model. The problem with most checks is that they focus on narrow aspects of the model. Especially when automated, checks often assert only the absolute minimum of facts necessary to call them checks by definition.

When a human is performing a check, she is able to evaluate many assertions that are often not encoded in the explicit check.
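A small Python sketch of this gap, with a made-up receipt renderer as the SUT: the automated check asserts one narrow fact, while the “human-like” check encodes several of the assertions a person would evaluate at a glance, and only the latter notices an obviously broken output.

```python
def render_receipt(total):
    """Hypothetical SUT output: a formatted receipt line."""
    return f"TOTAL: {total:.2f} EUR"

def minimal_check(output):
    """What many automated checks assert: one narrow fact about the output."""
    return "TOTAL" in output

def human_like_check(output):
    """A human checker evaluates many implicit assertions at once."""
    return (
        output.startswith("TOTAL: ")  # label present, in the right place
        and output.endswith(" EUR")   # currency shown
        and "." in output             # decimal formatting intact
    )

broken = "TOTAL 9,5"  # label garbled, currency missing, wrong decimal mark
print(minimal_check(render_receipt(9.5)), human_like_check(render_receipt(9.5)))  # True True
print(minimal_check(broken), human_like_check(broken))                            # True False
```

Even the second check is still only a handful of key elements of the mental model; a person looking at the receipt would evaluate far more than three assertions without ever writing them down.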

One check alone does not reflect a mental model, because a single check can fit a gazillion different models. The whole set of checks narrows down the number of possible models, but will never reflect the whole mental model inside the tester’s mind. The checks only represent key elements of the mental model, which leaves a lot of room for interpretation in between.

When a check fails, something is wrong, either with the implementation of the check, the mental model, or the SUT. A failed check does not automatically reflect a defect in the SUT. A failed check is an invitation to explore and investigate where check and SUT differ. Testing is needed here.

Checking – in the context of regression testing – is used to confirm that parts of the mental model and the SUT still fit together and have not obviously been subject to change.

If a new tester is trying to learn about the SUT, the set of checks, often called “the test set”, can help to frame the key elements of the model.

The Role of Acceptance Criteria

Acceptance Criteria describe elements of the common understanding of the SUT within the project team. The danger of using acceptance criteria is that stakeholders and team members can easily reach a shallow agreement about the understanding of the SUT. A list of Acceptance Criteria can never replace a serious discussion and sharing of ideas about the model behind the SUT.

The Role of Bugs

Bugs are deviations between the desired behavior of a SUT and the actual one. Since mental models differ, the perception of whether a bug is a bug depends, in some cases, on the individual.

“A bug is something that bugs someone who matters.” – James Bach

The tester herself is usually not considered someone who matters, but she should represent the views of those who do. Therefore a tester needs to gain an understanding of how the people or systems that matter use the SUT, or how the SUT solves a problem for them.

The Tasks of a Tester

The task of the tester is to create, evolve and enhance the model of the SUT, to seek discussions with stakeholders such as product owners, users, or business analysts, and to ask the right questions to hone the understanding of the SUT.

The tester may encode a set of checks or checkpoints based upon the mental model. These can be executed as regression checks, either manually or as automated scripts.
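As a minimal sketch of such encoding, assuming a hypothetical `discount` function as the SUT (invented here purely for illustration), a few key elements of a mental model might be written down as checks like this:

```python
# Hypothetical SUT: a discount rule the team agreed on.
def discount(order_total):
    """10% off for orders of 100 or more, otherwise no discount."""
    return order_total * 0.9 if order_total >= 100 else order_total

# Each check encodes one key element of the mental model.
def check_no_discount_below_threshold():
    assert discount(99) == 99

def check_discount_at_threshold():
    assert discount(100) == 90

def run_regression_checks():
    # A failed check is an invitation to investigate, not
    # automatically a defect in the SUT.
    for check in (check_no_discount_below_threshold,
                  check_discount_at_threshold):
        check()
    return "all checks passed"

print(run_regression_checks())
```

Note how little of the model survives the encoding: the checks say nothing about why the threshold is 100, or what should happen for negative totals. That interpretation still lives only in the tester’s head.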

When testing, the tester’s job is to compare the actual SUT with the current common understanding of how the SUT should behave, using all sorts of experiments and supporting tools. This is heavily influenced by the model the tester has in her mind.

Consequences

This understanding of testing and checking implies:

  • the people responsible for testing need the skills to utilize, build and enhance a good mental model
  • the stakeholders need to share their understanding of the SUT, the problem and the solution with the tester
  • the tester needs to reflect the stakeholders’ understanding of the SUT to find the important bugs
  • when searching for regressions, a tester should prefer checklists that invite testing over test scripts that invite tunnel vision on the SUT
  • don’t reduce your testing or checking to acceptance criteria

Summary

Testing is all about designing, constructing, evolving and extending a mental model. Testing produces the checks to validate the conformity between the mental model and the SUT.

The mental model should reflect the perception of the stakeholders rather than that of the tester.

As long as all performed checks pass, the SUT still fits the created mental model. That does not prove that this model is the right one. Failed checks call for testing to find out whether the mental model, the SUT, or the check needs to be adapted.

Sadly, this does not fit into 140 characters.

And for the end: this is how it all started

On one of my recent lunch break walks I suddenly had an idea of how to compress, into 140 characters, the theory that had been brewing in my brain for quite a while.

I wanted to explain what happens behind the scenes of testing, in your mind. This was my original try:

[embedded tweet]

After the tweet went unnoticed for a while, none other than Michael Bolton and James Bach challenged my idea. In the end I had to realize that I needed more than 140 characters to explain it.

I don’t want to recount the discussion here, so if you are interested in what happened that day, please investigate via the above tweet.

Michael Bolton invited me to put my thoughts into a blog post, and I did. What you just read is the third attempt to get it all right.

Thank you Michael Bolton and James Lyndsay for the discussion in the test lab at EuroSTAR.

EuroSTAR 2015 – my personal summary

It’s now over a week after the end of EuroSTAR and I just finished my last article about this fantastic conference. You can find them all here.
For me it’s now time for a personal review and summary.

First of all I want to thank Emma and the EuroSTAR team for inviting me to the conference and for having me as a media partner. The conference was well-organized and in my eyes flawless in execution. The two dinners were held at stunning locations and the food was really good. Well done!

It was great to meet so many people again whom I first met at Let’s Test in May, and who welcomed me back with an open and friendly spirit.

Guna is a great person and brought so much energy to the Test Lab. This Baltic, blue-haired bundle of energy made me smile every time I went there. Guna, it was an absolute pleasure to finally meet you in person. It was always fun to interact with Guna on Twitter, and it will be even more fun now that I have a vivid image of her before my inner eye.

Finally meeting Colin “Jim” Cherry aka Klaas Kers meant so much to me. Colin just beams with wisdom. There was this short (well, for me most people are short), quiet, friendly and open-minded person, not exactly how I imagined him, and he was just an inspiration to my EuroSTAR experience. Colin made his TestHuddle blogs so special that I questioned the usefulness of all my writing so far. Since meeting him I want to re-read all his blog posts with his person in mind. Colin, you are an awesome person, and I am very thankful that we finally met.

Michael Bolton invested more than two hours of his time into helping me review a blog post I was writing a few weeks ago and discussing the nature of testing with me. It was an even greater pleasure that James Lyndsay joined the conversation and allowed me a short look into his mind and how he thinks. You are both an inspiration to me! Thank you, gentlemen.

At the community dinner I had the pleasure of sharing my table with Allison Wade, who is responsible for all the STAR conferences in North America, and more, and with Shmuel Gershon, who later that week was announced as program chair of the next EuroSTAR conference. Chatting with those people in the caves underneath a chateau was special for me.

At the conference awards dinner in the next cave location I was joined by Carly Dyson, Nick Shaw, Paul Coyne, Kristoffer Nordström and Iain McCowatt. The evening brought a very passionate discussion about testing in the financial sector between Carly, Paul and Iain. I just loved watching it, for two reasons. First, the passion all three of them show is fantastic. And second, I got to witness a discussion about testing between three native speakers. English is not my mother tongue, and neither is it for my colleagues in our Munich office, but it is our language of choice, since we are an American company and not all of my colleagues speak German. Our level of skill and precision in using English is limited accordingly. Being surrounded by native speakers and listening to the discussion was an absolute pleasure.

The “Lightning strikes the Speakers” keynote session on the evening of Day 2 was special. It was very intense, but all seven speakers talked about great topics, all regarding the future of testing. Testing will experience a huge change in the near future; it will be a challenge, but those talks showed how it can be mastered and what is necessary. I am happily looking forward to what the near future will bring to testing, and I am ready to be a part of it.

Julie Gardiner’s talk about survival skills for testers spoke from the heart. Experiencing Julie’s talk was a pleasure. She has a great stage presence and her 5-step message was spot on.

Meeting the NewVoiceMedia team at EuroSTAR was very nice. I finally had the chance to meet Rob Lambert in real life. Rob is a person I greatly admire for his stage presence and I am very thankful for all the valuable information he shares with the community. Meeting Kevin Harris and Raji Bhamidipati was a pleasure, too. NewVoiceMedia seems to breed great people, as you can see from the many speakers from that company on the EuroSTAR program.
And of course my buddy Dan Billing is also a part of the NewVoiceMedia family. Seeing him again was great as well, and I am happy to pair up with him for Let’s Test.

Now EuroSTAR 2015 is really over for me. All blog posts are written and on Monday and Tuesday I will share some experience from the conference with my team in the office. Now it’s time for me to prepare for my own first ever conference talk in Brighton at TestBash in March.

I can’t wait to meet so many enthusiastic testers in one place again soon.

EuroSTAR 2015 – Do-over session – Julie Gardiner’s Survival Skills for Testers

I was very sorry when I missed Julie Gardiner’s original talk on Day 2. All the happier I was that her talk was selected for the do-over session: the session people wanted to see again, wanted others to see, or wanted to see for the first time. The do-over session is voted on by the audience.

The introduction was planned by Declan, but Colin Cherry, or rather Klaas Kers for a couple of days, got the honor of introducing his long-time friend Julie.


Julie’s talk was all about what a tester needs to do today to stay relevant tomorrow. A topic I couldn’t agree with more. In times of rapidly changing technology, new approaches to development, and faster times to market, it’s important for testers to improve their skills in order to still have a job tomorrow. I have heard more than once now that most people working in “test” today won’t work in test anymore in a couple of years. Those who want to should listen closely to what Julie has to say.


The first point is about mentality. Testers should no longer be the “quality police”, but rather see themselves as “enablers of quality”. Testers need to provide value throughout the software development lifecycle, and that works much better with a helper mentality than with an enforcer mentality. Enable quality by being a trusted advisor, the conscience of the project, a trainer and coach, and a quality guru; provide guidance and embed quality in the whole lifecycle.

You should have a passion for testing. “If testing isn’t fun, you are doing it wrong!” This sentence is worth so much. You can make testing fun by constantly learning new things, seeing improvements and making them happen, and finding opportunities to test everything. Testing can be so much fun, if done right.
You need to understand your skills and how to foster them. Julie suggests the Dreyfus model and an evaluation of your style of testing. Evaluate your scores, sum up the left columns for both the X- and Y-axis, and place your dot on the map. Then you see what kind of tester you are.


“Take ownership of your career” is an important message. Most people still expect their companies to help them with their career. But many companies can’t or don’t want to afford the huge amounts of time and money that it takes these days to stay up-to-date and relevant, especially in testing. So you need to take care of it yourself if you still want to be relevant tomorrow.
What you can do is learn (self-education), find a mentor who helps you, and create an action plan of where you want to go and how you want to get there.

Demonstrate and report the value of testing. Testing is expensive, but compared to what? Not testing is not an option. So show value by how much you saved the company, demonstrate effectiveness, and use a language management can understand. Risk rules! Test cases don’t!

And it’s important to retain your integrity. “Integrity is the consistency of actions, values, methods, measures, and principles.” Avoid being a “yes” person. Be the conscience of the project management. And stand up and be counted! That was in reference to a story Julie told about an experiment in an elementary school, where someone convinced the class to trick one girl by saying that 2+2 is 5. When the teacher asked what 2+2 is and everyone said it’s 5, the girl, who was still convinced that it’s 4, also said it’s 5, because she didn’t want to stand up against the class.
As a tester it’s important to be the one who stands up!

Choose your battles wisely. Only some battles are worth fighting for. Save your energy and choose wisely!

Survival means standing out and making a difference. Julie closed with a quote from Franklin D. Roosevelt: “There are many ways of going forward, but only one way to stand still.”

 

My personal summary is that this was one of the best talks I saw at EuroSTAR, and I am so glad that I had the chance to see it in the do-over session. Julie has a wonderful stage presence and an enthusiastic way of delivering her talk. She moved left, center and right, interacting with her slides and the audience, using the whole stage. The great topic and her presence made it a truly outstanding talk.
I was just sitting there, nodding. The topic is spot on, I fully support everything she presented, and I hope that everyone who talks about this topic reaches many people.
I had the chance to thank Julie in person for her talk, and I would have loved to spend more time talking to her. Colin was so right about her! Thank you.