Exploring, Testing, Checking, and the mental model.

The beginning

You might be well aware of “Testing and Checking Refined” by James Bach & Michael Bolton, and maybe you are also aware of “The New Model of Testing” by Paul Gerrard. I read both more than once and/or watched the videos and webinars. I find many useful aspects in both pieces of work.

But I want to explain more of what happens behind the scenes of testing: what testing actually is when we look behind all the obvious actions. I don’t want to explain what we obviously do in a project when we perform a test. I want to try to explain what goes on in our brains when we test and check, and what value that information brings to a project.

Testing and checking

Set-up, assumptions, and expectations

Inside a good tester’s mind there is a huge net of information and interlinked models, with many anchor points to add or retrieve information, all based upon her knowledge.

When a tester gets the assignment to test a piece of software, which we will call the system under test (SUT), she immediately starts to generate a mental model and to collect questions and information, without having any further details yet. This starts in the first seconds and continues until the tester gets access to more information; a matter of maybe seconds or minutes, but sometimes hours or even days.

But who or what informs the tester’s mental model? It’s her experience, paired with a certain expectation depending on the context, and curiosity. This sets the foundation of the mental model of the SUT, places it upon her existing net of models, and generates sources for oracles and heuristics.

And a good tester usually has more questions than answers.

Testing

When there finally is access to more information, the testing begins: testing in the sense of learning about the SUT and collecting new information. The tester asks questions of people, reads specifications, and interacts with the system through experiments. Answers get embedded in the model, new questions turn up, and further experiments are thought up.

Learning about a system (and software is always a system) is, in my opinion, always connected to mental modelling. It’s not learning a poem by heart; it’s trying to figure out how something works, and that is directly connected to systems or model thinking. Experience has already created a rich set of models in our minds, including their interconnectedness. When you now learn about a new SUT, you will start seeing smaller models within the new big model that look familiar. So you will create a link to an existing model, including its assumptions and expectations.

Our experience and our current vision of the SUT set certain assumptions of how the model or parts of the model should work. There will be questions and experiments to verify these assumptions.

There are parts of the model that may be blank and need to be explored from scratch. But these parts, too, will start with the assumption that there is something worth exploring, and certain expectations or desires exist that help to frame the first questions or experiments, usually based on heuristics.

Models are not reality, and never will be. But the more information the tester collects, the more accurate and helpful her mental model becomes at making predictions about the behavior of the SUT. Questions and experiments or tests will be formed whose purpose is to exclude other possible models and ideas of the SUT.

Important to remember: when discussing the SUT with someone, e.g. a stakeholder, you are testing the mental model of the stakeholder, not the actual SUT. The same is true for documentation; it’s the author’s view of the SUT, and there might even be differences between what is in the author’s mind and what is written.

Critical and creative thinking need to be applied, because a whole army of fallacies awaits the tester behind every corner. But critical and creative thinking are beyond the scope of this article.

Checking

When a tester is testing, she creates a mental model that reflects all the information and facts, and her interpretation of them, and she creates concepts and theories of how parts of the system behave, up to a particular point in time. When the tester has reached a certain confidence that her mental model, or parts of it, reflects the SUT, checks will be created and executed.

There are also parts of the model that are connected with existing models, which already bring a set of encoded instructions (= checks) with them.

A check is an algorithm describing certain steps to be performed during checking that should demonstrate the desired behavior of the SUT, based on the tester’s mental model and assumptions. It is both an attempt to make tacit knowledge explicit and an attempt to show that the assumptions are aligned with the real thing.

A positive outcome of a check, where the observation matches the expectation, shows that the SUT could still fit the underlying mental model. The problem with most checks is that they focus on narrow aspects of the model. Especially when automated, checks often assert only the absolute minimum of facts necessary to call them checks by definition.

When a human performs a check, she is able to evaluate many assertions that often are not encoded in the explicit check.
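To make that contrast concrete, here is a minimal sketch. It is my illustration, not from any real project; `login()` is a hypothetical SUT invented for the example.

```python
def login(username, password):
    """Hypothetical system under test."""
    if username == "alice" and password == "secret":
        return {"status": "ok", "message": "Welcome, alice!", "session": "abc123"}
    return {"status": "error", "message": "Invalid credentials", "session": None}

def automated_check():
    """Asserts only the absolute minimum: one fact about one response."""
    result = login("alice", "secret")
    return result["status"] == "ok"

def human_style_check():
    """A human performing the 'same' check also evaluates assertions that
    were never encoded: is a session actually created, is the message
    sensible, does the response feel right?"""
    result = login("alice", "secret")
    return (
        result["status"] == "ok"
        and result["session"] is not None
        and "alice" in result["message"]
    )

print(automated_check())      # True
print(human_style_check())    # True
```

Both checks pass here, but the first one would keep passing even if the SUT stopped creating sessions or greeted the wrong user; that is the narrowness the paragraph above describes.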

One check alone does not reflect a mental model, because a single check can fit a gazillion different models. The whole set of checks narrows down the number of possible models, but will never reflect the whole mental model inside the tester’s mind. The checks only represent key elements of the mental model, which leaves a lot of room for interpretation in between.

When a check fails, something is wrong: either the implementation of the check, the mental model, or the SUT. A failed check does not automatically reflect a defect in the SUT. A failed check is an invitation to explore and investigate where check and SUT differ. Testing is needed here.

Checking – in the context of regression testing – is used to confirm that parts of the mental model and the SUT still fit together and have not obviously changed.

If a new tester is trying to learn about the SUT, the set of checks, often called “the test set”, can help to frame the key elements of the model.

The Role of Acceptance Criteria

Acceptance Criteria describe elements of the common understanding of the SUT within the project team. The danger of using acceptance criteria is that stakeholders and team members can easily reach a shallow agreement about their understanding of the SUT. A list of acceptance criteria can never replace a serious discussion and sharing of ideas about the model behind the SUT.

The Role of Bugs

Bugs are deviations of the actual behavior of a SUT from the desired one. Since mental models will differ, the perception of whether a bug is a bug depends, in some cases, on the individual.

“A bug is something that bugs someone who matters.” – James Bach

The tester herself is usually not considered someone who matters, but she should represent the views of those who matter. Therefore a tester needs to gain an understanding of how the people or systems that matter use the SUT, or how the SUT solves a problem for them.

The Tasks of a Tester

The task of the tester is to create, evolve, and enhance the model of the SUT; to seek discussions with stakeholders like product owners, users, or business analysts; and to ask the right questions to hone the understanding of the SUT.

The tester may encode a set of checks or checkpoints based upon the mental model. Those can be used for performing regression checks or for automation.

When testing, the tester’s job is to compare the actual SUT with the current common understanding of how the SUT should behave, using all sorts of experiments and facilitating tools. This is heavily influenced by the model the tester has in her mind.

Consequences

This understanding of testing and checking implies:

  • the people responsible for testing need the skills to build, enhance, and utilize a good mental model
  • the stakeholders need to share their understanding of the SUT, the problem and the solution with the tester
  • the tester needs to reflect the stakeholders’ understanding of the SUT to find the important bugs
  • when performing a search for regression, a tester should prefer checklists that invite testing over test scripts that invite tunnel vision on the SUT
  • don’t reduce your testing or checking to acceptance criteria

Summary

Testing is all about the design, construction, evolution, and extension of a mental model. Testing produces the checks to validate the conformity between mental model and SUT.

The mental model should reflect the perception of the stakeholders rather than that of the tester.

As long as all performed checks yield positive results, the SUT is still reflected by the created mental model. That does not prove that this model is the right one. Negative check results entail testing to find out whether the mental model, the SUT, or the check needs to be adapted.

This does sadly not fit into 140 characters.

And for the end: this is how it all started

On one of my recent lunch break walks I suddenly had an idea of how to compress, in 140 characters, the theory that had been brewing in my brain for quite a while.

I wanted to explain what happens behind the scenes of testing, in your mind. This was my original try:

[embedded tweet]

After the tweet went unnoticed for a while, none other than Michael Bolton and James Bach challenged my idea. In the end I had to realize that I needed more than 140 characters to explain it.

I don’t want to recount the discussion here, so if you are interested in what happened that day, please investigate via the above tweet.

Michael Bolton invited me to put my thoughts into a blog post, and I did. What you just read is the third attempt to get it all right.

Thank you Michael Bolton and James Lyndsay for the discussion in the test lab at EuroSTAR.

EuroSTAR 2015 – my personal summary

It’s now over a week after the end of EuroSTAR and I just finished my last article about this fantastic conference. You can find them all here.
For me it’s now time for a personal review and summary.

First of all I want to thank Emma and the EuroSTAR team for inviting me to the conference and for having me as a media partner. The conference was well-organized and, in my eyes, flawless in execution. The two dinners were at stunning locations and the food was really good. Well done!

It was great to meet again so many people whom I first met at Let’s Test in May, and who welcomed me back with an open and friendly spirit.

Guna is a great person and brought so much energy to the Test Lab. This Baltic, blue-haired bundle of energy made me smile every time I went there. Guna, it was an absolute pleasure to finally meet you in person. It was always fun to interact with Guna on Twitter, and it will be even more fun now that I have a vivid image of her before my inner eye.

Finally meeting Colin “Jim” Cherry, aka Klaas Kers, meant so much to me. Colin just beams with wisdom. There was this short (well, for me most people are short), quiet, friendly, and open-minded person, not exactly how I had imagined him, and he was an inspiration to my EuroSTAR experience. Colin made his TestHuddle blogs so special that I questioned the usefulness of all my writing so far. Since meeting him I want to re-read all his blog posts with his person in mind. Colin, you are an awesome person, and I am very thankful that we finally met.

Michael Bolton invested more than two hours of his time in helping me review a blog post I had been writing a few weeks earlier and in discussing the nature of testing with me. It was an even greater pleasure that James Lyndsay joined the conversation and allowed me a short look into his mind and how he thinks. You are both an inspiration to me! Thank you, gentlemen.

At the community dinner I had the pleasure of sharing my table with Allison Wade, who is responsible for all the STAR conferences in North America, and more, and Shmuel Gershon, who later that week was announced as program chair of the next EuroSTAR conference. Chatting with these people in the caves underneath a chateau was special for me.

At the conference awards dinner, in the next cave location, I was joined by Carly Dyson, Nick Shaw, Paul Coyne, Kristoffer Nordström and Iain McCowatt. The evening brought a very passionate discussion about testing in the financial sector between Carly, Paul, and Iain. I just loved watching it, for two reasons. First, the passion all three of them show is fantastic. And second, witnessing a discussion about testing between three native speakers. English is not my mother tongue, and neither is it for my colleagues in our Munich office, but it is our language of choice, since we are an American company and not all of my colleagues speak German. Our level of skill and precision in using English varies accordingly. Being surrounded by native speakers and listening to the discussion was an absolute pleasure.

The “Lightning strikes the Speakers” keynote session on the evening of Day 2 was special. It was very intense, but all seven speakers talked about great topics, all regarding the future of testing. Testing will experience a huge change in the near future, and it will be a challenge, but those talks showed how it can be made possible and what is necessary. I am happily looking forward to what the near future will bring to testing. I am ready to be a part of it.

Julie Gardiner’s talk about survival skills for testers spoke from the heart. Experiencing Julie’s talk was a pleasure. She has a great stage presence and her 5-step message was spot on.

Meeting the NewVoiceMedia team at EuroSTAR was very nice. I finally had the chance to meet Rob Lambert in real life. Rob is a person I greatly admire for his stage presence, and I am very thankful for all the valuable information he shares with the community. And meeting Kevin Harris and Raji Bhamidipati was also a pleasure. NewVoiceMedia seems to breed great people, as you can see from the many speakers from that company on the EuroSTAR program.
And of course my buddy Dan Billing is also a part of the NewVoiceMedia family. Seeing him again was also great and I am happy to pair up with him for Let’s Test.

Now EuroSTAR 2015 is really over for me. All blog posts are written and on Monday and Tuesday I will share some experience from the conference with my team in the office. Now it’s time for me to prepare for my own first ever conference talk in Brighton at TestBash in March.

I can’t wait to meet so many enthusiastic testers in one place again soon.

EuroSTAR 2015 – Do-over session – Julie Gardiner’s Survival Skills for Testers

I was very sorry when I missed Julie Gardiner’s original talk on Day 2. I was all the happier that her talk was selected for the do-over session: the session people wanted to see again, wanted others to see, or wanted to see for the first time. The do-over session is voted for by the audience.

The introduction was planned by Declan, but Colin Cherry, or rather Klaas Kers for a couple of days, got the honor of introducing his long-time friend Julie.


Julie’s talk was all about what a tester needs to do today to stay relevant tomorrow: a topic I couldn’t agree with more. In times of rapidly changing technology, new approaches to development, and faster times to market, it’s important for testers to improve their skills to still have a job tomorrow. I have now heard more than once that most of the people working in “test” today won’t work in test anymore in a couple of years. Those who want to should listen to what Julie has to say.


The first point is about mentality. Testers should no longer be the “quality police”; better to see yourself as an “enabler of quality”. Testers need to provide value throughout the software development lifecycle. That works much better with a helper mentality than with an enforcer mentality. Enable by being a trusted advisor, the conscience, trainer and coach, and quality guru; provide guidance and implement quality in the whole lifecycle.

You should have a passion for testing. “If testing isn’t fun, you are doing it wrong!” That sentence is worth so much. You can make testing fun by constantly learning new things, seeing improvements and making them happen, and finding opportunities to test everything. Testing can be so much fun, if done right.
You need to understand your skills, and how to foster them. Julie suggests the Dreyfus model and an evaluation of your style of testing. Evaluate your scores, sum up the left columns for both the X- and Y-axis, and place your dot on the map. Then you see what kind of tester you are.

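The scoring mechanic Julie described (sum up two groups of scores, place a dot on a map) can be sketched roughly like this. The questions, midpoint, and quadrant labels below are placeholders I made up for illustration; Julie’s actual questionnaire and labels were on her slides.

```python
def place_dot(x_score, y_score, midpoint=10):
    """Map two summed self-assessment scores onto one of four quadrants
    of a 2x2-style map. The midpoint and labels are invented placeholders."""
    right = x_score >= midpoint
    top = y_score >= midpoint
    if top and right:
        return "top-right quadrant"
    if top:
        return "top-left quadrant"
    if right:
        return "bottom-right quadrant"
    return "bottom-left quadrant"

# X = sum of the answers in the first column group,
# Y = sum of the answers in the second.
x = sum([3, 4, 5])   # 12
y = sum([2, 3, 1])   # 6
print(place_dot(x, y))   # bottom-right quadrant
```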

“Take ownership of your career” is an important message. Most people still expect their companies to help them with their careers. But many companies can’t, or don’t want to, afford the huge amounts of time and money it takes these days to stay up-to-date and relevant, especially in testing. So you need to take care of it yourself if you still want to be relevant tomorrow.
What you can do is learn (self-education), find a mentor who helps you, and create an action plan of where you want to go and how you want to get there.

Demonstrate and report the value of testing. Testing is expensive, but compared to what? Not testing is not an option. So show value by how much you saved the company, demonstrate effectiveness, and use a language management can understand. Risk rules! Test cases don’t!

And it’s important to retain your integrity. “Integrity is the consistency of actions, values, methods, measures, and principles.” Avoid being a “yes” person. Be the conscience of the project management. And stand up and be counted! That was in reference to a story Julie told about an experiment in an elementary school, where someone convinced a class to trick one girl by saying that 2+2 is 5. When the teacher asked what 2+2 is, and everyone said it’s 5, the girl, who was still convinced that it’s 4, also said it’s 5, because she didn’t want to stand out against the class.
As a tester it’s important to be the one who stands up!

Choose your battles wisely. Only some battles are worth fighting for. Save your energy and choose wisely!

Survival means standing out and making a difference. Julie closed with a quote from Franklin D. Roosevelt: “There are many ways of going forward, but only one way to stand still.”

 

My personal summary is that this was one of the best talks I saw at EuroSTAR, and I am so glad I had the chance to see it in the do-over session. Julie has a wonderful stage presence and an enthusiastic way of delivering her talk. She moved left, center, and right, interacting with her slides and the audience, using the whole stage. The great topic and her presence made it a really outstanding talk.
I was just sitting there, nodding. The topic is spot on, I fully support everything she presented, and I hope that everyone who talks about this topic reaches many people.
I had the chance to thank Julie in person for her talk, and I would have loved to spend more time talking to her. Colin was so right about her! Thank you.

EuroSTAR 2015 – Everything I know about Testing I learned from the Scientific Method

Paul Coyne’s talk was about the scientific method and what he learned from it about testing.

To start with a preface: in a talk between Keith Klain and James Bach about what testers can do to improve, James said, “I want you to learn how to design experiments”. That means learning about the scientific method, the method behind testing. So I put it on my to-do list. I had heard about it before and read the Wikipedia article, but never had the time to dive a bit deeper.

So here was Paul, talking about the scientific method; the room was quite full, and Paul seemed comfortable presenting there.

Paul is a failed zoologist. So he has a scientific background and came to testing via some detours. His knowledge of the scientific method helped him to understand “testing”.

I have heard that before, but it sums it up greatly: it’s not about “Eureka!”, it’s more like “that’s funny”. And serendipity plays a big role in both science and testing. Rikard Edgren has some good insights into that topic as well.

Science is not a body of knowledge, it’s not a textbook, it’s not a tool, and most of all, it’s not unchangeable. The scientific method is a way of thinking and a way of investigating. It’s there to find new information.
One black swan is all it takes. That’s a reference to Nassim Taleb.


The scientific method is “a systematic and logical approach to discovering how things work”. And that’s exactly what testing is all about.

An observation leads to a hypothesis that can be proven wrong; the hypothesis needs to be testable. And all a scientist tries to do is prove a hypothesis wrong. Testing is trying to show that the code does not work as intended.
A failure in an experiment is not the failure to get the expected result. The same goes for testing, and please don’t confuse this with test case results like “Passed” and “Failed”. You should always test for failure, because positive testing is not very helpful.
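A tiny sketch of that mindset, using a hypothetical `average()` function as the code under test (my example, not Paul’s): instead of confirming the happy path, we actively hunt for an input that refutes the hypothesis that the function works for any list of numbers.

```python
def average(values):
    """Hypothesis: computes the mean for any list of numbers."""
    return sum(values) / len(values)

# A positive test: it passes, and tells us very little.
assert average([2, 4, 6]) == 4

# Falsification attempts: inputs chosen to break the hypothesis.
falsified = False
for candidate in ([], [1e308, 1e308], [float("nan")]):
    try:
        result = average(candidate)
        if result != result:           # NaN: the "mean" is not a number
            falsified = True
    except ZeroDivisionError:          # the empty list refutes the hypothesis
        falsified = True

print(falsified)   # True: one "black swan" input was enough
```

One failing input is the black swan: the hypothesis “average() works for any input” is disproven, which is exactly the information the happy-path test could never deliver.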

Good testers value the scientific method, as Paul summarized on one of his slides.

Also interesting for me was the statement: “All prior knowledge is provisional until disproven.” That is so true, and often seen in history.

And to come to an end here: a good test result is not “It works!”, but “I did my best and I didn’t find anything that’s wrong.”

This talk was important in many ways. For rookie testers who don’t have much experience, or who were trained classically in some foundation training, it provides valuable insight into what testing really is. For more seasoned testers who may have lost track, it was an important lesson and will bring some of them back onto the right road. For me it was fantastic both to see a presentation about the scientific method at a testers’ conference and to watch the audience and the more and more nodding heads. And it also confirmed my beliefs about what testing really should be in a world of shallow testing.

As a personal note to the end, I had the chance to meet Paul at the conference award dinner and chat some more with him, and even better experience him in a passionate discussion. Paul is a great guy, and I am really happy I went to his talk and met him again at the dinner.

The iSTQB foundation certificate guarantee

When I scrolled through my timeline this morning I saw this ad.


ISTQB Certified Tester Trainings 2016: with exclusive guarantee to pass [the exam] and guaranteed appointments!

This makes my toenails curl. As some might have noticed, I don’t hold the highest opinion of the ISTQB certificate. Please note: the certificate. It’s not about the training. If they guarantee that you will pass the exam, what value does taking the exam have?

The ad sounds like: “Take the exam with us and you will get the certificate!” So why waste the time? Book the course, get the certificate.

It’s unfair to those who try to get the overrated certificate on their own. If they could afford to buy the training at imbus, they would be guaranteed to get it.

But the ad is just a false promise! (@Rex: not helping to make ISTQB more credible, in my opinion. And imbus is not even ISTQB.)

When you read the long text behind the tweeted ad, you get some details in the small print:
you will be optimally prepared for the exam questions, and in case you fail, imbus will pay for another attempt within 2016.

First of all, in my personal view, the questions of the ISTQB exam are not the best. The type of questions, how they are asked, and the often strange answers don’t encourage learning about testing. You can find some interesting insights here (there are 6 parts!!!) and here (near the end of the podcast).
So if your training focuses on the exam questions, you lose time you could spend really learning about testing. And I will say at this point that the ISTQB foundation syllabus itself holds some good basics for people who want to learn about formal ways of testing. Sadly, that is often the job reality, but it’s not the topic here.

The worst thing in my eyes is that, at least in Germany, at the time I got my certificates, you didn’t get a chance to see which questions you answered wrong. So you cannot learn from a failed exam, and in case of a result that is good enough but not 100%, you don’t know where your knowledge failed you, and you don’t care any more. At the advanced level I only got the results per category, showing me that I didn’t get everything right, but not what! NOT HELPFUL! Not for ISTQB critics, and most of all not for those who like ISTQB and want to continue with the next level.

In the case of imbus, when you have failed the first exam, you don’t know why you failed, but you get a second chance. First of all, that’s not helpful (see above); all you get is a second lucky shot in the ISTQB lottery. There might be a certain number of people who get it right on the second attempt. But what about those who still don’t? No third chance, no lessons learned; where is your guarantee now? Sorry imbus, but that ad is a false promise.

EuroSTAR 2015 – my third day in review

The third day kicked off with Rikard Edgren and his “Growing from a reckless bug hunter to a stakeholder conversationalist”. Rikard’s message was that you need to earn respect by finding valuable information. Testers are in the information business. Testing is never better than the communication of its results!


Rikard described his journey to becoming context-driven in three major steps. It started with his biggest mistake: he and his team found 30 bugs, and they were proud. And they wondered why nobody came back to them with a response. The reason was that they were context-unaware and had failed to understand the real testing mission.
The second step was the poster story. Rikard and his colleagues published the famous poster of quality characteristics (I have it hanging next to my desk myself), and they felt like context-hipsters. His tip was to use the poster for finding test ideas and James Bach’s list for test strategy purposes.

But Rikard was not happy with the poster, because it uses his namespace. His new approach suggests starting with a blank page and asking the stakeholders what is important to them. Use the customer’s words.

The last step is “The Conversationalist”. Rikard does more talking than testing nowadays and values information pull over information push. You have to adjust your language to the stakeholder and invest the time to make sure you are testing the right thing.

Explain your testing: why are you testing, and why is your test strategy good? Anchor your test strategy; often the test report is not the problem, it’s the strategy that is not understood.

My takeaway from that session is that I am not the only one who has made mistakes due to misunderstanding the mission. In my opinion it is important for a tester to speak the languages of the parties he is working with, and to be able to translate from his own language/namespace to the stakeholders’ namespace. Rikard fortified my opinion.

 

Next on my list was Geoff Thompson talking about “Test Process Improvement – How hard can it be?” The talk was mostly about change, and why it’s so hard to improve your test process. I liked the statement: “It seems to be easier to keep paying people to do things wrong.” The key messages of the talk were taken from John Kotter’s book “Our Iceberg is Melting”. There is also a nice video available.


The Dunning-Kruger effect is important to consider when going through change: the unskilled overrate their abilities, while the skilled underrate theirs. And there is also the effect that disorganized people accept change with open arms, while organized people already think they are effective.
At the center of every process, and every change to it, stand people and culture. And there will always be someone in a change project who says: No! As a change manager you have to concentrate on those people to be successful.

My takeaway was that “change is difficult”. Well, we are humans, ain’t we, and we don’t like change.

 

It was time for the next “Soap Box Session”, and it was my buddy Dan Billing up on the box. Dan gave a shout-out to the Weekend Testing Europe chapter, which is a great institution for improving your testing. So far I have only joined Weekend Testing Americas sessions, but they are all worth attending. I can only affirm Dan’s statement: “Join Weekend Testing!”
It would not be Dan if he talked only about non-security topics. So there was a second part, and it was about EXTERMINATE! Dan’s mnemonic for security testing, named after his favorite Doctor Who villain!


 

Next up was Michael Bolton and his statement “No more exploratory testing!” I have read “Exploratory Testing 3.0”, so I knew roughly what was coming, but it is still a pleasure to see Michael on stage. That was obviously the view of many, because the auditorium was packed.

The history of testing at the beginning of the talk very much matched my experience of the past 13 years as a tester. In 1972 there was the book “Program Test Methods”; it tried to structure testing and completely ignored the human aspect of testing. Testing became confused with its artifacts. Testing became over-formalized by processes (see also the latest attempt: ISO 29119), and testing was all about the test cases. Since computers are procedural, the thinking went, you have to follow procedures to test them.
It was also the time when “ad hoc” and “exploratory” got confused, and many mix up “unscripted” with “unstructured” when talking about exploratory testing. Michael’s article “Testing without a map” shows that exploratory testing has a lot of structure. The key elements of exploratory testing are freedom and responsibility. Scripted testing controls the tester from the outside. And people seem to forget that you need to do exploratory testing first to get to scripted testing.

We have to relax our degree of prescription to give testers the freedom and responsibility they deserve to fulfill their tasks. And we always seem to forget that there is no other cognitive profession that uses cases to frame and describe its work. And very important: don’t confuse checklists with test cases.

So the conclusion is: all testing is exploratory, so you can skip the “exploratory”. And “scripting” is just an approach.


My takeaway from this talk was learning the background of why, in my former company, which was heavily ISTQB- and waterfall-driven, exploratory testing had a bad reputation: 1) they simply did not understand it, 2) they tried to reduce the human factor. Which can also be seen in the naming: Test Factory!
My second takeaway is that my approach of the last 2 years, to start with heavy exploratory testing and then produce correct and useful test scripts for regression testing purposes, was correct. Only some know why I had to abandon it, and I won’t state it here.


And then we came to the closing keynote, “Wild West Security” by Paco Hope. Paco based the metaphor for his key message on the famous western movie “The Magnificent Seven”. He described seven roles in an IT project that all have their responsibilities for security and have to contribute to it. All roles have certain specialties that make them predestined to contribute to security. He described how testers, DevOps, product owners, project managers, architects, developers, and security specialists can help make their product secure.


The key message was: “Everyone who has something to do with Software has something to do with Software Security”.

It was a fun metaphor showing how everyone can, and has to, contribute to software security. And my key takeaway is to learn more about security testing and the aspects I need to be aware of.

The ladies from the test lab took the stage:

Carly, Adina, Jyothi, Susan, and Guna did a fantastic job providing challenges and riddles and hosting a wonderful area in the Expo where people could meet, discuss, and learn. Thank you, lab rats, you were fantastic!

Then it was time to announce the destination and the conference chair for EuroSTAR 2016: Stockholm, from Oct 31st to Nov 3rd, with Shmuel Gershon as conference chair! In my opinion, two excellent choices!

And then it was time for the do-over session. Attendees could vote for sessions they wanted to see, see again, or wanted others to see. The winner was a session I had missed on Wednesday and wanted to see, lucky me: Julie Gardiner talking about “Survival Skills for Testers”. That was my session of day 3, which is why I described it in an extra post.


To conclude a wonderful experience, I finally went to conference chair Ruud Teunissen’s “How to share your lessons learned”. I have already shared a lot of information and insights from the three days of EuroSTAR with you; next up will be my team.

Thanks for staying until the end. I hope you liked my review of day 3 of EuroSTAR.

EuroSTAR 2015 – my second day in review

Day 2 of EuroSTAR 2015 kicked off, of course, with the Ruudmap of the day by conference chair Ruud Teunissen. The day was packed with one keynote, 24 track sessions, and “Lightning Strikes the Speakers”.

The keynote of the day was delivered by Jeffery Payne: “Test Automation. The DevOps Achilles Heel.” In contrast to yesterday’s keynote, which was all about the future, DevOps is already here! DevOps is a philosophy to get everybody on the same page and to foster collaboration and communication. For Jeffery, the statement “we can’t apply DevOps here” is just an excuse, because he has seen the DevOps approach work in many different environments.

The essential conflict of Dev vs. Ops is that devs desire rapid rollout, while ops people desire stable conditions. As in yesterday’s talk by Sujay, Jeffery described continuous integration and continuous delivery as the key elements of a successful DevOps approach. The other CD, continuous deployment, might work well for some big companies and also lots of small ones, but it does not suit many others.

Automating everything along the chain makes it possible to “fail fast” and saves time on time-consuming activities that are better handled by a machine or script.

“DevOps is the best thing that ever happened to testing.” I am not so sure about that. In the overall process of software development, the DevOps approach forces collaboration, which is good for testing, because most issues in a project are interpersonal, not technical.

I found the “Test results are ‘blinky’” slide funny and sadly true in too many cases. But the ability of DevOps tools to create fancy, blinky test reports does not change the fact that management doesn’t understand what testing is. The communication between testers and management needs to be improved, not misrouted via blinky reports just because you can.

One of the closing statements was that you can start running more automated tests. But do I really want to? Why not run only the tests that matter most and abandon the rest, since they will be difficult and expensive to maintain? After the test run, you still need someone to interpret the results. Good testing is not simply counting passed tests and waving the build through the quality gate to the next environment.

Overall I had mixed feelings about the keynote. It was a solid overview of the elements of DevOps, but especially towards the end I found that Jeffery was sending the wrong signals about what DevOps does to testing. It might still be the best thing that ever happened to testing, but if so, then for other reasons. At least from my point of view.

Next on schedule was the track session from Paul Coyne about “Everything I know about testing I learned from the scientific method”. I wrote about this experience in an extra blog post.

I had the honor that Michael Bolton took some, well not just some, but about two hours, of his precious time to review one of my upcoming blog posts. And the discussions we had in the test lab during the review, also with other people like James Lyndsay, were intense and fantastic. I missed two track sessions because of that, but I hope everyone understands it was for a very good reason.

I will stop writing for tonight and continue tomorrow. Stay tuned for info about the ISO 29119 discussion track, Kevin Harris’ “Top 10 Mistakes Testing in Agile”, Grace O’Mahony’s “Just one slide inspired me to be a better coach for testers in Agile”, and the really intense but insightful “Lightning Strikes the Speakers” session. And of course some photos from the Conference Awards Dinner.


During the lunch break I met Alan Page and had the chance to thank him for his inspiring podcast and the ideas he and Brent are sharing. If you haven’t listened to it before, do so now. They deserve a fourth listener. It’s great content.

After the lunch break I went to the “Let’s talk about the ISO 29119 Standards” talk/discussion. To be honest and frank, it was a bit disappointing. The introduction was, due to the short amount of time, too shallow for the part of the audience that had not heard about the standard before. When the floor was opened for questions, the expected suspects raised their hands.

The first question came from a person, whose name I sadly did not get, who was the first person I ever met who actually uses ISO 29119 and likes it. His problem is that he struggles with the many “test plans” the standard describes, and he asked if there isn’t a better nomenclature to name the different documents to produce. There are a lot of documents titled “test plan” something in the standard. After that, Iain McCowatt asked for new evidence that proves the standard effective, as ISO mandates for a standard to be accepted. That part still seems to be missing, and yet the standard has been accepted by the board so far. Michael Bolton questioned the standards that were used to produce the standard, and Karen Johnson asked a most fascinating question, at least for me: why are Anne Mette and Stuart personally and professionally engaged in creating such a standard? When speaking with Karen afterwards, which was an inspiring experience in just five minutes, she summarized the answers very well. They saw the different directions testing was heading (both have many years of experience), feared the unstructured chaos the world of testing had become over the years, and wanted to bring structure back to the chaos.

The session was a bit disappointing for me; the discussion part was rather unstructured, and I missed information actually showing that the standard can be helpful, besides creating huge amounts of documents. And I stand by my statement: for what they intended to do, why not create a body of knowledge, like the PMBOK? You could have all the elements and simply pick those you need or want, without the huge bureaucracy of justifying why you don’t use certain parts of a standard. Stuart answered that question for me months ago with “because ISO doesn’t produce BoKs”. My question today would be: did ISO ask Stuart to write a standard, or did Stuart ask ISO to create one? Is there really a need for this standard in the form of a standard?

And to conclude for now: as long as bugs don’t obey standards, why should I?

Next up was one of the many NewVoiceMedia speakers, Kevin Harris. Sadly I missed Rob’s and Raji’s talks earlier that day. The room was packed; people were even sitting on the floor.
I thought about writing an extra blog post about Kevin’s experience report and his lessons learned, but that might give away a bit too much for people who want to see the talk in the future. For me it was an interesting talk from two different perspectives. The presentation itself was well crafted, with the right number of slides, the right pace, and a balanced amount of information for a 45-minute talk. I don’t know who influenced whom, but it reminded me a bit of Rob Lambert’s talk from NTD earlier this year. Since they are colleagues, the idea might not be that far-fetched. The show was really good!

Now to the content. The topic was Kevin’s “Top 10 Mistakes Testing in Agile”. In my current projects we are far from Agile, and the first ~10 years of my career were pure waterfall, so I have no personal experience with Agile in any form. All I know about Agile is from talks, articles, and discussions. So I was looking forward to seeing what could go wrong with Agile. To give a short summary, the important takeaways described some of the basic principles of Agile and Scrum projects, as far as I know them theoretically. But it was interesting to hear Kevin’s examples of why things went wrong in the first place, when they should have known better.

Value the kick-off and ask all the questions, especially as a tester. Put EVERYTHING on the Kanban board that needs to be done in the project. Make stories as small as possible to decrease complexity. Outsourcing testing, especially into a different time zone, increases cycle time and silo thinking, which brings us back to waterfall. So keep your team together, which leads to the next message: try to keep the team stable. Let developers automate the checks. Communication is key and can save you valuable time. Don’t do more than necessary. Share responsibilities and tasks; don’t create key players. For Agilists that should be nothing too surprising, right? Well, for waterfall, V-model, and whatever-not-Scrum people, this is often the complete opposite of their reality.

Well, that was only nine. The 10th takeaway was that testers should not brag about the bugs they found. For me that would mean a rough change in motivation for many testers. Yes, we should shift left and find the bugs as early as possible, but why should we not be proud that we caught some? I think I will ask Kevin to find out more about it.

And then it was time to hear Iain McCowatt stand up on the Soap Box and give his take on ISO 29119. I hope it will soon be available on TestHuddle. I will update my blog as soon as I know it’s up.

Next on my agenda was Grace O’Mahony and the topic of “Just one slide inspired me to be a better coach for testers in agile teams”. To be honest, I did not expect much more than one slide, but it was a whole lot more. Grace told us about her experience introducing 20 teams to Agile, their different problems, and how she was able to overcome them. “The slide” was from Fran O’Hara’s talk “Acceptance Testing in Agile – what does it mean to you?” Sadly, the slide was not at the center of the presentation, and Grace went over it pretty quickly. She described auditing her teams and realizing that the testers were not as proactive as they should be in an Agile context; most of the teams even categorized themselves as A or B. They had to accept that change is slow, but they got through the transformation, took the stress off the testers, and ended up with happier people. All in all, Grace presented her success story of how Fran O’Hara’s model helped her improve her teams. I would have liked to hear more about THE slide: what was the key element that made her realize how to benefit from it, and how did it change her approach? The presentation had lots of slides and Grace was rushing a bit through them. For me the title was a bit misleading. But that’s just my view.

After a short visit to the TestHuddle and the test lab, people filled the auditorium for “Lightning Strikes the Speakers!”

The show started with a special ensemble. I made a video, but I still need to edit it. It will be published here soon.

“Lightning Strikes the Speakers” features seven speakers with five minutes each. If the five minutes are up and the speaker is still talking, they are struck by lightning, and yikes, that was loud. The topic was “Testing in the year 2030”.

Iris Pinkster – from team *T*E*S*T – Clinic – started with the “Human Factor”. Humans are good at throwing things over the fence, but in Agile and DevOps approaches you have to work together. So think up a new process and realize that you are a team!

Jeffery Payne referenced the opening keynote and the future of testing the Internet of Things. It will be the age of non-functional testing: 1. fault tolerance, 2. robustness in error handling, 3. privacy.

Derk-Jan de Grood‘s message was that we have to improve our testing. In Agile, the responsibility is placed low. And before he was struck by lightning, he asked the important question: is testing at the level we want it to be?

Michael Bolton criticized that we are sloppy in our thinking and that testers and managers don’t communicate well. Exchange “verify that” for “challenge the idea that”, and “validate” for “investigate” or “look for problems”. Many testers are demoralized by management and processes, and are no longer curious and playful. Change your language and improve your communication.

Rikard Edgren’s topic was that testers bring new perspectives and ideas. He described his first assignment, where the developers had all been trained at the same academy. Rikard had studied philosophy and other non-IT-related topics, and he was the first one to bring new perspectives to the project. He challenged all testers: what are the new perspectives we will find?

Rob Lambert‘s view into the future was that by 2030 there should be more qualified people to hire, because he has the problem that it’s hard to find good people today. And he presented his “10 things to improve”.

Last, but not least, Kristoffer Nordström predicted that in 15 years most testers will no longer work in testing. Because we get what we pay for, and a low price brings low skills. But testers in Agile need lots of skills. Cheap outsourcing is the norm, and there are many bad, non-thinking testers out there. Kristoffer’s dream is to hire sapient testers; companies should invest in their people, also to reduce turnover. Companies need to change from quick wins to long-term investment in people who take pride in building their skills.

The session was very intense and carried seven great messages.

At 6.45pm the buses left for “La Caverne” in Valkenburg for the EuroSTAR Conference Awards Dinner. A nice restaurant located in a cave, yes, again in a cave. The caves were full, dinner was delicious, and there was plenty of wine. I had the honor to share my table with Carly Dyson, Kristoffer Nordström, community reporter Nick Shaw, Paul Coyne, and Iain McCowatt. Folks, I enjoyed the passionate discussions, it was fantastic!

And the winners are:

Best Tutorial: Rob Lambert

Best paper: James Thomas

Tester of the Year: James Lyndsay

Well deserved!

The evening was fun, but after ~19 hours I was glad the day was finally over.