API Testing: Using the Fluent Interface Pattern for JSON Assertions

When you work with APIs, and especially when you test them, there is usually a step where you build some JSON request and assert on the response. That often means complicated JSON structures with nested elements, lists, and all kinds of nastiness. The focus of today’s article is on the Asserters.

Disclaimer: All samples are in Java, but the pattern can most probably be transferred to other languages as well.

Disclaimer 2: I haven’t seen this pattern documented or used anywhere else yet. So if you know of a place where it is already described and shared, please let me know in the comments!

The Problem

Let’s take an example from a simple CRM system that manages customer contacts. Here we retrieve the contact information for the Sample Company with ID 123.

request:
GET /api/v1/contacts/company/123

response:
{
  "id": 123,
  "companyName": "Sample Company AB",
  "address": {
    "id": 4254,
    "address1": "Main Street 1",
    "address2": "2nd floor",
    "postCode": "12345",
    "city": "Sampletown",
    "createdAt": "2020-08-21T13:21:45.578",
    "updatedAt": "2022-07-23T07:14:25.987"
  },
  "contacts": [
    {
      "id": 423,
      "contactName": "Gary",
      "contactSurname": "Henderson",
      "role": "Sales",
      "email": "gary.henderson@samplecompany.com",
      "phone": [
        {
          "number": "+123456789-0",
          "type": "LANDLINE"
        },
        {
          "number": "+43123456789",
          "type": "MOBILE"
        }
      ],
      "createdAt": "2020-08-21T13:21:45.578",
      "updatedAt": "2022-07-23T07:14:25.987"
    },
    {
      "id": 378,
      "contactName": "Henriette",
      "contactSurname": "Garstone",
      "role": "CEO",
      "email": "henriette.garstone@samplecompany.com",
      "phone": [
        {
          "number": "+123456789-12",
          "type": "LANDLINE"
        }
      ],
      "createdAt": "2020-08-21T13:21:45.578",
      "updatedAt": "2022-07-23T07:14:25.987"
    }
  ],
  "createdAt": "2020-08-21T13:21:45.578",
  "updatedAt": "2022-07-23T07:14:25.987"
}

In the 10 years since I first came into contact with APIs, I have seen a lot of test code that looks like this, and I have probably written my fair share of it.

@Test
public void firstTest() {
    Company company = MagicCRMRestRetriever.getContact(123);

    Assertions.assertEquals(123, company.id);
    Assertions.assertEquals("Sample Company AB", company.companyName);
    Assertions.assertEquals("Main Street 1", company.address.address1);
    Contact salesContact = company.contacts.stream()
        .filter(c -> "Sales".equals(c.role))
        .findFirst().orElseThrow();
    Assertions.assertEquals("Gary", salesContact.contactName);
    Assertions.assertEquals("Henderson", salesContact.contactSurname);
    Phone salesContactMobile = salesContact.phone.stream()
        .filter(p -> p.type == PhoneType.MOBILE)
        .findFirst().orElseThrow();
    Assertions.assertEquals("+43123456789", salesContactMobile.number);
}

And this is just a small example of an assertion orgy. When I first learned about the pattern I’m about to introduce to you, the project context was the airline industry, and the JSONs contained flight and ticket information. We are talking about at least 100-150 entries, easily. Imagine the big blocks of assertions.

To those of you who say: just use a comparator, create the payload you expect, and compare it with the one you get. Well, I don’t care about all fields in all cases, and comparing everything makes it easy to get distracted from what the test is actually about. I want to focus on the fields that are of interest to me.
See, for example, the createdAt and updatedAt timestamps, and the IDs of the contacts and the address.

The Solution: Fluent Asserters

Already in 2005 Martin Fowler wrote about the FluentInterface. But as my main focus from 2003 to 2012 was on Oracle PLSQL, SAP ABAP, Excel VBA, and other nonsense I have never seen again since, my knowledge of design patterns was a bit outdated. One of the developers in the aforementioned project came up with the nice idea of introducing Fluent Asserters. Attention: this is not the same kind of fluent assertion that is possible with e.g. AssertJ, where you chain assertions on the same variable.

First I will show you how sweet these Fluent Asserters can look in action, before I scare the hell out of you with the implementation details.
Here is the same test as above.

@Test
public void firstTest() {
    Company company = MagicCRMRestRetriever.getContact(123);

    AssertCompany.instance(company)
        .id(123)
        .companyName("Sample Company AB")
        .address()
            .address1("Main Street 1")
            .done()
        .contacts()
            .byFilter(c -> "Sales".equals(c.role))
                .contactName("Gary")
                .contactSurname("Henderson")
                .phone()
                    .byType(PhoneType.MOBILE)
                        .number("+43123456789")
                        .done()
                    .done()
                .done();
}

Doesn’t it look nice and neat, with all those indentations that make it look a bit like a beautified JSON file?

If you like what you see, it might be worth reading on and getting through all the nasty little details necessary to implement this kind of asserter. If you don’t like it, you had better stop reading here, if you haven’t already done so.

The Asserter Interface and Chained Asserter Interface

The basic problem that needs to be solved is the chaining of DTOs. In our simple example above, the JSON combines four different DTOs. And if you have a good API design, certain DTOs may be re-used all over the place. How do you distinguish which one to use? Well, you don’t have to, when all DTO asserter classes inherit from the Asserter interface and the ChainedAsserter interface.

public interface Asserter<TItem> {
    TItem get();
}

public interface ChainedAsserter<TItem, TParent extends Asserter<?>> extends Asserter<TItem> {
    TParent done();
}

public abstract class AbstractChainedAsserter<TItem, TParent extends Asserter<?>> implements ChainedAsserter<TItem, TParent> {
    private final TItem item;
    private final TParent parent;

    protected AbstractChainedAsserter(final TItem item, final TParent parent) {
        this.item = item;
        this.parent = parent;
    }

    @Override
    public TItem get() {
        return item;
    }

    @Override
    public TParent done() {
        return parent;
    }
}

The ChainedAsserter is the heart of the pattern, because it allows you to return to the parent (done()) in a way that the Asserter doesn’t need to know what type that parent is. This allows you to embed the same DTO into several others without the need to distinguish between Asserters, because the parent is set while navigating through the DTO. As long as the parent is, of course, also an Asserter. The topmost element must always be an Asserter.

The Implementation of the Asserter

When it comes to the actual implementation, the Asserter has to extend the AbstractChainedAsserter.

public class AssertCompanyDto<TParent extends Asserter<?>> extends AbstractChainedAsserter<CompanyDto, TParent> { ...

TItem is the DTO to be asserted, and TParent is another Asserter.
Every Asserter has two methods to instantiate it, instance(target) and chainedInstance(target, parent).
instance is called in the actual test, where the Asserter is instantiated with the actual object to investigate and assert.
chainedInstance is called only by other Asserters, when they hand over to the next Asserter. Both methods call the constructor; instance does not set a parent, as it represents the topmost node of the JSON, while chainedInstance does set a parent. The constructor also asserts that the handed-over object is not null.

public static AssertCompanyDto<Asserter<CompanyDto>> instance(final CompanyDto target) {
    return new AssertCompanyDto<>(target, null);
}

public static <TP extends Asserter<?>> AssertCompanyDto<TP> chainedInstance(final CompanyDto target, TP parent) {
    return new AssertCompanyDto<>(target, parent);
}

private AssertCompanyDto(final CompanyDto target, TParent parent) {
    super(target, parent);
    assertThat(target).describedAs("Company must not be null").isNotNull();
}

Standard types like String, Integer, dates, etc. can be asserted directly. And as this is a fluent-interface style of implementation, each method has to return the Asserter itself (this).

public AssertCompanyDto<TParent> companyName(final String expected) {
    assertThat(get().getCompanyName()).describedAs("companyName").isEqualTo(expected);
    return this;
}

Chaining Asserters

When an attribute of the DTO is another DTO, the corresponding method does not expect an object to compare against, but returns the Asserter for the nested DTO. While you can also implement a simple object comparison method that compares the whole DTO at once, I have rarely used those, and hence dropped them in later projects.
Now we see chainedInstance in action. The return type is defined with the current Asserter as the parent. We also directly check that the object is not null, so that we actually have something to assert.

public AssertAddressDto<AssertCompanyDto<TParent>> address() {
    final AddressDto target = get().getAddress();
    assertThat(target).describedAs("Address").isNotNull();
    return AssertAddressDto.chainedInstance(target, this);
}

List Asserters

Some of the embedded DTOs might even be lists. In our example, those are the contacts and the phone numbers of the contacts. Lists need special treatment, because we want to navigate within them: select a specific element (byIndex), e.g. when you know exactly which element should be in first position, search for an entry based on a known attribute (byFilter), or simply check how big the list should be (size).

public class AssertContactDtoList<TParent extends Asserter<?>> extends AbstractChainedAsserter<List<ContactDto>, TParent> {
    public static AssertContactDtoList<Asserter<List<ContactDto>>> instance(final List<ContactDto> target) {
        return new AssertContactDtoList<>(target);
    }

    public static <TP extends Asserter<?>> AssertContactDtoList<TP> chainedInstance(final List<ContactDto> target, TP parent) {
        return new AssertContactDtoList<>(target, parent);
    }

    private AssertContactDtoList(final List<ContactDto> target) {
        super(target, null);
    }

    private AssertContactDtoList(final List<ContactDto> target, TParent parent) {
        super(target, parent);
    }

    public AssertContactDtoList<TParent> size(final int expectedSize) {
        assertThat(get().size()).describedAs("expected list size").isEqualTo(expectedSize);
        return this;
    }

    public AssertContactDto<AssertContactDtoList<TParent>> byIndex(final int index) {
        assertThat(0 <= index && index < get().size()).describedAs("index must be >= 0 and < " + get().size()).isTrue();
        return AssertContactDto.chainedInstance(get().get(index), this);
    }

    public AssertContactDto<AssertContactDtoList<TParent>> byFilter(final Predicate<ContactDto> predicate) {
        final ContactDto by = by(predicate);
        assertThat(by).describedAs("list entry matching given predicate not found").isNotNull();
        return AssertContactDto.chainedInstance(by, this);
    }

    private ContactDto by(final Predicate<ContactDto> predicate) {
        return get().stream().filter(predicate).findFirst().orElse(null);
    }
}

So, when dealing with a list attribute in your DTO, you don’t directly return the Asserter of the DTO, but its ListAsserter first. The following method is part of the AssertCompanyDto class.

public AssertContactDtoList<AssertCompanyDto<TParent>> contacts() {
    final List<ContactDto> target = get().getContacts();
    assertThat(target).describedAs("Contacts must not be null").isNotNull();
    return AssertContactDtoList.chainedInstance(target, this);
}
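
You may have noticed that the fluent example at the top also calls byType(MOBILE) on the phone list. That is simply a domain-specific selector built exactly like byFilter. A minimal sketch, assuming an AssertPhoneDtoList and AssertPhoneDto analogous to the contact classes above, and a PhoneDto with a getType() getter:

public AssertPhoneDto<AssertPhoneDtoList<TParent>> byType(final PhoneType type) {
    // select the first phone entry with the given type, analogous to byFilter
    final PhoneDto match = get().stream()
            .filter(p -> p.getType() == type)
            .findFirst()
            .orElse(null);
    assertThat(match).describedAs("no phone entry found with type " + type).isNotNull();
    return AssertPhoneDto.chainedInstance(match, this);
}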

Extend as you wish

And you can of course extend the Asserters with anything that you need often. One example: often you might not want to compare a String exactly, but only check that it contains a certain substring. So why not add a “contains” method next to your normal exact comparison method.

public AssertCompanyDto<TParent> companyNameContains(final String expected) {
    assertThat(get().getCompanyName()).describedAs("companyName").contains(expected);
    return this;
}
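
Another extension in the same spirit, which also shows up in the generator wish list below, is a timestamp comparison with a tolerance, so that you can assert timestamps you only know roughly. A possible sketch, assuming the CompanyDto exposes createdAt as a java.time.LocalDateTime:

public AssertCompanyDto<TParent> createdAtCloseTo(final LocalDateTime expected, final Duration tolerance) {
    // accept any actual value within +/- tolerance around the expected timestamp
    final Duration diff = Duration.between(expected, get().getCreatedAt()).abs();
    assertThat(diff.compareTo(tolerance) <= 0)
        .describedAs("createdAt expected around " + expected + " (+/- " + tolerance + ") but was " + get().getCreatedAt())
        .isTrue();
    return this;
}

In a test this then reads like .createdAtCloseTo(LocalDateTime.now(), Duration.ofSeconds(5)).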

The Secret to success: write a Generator

Disclaimer: I cannot share any code as of now, as the generators that I have produced so far are part of special code bases that I cannot publish. If interest is big enough though, I will try to re-create one in a public GitHub project. Leave me a comment, so that I know there’s interest.

The code described above is complicated, and producing it manually is error-prone. It’s also stupid work, and nobody likes stupid work, except maybe me sometimes. So the easiest way is to write a generator that analyzes a given DTO and creates the Asserters and ListAsserters auto-magically.

The generator uses the power of reflection, looking into the innards of the class and analyzing what it consists of.

The generator basically needs to do the following things (a rough, illustrative sketch of the scanning step follows after the list):

  • scan the parent class provided as input
    • find all attributes, and in case of Java e.g. also determine the getter
      • find out if the type of the attribute is a standard type (String, Int, Double, etc)
      • find out if the type is a custom type (in our example above, e.g. AddressDto)
      • find out if it’s a list
      • maps also work, but I haven’t found a proper way to automate that yet
    • custom types need to be scanned and analyzed (yes, recursive call)
  • manage the scanned classes, to avoid multi-scanning
    • you should also check which Asserters you already have to avoid unnecessary work
  • after collecting the work that needs to be done, start creating the Asserters and ListAsserters
    • use a template engine to provide the standard content for each class
    • add for each attribute the methods you want to add
      • at least the 1:1 comparison
      • String contains
      • timestamp with a potential diff (so that you can compare timestamps you only know roughly)
      • number formats with smallerThan or largerThan
      • and so on, whatever makes most sense in your project to always provide out-of-the-box
    • write the files to an output folder
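
To make the list above a bit more concrete, here is a minimal, purely illustrative sketch of the scanning step using plain Java reflection. It is not the generator from my projects; the class name and structure are made up, and it only shows how the attribute types could be classified:

import java.lang.reflect.Method;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class AsserterScanner {

    private static final Set<Class<?>> STANDARD_TYPES = Set.of(
            String.class, Integer.class, int.class, Long.class, long.class,
            Double.class, double.class, Boolean.class, boolean.class);

    private final Set<Class<?>> scanned = new HashSet<>();

    public void scan(final Class<?> dtoClass) {
        if (!scanned.add(dtoClass)) {
            return; // already scanned, avoid scanning the same DTO twice
        }
        for (final Method method : dtoClass.getMethods()) {
            // only parameterless getters, and skip getClass() inherited from Object
            if (method.getParameterCount() != 0
                    || !method.getName().startsWith("get")
                    || method.getDeclaringClass() == Object.class) {
                continue;
            }
            final Class<?> type = method.getReturnType();
            if (STANDARD_TYPES.contains(type) || type.isEnum()) {
                // standard type: generate the 1:1 comparison method
                // (plus contains, closeTo, ... variants where they make sense)
            } else if (List.class.isAssignableFrom(type)) {
                // list: generate a ListAsserter; the element type can be read
                // from method.getGenericReturnType()
            } else {
                // custom type (e.g. AddressDto): needs its own Asserter, so recurse
                // (a real generator would also filter out dates, maps, etc. here)
                scan(type);
            }
        }
    }
}

The collected information is then fed into a template engine (e.g. Freemarker) that writes the actual Asserter and ListAsserter classes.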

Moving the files to their target directory is then again a manual process, so that you can check that everything worked as expected. Then you can immediately start writing your tests with beautiful fluent assertions, as shown at the top of this post.

In case any DTO changes, just re-run the generator and it will pick up the additions and removals. Be careful, though, when you have added custom assertions to your Asserters. IDEs with proper version control support should be helpful here to restore those methods easily.

Famous last words

I am very sorry. This was a very quick and dirty description of what the generator needs to do. But going into detail would be way too much for this already way too long blog post.

I hope that you understood the basic concept of the fluent asserters and find them as useful as I do. I don’t want to miss them anymore in any project where the JSONs/DTOs get a little bit bigger. They make the tests so much more readable, writeable and maintainable. They focus the tests on what is really important and don’t clutter them with technical details.

Test Automation – my definition

This post is an explanation of how I see Test Automation. There is nothing new in here, and you have probably read all of it somewhere else. I just want to create a reference in case I need to explain myself again.

Over the past few days I wrote some blog posts on the test automation pyramid, or rather the band-pass filter. And I wrote about trust in testing. This led to some discussion and misunderstanding, hence I decided to write this post.

This post is about my opinion and my take on test automation. Your opinion and experience might differ, and I’m okay with that, because – you know – context matters. You might also prefer different names for the same thing, I still prefer Test Automation.

Disclaimer and differentiation: This article is about test automation, meaning tests that have been automated/scripted/programmed to run by themselves without further user/tester interaction until they detect change (see below for more details on this aspect). Using tools, scripts, automation or whatever else to assist your testing falls, in my understanding, under the term that BossBoss Richard Bradshaw coined as “Automation in Testing”, or basically under “testing”. That is not part of this article!

In my humble opinion, good test automation provides the following aspects:

Protection of value

Automated tests are implemented to check methods, components, workflows, business processes, etc. for expected outcomes that we see as correct and relevant to our product. This ranges from functionality and design/layout to security, performance, and many other quality aspects that we want to protect. We want to protect our product from losing these properties; that’s why we invest in test automation, so that we can quickly confirm that the value is still there.

Documentation

Methods or components, and especially the code, should speak for themselves, but sometimes (or often) they don’t. We can’t grasp every aspect of what a method should be capable of doing just by looking at the code. We can help with that by manifesting the abilities of a method or component in automated tests.

Change detection

Test automation is there to detect change. When the code of the product has been touched and an automated test fails, it has detected change. It has detected a change of a protected value. This could be a change in functionality, flow, layout, or performance. Whether that change is okay (desired change) or not (a potential issue) has to be decided by a human: was there a good reason for the change, or do we need to touch the code again to restore the value of our product?

Safety net

The automated tests provide a safety net. You want to be able to change and extend your product, you want to safely refactor code, and you want to be sure that you don’t lose existing value. Your automated tests are the safety net that lets you move fast with lower risk. (I don’t say that there is no risk, I just say the risks are reduced with a safety net in place.)
Maintenance routines like upgrading 3rd-party dependencies are also protected by the safety net, because changing a small digit in a dependency file can result in severe changes to code that you rely on.

And this is also where the topic of trust comes back to the table. You want to trust your test automation to protect you, at least to a certain degree, so that you have a good feeling when touching existing code.

What Test Automation is NOT

Test automation is not there to find new bugs in existing and untouched code. Test automation is not there to uncover risks, evaluate the quality of your product, tell you if you have implemented the right thing, or decide if it can be deployed or delivered.

Test automation cannot decide on usability, or on proper design or layout. Automation doesn’t do anything by itself. What it does is enable you to re-execute the implemented tests as often as you want, on whatever environment you want, with as much data as you want, as long as you implemented the tests properly (a different story!).

Test automation, when done properly, is there to tell you that all the aspects you decided to protect did not change, to the degree of protection you invested in.

In the process of creating automated tests, many more things can happen and be uncovered. I prefer an exploratory approach to test automation, but that is a subject for maybe another blog post. Once a test is in place and ready for regular re-execution, it is there to protect whatever you decided is worth protecting.

Test Automation can never be your only testing strategy. Test Automation is one part of a bigger puzzle.

In Test We Trust

Have you ever thought about how much trust testing gets in your projects?

Isn’t it strange that in IT projects you have a group of people that you don’t seem to trust? Developers! I mean, why else would you have one or more dedicated test stages to control and double-check the work of the developers? Developers often don’t even trust their own work, so they add tests to it to validate that the code they write is actually doing what it should.

And of course you trust your developers to do a good job, but you don’t trust your system to remain stable. That’s even better! So you create a product that you don’t trust, except when you have tested it thoroughly. And only then do you probably have enough trust to send it to a customer.

Testing is all about trust. We don’t trust our production process without evaluating it every now and then. Take some manufacturing industries: they have many decades, even centuries, more experience than IT. They create processes, tools, and machines to produce many pieces of the same kind, expecting every piece to have the specified properties and to reach the expected quality every time. Depending on the product, they check every piece (like automobiles) or do just rare and random spot checks (for example screws and bolts). They trust their machines, usually to a high degree, to reproduce a product thousands or even millions of times.

We are in IT; we don’t reproduce the same thing over and over again.
Can you imagine doing only some random spot checks for your project, and only checking a handful of criteria each time? If your answer is ‘yes’, then I wonder whether you usually test more and why you still do. If your answer is ‘no’, you belong to what seems to be the standard in IT.

So, what we have established now is that we don’t overly trust the outcome of our development process, except when we have some kind of testing in place.

Have you ever realized how much decision makers rely on their trust in test results? If you are a developer, BA, PO, or a tester who is part of the testing that happens in your delivery process, have you ever felt the trust that is put into your testing? Decision makers rely on your evaluation, or on the evaluation of the test automation you implemented!

Does your project have automated tests? Do you trust the results of your tests? Always? Do you run them after every check-in, every night, at least before every delivery? Do you double-check the results of your automated tests? When you implement new features, when you refactor existing code, when you change existing functionality, you re-run those tests and let them evaluate whether something changed from their expectations. You trust your automated tests to warn you in case something has been messed up: the last code change, a dependency upgrade, a config change, refactoring an old method.

Do you put enough care into your automated tests that you can really rely on them to do what you want them to do? Why do you have that trust in your tests, but probably not in your production code? And I’m not even asking the question “who tests the tests?”

Of course we do some exploratory testing in addition to our test automation. And sure, sometimes this discovers gaps in your test coverage, but most of all exploratory testing is there to cover and uncover additional risks that automation cannot address. So, when you have established exploratory testing in some form alongside your change detection (a.k.a. test automation), you add another layer of trust, or respectively distrust, to some parts of your system.

This is not about distrust, we just want to be sure that it works!

In one of my last consulting gigs for QualityMinds, I had an assignment for a small product company to analyze their unit tests and make suggestions for improvement. The unit test set was barely existent, and many of the tests I checked were hardly doing anything useful. That wasn’t a big problem for the delivery process, as they have a big QA team that does lots of (end-to-end) testing before deployment, and even the developers help out in the last week before delivery.

Yet they have a big problem. They don’t have enough trust in their tests and test coverage to refactor and modernize their existing code base. So my main message for the developers was to start writing unit tests that they trust. If you have to extend a method, change functionality, refactor it, debug it, or fix something in it, you want to have a small test set in place that you can trust! I don’t care about code coverage, path coverage, or whatever metric. The most important metric is that the developers trust the test set enough to make changes, receive fast feedback for their changes, and trust that feedback.

I could add more text here about false negatives, false positives, flaky tests, UI tests, and many more topics that are risks to the trust we put into our change detectors.
There are also risks in this thing that is often referred to as “manual testing”, when it is based on age-old pre-defined test cases or outdated acceptance criteria. Even when you do exploratory testing and use your brain, what are the oracles that you trust? You don’t want to discuss every tiny bit of the software with your colleagues all the time, to decide whether it makes sense or not.

We can only trust our tests if we design them with the necessary reliability. The next time you design and implement an automated test, think about the trust you put into it, the trust that you hand over to this piece of code. Is it reliable enough to detect all the changes you want it to detect? When it detects a change, is that helpful? When you don’t change the underlying code and run the test 1000 times, does it always return the same result? Did you see your test fail when the underlying code changed?

PS: This blog post was inspired by a rejected conference talk proposal that I submitted for TestBash Philly 2017. All that time since then, I wanted to write it up. Now was the time!

Network Console for your test scripts

As an exploratory tester I love my browsers’ dev tools. Especially since I’m currently working on a CMS project, analyzing the available elements and their styles is what I do every day.

When it comes to automating, Selenium Webdriver is capable of locating the elements of the DOM tree, interacting with them, and even reading out their styles for verification and many other actions. But there is a tool in the toolbox that can also come in handy in automation and that Selenium doesn’t cover (afaik): the network console! Selenium itself cannot capture what is going on when the browser fetches its often 100+ files for one site from the web. The Webdriver “only” deals with the results.

For a simple check script I looked into this “problem” and found a very easy solution for Python that I want to show you: the Browsermob Proxy. Just download and unzip the proxy to a folder reachable by your automation script and start coding.

Firing up the proxy:
# Start Proxy Server
from browsermobproxy import Server

server = Server("../browsermob-proxy-2.1.2/bin/browsermob-proxy")
server.start()
proxy = server.create_proxy()

print("Proxy-Port: {}".format(proxy.port))

# Start Webdriver
from selenium import webdriver
co = webdriver.ChromeOptions()
co.add_argument('--proxy-server={host}:{port}'.format(host='localhost', port=proxy.port))

driver = webdriver.Chrome(executable_path = "../drivers/chromedriver", chrome_options=co)

Now you have a running Webdriver that can collect information in HAR format. Let’s call a website and record the network traffic.

# Create HAR and get website
proxy.new_har("testpappy")
driver.get("https://testpappy.wordpress.com")

This HAR entity now holds all the information that your browser’s network console usually provides. So let’s find out which files get loaded, how fast that was, and how big the files are.

# Analyze traffic by e.g. URL, time and size
for ent in proxy.har['log']['entries']:
    print(ent['request']['url'])
    print("{} ms".format(ent['time']))
    print("{} kB".format(round((ent['response']['bodySize'] + ent['response']['headersSize'])/1024, 2)))

Don’t forget to clean up after yourself.
# Shut down Proxy and Webdriver
server.stop()
driver.quit()

The output is now a long list of entries looking something like this:

https://testpappy.wordpress.com/
1103 ms
129.46 kB

Now let your imagination run and think about what you could track and analyze for your project with this simple tool. Maybe some basic performance monitoring?

If you come up with something cool, let us all know by adding it to the comments here. Thanks!

The code shown here uses Python 3.5 and the latest Chrome webdriver as of Dec. 30, 2016.

Test Automation – Am I the only one?

What would the world of testing be without test automation? Well, I assume a bit slower than it is with it.

In this post I don’t want to speak about:

  • There is no such thing as automating “testing” – I know!
  • It’s not about using tools to assist testing
  • It will not be a rant against vendors who declare 100% test automation is possible – no it’s not!
  • Test Automation is for free, besides license costs – no, sir. It’s f**king expensive.
  • Test Automation doesn’t find bugs, it simply detects changes. Humans evaluate if it’s a bug.

So what is this post about? It’s about my personal fight with test automation and the risks I see attached to it, which don’t seem to bother most of the people working with test automation. So, am I worrying too much? You want to know what bothers me? I will explain.

There are lots of people who treat test automation as a silver bullet. “We need test automation to be more efficient!”, “We need test automation to deliver better software!”, and “We need test automation because it’s a best practice!” (Writing the b word still makes me shiver.) If you are in a product/project that doesn’t use test automation, you quickly get the impression that you are working in an outdated way.

My personal story with test automation started a while back when I joined my current company. Implementing test automation was one of the main reasons I was brought in. After nearly 2.5 years there was still nothing, because my team and I were busy with higher-priority stuff, busy with testing everything with minimal tool support. Sounds a bit like the “lumberjack with the dull axe” problem, if you belong to the test-automation-as-silver-bullet faction: no time to sharpen the axe, because there are so many trees to chop. In May 2015 I got the assignment to finally come up with a test automation strategy and a plan for how to implement it. Reading several blogs about the topic, especially Richard Bradshaw’s blog, quickly formed some sort of vision in my head. I know, against a vision, take two aspirins and take a nap. But really, a plan unfolded in my head. And again we had no time at hand to start with it. Some parts of the strategy were started, some proofs of concept were implemented. Three weeks ago I finally got a developer full-time to implement it. Things need time to ripen at my place.

Now I am a test lead with no real hands-on experience in automating tests, and I have a developer who can implement what formed in my head. But with all the time between creating the strategy (a strategy I still fully support as useful and right) and implementing it, I also had enough time to apply some good critical thinking to the topic.
And finally, last week at the Agile Testers’ Meetup Munich the topic was “BDD in Scrum”, and thanks to QualityMinds, who organized the event, we got not only a short introduction to BDD but also the opportunity to do some hands-on exercises.

Why am I not a happy TestPappy now that everything comes together? Here are my main pain points: risks I would like to address and that I need more time to investigate.

Why do people have more trust in test automation than in “manual” testing? It seems that people are skeptical when it comes to letting people do their job and test the right things. But once you have written 100 scripts that run on their own, 3-4 times a day, every day of the week, producing green and red results, it seems that no one actually questions an automated test anymore, once it’s implemented.

Automated checks with “good quality” need well-skilled people. Is your stomach getting ready to turn when you read “good quality”? Good, we are on the same page. The most important quality characteristics an automated check should have are, in my opinion, completeness and accuracy, stability, robustness, trustworthiness, scalability to some degree, maintainability, testability, and some more. That’s a shitload of things to take care of when writing some simple checks. To be honest, our application itself doesn’t meet most of these criteria to a sufficient degree, at least not when it has to stand up to my demands. How could a test team take care of all that when generating hundreds of necessary tests? Now I got lucky and was able to hire a developer to implement the automation framework of my dreams, so I have some support on that front. But once we start implementing the checks themselves, the testers and I need to implement them, or at least help to implement them. How do you take care of the problem that all checks need to have “good quality” to be reliable, not only today but also next week and next year?

How do I know that my script checks the right thing? I’m a very explorative tester. I usually don’t prepare too much in advance what I’m looking for. I let my senses guide me. So when I hand over a certain area to be covered by a script, I have to make decisions about what to cover. At least in my context I am pretty sure that I will miss something when I give that out of my control. How do you handle that?
My first attempt to implement some automated checks 3 years ago was to call every page that I could reach without too much logic and make a screenshot. I would then just quickly walk through the screenshots and check for oddities. But this is more a tool to assist my testing, not something that can run without me or some other human. Simply comparing screenshots and only checking screens with differences to a baseline is not really possible, since the displayed data usually changes often in my context.

What am I missing? My favorite topic of the past 2 years is Ignorance, in this case the unknown unknowns. How do I have to handle that question? I’m sure I miss lots of things with a check, but can I be sure that I don’t miss something important? Once an area is covered with automation, how often do you come back to investigate it, to review whether you missed something, whether you need to extend your checks, or whether you need to redesign the whole approach?

How to trust a green test? There is always the problem of false positives and false negatives. False negatives waste time, but in the end I have double-checked the area and covered more than the script, so I’m okay with handling those. But false positives are mean. They say everything is all right, and, hopefully, they hide in a big list of other positive tests. So for every single check, every single assertion, you need to think about whether there is a way that the result is “true” when it’s really “false”.
It also depends on what the script is covering. If you forgot to cover essential parts, you will miss something. But the check will not tell you; it simply can’t. It’s not programmed to do so.

Automating the API/service level? Richard Bradshaw presented that nice topic of automating things on the right level on Whiteboard Testing. Many tests would be better run on the API/service level. I agree, to a certain degree: as long as there is no business logic implemented on the client (e.g. browser) side that I need to emulate. When I need to mock front-end functionality to effectively interact with an API, I have to re-create the logic based on the same requirements. Do I implement the logic a second time, to also test whether the implementation is correct? Do I somehow have the possibility to re-use the original front-end code, and miss problems in there? Do I trust the test implementation more than the front-end implementation? If so, why not put the test code into production?

And the list of worries could go on for a while, but I’ll stop here.

Please help me! Do I worry too much about automating stuff? I would be very happy to get some comments on my thoughts: are they shared, or maybe already solved? And if they are overblown, I want to know that as well.