Network Console for your test scripts

As an exploratory tester I love my browser's dev tools. Especially since I'm currently working on a CMS project, analyzing the available elements and their styles is part of my daily work.

When it comes to automating, Selenium WebDriver is capable of locating elements in the DOM tree, interacting with them, and even reading out their styles for verification, among many other actions. But there is a tool in the toolbox that can also come in handy in automation and that Selenium doesn't cover (as far as I know): the network console! Selenium itself cannot capture what is going on while the browser fetches the often 100+ files that make up a single page; the WebDriver "only" deals with the results.

For writing a simple check script I looked into this "problem" and found a very easy solution for Python that I want to show you: BrowserMob Proxy. Just download and unzip the proxy (it's a Java application, so you need a Java runtime installed) to a folder reachable by your automation script and start coding.

Firing up the proxy:
# Start Proxy Server
from browsermobproxy import Server

server = Server("../browsermob-proxy-2.1.2/bin/browsermob-proxy")
server.start()
proxy = server.create_proxy()

print("Proxy-Port: {}".format(proxy.port))

# Start Webdriver
from selenium import webdriver
co = webdriver.ChromeOptions()
co.add_argument('--proxy-server={host}:{port}'.format(host='localhost', port=proxy.port))

driver = webdriver.Chrome(executable_path="../drivers/chromedriver", chrome_options=co)

Now you have a running WebDriver whose traffic is routed through the proxy, and the proxy can record that traffic in HAR (HTTP Archive) format. Let's call a website and record the network traffic.

# Create HAR and get website
proxy.new_har("testpappy")
driver.get("https://testpappy.wordpress.com")

Now this HAR entity holds all the information your browser's network console usually provides. So let's find out which files get loaded, how long that took, and how big the files are.

# Analyze traffic by e.g. URL, time and size
for ent in proxy.har['log']['entries']:
    print(ent['request']['url'])
    print("{} ms".format(ent['time']))
    # HAR uses -1 for unknown sizes, so treat those as 0
    size = max(ent['response']['bodySize'], 0) + max(ent['response']['headersSize'], 0)
    print("{} kB".format(round(size / 1024, 2)))

Don’t forget to clean up after you.
# Shut down Proxy and Webdriver
server.stop()
driver.quit()

The output is now a long list of entries looking something like this:

https://testpappy.wordpress.com/
1103 ms
129.46 kB
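
One more note on the cleanup step above: in a real check script you will want the shutdown to run even when something in between throws an exception. A minimal sketch using try/finally, reusing the server, proxy, and driver objects from above:

# Sketch: make sure proxy and browser are shut down even if a check fails
try:
    proxy.new_har("testpappy")
    driver.get("https://testpappy.wordpress.com")
    # ... analysis and assertions go here ...
finally:
    driver.quit()
    server.stop()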

Now let your imagination run and think of what you could track and analyze for your project with this simple tool. Maybe some basic performance monitoring?
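
To make that less abstract, here is a minimal sketch of what such monitoring could look like. It belongs before the shutdown step above, reuses the proxy.har object, and the 2 MB and 1000 ms thresholds are made-up values for illustration:

# Sketch: basic performance monitoring on top of the recorded HAR
entries = proxy.har['log']['entries']
# HAR uses -1 for unknown sizes, so treat those as 0
total_kb = sum(max(e['response']['bodySize'], 0) for e in entries) / 1024
slow = [e for e in entries if e['time'] > 1000]  # slower than 1 second

print("Total page weight: {} kB in {} requests".format(round(total_kb, 2), len(entries)))
for e in slow:
    print("SLOW: {} ({} ms)".format(e['request']['url'], e['time']))

# Fail the script if the page got too heavy
assert total_kb < 2048, "Page weight exceeded 2 MB"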

If you come up with something cool, let us all know by adding it to the comments here. Thanks!

The code shown here uses Python 3.5 and the latest ChromeDriver as of Dec. 30, 2016.

Test Automation – Am I the only one?

What would the world of testing be without test automation? Well, I assume a bit slower than it is with it.

In this post I don’t want to speak about:

  • There is no such thing as automating “testing” – I know!
  • It’s not about using tools to assist testing
  • It will not be a rant against vendors who declare that 100% test automation is possible – no, it's not!
  • Test Automation is for free, besides license costs – no, sir. It’s f**king expensive.
  • Test Automation doesn’t find bugs, it simply detects changes. Humans evaluate if it’s a bug.

So what is this post about? It's about my personal fight with test automation and the risks I see attached to it, risks that don't seem to bother most of the people working with test automation. So, am I worrying too much? You want to know what bothers me? I will explain.

There are lots of people who treat test automation as a silver bullet. "We need test automation to be more efficient!", "We need test automation to deliver better software!", and "We need test automation because it's a best practice!" (Writing the b word still makes me shiver.) If you are on a product/project that doesn't use test automation, you quickly get the impression that you are working in an outdated way.

My personal story with test automation started a while back when I joined my current company. Implementing test automation was one of the main reasons I was brought in. After nearly 2.5 years there was still nothing, because my team and I were busy with higher-priority stuff. Busy with testing everything with minimal tool support. Sounds a bit like the "lumberjack with the dull axe" problem, if you belong to the test-automation-as-silver-bullet faction: no time to sharpen the axe, because there are so many trees to chop. In May 2015 I got the assignment to finally come up with a test automation strategy and a plan for how to implement it. Reading several blogs about the topic, especially Richard Bradshaw's blog, quickly formed some sort of vision in my head. I know: against a vision, take two aspirins and take a nap. But really, a plan unfolded in my head. And again we had no time at hand to start on it. Some parts of the strategy were started, some proofs of concept were implemented. For three weeks now I have had a developer full-time to finally implement it. Things need time to ripen at my place.

Now I am a test lead with no real hands-on experience in automating tests, and I have a developer who can implement what formed in my head. But with all the time between creating the strategy – a strategy I still fully support as useful and right – and implementing it, I also had enough time to apply some good critical thinking to the topic.
And finally, last week at the Agile Testers' Meetup Munich the topic was "BDD in Scrum", and thanks to QualityMinds, who organized the event, we not only got a short introduction to BDD, we also had the opportunity to do some hands-on exercises.

Why am I not a happy TestPappy now that everything is coming together? Here are my main pain points: risks I would like to address and that I need more time to investigate.

Why do people have more trust in test automation than in "manual" testing? People seem skeptical when it comes to letting testers do their job and test the right things. But once you have written 100 scripts that run on their own, 3–4 times per day, every day of the week, producing green and red results, that skepticism vanishes. It seems to me that no one actually questions an automated test once it's implemented.

Automated checks of "good quality" need well-skilled people. Does your stomach turn when you read "good quality"? Good, we are on the same page. The most important quality characteristics an automated check should have are, in my opinion: completeness and accuracy; stability, robustness, and trustworthiness; scalability to some degree; maintainability and testability; and some more. That's a shitload of things to take care of when writing some simple checks. To be honest, our application itself doesn't meet most of these criteria to a sufficient degree, at least not when it has to stand up to my demands. How could a test team take care of all that while generating hundreds of necessary tests? Now I got lucky and was able to hire a developer to implement the automation framework of my dreams, so I have some support on that front. But once we start implementing the checks themselves, the testers and I need to implement them, or at least help implement them. How do you take care of the problem that all checks need to have "good quality" to be reliable, not only today but also next week or next year?

How do I know that my script checks the right thing? I'm a very explorative tester. I usually don't prepare too much of what I'm looking for; I let my senses guide me. So when I hand over a certain area to be covered by a script, I have to make decisions about what to cover. At least in my context I am pretty sure that I will miss something when I give that out of my control. How do you handle that?
My first attempt to implement some automated checks 3 years ago was to call every page that I could reach without too much logic and take a screenshot of each. I would then just quickly walk through the screenshots and check for oddities. But this is more a tool to assist my testing, not something that can run without me or some other human. Simply comparing screenshots and only inspecting screens that differ from a baseline is not really possible, since the displayed data changes frequently in my context. A minimal sketch of that screenshot walk follows below.
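
For what it's worth, here is roughly what that looked like. The page list and folder name are made up for illustration:

# Sketch: call every reachable page and save a screenshot for human review
import os
from selenium import webdriver

pages = ["/", "/about", "/contact"]  # hypothetical list of pages to visit
base_url = "https://testpappy.wordpress.com"

os.makedirs("screenshots", exist_ok=True)
driver = webdriver.Chrome()
for page in pages:
    driver.get(base_url + page)
    name = page.strip("/") or "home"
    driver.save_screenshot("screenshots/{}.png".format(name))
driver.quit()
# A human then walks through the folder and checks for oddities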

What am I missing? My favorite topic of the past 2 years is ignorance, in this case the unknown unknowns. How do I handle that question? I'm sure I miss lots of things with a check, but can I be sure that I don't miss something important? Once an area is covered by automation, how often do you come back to investigate it? To review whether you missed something, whether you need to extend your checks, or whether you should redesign the whole approach?

How can I trust a green test? There is always the problem of false positives and false negatives. Checks that go red although nothing is broken waste time, but in the end I will have double-checked the area and covered more than the script does, so I'm okay with handling those. The mean ones are the checks that stay green although something is broken. They say everything is all right, and – hopefully – they hide in a big list of other green results. So for every single check, every single assertion, you need to think about whether there is a way for the result to be "true" when it's really "false".
Now it also depends on what the script is covering. If you forgot to cover essential parts, you will miss something. But the check will not tell you; it simply can't. It's not programmed to do so.
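
A hedged illustration of what I mean, using the Selenium API of the time; the selector and page are made up, but the pattern is real:

# Sketch: a green check that can hide a failure
driver.get("https://testpappy.wordpress.com/blog")
posts = driver.find_elements_by_css_selector(".post")  # hypothetical selector
for post in posts:
    assert post.is_displayed()
# If the page breaks and renders zero posts, the loop body never runs,
# nothing is asserted, and the check still reports green.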

Automating at the API/service level? Richard Bradshaw covered this nice topic of automating things at the right level on Whiteboard Testing. Many tests would run better at the API/service level. I agree, to a certain degree: as long as there is no business logic implemented on the client side (e.g. in the browser) that I need to emulate. When I need to mock front-end functionality to effectively interact with an API, I have to re-create that logic based on the same requirements. Do I implement the logic a second time, to also test whether the implementation is correct? Do I somehow have the possibility to re-use the original front-end code, and thereby miss problems in it? Do I trust the test implementation more than the front-end implementation? If so, why not put the test code into production?
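
To illustrate that duplication risk with a made-up example – the endpoint, parameters, and discount rule below are all hypothetical:

# Sketch: an API-level check that re-implements front-end logic
import requests  # assuming the requests library as HTTP client

def expected_price(base_price, customer_type):
    # Re-created from the same requirements the front-end was built from.
    # If my understanding is wrong in the same way, the check passes anyway.
    return base_price * (0.9 if customer_type == "premium" else 1.0)

resp = requests.get("https://shop.example/api/price",
                    params={"article": 42, "customer": "premium"})
assert resp.json()["price"] == expected_price(100.0, "premium")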

And the list of worries could go on for a bit, but I'll stop here.

Please help me! Do I worry too much about automating stuff? I would be very happy to get some comments on my thoughts – whether they are shared, or maybe already solved? And if they are overdrawn, I want to know that as well.