Developers talking about testing: tools and coverage

So, at the end of our last iteration we held a retrospective specifically to address the testing issues that had cropped up from a developer perspective, now that we're asking developers to test each other's work.

The main issues that arose were straightforward but also enlightening, as they highlighted the difference in perspective between testers and developers: the developer's "build it" attitude versus the tester's "break it".

Tool responsibility

The first issue I liked was about where the line is drawn between NUnit unit/integration tests, FitNesse tests and Selenium tests. This was interesting because it shows we've become too focused on tools rather than on what we actually need to achieve. Both Selenium and FitNesse have been hyped up and adopted by certain teams, so it's easy to see how others came to think these tools were the be-all and end-all and became confused about what they actually do.

We're combating this by talking about testing without talking about tools. This means getting the developers to think about what it is they want to achieve and then worrying about the "how" later; the "how" will always change, as new tools will always emerge. Next will be a walkthrough of each tool: what it can and can't do, and how to use it if it's the one you choose.
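One way to picture "intent first, tool later" is to record *what* each check verifies separately from *how* it runs. The sketch below is hypothetical and in Python rather than our .NET stack (the names and lambda checks are invented for illustration); in practice the runners would be NUnit fixtures, FitNesse tables or Selenium scripts, and swapping one tool for another wouldn't touch the stated intents.

```python
# Hypothetical sketch: each check records WHAT it verifies; HOW it runs
# is a pluggable callable, so the tool can change without rewriting intents.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Check:
    intent: str                  # what we want to achieve
    runner: Callable[[], bool]   # how it's verified - a tool-specific detail


def run(checks):
    # Execute every check and report pass/fail against its intent.
    return {c.intent: c.runner() for c in checks}


checks = [
    Check("discount rule applies 25% off", lambda: 100 * 0.75 == 75.0),
    Check("order total never goes negative", lambda: max(0, -5) == 0),
]

results = run(checks)
```

The point isn't the code itself but the separation: the list of intents survives any change of tooling.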

Smoke test coverage

The second issue raised was the lack of visibility of smoke test coverage run post-deployment. Parts of our product were broken on production during the last couple of releases due to deployment-related issues. Because knowledge of these parts was scattered, checks were done by one person in a test environment but not on the staging or production environments. A "Wall of smoke test coverage" was introduced to visually display what gets checked post-deployment; "If it ain't on the wall, it won't get checked".

It was only after writing out these tests on post-it notes by hand that I realised the duplication within some of them. As a result we refactored the tests and have been able to reduce the time taken to run them by 50%. Score! However, we've also found gaps in coverage; tests for those will obviously eat back into that saving, but more efficiently. The bigger plan is to make deploying and verifying releases as cheap as possible. Identifying more coverage is one thing, but we'll also need to think about the best way of automating and executing these tests.
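A digital version of the wall could make the duplication and the environment gaps visible at once: define each smoke check exactly once and run the whole registry against every environment. This is a minimal sketch under invented names, with the checks stubbed out rather than making real HTTP calls; it isn't our actual setup, just the shape of the idea.

```python
# Hypothetical sketch of a "wall" of post-deployment smoke checks.
# Each check is registered once (duplicates fail loudly) and the whole
# registry runs against every environment, so nothing gets verified in
# test but forgotten on staging or production.
SMOKE_CHECKS = {}  # check name -> check function


def smoke_check(name):
    """Register a named check; re-registering a name is an error."""
    def register(fn):
        if name in SMOKE_CHECKS:
            raise ValueError(f"duplicate smoke check: {name}")
        SMOKE_CHECKS[name] = fn
        return fn
    return register


@smoke_check("homepage responds")
def check_homepage(env):
    # Stub: a real check would request env["base_url"] and inspect the response.
    return env["base_url"].startswith("https://")


@smoke_check("login page responds")
def check_login(env):
    # Stub for the same reason as above.
    return env["base_url"].startswith("https://")


ENVIRONMENTS = {
    "staging": {"base_url": "https://staging.example.com"},
    "production": {"base_url": "https://www.example.com"},
}


def run_wall():
    # "If it ain't on the wall, it won't get checked" - and everything
    # on the wall runs in every environment, every deployment.
    return {env: {name: fn(cfg) for name, fn in SMOKE_CHECKS.items()}
            for env, cfg in ENVIRONMENTS.items()}


results = run_wall()
```

Registering checks in one place is what surfaced our post-it duplication; here the decorator simply refuses to let a duplicate in.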


The issues we discussed in the retrospective are only the tip of the iceberg, although it's good to see a high level of commitment to testing from our development team. Another front for improvement is the quality of specifications obtained from the business, something Gojko Adzic's "Bridging the Communication Gap" will no doubt come in handy for.

