
Too much pairing

At our most recent retrospective everyone in our newly formed team commented on how effective and productive our promiscuous pairing was, especially when broken into neat pomodoro sessions. To our surprise, however, there was one comment from a project manager about there being ‘too much pairing’.

After a little probing it turned out there were concerns from business people unfamiliar with pair programming that pairing was unproductive and an expensive waste of developer resource.

I don’t post this intending to pimp pairing, but rather to remind ourselves that sometimes our methods, techy or not, can seem strange to the business and that we should, in an agile team anyway, be worried about what they think. Applying this to other things we do: Does our testing seem lacklustre? Are they confused about specification workshops?

In this case it is simply a matter of educating them, but worth doing so in order to maintain the trust and openness that makes our team effective.

Saying that, it could have just been the pomodoro timers annoying them.

Establishing your team values

At the recent Progressive.Net exchange, Dave Laribee (blog/twitter) gave a great session on the role of architect in the age of lean software development. The talk focused on people, product and process and was filled with nuggets related to each. Establishing your team’s values was one of the exercises we performed around the people topic.

I must admit I initially thought it would be a pointless task but it proved grounding and provided perspective and focus. I took it back to my team and during the last retrospective we spent thirty minutes going through the following process.

Step 1: Shout out some values

With someone writing them on a whiteboard, the whole team should shout out values that are important to them. All members of the team should be present: testers, developers, project manager, business analysts and the business themselves.

Values can be hard to think of at first, but once the ball gets rolling they’ll flow. Think about what’s important to you from your perspective, e.g. your codebase, your team, how your team is perceived by others, what you deliver, your process, etc. They should be words that describe things that are taken for granted but not always remembered, things that should be ingrained in the way you work rather than things you aim to achieve. Examples for us included quality, honesty, fun, energised work, communication, etc.

With everyone together you’ll get an understanding of what’s important to other members of your team and end up with around 15–20 values.

Step 2: Vote for 3 killer values

Each member of the team should now vote for the three values most important to them by putting a dot next to each one on the whiteboard. The number of votes each person gets depends on the size of your team, but 3 votes each should whittle the list down to a few popular ones. Through voting you’ll also find that some values overlap, or one value encompasses two or three others; don’t worry if this happens, just go with the one that you feel is most important.

Step 3: Explain the values with the most votes

There should now be three to five values on the board with the most votes. If not, and the votes are spread out, go through this step anyway as it’ll help surface any assumptions people made, and you can then re-vote.

The person that suggested each of the three most popular values should now rattle off a sentence explaining why they felt that value was important. Other team members should also add their own comments; the more perspectives the better. For example, a tester will value quality in terms of the product being bug-free and meeting what the business asked for. A developer will value a quality code base written with SOLID principles in mind, and a business owner will value quality as it will give their product a competitive edge in the marketplace.

Step 4: Feel a bit more focused

Having picked three values everyone agrees on, you should get these up above your story board for all to see. They’ll be there to remind you about a way of working that you all agreed on; if anyone breaches one you can say “Ah, come on, man!”.

Like I mentioned before, at first I thought this was a pretty pointless exercise. But whether your team is at the forming or the performing stage, it’s definitely worth the 30 minutes. You’ll feel everyone’s on the same playing field, more enthused and a little more focused.

Another post on the web about user stories

A few weeks ago we had a discussion about user stories, tasks and acceptance criteria. The aim was to discuss stories and acceptance criteria, how to format them and agree on a common language and understanding for these artefacts within our organisation.

User stories (and tasks)

At the moment we rely on stories written on post-its to manage our development iterations. The detail in these stories isn’t as high as most would desire, leaving us with ambiguous, one- or two-word descriptions of more complex requirements. As such, tasks exist to try to fill the gap from an implementation perspective.

Mike Cohn defines a story as one thing a customer wants the system to do; others have also added that it should be something that can be scoped, agreed on, estimated and tested against. While our current story writing isn’t impeding us, and one or two words may suffice to communicate a common understanding amongst teams, there are benefits to fleshing out a story in the right format.

A good story should include the user’s perspective, a description of the requirement and the business benefit:

Title: E-mail results page
As a user looking to perform a comparison of different products
I want to email myself the results of my comparison
So that I can view my results offline (and speak to / argue with my missus about the best deal).

This is just one variation of the format. Others include “In order to…, As a…, I want to…”.

And tasks?

For us, tasks evolved from somewhere and refer to implementation, which is bad for something that should be accessible to the business. While it isn’t criminal to think about implementation, especially when estimating complexity, those details are best left on your notepad rather than your story/kanban wall. Just because you’ve created your website and got it talking to your database doesn’t mean your story of making a blog is complete. Tasks don’t let us measure progress or describe testability; stories demonstrating business value and use cases do.

User stories? Check. Burnt all those tasks? Check. What next?

Acceptance criteria, baby! Each story should have a number of acceptance criteria that describe a specific scenario or example. These will guide development, form the basis of your testing and ultimately indicate the release-ability of the story. E.g. done, or done-done, or if you’re still kidding yourself, done-done-done-done (coded-tested-huh?-tested?-broken-fixed-retested-live).

Acceptance criteria look like:

Scenario 1: Clicking the link on the results page to enter my email address
Given I am on the results page
When I click on the link reading ‘E-mail yourself these results’
Then I should see a pop-up box asking for my name and email address

Scenario 2: Sending my results to a valid email address
Given I am on the results page
And I’ve clicked on the link to send the results to myself
When I enter a valid name and email address and hit the send button
Then the email should be sent
And a message should appear telling me so

The Given and Then parts can have extra And statements. The scenarios and examples covered in the criteria should include not only happy paths but also obscure yet plausible routes through the system, e.g. what defines a valid name and email address?

From a development point of view, the behaviour described in well-written acceptance criteria gives us a starting point for BDD. BDD being an outside-in approach, you can drill down further and further to a unit level.
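
As a rough sketch of what that drill-down can end up as, the ‘valid email address’ question from the scenarios above might become a unit-level test along these lines (EmailAddressValidator and its rules are invented purely for illustration; only the NUnit attributes and asserts are the real thing):

using NUnit.Framework;

[TestFixture]
public class EmailAddressValidatorTests
{
    // Driven out from the "valid email address" question in the scenarios above.
    [Test]
    public void Accepts_a_well_formed_address()
    {
        var validator = new EmailAddressValidator();
        Assert.IsTrue(validator.IsValid("dave@example.com"));
    }

    [Test]
    public void Rejects_an_address_with_no_at_sign()
    {
        var validator = new EmailAddressValidator();
        Assert.IsFalse(validator.IsValid("dave.example.com"));
    }
}

// The simplest thing that could possibly work; purely illustrative,
// not the real validation rules.
public class EmailAddressValidator
{
    public bool IsValid(string address)
    {
        return !string.IsNullOrEmpty(address)
            && address.Contains("@")
            && !address.StartsWith("@")
            && !address.EndsWith("@");
    }
}

The acceptance criteria stay as the business-facing check; tests like these sit underneath them and keep the drill-down honest on every build.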

From a testing perspective, each acceptance criterion provides us with a minimum baseline to test for. Baseline because there’s all the other testing to do too, like integration (as you get more and more criteria and stories complete), browser compatibility, usability, exploratory etc. See http://whendoitest.com.

Further discussion:

  • How does this fit in with kanban? (Minimum Marketable Features…)
  • How do you write stories and criteria when thinking about slices?
  • Where should estimation fit in, per story?
  • When do we write the acceptance criteria, in spec workshops? On demand?
  • Can we make the acceptance criteria executable? Mmmmwwwwahaha now you’re talking!

I’ll try to discuss some of the above soon.

When is test automation not test automation?

We’ve spent the last week or so scrutinising our smoke tests. Currently our smoke suite is probably best described as primitive, as the tests drive the UI; this heavy reliance on the front-end means we’re only testing applications and services indirectly. As such these tests will only ever be as fast as the browser, so I’ve been spending time assessing how we can improve them. We know what we want to test, but are questioning how. Naturally, these tests should be automated in one way or another.

Test automation gives us a lot to discuss. Automated vs manual testing, whether you should be automating at all, the maintenance of tests, which tools to use and the return on investment are all topics that have been discussed thoroughly for years.

Having spent more time thinking about the how for these smoke tests, I now cringe when reflecting on past projects that involved automation. Those woeful days in the ‘end of cycle’ testing era when expensive UI automation tools were thought to be the answer to all of life’s problems. Datacentre move? Automate all your manual tests! Regression deck? Automate everything!

Since the wall between testers and developers has been brought down it’s easier to look at something and work out how to test it. That’s a far better position for a tester to be in than hearing about something and then thinking about how to test it using only a specific tool, and only from the front end. These days working with a development team practising behaviour-driven development means, from a testing perspective, that I gain confidence in the application sooner. As a result, further testing efforts don’t overlap with what was covered at a unit level and I’m able to invest more time in ‘ility’ and exploratory testing – the agile testing dream.

So, when is test automation not test automation?

When it’s just UI manipulation and the thing that actually needs to be tested is buried away on another tier.

Although it sounds obvious to test specific logic in isolation and at a unit/integration level, from what I’ve seen on automation projects testers haven’t always had the opportunity to do so. This is a probable cause of the test-it-all-using-the-front-end mentality and the notion that test automation only involves manipulating the front end.

What about those smoke tests?

We’ve pulled tests out and automated them on their rightful tier (same test, different how), with the use of integration tests that run in fractions of a second. All possible with the help of a toolsmith, of course.
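
For a feel of the shape these now take, here’s a minimal sketch of one such check written against NUnit, assuming a hypothetical ComparisonService on the service tier (the class and its Compare method are stand-ins, not our real code):

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ComparisonServiceSmokeTests
{
    // Hits the service tier directly rather than driving a browser,
    // so it runs in a fraction of a second instead of waiting on page loads.
    [Test]
    public void Returns_results_for_a_known_search_term()
    {
        var service = new ComparisonService();
        IList<string> results = service.Compare("broadband");

        Assert.IsNotNull(results);
        Assert.Greater(results.Count, 0, "expected at least one result for a known search term");
    }
}

// Stand-in so the sketch compiles; in practice this would be the
// application's own comparison service, talking to its real dependencies.
public class ComparisonService
{
    public IList<string> Compare(string searchTerm)
    {
        return new List<string> { searchTerm + " deal 1" };
    }
}

Same intent as the old browser-driven check, but with the ‘how’ moved down to the tier that actually owns the behaviour.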

Developers talking about testing: tools and coverage

So, at the end of our last iteration we held a retrospective to specifically address the testing issues that cropped up from a developer perspective, what with us now asking them to test each other’s work.

The main issues that arose were quite straightforward but also enlightening, as they highlighted the differences in perspective between testers and devs: a developer’s “build it” attitude versus a tester’s “break it”.

Tool responsibility

The first issue I liked was where the line is drawn between nUnit unit/integration tests, FitNesse and Selenium tests. This was interesting as it shows we’ve become too focused on tools rather than concentrating on what we actually need to achieve. Both Selenium and FitNesse have been hyped up and used by certain teams, so it’s easy to see how others thought these tools were the be-all and end-all and became confused about what it is they actually do.

We’re combating this by talking about testing without talking about tools. This means getting the developers to think about what it is they want to achieve and then worrying about the “how” later; the “how” will always change, new tools will always emerge. Next will be a talk-through of each tool, what it can and can’t do, and how to use it if it’s the one you want to go for.
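
As a small, hypothetical illustration of “what first, how later”: the behaviours can even be jotted down as pending NUnit tests before anyone picks a tool (the test names and Ignore reasons below are made up), with the decision about nUnit vs FitNesse vs Selenium coming afterwards.

using NUnit.Framework;

[TestFixture]
public class EmailResultsBehaviour
{
    // The "what": behaviour we want confidence in, written down before
    // choosing whether nUnit, FitNesse or Selenium is the right "how".
    [Test, Ignore("pick the right level and tool for this check")]
    public void Invalid_email_addresses_are_rejected_with_a_message() { }

    [Test, Ignore("pick the right level and tool for this check")]
    public void The_sent_email_contains_the_compared_products() { }
}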

Smoke test coverage

The second issue raised was the lack of visibility of the smoke test coverage we run post-deployment. Parts of our product were broken on production during the last couple of releases due to deployment-related issues. Because knowledge of these parts was scattered, checks were done by one person in a test environment but not performed on the staging or production environments. A “Wall of smoke test coverage” was introduced and is now used to visually display what gets checked post-deployment; “If it ain’t on the wall, it won’t get checked”.

It was only after writing out these tests on post-it notes by hand that I realised the duplication within some of them. As a result we refactored the tests and have been able to reduce the time taken to run them by 50%. Score! However, we’ve also found gaps in coverage; tests for these will obviously eat back into that time, but more efficiently. The bigger plan is to make deploying and verifying releases as cheap as possible. Identifying more coverage is one thing, but we’ll also need to think about the best way of automating and executing these tests.

Next?

These issues we discussed in the retrospective are only the tip of the iceberg, although it is good to see such a high level of commitment to testing from our development team. Another front for improvement is the quality of specification obtained from the business, something Gojko Adzic’s “Bridging the communication gap” will no doubt come in handy for.

What we’re currently up to: Baking in quality.

Our team is currently taking the final leap to ensure quality is the responsibility of the whole team, from the business to the developers, and not that of the end-of-cycle “Quality Police”.

Initially, as with most development shops, new features were thrown over the wall into the laps of a QA team. Following this waterfall approach to testing, quality was tested into the build at the end of the production line, and inevitably signed off regardless. With the change in methodology to XP, this evolved into the use of agile testers, integrated into each dev team, testing as they go.

However, this still used a mini-waterfall approach where deliverables were thrown over a smaller, shorter wall albeit earlier and more frequently. It is this wall that we aim to completely remove with test-developers and with a whole-team approach.

My aim is to refine the current process that integrates the business, devs and test-devs to one that advocates even earlier and more frequent verification of development work.

I’ll share the specifics, feedback and experiences of my roadmap to quality on here as it happens.

2009, year of the test-developer

Hello and welcome. I have been meaning to start this blog for a while now as a means to document the whats, whys and hows of the role of a test-developer.

Who are you?

I am a test-developer working within a Scrum and XP, .Net software development team. Previously in my career I was just a tester, then an automation monkey, and now I’ve seen the light. The manner in which we build software has changed and so has the way we need to test it.

What can I expect from this blog?

My aim for this blog is to provide insights and maybe answers, with real-life examples, on:

  • Agile testing.
  • Automation.
  • Championing test-development.
  • Testing from a developer and business person’s perspective.