Summing it all up

I was fortunate enough to be invited to talk about the evolution of our development process at Agile Testing Days in Berlin and the Next Generation Testing Conference in London recently. The slides from those talks are online on slideshare (although it’s nothing new… merely this and this; just me being a broken record, in fact).

Both conferences were aimed at and attended by testers, and on reflection I think I focused on the benefits of our evolved process rather than portraying the part the tester played in the evolution. So testers, to summarise: if you have a ‘QA’ column on your task wall, you’re doing it wrong. Go pair with a developer, now. Don’t just wait to bat back a list of bugs to them; go help them avoid having to work on the same thing twice. Give your devs those test cases you have in your head; you’re good at coming up with those. Teach your team how to come up with them too, and make a cheat sheet. Discuss them with the team, learn about the business’ appetite for risk and surface those assumptions that everyone on the team is making but not sharing. Help everyone nail the domain. Ask which scenarios are important and forget about the ridiculous edge cases for now. Now build it and get real feedback. Start by writing your developer a failing test*. Remove yourself from your comfort zone one step at a time by doing this stuff; your team will thank you for it.

*I appreciate this is probably where some folks draw the line. Fair play, although I think your days are numbered…

During this whole phase I’ve gone from quality police tester, to back-seat developer tester, to integrated and valuable member of the team helping bake quality in tester to… well, developer. It involved throwing myself out of my comfort zone; however, everyone else on the team had to as well. It meant we could focus on delivering to our customers, the people that use our applications, rather than faffing around ping-ponging work between split dev and test teams.

We pulled off what we did because of everyone on the team. I’ve learnt an incredible amount from an incredible group of people, all of whom were willing to muck in: Jason Neylon, Mike Wagg, Jon Neale, Zubair Khan, Tony To, Mark Holdsworth, Damon Morgan, Baris Balic, Mark Barrett, Tim Ross, Nick Kostelik, Michael Baldry, James Williamson, Justin Ware, Elliot Ritchie, Stephen Lloyd, Paul Harrington, Mark Durrand, and Andrew MacQuarrie.

I work in an organisation that provides us, as developers, with trust and autonomy, and I’m alongside some very, very smart people. As such, I shall stop banging on about this testing role stuff, and hone my skills as a dev. Cheers 🙂

Specifying the what rather than the how with Cucumber

Recently Gojko Adzic hosted the latest London Agile Testing Group session at Skills Matter in Farringdon and talked about Cucumber. I couldn’t make it but caught the video online (http://skillsmatter.com/podcast/agile-testing/using-cucumber-for-bdd-and-agile-acceptance-testing).

Gojko comments on how Cucumber allows us to concentrate on the what rather than the how, when compared to other acceptance testing tools. The same day I had a conversation with a former colleague.

Describing the how

He was telling me about a tester on his project who was writing a Cucumber spec but failing to convey any business/customer value. I wasn’t sure what he meant, so he sent over the feature file and I had a look. Here’s an example below (with the domain changed).

Feature: Customers can purchase products in their basket.
So that I can purchase the products I accumulate in my basket
As the customer
I want to be able to pay for them at a checkout.

Scenario: Filling out valid details at the checkout.
Given I am on the checkout page
When I fill in “Mr” for “Title”
And I fill in “Another” for “Firstname”
And I fill in “Tester” for “Surname”
And I fill in “111” for “House Number”
And I fill in “Buckingham Palace Road” for “Street Name”


You can see where this is going…

The Given, When and Then steps are being used to describe how he wants to test the feature, not what he wants it to do (how it should behave).

Seeing this notation reminded me of my QTP days, when I was on a ‘QA’ team writing automated tests that no one apart from us would use or even understand. These tests would never describe the business rule being verified and were usually made up of long scripts of basic web-page interactions, e.g.:

Browser(“Some website”).Page(“Some page”).WebEdit(“Some box”).Set “Some value”
Browser(“Some website”).Page(“Some page”).Link(“This is bringing back painful memories”).Click


Same story.

Describing the what

The tester could better reflect the business rules and behaviour of the feature by being more expressive, e.g.:

Scenario: Filling out valid details at the checkout.
Given I have a product in my basket
When I go through the checkout with valid details
Then I should be thanked
And I should receive a confirmation e-mail

And on to different scenarios…

Scenario: Filling out invalid details at the checkout.


More descriptions of behaviour…

Imagine that scenario failing as part of your regression effort… Not only would you know what was wrong, but you’d immediately know the impact it has, because the business value is expressed in plain English.
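The “how” doesn’t disappear, of course; it just drops down a level into the step definitions. Purely as an illustration (not the project’s actual code), here’s a rough sketch of what might sit behind those expressive steps using Ruby and Watir; the URLs and element ids are made up:

# features/step_definitions/checkout_steps.rb (illustrative sketch only)
# The page-level detail lives here, keeping the feature file focused on behaviour.
# @browser is assumed to be created in features/support/env.rb.

Given /^I have a product in my basket$/ do
  @browser.goto "http://example.com/products/some-product"   # hypothetical URL
  @browser.button(:id, "add-to-basket").click                # hypothetical element id
end

When /^I go through the checkout with valid details$/ do
  @browser.goto "http://example.com/checkout"
  @browser.text_field(:id, "firstname").set "Another"
  @browser.text_field(:id, "surname").set "Tester"
  @browser.button(:id, "place-order").click
end

Then /^I should be thanked$/ do
  # Plain Ruby assertion to keep the sketch self-contained; RSpec matchers are the usual choice.
  raise "Expected a thank-you message" unless @browser.text.include?("Thank you for your order")
end

If a page or selector changes, only the step definition changes; the business-facing scenario stays intact.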

As Gojko mentions in his talk, Cucumber is “one of the rare tools that goes a long way to stay out of your way, (it) lets you do your work and not have to worry about the tool itself”. Spot on.

From bug hunting to bug prevention

I realise I’ve done nothing to document the evolution of my role as a tester on this blog; instead I skipped to the end and wrote about what we get up to at the moment. As such, here are a few pointers that, looking back, aided my transition from an end-of-cycle bug-hunter to someone supporting the team to prevent bugs and deliver correct, quality, working software. These pointers aren’t in the order that I realised, was given, or understood them, as I took the long way round. They are, however, now presented semi-ordered with the power of hindsight, and in the present tense for authenticity. I hope they provide some assistance if you’re a tester on an agile team looking to better establish your role, your offering, and the assurance that you provide in delivering a quality product to your customers.

Some context on the team

As the tester within a team of four developers I am relied on to perform all the testing for the stories that are developed. The team is developing and releasing in three-week iterations with Scrum; we plan and point-estimate upfront, develop the stories, test them and then release. The developers are practising XP with test-driven development. This diagram represents an iteration; the wall between development and test highlights the current segregation between these activities, similar to a phased approach to development.

Planning

During planning I provide an estimate of the required testing effort and help formulate some acceptance criteria for each task that makes up a story. Some tasks are quite techy and implementation-specific, so it’s often difficult to come up with constructive criteria that will help judge when we’re done, e.g. Task: “Create database for so and so”; Acceptance criteria: “Um, database for so and so is created with relevant fields”. I also don’t question why we’re developing what we are; my understanding is that some business-y person has asked for it, so it’ll get developed and I’ll test it to make sure it does what they want. As such, I make sure I have an understanding of what the business person wants so I can do my best to assure it meets those expectations after it’s developed. I assume that the developers on the team will have the same understanding, but I don’t make an effort to ensure they do.

Testing

I test the output from developers as it hits the ‘QA’ column on our board. I check it meets the expectations and requirements that the business folk have asked of it, and provide information to project managers and developers on the state of it. I use a context-driven approach to exploration in order to learn about what was built and to find defects. I have my toolset that helps me inspect, as well as automate tests that I’ve performed so that I can run them again in future.

TDD and me

Development and testing, in our team, are two separate disciplines and phases. As such, I currently have no notion of what the developers get up to; this includes test-driven development. I haven’t embraced unit tests*; I don’t know what they look like, nor do I know what their coverage is. I am aware they help guide design and provide a safety net when refactoring. I don’t know if there’s an overlap between the unit testing done by developers and my testing. I’m only ever exposed to the user interface; as such, all the testing and automation I do occurs at this level. I treat the user interface as the be-all and end-all of the application; I test business rules through the UI.

* Unit tests in the loosest sense. Behaviour checks are a better name, as Dan North suggests.

So what’s wrong with that?

If I were to compare the testing part of our software development process to manufacturing*, I’d be the one standing at the end of the production line, checking and testing the product to see if what is churned out is good enough for the customer. I test things after they’ve been made, after any damage has possibly been done. But that’s my job, right? I am in my testing comfort zone, and I’m not really exposed to any other means of helping achieve the overall goal of assuring quality. The process is what it is, and I do my small information-gathering, bug-finding part.

While I may be in my comfort zone, to me, developing and testing software in this manner doesn’t feel right. It never has on any of the projects I’ve worked on, whether they were in an agile environment or not. I’ve always asked: “What’s stopping us developing this correctly without these bugs in the first place?” As a tester I take pride in seeing a good quality product roll out the doors to our customers. As a tester I understand the complex compromise and risk that relates quality, time, budget, and context. As a tester I’ve learnt to just make do with buggy, mis-developed software heading my way, and concentrate on doing my job the best I can: finding bugs, gathering information on quality, and stopping bad software from reaching the customer. From my perspective there will always be bugs; they will need to be found, and they will need to be managed.

* I am by no means saying that software development is like manufacturing, I’m only using the analogy.

Shouldn’t agility do more?

Agile adoption has liberated teams from the typical stereotypes related to “traditional” project delivery. We talk to each other, we collaborate with business folk, and we embrace change. But there is still an “us versus them” mentality between developers and testers, and even between testers and the business. In fact, late testing, bug management, and bug fixing are tedious fire-fighting activities that still linger. Perhaps, amongst the momentum of embracing change, we testers now have an opportunity to fix the development process itself once and for all, rather than just the broken software it yields…

So, what’s the alternative? First clue: Quality Control

During the testing phase I spend time exploring the software and exercising it in ways that expose deviations from my understanding of the expected behaviour, generating information for management and bugs for developers. I know what bugs look like, where to look for them and how to find them. By collaborating with the business and having an understanding of the customer, I help prioritise the defects I find and ensure the bad ones are fixed and the “OK” ones go live.

Taking a step back and looking at my role in the whole development process, I think this is Quality Control. The root causes of these defects occur during development or earlier, and I’m doing what I can to control the impact of the defects after they’ve manifested. In contrast, Quality Assurance would involve applying some sort of effort from the very start of development to prevent these defects from occurring in the first place. That way, I could assure that what is developed is what the business expected.

So, am I really performing “QA”?

Onto some queuing theory

Joe Rainsberger’s post on TDD and queuing theory demonstrates the Quality Control point well. Let’s take a look at our current process, which loosely resembles this:

Plan what to develop (everyone) –> Develop (developer) –> Test (finds information on quality + bugs) (tester) –> Bug fix (developer) –> Test (tester) –> Bug fix (developer) –> Test (tester) –> Sign off (tester) –> Deploy (ops)

You can see we have repetition and wasteful re-work. We ping-pong between someone in development and myself until bugs are fixed and the software is considered shippable. I test what’s delivered after the damage is done and bat back a list of things that need fixing. The list of defects I create during testing is pretty much a “to-do” list of fixes for the developer. Not exactly the slickest of processes.

How did we fall into this trap if developers are doing TDD?

Test-driven development will help ensure that we’re developing something correctly. If we’re finding bugs afterwards, then what we need is a way of ensuring we’re developing the correct thing in the first place; a way of being sure the developers, as well as everyone else on the team, have an understanding of what the business needs. We need a way of creating that “to-do” list of bug-fixes upfront, as well as making sure everyone involved with developing the story has a clear understanding of it, rather than letting ourselves assume they do, as I used to.

I can in fact apply the epistemological and cognitive effort that goes into normal after-it’s-built, UI-level testing much earlier, and come up with examples and scenarios that I can discuss with the business and developers. A simple start would be providing the team with the test cases I come up with before they start developing, so they know what I’m looking for.

How can I test at the beginning when there is nothing to test?

Firstly, as a tester I don’t need developed software with a UI to start testing; I can test a requirement and test the proposed solution. I can scope out scenarios, provide examples and edge cases. As Jerry Weinberg puts it, I can ask “does it do what I want it to do, and does it not do what I don’t want it to do?” I can look for bugs upfront in a proposed solution the same way I currently look for bugs afterwards in a built solution.

Airing these scenarios and edge cases with everyone upfront means we all share an understanding of what to expect, everyone knows the potential problems and bugs, and altogether there’s less chance for assumptions to creep in. These scenarios and edge cases form prioritise-able criteria that the business can provide feedback on and use to explicitly tell us what we should develop. Once we’ve developed something we can validate it against these criteria to check if it is acceptable.
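As a purely illustrative example (a hypothetical scenario, borrowing the checkout domain from the Cucumber post above), one of those upfront edge cases might be captured like this:

Scenario: Checking out when a product in my basket has gone out of stock
Given I have a product in my basket
And that product has since gone out of stock
When I go through the checkout with valid details
Then I should be told the product is no longer available
And I should not be charged

Written down like this, the business can tell us whether the scenario matters now or can wait, and the developers know exactly which behaviour to build and verify.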

Prevention with specification workshops before we start developing

Most of the defects I see sprout from different team members making different assumptions about the expected behaviour of what we’re developing. A chunky piece of work vaguely worded on a post-it note can be interpreted in many different ways. That’s not to say that we should create vast documentation, but rather break the work down into even smaller, manageable and understandable chunks, and format them as stories, examples and acceptance criteria.

With specification by example and acceptance criteria (in Gojko Adzic‘s own words) we “ensure that all project participants speak the same language, and build a shared and consistent understanding of the domain. This leads to better specifications, flushes out incorrect assumptions and ensures that functional gaps are discovered before the development starts.” (His book Bridging the Communication Gap is a must-read on the topic).

I use the same cognitive and exploratory testing effort within these workshops that I would use testing an application, except the benefits are greater. I help eradicate assumptions by specifying the expected quality of what we’re building upfront, which reduces the rework and waste that comes from finding and fixing defects at the end. I’ve learnt about the business domains in which I work from the testing I perform end-of-cycle and the bugs I’ve found, but I’ve learnt considerably more about these domains performing these upfront exercises alongside the whole team.

All testers possess the open, exploratory mindset and big-picture perspective needed to extrapolate a requirement, help share an understanding, and remove uncertainty within the team.

How can we know everything upfront?

Pretending to know everything upfront is epistemic arrogance (Nassim Nicholas Taleb, via Michael Bolton). There will be unknown factors, scenarios we simply forgot, and stakeholders we didn’t realise we needed to think about; things that late testing would uncover, at a cost. However, Dan North refers to “deliberate discovery” in the context of estimation. I believe this can help us be aware of our ignorance of a missed stakeholder or requirement, allowing us to better prepare ourselves for its impact.

By scoping out work and breaking it down into smaller stories and small scenarios we can develop them individually, get feedback from them sooner, and learn from and improve them.

Developing

After establishing a shared understanding of a story and extrapolating acceptance criteria that tell us what it should do, I can pair with a developer and we can start developing it. We use the outside-in approach of Behaviour Driven Development to drive the design and development of the code from our acceptance tests down to our “unit tests”/behaviour checks. During development everyone on the team is also constantly exploring the feature in order to improve their understanding of it.

Working closely with developers like this exposes me not only to the testing activity that occurs at code level (stubbing, mocking, etc.) but also to the factors that affect the quality of the code itself (design patterns, object modelling, etc.). These are activities I was never involved with at the end of the production line, and they further my goal of assuring quality throughout the development cycle.

Fast feedback

Having a small slice of work developed to pass our behaviour checks, which in turn make our acceptance tests pass, we are able to get it in front of the business person faster, which means we can get feedback on it quicker than we would have before. A demo of the implemented story provides the team and stakeholders with an opportunity to determine if there are any scenarios or other stakeholders that were missed. Based on a shared understanding of the context of what is being developed, and of the business’ appetite for risk, we can establish if anything that was missed forms part of the acceptance for this story or can be put off until later as part of a separate story.

Providing Quality Assurance

Having attempted to apply true quality assurance rather than quality control, to prevent defects rather than detect them late, and to make sure we’re developing the correct thing upfront rather than developing the wrong thing correctly, we ended up with a process loosely like this:

Decide what to develop (everyone) –> “Specification workshop”: Figure out what we’re developing and how it’ll work (everyone, especially testers) –> Behaviour Driven Development (developers and testers) –> Demo and learn (everyone) –> Deploy (ops).

I’ve written about this process in action as I mentioned above. I’ll delve into more detail in later posts (I seem to say this in every post) about automating execution of acceptance tests and even about why my job title is now ‘developer’ (which, I know, contradicts the URL of this blog).

Slides from “How we build quality software at uSwitch.com”

I’d like to thank everyone who came to Skills Matter for the talk this week. As Gojko mentioned, it was good to see another full house at a London Agile Testing event. The full house didn’t help the nerves, though the few chuckles at my bad jokes helped settle them. I owe a beer to those who laughed.

Slides and links

The slides are available to view on slideshare and, as promised, I’ve put a list of resources (videos, podcasts, etc.) below:

I will also put the cheat sheet we use for gathering acceptance criteria online for download; I usually ping links through my Twitter account.

Contributing to the community

Gojko Adzic, the leader of the London Agile Testing group, also posted a great write-up of the talk; many thanks to him.

I was asked if I would return in 12 months to provide another insight into how we develop after continuously evolving our process. I happily agreed; however, with hindsight, I would have agreed to do so only if others also provide the community with an insight into their agile testing effort and experience. Peer reviews are invaluable, and we should utilise the community we have to help us all improve and learn from each other. Please get in touch with Gojko to get involved.

Awesome team

My talk was a reflection of the process that we’ve evolved as a team. Everyone in the department contributes to the evolution and continuous improvement, not only of the process, but of the environment we work in. Without these people there wouldn’t have been much for me to talk about! The fifth slide from the end of the presentation contains all the team members on Twitter; it’s well worth following them all.

A talk about how we build quality software

I’m delighted to be able to provide an experience report at the next London Agile Testing event, hosted by SkillsMatter in London, about how we build quality software at uSwitch.com.

As well as supplying great training courses, SkillsMatter do a lot to nurture communities led by helpful people like Gojko Adzic. Gojko has kindly let me talk at this month’s event.

My aim is to provide a practical insight into our agile testing effort, talk about how we evolved our process and teams to get where we are today (no QA team, ‘baking’ in quality) and also speculate on what the future might hold for us and for agile testing.

The event is free to attend and takes place at SkillsMatter’s London office on Wednesday 28th October from 6.30pm. More information and registration is available on their website: http://skillsmatter.com/event/agile-testing/how-we-build-quality-software-at-uswitch-com.

I hope you can come along to peer-review our agile testing effort and join us for a beer after.

How we build quality software

I realised after looking at what I’ve blogged so far that I haven’t written much about actual testing, my first discipline. So, today I’ll provide an insight into how we develop and test software in our department. We have adopted the stance that having a role to find defects is wasteful and that effort should be made to ensure such defects are not written in the first place (more on this soon). Thus, we do not have testers. However, our developers know how to test; developing with a focus on quality throughout underpins our process.

I’ve previously mentioned baking in quality and not having developers throw code over a wall to testers. I also mentioned automating the execution of acceptance criteria, written to define features, in order to aid development today and form regression tests tomorrow. Today I’ll give a very brief overview of each stage (most of which may sound obvious to anyone practising agile); I’ll go over each stage in greater detail in later posts (my promise to you).

Naturally, the following outline is within the context of our team, but there are variations to the practices and toolsets that we use. These depend on everything from what we’re working on (web/desktop) to our appetite for risk (safety critical/profit).

Overall: Baking quality in with XP, Scrum and lean principles

My team consists of 3 developers and myself (I am considered a developer, albeit not as good as the others, but with a better fault model). Everyone on the team is concerned with not only assuring quality in what we deliver, but making it visible to ourselves and the business.

We work in an agile manner, iterating through development with extreme programming practices and Behaviour Driven Development. Scrum facilitates our relationship with the business, and we utilise kanban principles and systems thinking to maintain a speedy throughput of high-quality work. This mixture allows us to communicate effectively, develop the correct features properly and continuously deploy our work when it is complete, thus maximising business value. I should also mention that we are fortunate enough to have our business people/customer sat across from us.

From start to end: Starting with a User Story & Acceptance Criteria

I have written about user stories, acceptance criteria and how we improved their clarity across the teams in our department. Our stories describe a small, releasable piece of functionality and are supported with thorough acceptance criteria (the level of thoroughness depends on the context and risk attached). The cognitive effort put into traditional end-of-cycle testing is used upfront to obtain these criteria. By anticipating and exploring what we should deliver before we start coding, we have a development goal to aim towards and know we are done once the criteria are met.

Automating Acceptance Criteria

Our acceptance criteria are our acceptance tests (like a Möbius strip). We automate our acceptance tests in one of the following two ways.

If the feature relates to user interaction through the front end, e.g. pages on a website, we write the acceptance criteria in Cucumber. These are plain-text files in a given-when-then format. At the top level, Cucumber runs Watir, which manipulates the browser. We develop with BDD to make the Cucumber acceptance tests turn green as they execute.
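For anyone unfamiliar with the wiring, a minimal sketch of the Cucumber/Watir support code might look something like this (hypothetical, and the real setup will vary; classic Watir drove Internet Explorer at the time):

# features/support/env.rb (illustrative sketch only)
require 'watir'

Before do
  @browser = Watir::IE.new   # open a browser before each scenario
end

After do
  @browser.close             # tidy up afterwards
end

The step definitions then drive @browser, so the feature files themselves never mention the browser at all.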

If the feature relates to more data-driven/domain logic, then we represent the acceptance tests in FitNesse. FitNesse uses fixtures to interact directly with domain objects. In their simplest form, tests are written in Excel-like tables within a wiki (which helps when collaborating with business people) and turn green when the system under test returns the expected output.
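For illustration, a hypothetical FitNesse-style table for a pricing rule might look something like this in the wiki (the fixture name and columns are invented, and the exact syntax depends on the fixture type used):

|DiscountCalculator|
|basket total|loyalty member|discount?|
|100.00|yes|10.00|
|100.00|no|0.00|
|20.00|yes|0.00|

Each row is an example the business can read and amend; the fixture behind the table feeds the inputs to the domain objects and compares their output against the last column.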

BDD and Pairing With Awesome Developers.

Practising BDD allows us to deliver correctly working, tested software. Firstly, a failing acceptance test scenario drives development. Dropping down into the code, red-green-refactor cycles with NUnit specs help design, document and test internal domain objects. These unit-test cycles are repeated until the initial failing acceptance test that we set out to make pass is passing.

When coding we pair-program (using the Pomodoro Technique). Alistair Cockburn and Laurie Williams’ paper “The Costs and Benefits of Pair Programming” describes the many advantages of pair programming.

No Testers, Testing or Inspection (being lean).

Huh, no testers?! Without testers or a QA team there is no wall over which work can be thrown and the responsibility for quality absolved. We as developers are responsible for delivering quality, so we focus on quality. This department-wide stance is achieved by solid direction from department heads, like our Development Manager.

The inspection typically carried out end-of-cycle only yields bugs that are low severity and of no real impact to the end user. The fallacies of testing hold true: not everything can be tested and not all bugs will be found (that is, if you want to get to market), so we put the right bugs live. Following point 3 of Deming’s 14 Points: by doing more upfront (in the form of acceptance criteria) and focussing on and building in quality, we eliminate the need for “mass inspection”.

Error logging and Customer Experience tracking tools, like TeaLeaf, provide instant feedback on any issues that do happen to creep into production.

Continuous Integration and Continuous Test Runs

An agile testing must-have: we use TeamCity to continuously run our unit tests on each check-in. We also execute our Cucumber acceptance tests on scheduled runs. The status of the builds is visible on dedicated monitors around the office as well as on a nice 6′ projected screen.
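As a rough illustration of the scheduled acceptance runs (the task and profile names here are hypothetical), a Rake task like this is the kind of thing a TeamCity build configuration can invoke:

# Rakefile (illustrative sketch only)
require 'cucumber/rake/task'

# Runs the Cucumber features using a 'ci' profile assumed to be defined in cucumber.yml
Cucumber::Rake::Task.new(:acceptance) do |t|
  t.profile = 'ci'
end

A scheduled TeamCity build can then simply run rake acceptance and publish the result alongside the unit-test builds.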

Altogether

The most radical part of our process is probably the lack of traditional testers. We do not, however, lack testing. We focus on and assure quality in what we develop as we develop it. As a result we have seen the quality of output improve, the rate of throughput increase and our developers thrive as we constantly deliver correct, tested software.

Futurespectives

Over the last few months (whilst I’ve spent nearly no time blogging) I’ve been paying a lot of attention to practices and principles contributing to leaner software development. With principles taken from the Toyota Production System, this encompasses continually delivering valuable, quality software to your customer through a refined process that advocates continuous improvement and the elimination of waste, reducing the concept-to-cash lead time.

A lot about lean and the principles that kanban introduces interests me: the inventory, limiting work in progress and, of course, quality. But I’ll save all that for another day. For now, under the realm of continuous improvement (something we should all be doing, lean or not), I’d like to mention something I heard from Mike Wagg after his time at QCon London back in March. In this post Mike mentions what he picked up listening to Energized Work talk about their development practices. One thing I liked was ‘futurespectives’, and I tried this with our team recently.

After a retrospective, with our developers, development manager, project manager and business person in attendance, we started talking about the next large block of functionality we were to deliver. While doing so I probed the conversation with the question Energized Work recommend:

Pretending we’ve delivered, what issues did we have to deal with and how did we deal with them?

While this sounds like an exercise that would only state the obvious, having the whole team involved in the discussion means that, subtly, everyone can air their assumptions.

The conversation that followed provided an insight into the risks we faced as a team that we would probably only have talked about, with hindsight, at the next retrospective. We were able to anticipate possible problems and come up with goals to help mitigate them, such as getting quicker feedback with on-demand demos, tackling obstacles to testing complex parts of the application, and deploying completed work sooner.

Together with the goals from the retrospective itself, we ended up with a nice set of pointers to help ensure we improved and delivered successfully once again.