From bug hunting to bug prevention

I realise I’ve done nothing to document the evolution of my role as a tester on this blog; instead I skipped to the end and wrote about what we get up to at the moment. So here are a few pointers that, looking back, aided my transition from an end-of-cycle bug-hunter to someone who supports the team in preventing bugs and delivering correct, quality, working software. These pointers aren’t in the order in which I realised, was given, or understood them, as I took the long way round. They are, however, presented semi-ordered with the power of hindsight, and in the present tense for authenticity. I hope they offer some assistance if you’re a tester on an agile team looking to better establish your role, your offering, and the assurance you provide in delivering a quality product to your customers.

Some context on the team

As the tester within a team of four developers, I am relied on to perform all the testing for the stories that are developed. The team develops and releases in three-week iterations with Scrum; we plan and point-estimate upfront, develop the stories, then test and release them. The developers are practising XP with test-driven development. This diagram represents an iteration; the wall between development and test highlights the current segregation between these activities, similar to a phased approach to development.

Planning

During planning I provide an estimate of the required testing effort and help formulate acceptance criteria for each task that makes up a story. Some tasks are quite techy and implementation-specific, so it’s often difficult to come up with constructive criteria that will help judge when we’re done, e.g. Task: “Create database for so and so”; Acceptance criteria: “Um, database for so and so is created with relevant fields”. I also don’t question why we’re developing what we are; my understanding is that some business-y person has asked for it, so it’ll get developed and I’ll test it to make sure it does what they want. As such, I make sure I have an understanding of what the business person wants so I can do my best to assure it meets those expectations after it’s developed. I assume that the developers on the team have the same understanding, but I don’t make an effort to ensure they do.
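
With hindsight, even a techy task like that can usually be given criteria that are checkable rather than circular. A hypothetical illustration (the task, its fields, and its rules are all invented here):

    Task: "Create the customer database"

    Circular criterion:  "Database for customers is created with relevant fields"

    Checkable criteria:  "A saved customer can be retrieved with name, email and
                          postcode intact; a customer without an email address is
                          rejected; two customers may share a name but not an
                          email address"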

Testing

I test the output from developers as it hits the ‘QA’ column on our board. I check that it meets the expectations and requirements the business folk have asked of it, and provide information to project managers and developers on the state of it. I use a context-driven approach to exploration in order to learn about what was built and to find defects. I have my toolset that helps me inspect, as well as automate tests that I’ve performed so that I can run them again in future.
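
The post doesn’t name the toolset, so what follows is only a sketch of the kind of UI-level check I mean, assuming Selenium WebDriver; the URL, element ids, and expected text are all hypothetical:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class LoginSmokeTest {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();
            try {
                // Hypothetical URL and element ids
                driver.get("http://localhost:8080/login");
                driver.findElement(By.id("username")).sendKeys("tester");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("submit")).click();

                // Assert on what the user sees; the UI is all I'm exposed to
                String banner = driver.findElement(By.id("welcome")).getText();
                if (!banner.contains("Welcome")) {
                    throw new AssertionError("Expected welcome banner, got: " + banner);
                }
            } finally {
                driver.quit();
            }
        }
    }

Everything here drives the application from the outside, which is exactly the limitation described in the next section.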

TDD and me

Development and testing, in our team, are two separate disciplines and phases. As such, I currently have no notion of what the developers get up to; this includes test-driven development. I haven’t embraced unit tests*; I don’t know what they look like, nor do I know what their coverage is. I am aware they help guide design and provide a safety net when refactoring. I don’t know if there’s an overlap between the unit testing done by developers and my testing. I’m only ever exposed to the user interface; as such, all the testing and automation I do occurs at this level. I treat the user interface as the be-all and end-all of the application; I test business rules through the UI.

* Unit tests in the loosest sense. Behaviour checks are a better name, as Dan North suggests.
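
For anyone who, like me at the time, has never seen one: a behaviour check is small, fast, and exercises a business rule directly in code rather than through the UI. A minimal JUnit sketch; the DiscountCalculator and its rule are invented, with the implementation inlined so the check actually runs:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DiscountCalculatorTest {

        // Minimal invented production code, inlined to keep the sketch self-contained
        static class DiscountCalculator {
            double priceWithDiscount(double price) {
                return price > 100.00 ? price * 0.9 : price;
            }
        }

        // One behaviour per check: orders over 100 pounds get 10% off
        @Test
        public void ordersOverOneHundredPoundsReceiveTenPercentDiscount() {
            DiscountCalculator calculator = new DiscountCalculator();
            assertEquals(108.00, calculator.priceWithDiscount(120.00), 0.001);
        }
    }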

So what’s wrong with that?

If I were to compare the testing part of our software development process to manufacturing*, I’d be the one standing at the end of the production line, checking and testing the product to see if what is churned out is good enough for the customer. I test things after they’ve been made, after any damage has possibly been done. But that’s my job, right? I am in my testing comfort zone, and I’m not really exposed to any other means of helping achieve the overall goal of assuring quality. The process is what it is, and I do my small information-gathering, bug-finding part.

While I may be in my comfort zone, to me, developing and testing software in this manner doesn’t feel right. It never has on any of the projects I’ve worked on, whether they were in an agile environment or not. I’ve always asked: “What’s stopping us developing this correctly, without these bugs, in the first place?” As a tester I take pride in seeing a good-quality product roll out the doors to our customers. As a tester I understand the complex compromise and risk that relates quality, time, budget, and context. As a tester I’ve learnt to just make do with buggy, mis-developed software heading my way, and to concentrate on doing my job the best I can: finding bugs, gathering information on quality, and stopping bad software from reaching the customer. From my perspective there will always be bugs; they will need to be found, and they will need to be managed.

* I am by no means saying that software development is like manufacturing, I’m only using the analogy.

Shouldn’t agility do more?

Agile adoption has liberated teams from the typical stereotypes of “traditional” project delivery. We talk to each other, we collaborate with business folk, and we embrace change. But there is still an “us versus them” mentality between developers and testers, and even between testers and the business. In fact, late testing, bug management, and bug fixing are tedious fire-fighting activities that still linger. Perhaps, amid the momentum of embracing change, we testers now have an opportunity to fix the development process itself once and for all, rather than just the broken software it yields…

So, what’s the alternative? First clue: Quality Control

During the testing phase I spend time exploring the software, exercising it in ways that cause deviations from my understanding of the expected behaviour, to generate information for management and bugs for developers. I know what bugs look like, where to look for them, and how to find them. By collaborating with the business and understanding the customer, I help prioritise the defects I find, ensuring the bad ones are fixed and the “OK” ones go live.

Taking a step back and looking at my role in the whole development process, I think this is Quality Control. The root causes of these defects occur during development or earlier, and I’m doing what I can to control their impact after they’ve manifested. In contrast, Quality Assurance would involve applying effort from the very start of development to prevent these defects from occurring in the first place; that way, I could assure that what is developed is what the business expected.

So, am I really performing “QA”?

Onto some queuing theory

Joe Rainsberger’s post on TDD and queuing theory demonstrates the Quality Control point well. Let’s take a look at our current process, which loosely resembles this:

Plan what to develop (everyone) –> Develop (developer) –> Test (finds information on quality + bugs) (tester) –> Bug fix (developer) –> Test (tester) –> Bug fix (developer) –> Test (tester) –> Sign off (tester) –> Deploy (ops)

You can see we have repetition and wasteful rework. We ping-pong between a developer and myself until the bugs are fixed and the software is considered shippable. I test what’s delivered after the damage is done and bat back a list of things that need fixing. The list of defects I create during testing is pretty much a “to-do” list of fixes for the developer. Not exactly the slickest of processes.

How did we fall into this trap if developers are doing TDD?

Test-driven development will help ensure that we’re developing something correctly. If we’re finding bugs afterwards, then what we need is a way of ensuring we’re developing the correct thing in the first place; a way of being sure the developers, as well as everyone else on the team, have an understanding of what the business needs. We need a way of creating that “to-do” list of bug-fixes upfront, as well as making sure everyone involved with developing the story has a clear understanding of it, rather than letting ourselves assume they do, as I used to.

I can, in fact, apply the epistemological and cognitive effort that goes into the usual after-it’s-built, UI-level testing much earlier, and come up with examples and scenarios that I can discuss with the business and developers. A simple start would be providing the team with the test cases I come up with before they start developing, so they know what I’m looking for.

How can I test at the beginning when there is nothing to test?

Firstly, as a tester I don’t need developed software with a UI to start testing; I can test a requirement and test the proposed solution. I can scope out scenarios, provide examples and edge cases. As Jerry Weinberg puts it, I can ask “does it do what I want it to do, and does it not do what I don’t want it to do?” I can look for bugs upfront in a proposed solution the same way I currently look for bugs afterwards in a built solution.

Airing these scenarios and edge cases with everyone upfront means we all share an understanding of what to expect, everyone knows the potential problems and bugs, and there’s less chance overall for assumptions to creep in. These scenarios and edge cases form prioritisable criteria that the business can give feedback on and use to tell us explicitly what we should develop. Once we’ve developed something, we can validate it against these criteria to check whether it is acceptable.

Prevention with specification workshops before we start developing

Most of the defects I see sprout from different team members making different assumptions about the expected behaviour of what we’re developing. A chunky piece of work vaguely worded on a post-it note can be interpreted in many different ways. That’s not to say we should create vast documentation, but rather that we should break the work down into smaller, manageable, understandable chunks, formatted as stories, examples, and acceptance criteria.

With specification by example and acceptance criteria (in Gojko Adzic’s own words) we “ensure that all project participants speak the same language, and build a shared and consistent understanding of the domain. This leads to better specifications, flushes out incorrect assumptions and ensures that functional gaps are discovered before the development starts.” (His book Bridging the Communication Gap is a must-read on the topic.)
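
To make that concrete, here is a sketch of what one agreed example might look like once it’s written down as an executable acceptance test. The free-delivery rule, the Order class, and the numbers are all invented for illustration (and the implementation is inlined so the example runs); the point is that the example is precise enough to argue about in a workshop and to automate afterwards:

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;
    import static org.junit.Assert.assertFalse;

    public class FreeDeliveryTest {

        // Minimal invented implementation, inlined so the example is self-contained
        static class Order {
            private int books;
            void addBooks(int n) { books += n; }
            boolean qualifiesForFreeDelivery() { return books >= 5; }
        }

        @Test
        public void fiveOrMoreBooksQualifyForFreeDelivery() {
            Order order = new Order();
            order.addBooks(5);
            assertTrue(order.qualifiesForFreeDelivery());
        }

        @Test
        public void fourBooksDoNotQualifyForFreeDelivery() {
            Order order = new Order();
            order.addBooks(4);
            assertFalse(order.qualifiesForFreeDelivery());
        }
    }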

I use the same cognitive and exploratory testing effort within these workshops that I would use when testing an application, except the benefits are greater. I help eradicate assumptions by specifying the expected quality of what we’re building upfront, which reduces the rework and waste that comes from finding and fixing defects at the end. I’ve learnt about the business domains in which I work from the end-of-cycle testing I perform and the bugs I’ve found, but I’ve learnt considerably more about these domains performing these upfront exercises alongside the whole team.

All testers possess the open, exploratory mindset and big-picture perspective needed to extrapolate a requirement, help share an understanding, and remove uncertainty within the team.

How can we know everything upfront?

Pretending to know everything upfront is epistemic arrogance (Nassim Nicholas Taleb, via Michael Bolton). There will be unknown factors, scenarios we simply forgot, and stakeholders we didn’t realise we needed to think about; things that late testing would uncover, at a cost. However, Dan North’s notion of “deliberate discovery”, raised in the context of estimation, can help here: I believe it makes us aware of our ignorance of a missed stakeholder or requirement, allowing us to better prepare for its impact.

By scoping out work and breaking it down into smaller stories and small scenarios we can develop them individually, get feedback from them sooner, and learn from and improve them.

Developing

After establishing a shared understanding of a story and extrapolating acceptance criteria that tell us what it should do, I can pair with a developer and we can start developing it. We use the outside-in approach of Behaviour-Driven Development to drive the design and development of the code from our acceptance tests down to our “unit tests”/behaviour checks. During development, everyone on the team is also constantly exploring the feature in order to improve their understanding of it.
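
As a rough sketch of what “outside-in” might look like, assuming an invented story about withdrawal limits: we would start from a failing acceptance test at the boundary of the behaviour and let it drive out the smaller behaviour checks beneath it. In practice the tests come first and the Account class would be driven out by them; it’s inlined here only so the sketch runs:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class WithdrawalTests {

        // Invented production code, inlined so the sketch is self-contained
        static class Account {
            private double balance;
            private double dailyLimit;
            Account(double openingBalance) { balance = openingBalance; }
            void setDailyLimit(double limit) { dailyLimit = limit; }
            boolean withdraw(double amount) {
                if (amount > dailyLimit || amount > balance) return false;
                balance -= amount;
                return true;
            }
            double balance() { return balance; }
        }

        // Outer loop: the acceptance test expressing the story's criterion
        @Test
        public void cannotWithdrawMoreThanTheDailyLimit() {
            Account account = new Account(500.00);
            account.setDailyLimit(200.00);
            assertFalse(account.withdraw(250.00));
            assertEquals(500.00, account.balance(), 0.001);
        }

        // Inner loop: a behaviour check driven out while making the outer test pass
        @Test
        public void withdrawalsWithinTheLimitReduceTheBalance() {
            Account account = new Account(500.00);
            account.setDailyLimit(200.00);
            assertTrue(account.withdraw(150.00));
            assertEquals(350.00, account.balance(), 0.001);
        }
    }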

Working closely with developers like this exposes me not only to the testing activity that occurs at code level (stubbing, mocking, etc.) but also to the factors that affect the quality of the code itself (design patterns, object modelling, etc.). These are activities I am never involved with at the end of the production line, and they further aid my goal of assuring quality throughout the development cycle.
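
For instance, stubbing lets a behaviour check isolate the code under test from its collaborators. A hypothetical sketch using Mockito (the post doesn’t name a mocking library, and PaymentGateway and OrderProcessor are invented names):

    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.mockito.Mockito.*;

    public class OrderProcessorTest {

        // Invented collaborators, inlined so the sketch compiles
        interface PaymentGateway { boolean charge(double amount); }
        static class OrderProcessor {
            private final PaymentGateway gateway;
            OrderProcessor(PaymentGateway gateway) { this.gateway = gateway; }
            boolean process(double amount) { return gateway.charge(amount); }
        }

        @Test
        public void orderIsNotProcessedWhenPaymentIsDeclined() {
            // Stub the collaborator: no real payment system needed for this check
            PaymentGateway gateway = mock(PaymentGateway.class);
            when(gateway.charge(49.99)).thenReturn(false);

            OrderProcessor processor = new OrderProcessor(gateway);
            assertFalse(processor.process(49.99));

            // Verify the interaction we care about actually happened
            verify(gateway).charge(49.99);
        }
    }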

Fast feedback

With a small slice of work developed to pass our behaviour checks, which in turn satisfy our acceptance tests, we are able to get it in front of the business person sooner, which means we get feedback on it quicker than we would have before. A demo of the implemented story gives the team and stakeholders an opportunity to determine whether any scenarios or stakeholders were missed. Based on a shared understanding of the context of what is being developed, and of the business’ appetite for risk, we can establish whether anything that was missed forms part of the acceptance for this story or can be deferred to a separate story.

Providing Quality Assurance

Having attempted to apply true quality assurance rather than quality control, some defect prevention rather than late detection, and to make sure we’re developing the correct thing upfront rather than developing the wrong thing correctly, we ended up with a process loosely like this:

Decide what to develop (everyone) –> “Specification workshop”: Figure out what we’re developing and how it’ll work (everyone, especially testers) –> Behaviour Driven Development (developers and testers) –> Demo and learn (everyone) –> Deploy (ops).

I’ve written about this process in action, as I mentioned above. I’ll delve into more detail in later posts (I seem to say this in every post) about automating the execution of acceptance tests, and even about why my job title is now ‘developer’ (which, I know, contradicts the URL of this blog).


Responses to “From bug hunting to bug prevention”


    Dave Nicolette, 11 February 2010 at 4:48 pm

    Kudos to you for thinking about how to improve the end-to-end process, not resting on your laurels for having reached a certain level of agility in your work, and for proactively seeking ways you can add value early in the development cycle. That sort of thinking is the core of agility.

    Of the various points you mentioned, one stands out to me as a possible red flag: The “us versus them” mentality. I wonder if that might be a good place to look for opportunities for improvement? It’s more about culture than about practices.

    Good luck and thanks for sharing your experiences.

    Cheers,
    Dave


