A fire hose of programmers, a straw of testers

The programmers write new code so fast, the testers can’t keep up. It’s like shooting a fire hose into a straw. It doesn’t matter how fast the programmers shoot new code out of the fire hose, because the testers have to get it all through the straw before we can say it’s Done and deploy it in production.
You have a problem
The problem is easy enough to recognize. You have a killer programmer team, but the test team can’t keep up. You have a bunch of features that were coded, so you think they are almost done, but the features haven’t been tested yet, so you can’t deploy them to production. You’re frustrated that you can’t get stuff out the door.

And it just gets worse. No matter how close the test team comes to catching up, the programmers keep adding new, untested code. It seems like you can never get Done, and you have no idea when you’ll get Done or how much more it will cost.

Your problem is you
You need to ask why. Make a list of reasons this is happening and think of ways to fix it. Here are some possible causes:

  • You have the wrong mix of players on your team: too many programmers, and not enough testers.
  • The test team is distracted. They spend too much time doing bug triage or preparing for future new features. They attend too many meetings. They spend too much time on customer or production support.
  • The test team has inadequate computing resources. Other teams borrow their test environments, totally blocking the test team. When the testers get their environment back, it takes too long to reconfigure. To make matters worse, components of the test environment are unreliable, with too little disk space or subpar network infrastructure.
  • The test team relies too heavily on manual testing.
  • Your release criteria (your Definition of Done) are so onerous that the team can’t ever be Done.

Fix it
Given that list of problems, the solutions seem obvious:

  • Stop hiring programmers--more programmers won’t help you get Done any faster. Add more testers, or make the programmers play tester.
  • Protect the test team from distractions. Your testers are the critical constraint--don’t let them do anything that doesn’t help them get Done. Other people can represent them in meetings or help with support issues.
  • Get the test team the computing resources it needs, and don’t let anyone else use those resources, for any reason. Stabilize the environment’s infrastructure. Manage the infrastructure yourself, so you can fix problems immediately instead of handing them off for another team to fix.
  • Automate!
  • Review your release criteria. Does every item add value? Does every item protect against low quality? Can you remove some criteria? Can you address the criteria earlier, as part of getting each story or sprint Done?
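The "Automate!" bullet is the easiest one to act on concretely: each check a tester performs by hand becomes an executable test that runs on every build, so the regression pass costs nothing. Here is a minimal sketch in Python; the function `parse_price` and its behavior are hypothetical stand-ins for whatever the team actually ships.

```python
# Minimal sketch: turning one manual regression check into an automated test.
# parse_price is a hypothetical example function, not from the original post.

def parse_price(text):
    """Parse a price string like '$1,234.56' into an integer number of cents."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int((cents or "0").ljust(2, "0")[:2])

def test_parse_price():
    # Each assert replaces one step a tester used to verify by hand.
    assert parse_price("$1,234.56") == 123456
    assert parse_price("0.99") == 99
    assert parse_price("$5") == 500

test_parse_price()
```

Run on every commit (for example, via a continuous-integration job), a suite of tests like this keeps the inventory of untested code near zero instead of letting it pile up for a manual pass at the end.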

What are you doing about it?
Why can’t your test team keep up? What are you doing about the fire hose of new code shooting into the straw of testers?


Oliver Stewart said...

My first inclination would be to wonder why the programmers aren't producing tested code. Are the testers only doing exploratory testing, or are they doing verification testing that could better be done by unit, integration, and acceptance tests as part of a continuous build?

Maybe hire a developer with deep experience in automated testing, or get the developers some training in automated testing techniques. Does done mean running, tested features, or just a bunch of code?

Programmers are generally good at automating things, so you could try shifting the testing load to developers, emphasizing that they should automate wherever they can.

If testers are finding bugs (as opposed to missed requirements/misunderstandings/poor usability), the programmers could probably be doing more/better testing.

Richard said...

Oliver, thanks for writing. This is a team in transition, learning to do Agile well. The problem might be as simple as a mismatch between the number of programmers and testers. The team is simply producing code faster than it can be adequately tested.

The team is taking a number of steps to address the problem. The first fundamental change is to work off the inventory of untested code: no new code until the existing code is tested and ready for production. The second is to stop producing any new code faster than it can be tested. The team will figure out how to do this themselves, but, as you point out, it will involve things like more upfront testing by the programmers, programmers playing the tester role, and more automation.

Anonymous said...

We have testers embedded in our team and we work as a unit, so testing is part of the sprint. In fact, of late, all we're doing is testing (and fixing found bugs).

Test automation is key, both to avoid repeating regression tests by hand and to simplify testing new features as they are added - but even there, there is a huge unsprintable story to automate existing tests.

Richard said...

@Anonymous, I think you're talking about inventory and technical debt. "... of late, all we're doing is testing (and fixing found bugs)" is a way of saying you built up an inventory of untested code, and now you're testing it. "... there is a huge unsprintable story to automate existing tests" is a way of saying you introduced a lot of technical debt by not automating your tests earlier, and now you're reducing it. I'm glad you're taking care of things.

