Is it "obious day at camp stupid?" Maybe, but quality assurance is still expensive, and people (especially stakeholders) sometimes like to forget this fact. In this context I am using QA to refer to the final testing of a build as a whole. Our team does not have dedicated QA staff, so every two week iteration the entire team takes from one to two days to test the build. That is 10%-20% of the total effort of an iteration. Read that line again.
Stakeholders, however, are still (understandably) upset when a bug makes it into the production system. After almost every iteration we end up shipping a patch that fixes something, though usually something minor. I bring it up because that is our strategy for making up for the deficiency in the QA effort: let the users test it.
It sounds horrible, but the best testers of any system are real users trying to use it in the real world. They find bugs ridiculously fast. This might lead you to the idea of having users test a preview of the release. It is a good idea, but it does not work for business applications, because there is usually only a single instance of the production software running at a time.
Unfortunately, there is no real alternative except to spend more money testing each build. Upper management is not going to fork over that money unless there really is a need to be 99% bug-free on delivery day. That is rarely the case unless you are shrink-wrapping. And let's face it, you're not.
If that is not enough to dissuade you, a dedicated QA staff costs more than money: it also adds lag time between the finishing of a build and its delivery (you cannot test a build before it is finished, at least not the kind of QA I am talking about here). The build must be handed to the QA staff, and a list of bugs handed back to the developers, at which point the process repeats. In the meantime, the team has moved on to a new build and is no longer focused on the old one. So builds end up being delivered halfway through an iteration instead of on iteration boundaries.
I have found that if you patch the bugs users do find (the important ones; see my last post) in a reasonable time and with a reasonable attitude ("thanks for reporting that", "must have slipped past our two days of testing"), the users will not mind. Instead they will worship the ground you walk on for reducing QA time and giving them more effort to spend on new development.
Posted by Justin Francis at 8:43 PM 1 comments
Friday, October 12, 2007
Fixing Bugs is not Free
When wandering the halls, I often hear comments from users about little bugs (usually display bugs), and I tell them straight up that, in all likelihood, the bug will never be fixed. The typical response is a gasp, followed by a smug look that means something along the lines of "I could write software better than these amateurs."
I have also told developers who report small bugs that "we'll wait for a user to report that," with similar results. I then have a choice to make: try to convince them I actually know what I am doing, or leave them thinking I'm a buffoon. Here is the argument I make.
Fixing bugs is just like building new features: it is not free. Each bug users want fixed costs effort (points, in our agile methodology). Bugs are usually much cheaper to fix than new features are to build, but the cost is certainly not insignificant.
If bugs cost effort to fix just like anything else, then they must be estimated and scheduled just like everything else. This is where the key lies. When confronted with the choice between refining an existing feature (let alone fixing a bug) and building a new one, stakeholders will almost always opt for the new feature (this leads to a kind of functional but rough monolith, but that is another post). This means that bugs, especially ones that don't really hurt anybody, are the least likely items to get scheduled. And so they don't.
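To make that concrete, here is a minimal sketch of the idea that bugfixes and features are the same kind of backlog item, each carrying a point estimate and competing for the same iteration capacity. The item names, point values, and the plan_iteration helper are all made up for illustration; this is not our actual planning tool.

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    name: str
    points: int       # estimated effort; same scale for bugs and features
    priority: int     # set by stakeholders; lower number = more important
    is_bug: bool = False

def plan_iteration(backlog, capacity):
    """Fill an iteration with the highest-priority items that still fit."""
    scheduled, remaining = [], capacity
    for item in sorted(backlog, key=lambda i: i.priority):
        if item.points <= remaining:
            scheduled.append(item)
            remaining -= item.points
    return scheduled

backlog = [
    BacklogItem("export to spreadsheet", points=5, priority=1),
    BacklogItem("bulk invoice entry", points=8, priority=2),
    BacklogItem("misaligned label on report", points=1, priority=9, is_bug=True),
]
# With 13 points of capacity the two features fill the iteration and the
# display bug never makes the cut -- exactly the outcome described above.
print([item.name for item in plan_iteration(backlog, capacity=13)])
```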
I should make a note about critical bugs. If a critical bug (one that prevents an important feature from working and has no workaround) is found, we fix it immediately (forget iterations), but even these are not free. After the fact, we estimate the fix and then push an appropriate number of items from the current iteration to make room for the bugfix, just as if a stakeholder had scheduled it.
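Sketched the same way (reusing the hypothetical BacklogItem type from above), the "make room after the fact" step amounts to pushing the lowest-priority items out of the current iteration until the emergency fix fits within the original capacity:

```python
def make_room_for_bugfix(iteration_items, bugfix_points, capacity):
    """Return (kept, pushed) after carving out room for an emergency fix."""
    kept = sorted(iteration_items, key=lambda i: i.priority)
    pushed = []
    used = sum(item.points for item in kept)
    # Push items from the bottom of the priority list until the fix fits.
    while kept and used + bugfix_points > capacity:
        victim = kept.pop()        # lowest priority is last after the sort
        pushed.append(victim)
        used -= victim.points
    return kept, pushed

current = [
    BacklogItem("export to spreadsheet", points=5, priority=1),
    BacklogItem("bulk invoice entry", points=8, priority=2),
]
kept, pushed = make_room_for_bugfix(current, bugfix_points=5, capacity=13)
# kept   -> ["export to spreadsheet"]
# pushed -> ["bulk invoice entry"]
```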
Surprisingly, systems I have built using this strategy are not as buggy as one would expect, though that probably has more to do with Test Driven Design than anything else. The point is that if you do things properly, this strategy not only works but works well. We hardly ever schedule bug-fixes at work, and when we do, they are usually almost as large as features.
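For readers unfamiliar with the term, here is a tiny illustration of the test-first habit that Test Driven Design refers to: the test is written before the code it exercises. The invoice_total function and its numbers are invented for this example and are not from the system described above.

```python
def invoice_total(line_items, tax_rate):
    """Sum (quantity, unit price) pairs and apply a flat tax rate."""
    subtotal = sum(qty * price for qty, price in line_items)
    return round(subtotal * (1 + tax_rate), 2)

def test_invoice_total_applies_tax():
    # Written first: it fails until invoice_total exists and rounds correctly.
    assert invoice_total([(2, 9.99), (1, 5.00)], tax_rate=0.05) == 26.23

test_invoice_total_applies_tax()
```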
Once this is explained, I wait a few weeks and then circle back. The person in question is usually impressed with the features we have delivered in that time and is no longer concerned about the bug, which they don't even notice anymore.
Posted by Justin Francis at 6:17 PM 4 comments