There’s an old Russian saying, “Perfect is the enemy of good enough.” Some folks misinterpret it as an observation on the adversarial relationship between mediocrity and excellence, but to Russians it’s about diminishing returns. The Soviet Union would build four or five “good enough” MiG fighters for the cost of one Western fighter, which would never be perfect but would be incrementally more effective in some hypothetical match-up. To the Russian eye, the massive incremental expense wasn’t worth the minimal incremental benefit; better to go for adequate in quantity than pursue an elusive perfection that would necessarily be scarce.
The equivalent in Agile terminology is “barely sufficient”: minimal methodology, the fewest meetings needed for communication, minimal documentation, and solving only today’s problem. In Scrum, you limit scope to what you can reasonably achieve in a two-to-four-week sprint and repeat as necessary, until the incremental benefit of another sprint is less than its cost. At the end of each sprint you must deliver a tested, working release; consequently, quality is defined in the context of that limited scope. A release that passes all unit tests and accommodates all user “stories” is a success. If we need to refactor the code in a subsequent sprint, we’ll solve that problem on that day.
While the Agile approach is arguably superior for development of software products with uncertain requirements and a short life cycle, most corporate IT project goals require more than just an aggregation of incrementally developed components that work reliably without interfering with each other. Much of what gets built for internal consumption is going to have a very long life. Thus, maintainability is a consideration; refactoring code might be a reasonable approach, when the programmer doing the work is the original author and only a few weeks have passed, but if a maintenance programmer will have to go back into someone else’s code three years later, what level of documentation is “barely sufficient?” If the user base might expand exponentially over the life cycle of the product, what scalability considerations should be taken into account during development?
While we’re on the subject of scalability, consider the trend (OK, tsunami) toward virtual servers. Whether in an internal data center or in the cloud, the IT department has rediscovered time-sharing, much as we had back in the days of renting mainframe time from service bureaus. Increasingly, we’re finding cost savings in multiple applications sharing a server running Microsoft technologies or a LAMP stack, letting no CPU cycle go to waste. Both in the development and test cycle and in production, virtual machines are becoming the default solution. But what quality considerations should apply to virtual machines in production: availability, scheduled maintenance windows, user-perceived response times? Decide early on what the quality standards should be, before settling on a virtual architecture.
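To make “decide early” concrete, here is a minimal sketch (in Python, with illustrative availability targets of my own choosing, not any standard) of turning an availability percentage into a monthly downtime budget. It’s a useful sanity check when negotiating quality standards for virtualized production systems: an extra “nine” of availability shrinks the allowed downtime tenfold, which is exactly the perfect-versus-good-enough trade-off in numeric form.

```python
# Sketch: translating an availability target into an allowed-downtime budget.
# The targets below are illustrative assumptions, not contractual standards.

def allowed_downtime_minutes(availability_pct, period_hours=30 * 24):
    """Minutes of permitted downtime per period (default: a 30-day month)
    for a given availability percentage."""
    total_minutes = period_hours * 60
    return total_minutes * (1 - availability_pct / 100)

if __name__ == "__main__":
    for target in (99.0, 99.9, 99.99):
        budget = allowed_downtime_minutes(target)
        print(f"{target}% availability -> {budget:.1f} min/month downtime budget")
```

Run against a 30-day month, 99.0% allows roughly 432 minutes of downtime, 99.9% about 43 minutes, and 99.99% under 5 — a spread that makes it obvious why the quality target should be settled before the architecture is.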
When you think about measuring quality, it’s important to realize that customers have an expectation of what they will experience, whether in a fast food joint or fancy restaurant, expense reimbursement application or payroll system. Understanding those expectations, and helping your customers to keep them realistic, is half of the job; managing the quality of the deliverables, while keeping costs under control, is the other half. The key is distinguishing between perfect and good enough.