Much has come out in the past few days about the late, minimal, and unsuccessful testing done on Healthcare.gov before it was put into production. From what I’ve read so far, it was appalling and highly unprofessional. Here is a slightly edited excerpt from a final report I wrote several years ago, reviewing a $500 million IT project that was in serious trouble, on the need for thorough and rigorous quality assurance (including testing). Judge its relevance for yourself.
The fundamental purpose of testing—and, for that matter, of all software quality assurance (QA) deliverables and processes—is to tell you just what you’ve built and whether it does what you think it should do. This is essential, because you can’t inspect a software program the same way you can inspect a house or a car. You can’t touch it, you can’t walk around it, you can’t open the hood or the bedroom door to see what’s inside, you can’t take it out for a spin. There are very few tangible or visible clues to the completeness and reliability of a software system—and so we have to rely on QA activities to tell us how well built the system is.
Furthermore, almost any software system developed nowadays for production is vastly more complex than a house or car—it’s closer to the complexity of a large petrochemical processing and storage facility, with thousands of possible interconnections, states, and processes. We would be (rightly) terrified if, say, Exxon built such a sprawling oil refining complex near our neighborhood and then started up production having done only a bare minimum of inspection, testing, and trial operations before, during, and after construction, offering the explanation that they would wait until after the plant went into production and then handle problems as they cropped up. Yet too often that’s just how large software development projects are run, even though the system in development may well be more complex (in terms of connections, processes, and possible states) than such a petrochemical factory. And while most inadequately tested software systems won’t spew pollutants, poison the neighborhood, catch fire, or explode, they can cripple corporate operations, lose vast sums of money, spark shareholder lawsuits, and open the corporation’s directors and officers to civil and even criminal liability (particularly with the advent of Sarbanes-Oxley).
And that presumes that the system can actually go into production. The software engineering literature and the trade press are replete with well-documented case studies of “software runaways”: large IT re-engineering or development projects that consume tens or hundreds of millions of dollars, or in a few spectacular (government) cases, billions of dollars, over a period of years, before grinding to a halt and being terminated without ever having put a usable, working system into production. So it’s important not to skimp on testing and the other QA-related activities.
That said, testing (and QA in general) seeks to achieve three primary goals for the system in development:
• acceptable reliability, i.e., integrity of information and behavior;
• acceptable performance, i.e., timely completion of behavior;
• acceptable functionality, i.e., the behavior necessary to produce the desired results.
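To make the three goals concrete, here is a minimal sketch (not from the original report) of how each could be expressed as an automated acceptance check. The system under test, a premium-quote calculator, and all function names and thresholds are illustrative assumptions:

```python
import time

# Hypothetical system under test: a premium-quote calculator.
# The pricing logic and the 50 ms budget below are assumptions for illustration.
def quote_premium(age: int, smoker: bool) -> float:
    base = 200.0
    return base * (1.5 if smoker else 1.0) * (1.0 + max(age - 25, 0) * 0.01)

# Reliability: integrity of information and behavior --
# the same inputs must always yield the same output.
assert quote_premium(40, False) == quote_premium(40, False)

# Performance: timely completion of behavior --
# a quote must come back within the (assumed) latency budget.
start = time.perf_counter()
quote_premium(40, False)
assert (time.perf_counter() - start) < 0.05

# Functionality: the behavior necessary for the desired result --
# smokers pay more, and no premium is ever negative.
assert quote_premium(40, True) > quote_premium(40, False)
assert quote_premium(18, False) > 0
```

The point is not these particular checks but that each of the three goals becomes testable only once the organization has written down what “acceptable” means.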
It is up to the organization to define—in documented form—what constitutes acceptable reliability, performance, and functionality for the system to be developed (or under development). This is done in a cyclic process that involves not just the source code and other program elements of the system in development, but also the other deliverables (requirements, architecture/design documents, and so on).
The key to this cycle is the ‘gatekeeping’ function—that is, deciding whether the deliverable being created or modified has sufficient quality to pass along to someone else. When that gatekeeping function is ignored, overridden, weakened, or simply not present, the whole software development process gradually breaks down, because the quality of the system in development becomes more and more constrained by the (poor) quality of its components. One cannot build a safe, enduring house with incomplete blueprints, untested construction techniques, shoddy materials, and a sandy foundation. The same is true of software.
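The gatekeeping decision can be sketched in a few lines of code. This is a hypothetical illustration, not a prescription: the deliverable fields and the coverage threshold are assumptions, and in practice the gate is as much a human sign-off as an automated check:

```python
from dataclasses import dataclass

# A minimal sketch of a 'gatekeeping' check on a deliverable.
# Fields and the 85% coverage threshold are illustrative assumptions.
@dataclass
class Deliverable:
    reviews_passed: bool   # design/code review signed off
    tests_failed: int      # failing test cases against this deliverable
    coverage: float        # fraction of requirements exercised by tests

def gate(d: Deliverable, min_coverage: float = 0.85) -> bool:
    """Return True only if the deliverable may pass to the next phase."""
    if not d.reviews_passed:
        return False
    if d.tests_failed > 0:
        return False
    return d.coverage >= min_coverage

# A deliverable with failing tests or no review is held back,
# rather than passed downstream to constrain everything built on it.
assert gate(Deliverable(True, 0, 0.90)) is True
assert gate(Deliverable(True, 1, 0.90)) is False
assert gate(Deliverable(False, 0, 0.90)) is False
```

The essential design choice is that the gate defaults to “no”: a deliverable moves forward only when there is positive, documented evidence of quality, which is exactly the function that gets lost when the gate is overridden or absent.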
Investment in testing and QA corresponds to the level of quality desired (or required). Obviously, developing space shuttle avionics software requires a higher level of quality than developing a word processor. Testing beyond a certain point provides diminishing returns; one role of the cycle above is to invoke human judgment in deciding that the system under test is good enough to become the system in production. Not all tests need to be done; not all need to be done exhaustively; and at some point, you have to decide to push ahead to the next phase (including into production).
However, in order to control spending on testing, you have to spend sufficient money (and time) to do testing right. In other words, if you don’t have the appropriate test-related deliverables, processes, and qualified people in place, then you will end up spending much more time and money on testing and getting far poorer results than you want or need.
And I believe that is where we pretty much stand with Healthcare.gov. ..bruce w..
About the Author: Webster is Principal and Founder at Bruce F. Webster & Associates, as well as an Adjunct Professor of Computer Science at Brigham Young University. He works with organizations to help them with troubled or failed information technology (IT) projects. He has also worked in several dozen legal cases as a consultant and as a testifying expert, both in the United States and Japan. He can be reached at email@example.com, or you can follow him on Twitter as @bfwebster.