In Search of Software Quality
By Thibault Dambrine
How many times have you been faced with buggy software in the past year or so, if only for "small fixes" or "minor oversights"? How annoying did you find it? How many times did you realize after the fact -- weeks, months, years later -- that you were the author of code you were not so proud of? Who among us cannot recall at least one or two (maybe more) major blunders or design oversights? Of course, these completely unexpected design flaws usually surface at the most inopportune moments. As programmers, most of us have experienced some variation of this at least once. Most times, hindsight suggests very quickly how things should have been done or tested in the first place. Effectively applying those "how things should be done in the first place" principles up front is what this article is all about.
Testing already-written software is probably the most popular quality-improvement activity for software, but it is also the hardest way to go: the goal of testing is to find problems and errors, and in doing so it can only prove the presence of some errors; it can never prove the absolute absence of all errors. You could argue that testing is synonymous with quality control. The problem with this approach is that it comes only as an afterthought, a post-design activity.
Quality in software is unusual in a significant way: improving quality up front reduces development costs. This principle rests on one key point: the best way to improve quality and productivity is to reduce the time spent reworking code, whether the rework is due to requirements changes or to simple debugging.
Quality software is something we all strive for, but in concrete terms, what does it really mean? Steve McConnell, author of the book "Code Complete", offers the following definition of external characteristics for measuring software quality. These are not an absolute, be-all-and-end-all set of rules, but they provide a good guideline to start with. They are as follows:
- Correctness: The degree to which a system is free from faults in its specifications, design and implementation.
- Usability: The ease with which users can learn and use a system.
- Efficiency: Minimal use of system resources, including memory and execution time.
- Reliability: The ability of a system to perform its required functions under stated conditions whenever required; having a long mean time between failures.
- Integrity: The degree to which a system prevents unauthorized or improper access to its programs and its data. The idea of integrity includes restricting unauthorized user access as well as ensuring that data is accessed and entered properly.
- Accuracy: The degree to which a system, as built, is free from error, especially with respect to quantitative outputs.
- Robustness: The degree to which a system continues to function in the presence of invalid inputs or stressful environmental conditions.
Programmers, on top of the above, must also care about the internal workings of the system, the guts of the software. Here are Steve McConnell's quality guidelines for those:
- Maintainability: The ease with which you can modify a software system to change or add capabilities, improve performance, or correct defects.
- Flexibility: The extent to which you can modify a system for uses or environments other than those for which it was specifically designed.
- Portability: The ease with which you can modify a system to operate in an environment different from the one for which it was designed.
- Reusability: The extent to which, and the ease with which, you can use parts of a system in other systems.
- Testability: The degree to which you can unit-test and system-test a system; the degree to which you can verify that the system meets its requirements.
- Understandability: The ease with which you can grasp a system at both the system/organization and program/routine level.
Internal and external characteristics of a system are hard to separate. Going only by those listed above, we can say that almost all of them are defined in the design phase of the software system.
In order to get maximum bang for the testing buck, all testing activity should be planned in advance. Every foreseeable test scenario should be run, at least on paper, through the system as it is designed. Altering the design, if necessary, to meet the testing objectives is far less expensive than changing the software once it is written. At the design phase, each test scenario can be scripted in advance and later executed with a thoroughness far superior to the more common after-the-code-is-written testing. The goal, ideally, is of course flawless software.
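The idea of scripting a test scenario at the design phase can be sketched in a few lines of Python. The invoice-totaling routine and its numbers below are hypothetical, not from the article; the point is that the scenario's inputs and expected result are written down as executable checks before, or alongside, the code itself:

```python
# A minimal sketch, assuming a hypothetical invoice-totaling routine.
# The scenario below could be scripted at the design phase, before the
# routine is written; once the code exists, the same script verifies it.

def invoice_total_cents(line_items, tax_percent):
    """Hypothetical routine: total line-item prices (in cents), add tax."""
    subtotal = sum(line_items)
    return subtotal * (100 + tax_percent) // 100

# The scenario, scripted in advance: inputs plus the result the design
# promises. 1000 + 500 cents with 10% tax should come to 1650 cents.
scenario = {"line_items": [1000, 500], "tax_percent": 10, "expected": 1650}

result = invoice_total_cents(scenario["line_items"], scenario["tax_percent"])
assert result == scenario["expected"], f"design scenario failed: got {result}"
print("design scenario passed")
```

Kept as plain data, the scenario doubles as design documentation: while no code has been written yet, altering the design to satisfy it costs almost nothing.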
What rules can we plan to follow for testing? Here, once again, from Steve McConnell are a few guidelines:
- Plan for testing at the design phase.
- Make testing systematic: unit testing, routine testing, even line-by-line logic testing; everything should be tested. Ask yourself: has each routine in each program been tested at least once for each situation?
- Once unit testing has been completed successfully, full system testing should be done, with data flowing from the top of the input stream to the bottom of the last output report.
- To increase testing thoroughness, use testing grids, listing the input, the process, and the expected result for each case.
- Make sure the input data is considered thoroughly: bad data, too much data, too little data, or no data at all. Questions such as "Will your arrays overflow under heavy loads?" should be asked, and tested. Of course, you also want to test with proper data.
- In the case of existing systems, regression testing is important. Regression testing is designed to make sure that the software has not taken a step backwards (or "regressed") with the newest changes.
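The testing-grid and bad-data guidelines above can be sketched as a small table-driven test. The `parse_quantity` routine and its limits are hypothetical examples, not from the article; each row of the grid lists a description, the input, and the expected result (or expected error):

```python
# A sketch of a testing grid, assuming a hypothetical parse_quantity
# routine. Rows cover proper data, a boundary case, bad data, and no
# data at all, as the guidelines above suggest.

def parse_quantity(text):
    """Hypothetical routine: parse a positive integer quantity."""
    if text is None or not text.strip():
        raise ValueError("no data")
    value = int(text)          # raises ValueError on non-numeric input
    if value <= 0 or value > 10_000:
        raise ValueError("out of range")
    return value

# The grid: (description, input, expected result or expected error).
grid = [
    ("proper data",       "42",    42),
    ("boundary: maximum", "10000", 10_000),
    ("bad data: letters", "abc",   ValueError),
    ("bad data: zero",    "0",     ValueError),
    ("no data at all",    "",      ValueError),
]

for name, raw, expected in grid:
    try:
        result = parse_quantity(raw)
        assert result == expected, f"{name}: got {result}"
    except ValueError:
        assert expected is ValueError, f"{name}: unexpected error"
print(f"{len(grid)} grid cases passed")
```

Because the grid is plain data, adding a new case is a one-line change, which makes it cheap to keep the test thorough as the design evolves.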
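Regression testing in this spirit can be sketched by comparing current output against a saved baseline. The `report_lines` routine and its records below are hypothetical, chosen only to illustrate the idea:

```python
# A minimal regression-test sketch, assuming a hypothetical report
# formatter. The baseline holds output captured from a version of the
# software known to be good; any difference after a change signals a
# possible regression (a step backwards).

def report_lines(records):
    """Hypothetical routine: format (item, amount) records as report lines."""
    return [f"{item},{amount}" for item, amount in records]

records = [("WIDGET", 120), ("GADGET", 45)]

# Baseline captured before the newest changes were made.
baseline = ["WIDGET,120", "GADGET,45"]

current = report_lines(records)
assert current == baseline, f"possible regression: {current} != {baseline}"
print("no regression detected")
```

In a real system the baseline would be a saved file of known-good output rather than an in-line list, but the comparison step is the same.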
In conclusion, testing is practically a philosophy of life. Planning the tests in advance builds that extra thoroughness right into your software, and it builds consistent, reliable results. Designing the tests along with the software (not after it) is the key to a more reliable software product. Systematic execution of the planned tests will ensure that the results match the original vision.
Beyond the question of testing and methods, the quality of the work we produce as IS professionals reflects on us and on our credibility. Designing and writing quality software is a matter of choice, even in the smallest modifications, the little everyday changes we often make to larger, sometimes patchy, old systems. We can make it a habit to be as thorough as possible, to create software that runs as expected. The rules are not so elusive; software quality can be quantified. The choice is ours, if we want to produce the best.
Steve McConnell is a consultant to software-intensive companies in the Puget Sound area, including Microsoft Corporation. He is the author of "Code Complete", published by Microsoft Press.