A simpler approach is to poll customer satisfaction and find out whether the software meets the general requirements of the users it was designed for. A customer will often label even bug-free code as poor quality if it is difficult to install and use, or does not do what the customer thinks it rightly should.
Both these approaches assess quality after the event, and can at best lead to improvements when the next piece of software is designed. The third approach focuses on measuring quality as it is created, by measuring compliance with a good process for doing every task in the chain that ultimately results in the finished software. High compliance creates confidence in the finished artifact. This way, deviations can also be trapped and corrected early and more effectively, often at lower cost.
The best way is to use all three approaches to measure software quality. There can be little disagreement about the quality of software that is built in compliance with a good process, yields few defects in testing, and gets the thumbs-up from customers. Once all three measures are in line, we know exactly what we need to do each time we have to create quality software through quality software processes.
A successful software quality process will address at least the following elements. In the early stages, some elements (like test automation) might be deferred for later implementation.
Quality Controlled Requirements Management - More than half of all software defects can be traced to poorly defined product requirements. To avoid this, requirements must be carefully managed. This means requirements are carefully documented and reviewed for testability. It also implies that there is a method for incorporating both new and changed requirements that emerge once the project is well underway, since almost nobody uses the Waterfall model anymore. One source of good information on requirements management is the book Exploring Requirements: Quality Before Design, by Donald Gause and Gerald Weinberg. Although the book is now over ten years old, it's still an entertaining and effective look at this essential process.
Design reviews - Design reviews are the best way to ensure that design errors are exposed before coding begins. Involving testers in functional design reviews puts them in a position to better understand system internals, which will come in handy during validation testing.
Testing in Development - Code reviews, unit testing and qualification testing are typically performed by the development team. Performing unit testing and qualification testing (sometimes called smoke testing) in Development rather than in the Test group helps promote a sense of quality ownership among developers (though some will resist it at first).
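A developer-owned unit test of the kind described above can be sketched with Python's built-in unittest module. The function under test here, discount_price, is a hypothetical example, not something from the article:

```python
import unittest

def discount_price(price, percent):
    """Apply a percentage discount; reject out-of-range inputs."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountPriceTest(unittest.TestCase):
    def test_typical_discount(self):
        # A representative "happy path" case.
        self.assertEqual(discount_price(100.0, 25), 75.0)

    def test_rejects_bad_percent(self):
        # Developers who write the code are well placed to test its edge cases.
        with self.assertRaises(ValueError):
            discount_price(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Because the developer writes both the function and its tests, defects surface immediately after a change, long before the code reaches the Test group.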
Configuration Management - No organization can have a meaningful QA process without strong code control and configuration management. Code control refers to a checkout/check-in system for source and object code.
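The checkout/check-in discipline mentioned above can be illustrated with a toy model (a minimal sketch; real code control uses a dedicated tool, and the class and method names here are hypothetical):

```python
class CodeControl:
    """Toy checkout/check-in model: one exclusive lock per file."""

    def __init__(self):
        self.locks = {}  # file path -> user currently holding the checkout

    def checkout(self, path, user):
        holder = self.locks.get(path)
        if holder is not None and holder != user:
            # Another user has the file; block concurrent edits.
            raise RuntimeError(f"{path} is checked out by {holder}")
        self.locks[path] = user

    def checkin(self, path, user):
        if self.locks.get(path) != user:
            raise RuntimeError(f"{user} does not hold a checkout on {path}")
        # Releasing the lock makes the file available to others again.
        del self.locks[path]
```

The exclusive-lock model shown here is the classic checkout/check-in style; modern version control systems typically allow concurrent edits and merge the results instead.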
Test Planning - Test planning should begin as soon as requirements are known; as early as possible in the development lifecycle. The master test plan should include risk assessment, selection of test types, entry and exit criteria, resources and schedules. In most cases, requirements management and design reviews (described above) will be important early steps in the master test plan. In fact, the test plan is the document that unites all the processes outlined in this article. The more complete your early planning, the fewer unpleasant surprises you'll encounter as the product gets closer to the planned delivery date.
Test Case and Script Creation - Tests can be created while code is in development, whenever possible, to save time. Each test case and test script developed should relate directly to one or more of the product requirements. A table, sometimes called a traceability matrix, can be created that lists requirements, related design elements, associated tests, expected and (eventually) actual results. Test cases and scripts should be designed for re-use, because many tests will be run over and over as the system evolves, to ensure that a change or bug-fix doesn't break something that worked before (known as regression testing).
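A traceability matrix of the kind described above can be sketched as a simple table in Python. The requirement IDs, design elements, and test names below are hypothetical placeholders:

```python
# Each row links a requirement to its design element, tests, and results.
matrix = [
    {"requirement": "REQ-001", "design": "login form",
     "tests": ["test_valid_login", "test_bad_password"],
     "expected": "access granted only with valid credentials",
     "actual": None},  # filled in once the tests have been run
    {"requirement": "REQ-002", "design": "audit log",
     "tests": [],
     "expected": "every login attempt is recorded",
     "actual": None},
]

def untested_requirements(matrix):
    """Requirements with no associated tests -- gaps in coverage."""
    return [row["requirement"] for row in matrix if not row["tests"]]
```

One immediate payoff of keeping the matrix is that coverage gaps become a query rather than a guess: here, untested_requirements(matrix) flags REQ-002.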
Defect Tracking - An effective defect tracking system allows the organization to record, prioritize and take action on all defects discovered during the product lifecycle.
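The record-and-prioritize cycle can be sketched with a minimal defect record and a triage function (field names and severity levels are illustrative assumptions, not a prescribed schema):

```python
from dataclasses import dataclass

# Lower number = more urgent.
SEVERITY_ORDER = {"critical": 0, "major": 1, "minor": 2}

@dataclass
class Defect:
    id: int
    title: str
    severity: str          # "critical", "major", or "minor"
    status: str = "open"   # "open" or "closed"

def triage(defects):
    """Open defects only, most severe first."""
    open_defects = [d for d in defects if d.status == "open"]
    return sorted(open_defects, key=lambda d: SEVERITY_ORDER[d.severity])
```

Even this small model captures the essentials: every defect is recorded, nothing closed clutters the work queue, and the team always acts on the most severe open item first.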
Test Automation - Regression testing and performance testing are often well suited to automation. The timing of test automation is a difficult decision. If tests are automated too early in an application's lifecycle, frequent changes to the code will break many automated tests, wasting the time used to create them. But the earlier tests are automated, the more time will be saved running and re-running common tests.
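One common way to automate regression testing is a table-driven harness: each saved case pairs an input with the output recorded from a known-good build, and the whole table is re-run after every change. The normalize function below is a hypothetical function under test:

```python
def normalize(name):
    """Collapse whitespace and title-case a person's name."""
    return " ".join(name.split()).title()

# Inputs paired with outputs captured from a known-good build.
REGRESSION_CASES = [
    ("  ada   lovelace ", "Ada Lovelace"),
    ("GRACE HOPPER", "Grace Hopper"),
]

def run_regression(cases):
    """Return (input, expected, actual) for every case that regressed."""
    failures = []
    for given, expected in cases:
        actual = normalize(given)
        if actual != expected:
            failures.append((given, expected, actual))
    return failures
```

Note the maintenance trade-off the article describes: the saved expected values embody yesterday's behavior, so if the code is still changing rapidly, many cases will need updating rather than signaling real regressions.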
Metrics - Gathering data such as defect origin, defect density, and time-to-test per thousand lines of code can be useful in predicting schedules, defect discovery rates, defect removal efficiency, and more. Be careful when selecting metrics: people tend to modify their behavior in response to measurements, so unintended outcomes may result.
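The defect density metric mentioned above is conventionally expressed as defects per thousand lines of code (KLOC), which is a one-line calculation:

```python
def defect_density(defect_count, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defect_count / (lines_of_code / 1000.0)

# Example: 12 defects found in a 6,000-line module.
density = defect_density(12, 6000)  # 2.0 defects per KLOC
```

Tracked release over release, a falling density suggests the process is improving; tracked per module, an unusually high density flags where review and testing effort should concentrate.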
Conclusion
Software quality is a complicated issue, and in some cases it can take a third party to decide. But vendors are starting to get the message that they need to be more accountable for quality in all its forms, from reliability to security to performance. Oracle, for instance, touts its efforts to make quality a larger part of its development efforts. And there's a school of thought among some in the IT industry that quality is more than making sure code is free of programming bugs or security holes. Microsoft chairman and chief software architect Bill Gates says the industry's future hinges on improving quality. He'd like companies to become even more dependent on software, using emerging standards and technology known as Web services to interconnect every interaction until they're completely digital businesses. That won't happen unless quality improves.