
What is CMM?

CMM = 'Capability Maturity Model', developed by the Software Engineering Institute (SEI). It is a model of five levels of organizational 'maturity' that determine an organization's effectiveness in delivering quality software. It is geared toward large organizations, such as large U.S. Defense Department contractors; however, many of the QA processes involved are appropriate to any organization and, if reasonably applied, can be helpful. Organizations can receive CMM ratings by undergoing assessments by qualified auditors.

Level 1 - characterized by chaos, periodic panics, and heroic efforts by individuals required to successfully complete projects. Few if any processes are in place; successes may not be repeatable.

Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.

Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.

Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.

Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

Perspective on CMM ratings: during 1997-2001, 1,018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at Level 2, 23% at Level 3, 6% at Level 4, and 5% at Level 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at Level 2, 13% at Level 3, 2% at Level 4, and 0.4% at Level 5.) The median size of the assessed organizations was 100 software engineering/maintenance personnel; 32% were U.S. federal contractors or agencies. For those rated at Level 1, the most problematic key process area was Software Quality Assurance.


Popular posts from this blog

What is Installation testing?

Installation testing is done to verify that the hardware and software are installed and configured properly, and it ensures that all system components are exercised during the testing process. Installation testing also covers testing with high volumes of data, error messages, and security testing.
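As a rough illustration of what "installed and configured properly" can mean in practice, here is a minimal post-install verification sketch in Python. The application name "myapp" and the config file path are hypothetical placeholders, not taken from any real product.

# Minimal post-install verification sketch (Python).
# "myapp" and the config path are hypothetical placeholders.
import shutil
import subprocess
from pathlib import Path

def verify_installation() -> list[str]:
    problems = []

    # 1. The executable should be on the PATH after installation.
    exe = shutil.which("myapp")
    if exe is None:
        problems.append("executable 'myapp' not found on PATH")

    # 2. The expected configuration file should exist and be non-empty.
    config = Path.home() / ".myapp" / "config.ini"
    if not config.is_file() or config.stat().st_size == 0:
        problems.append(f"missing or empty config file: {config}")

    # 3. The installed binary should at least start and report a version.
    if exe is not None:
        result = subprocess.run([exe, "--version"],
                                capture_output=True, text=True)
        if result.returncode != 0:
            problems.append("'myapp --version' failed to run")

    return problems

if __name__ == "__main__":
    issues = verify_installation()
    print("Installation OK" if not issues else "\n".join(issues))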

What is Smoke Testing?

Smoke testing is a relatively simple check to see whether the product "smokes" when it runs. It is also sometimes known as ad hoc testing, i.e., testing without a formal test plan. On many projects, smoke testing is carried out in addition to formal testing. When smoke testing is carried out by a skilled tester, it can often find problems that are not caught during regular testing. Sometimes, if testing occurs very early or very late in the software development life cycle, it may be the only kind of testing that can be performed. Smoke testing, by definition, is not exhaustive, but over time you can increase its coverage. A common practice at Microsoft, and some other software companies, is the daily build and smoke test process: every file is compiled, linked, and combined into an executable file every single day, and then the software is smoke tested. Smoke testing minimizes integration risk, reduces the risk of low quality, suppo...
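To make the daily build and smoke test process concrete, here is a sketch of a driver script in Python. The build command ("make all") and the application commands are hypothetical stand-ins for a real project's tooling; the point is the shape of the process, not the specific commands.

# Sketch of a daily build-and-smoke-test driver (Python).
# The build and application commands are hypothetical placeholders.
import subprocess
import sys

def run(step: str, cmd: list[str]) -> bool:
    """Run one step, printing a pass/fail line; returns True on success."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True)
        ok = result.returncode == 0
    except FileNotFoundError:
        ok = False
    print(f"[{'PASS' if ok else 'FAIL'}] {step}")
    return ok

def main() -> int:
    # 1. Compile and link everything into an executable.
    if not run("build", ["make", "all"]):
        return 1  # a broken build fails the smoke test immediately

    # 2. Exercise major functionality end to end, without being exhaustive.
    smoke_checks = [
        ("starts and exits cleanly", ["./myapp", "--version"]),
        ("processes a sample input", ["./myapp", "sample_input.txt"]),
    ]
    failures = [name for name, cmd in smoke_checks if not run(name, cmd)]
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())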

Software Testing - Boundary-value analysis

Boundary-value analysis is a variant and refinement of equivalence partitioning, with two major differences. First, rather than selecting any element in an equivalence class as representative, elements are selected such that each edge of the equivalence class is the subject of a test; boundaries are always a good place to look for defects. Second, rather than focusing exclusively on input conditions, output conditions are also explored by defining output equivalence classes: What can be output? What are the classes of output? What should be created as an input to force a useful set of classes that represent the outputs that ought to be produced? The guidelines for boundary-value analysis are:

· If an input specifies a range of valid values, write test cases for the ends of the range and invalid-input test cases for conditions just beyond the ends. Example: if the input requires a real number in the range 0.0 to 90.0 degrees, then write test cases for 0.0, 90.0, -0.001, and 90.001 (this example is worked as code after this list).

· If an input specifies a numb...
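The range example above translates directly into test cases. Here is a minimal sketch using pytest; validate_angle is a hypothetical stand-in for the function under test, assumed to accept values in the closed range [0.0, 90.0] and reject everything else.

# Boundary-value test cases for the 0.0-90.0 degree range, using pytest.
# validate_angle() is a hypothetical stand-in for the function under test.
import pytest

def validate_angle(degrees: float) -> bool:
    """Stand-in: accepts angles in the closed range [0.0, 90.0]."""
    return 0.0 <= degrees <= 90.0

# Valid boundaries: the exact ends of the range.
@pytest.mark.parametrize("value", [0.0, 90.0])
def test_accepts_range_boundaries(value):
    assert validate_angle(value)

# Invalid boundaries: values just beyond the ends.
@pytest.mark.parametrize("value", [-0.001, 90.001])
def test_rejects_values_just_outside_range(value):
    assert not validate_angle(value)

Note how the four parametrized values are exactly the four cases named in the guideline: the two edges of the valid range plus one invalid value just beyond each edge.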