
Showing posts from January, 2008

What is Equivalence Partitioning?

Equivalence partitioning is a systematic process that identifies, on the basis of whatever information is available, a set of interesting classes of input conditions to be tested, where each class is representative of (or covers) a large set of other possible tests. The assumption is that the product under test will behave in much the same way for all members of a class, so the aim is to minimize the number of test cases required to cover the input conditions. There are two distinct steps: first identify the equivalence classes (ECs), then identify the test cases.

(1) Identifying equivalence classes. For each external input: (i) If the input specifies a range of valid values, define one valid EC (within the range) and two invalid ECs (one beyond each end of the range). Example: if the input requires a month in the range 1-12, define one valid EC for months 1 through 12 and two invalid ECs (month < 1 and month > 12).
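As a minimal sketch of the month example above (the `is_valid_month` validator is a hypothetical input check, not from the source), the three equivalence classes reduce to one representative test case each:

```python
def is_valid_month(month):
    """Hypothetical input check: accepts an integer month in the range 1-12."""
    return isinstance(month, int) and 1 <= month <= 12

# One representative test case per equivalence class:
#   valid EC:     1 <= month <= 12
#   invalid EC 1: month < 1
#   invalid EC 2: month > 12
equivalence_classes = {
    "valid (1-12)":   (6,  True),
    "invalid (< 1)":  (0,  False),
    "invalid (> 12)": (13, False),
}

for name, (value, expected) in equivalence_classes.items():
    actual = is_valid_month(value)
    assert actual == expected, f"{name}: got {actual}"
    print(f"{name}: month={value} -> {actual}")
```

Three test cases cover the whole input space here; any other member of a class (say month = 3 or month = 99) is assumed to behave the same as its representative.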

Explain Testing Methodologies

Testing methodologies are sets of rules or guidelines followed to minimize the number of test cases while still providing maximum test coverage. The following methods are commonly used: equivalence partitioning, boundary-value analysis, and error guessing. The following are lesser-used methods: cause-effect graphing, syntax testing, state transition testing, and graph matrices.

What is ANSI?

ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).

What is IEEE?

IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.

What is ISO?

ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a) Q9001-2000 - Quality Management Systems: Requirements; (b) Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary; (c) Q9004-2000 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed.

What is CMM?

CMM = 'Capability Maturity Model', developed by the SEI. It is a model of 5 levels of organizational 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMM ratings by undergoing assessments by qualified auditors. Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes are in place; successes may not be repeatable. Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated. Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance. Level 4 - metrics are used to track productivity, processes, and products; project performance is predictable and quality is consistently high. Level 5 - the focus is on continuous process improvement; the impact of new processes and technologies can be predicted and effectively implemented when required.

What is the 'software life cycle'?

The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.

What is Smoke Testing ?

Smoke testing is a relatively simple check to see whether the product "smokes" when it runs. Smoke testing is also sometimes known as ad hoc testing, i.e. testing without a formal test plan. With many projects, smoke testing is carried out in addition to formal testing. If smoke testing is carried out by a skilled tester, it can often find problems that are not caught during regular testing. Sometimes, if testing occurs very early or very late in the software development life cycle, this can be the only kind of testing that can be performed. Smoke testing, by definition, is not exhaustive, but, over time, you can increase its coverage. A common practice at Microsoft, and some other software companies, is the daily build and smoke test process: every file is compiled, linked, and combined into an executable every single day, and then the software is smoke tested. Smoke testing minimizes integration risk, reduces the risk of low quality, supports easier defect diagnosis, and improves morale.
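A daily-build smoke test can be as simple as "does the freshly built program start and exit cleanly?". A minimal sketch, assuming a hypothetical build pipeline (the command being checked and the 30-second timeout are illustrative assumptions, not from the source):

```python
import subprocess
import sys

def smoke_test(command):
    """Run the built program once and report whether it 'smokes'.

    Sketch for a hypothetical daily-build step: pass if the process
    starts, finishes within 30 seconds, and exits with status 0.
    """
    try:
        result = subprocess.run(command, capture_output=True, timeout=30)
    except (OSError, subprocess.TimeoutExpired) as exc:
        return False, str(exc)          # could not even launch: fails the smoke test
    return result.returncode == 0, result.stdout.decode(errors="replace")

if __name__ == "__main__":
    # Stand-in for the day's build: the Python interpreter itself
    ok, output = smoke_test([sys.executable, "--version"])
    print("SMOKE TEST", "PASSED" if ok else "FAILED")
```

In a real daily-build setup this check would run automatically after the nightly compile/link step, and a failure would block the build from going out to the test team.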

Checklist for conducting Unit Tests

- Is the number of input parameters equal to the number of arguments?
- Do parameter and argument attributes match?
- Do parameter and argument units systems match?
- Is the number of arguments transmitted to called modules equal to the number of parameters?
- Are the attributes of arguments transmitted to called modules equal to the attributes of the parameters?
- Is the units system of arguments transmitted to called modules equal to the units system of the parameters?
- Are the number of attributes and the order of arguments to built-in functions correct?
- Are any references to parameters not associated with the current point of entry?
- Have input-only arguments been altered?
- Are global variable definitions consistent across modules?
- Are constraints passed as arguments?

When a module performs external I/O, additional interface tests must be conducted:
- File attributes correct?
- OPEN/CLOSE statements correct?
- Format specification matches I/O statement?
- Buffer size matches record size?
- Files opened before use?
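Several of the interface checks above (argument counts matching declared parameters, units consistency) can be automated. A small sketch using Python's standard `inspect` module; the `convert_length` module under test is a hypothetical example, not from the source:

```python
import inspect

def convert_length(value_mm, unit):
    """Hypothetical module under test: converts millimetres to another unit."""
    factors = {"cm": 0.1, "m": 0.001}
    return value_mm * factors[unit]

def check_call(func, *args):
    """Checklist-style interface check: is the number of arguments
    transmitted equal to the number of parameters the module declares?"""
    params = inspect.signature(func).parameters
    return len(args) == len(params)

# Interface checks from the list above
assert check_call(convert_length, 250, "cm") is True   # counts match
assert check_call(convert_length, 250) is False        # missing argument

# Units-system check: 250 mm should come back as 25 cm
assert abs(convert_length(250, "cm") - 25.0) < 1e-9
```

Static analyzers and type checkers do these checks more thoroughly, but the sketch shows the idea: each checklist item becomes an assertion that runs with the unit tests.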

How do you create a test plan/design?

Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, the data to be used for testing, and expected results, including database updates, file outputs, and report results. Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application. Test engineers define unit test requirements and unit test cases, and also execute them. It is the test team that, with the assistance of developers and clients, develops test cases and scenarios for integration and system testing. Test scenarios are executed through the use of test procedures or scripts. Test procedures or scripts define a series of steps necessary to perform one or more test scenarios, and include the specific data that will be used for testing the process or transaction.

How do you create a Test Strategy?

The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy, and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria, and risk assessment. Inputs for this process:
- A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
- A description of roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hours and schedules.
- Testing methodology. This is based on known standards.
- Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.
- Requirements that the system cannot provide, e.g. system limitations.

Software Testing - Taxonomy

There is a plethora of testing methods and testing techniques, serving multiple purposes in different life cycle phases. Classified by purpose, software testing can be divided into: correctness testing, performance testing, reliability testing and security testing. Classified by life-cycle phase, software testing can be classified into the following categories: requirements phase testing, design phase testing, program phase testing, evaluating test results, installation phase testing, acceptance testing and maintenance testing. By scope, software testing can be categorized as follows: unit testing, component testing, integration testing, and system testing.

Manual testing procedure

Introduction
There are many potential pitfalls to manual software testing, including:
1. Manual testing is slow and costly.
2. Manual tests do not scale well.
3. Manual testing is not consistent or repeatable.
4. Lack of training.
5. Testing is difficult to manage.
This article will cover five "best practices" recommendations to help avoid the pitfalls associated with manual software testing.
Be Thorough in Test Design and Documentation
In designing the tests, there should be agreement among the business staff, product and project managers, developers, and testers on test coverage. This can be documented as test requirements in a test plan. With this documentation, management can have visibility of the test coverage and know that the right areas are being tested; it then becomes an important tool in managing testing. The goal is to find the easiest way to document as many test cases as possible without having the test effort turn into a documentation effort.

Explain Boundary value testing and Equivalence testing with some examples.

Boundary value testing is a technique to find whether the application is accepting the expected range of values and rejecting values that fall outside the range. Ex: A user ID text box has to accept alphabetic characters (a-z) with a length of 4 to 10 characters. BVA is done like this: min-1 = 3: fail; min = 4: pass; min+1 = 5: pass; max-1 = 9: pass; max = 10: pass; max+1 = 11: fail. Likewise we check the corner values and conclude whether the application is accepting the correct range of values. Equivalence testing is normally used to check the type of the object. Ex: A user ID text box has to accept alphabetic characters (a-z) with a length of 4 to 10 characters. In the positive condition we test the object by giving alphabets, i.e. a-z characters only, and check whether the object accepts the value: it should pass. In the negative condition we test by giving something other than lowercase alphabets (a-z), i.e. A-Z, 0-9, blank, etc.: it should fail.
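The user ID example can be sketched directly in code. The `accepts_user_id` validator below is a hypothetical implementation of the stated rule (4-10 lowercase a-z characters), written only so the boundary and equivalence cases have something to run against:

```python
def accepts_user_id(user_id):
    """Hypothetical validator: 4-10 lowercase alphabetic characters (a-z)."""
    return user_id.isalpha() and user_id.islower() and 4 <= len(user_id) <= 10

# Boundary-value cases around the length limits 4 and 10
boundary_cases = {
    "abc":    False,  # min-1 = 3  -> fail
    "abcd":   True,   # min   = 4  -> pass
    "abcde":  True,   # min+1 = 5  -> pass
    "a" * 9:  True,   # max-1 = 9  -> pass
    "a" * 10: True,   # max   = 10 -> pass
    "a" * 11: False,  # max+1 = 11 -> fail
}
for value, expected in boundary_cases.items():
    assert accepts_user_id(value) == expected, value

# Equivalence cases on character type (valid class: a-z; invalid: everything else)
assert accepts_user_id("abcd") is True    # lowercase alphabets -> pass
assert accepts_user_id("ABCD") is False   # uppercase           -> fail
assert accepts_user_id("1234") is False   # digits              -> fail
```

Note how the two techniques complement each other: boundary values probe the edges of the 4-10 length range, while equivalence classes probe the kind of characters accepted.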

What is Security testing?

Security testing is a process used to check whether the security features of a system are implemented as designed and whether they are adequate for the proposed application environment. The process involves functional testing, penetration testing, and verification.

What is Installation testing?

Installation testing is done to verify that the hardware and software are installed and configured properly, and ensures that all system components are exercised during the testing process. Installation testing also covers testing with high volumes of data, error messages, as well as security testing.

What is AUT ?

AUT is nothing but "Application Under Test". After the design and coding phases of the software development life cycle, the application comes in for testing; at that stage it is referred to as the Application Under Test.

What is Defect Leakage ?

Defect leakage occurs at the customer or end-user side after the application has been delivered. If, after the release of the application to the client, the end user finds any defects while using the application, it is called defect leakage. Defect leakage is also called bug leak.

What are the contents in an effective Bug report?

Project, Subject, Description, Summary, Detected By (Name of the Tester), Assigned To (Name of the Developer who is supposed to fix the Bug), Test Lead (Name), Detected in Version, Closed in Version, Date Detected, Expected Date of Closure, Actual Date of Closure, Priority (Medium, Low, High, Urgent), Severity (Ranges from 1 to 5), Status, Bug ID, Attachment, Test Case Failed (the test case that failed for the Bug)

What is Error guessing and Error seeding ?

Error Guessing is a test case design technique where the tester guesses what faults might occur and designs tests to represent them. Error Seeding is the process of intentionally adding known faults to a program in order to monitor the rate of detection and removal, and also to estimate the number of faults remaining in the program.
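The "estimate the number of faults remaining" part of error seeding is usually done with a simple proportion (a common textbook formula, sometimes attributed to Mills; the formula and the figures below are illustrative assumptions, not from the source): if testing finds s of the S seeded faults and n real faults, the total real fault count is estimated as N ≈ n × S / s.

```python
def estimate_total_faults(seeded, seeded_found, real_found):
    """Seeding-based estimate of total and remaining real faults.

    Assumes seeded and real faults are equally easy to find, so the
    detection ratio for seeded faults applies to real ones too.
    """
    if seeded_found == 0:
        raise ValueError("no seeded faults detected; cannot estimate")
    total = real_found * seeded / seeded_found
    return total, total - real_found

# Illustrative numbers: seed 20 faults, testing finds 15 of them plus 30 real faults
total, remaining = estimate_total_faults(seeded=20, seeded_found=15, real_found=30)
print(f"estimated total real faults: {total:.0f}, remaining: {remaining:.0f}")
# 30 * 20 / 15 = 40 total, so about 10 real faults are estimated to remain
```

The estimate is only as good as its core assumption: if the seeded faults are easier to find than real ones, the remaining-fault count will be optimistic.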

What is Test bed and Test data ?

A Test Bed is an execution environment configured for software testing. It consists of specific hardware, network topology, Operating System, configuration of the product under test, system software, and other applications. The Test Plan for a project should be developed from the test beds to be used. Test Data is data that is run through a computer program to test the software. Test data can be used to test compliance with effective controls in the software.

Describe bottom-up and top-down approaches in Regression Testing.

Bottom-up approach: In this approach testing is conducted from the sub-modules up to the main module. If the main module is not yet developed, a temporary program called a DRIVER is used to simulate the main module. Top-down approach: In this approach testing is conducted from the main module down to the sub-modules. If a sub-module is not yet developed, a temporary program called a STUB is used to simulate the sub-module.
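A minimal sketch of both scaffolds, using an invented billing example (the module names and tariff numbers are assumptions for illustration, not from the source):

```python
# Bottom-up: the sub-module (billing) exists, the main module does not yet.
def calculate_bill(units):
    """Developed sub-module under test: 4 per unit up to 100, then 6 per unit."""
    return units * 4 if units <= 100 else 100 * 4 + (units - 100) * 6

def driver():
    """DRIVER: temporary stand-in for the undeveloped main module.
    It simply calls the sub-module with test inputs and checks results."""
    assert calculate_bill(50) == 200
    assert calculate_bill(150) == 700
    return "sub-module tests passed"

# Top-down: the main module exists, a sub-module does not yet.
def tax_rate_stub(region):
    """STUB: temporary replacement for an unfinished sub-module.
    Returns a canned value instead of a real computation."""
    return 0.10

def main_module(units, region):
    """Main module under test; it calls the stub where the real
    tax sub-module will eventually be plugged in."""
    return calculate_bill(units) * (1 + tax_rate_stub(region))

print(driver())
print(main_module(50, "north"))
```

The driver exercises code from above (it plays the caller); the stub fakes code from below (it plays the callee). Both are thrown away once the real modules exist.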

What is Re-test? What is Regression Testing?

Re-test - Retesting means testing only a certain part of the application again, without considering how the change affects other parts or the application as a whole. Regression Testing - Testing the application after a change in a module or part of the application, to verify that the code change has not adversely affected the rest of the application.

What are the basic solutions for the software development problems?

Basic requirements - clear, detailed, complete, achievable, testable requirements have to be developed. Use prototypes to help pin down requirements. In agile environments, continuous and close coordination with customers/end-users is needed. Realistic schedules - allow enough time to plan, design, test, fix bugs, re-test, change, and document within the given schedule. Adequate testing - testing should be started early, the software should be re-tested after bugs are fixed or changes are made, and enough time should be spent on testing and bug-fixing. Proper study of initial requirements - be ready to accommodate more changes after development has begun, and be ready to explain the changes made to others. Work closely with the customers and end-users to manage expectations; this avoids excessive changes in the later stages. Communication - conduct frequent inspections and walkthroughs at appropriate points; ensure that information and documentation are available and up-to-date, preferably in electronic form.

What are the common problems in the software development process?

Inadequate requirements from the client - if the requirements given by the client are unclear, unfinished, and not testable, problems may arise. Unrealistic schedules - sometimes too much work is given to the developer with too short a duration to complete it; then problems are unavoidable. Insufficient testing - problems can arise when the developed software is not tested properly. Additional work under the existing process - a request from higher management to work on another project or task can cause problems when the project is being tested as a team. Miscommunication - in some cases the developer is not informed about the client's requirements and expectations, so there can be deviations.

Why does software have bugs?

Miscommunication or no communication - about the details of what an application should or shouldn't do. Programming errors - in some cases the programmers make mistakes. Changing requirements - the end-user may not understand the effects of changes, or may understand and request them anyway; redesign, rescheduling of engineers, effects on other projects, and work already completed having to be redone or thrown out all introduce bugs. Time pressure - scheduling of software projects is difficult at best, often requiring a lot of guesswork; when deadlines loom and the crunch comes, mistakes will be made.

What software testing types can be considered?

Black box testing – This type of testing doesn't require any knowledge of the internal design or coding. These tests are based on the requirements and functionality. White box testing – This kind of testing is based on knowledge of the internal logic of the application's code. Testing is done based on coverage of code statements, paths, and conditions. Unit testing – the 'micro' scale of testing, mostly used to test particular functions or code modules. It is typically done by the programmer and not by testers, since it requires detailed knowledge of the internal program design and code. It cannot be done easily unless the application has a well-designed architecture with tight code, and may require developing test driver modules or test harnesses. Sanity testing or Smoke testing – This type of testing is done initially to determine whether a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, it may not be in a 'sane' enough condition to warrant further testing in its current state.

What is the difference between QA and testing?

Testing involves operation of a system or application under controlled conditions and evaluating the results. It is oriented to 'detection'. Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

What is quality assurance?

Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

Explain Load, Performance, Stress Testing with an example

Load Testing and Performance Testing are commonly referred to as positive testing, whereas Stress Testing is referred to as negative testing. Say, for example, there is an application which can handle 25 simultaneous user logins at a time. In load testing we test the application with 25 users and check how the application behaves at this stage; in performance testing we concentrate on the time taken to perform the operations. In stress testing we test with more than 25 users, keep increasing the number, and check at what point the application breaks under the load on the hardware resources.
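The 25-user example can be sketched with threads standing in for simultaneous users. Everything here is a self-contained toy (the in-process `login` server and its session limit are assumptions for illustration, not a real load-testing tool; real load tests use tools such as JMeter or LoadRunner against an actual server):

```python
import threading
import time

MAX_SESSIONS = 25  # documented limit from the example above
_lock = threading.Lock()
_active = 0

def login():
    """Toy server-side login: refuses logins beyond the session limit."""
    global _active
    with _lock:
        if _active >= MAX_SESSIONS:
            return False          # the 26th simultaneous user is rejected
        _active += 1
        return True

def run_test(n_users):
    """Fire n_users simultaneous logins; return (successes, elapsed seconds)."""
    global _active
    _active = 0
    results = []
    def worker():
        results.append(login())
    threads = [threading.Thread(target=worker) for _ in range(n_users)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start  # performance: time taken
    return sum(results), elapsed

ok25, _ = run_test(25)   # load test: exactly the supported 25 users succeed
ok40, _ = run_test(40)   # stress test: only 25 succeed, the other 15 are rejected
print(ok25, ok40)
```

Load testing runs at the supported 25 users, performance testing watches the `elapsed` figure, and stress testing pushes past the limit to see how (and how gracefully) the system fails.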

What is Traceability Matrix ?

A Traceability Matrix is a document used for tracking requirements, test cases, and defects. It is prepared to satisfy the client that test coverage is complete end to end. The document consists of Requirement/Baseline doc Ref No., Test case/Condition, and Defect/Bug ID. Using this document a person can track a Requirement back from a Defect ID.
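A minimal sketch of such a matrix as a data structure. The requirement, test case, and defect IDs below are invented for illustration; in practice they would come from the baseline document and the defect tracker:

```python
# Traceability matrix: requirement -> test cases -> defects
traceability = [
    {"req": "REQ-001", "test_cases": ["TC-01", "TC-02"], "defects": ["BUG-7"]},
    {"req": "REQ-002", "test_cases": ["TC-03"],          "defects": []},
    {"req": "REQ-003", "test_cases": [],                 "defects": []},
]

def requirement_for_defect(matrix, defect_id):
    """Track a defect back to the requirement it belongs to."""
    for row in matrix:
        if defect_id in row["defects"]:
            return row["req"]
    return None

def uncovered_requirements(matrix):
    """Coverage check: requirements with no test case are coverage gaps."""
    return [row["req"] for row in matrix if not row["test_cases"]]

assert requirement_for_defect(traceability, "BUG-7") == "REQ-001"
assert uncovered_requirements(traceability) == ["REQ-003"]
```

The two queries mirror the two uses described above: tracing a defect back to its requirement, and proving to the client that every requirement has at least one test case.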

Explain Compatibility Testing with an example

Compatibility testing evaluates the application's compatibility with the computing environment: Operating System, Database, Browser compatibility, backwards compatibility, computing capacity of the hardware platform, and compatibility of peripherals. Ex: If compatibility testing is done on a game application, before installing the game on a computer its compatibility is checked against the computer's specification, i.e. whether the game will run on a computer with that specification or not.

Explain Peer Review in Software Testing

Peer review is an alternative form of testing, where colleagues are invited to examine your work products for defects and improvement opportunities. Some peer review approaches are: Inspection – a more systematic and rigorous type of peer review. Inspections are more effective at finding defects than informal reviews. Ex: In Motorola's Iridium project nearly 80% of the defects were detected through inspections, whereas only 60% of the defects were detected through informal reviews. Team Reviews – a planned and structured approach, but less formal and less rigorous than inspections. Walkthrough – an informal review in which the work product's author describes it to some colleagues and asks for suggestions. Walkthroughs are informal because they typically do not follow a defined procedure, do not specify exit criteria, require no management reporting, and generate no metrics. Pair Programming – two developers work together on the same program at a single workstation, continuously reviewing each other's work.
