Checklist for conducting Unit Tests

- Is the number of input parameters equal to the number of arguments?
- Do parameter and argument attributes match?
- Do parameter and argument units systems match?
- Is the number of arguments transmitted to called modules equal to the number of parameters?
- Do the attributes of arguments transmitted to called modules match the attributes of the parameters?
- Does the units system of arguments transmitted to called modules match the units system of the parameters?
- Are the number, attributes, and order of arguments to built-in functions correct?
- Are there any references to parameters that are not associated with the current point of entry?
- Have input-only arguments been altered? (A sketch of a test for this appears after the list.)
- Are global variable definitions consistent across modules?
- Are constraints passed as arguments?
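
As a concrete illustration of the "input-only arguments" item, here is a minimal pytest-style sketch in Python. The function `normalize_scores` is a hypothetical module under test, not something from this post; the point is the pattern of snapshotting an argument and asserting the callee left it unchanged.

```python
import copy

def normalize_scores(scores):
    # Hypothetical module under test: should return a new list,
    # leaving its input-only argument unmodified.
    total = sum(scores)
    return [s / total for s in scores]

def test_input_only_argument_not_altered():
    # Snapshot the argument before the call, then confirm the
    # callee has not mutated it (checklist item above).
    original = [3, 1, 6]
    snapshot = copy.deepcopy(original)
    normalize_scores(original)
    assert original == snapshot
```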

When a module performs external I/O, additional interface tests must be conducted:

- Are file attributes correct?
- Are OPEN/CLOSE statements correct?
- Does the format specification match the I/O statement?
- Does the buffer size match the record size?
- Are files opened before use?
- Are end-of-file conditions handled?
- Are I/O errors handled?
- Are there any textual errors in the output information?
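
A minimal sketch of the end-of-file and I/O-error items, again in pytest style; `read_records` is a hypothetical function invented for illustration, and `tmp_path` is pytest's built-in temporary-directory fixture.

```python
import pytest

def read_records(path):
    # Hypothetical module under test: the file is opened before use
    # and closed on exit; iteration ends cleanly at end-of-file, and
    # a missing file surfaces as an I/O error the caller can handle.
    with open(path, "r") as f:
        return [line.rstrip("\n") for line in f]

def test_end_of_file_handled(tmp_path):
    data = tmp_path / "records.txt"
    data.write_text("a\nb\n")
    assert read_records(data) == ["a", "b"]

def test_io_error_handled():
    # A missing file should raise a visible error, not fail silently.
    with pytest.raises(OSError):
        read_records("no_such_file.txt")
```
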
The local data structure for a module is a common source of errors.

Test cases should be designed to uncover errors in the following categories:

- improper or inconsistent typing
- erroneous initialization or default values
- incorrect (misspelled or truncated) variable names
- inconsistent data types
- underflow, overflow, and addressing exceptions
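
To make "erroneous initialization or default values" concrete, here is a small Python sketch of a classic initialization error and the test that catches it. The function `append_event` is hypothetical; the buggy variant would use a shared mutable default (`log=[]`).

```python
def append_event(event, log=None):
    # Hypothetical function under test. Writing `log=[]` as the
    # default would silently share one list across calls; the test
    # below would catch that erroneous default value.
    if log is None:
        log = []
    log.append(event)
    return log

def test_default_value_initialized_per_call():
    # Two independent calls must not share state through the default.
    first = append_event("start")
    second = append_event("stop")
    assert first == ["start"]
    assert second == ["stop"]
```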

From a strategic point of view, the following questions should be addressed:

- Has the component interface been fully tested?
- Have local data structures been exercised at their boundaries?
- Has the cyclomatic complexity of the module been determined?
- Have all independent basis paths been tested? (See the basis-path sketch after this list.)
- Have all loops been tested appropriately?
- Have data flow paths been tested?
- Have all error-handling paths been tested?
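
As a rough illustration of basis path coverage, here is a minimal Python sketch: a hypothetical `classify` function with cyclomatic complexity 3 (two decision points plus one), and one test case exercising each independent path.

```python
def classify(n):
    # Hypothetical module under test with cyclomatic complexity 3:
    # two decisions + 1 = three independent basis paths.
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

def test_basis_paths():
    # One test case per independent basis path through the function.
    assert classify(-5) == "negative"
    assert classify(0) == "zero"
    assert classify(7) == "positive"
```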
