Monday, June 8, 2009

Differentiation between Quality Control & Quality Assurance as defined by Industry Experts

Many industry experts have offered the following statements to differentiate quality control from quality assurance:

1) Quality control relates to a specific product or service.

2) Quality control verifies whether specific attribute(s) are in, or are not in, a specific product or service.

3) Quality control identifies defects for the primary purpose of correcting defects.

4) Quality control is the responsibility of the team/worker.

5) Quality control is concerned with a specific product.

6) Quality assurance helps establish processes.

7) Quality assurance sets up measurement programs to evaluate processes.

8) Quality assurance identifies weaknesses in processes and improves them.

9) Quality assurance is a management responsibility, frequently performed by a staff function.

10) Quality assurance is concerned with all of the products that will ever be produced by a process.

11) Quality assurance is sometimes called quality control over quality control because it evaluates whether quality control is working.

12) Quality assurance personnel should never perform quality control unless it is to validate quality control.


Tags: Quality Control, Quality Assurance, Software Testing

Understand the difference between quality control and quality assurance

There is often confusion in the IT industry regarding the difference between quality control and quality assurance. Many “quality assurance” groups, in fact, practice quality control. Quality methods can be segmented into two categories: preventive methods and detective methods. This distinction serves as the mechanism to distinguish quality assurance activities from quality control activities. This discussion explains the critical difference between control and assurance, and how to distinguish a control practice from an assurance practice.

Quality has the following two working definitions:

1) Definition from Producer’s Viewpoint: The quality of the product meets the requirements.

2) Definition from Customer’s Viewpoint: The quality of the product is “fit for use” or meets the customer’s needs.

There are many “products” produced from the software development process in addition to the software itself, including requirements, design documents, data models, GUI screens, programs, and so on. To ensure that these products meet both requirements and user needs, both quality assurance and quality control are necessary.

Let us understand Quality Assurance:

Quality assurance is a planned and systematic set of activities necessary to provide adequate confidence that products and services will conform to specified requirements and meet user needs. Quality assurance is a staff function, responsible for implementing the quality policy defined through the development and continuous improvement of software development processes.

Quality assurance is an activity that establishes and evaluates the processes that produce products. If there is no need for process, there is no role for quality assurance. For example, quality assurance activities in an IT environment would determine the need for, acquire, or help install the following:

1) System development methodologies

2) Estimation processes

3) System maintenance processes

4) Requirements definition processes

5) Testing processes and standards

Once installed, quality assurance would measure these processes to identify weaknesses, and then correct those weaknesses to continually improve the process.
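The measure-and-improve loop described above can be sketched in code. The following is a minimal, hypothetical illustration (the phase names and defect counts are invented): a quality assurance function examines process measurements, here defects traced back to the phase that introduced them, and flags the weakest process step for improvement.

```python
# A minimal sketch of QA process measurement (hypothetical data and names).
# QA does not inspect individual products; it measures the process that
# produces them, looking for the weakest step to improve.

def weakest_phase(defects_by_phase):
    """Return the phase whose process let the most defects through."""
    return max(defects_by_phase, key=defects_by_phase.get)

# Hypothetical measurements: defects traced back to the phase that introduced them.
measurements = {
    "requirements": 14,
    "design": 9,
    "coding": 22,
    "testing": 5,
}

print(weakest_phase(measurements))  # prints "coding", the process step to improve first
```

Note that the function operates on the process, not on any one product; that is the distinction the article draws between assurance and control.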

Now Let us understand Quality Control:

Quality control is the process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected. Quality control is a line function, and the work is done within a process to ensure that the work product conforms to standards and requirements.

Quality control activities focus on identifying defects in the actual products produced. These activities begin at the start of the software development process with reviews of requirements, and continue until all application testing is complete.

It is possible to have quality control without quality assurance. For example, a test team may be in place to conduct system testing at the end of development, regardless of whether that system is produced using a software development methodology.

Tags: Quality Control, Quality Assurance, Software Testing

Sunday, June 7, 2009

What is the difference between Acceptance Testing and System Testing?

Acceptance testing is performed by user personnel, possibly with assistance from software testers. System testing is performed by developers and/or software testers. The objective of both types of testing is to ensure that the completed software will be acceptable to the user.

System testing should be performed before acceptance testing. There is a logical sequence for testing, and an important reason for the logical steps of the different levels of testing. Unless each level of testing fulfills its objective, the following level of testing will have to compensate for weaknesses in testing at the previous level.

In most organizations, integration and system testing focus on determining whether the software specifications have been implemented as specified. For this objective, it does not matter whether those specifications are the ones the user actually needs; they should simply be the agreed-upon specifications for the software system.

The system specifications tend to focus on the software specifications. They rarely address the processing concerns over input to the software, nor do they address the concerns over the ability of user personnel to effectively use the system in performing their day-to-day business activities.

Acceptance testing should focus on input processing, use of the software in the user organization, and whether or not the specifications meet the true processing needs of the user. Sometimes these user needs are not included in the specifications; sometimes these user needs are incorrectly specified in the software specifications; and sometimes the user was unaware that without certain attributes of the system, the system was not acceptable. Examples include users not specifying the skill level of the people who will use the system; processing being specified but turnaround time not; and users not knowing that they must specify the maintainability attributes of the software.

Effective software testers will focus on all three reasons why the software as specified may not meet the user’s true needs. For example, they may recommend developmental reviews with users involved. Testers may ask users whether the quality factors are important to them in the operational software. Testers may work with users to define acceptance criteria early in the development process so that the developers are aware of, and can address, those acceptance criteria.

Tags: User Acceptance Testing, Software Testing, Quality Assurance

What is the methodology for users to assign acceptance criteria?

The user must assign the criteria the software must meet to be deemed acceptable. Ideally, this is included in the software requirement specifications.

In preparation for developing the acceptance criteria, the user should:

1) Acquire full knowledge of the application for which the system is intended

2) Become fully acquainted with the application as it is currently implemented by the user’s organization

3) Understand the risks and benefits of the development methodology that is to be used in correcting the software system

4) Fully understand the consequences of adding new functions to enhance the system.

Acceptance requirements that a system must meet can be divided into the following four categories:

1) Functionality Requirements: These requirements relate to the business rules that the system must execute.

2) Performance Requirements: These requirements relate to operational aspects, such as time or resource constraints.

3) Interface Quality Requirements: These requirements relate to connections from one component to another component of processing (e.g., human-machine, machine-module).

4) Overall Software Quality Requirements: These requirements specify limits for factors or attributes such as reliability, testability, correctness, and usability.

The criterion that a requirements document may have no more than five statements with missing information is an example of quantifying the quality factor of completeness. Assessing the criticality of a system is important in determining quantitative acceptance criteria. The user should determine the degree of criticality of the requirements by the above acceptance requirements categories.
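The quantified completeness criterion above can be sketched as a simple check. The five-statement threshold comes from the example in the text; the statement records themselves are hypothetical:

```python
MAX_INCOMPLETE_STATEMENTS = 5  # threshold from the completeness example in the text

def completeness_check(statements):
    """Pass the completeness criterion if no more than five statements
    are flagged as having missing information."""
    incomplete = [s for s in statements if s["missing_info"]]
    return len(incomplete) <= MAX_INCOMPLETE_STATEMENTS

# Hypothetical requirements document: ten statements, two with missing information.
doc = [{"id": i, "missing_info": i in (2, 7)} for i in range(10)]
print(completeness_check(doc))  # prints True: two incomplete statements is within the limit
```

Expressing a quality factor as a numeric threshold like this is what makes the acceptance criterion objectively testable rather than a matter of opinion.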

By definition, all safety criteria are critical; and by law, certain security requirements are critical.

Some typical factors affecting criticality are:

a) Importance of the system to organization or industry
b) Consequence of failure
c) Complexity of the project
d) Technology risk
e) Complexity of the user environment

Products or pieces of products with critical requirements do not qualify for acceptance if they do not satisfy their acceptance criteria. A product with failed non-critical requirements may qualify for acceptance, depending upon quantitative acceptance criteria for quality factors. Clearly, if a product fails a substantial number of non-critical requirements, the quality of the product is questionable.
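The accept/reject logic described above can be sketched as follows. This is an illustrative example only; the three-failure tolerance for non-critical requirements is a hypothetical quantitative criterion, not a standard value:

```python
def acceptance_decision(requirements, max_noncritical_failures=3):
    """Reject if any critical requirement fails; otherwise tolerate a
    limited number of non-critical failures (hypothetical threshold)."""
    critical_failed = [r for r in requirements if r["critical"] and not r["passed"]]
    noncritical_failed = [r for r in requirements if not r["critical"] and not r["passed"]]
    if critical_failed or len(noncritical_failed) > max_noncritical_failures:
        return "rejected"
    return "accepted"

# Hypothetical acceptance test results for four requirements.
results = [
    {"name": "payroll calculation", "critical": True, "passed": True},
    {"name": "audit trail", "critical": True, "passed": True},
    {"name": "report layout", "critical": False, "passed": False},
    {"name": "help text", "critical": False, "passed": False},
]
print(acceptance_decision(results))  # prints "accepted": no critical failures, two non-critical
```

A single critical failure rejects the product outright, while non-critical failures are weighed against the agreed quantitative criterion, mirroring the rule stated above.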

The user has the responsibility of ensuring that acceptance criteria contain pass or fail criteria. The acceptance tester should approach testing assuming that the least acceptable corrections have been made, while the developer believes the corrected system is fully acceptable. Similarly, a contract with what could be interpreted as a range of acceptable values could result in a corrected system that might never satisfy the user’s interpretation of the acceptance criteria.

For specific software systems, users must examine their projects’ characteristics and criticality in order to develop expanded lists of acceptance criteria for those software systems. Some of the criteria may change according to the phase of correction for which criteria are being defined. For example, for requirements, the “testability” quality may mean that test cases can be developed automatically.

The user must also establish acceptance criteria for individual elements of a product. These criteria should be the acceptable numeric values or ranges of values. The buyer should compare the established acceptable values against the number of problems presented at acceptance time. For example, if the number of inconsistent requirements exceeds the acceptance criteria, then the requirements document should be rejected. At that time, the established procedures for iteration and change control go into effect.

Acceptance Criteria related Information Required to be Documented by the users:

This documentation should be prepared for each hardware or software project within the overall project; in it, the acceptance criteria requirements should be listed and uniquely numbered for control purposes.

Criteria - 1: Hardware / Software Project:
Information to be documented: The name of the project being acceptance-tested. This is the name the user or customer calls the project.

Criteria - 2: Number:
Information to be documented: A sequential number identifying acceptance criteria.

Criteria - 3: Acceptance Requirement:
Information to be documented: A user requirement that will be used to determine whether the corrected hardware/software is acceptable.

Criteria - 4: Critical / Non-Critical:
Information to be documented: Indicate whether the acceptance requirement is critical, meaning that it must be met, or non-critical, meaning that it is desirable but not essential.

Criteria - 5: Test Result:
Information to be documented: Indicate after acceptance testing whether the requirement was met (“acceptable”) or not (“not acceptable,” meaning the project is rejected because it does not meet the requirement).

Criteria - 6: Comments:
Information to be documented: Clarify the criticality of the requirement; or indicate the meaning of the test result rejection. For example: The software cannot be run; or management will make a judgment after acceptance testing as to whether the project can be run.

After defining the acceptance criteria, determine whether meeting the criteria is critical to the success of the system.
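The six documented items above could, for example, be captured as a simple record type. This is a hypothetical sketch; the class and field names are invented for illustration:

```python
from dataclasses import dataclass

# A sketch of the six documented fields as one record per acceptance
# criterion. The names here are hypothetical, not part of any standard.
@dataclass
class AcceptanceCriterion:
    project: str            # Criteria 1: the name the user or customer calls the project
    number: int             # Criteria 2: sequential number for control purposes
    requirement: str        # Criteria 3: the user requirement being verified
    critical: bool          # Criteria 4: True if the requirement must be met
    test_result: str = ""   # Criteria 5: "acceptable" or "not acceptable" after testing
    comments: str = ""      # Criteria 6: clarifications, or the meaning of a rejection

criterion = AcceptanceCriterion(
    project="Payroll Upgrade",
    number=1,
    requirement="Monthly payroll run completes within the processing window",
    critical=True,
)
print(criterion.number, criterion.critical)  # prints: 1 True
```

Keeping one uniquely numbered record per requirement makes it straightforward to report, per the table above, which criteria were met at acceptance time.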

Tags: User Acceptance Testing, Software Testing, Quality Assurance

What is the role of Software Testers in acceptance testing?

Software testers can have one of three roles in acceptance testing.

Role - 1) No involvement at all:
In that instance the user accepts full responsibility for developing and executing the acceptance test plan.

Role - 2) Act as an advisor:
The user will develop and execute the test plan, but rely on software testers to compensate for a lack of competency on the part of the users, or to provide a quality control role.

Role - 3) Be an active participant in software testing:
This role can include any or all of the acceptance testing activities. The role of the software tester cannot include defining the acceptance criteria, or making the decision as to whether or not the software can be placed into operation. If software testers are active participants in acceptance testing, then they may conduct any part of acceptance testing up to the point where the results of acceptance testing are documented.

A role that software testers should accept is developing the acceptance test process. This means that they will develop a process for defining acceptance criteria, develop a process for building an acceptance test plan, develop a process to execute the acceptance test plan, and develop a process for recording and presenting the results of acceptance testing.

Tags: User Acceptance Testing, Software Testing, Quality Assurance

What is the role of the Users in acceptance testing?

The user’s role in acceptance testing begins with the user making the determination as to whether acceptance testing will or will not occur. If the totality of the user’s needs has been incorporated into the software requirements, then the software testers should test to assure those needs are met in unit, integration, and system testing.

If acceptance testing is to occur the user has primary responsibility for planning and conducting acceptance testing. This assumes that the users have the necessary testing competency to develop and execute an acceptance test plan.

If the user does not have the needed competency to develop and execute an acceptance test plan, the user will need to acquire that competency from other organizational units or outsource the activity. Normally, the IT organization’s software testers would assist the user in the acceptance testing process if additional competency is needed.

The users will have the following minimum roles in acceptance testing:

1) Defining acceptance criteria in a testable format

2) Providing the use cases that will be used in acceptance testing

3) Training user personnel in using the new software system

4) Providing the necessary resources, primarily user staff personnel, for acceptance testing

5) Comparing the actual acceptance testing results with the desired acceptance testing results. This may be performed using testing software.

6) Making decisions as to whether additional work is needed prior to placing the software in operation, whether the software can be placed in operation with additional work to be done, or whether the software is fully acceptable and can be placed into production as is
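Item 5 above, comparing actual acceptance testing results with desired results, can be sketched as a small comparison routine. The requirement names and results here are hypothetical:

```python
def compare_results(desired, actual):
    """Return the acceptance requirements whose actual test result
    differs from the desired result."""
    return {req: (desired[req], actual.get(req))
            for req in desired if actual.get(req) != desired[req]}

# Hypothetical acceptance requirements and their results.
desired = {"login works": "pass", "report prints": "pass"}
actual = {"login works": "pass", "report prints": "fail"}
print(compare_results(desired, actual))  # prints {'report prints': ('pass', 'fail')}
```

The discrepancies returned by such a comparison are the input to the user’s decision in item 6: whether additional work is needed before the software is placed into operation.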

If the software does not fully meet the user needs, but will be placed into operation, the user must develop a strategy to anticipate problems and pre-define the actions to be taken should those problems occur.

Tags: User Acceptance Testing, Software Testing, Quality Assurance

Friday, June 5, 2009

Understanding of the objective of User Acceptance Testing

Let us begin with an understanding of the objective of User Acceptance Testing:


The objective of software development is to develop software that meets the true needs of the user, not just the system specifications. To accomplish this, testers should work with the users early in a project to clearly define the criteria that would make the software acceptable in meeting the user needs. As much as possible, once the acceptance criteria have been established, they should integrate those criteria into all aspects of development. This same process can be used by software testers when users are unavailable for testing; when diverse users use the same software; and for beta testing software.

Although acceptance testing is a customer and user responsibility, testers normally help develop an acceptance test plan, include that plan in the system test plan to avoid test duplication, and, in many cases, perform or assist in performing the acceptance test.

What are the key Concepts of Acceptance Testing?
It is important that both software testers and user personnel understand the basics of acceptance testing.

Acceptance testing is formal testing conducted to determine whether a software system satisfies its acceptance criteria and to enable the buyer to determine whether to accept the system. Software acceptance testing at delivery is usually the final opportunity for the buyer to examine the software and to seek redress from the developer for insufficient or incorrect software.

Frequently, the software acceptance test is the only time the buyer is involved in acceptance and the only opportunity the buyer has to identify deficiencies in a critical software system. The term critical implies economic or social catastrophe, such as loss of life, or strategic importance to an organization’s long-term economic welfare. The buyer is thus exposed to the considerable risk that a needed system will never operate reliably because of inadequate quality control during development. To reduce the risk of problems arising at delivery or during operation, the buyer must become involved with software acceptance early in the acquisition process.

Software acceptance is an incremental process of approving or rejecting software systems during development or maintenance, according to how well the software satisfies predefined criteria. For the purpose of software acceptance, the activities of software maintenance are assumed to share the properties of software development.

Acceptance decisions occur at pre-specified times when processes, support tools, interim documentation, segments of the software, and finally the total software system must meet predefined criteria for acceptance. Subsequent changes to the software may affect previously accepted elements. The final acceptance decision occurs with verification that the delivered documentation is adequate and consistent with the executable system and that the complete software system meets all buyer requirements. This decision is usually based on software acceptance testing.

Formal final software acceptance testing must occur at the end of the development process. It consists of tests to determine whether the developed system meets predetermined functionality, performance, quality, and interface criteria. Criteria for security or safety may be mandated legally or by the nature of the system.

Acceptance testing involves procedures for identifying acceptance criteria for interim life cycle products and for accepting them. Final acceptance not only acknowledges that the entire software product adequately meets the buyer’s requirements, but also acknowledges that the process of development was adequate.
Tags: User Acceptance Testing, Software Testing, Quality Assurance

What are the benefits of software acceptance testing as a life cycle process?

Let us understand the benefits of software acceptance testing as a life cycle process:

A) Early detection of software problems (and time for the customer or user to plan for possible late delivery)
B) Preparation of appropriate test facilities
C) Early consideration of the user’s needs during software development
D) Accountability for software acceptance belongs to the customer or user of the software


What are the responsibilities of customer or end users in acceptance testing?

1) Ensure user involvement in developing system requirements and acceptance criteria
2) Identify interim and final products for acceptance, their acceptance criteria, and schedule
3) Plan how and by whom each acceptance activity will be performed
4) Plan resources for providing information on which to base acceptance decisions
5) Schedule adequate time for buyer staff to receive and examine products and evaluations prior to acceptance review
6) Prepare the Acceptance Plan
7) Respond to the analyses of project entities before accepting or rejecting
8) Approve the various interim software products against quantified criteria at interim points
9) Perform the final acceptance activities, including formal acceptance testing, at delivery
10) Make an acceptance decision for each product

The customer or user must be actively involved in defining the type of information required, evaluating that information, and deciding at various points in the development activities if the products are ready for progression to the next activity.

Acceptance testing is designed to determine whether the software is fit for use. The concept of fit for use is important in both design and testing. Design must attempt to build the application to fit into the user’s business process; the test process must ensure a prescribed degree of fit. Testing that concentrates on structure and requirements may fail to assess fit, and thus fail to test the value of the automated application to the business.

What are the components of fit?

1) Data: The reliability, timeliness, consistency, and usefulness of the data included in the automated application.
2) People: People should have the skills, training, aptitude, and desire to properly use and interact with the automated application.
3) Structure: The structure is the proper development of application systems to optimize technology and satisfy requirements.
4) Rules: The rules are the procedures to follow in processing the data.

Tags: Software Testing, User Acceptance Testing