Wednesday, September 9, 2009

Which defects come under the prime focus of testers?

All testing focuses on discovering and eliminating defects or variances from what is expected.

Testers need to identify the following two types of defects:

A) Variance from Specifications – A defect from the perspective of the builder of the product.

B) Variance from what is Desired – A defect from a user (or customer) perspective.

Typical software system defects are as under:

1) IT improperly interprets requirements: IT staff misinterprets what the user wants, but correctly implements what the IT people believe is wanted.

2) Users specify the wrong requirements: The specifications given to IT are erroneous.

3) Requirements are incorrectly recorded: IT fails to record the specifications properly.

4) Design specifications are incorrect: The application system design does not achieve the system requirements, but the design as specified is implemented correctly.

5) Program specifications are incorrect: The design specifications are incorrectly interpreted, making the program specifications inaccurate; however, it is possible to properly code the program to achieve the specifications.

6) Errors in program coding: The program is not coded according to the program specifications.

7) Data entry errors: Data entry staff incorrectly enters information into your computers.

8) Testing errors: Tests either falsely detect an error or fail to detect one.

9) Mistakes in error correction: Your implementation team makes errors in implementing your solutions.

10) The corrected condition causes another defect: In the process of correcting a defect, the correction process itself injects additional defects into the application system.

Tags: Software Testing, Software Quality, Software System Defects, Quality Assurance, Software Defects

What are software design & data defects?

Let us firstly see what a defect is.

Problems that result in software applications automatically initiating uneconomical or otherwise incorrect actions can be broadly categorized as software design defects and data defects.

Software Design Defects:

Software design defects that most commonly cause bad decisions by automated decision-making applications include:

1) Designing software with incomplete or erroneous decision-making criteria. Actions have been incorrect because the decision-making logic omitted factors that should have been included. In other cases, decision-making criteria included in the software were inappropriate, either at the time of design or later, because of changed circumstances.

2) Failing to program the software as intended by the customer (user), or designer, resulting in logic errors often referred to as programming errors.

3) Omitting needed edit checks for determining the completeness of input data. Critical data elements have been left blank on many input documents, and because no edit checks were included, the applications processed the transactions with incomplete data.
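
As a hedged illustration of the edit check described in point 3, the sketch below rejects transactions whose critical data elements are blank instead of processing them with incomplete data. The record structure and field names are hypothetical, not taken from any particular application.

# A minimal sketch of an input-completeness edit check (hypothetical field names).
CRITICAL_FIELDS = ["account_id", "amount", "transaction_date"]

def is_complete(record):
    """Return True only when every critical data element is present and non-blank."""
    return all(str(record.get(field, "")).strip() for field in CRITICAL_FIELDS)

def accept_transactions(records):
    """Split transactions into those accepted for processing and those rejected for re-entry."""
    accepted, rejected = [], []
    for record in records:
        (accepted if is_complete(record) else rejected).append(record)
    return accepted, rejected

if __name__ == "__main__":
    sample = [
        {"account_id": "A-100", "amount": "250.00", "transaction_date": "2009-09-09"},
        {"account_id": "A-101", "amount": "", "transaction_date": "2009-09-09"},  # amount left blank
    ]
    accepted, rejected = accept_transactions(sample)
    print(f"accepted={len(accepted)}, rejected={len(rejected)}")

Had such a check been in place, the incomplete documents described above would have been rejected at entry rather than silently processed.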

Data Defects:

Input data is frequently a problem. Since much of this data is an integral part of the decision making process, its poor quality can adversely affect the computer-directed actions.

Common problems are:

1) Incomplete data used by automated decision-making applications. Some input documents prepared by people omitted entries in data elements that were critical to the application, yet the documents were processed anyway rather than being rejected. In other instances, data needed by the application that should have become part of the IT files was never entered into the system.

2) Incorrect data used in automated decision-making application processing. People have often unintentionally introduced incorrect data into the IT system.

3) Obsolete data used in automated decision-making application processing. Data in the IT files became obsolete due to new circumstances. The new data may have been available but was not put into the computer.
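
Building on the completeness check sketched in the previous section, the fragment below illustrates detecting the other two data defects: an incorrect (out-of-range) value and an obsolete record whose last update falls outside an assumed freshness window. The field names, the price ceiling, and the 90-day window are illustrative assumptions only.

from datetime import date, timedelta

MAX_UNIT_PRICE = 10_000.00               # assumed plausible upper bound for this application
FRESHNESS_WINDOW = timedelta(days=90)    # assumed limit before reference data is treated as obsolete

def find_data_defects(record, today):
    """Return labels for the data defects found in one reference record."""
    defects = []
    price = record.get("unit_price")
    if price is None or not (0 < price <= MAX_UNIT_PRICE):
        defects.append("incorrect: unit_price outside the plausible range")
    last_updated = record.get("last_updated")
    if last_updated is None or (today - last_updated) > FRESHNESS_WINDOW:
        defects.append("obsolete: record not refreshed within the freshness window")
    return defects

if __name__ == "__main__":
    record = {"unit_price": 25_000.00, "last_updated": date(2009, 1, 15)}
    print(find_data_defects(record, today=date(2009, 9, 9)))

In practice the thresholds would come from the application's business rules; the point is simply that each category of bad data can be detected before it drives a decision.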

Tags: Software Testing, Software Quality, Software Design Defects, Software Data Defects, Quality Assurance, Software Defects

What is statistical process control in software development process?

Let us firstly see what a defect is.

A defect is an undesirable state. There are two types of defects: process and product. For example, if a Test Plan Standard is not followed, it would be a process defect. However, if the Test Plan did not contain a Statement of Usability as specified in the Requirements documentation, it would be a product defect.

The term quality is used to define a desirable state. A defect is defined as the lack of that desirable state. In order to fully understand what a defect is we must understand quality.

What do we mean by Software Process Defects?

Ideally, the software development process should produce the same results each time the process is executed. For example, if we follow a process that produced one function point of logic in 100 person-hours, we would expect that the next time we followed that process, we would again produce one function point of logic in 100 person-hours. However, if we follow the process the second time and it takes 110 hours to produce one function point of logic, we would say that there is “variability” in the software development process. Variability is the “enemy” of quality; the concept behind maturing a software development process is to reduce variability.

The concept of measuring and reducing variability is commonly called statistical process control (SPC).

To understand SPC we need to first understand the following:

1) What constitutes an in-control process

2) What constitutes an out-of-control process

3) What are some of the steps necessary to reduce variability within a process

Testers need to understand process variability, because the greater the variance in the process, the greater the need for software testing. A brief illustration of process variability follows.
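
As a minimal sketch of SPC, assuming we track the person-hours spent per function point across iterations, the fragment below derives control limits (the mean plus or minus three standard deviations) from a baseline of iterations judged to be in control, then flags new measurements that fall outside those limits. The sample figures are invented for illustration.

import statistics

def control_limits(baseline):
    """Centre line and 3-sigma control limits computed from an in-control baseline sample."""
    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mean, mean - 3 * sigma, mean + 3 * sigma

def is_out_of_control(measurement, limits):
    """True when a new measurement falls outside the control limits."""
    _, lower, upper = limits
    return measurement < lower or measurement > upper

if __name__ == "__main__":
    # Person-hours per function point from past iterations judged to be in control (invented data).
    baseline_hours_per_fp = [100, 104, 98, 101, 99, 103, 102, 97]
    limits = control_limits(baseline_hours_per_fp)
    print("centre / lower / upper:", tuple(round(x, 1) for x in limits))
    for new_measurement in (105, 110, 135):
        status = "out of control" if is_out_of_control(new_measurement, limits) else "in control"
        print(f"{new_measurement} hours per function point -> {status}")

The wider the variability such a chart reveals, the greater the need for testing, which is exactly the point made above.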

Tags: Software Testing, Software Quality, Statistical Process Control, Quality Assurance, SPC

Why should we test Software & who should do that?

Let us firstly see why we test software.

The simple answer as to why we test software is that developers are unable to build defect-free software. If the development processes were perfect, meaning no defects were produced, testing would not be necessary.

Let’s compare the manufacturing process of producing boxes of cereal to the process of making software. We find that, as is the case for most food manufacturing companies, testing each box of cereal produced is unnecessary. Making software, however, is a significantly different process from making a box of cereal. Cereal manufacturers may produce 50,000 identical boxes of cereal a day, while each software process is unique. This uniqueness introduces defects, thus making software testing necessary.

Now let us see why developers are not good testers:
Testing by the individual who developed the work has not proven to be a substitute for building and following a detailed test plan.

The disadvantages of a person checking their own work using their own documentation are as under:

1) Misunderstandings will not be detected, because the checker will assume that what he or she understood from the user was correct.

2) Improper use of the development process may not be detected because the individual may not understand the process.

3) The individual may be “blinded” into accepting erroneous system specifications and coding because he or she falls into the same trap during testing that led to the introduction of the defect in the first place.

4) Information services people are optimistic in their ability to do defect-free work and thus sometimes underestimate the need for extensive testing.

Without a formal division between development and test, an individual may be tempted to improve the system structure and documentation, rather than allocate that time and effort to the test.

Tags: Quality Control, Quality Assurance, Software Testing

Expectations of which customers should we satisfy & how much?

Let us firstly define excellence:

The Random House College Dictionary defines excellence as “superiority; eminence.” Excellence, then, is a measure or degree of quality. These definitions of quality and excellence are important because they are a starting point for any management team contemplating the implementation of a quality policy. They must agree on a definition of quality and the degree of excellence they want to achieve.

The common thread that runs through today's quality improvement efforts is the focus on the customer and, more importantly, customer satisfaction. The customer is the most important person in any process.

How many types of customers are there?

Customers may be either internal or external. The question of customer satisfaction (whether that customer is located in the next workstation, building, or country) is the essence of a quality product. Identifying customers' needs in the areas of what, when, why, and how is an essential part of process evaluation and may be accomplished only through communication.

The internal customer is the person or group that receives the results (outputs) of any individual's work. The outputs may include a product, a report, a directive, a communication, or a service: in fact, anything that is passed between people or groups. Customers include peers, subordinates, supervisors, and other units within the organization. Their expectations must also be known and exceeded to achieve quality.

External customers are those using the products or services provided by the organization. Organizations need to identify and understand their customers. The challenge is to understand and exceed their expectations.

An organization must focus on both internal and external customers and be dedicated to exceeding customer expectations.

Tags: Quality Control, Quality Assurance, Software Testing, Customer expectations, Excellence

Quality Gaps from the Viewpoint of the Customer and the Producer

Let us firstly define Quality:

Quality is frequently defined as meeting the customer's requirements the first time and every time. Quality is also defined as conformance to a set of customer requirements that, if met, result in a product that is fit for its intended use.

Quality is much more than the absence of defects; it is what allows us to meet customer expectations. Quality requires controlled process improvement, allowing loyalty in organizations. Quality can only be achieved by the continuous improvement of all systems and processes in the organization: not only the production of products and services, but also the design, development, service, purchasing, administration, and, indeed, all aspects of the transaction with the customer. All must work together toward the same end.

Quality can only be seen through the eyes of the customers. An understanding of the customer's expectations (effectiveness) is the first step; then exceeding those expectations (efficiency) is required. Communications will be the key. Exceeding customer expectations assures meeting all the definitions of quality.

Now let us see what is Quality Software?

There are two important definitions of quality software:

1) The producer’s view of quality software means meeting requirements.

2) The customer’s (user’s) view of quality software means fit for use.

These two definitions are not inconsistent. Meeting requirements is the producer’s definition of quality; it means that the person building the software builds it in accordance with requirements. The fit for use definition is a user’s definition of software quality; it means that the software developed by the producer meets the user’s need regardless of the software requirements.

The Two Software Quality Gaps:

In most IT groups there are two gaps, reflecting the different views of software quality held by the customer and the producer.



The first gap is the producer gap: the gap between what is specified to be delivered, meaning the documented requirements and internal IT standards, and what is actually delivered. The second gap is the customer gap: the gap between what the producer actually delivers and what the customer expects.


To close the customer’s gap, the IT quality function must understand the true needs of the user. This can be done by the following actions:

1) Customer surveys

2) JAD (joint application development) sessions – the producer and user come together and negotiate and agree upon requirements

3) More user involvement while building information products

Closing these gaps is accomplished by changing the processes so that there is consistency in producing the software and services that the user needs. Software testing professionals can participate in closing these “quality” gaps.

Tags: Quality Control, Quality Assurance, Software Testing, Software Quality Gaps

What are the different perceptions of Quality?

The definition of “quality” is a factor in determining the scope of software testing. Although there are multiple quality philosophies documented, it is important to note that most contain the same core components:


1) Quality is based upon customer satisfaction

2) Your organization must define quality before it can be achieved

3) Management must lead the organization through any improvement efforts

There are five perspectives of quality; each of these perspectives must be considered important to the customer.

1. Transcendent – I know it when I see it

2. Product-Based – Possesses desired features

3. User-Based – Fitness for use

4. Development & Manufacturing-Based – Conforms to requirements

5. Value-Based – At an acceptable cost

Peter R. Scholtes introduces the contrast between effectiveness (doing the right things) and efficiency (doing things right). Quality organizations must be both effective and efficient.

Patrick Townsend examines quality in fact and quality in perception as explained by four different views given below.

Quality in fact is usually the supplier's point of view, while quality in perception is the customer's. Any difference between the former and the latter can cause problems between the two.

1st View:
Quality in Fact: Doing the right thing.
Quality in Perception: Delivering the right product.

2nd View:
Quality in Fact: Doing it the right way.
Quality in Perception: Satisfying our customer’s needs.

3rd View:
Quality in Fact: Doing it right the first time.
Quality in Perception: Meeting the customer’s expectations.

4th View:
Quality in Fact: Doing it on time.
Quality in Perception: Treating every customer with integrity, courtesy, and respect.

An organization’s quality policy must define and view quality from its customers’ perspectives. If there are conflicts, they must be resolved.

Tags: Quality Control, Quality Assurance, Software Testing, Software Quality Perceptions

What are the Software Quality Factors?

In defining the scope of testing, the risk factors become the basis or objective of testing. The objectives for many tests are associated with testing software quality factors. The software quality factors are attributes of the software that, if they are wanted and not present, pose a risk to the success of the software, and thus constitute a business risk. For example, if the software is not easy to use, the resulting processing may be incorrect. The definition of the software quality factors and determining their priority enables the test process to be logically constructed.


When software quality factors are considered in the development of the test strategy, the results of testing are more likely to meet your objectives.

The primary purpose of applying software quality factors in a software development program is to improve the quality of the software product. Rather than simply measuring quality, the concept is to achieve a positive influence on the product and to improve its development.

How can we Identify Important Software Quality Factors?

The eleven software quality factors defined by McCall are described below.



Brief explanation of Eleven Important Software Quality Factors:

1) Correctness: Extent to which a program satisfies its specifications and fulfills the user's mission objectives.

2) Reliability: Extent to which a program can be expected to perform its intended function with required precision.

3) Efficiency: The amount of computing resources and code required by a program to perform a function.

4) Integrity: Extent to which access to software or data by unauthorized persons can be controlled.

5) Usability: Effort required to learn, operate, prepare input for, and interpret output of a program.

6) Maintainability: Effort required to locate and fix an error in an operational program.

7) Testability: Effort required to test a program to ensure that it performs its intended function.

8) Flexibility: Effort required to modify an operational program.

9) Portability: Effort required to transfer software from one configuration to another.

10) Reusability: Extent to which a program can be used in other applications - related to the packaging and scope of the functions that programs perform.

11) Interoperability: Effort required to couple one system with another.

Tags: Quality Control, Quality Assurance, Software Testing, Software Quality Factors

What is the Cost of Quality?

When calculating the total costs associated with the development of a new application or system, three cost components must be considered. The Cost of Quality is all the costs that occur beyond the cost of producing the product “right the first time.” Cost of Quality is a term used to quantify the total of the prevention, appraisal, and failure costs associated with the production of software.


The Cost of Quality includes the additional costs associated with assuring that the product delivered meets the quality goals established for it. This cost component includes all costs associated with the prevention, identification, and correction of product defects.




The three categories of costs associated with producing quality products are:


1) Prevention Costs:
Money required to prevent errors and to do the job right the first time. These normally require up-front costs for benefits that will be derived months or even years later. This category includes money spent on establishing methods and procedures, training workers, acquiring tools, and planning for quality. Prevention money is all spent before the product is actually built.

2) Appraisal Costs:
Money spent to review completed products against requirements. Appraisal includes the cost of inspections, testing, and reviews. This money is spent after the product is built but before it is shipped to the user or moved into production.

3) Failure Costs:
All costs associated with defective products that have been delivered to the user or moved into production. Some failure costs involve repairing products to make them meet requirements. Others are costs generated by failures such as the cost of operating faulty products, damage incurred by using them, and the costs associated with operating a Help Desk.

The Cost of Quality will vary from one organization to the next. The majority of costs associated with the Cost of Quality are associated with the identification and correction of defects. To minimize production costs, the project team must focus on defect prevention. The goal is to optimize the production process to the extent that rework is eliminated and inspection is built into the production process. The IT quality assurance group must identify the costs within these three categories, quantify them, and then develop programs to minimize the totality of these three costs. Applying the concepts of continuous testing to the systems development process can reduce the cost of quality.
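
As a hedged illustration of the arithmetic, the sketch below adds hypothetical prevention, appraisal, and failure costs to obtain the Cost of Quality and expresses it as a share of the total project cost. All figures are invented for illustration.

def cost_of_quality(prevention, appraisal, failure):
    """Cost of Quality = prevention costs + appraisal costs + failure costs."""
    return prevention + appraisal + failure

if __name__ == "__main__":
    # Invented example figures (in currency units) for a single project.
    production_cost = 500_000   # cost of building the product "right the first time"
    prevention_cost = 40_000    # methods and procedures, training, tools, quality planning
    appraisal_cost = 60_000     # inspections, testing, reviews
    failure_cost = 150_000      # rework, operating faulty products, Help Desk

    coq = cost_of_quality(prevention_cost, appraisal_cost, failure_cost)
    total = production_cost + coq
    print(f"Cost of Quality: {coq:,} ({coq / total:.0%} of the total project cost of {total:,})")

Shifting spend from the failure category toward prevention, as the text recommends, is what reduces this total over time.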

Tags: Quality Control, Quality Assurance, Software Testing, Cost of Quality