Software testing is an investigation conducted to provide stakeholders with
information about the quality of the product or service under test. Software
testing can also provide an objective, independent view of the software to
allow the business to appreciate and understand the risks of software
implementation. Test techniques include, but are not limited to, the process of
executing a program or application with the intent of finding software bugs
(errors or other defects).
Software testing can be stated as the process of validating and verifying that a software program/application/product:
- meets the requirements that guided its design and development;
- works as expected;
- can be implemented with the same characteristics; and
- satisfies the needs of stakeholders.
Software testing can, depending on the testing method employed, be implemented at any time in the development process. Traditionally, most of the test effort occurs after the requirements have been defined and the coding process has been completed; in the Agile approaches, by contrast, most of the test effort is ongoing. As such, the methodology of the test is governed by the software development methodology adopted.
Different software development models
will focus the test effort at different points in the development process.
Newer development models, such as Agile, often employ test
driven development and place an
increased portion of the testing in the hands of the developer, before it
reaches a formal team of testers. In a more traditional model, most of the test
execution occurs after the requirements have been defined and the coding
process has been completed.
Overview
Testing can never completely identify all the defects within software. Instead, it furnishes a criticism or comparison of the state and behavior of the product against oracles—principles or mechanisms by which someone might recognize a problem. These oracles may
include (but are not limited to) specifications, contracts,
comparable products, past versions of the same product, inferences about
intended or expected purpose, user or customer expectations, relevant
standards, applicable laws, or other criteria.
Every software product has a target
audience. For example, the audience for video game software is completely
different from banking software. Therefore, when an organization develops or
otherwise invests in a software product, it can assess whether the software
product will be acceptable to its end users, its target audience, its
purchasers, and other stakeholders. Software testing is the process of
attempting to make this assessment.
A study conducted by NIST
in 2002 reports that software bugs cost the U.S. economy $59.5 billion
annually. More than a third of this cost could be avoided if better software testing were performed.
History
The separation of debugging from testing was initially introduced
by Glenford J. Myers in 1979. Although his attention was on breakage testing
("a successful test is one that finds a bug") it illustrated the
desire of the software engineering community to separate fundamental
development activities, such as debugging, from that of verification. Dave Gelperin and William C. Hetzel classified in 1988
the phases and goals in software testing in the following stages:
- Until 1956 - Debugging oriented
- 1957–1978 - Demonstration oriented
- 1979–1982 - Destruction oriented
- 1983–1987 - Evaluation oriented
- 1988–2000 - Prevention oriented
Software testing topics
Scope
A primary purpose of testing is to
detect software failures so that defects may be discovered and corrected.
Testing cannot establish that a product functions properly under all conditions
but can only establish that it does not function properly under specific
conditions. The scope of software testing often includes examination of code as well as execution of that code in various environments and conditions, and examination of the aspects of code: does it do what it is supposed to do, and does it do what it needs to do? In the current culture of software development, a testing
organization may be separate from the development team. There are various roles
for testing team members. Information derived from software testing may be used
to correct the process by which software is developed.
Functional vs non-functional testing
Functional testing refers to activities
that verify a specific action or function of the code. These are usually found
in the code requirements documentation, although some development methodologies
work from use cases or user stories. Functional tests tend to answer the
question of "can the user do this" or "does this particular
feature work."
Non-functional testing refers to
aspects of the software that may not be related to a specific function or user
action, such as scalability
or other performance,
behavior under certain constraints,
or security.
Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.
Defects and failures
Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, e.g., unrecognized requirements that result in errors of omission by the program designer. A common source of requirements gaps is non-functional requirements
such as testability,
scalability, maintainability, usability, performance,
and security.
Software faults occur through the following process. A programmer makes an error (mistake), which results in a defect
(fault, bug) in the software source code. If this defect is executed, in
certain situations the system will produce wrong results, causing a failure. Not all defects will necessarily
result in failures. For example, defects in dead code will never result in failures. A
defect can turn into a failure when the environment is changed. Examples of
these changes in environment include the software being run on a new computer hardware platform,
alterations in source data
or interacting with different software. A single defect may result in a wide
range of failure symptoms.
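To make the fault-to-failure chain concrete, here is a minimal illustrative sketch (the function and scenario are invented for this example, not taken from any particular system). The defect exists in the source code from the start, but it only produces a failure when the faulty line is executed under the triggering condition:

    def average(values):
        # Defect: no guard against an empty list. The fault is present in
        # the source code, but it only becomes a failure when this line is
        # executed with an empty list.
        return sum(values) / len(values)

    print(average([2, 4, 6]))  # 4.0 - the defect stays dormant
    print(average([]))         # ZeroDivisionError - the defect surfaces as a failure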
Finding faults early
It is commonly believed that the
earlier a defect is found the cheaper it is to fix it. The following table
shows the cost of fixing the defect depending on the stage it was found. For
example, if a problem in the requirements is found only post-release, then it
would cost 10–100 times more to fix than if it had already been found by the
requirements review. With modern continuous deployment practices and cloud-based services, the cost of re-deployment and maintenance may be lower than in the past.
Cost to fix a defect, by time introduced versus time detected:
| Time introduced \ Time detected | Requirements | Architecture | Construction | System test | Post-release |
| Requirements | 1× | 3× | 5–10× | 10× | 10–100× |
| Architecture | – | 1× | 10× | 15× | 25–100× |
| Construction | – | – | 1× | 10× | 10–25× |
Compatibility testing
A common cause of software failure
(real or perceived) is a lack of its compatibility
with other application software,
operating systems
(or operating system versions,
old or new), or target environments that differ greatly from the original (such
as a terminal
or GUI application intended to be run on the desktop now being required to become a web application, which must render
in a web browser).
For example, in the case of a lack of backward compatibility,
this can occur because the programmers develop and test software only on the
latest version of the target environment, which not all users may be running.
This results in the unintended consequence that the latest work may not
function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes
such issues can be fixed by proactively abstracting
operating system functionality into a separate program module
or library.
Input combinations and preconditions
A very fundamental problem with
software testing is that testing under all combinations of inputs and
preconditions (initial state) is not feasible, even with a simple product. This
means that the number of defects in a software product can be very
large and defects that occur infrequently are difficult to find in testing.
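A small back-of-the-envelope computation illustrates the scale of the problem (the parameter ranges are invented for the example):

    import itertools

    # Hypothetical function under test: three integer inputs, each 0..999.
    ranges = [range(1000)] * 3

    # Exhaustive testing would need one test per combination.
    total = 1
    for r in ranges:
        total *= len(r)
    print(f"{total:,} combinations")  # 1,000,000,000 for just three inputs

    # Even enumerating a handful shows how slowly exhaustion progresses:
    print(list(itertools.islice(itertools.product(*ranges), 3)))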
More significantly, non-functional
dimensions of quality (how it is supposed to be versus what it is
supposed to do)—usability,
scalability, performance,
compatibility,
reliability—can
be highly subjective; something that constitutes sufficient value to one person
may be intolerable to another.
Static vs. dynamic testing
There are many approaches to software
testing. Reviews,
walkthroughs,
or inspections
are considered as static testing,
whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing can
be (and unfortunately in practice often is) omitted. Dynamic testing takes
place when the program itself is used for the first time (which is generally
considered the beginning of the testing stage). Dynamic testing may begin
before the program is 100% complete in order to test particular sections of
code (modules or discrete functions).
Typical techniques for this are either using stubs/drivers or execution from a debugger environment. For example, spreadsheet programs are, by their very nature,
tested to a large extent interactively ("on the fly"), with results displayed
immediately after each calculation or text manipulation.
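As a hedged sketch of the stub/driver technique mentioned above (all names and values are invented), a stub stands in for an unfinished dependency so that a driver can exercise the module under test before the rest of the system exists:

    # Stub: stands in for an unfinished exchange-rate service.
    def get_rate_stub(currency):
        # Returns a fixed rate so the module under test can run
        # before the real service is implemented.
        return 1.25

    # Module under test.
    def convert(amount, currency, get_rate=get_rate_stub):
        return amount * get_rate(currency)

    # Driver: minimal code whose only job is to invoke the module
    # under test and check the result.
    if __name__ == "__main__":
        assert convert(100, "EUR") == 125.0
        print("convert() behaves as expected with the stubbed rate")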
Software verification and validation
Main article: Verification
and validation (software)
Software testing is used in association
with verification
and validation:
- Verification: Have we built the software right? (i.e., does it implement the requirements?)
- Validation: Have we built the right software? (i.e., do the requirements satisfy the customer?)
The terms verification and validation
are commonly used interchangeably in the industry; it is also common to see
these two terms incorrectly defined. According to the IEEE Standard Glossary of
Software Engineering Terminology:
Verification
is the process of evaluating a system or component to determine whether the
products of a given development phase satisfy the conditions imposed at the
start of that phase.
Validation
is the process of evaluating a system or component during or at the end of the
development process to determine whether it satisfies specified requirements.
According to the ISO 9000 standard:
Verification
is confirmation by examination and through provision of objective evidence that
specified requirements have been fulfilled.
Validation
is confirmation by examination and through provision of objective evidence that
the requirements for a specific intended use or application have been
fulfilled.
The software testing team
Software testing can be done by software testers. Until the 1980s, the term "software tester" was used generally; later, software testing was also seen as a separate profession. In line with the different phases and goals of software testing, different roles have been established: manager, test lead, test designer, tester, automation developer, and test administrator.
Software quality assurance (SQA)
Though controversial, software testing
is a part of the software quality assurance
(SQA) process.
In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as
documentation, code and systems. They examine and change the software
engineering process itself to reduce the number of faults that end up in the
delivered software: the so-called defect rate. What constitutes an "acceptable defect rate" depends on the nature of the software; a flight simulator video game would have a much higher defect tolerance than software for an actual airplane. Although there are close links with SQA,
testing departments often exist independently, and there may be no SQA function
in some companies. Software testing is a task intended to detect defects in
software by contrasting a computer program's expected results with its actual
results for a given set of inputs. By contrast, QA (quality assurance) is the
implementation of policies and procedures intended to prevent defects from
occurring in the first place.
Testing methods
The box approach
Software testing methods are
traditionally divided into white- and black-box testing. These two approaches
are used to describe the point of view that a test engineer takes when
designing test cases.
White-box testing
Main article: White-box testing
White-box testing
is when the tester has access to the internal data structures and algorithms
including the code that implements these.
Types of white-box testing
The following types of white-box testing exist:
- API testing (application programming interface): testing of the application using public and private APIs
- Code coverage: creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
- Fault injection methods: intentionally introducing faults to gauge the efficacy of testing strategies
Test coverage
White-box
testing methods can also be used to evaluate the completeness of a test suite
that was created with black-box testing methods. This allows the software team
to examine parts of a system that are rarely tested and ensures that the most
important function points
have been tested.
Two common forms of code coverage are:
- Function coverage, which reports on functions executed
- Statement coverage, which reports on the number of lines executed to complete the test
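The difference between the two is easy to see in a toy sketch (the function and test are invented): a single test can achieve 100% function coverage yet leave statements unexecuted:

    def classify(n):
        if n < 0:
            return "negative"   # never executed by the test below
        return "non-negative"

    # This one test calls the function, so function coverage is 100%,
    # but the "negative" branch is never run, so statement coverage
    # is incomplete.
    assert classify(5) == "non-negative"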
Black-box testing
Main article: Black-box testing
Black-box testing
treats the software as a "black box"—without any knowledge of
internal implementation. Black-box testing methods include: equivalence partitioning,
boundary value analysis,
all-pairs testing,
fuzz testing, model-based testing,
exploratory testing
and specification-based testing.
Specification-based
testing: Aims to test the functionality of
software according to the applicable requirements. Thus, the tester inputs data
into, and only sees the output from, the test object. This level of testing
usually requires thorough test cases to be provided to the tester, who then can
simply verify that for a given input, the output value (or behavior), either
"is" or "is not" the same as the expected value specified
in the test case.
Specification-based
testing is necessary, but it is insufficient to guard against certain risks.
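A minimal sketch of specification-based testing (the specification and the values are invented): each test case pairs a given input with the expected output, and the tester merely checks that the actual result matches the expected one:

    # Hypothetical specification: shipping is free for orders of 50 or more.
    def shipping_cost(order_total):
        return 0.0 if order_total >= 50 else 4.99

    # Table of (input, expected output) pairs taken from the specification.
    cases = [(49.99, 4.99), (50.00, 0.0), (120.00, 0.0)]

    for order_total, expected in cases:
        actual = shipping_cost(order_total)
        assert actual == expected, f"{order_total}: got {actual}, expected {expected}"
    print("all specification-based cases pass")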
Advantages
and disadvantages: The black-box tester has no "bonds" with the code, and a tester's perception is very simple: the code must have bugs. Using the principle, "Ask and you shall receive," black-box testers find bugs where programmers do not. On the
other hand, black-box testing has been said to be "like a walk in a dark
labyrinth without a flashlight," because the tester doesn't know how the
software being tested was actually constructed. As a result, there are
situations when (1) a tester writes many test cases to check something that
could have been tested by only one test case, and/or (2) some parts of the
back-end are not tested at all.
Therefore, black-box testing has the
advantage of "an unaffiliated opinion", on the one hand, and the
disadvantage of "blind exploring", on the other.
Grey-box testing
Grey-box testing
(American spelling: gray-box testing) involves having knowledge of
internal data structures and algorithms for purposes of designing tests, while
executing those tests at the user, or black-box, level. The tester is not required to have full access to the software's source code.[25] Manipulating input data and formatting output do not qualify as grey-box,
because the input and output are clearly outside of the "black box"
that we are calling the system under test. This distinction is particularly
important when conducting integration testing
between two modules of code written by two different developers, where only the
interfaces are exposed for test. However, modifying a data repository does
qualify as grey-box, as the user would not normally be able to change the data
outside of the system under test. Grey-box testing may also include reverse engineering to determine, for
instance, boundary values or error messages.
By knowing the underlying concepts of
how the software works, the tester makes better-informed testing choices while
testing the software from outside. Typically, a grey-box tester will be permitted to set up their testing environment, for instance by seeding a database, and can observe the state of the product being tested after performing certain actions. For instance, when testing a database product, the tester may run an SQL query on the database and then observe the database to ensure that the expected changes have been reflected. Grey-box testing implements intelligent
test scenarios, based on limited information. This will particularly apply to
data type handling, exception handling, and so on.
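The database-seeding idea above can be sketched as follows (the schema and function are invented): the tester seeds the database, drives the system through its public interface, and then inspects the stored state directly:

    import sqlite3

    # System under test (invented): records an order in the database.
    def place_order(conn, item, qty):
        conn.execute("INSERT INTO orders (item, qty) VALUES (?, ?)", (item, qty))
        conn.commit()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (item TEXT, qty INTEGER)")  # seed the schema

    place_order(conn, "widget", 3)  # exercise the public interface

    # Grey-box check: look inside the database, not just at the output.
    rows = conn.execute("SELECT item, qty FROM orders").fetchall()
    assert rows == [("widget", 3)]
    print("expected changes were reflected in the database")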
Visual testing
The aim of visual testing is to provide
developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information they require, and the information is expressed clearly.
At the core of visual testing is the
idea that showing someone a problem (or a test failure), rather than just
describing it, greatly increases clarity and understanding. Visual testing
therefore requires the recording of the entire test process – capturing
everything that occurs on the test system in video format. Output videos are
supplemented by real-time tester input via picture-in-a-picture webcam and
audio commentary from microphones.
Visual testing provides a number of advantages. The quality of communication is increased dramatically because testers can show the problem (and the events leading up to it) to the developer, as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence required of a test failure and can instead focus on the cause of the fault and how it should be fixed.
Visual testing is particularly
well-suited for environments that deploy agile
methods in their development of software,
since agile methods require greater communication between testers and
developers and collaboration within small teams.
Ad hoc
testing and exploratory
testing are important methodologies for checking software integrity, because they require less preparation time to implement, while important bugs can be found quickly. In ad hoc testing, where
testing takes place in an improvised, impromptu way, the ability of a test tool
to visually record everything that occurs on a system becomes very important.
Visual testing is gathering recognition
in customer
acceptance and usability testing, because the test can be used by many individuals involved
in the development process. For the customer, it becomes easy to provide
detailed bug reports and feedback, and for program users, visual testing can
record user actions on screen, as well as their voice and image, to provide a
complete picture at the time of software failure for the developer.
Testing levels
Tests are frequently grouped by where
they are added in the software development process, or by the level of
specificity of the test. The main levels during the development process as
defined by the SWEBOK guide are unit-, integration-, and system testing that are
distinguished by the test target without implying a specific process model.
Other test levels are classified by the testing objective.
Test target
Unit testing
Unit testing, also known as component
testing, refers to tests that verify the functionality of a specific section of
code, usually at the function level. In an object-oriented environment, this is
usually at the class level, and the minimal unit tests include the constructors
and destructors.
These types of tests are usually
written by developers as they work on code (white-box style), to ensure that
the specific function is working as expected. One function might have multiple
tests, to catch corner cases or other branches in the code. Unit testing alone cannot
verify the functionality of a piece of software, but rather is used to assure
that the building blocks the software uses work independently of each other.
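A minimal unit test sketch in the developer-written, white-box style described above (the class under test is an invented example):

    import unittest

    # Unit under test (invented).
    class Stack:
        def __init__(self):
            self._items = []
        def push(self, item):
            self._items.append(item)
        def pop(self):
            return self._items.pop()

    class TestStack(unittest.TestCase):
        def test_push_then_pop_returns_last_item(self):
            s = Stack()
            s.push(1)
            s.push(2)
            self.assertEqual(s.pop(), 2)

        def test_pop_on_empty_stack_raises(self):
            # Corner case: one function may have multiple tests.
            self.assertRaises(IndexError, Stack().pop)

    if __name__ == "__main__":
        unittest.main()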
Integration testing
Integration testing is any type of
software testing that seeks to verify the interfaces between components against
a software design. Software components may be integrated in an iterative way or
all together ("big bang"). Normally the former is considered a better
practice since it allows interface issues to be localised more quickly and
fixed.
Integration testing works to expose
defects in the interfaces and interaction between integrated components
(modules). Progressively larger groups of tested software components
corresponding to elements of the architectural design are integrated and tested
until the software works as a system.
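As a hedged sketch (both components are invented), an integration test exercises the interface between two already unit-tested components rather than either component in isolation:

    # Component A (invented): normalizes a user record.
    def format_user(name, age):
        return {"name": name.strip().title(), "age": age}

    # Component B (invented): renders a greeting from a user record.
    def greeting(user):
        return f"Hello, {user['name']} ({user['age']})"

    # Integration test: verifies the interface, i.e. that B accepts
    # exactly what A produces.
    user = format_user("  ada lovelace ", 36)
    assert greeting(user) == "Hello, Ada Lovelace (36)"
    print("components integrate correctly")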
System testing
System testing tests a completely
integrated system to verify that it meets its requirements.
System integration testing
System integration testing verifies that a system is integrated to any external or third-party systems defined in the system requirements.
Objectives of testing
Installation testing
An installation test assures that the system is installed correctly and works on the actual customer's hardware.
Sanity testing
A sanity test determines whether it is reasonable to proceed with further testing.
Regression testing
Regression testing focuses on finding
defects after a major code change has occurred. Specifically, it seeks to
uncover software
regressions, or old bugs that
have come back. Such regressions occur whenever software functionality that was
previously working correctly stops working as intended. Typically, regressions
occur as an unintended
consequence of program changes,
when the newly developed part of the software collides with the previously
existing code. Common methods of regression testing include re-running
previously run tests and checking whether previously fixed faults have
re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. It can range from complete, for changes added late in the release or deemed to be risky, to very shallow, consisting of positive tests on each feature, for changes added early in the release or deemed to be of low risk.
Acceptance testing
Acceptance testing can mean one of two
things:
- A smoke
test is used as an acceptance test
prior to introducing a new build to the main testing process, i.e. before integration or regression.
- Acceptance testing performed by the customer, often in their lab environment on their own hardware, is known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development.
Non-functional testing
Special methods exist to test
non-functional aspects of software. In contrast to functional testing, which
establishes the correct operation of the software (for example that it matches
the expected behavior defined in the design requirements), non-functional
testing verifies that the software functions properly even when it receives invalid
or unexpected inputs. Software fault injection, in the form of fuzzing,
is an example of non-functional testing. Non-functional testing, especially for
software, is designed to establish whether the device under test can tolerate
invalid or unexpected inputs, thereby establishing the robustness of input
validation routines as well as error-management routines. Various commercial
non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software
tools available that perform non-functional testing.
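A toy fuzzing sketch in this spirit (the parser and the input generator are invented): random, often invalid inputs are thrown at the unit to probe the robustness of its input validation and error handling:

    import random
    import string

    # Unit under test (invented): parses a "key=value" pair.
    def parse_pair(text):
        key, sep, value = text.partition("=")
        if not sep or not key:
            raise ValueError(f"malformed pair: {text!r}")
        return key, value

    random.seed(0)  # reproducible fuzz run
    for _ in range(1000):
        junk = "".join(random.choices(string.printable, k=random.randint(0, 20)))
        try:
            parse_pair(junk)
        except ValueError:
            pass  # expected, controlled rejection of invalid input
        # Any other exception escaping here would signal a robustness defect.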
Software performance testing
Performance
testing is in general executed to determine
how a system or sub-system performs in terms of responsiveness and stability
under a particular workload. It can also serve to investigate, measure,
validate or verify other quality attributes of the system, such as scalability,
reliability and resource usage.
Load
testing is primarily
concerned with testing that the system can continue to operate under a specific
load, whether that be large quantities of data or a large number of users.
This is generally referred to as software scalability. The related load testing activity, when performed as a non-functional activity, is often referred to as endurance testing. Volume testing is a way to test functionality. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks to see if the software can continuously function well over an acceptable period.
There is little agreement on what the
specific goals of performance testing are. The terms load testing, performance
testing, reliability testing, and volume testing, are often used
interchangeably.
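For illustration, a crude load-style measurement might look like this (the operation and the workload figure are invented; real performance testing uses dedicated tools and realistic workloads):

    import time

    # Operation under test (invented).
    def handle_request(n):
        return sum(i * i for i in range(n))

    requests = 10_000  # the specific load being applied
    start = time.perf_counter()
    for _ in range(requests):
        handle_request(100)
    elapsed = time.perf_counter() - start
    print(f"{requests} requests in {elapsed:.2f}s "
          f"({requests / elapsed:.0f} requests/second)")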
Usability testing
Usability
testing is needed to check if the user
interface is easy to use and understand. It is concerned mainly with the use of
the application.
Security testing
Security
testing is essential for software that
processes confidential data to prevent system
intrusion by hackers.
Internationalization and localization
The general ability of software to be internationalized and localized
can be automatically tested without actual translation, by using pseudolocalization. It will verify that the application still works, even
after it has been translated into a new language or adapted for a new culture
(such as different currencies or time zones).
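A toy pseudolocalization transform (the accent mapping and padding rule are invented; real tools use richer transformations): strings are mechanically altered so that truncation, hard-coded text, and encoding problems become visible without a real translation:

    # Invented accent mapping for demonstration purposes.
    ACCENTS = str.maketrans("aeiou", "àéîöü")

    def pseudolocalize(text):
        # Bracket the string and pad it roughly 30% longer, so clipped
        # layouts, untranslated strings, and encoding bugs stand out.
        padded = text.translate(ACCENTS) + "~" * max(1, len(text) // 3)
        return f"[{padded}]"

    print(pseudolocalize("Save changes"))  # [Sàvé chàngés~~~~]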
Actual translation to human languages
must be tested, too. Possible localization failures include:
- Software is often localized by
translating a list of strings out of context, and the translator may
choose the wrong translation for an ambiguous source string.
- Technical terminology may become
inconsistent if the project is translated by several people without proper
coordination or if the translator is imprudent.
- Literal word-for-word translations
may sound inappropriate, artificial or too technical in the target
language.
- Untranslated messages in the
original language may be left hard coded in the source code.
- Some messages may be created
automatically at run time and the
resulting string may be ungrammatical, functionally incorrect, misleading
or confusing.
- Software may use a keyboard
shortcut which has no
function on the source language's keyboard layout, but is used for typing
characters in the layout of the target language.
- Software may lack support for the character
encoding of the target
language.
- Fonts and font sizes which are
appropriate in the source language may be inappropriate in the target
language; for example, CJK characters may become unreadable if the font
is too small.
- A string in the target language
may be longer than the software can handle. This may make the string
partly invisible to the user or cause the software to crash or
malfunction.
- Software may lack proper support
for reading or writing bi-directional
text.
- Software may display images with
text that was not localized.
- Localized operating systems may
have differently-named system configuration
files and environment
variables and different formats for date
and currency.
To avoid these and other localization
problems, a tester who knows the target language must run the program with all
the possible use cases for translation to see if the messages are readable,
translated correctly in context and do not cause failures.
Destructive testing
Destructive testing attempts to cause the
software or a sub-system to fail, in order to test its robustness.
The testing process
Traditional CMMI or waterfall development model
A common practice of software testing
is that testing is performed by an independent group of testers after the
functionality is developed, before it is shipped to the customer. This practice
often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.
Another practice is to start software testing at the same moment the project starts and run it as a continuous process until the project finishes.
Agile or Extreme development model
In contrast, some emerging software
disciplines such as extreme
programming and the agile
software development movement, adhere to
a "test-driven
software development" model. In this
process, unit tests are written first, by the software
engineers (often with pair programming in the extreme programming methodology). Of course these tests fail initially, as they are expected to. Then, as code is written, it
passes incrementally larger portions of the test suites. The test suites are
continuously updated as new failure conditions and corner cases are discovered,
and they are integrated with any regression tests that are developed. Unit
tests are maintained along with the rest of the software source code and
generally integrated into the build process (with inherently interactive tests
being relegated to a partially manual build acceptance process). The ultimate
goal of this test process is to achieve continuous
integration where software
updates can be published to the public frequently.[38][39]
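A hedged red-green sketch of this test-first cycle (the function is invented): the unit test is written first and initially fails, then just enough code is written to make it pass:

    import unittest

    # Step 2: just enough code to make the failing test pass.
    def slugify(title):
        return title.strip().lower().replace(" ", "-")

    # Step 1: this test is written first; it fails until slugify() exists.
    class TestSlugify(unittest.TestCase):
        def test_basic_title(self):
            self.assertEqual(slugify(" Hello World "), "hello-world")

    if __name__ == "__main__":
        unittest.main()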
A sample testing cycle
Although variations exist between
organizations, there is a typical cycle for testing.[40]
The sample below is common among organizations employing the Waterfall
development model.
- Requirements
analysis: Testing should
begin in the requirements phase of the software development life cycle. During the design phase, testers
work with developers in determining what aspects of a design are testable
and with what parameters those tests work.
- Test planning: Test strategy, test plan, testbed creation. Since many activities
will be carried out during testing, a plan is needed.
- Test development: Test procedures, test scenarios, test cases, test datasets, test scripts to
use in testing software.
- Test execution: Testers execute the software
based on the plans and test documents then report any errors found to the
development team.
- Test reporting: Once testing is completed,
testers generate metrics and make final reports on their test effort and whether or not the software
tested is ready for release.
- Test result analysis: Or defect analysis, done by the development team usually along with the client, in order to decide which defects should be assigned, fixed, rejected (i.e., the software is found to be working properly), or deferred to be dealt with later.
- Defect Retesting: Once a defect has been dealt with by the development team, it is retested by the testing team. This is also known as resolution testing.
- Regression testing: It is common to have a small
test program built of a subset of tests, for each integration of new,
modified, or fixed software, in order to ensure that the latest delivery
has not ruined anything, and that the software product as a whole is still
working correctly.
- Test Closure: Once the test meets the exit
criteria, the activities such as capturing the key outputs, lessons
learned, results, logs, documents related to the project are archived and
used as a reference for future projects.
Black Box Testing
Black box testing takes an external perspective of the test
object to derive test cases. These tests can be functional or non-functional,
though usually functional. The test designer selects valid and invalid input
and determines the correct output. There is no knowledge of the test object's
internal structure.
This method of test design is
applicable to all levels of software testing: unit, integration, functional testing, system and acceptance. The higher the
level, and hence the bigger and more complex the box, the more one is forced to
use black box testing to simplify. While this method can uncover unimplemented
parts of the specification, one cannot be sure that all existent paths are
tested.
Black Box Testing Example
In this technique, we do not use the
code to determine a test suite; rather, knowing the problem that we're trying
to solve, we come up with four types of test data:
- Easy-to-compute data
- Typical data
- Boundary / extreme data
- Bogus data
For example, suppose we are testing a
function that uses the quadratic formula to determine the two roots of a
second-degree polynomial ax² + bx + c. For
simplicity, assume that we are going to work only with real numbers, and print
an error message if it turns out that the two roots are complex numbers
(numbers involving the square root of a negative number).
We can come up with test data for each
of the four cases, based on values of the polynomial's discriminant (b² − 4ac):
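A sketch of such a suite follows (the root-finder implementation and the chosen values are illustrative); each category of test data exercises a different value of the discriminant:

    import math

    def quadratic_roots(a, b, c):
        # Return the real roots of ax² + bx + c; reject complex roots.
        disc = b * b - 4 * a * c
        if disc < 0:
            raise ValueError("roots are complex numbers")
        r = math.sqrt(disc)
        return ((-b + r) / (2 * a), (-b - r) / (2 * a))

    # Easy-to-compute data: x² - 3x + 2 has obvious roots 2 and 1.
    assert quadratic_roots(1, -3, 2) == (2.0, 1.0)

    # Typical data: positive discriminant that is not a perfect square.
    r1, r2 = quadratic_roots(1, -1, -1)
    assert abs(r1 - 1.618) < 0.001

    # Boundary / extreme data: discriminant exactly zero (repeated root).
    assert quadratic_roots(1, 2, 1) == (-1.0, -1.0)

    # Bogus data: negative discriminant must be rejected.
    try:
        quadratic_roots(1, 0, 1)  # x² + 1 = 0
    except ValueError:
        pass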
BVA and ECP
Boundary Value Analysis
Boundary Value Analysis (BVA) is a test
data selection technique (Functional Testing technique) where the extreme
values are chosen. Boundary values include maximum, minimum, just
inside/outside boundaries, typical values, and error values. The hope is that,
if a system works correctly for these special values then it will work
correctly for all values in between.
- Extends equivalence partitioning
- Test both sides of each boundary
- Look at output boundaries for test cases too
- Test min, min-1, max, max+1, and typical values
- BVA focuses on the boundary of the input space to identify test cases
- The rationale is that errors tend to occur near the extreme values of an input variable
There are two ways to generalize the BVA techniques (a sketch follows this list):
1. By the number of variables: for n variables, BVA yields 4n + 1 test cases.
2. By the kinds of ranges: generalizing ranges depends on the nature or type of the variables.
- NextDate has a variable Month, and the range could be defined as {Jan, Feb, ... Dec}: Min = Jan, Min+1 = Feb, etc.
- Triangle had a declared range of {1, 20,000}.
- Boolean variables have extreme values True and False, but there is no clear choice for the remaining three values.
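The 4n + 1 rule can be sketched as a small generator (variable names and ranges are invented): hold every variable at its nominal value while one variable at a time takes its min, min+1, max−1, and max values, plus the single all-nominal case:

    def bva_cases(ranges):
        # ranges: dict mapping variable name -> (min, max).
        # Yields the 4n + 1 boundary value analysis test cases.
        nominal = {v: (lo + hi) // 2 for v, (lo, hi) in ranges.items()}
        yield dict(nominal)                    # the one all-nominal case
        for var, (lo, hi) in ranges.items():
            for value in (lo, lo + 1, hi - 1, hi):
                case = dict(nominal)
                case[var] = value
                yield case

    cases = list(bva_cases({"day": (1, 31), "month": (1, 12)}))
    print(len(cases))  # 4 * 2 + 1 = 9 test cases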
Advantages of Boundary Value Analysis
1. Robustness testing: boundary value analysis plus values that go beyond the limits (Min-1, Min, Min+1, Nom, Max-1, Max, Max+1)
2. Forces attention to exception handling
3. For strongly typed languages, robust testing results in run-time errors that abort normal execution
Limitations of Boundary Value Analysis
BVA works best when the program is a function of several independent variables that represent bounded physical quantities.
1. Independent variables
- NextDate test cases derived from BVA would be inadequate: focusing on the boundary would not place emphasis on February or leap years.
- Dependencies exist among NextDate's Day, Month and Year.
- Test cases are derived without consideration of the function.
2. Physical quantities
- As an example of physical variables being tested, consider telephone numbers: what faults might be revealed by numbers such as 000-0000, 000-0001, 555-5555, 999-9998, 999-9999?
Equivalence
partitioning is a black box testing method that divides the input domain of a
program into classes of data from which test cases can be derived.
EP can be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
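A small sketch of these guidelines applied to a range condition (the validator and the bounds are invented, echoing question 4 below): one representative value per equivalence class stands in for the entire class:

    # Invented input condition: the value must lie in the range 18..25.
    def is_valid_age(age):
        return 18 <= age <= 25

    # One representative per equivalence class (one valid, two invalid).
    partitions = {
        "invalid, below range": (17, False),
        "valid, within range":  (21, True),
        "invalid, above range": (26, False),
    }

    for name, (value, expected) in partitions.items():
        assert is_valid_age(value) == expected, name
    print("one test per class covers the whole input domain")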
ISTQB question pattern and tips to solve
ISTQB questions are formatted in such a way that the answers look very similar. People often choose the one with which they are more familiar. We should carefully read the question twice or thrice, or more than that if needed, until we are clear about what is being asked in the question.
Now look at the options carefully. The options are chosen to confuse the candidates. To choose the correct answer, we should start eliminating one by one. Go through each option and check whether it is appropriate or not. If you end up selecting more than one option, repeat the above logic for the answers that you selected. This will definitely work.
Before you start with the question papers, please read the material thoroughly. Practice as many papers as possible. This will help a lot because, when we actually solve the papers, we apply the logic that we know.
ISTQB 'Foundation level' sample questions with answers:
1. Designing the test environment set-up and identifying any required infrastructure and tools are a part of which phase
a) Test Implementation and execution
b) Test Analysis and Design
c) Evaluating the Exit Criteria and reporting
d) Test Closure Activities
Evaluating the options:
a) Option a: as the name suggests these activities are part of the actual implementation cycle. So do not fall under set-up
b) Option b: Analysis and design activities come before implementation. The test environment set-up, identifying any required infrastructure and tools are part of this activity.
c) Option c: These are post implementation activities
d) Option d: These are related to closing activities. This is the last activity.
So, the answer is 'B'
2. Test Implementation and execution has which of the following major tasks?
i. Developing and prioritizing test cases, creating test data, writing test procedures and optionally preparing the test harnesses and writing automated test scripts.
ii. Creating the test suite from the test cases for efficient test execution.
iii. Verifying that the test environment has been set up correctly.
iv. Determining the exit criteria.
a) i,ii,iii are true and iv is false
b) i,iii,iv are true and ii is false
c) i,ii are true and iii,iv are false
d) ii,iii,iv are true and i is false
Evaluating the options:
Let's follow a different approach in this case. As can be seen from the above options, determining the exit criteria is definitely not a part of test implementation and execution. So choose the options where (iv) is false. This filters out 'b' and 'd'.
We need to select only from 'a' and 'c'. We only need to analyze option (iii) as (i) and (ii) are marked as true in both the cases. Verification of the test environment is part of the implementation activity. Hence option (iii) is true. This leaves the only option as 'a'.
So, the answer is 'A'
3. A Test Plan Outline contains which of the following:-
i. Test Items
ii. Test Scripts
iii. Test Deliverables
iv. Responsibilities
a) i,ii,iii are true and iv is false
b) i,iii,iv are true and ii is false
c) ii,iii are true and i and iv are false
d) i,ii are false and iii,iv are true
Evaluating the options:
Let's use the approach given in question no. 2. Test scripts are not part of the test plan (this must be clear). So choose the options where (ii) is false. So we end up selecting 'b' and 'd'. Now evaluate the option (i), as option (iii) and (iv) are already given as true in both the cases. Test items are part of the test plan. Test items are the modules or features which will be tested and these will be part of the test plan.
So, the answer is 'B'
4. One of the fields on a form contains a text box which accepts numeric values in the range of 18 to 25. Identify the invalid Equivalence class
a) 17
b) 19
c) 24
d) 21
Evaluating the options:
In this case, first we should identify valid and invalid equivalence classes.
Invalid Class | Valid Class | Invalid Class
Below 18 | 18 to 25 | 26 and above
Option 'a' falls under invalid class. Options 'b', 'c' and 'd' fall under valid class.
So, the answer is 'A'
5. In an examination a candidate has to score a minimum of 24 marks in order to clear the exam. The maximum that he can score is 40 marks. Identify the valid equivalence values if the student clears the exam.
a) 22,23,26
b) 21,39,40
c) 29,30,31
d) 0,15,22
Evaluating the options:
Let's use the approach given in question 4. Identify valid and invalid equivalence classes.
Invalid Class | Valid Class | Invalid Class
Below 24 | 24 to 40 | 41 and above
The question is to identify valid equivalence values. So all the values must be from 'Valid class' only.
a) Option a: all the values are not from valid class
b) Option b: all the values are not from valid class
c) Option c: all the values are from valid class
d) Option d: all the values are not from valid class
So, the answer is 'C'
6. Which of the following statements regarding static testing is false:
a) static testing requires the running of tests through the code
b) static testing includes desk checking
c) static testing includes techniques such as reviews and inspections
d) static testing can give measurements such as cyclomatic complexity
Evaluating the options:
a) Option a: is the false statement. Static testing does not involve running tests through the code; the code is examined, not executed
b) Option b: correct, static testing does include desk checking
c) Option c: correct, it includes reviews and inspections
d) Option d: correct, it can give measurements such as cyclomatic complexity
So, the answer is 'A'
7. Verification involves which of the following:-
i. Helps to check the Quality of the built product
ii. Helps to check that we have built the right product.
iii. Helps in developing the product
iv. Monitoring tool wastage and obsoleteness.
a) Options i,ii,iii,iv are true.
b) i is true and ii,iii,iv are false
c) i,ii,iii are true and iv is false
d) ii is true and i,iii,iv are false.
Evaluating the options:
a) Option a: The quality of the product can be checked only after building it.
Verification is a cycle before completing the product.
b) Option b: Verification checks that we have built the right product.
c) Option c: it does not help in developing the product
d) Option d: it does not involve monitoring activities.
So, the answer is 'B'
8. Component Testing is also known as:
i. Unit Testing
ii. Program Testing
iii. Module Testing
iv. System Component Testing .
a) i,ii,iii are true and iv is false
b) i,ii,iii,iv are false
c) i,ii,iv are true and iii is false
d) all of above is true
Evaluating the options:
a) Option a: correct, component testing is also known as unit testing
b) Option b: not certain (but as all the remaining options indicate this as true, we can conclude that program testing is also known as unit testing)
c) Option c: correct, component testing is also known as module testing
d) Option d: wrong. System component testing comes under system testing.
So, the answer is 'A'
9. Link Testing is also known as:
a) Component Integration testing
b) Component System Testing
c) Component Sub System Testing
d) Maintenance testing
Test Plan
The test strategy identifies multiple test levels, which are going to be performed for the project. Activities at each level must be planned well in advance and formally documented. The individual test levels are carried out based on the individual plans.
The plans are to be
prepared by experienced people only. In all test plans, the ETVX
{Entry-Task-Validation-Exit} criteria are to be mentioned. Entry means the
entry point to that phase. For example, for unit testing, the coding must be complete; only then can one start unit testing. Task is the activity that is
performed. Validation is the way in which the progress and correctness and
compliance are verified for that phase. Exit tells the completion criteria of
that phase, after the validation is done. For example, the exit criterion for
unit testing is all unit test cases must pass.
ETVX is a modeling technique for developing world-level and atomic-level models. It stands for Entry, Task, Verification and Exit. It is a task-based model where the details of each task are explicitly defined in a specification table against each phase, i.e. Entry, Exit, Task, Feedback In, Feedback Out, and measures.
There are two types of cells, unit cells and implementation cells. The implementation cells are basically unit cells containing further tasks.
For example, if there is a task of size estimation, then there will be a unit cell of size estimation. Since this task has further tasks, namely define measures and estimate size, the unit cell containing these further tasks will be referred to as the implementation cell, and a separate table will be constructed for it.
A purpose is also
stated and the viewer of the model may also be defined e.g. top management or
customer.
18.2.1 Unit Test Plan {UTP}
The unit test plan is the overall plan to carry out the unit test activities. The lead tester prepares it, and it is distributed to the individual testers. It contains the following sections.
18.2.1.1 What is to be tested?
The unit test plan
must clearly specify the scope of unit testing. In this, normally the basic
input/output of the units along with their basic functionality will be tested.
In this case mostly the input units will be tested for the format, alignment,
accuracy and the totals. The UTP will clearly give the rules of what data types
are present in the system, their format and their boundary conditions. This
list may not be exhaustive, but it is better to have a complete list of these details.
18.2.1.2 Sequence of Testing
The sequences of test
activities that are to be carried out in this phase are to be listed in this
section. This includes whether to execute positive test cases first or
negative test cases first, to execute test cases based on the priority, to
execute test cases based on test groups, etc. Positive test cases prove that the system performs what it is supposed to do; negative test cases prove that the system does not perform what it is not supposed to do. Testing of the screens, files, database, etc. is to be given in proper sequence.
18.2.1.3 Basic Functionality of Units
This section describes how the independent functionalities of the units are tested, excluding any communication between the unit and other units. The interface part is out of the scope of this test level. Apart from the above sections, the following sections are addressed, very specific to unit testing:
- Unit testing tools
- Priority of program units
- Naming convention for test cases
- Status reporting mechanism
- Regression test approach
- ETVX criteria
18.2.2 Integration Test Plan
The integration test plan is the overall plan for carrying out the activities in the integration test level. It contains the following sections.
18.2.2.1 What is to be tested?
This section clearly specifies which kinds of interfaces fall under the scope of testing: internal and external interfaces, with their requests and responses, are to be explained. This need not go deep into technical details, but the general approach to how the interfaces are triggered is explained.
18.2.2.2 Sequence of Integration
When there are multiple modules present in an application, the sequence in which they are to be integrated will be specified in this section. In this, the dependencies between the modules play a vital role. If a unit B has to be executed, it may need the data that is fed by unit A and unit X. In this case, the units A and X have to be integrated first and then, using that data, the unit B has to be tested. This has to be stated for the whole set of units in the program. Given this correctly, the testing activities will slowly build the product, unit by unit, and then integrate the units.
18.2.2.3 List of Modules and Interface Functions
There may be any number of units in the application, but only the units that are going to communicate with each other are tested in this phase. If the units are designed in such a way that they are mutually independent, then the interfaces do not come into the picture. This is almost impossible in any system, as the units have to communicate with other units in order to get different types of functionalities executed. In this section, we need to list the units and mention for what purpose each talks to the others. This will not go into technical aspects; at a higher level, this has to be explained in plain English.
Apart from the above sections, the following sections are addressed, very specific to integration testing:
- Integration testing tools
- Priority of program interfaces
- Naming convention for test cases
- Status reporting mechanism
- Regression test approach
- ETVX criteria
- Build/refresh criteria (when multiple programs or objects are to be linked to arrive at a single product, and one unit has some modifications, it may be necessary to rebuild the entire product and then load it into the integration test environment; when and how often the product is rebuilt and refreshed is to be mentioned)
18.2.3 System Test Plan {STP}
The system test plan is the overall plan for carrying out the system test level activities. In the system test, apart from testing the functional aspects of the system, there are some special testing activities carried out, such as stress testing, etc. The following are the sections normally present in a system test plan.
18.2.3.1 What is to be tested?
This section defines the
scope of system testing, very specific to the project. Normally, the system
testing is based on the requirements. All requirements are to be verified in
the scope of system testing. This covers the functionality of the product.
Apart from this, any special testing that is performed is also stated here.
18.2.3.2 Functional
Groups and the Sequence
The requirements can
be grouped in terms of the functionality. Based on this, there may be
priorities also among the functional groups. For example, in a banking application,
anything related to customer accounts can be grouped into one area, anything
related to inter-branch transactions may be grouped into another area, etc. In the same way, for the product being tested, these areas are to be mentioned here, and the suggested sequences of testing of these areas, based on the priorities, are to be described.
18.2.3.3 Special Testing Methods
This covers the
different special tests like load/volume testing, stress testing,
interoperability testing, etc. These tests are to be done based on the nature of the product, and it is not mandatory that every one of these special tests be performed for every product.
Apart from the above sections, the following sections are addressed, very specific to system testing:
- System testing tools
- Priority of functional groups
- Naming convention for test cases
- Status reporting mechanism
- Regression test approach
- ETVX criteria
- Build/refresh criteria
18.2.4 Acceptance Test Plan {ATP}
The client performs the acceptance testing at their own site. It will be very similar to the system test performed by the Software Development Unit. Since the client is the one who decides the format and testing methods as part of acceptance testing, there is no specific indication of the way they will carry out the testing, but it will not differ much from the system testing. Assume that all the rules which are applicable to the system test can be applied to acceptance testing as well.
Since this is just one level of testing done by the client for the overall product, it may include test cases covering the unit and integration test level details.