Test Case Documents
Designing good test cases is a
complex art. The complexity comes from three sources:
§ Test cases help us
discover information. Different types of tests are more effective for different
classes of information.
§ Test cases can be
"good" in a variety of ways. No test case will be good in all of
them.
§ People tend to create
test cases according to certain testing styles, such as domain testing or
risk-based testing. Good domain tests are different from good risk-based tests.
What's a test case?
"A test case
specifies the pretest state of the IUT and its environment, the test inputs or
conditions, and the expected result. The expected result specifies what the IUT
should produce from the test inputs. This specification includes messages
generated by the IUT, exceptions, returned values, and resultant state of the
IUT and its environment. Test cases may also specify initial and resulting
conditions for other objects that constitute the IUT and its environment."
What's a
scenario?
A scenario is a hypothetical story, used
to help a person think through a complex problem or system.
Characteristics of Good Scenarios
A scenario
test has five key characteristics. It is (a) a story that is (b) motivating,
(c) credible, (d) complex, and (e) easy to evaluate.
The primary objective of test case design is to derive a set of
tests that have the highest likelihood of discovering defects in the software.
Test cases are designed based on the analysis of requirements, use cases, and
technical specifications, and they should be developed in parallel with the
software development effort.
A test case describes a set of actions to be performed and the
results that are expected. A test case should target specific functionality or
aim to exercise a valid path through a use case. This should include invalid
user actions and illegal inputs that are not necessarily listed in the use
case. How a test case is described depends on several factors, e.g. the number of
test cases, the frequency with which they change, the level of automation
employed, the skill of the testers, the selected testing methodology, staff
turnover, and risk.
The test cases will have a generic
format as below.
Test case ID - The test case id must be unique across the
application
Test case description - The test case description must be very
brief.
Test prerequisite - The test prerequisite clearly describes what
should be present in the system before the test can be executed.
Test Inputs - The test input is nothing but the test data that is
prepared to be fed to the system.
Test steps - The test steps are the step-by-step instructions on
how to carry out the test.
Expected Results - The expected results are the ones that say what
the system must give as output or how the system must react based on the test
steps.
Actual Results - The actual results record the output actually
produced, or how the system actually reacted, for the given inputs.
Pass/Fail - If the expected and actual results match, the test
passes; otherwise it fails.
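The same fields can be captured in a lightweight record for tooling or reporting. Below is a minimal sketch in Python; the class and field names are illustrative, not part of any standard template.

```python
from dataclasses import dataclass

@dataclass
class TestCaseRecord:
    """One row of a test case document (field names are illustrative)."""
    case_id: str             # unique across the application
    description: str         # brief statement of what is being checked
    prerequisite: str        # what must exist before the test can run
    inputs: dict             # test data fed to the system
    steps: list              # step-by-step instructions
    expected_result: str     # what the system must produce or how it must react
    actual_result: str = ""  # filled in during execution
    status: str = ""         # "Pass" if expected and actual results match, else "Fail"

tc = TestCaseRecord(
    case_id="TC-001",
    description="Check for inputting values in the Email field",
    prerequisite="User has access to the Customer Login screen",
    inputs={"Email": "clickme@yahoo.com", "Username": "dave"},
    steps=["Open the Login screen", "Enter the inputs", "Submit"],
    expected_result="Inputs should be accepted",
)
print(tc.case_id, tc.status or "Not executed")
```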
The test cases are
classified into positive and negative test cases. Positive test cases are
designed to prove that the system accepts valid inputs and then processes
them correctly. Suitable techniques to design the positive test cases are
Specification derived tests, Equivalence partitioning and State-transition
testing. The negative test cases are designed to prove that the system rejects
invalid inputs and does not process them.
Suitable techniques to design the negative test cases are Error guessing,
Boundary value analysis, internal boundary value testing and State-transition
testing. The test case details must be very clearly specified, so that a new
person can go through the test cases step by step and execute them.
The test cases will be explained with specific examples in the following
section.
For example, consider an online shopping
application. At the user interface level, the client requests the web server to
display the product details by supplying an Email id and Username. The web server
processes the request and returns a response. For this application we will
design the unit, integration and system test cases.
Figure 6: Web-based application
Unit
Test Cases (UTC)
These are very specific to a particular unit.
The basic functionality of the unit is to be understood based on the
requirements and the design documents. Generally, the design document provides
a lot of information about the functionality of a unit. The design document has
to be referred to before the UTC is written, because it describes how the
system must behave for given inputs.
For example, in the online shopping
application, if the user enters valid Email id and Username values, let us
assume that the design document says the system must display the product
details and insert the Email id and Username into a database table. If the user
enters invalid values, the system must display an appropriate error message and
must not store them in the database.
Figure 7: Snapshot of Login Screen
Test Conditions for the fields in the Login screen
Email - It should be a valid email address in the usual format
(e.g. clickme@yahoo.com).
Username - It should accept only alphabetic characters, no more than
six; numerals and special characters are not allowed.
Test Prerequisite: The user should
have access to the Customer Login screen.
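As a sketch of how these two conditions might be checked in code (the regular expression and the six-character limit are one reading of the rules above; the function names are invented):

```python
import re

# Assumed interpretation of the stated rules:
#   Email    - must look like name@domain.tld (e.g. clickme@yahoo.com)
#   Username - alphabetic characters only, at most six of them
EMAIL_PATTERN = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def is_valid_email(email: str) -> bool:
    return bool(EMAIL_PATTERN.match(email))

def is_valid_username(username: str) -> bool:
    return username.isalpha() and len(username) <= 6

assert is_valid_email("clickme@yahoo.com")
assert not is_valid_email("john26#rediffmail.com")   # '#' instead of '@'
assert is_valid_username("dave")
assert not is_valid_username("Mark24")               # contains digits
```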
Negative
Test Case
Project Name-Online shopping
Version-1.1
Module-Catalog
Test # | Description | Test Inputs | Expected Results | Actual Results | Pass/Fail
1 | Check for inputting values in the Email field | Username=Xavier | Inputs should not be accepted. It should display the message "Enter valid Email". | |
2 | Check for inputting values in the Email field | Email=john26#rediffmail.com, Username=John | Inputs should not be accepted. It should display the message "Enter valid Email". | |
3 | Check for inputting values in the Username field | Username=Mark24 | Inputs should not be accepted. It should display the message "Enter correct Username". | |
Positive
Test Case
Test # | Description | Test Inputs | Expected Results | Actual Results | Pass/Fail
1 | Check for inputting values in the Email field | Email=shan@yahoo.com, Username=dave | Inputs should be accepted. | |
2 | Check for inputting values in the Email field | Email=knki@rediffmail.com, Username=john | Inputs should be accepted. | |
3 | Check for inputting values in the Username field | Email=xav@yahoo.com, Username=mark | Inputs should be accepted. | |
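The positive and negative rows above translate naturally into automated checks. A minimal pytest sketch follows; the validate_login function is a toy stand-in for the real Login screen's validation, written here only so the example is self-contained.

```python
import re
import pytest

def validate_login(email, username):
    """Toy stand-in for the Login screen's validation; returns an error message or None."""
    if not email or not re.match(r"^[\w.+-]+@[\w-]+\.[\w.-]+$", email):
        return "Enter valid Email"
    if not (username.isalpha() and len(username) <= 6):
        return "Enter correct Username"
    return None  # inputs accepted

# (email, username, expected error message or None) - rows taken from the tables above
CASES = [
    ("shan@yahoo.com", "dave", None),
    ("knki@rediffmail.com", "john", None),
    ("xav@yahoo.com", "mark", None),
    (None, "Xavier", "Enter valid Email"),
    ("john26#rediffmail.com", "John", "Enter valid Email"),
    ("xav@yahoo.com", "Mark24", "Enter correct Username"),
]

@pytest.mark.parametrize("email, username, expected_error", CASES)
def test_login_validation(email, username, expected_error):
    assert validate_login(email, username) == expected_error
```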
Integration
Test Cases
Before designing the integration test cases,
the testers should go through the integration test plan; it gives a complete
idea of how to write integration test cases. The main aim of integration test
cases is to test multiple modules together. By executing these test
cases the tester can find errors in the interfaces between the modules.
For example, in online shopping there are
Catalog and Administration modules. In the catalog section the customer can browse
the list of products and buy products online. In the administration module
the admin can enter the product name and information related to it.
Table 3: Integration Test Cases
Test # | Description | Test Inputs | Expected Results | Actual Results | Pass/Fail
1 | Check for Login Screen | Enter values in Email and Username, e.g. Email=shilpa@yahoo.com, Username=shilpa | Inputs should be accepted. | |
  | Backend verification | Select email, username from Cus; | The entered Email and Username should be displayed at the SQL prompt. | |
2 | Check for Product Information | Click the product information link | It should display complete details of the product. | |
3 | Check for admin screen | Enter values in the Product Id and Product name fields, e.g. Product Id=245, Product name=Norton Antivirus | Inputs should be accepted. | |
  | Backend verification | Select pid, pname from Product; | The entered Product Id and Product name should be displayed at the SQL prompt. | |
NOTE:
The tester has to execute the above unit and integration test cases after coding
and fill in the Actual Results and Pass/Fail columns. If a test case fails,
a defect report should be prepared.
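As a hedged sketch of how test 1 above (submit the login form, then verify the Cus table) could be automated, assuming an in-memory SQLite database and a submit_login helper standing in for the web layer:

```python
import sqlite3

def submit_login(conn, email, username):
    """Stand-in for the web layer: in the real test this would drive the
    Login screen, which in turn inserts the row into the Cus table."""
    conn.execute("INSERT INTO Cus (email, username) VALUES (?, ?)", (email, username))
    conn.commit()

def test_login_row_reaches_backend():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE Cus (email TEXT, username TEXT)")

    submit_login(conn, "shilpa@yahoo.com", "shilpa")

    # Backend verification step from the integration test case
    row = conn.execute("SELECT email, username FROM Cus").fetchone()
    assert row == ("shilpa@yahoo.com", "shilpa")

test_login_row_reaches_backend()
print("integration check passed")
```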
System Test Cases
The system test cases are meant to test the
system against the requirements, end-to-end. This is basically to make sure that
the application works as per the SRS. In system test cases (generally in system
testing itself), the testers are supposed to act as end users. So system
test cases normally concentrate on the functionality of the system: inputs
are fed through the system and every check is performed using the
system itself. Normally, verifications done by checking the database tables
directly or running programs manually are not encouraged in the system test.
The system test must focus on functional
groups, rather than identifying the program units. When it comes to system
testing, it is assumed that the interfaces between the modules are working fine
(integration passed).
Ideally the system test cases are a
union of the functionality tested in unit testing and integration
testing. Instead of verifying the system's inputs and outputs through the database or
external programs, everything is tested through the system itself. For example,
in an online shopping application, the catalog and administration screens
(program units) would have been independently unit tested and the test results
verified through the database. In system testing, the tester
mimics an end user and hence checks the application through its outputs.
There are occasions where
some or many of the integration and unit test cases are repeated in system testing,
especially when the units were earlier tested with test stubs rather than with the
other real modules; during system testing those cases are
performed again with real modules and data.
Requirements Traceability Matrix
Ideally, every piece of software developed must satisfy its set of
requirements completely. So, right from design, each requirement must be
addressed in every single document in the software process. The documents
include the HLD, LLD, source code, unit test cases, integration test cases and
the system test cases. Refer to the following sample table, which describes the
Requirements Traceability Matrix process. In this matrix, the rows hold
the requirements. For every document (HLD, LLD, etc.) there is a separate
column. So, in every cell we state which section of that document addresses a
particular requirement. Ideally, if every requirement is addressed in every
single document, all of the individual cells will have valid section ids or names
filled in; then we know that every requirement is addressed. If a requirement
is missed, we need to go back to the document and correct it, so
that it addresses the requirement.
For testing at each level, we may have to address the
requirements. A single integration or system test case may address multiple
requirements.
Requirement | DTP Scenario No | DTC Id | Code | LLD Section
Requirement 1 | +ve/-ve | 1,2,3,4 | |
Requirement 2 | +ve/-ve | 1,2,3,4 | |
Requirement 3 | +ve/-ve | 1,2,3,4 | |
Requirement 4 | +ve/-ve | 1,2,3,4 | |
… | … | … | |
Requirement N | +ve/-ve | 1,2,3,4 | |
(Filled in by) | TESTER | TESTER | DEVELOPER | TEST LEAD
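Because every cell is supposed to end up non-empty, the matrix lends itself to a mechanical completeness check. A small sketch, assuming the matrix is kept as a Python dictionary keyed by requirement (the sample entries are invented):

```python
# Requirements Traceability Matrix as a dictionary; empty strings mark gaps.
COLUMNS = ["DTP Scenario No", "DTC Id", "Code", "LLD Section"]

rtm = {
    "Requirement 1": {"DTP Scenario No": "+ve/-ve", "DTC Id": "1,2,3,4", "Code": "", "LLD Section": "4.1"},
    "Requirement 2": {"DTP Scenario No": "+ve/-ve", "DTC Id": "1,2,3,4", "Code": "catalog.c", "LLD Section": ""},
}

def missing_cells(matrix):
    """Return (requirement, column) pairs that no document section covers yet."""
    return [(req, col) for req, row in matrix.items()
            for col in COLUMNS if not row.get(col)]

for req, col in missing_cells(rtm):
    print(f"{req}: no entry for '{col}'")
```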
Understanding Agile Testing
The concept of Agile testing
rests on the values of the Agile Alliance (the Agile Manifesto), which states:
"We have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the
items on the right, we value the items on the left more."
What is Agile testing?
1) Agile testers treat the
developers as their customer and follow the agile manifesto. The context-driven testing principles
(explained later) act as a set of principles for the agile tester.
2) Alternatively, it can be treated as the testing
methodology followed by the testing team when the entire project follows agile
methodologies. (If so, what is the role of a tester in such a fast-paced
methodology?)
Traditional QA seems
to be totally at loggerheads with the Agile manifesto in the following regards:
·
Process and tools are a key part of QA and testing.
·
QA people seem to love documentation.
·
QA people want to see the written specification.
·
And where is testing without a PLAN?
So the question arises: is there a
role for QA in Agile projects?
The answer is maybe, but the roles
and tasks are different.
In the first definition of Agile
testing we described it as one following the Context driven principles.
The context driven principles which
are guidelines for the agile tester are:
1. The value of any practice
depends on its context.
2. There are
good practices in context, but there are no best practices.
3. People, working together, are
the most important part of any project's context.
4. Projects
unfold over time in ways that are often not predictable.
5. The product is a solution. If
the problem isn't solved, the product doesn't work.
6. Good software testing is a challenging
intellectual process.
7. Only through judgment and skill,
exercised cooperatively throughout the entire project, are we able to do the
right things at the right times to effectively test our products.
In the second definition we
described Agile testing as a testing methodology adopted when an entire project
follows Agile (development) Methodology. We shall have a look at the Agile
development methodologies being practiced currently:
Agile Development Methodologies
·
Extreme Programming (XP)
·
Crystal
·
Adaptive Software Development (ASD)
·
Scrum
·
Feature Driven Development (FDD)
·
Dynamic Systems Development Method (DSDM)
·
Xbreed
In
a fast paced environment such as in Agile development the question then arises
as to what is the "Role" of testing?
Testing
is as relevant in an Agile scenario as in a traditional software
development scenario, if not more so.
Testing
is the headlight of the agile project,
showing where the project stands now and the direction in which it is headed.
Testing
provides the required and relevant information to the teams to take informed
and precise decisions.
The
testers in agile frameworks get involved in much more than finding
"software bugs": anything that can "bug" the potential user is an issue for them. But testers don't
make the final call; the entire team discusses a potential issue and takes a
decision on it.
A
firm belief of Agile practitioners is that no testing approach by itself assures
quality; it's the team that does (or doesn't), so there is a heavy
emphasis on the skill and attitude of the people involved.
Agile
Testing is not a game of "gotcha", it's about finding ways to set
goals rather than focus on mistakes.
Among these Agile methodologies
mentioned we shall look at XP (Extreme Programming) in detail, as this is the
most commonly used and popular one.
The basic components of the XP practices are:
·
Test-First Programming
·
Pair Programming
·
Short Iterations & Releases
·
Refactoring
·
User Stories
·
Acceptance Testing
We
shall discuss these factors in detail.
Test-First Programming
§ Developers write unit tests before coding. It has been
noted that this kind of approach motivates the coding, speeds it up,
and results in better designs (with less coupling and more
cohesion).
§ It supports a practice called Refactoring (discussed
later on).
§ Agile practitioners prefer Tests (code) to Text (written
documents) for describing system behavior. Tests are more precise than human
language and they are also a lot more likely to be updated when the design
changes. How many times have you seen design documents that no longer
accurately described the current workings of the software? Out-of-date design
documents look pretty much like up-to-date documents. Out-of-date tests fail.
§ Many open source tools like xUnit have been developed
to support this methodology.
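As a small illustration of the test-first style using Python's xUnit-family unittest module (the discount function and its rule are invented for the example; in real test-first work the tests are written before the function exists):

```python
import unittest

def sales_discount(order_total):
    """Written *after* the tests below, just far enough to make them pass."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return 0.10 * order_total if order_total >= 1000 else 0.0

class SalesDiscountTest(unittest.TestCase):
    # These tests are written first and fail until sales_discount exists.
    def test_no_discount_below_threshold(self):
        self.assertEqual(sales_discount(999), 0.0)

    def test_ten_percent_at_threshold(self):
        self.assertAlmostEqual(sales_discount(1000), 100.0)

    def test_negative_total_rejected(self):
        with self.assertRaises(ValueError):
            sales_discount(-1)

if __name__ == "__main__":
    unittest.main()
```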
Refactoring
§ Refactoring is the practice of changing a software system
in such a way that it does not alter the external behavior of the code yet
improves its internal structure.
§ Traditional development tries to understand how all
the code will work together in advance. This is the design. With agile methods,
this difficult process of imagining what code might look like before it is
written is avoided. Instead, the code is restructured as needed to maintain a
coherent design. Frequent refactoring allows less up-front planning of design.
§ Agile methods replace high-level design with frequent
redesign (refactoring). But successful refactoring also requires a way of
checking that the behavior wasn't inadvertently changed.
That's where the tests come in.
Make the simplest design that will work, add
complexity only when needed, and refactor as necessary.
§ Refactoring requires unit tests to ensure that design
changes (refactorings) don't break existing code.
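A toy before/after sketch of the idea: the external behavior, and therefore the guarding test, stays the same while the internal structure improves. The shipping-cost rule is invented for illustration.

```python
# Before refactoring: the rate lookup is buried in duplicated conditional logic.
def shipping_cost_v1(weight_kg, express):
    if express:
        if weight_kg <= 1:
            return 10.0
        return 10.0 + (weight_kg - 1) * 4.0
    if weight_kg <= 1:
        return 5.0
    return 5.0 + (weight_kg - 1) * 2.0

# After refactoring: the duplicated structure is extracted into one helper.
def _tiered(weight_kg, base, per_extra_kg):
    return base + max(weight_kg - 1, 0) * per_extra_kg

def shipping_cost_v2(weight_kg, express):
    return _tiered(weight_kg, 10.0, 4.0) if express else _tiered(weight_kg, 5.0, 2.0)

# The same unit checks guard both versions, showing the behavior did not change.
for w, e in [(0.5, False), (3, False), (0.5, True), (3, True)]:
    assert shipping_cost_v1(w, e) == shipping_cost_v2(w, e)
```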
Acceptance Testing
§ Make up user experiences or User stories, which are
short descriptions of the features to be coded.
§ Acceptance tests verify the completion of user
stories.
§ Ideally they are written before coding.
With all these features and processes
in place, we can define a practice for Agile testing encompassing the following:
·
Conversational Test Creation
·
Coaching Tests
·
Providing Test Interfaces
·
Exploratory Learning
Looking
deep into each of these practices we can describe each of them as:
Conversational
Test Creation
§ Test case writing should be a collaborative activity
including the majority of the team. As the customers will be busy, we should
have someone representing the customer.
§ Defining tests is a key activity that should include
programmers and customer representatives.
§ Don't do it alone.
Coaching Tests
§ A way of thinking about Acceptance Tests.
§ Turn user stories into tests.
§ Tests should provide Goals and guidance, Instant
feedback and Progress measurement
§ Tests should be specified in a format that is clear
enough for users/customers to understand and specific enough to be
executed.
§ Specification should be done by example.
Providing Test Interfaces
§ Developers are responsible for providing the fixtures
that automate coaching tests
§ In most cases XP teams are adding test interfaces to
their products, rather than using external test tools
Exploratory Learning
§ Plan to explore, learn and understand the product with
each iteration.
§ Look for bugs, missing features and opportunities for
improvement.
§ We don't understand software until we have used it.
We believe
that Agile Testing is a major step forward. You may disagree, but regardless,
Agile programming is the wave of the future. These practices will develop and
some of the extreme edges may be worn off, but it is only growing in influence
and attraction. Some testers may not like it, but those who
don't figure out how to live with it are simply going to be left behind.
Some testers are still upset that
they don't have the authority to block the release. Do they think that they now
have the authority to block the adoption of these new development methods?
They'll need to get on this ship if they want to try to keep it from the
shoals. Stay on the dock if you wish. Bon voyage!
Role of test lead
- Understand the system requirements
completely.
- Initiate the preparation of test
plan for the beta phase.
Role of the tester
- To provide input while there is
still time to make significant changes as the design evolves.
- Report errors to developers.
- Try installing on a system with a non-compliant
configuration, such as less memory / RAM / HDD.
Role of a Test Lead
·
Provide Test Instruction Sheet that describes items such as testing objectives, steps to
follow, data to enter, functions to invoke.
·
Provide feedback forms and comments.
Role of a tester
·
Understand the software requirements
and the testing objectives.
·
Carry out the test cases
Regression Testing
Regression testing as the name suggests is used to test /
check the effect of changes made in the code.
Most of the time the testing team
is asked to check last-minute changes in the code just before making a release
to the client; in this situation the testing team needs to check only the
affected areas.
So, in short, for regression
testing the testing team should get input from the development team about
the nature and amount of change in the fix, so that the testing team can first check
the fix and then the side effects of the fix.
In my present organization we
faced the same problem, so we made a regression bucket (a simple Excel
sheet containing the test cases that we think assure us of the bare minimum
functionality); this bucket is run every time before a release.
In fact, regression testing is
the testing in which the most automation can be done, since the same
set of test cases will be run on different builds multiple times.
But the extent of automation depends on whether the
test cases will remain applicable over time; if the automated test
cases do not remain applicable for long, test engineers will
end up wasting time on automation without getting enough out of it.
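The "regression bucket" idea carries over directly to an automated suite: keep the bare-minimum checks in one place and run the whole set before every release. A minimal pytest sketch, with invented stand-ins for the real application calls:

```python
# Stand-ins for the real application entry points exercised by the bucket.
def login(email, username):
    return bool(email) and "@" in email and username.isalpha()

def add_to_cart(cart, product_id):
    cart.append(product_id)
    return cart

# Regression bucket: the bare-minimum functionality checked before each release.
def test_login_still_accepts_valid_user():
    assert login("shan@yahoo.com", "dave")

def test_login_still_rejects_malformed_email():
    assert not login("john26#rediffmail.com", "John")

def test_cart_still_accepts_a_product():
    assert add_to_cart([], 245) == [245]
```

Running pytest on this file before each build replays the bucket unchanged, which is exactly the repetition that makes regression testing the strongest candidate for automation.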
- What is Regression testing?
Regression Testing
is retesting unchanged segments of the application. It involves rerunning tests
that have been previously executed to ensure that the same results can be
achieved currently as were achieved when the segment was last tested.
The selective
retesting of a software
system that has been modified, to ensure that any bugs have been fixed,
that no other previously working functions have failed as a result of the
repairs, and that newly added features have not created problems with
previous versions of the software. Also referred to as verification testing,
regression testing is initiated after a programmer has
attempted to fix a recognized problem or has added source code to a
program that may have inadvertently introduced errors. It is a quality control
measure to ensure that the newly modified code still complies with its
specified requirements and that unmodified code has not been affected by the
maintenance activity.
- What do you do during Regression testing?
- Rerunning of previously conducted tests
- Reviewing previously prepared manual procedures
- Comparing the current test results with the previously executed test results
- What are the tools available for Regression testing?
Although the
process is simple, i.e. the test cases that have already been prepared can be used
and the expected results are known, if the process is not automated it can be
a very time-consuming and tedious operation.
Some of the tools
available for regression testing are:
Record and
Playback tools – Here the previously executed scripts can be rerun to verify
whether the same set of results are obtained. E.g. Rational Robot
- What are the end goals of Regression testing?
- To ensure that the unchanged system segments function properly
- To ensure that the previously prepared manual procedures remain correct after the changes have been made to the application system
- To verify that the data dictionary of data elements that have been changed is correct
Stress Testing, Performance Testing and Load Testing
Stress
Testing
Stress testing executes a system in a manner that demands
resources in abnormal quantity, frequency, or volume. The following types of
tests may be conducted during stress testing:
· Special tests may be designed that generate ten interrupts per second, when one or two
is the average rate.
· Input data rates may be increased by an order of magnitude to determine how input
functions will respond.
· Test cases that require maximum memory or other resources.
· Test cases that may cause excessive hunting for disk-resident data.
· Test cases that may cause thrashing in a virtual operating system.
Performance
Testing
Performance testing of a Web site is basically the process
of understanding how the Web application and its operating environment respond
at various user load levels. In general, we want to measure the Response Time, Throughput, and Utilization of the Web site while
simulating attempts by virtual users to simultaneously access the site. One of
the main objectives of performance testing is to maintain a Web site with low
response time, high throughput, and low utilization.
Response Time
Response Time is the delay experienced when a request is
made to the server and the server's response to the client is received. It is
usually measured in units of time, such as seconds or milliseconds. Generally
speaking, Response Time increases as the inverse of unutilized capacity. It
increases slowly at low levels of user load, but increases rapidly as capacity
is utilized. Figure 1 demonstrates such typical characteristics of Response
Time versus user load.
Figure 1. Typical characteristics of latency versus user load
The sudden increase in response time is often caused by the
maximum utilization of one or more system resources. For example, most Web
servers can be configured to start up a fixed number of threads to handle
concurrent user requests. If the number of concurrent requests is greater than
the number of threads available, any incoming requests will be placed in a
queue and will wait for their turn to be processed. Any time spent in a queue
naturally adds extra wait time to the overall Response Time.
To better understand what Response Time means in a typical
Web farm, we can divide response time into many segments and categorize these
segments into two major types: network response time and application response
time. Network response time refers to the time it takes for data to travel from
one server to another. Application response time is the time required for data
to be processed within a server. Figure 2 shows the different response time in
the entire process of a typical Web request.
Figure 2. The different response times in the entire process of a typical Web request
Total Response Time = (N1 + N2 + N3 + N4) + (A1 + A2 + A3),
where Nx represents the network Response Time and Ax
represents the application Response Time.
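As a quick worked example of the formula (the millisecond values are invented):

```python
# Hypothetical round-trip measurements, in milliseconds
network = {"N1": 120, "N2": 4, "N3": 4, "N4": 120}   # client<->farm and farm-internal hops
application = {"A1": 30, "A2": 80, "A3": 25}         # web, business and data tiers

total_response_time = sum(network.values()) + sum(application.values())
print(total_response_time, "ms")   # 383 ms: dominated by N1 and N4, as the text notes
```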
In general, the Response Time is mainly
constrained by N1 and N4. This Response Time represents the method your clients
are using to access the Internet. In the most common scenario, e-commerce
clients access the Internet using relatively slow dial-up connections. Once
Internet access is achieved, a client's request will spend an indeterminate
amount of time in the Internet cloud shown in Figure 2 as requests and
responses are funneled from router to router across the Internet.
To reduce these network Response Times (N1 and
N4), one common solution is to move the servers and/or Web contents closer to
the clients. This can be achieved by hosting your farm of servers or
replicating your Web contents with major Internet hosting providers who have
redundant high-speed connections to major public and private Internet exchange
points, thus reducing the number of network routing hops between the clients
and the servers.
Network Response Times N2 and N3 usually depend
on the performance of the switching equipment in the server farm. When traffic
to the back-end database grows, consider upgrading the switches and network
adapters to boost performance.
Reducing application Response Times (A1, A2, and
A3) is an art form unto itself because the complexity of server applications
can make analyzing performance data and performance tuning quite challenging.
Typically, multiple software components interact on the server to service a
given request. Response time can be introduced by any of the components. That
said, there are ways you can approach the problem:
·
First, your application design should minimize
round trips wherever possible. Multiple round trips (client to server or
application to database) multiply transmission and resource acquisition
Response time. Use a single round trip wherever possible.
·
You can optimize many server components to
improve performance for your configuration. Database tuning is one of the most
important areas on which to focus. Optimize stored procedures and indexes.
·
Look for contention among threads or components
competing for common resources. There are several methods you can use to
identify contention bottlenecks. Depending on the specific problem, eliminating
a resource contention bottleneck may involve restructuring your code, applying
service packs, or upgrading components on your server. Not all resource
contention problems can be completely eliminated, but you should strive to
reduce them wherever possible. They can become bottlenecks for the entire
system.
·
Finally, to increase capacity, you may want to
upgrade the server hardware (scaling up), if system resources such as CPU or
memory are stretched out and have become the bottleneck. Using multiple servers
as a cluster (scaling out) may help to lessen the load on an individual server,
thus improving system performance and reducing application latencies.
Throughput
Throughput refers to the number of client requests processed
within a certain unit of time. Typically, the unit of measurement is requests
per second or pages per second. From a marketing perspective, throughput may
also be measured in terms of visitors per day or page views per day, although
smaller time units are more useful for performance testing because applications
typically see peak loads of several times the average load in a day.
As one of the most useful metrics, the throughput of a Web
site is often measured and analyzed at different stages of the design, development,
and deployment cycle. For example, in the process of capacity planning, throughput
is one of the key parameters for determining the hardware and system
requirements of a Web site. Throughput also plays an important role in
identifying performance bottlenecks and improving application and system
performance. Whether a Web farm uses a single server or multiple servers,
throughput statistics show similar characteristics in reactions to various user
load levels. Figure 3 demonstrates such typical characteristics of throughput
versus user load.
Figure 3. Typical
characteristics of throughput versus user load
As Figure 3 illustrates, the throughput of a typical Web
site increases proportionally at the initial stages of increasing load.
However, due to limited system resources, throughput cannot be increased
indefinitely. It will eventually reach a peak, and the overall performance of
the site will start degrading with increased load. Maximum throughput,
illustrated by the peak of the graph in Figure 3, is the maximum number of user
requests that can be supported concurrently by the site in the given unit of
time.
Note that it is sometimes confusing to compare the
throughput metrics for your Web site to the published metrics of other sites.
The value of maximum throughput varies from site to site. It mainly depends on
the complexity of the application. For example, a Web site consisting largely
of static HTML pages may be able to serve many more requests per second than a
site serving dynamic pages. As with any statistic, throughput metrics can be
manipulated by selectively ignoring some of the data. For example, in your
measurements, you may have included separate data for all the supporting files
on a page, such as graphic files. Another site's published measurements might
consider the overall page as one unit. As a result, throughput values are most
useful for comparisons within the same site, using a common measuring
methodology and set of metrics.
In many ways, throughput and Response
time are related, as different approaches to thinking about the same problem.
In general, sites with high latency will have low throughput. If you want to
improve your throughput, you should analyze the same criteria as you would to
reduce latency. Also, measurement of throughput without consideration of latency
is misleading because latency often rises under load before throughput peaks.
This means that peak throughput may occur at a latency that is unacceptable
from an application usability standpoint. This suggests that performance
reports should include a cut-off value for Response Time, such as: 250 requests/second
@ 5 seconds maximum Response Time.
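Such a cut-off is easy to apply mechanically to load-test output. The sketch below, using invented per-level results, reports the highest throughput achieved while Response Time stays within the 5-second limit:

```python
# (concurrent users, requests/second, average response time in seconds) - invented sample data
results = [
    (50, 60, 0.8),
    (100, 118, 1.4),
    (250, 240, 3.9),
    (500, 262, 7.5),   # throughput still rising, but latency is now unacceptable
]

MAX_RESPONSE_TIME = 5.0   # seconds, the cut-off from the report format above

acceptable = [r for r in results if r[2] <= MAX_RESPONSE_TIME]
users, throughput, latency = max(acceptable, key=lambda r: r[1])
print(f"{throughput} requests/second @ {latency} s with {users} users")
```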
Utilization
Utilization refers to the usage level of different system
resources, such as the server's CPU(s), memory, network bandwidth, and so
forth. It is usually measured as a percentage of the maximum available level of
the specific resource. Utilization versus user load for a Web server typically
produces a curve, as shown in Figure 4.
Figure
4. Typical characteristics of utilization versus user load
As Figure 4 illustrates, utilization usually increases
proportionally to increasing user load. However, it will top off and remain
roughly constant as the load continues to build up.
If the specific system resource tops off at 100-percent
utilization, it's very likely that this resource has become the performance
bottleneck of the site. Upgrading the resource with higher capacity would allow
greater throughput and lower latency—thus better performance. If the measured
resource does not top off close to 100-percent utilization, it is probably
because one or more of the other system resources have already reached their
maximum usage levels. They have become the performance bottleneck of the site.
To locate the bottleneck, you may need to go through a long
and painstaking process of running performance tests against each of the
suspected resources, and then verifying if performance is improved by
increasing the capacity of the resource. In many cases, performance of the site
will start deteriorating to an unacceptable level well before the major system
resources, such as CPU and memory, are maximized. For example, Figure 5
illustrates a case where response time rises sharply to 45 seconds when CPU
utilization has reached only 60 percent.
Figure
5. An example of Response Time versus utilization
As Figure 5 demonstrates, monitoring
the CPU or memory utilization alone may not always indicate the true capacity
level of the server farm with acceptable performance.
While most traditional applications are designed to respond
to a single user at any time, most Web applications are expected to support a
wide range of concurrent users, from a dozen to a couple thousand or more. As a
result, performance testing has become a critical component in the process of
deploying a Web application. It has proven to be most useful in (but not
limited to) the following areas:
· Capacity
planning
· Bug
fixing
Capacity Planning
How do you know if your server configuration is sufficient
to support two million visitors per day with average response time of less than
five seconds? If your company is projecting a business growth of 200 percent
over the next two months, how do you know if you need to upgrade your server or
add more servers to the Web farm? Can your server and application support a
six-fold traffic increase during the Christmas shopping season?
Capacity planning is about being prepared. You need to set
the hardware and software requirements of your application so that you'll have
sufficient capacity to meet anticipated and unanticipated user load.
One approach in capacity planning is to load-test your
application in a testing (staging) server farm. By simulating different load
levels on the farm using a Web application performance testing tool such as
WAS, you can collect and analyze the test results to better understand the
performance characteristics of the application. Performance charts such as
those shown in Figures 1, 3, and 4 can then be generated to show the expected
Response Time, throughput, and utilization at these load levels.
In addition, you may also want to test the scalability of
your application with different hardware configurations. For example, load
testing your application on servers with one, two, and four CPUs respectively
would help to determine how well the application scales with symmetric
multiprocessor (SMP) servers. Likewise, you should load test your application
with different numbers of clustered servers to confirm that your application
scales well in a cluster environment.
Although performance testing is as important as functional
testing, it's often overlooked. Since the requirements for the
performance of the system are not as straightforward as its
functionality, getting performance right is more difficult.
The effort of performance testing is addressed in two ways:
- Load testing
- Stress testing
Load testing
Load testing is a much used industry term for the effort of
performance testing. Here load means the number of users or the traffic for the
system. Load testing is defined as the testing to determine whether the system
is capable of handling anticipated number of users or not.
In load testing, the virtual users are simulated to exhibit
real user behavior as much as possible. Even user think time, i.e. the time
users take to think before entering data, is emulated.
It is carried out to verify whether the system performs well for the
specified limit of load.
For example, let us say an online shopping application
anticipates 1000 concurrent user hits at the peak period. In addition, the peak
period is expected to last for 12 hrs. Then the system is load tested with 1000
virtual users for 12 hrs. These kinds of tests are carried out in levels: first
1 user, then 50 users, 100 users, 250 users, 500 users and so on until the
anticipated limit is reached. The testing effort stops exactly at 1000
concurrent users.
The objective of load testing is to check whether the system
performs well for the specified load. The system may be capable of accommodating
more than 1000 concurrent users, but validating that is not within the scope of
load testing. No attempt is made to determine how many more concurrent users
the system is capable of servicing. Table 1 illustrates the example specified.
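A simplified sketch of that stepped ramp-up, using threads as virtual users and a placeholder session function; a real load test would use a dedicated tool and hold each level for far longer:

```python
import threading
import time

LOAD_LEVELS = [1, 50, 100, 250, 500, 1000]   # the anticipated limit is 1000 users

def do_one_user_session():
    """Stand-in for one virtual user's scripted visit, including think time."""
    time.sleep(0.01)   # pretend think time plus request/response

def run_level(concurrent_users):
    start = time.time()
    threads = [threading.Thread(target=do_one_user_session) for _ in range(concurrent_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

for users in LOAD_LEVELS:
    elapsed = run_level(users)
    print(f"{users:5d} concurrent users -> level completed in {elapsed:.2f} s")
```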
Stress testing
Stress testing is another industry term for performance
testing. Though load testing and stress testing are often used synonymously for
performance-related efforts, their goals are different.
Unlike load testing where testing is conducted for specified
number of users, stress testing is conducted for the number of concurrent users
beyond the specified limit. The objective is to identify the maximum number of
users the system can handle before breaking down or degrading drastically.
Since the aim is to put more stress on system, think time of the user is
ignored and the system is exposed to excess load. The goals of load and stress
testing are listed in Table 2. Refer to table 3 for the inference drawn through
the Performance Testing Efforts.
Let us take the same example of online shopping application
to illustrate the objective of stress testing. It determines the maximum number
of concurrent users an online system can service which can be beyond 1000 users
(specified limit). However, there is a possibility that the maximum load that
can be handled by the system may be found to be the same as the anticipated limit.
Table 1 illustrates the example specified.
Stress testing also determines the behavior of the system as
the user base increases. It checks whether the system is going to degrade
gracefully or crash suddenly when the load goes beyond the specified limit.
Table 1: Load and stress testing of the illustrative example

Types of Testing | Number of Concurrent Users | Duration
Load Testing | 1 User → 50 Users → 100 Users → 250 Users → 500 Users → … → 1000 Users | 12 Hours
Stress Testing | 1 User → 50 Users → 100 Users → 250 Users → 500 Users → … → 1000 Users → Beyond 1000 Users → … → Maximum Users | 12 Hours
Table 2: Goals of load and stress testing

Types of Testing | Goals
Load Testing | To check whether the system performs well for the specified load
Stress Testing | To identify the maximum number of users the system can handle before breaking down or degrading drastically
Table 3: Inference drawn by load and stress testing

Type of Testing | Inference
Load Testing | Is the system available? If yes, is it stable?
Stress Testing | Is the system available? If yes, is it stable? If yes, is it moving towards an unstable state? When will the system break down or degrade drastically?
Usability Testing
Usability is the degree to which a user can easily
learn and use a product to achieve a goal. Usability testing is system
testing that attempts to find any human-factor problems. A
simpler description is testing the software from a user's point of view.
Essentially it means testing software to prove/ensure that it is user-friendly,
as distinct from testing the functionality of the software. In practical terms
it includes ergonomic considerations, screen design, standardization, etc.
The idea behind usability testing is to have actual users
perform the tasks for which the product was designed. If they can't do the
tasks or if they have difficulty performing the tasks, the UI is not adequate
and should be redesigned. It should be remembered that usability testing is
just one of the many techniques that serve as a basis for evaluating the UI in
a user-centered approach. Other techniques for evaluating a UI include
inspection methods such as heuristic evaluations, expert reviews, card-sorting,
matching test or Icon intuitiveness evaluation, cognitive walkthroughs.
Confusion regarding usage of the term can be avoided if we use 'usability evaluation' as the generic term and
reserve 'usability testing' for the specific evaluation method
based on user performance. Heuristic evaluation, usability inspection and
cognitive walkthroughs do not involve real users.
It often involves building prototypes of parts of the user
interface, having representative users perform representative tasks and seeing
if the appropriate users can perform the tasks. In other techniques such as the
inspection methods, it is not performance, but someone's opinion of how users
might perform that is offered as evidence that the UI is acceptable or not.
This distinction between performance
and opinion about performance is crucial. Opinions are subjective.
Whether a sample of users can accomplish what they want or not is objective.
Under many circumstances it is more useful to find out if users can do what
they want to do rather than asking someone.
Performing the
test
- Get a person who fits the user profile. Make sure that
you are not getting someone who has worked on it.
- Sit them down in front of a computer, give them the
application, and tell them a small scenario, like: "Thank you for volunteering
to help make it easier for users to find what they are looking for. We would
like you to answer several questions. There are no right or wrong answers.
What we want to learn is why you make the choices you do, what is
confusing, why you choose one thing and not another, etc. Just talk us
through your search and let us know what you are thinking. We have a
recorder which is going to capture what you say, so you will have to tell
us what you are clicking on as you also tell us what you are thinking.
Also, think aloud when you are stuck somewhere."
- Now don't speak anything. Sounds easy, but see if you
actually can shut up.
- Watch them use the application. If they ask you
something, tell them you're not there. Then shut up again.
- Start noting all the things you will have to change.
- Afterwards, ask them what they thought and note it
down.
Once the whole thing is done thank the volunteer.
End goals of
Usability Testing
To summarize the goals, it can be said that it makes the
software more user friendly. The end result will be:
- Better quality software.
- Software is easier to use.
- Software is more readily accepted by users.
- Shortens the learning curve for new users.
Integration Testing
Integration testing is a systematic technique for
constructing the program structure while at the same time conducting tests to
uncover errors associated with interfacing. The objective is to take unit
tested components and build a program structure that has been dictated by
design.
Usually, the following methods of Integration testing are
followed:
1.
Top-down Integration approach.
2. Bottom-up Integration approach.
Top-Down Integration
Top-down integration testing is an incremental approach to
construction of program structure. Modules are integrated by moving downward
through the control hierarchy, beginning with the main control module. Modules
subordinate to the main control module are incorporated into the structure in
either a depth-first or breadth-first manner.
The integration process is performed in a series of five steps:
- The main control module is used as a test driver and stubs are substituted for all
components directly subordinate to the main control module.
- Depending on the integration approach selected, subordinate stubs are replaced one at a time
with actual components.
- Tests are conducted as each component is integrated.
- On completion of each set of tests, another stub is replaced with the real component.
- Regression testing may be conducted to ensure that new errors have not been
introduced.
Bottom-Up Integration
Bottom-up integration testing begins construction and
testing with atomic modules (i.e. components at the lowest levels in the
program structure). Because components are integrated from the bottom up,
processing required for components subordinate to a given level is always
available and the need for stubs is eliminated.
A bottom-up integration strategy may be implemented with the following steps:
- Low-level components are combined into clusters that perform a specific software
sub-function.
- A driver is written to coordinate test case input and output.
- The cluster is tested.
- Drivers are removed and clusters are combined moving upward
in the program structure.
Compatibility Testing
Compatibility testing concentrates on testing whether the
given application works well with third-party tools, software or hardware
platforms.
For example, you have developed a web application. The major
compatibility issue is, the web site should work well in various browsers.
Similarly when you develop applications on one platform, you need to check if
the application works on other operating systems as well. This is the main goal of Compatibility
Testing.
Before you begin compatibility tests, our sincere suggestion
is that you should have a cross-reference matrix between the various software
and hardware combinations, based on the application requirements. For example, let us suppose you
are testing a web application. A sample list can be as follows:
Hardware | Software | Operating System
Pentium II, 128 MB RAM | IE 4.x, Opera, Netscape | Windows 95
Pentium III, 256 MB RAM | IE 5.x, Netscape | Windows XP
Pentium IV, 512 MB RAM | Mozilla | Linux
Compatibility tests are also performed for various
client/server based applications where the hardware changes from client to
client.
Compatibility testing is very crucial to organizations
developing their own products. The products have to be checked for compatibility
with third-party tools, hardware, or software platforms, including those of competitors.
E.g. A Call center product has been built for a solution with X product but
there is a client interested in using it with Y product; then the issue of
compatibility arises. It is of importance that the product is compatible with
varying platforms. Within the same platform, the organization has to be
watchful that with each new release the product has to be tested for
compatibility.
A good way to keep up with this would be to have a few
resources assigned along with their routine tasks to keep updated about such
compatibility issues and plan for testing when and if the need arises.
The above example does not mean that companies which
are not developing products can ignore this type of testing.
Their case is equally relevant: if an application uses standard software,
will it still run successfully with newer versions? Or if a
website runs on IE or Netscape, what will happen when it is opened
in Opera or Mozilla? Here again it is best to keep these issues in mind
and plan for compatibility testing in parallel, to avoid catastrophic
failures and delays.
Unit Testing
This is a typical scenario of Manual Unit Testing activity-
A Unit is allocated to a Programmer for programming.
Programmer has to use 'Functional Specifications' document as input for his
work.
Programmer prepares 'Program Specifications' for his Unit
from the Functional Specifications. Program Specifications describe the
programming approach and coding tips for the Unit.
Using these 'Program specifications' as input, Programmer
prepares 'Unit Test Cases' document for that particular Unit. A 'Unit Test
Cases Checklist' may be used to check the completeness of Unit Test Cases
document.
'Program Specifications' and 'Unit Test Cases' are reviewed
and approved by Quality Assurance Analyst or by peer programmer.
Stubs
and Drivers
A software application is made up of a number of 'Units',
where output of one 'Unit' goes as an 'Input' of another Unit. e.g. A 'Sales Order
Printing' program takes a 'Sales Order' as an input, which is actually an
output of 'Sales Order Creation' program.
Due to such interfaces, independent testing of a Unit
becomes impossible. But that is what we want to do; we want to test a Unit in
isolation! So here we use a 'Stub' and a 'Driver'.
A 'Driver' is a piece of software that drives (invokes) the
Unit being tested. A driver creates necessary 'Inputs' required for the Unit
and then invokes the Unit.
A Unit may reference another Unit in its logic. A 'Stub'
takes the place of such a subordinate unit during Unit Testing. A 'Stub' is a
piece of software that works similarly to the unit referenced by the Unit
being tested, but it is much simpler than the actual unit. A Stub works as a
'stand-in' for the subordinate unit and provides the minimum required behavior
for that unit.
Programmer needs to create such 'Drivers' and 'Stubs' for
carrying out Unit Testing.
Both the Driver and the Stub are kept at a minimum level of
complexity, so that they do not induce any errors while testing the Unit in
question.
Example - For Unit Testing of the 'Sales Order Printing'
program, a 'Driver' program will have code that creates Sales Order
records using hard-coded data and then calls the 'Sales Order Printing' program.
Suppose this printing program uses another unit which calculates Sales
discounts by some complex calculations; then the call to this unit will be replaced
by a 'Stub', which simply returns fixed discount data.
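A minimal Python sketch of the Sales Order example: the driver fabricates hard-coded input and invokes the unit, while the stub stands in for the complex discount calculation (all names and values are illustrative):

```python
# Stub: stands in for the complex 'Sales discount' unit during Unit Testing.
def calculate_discount_stub(order):
    return 50.0   # simply returns fixed discount data

# Unit under test: 'Sales Order Printing', wired to the stub instead of the real unit.
def print_sales_order(order, discount_calculator=calculate_discount_stub):
    discount = discount_calculator(order)
    return f"Order {order['id']}: total {order['total'] - discount:.2f} (discount {discount:.2f})"

# Driver: creates hard-coded Sales Order records and invokes the unit being tested.
def driver():
    orders = [{"id": 1, "total": 500.0}, {"id": 2, "total": 1200.0}]
    for order in orders:
        line = print_sales_order(order)
        assert "discount 50.00" in line   # behaviour with the stub is predictable
        print(line)

driver()
```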
Unit Test Cases
It must be clear by now that preparing the Unit Test Cases
document (referred to as UTC hereafter) is an important task in the Unit Testing
activity. Having a UTC that is complete with every possible test case leads
to complete Unit Testing and thus
gives assurance of a defect-free Unit at the end of the Unit Testing stage. So let
us discuss how to prepare a UTC.
Think of following aspects while preparing Unit Test Cases –
v Expected
Functionality: Write test cases against each functionality that is expected to
be provided from the Unit being developed.
e.g.
If an SQL script contains commands for creating one table and altering another
table then test cases should be written for testing creation of one table and
alteration of another.
It
is important that User Requirements should be traceable to Functional
Specifications, Functional Specifications be traceable to Program
Specifications and Program Specifications be traceable to Unit Test Cases.
Maintaining such traceability ensures that the application fulfills User
Requirements.
v Input
values:
o Every
input value: Write test cases for each of the inputs accepted by the Unit.
e.g.
If a Data Entry Form has 10 fields on it, write test cases for all 10 fields.
o Validation
of input: Every input has a certain validation rule associated with it. Write
test cases to validate this rule. Also, there can be cross-field validations in
which one field is enabled depending upon the input of another field. Test cases
for these should not be missed.
e.g.
A combo box or list box has a valid set of values associated with it.
A
numeric field may accept only positive values.
An
email address field must have an at sign (@) and a period (.) in it.
A
'Sales tax code' entered by user must belong to the 'State' specified by the
user.
o Boundary
conditions: Inputs often have minimum and maximum possible values. Do not
forget to write test cases for them.
e.g.
A field that accepts 'percentage' on a Data Entry Form should be able to accept
inputs only from 1 to 100.
o Limitations
of data types: Variables that hold the data have their value limits depending
upon their data types. In case of computed fields, it is very important to
write cases to arrive at an upper limit value of the variables.
o Computations:
If any calculations are involved in the processing, write test cases to check
the arithmetic expressions with all possible combinations of values.
v Output
values: Write test cases to generate scenarios, which will produce all types of
output values that are expected from the Unit.
e.g.
A Report can display one set of data if user chooses a particular option and
another set of data if user chooses a different option. Write test cases to
check each of these outputs. When the output is a result of some calculations
being performed or some formulae being used, then approximations play a major
role and must be checked.
v Screen
/ Report Layout: Screen Layout or web page layout and Report layout must be
tested against the requirements. It should not happen that the screen or the
report looks beautiful and perfect, but user wanted something entirely
different! It should ensure that pages and screens are consistent.
v Path
coverage: A Unit may have conditional processing which results in various paths
the control can traverse through. Test case must be written for each of these
paths.
v Assumptions:
A Unit may assume certain things for it to function. For example, a Unit may need a database to be
open. Then test case must be written to check that the Unit reports error if
such assumptions are not met.
v Transactions:
In case of database applications, it is important to make sure that
transactions are properly designed and that inconsistent data can in no way get
saved to the database.
v Abnormal
terminations: Behavior of the Unit in case of abnormal termination should be
tested.
v Error
messages: Error messages should be short, precise and self-explanatory. They
should be properly phrased and should be free of grammatical mistakes.
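Putting a few of these aspects together, the sketch below shows unit test cases for the 'percentage' boundary condition mentioned above; the accept_percentage function is invented so the example can run on its own:

```python
import unittest

def accept_percentage(value):
    """Unit under test (invented): accepts only integers from 1 to 100."""
    if not isinstance(value, int) or not 1 <= value <= 100:
        raise ValueError("percentage must be between 1 and 100")
    return value

class PercentageFieldTest(unittest.TestCase):
    def test_lower_boundary(self):
        self.assertEqual(accept_percentage(1), 1)      # minimum valid value

    def test_upper_boundary(self):
        self.assertEqual(accept_percentage(100), 100)  # maximum valid value

    def test_just_outside_boundaries(self):
        for bad in (0, 101):                           # one step beyond each boundary
            with self.assertRaises(ValueError):
                accept_percentage(bad)

if __name__ == "__main__":
    unittest.main()
```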
UTC Document
Given below is a simple format for UTC document.
Test Case No. | Test Case Purpose | Procedure | Expected Result | Actual Result
ID which can be referred to in other documents like the 'Traceability Matrix', Root Cause Analysis of Defects, etc. | What to test | How to test | What should happen | What actually happened (this column can be omitted when a Defect Recording Tool is used)
Designing Test Cases
There are various techniques with which you can design test
cases. For example, the steps below give an overview of how to
derive test cases using the basis path method.
The basis path testing method can be applied to a procedural
design or to source code. The following steps can be applied to derive the
basis set:
1. Using the design or code as a foundation, draw the corresponding flow graph.
2. Determine the Cyclomatic complexity of the resultant flow
graph.
3. Determine a basis set of linearly independent paths.
4. Prepare test cases that will force execution of each path in the basis set (a small worked example is sketched below).
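As a rough worked example (the function is invented for illustration): a unit with two decision points has cyclomatic complexity V(G) = E - N + 2 = 3, equivalently the number of decisions plus one, so the basis set contains three linearly independent paths and needs three test cases.

# Hypothetical unit with two decision points, so V(G) = 2 + 1 = 3.
def classify_order(amount):
    if amount <= 0:      # decision 1
        return "invalid"
    if amount > 100:     # decision 2
        return "large"
    return "standard"

# One test case per path in the basis set.
def test_path_invalid_amount():
    assert classify_order(-5) == "invalid"

def test_path_large_order():
    assert classify_order(150) == "large"

def test_path_standard_order():
    assert classify_order(50) == "standard"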
Let us now see how to design test cases in a generic manner:
1.
Understand the requirements document.
2.
Break the requirements into smaller requirements (if it
improves your testability).
3.
For each requirement, decide what technique you should use to derive the test cases. For example, if you are testing a Login page, you need to write test cases based on error guessing, as well as negative cases for handling failures.
4.
Have a Traceability Matrix as follows:
Requirement No (in RD) | Requirement | Test Case No
What this Traceability Matrix gives you is the coverage of testing. Keep filling in the Traceability Matrix as you complete writing the test cases for each requirement (a rough sketch of tracking this coverage in data form is given below).
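A rough sketch of keeping the same matrix in data form so that coverage gaps surface automatically; the requirement and test case IDs below are invented purely for illustration.

# Hypothetical requirement-to-test-case mapping; IDs are illustrative only.
traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],            # no test case written yet - a coverage gap
}

def uncovered_requirements(matrix):
    # requirements with no test cases show the testing coverage gaps at a glance
    return [req for req, cases in matrix.items() if not cases]

print(uncovered_requirements(traceability))   # ['REQ-003']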
What Is Software Testing
The British Standards Institution, in its standard BS7925-1, defines testing as "the process of exercising software to verify that it satisfies specified requirements and to detect faults; the measurement of software quality." Where the actual behavior of the system is different from the expected behavior, a failure is considered to have occurred.
A failure is the result of a fault. A fault is an error in the program or specification that may or may not result in a failure; a failure is the manifestation of a fault.
The principal aim of testing is to
detect faults so that they can be removed before the product is made available
to customers. Faults arise in software for a variety of reasons, from misinterpreting the requirements through to simple typing mistakes. It is the role of software testing and quality assurance to reduce those faults by identifying the failures.
Testing is a process of executing a program and comparing the results against an agreed-upon standard, the requirements. If the results match the requirements, then the software has passed testing.
There are several methods of testing: exploratory, scripted, ad-hoc, regression and many more variations.
Testing involves operating a system or application under controlled conditions and evaluating the results.
Testing is a process of trying out a piece of software with valid and invalid data in a controlled manner.
Focus on trying to find bugs
The goal of software testing should always be to find as many faults as possible (and to find them early). If you set out with the goal of showing that your software works, then you will prove that it works; you will not prove that it doesn't break.
For example, if you try to show it works, you'll use a valid postcode to ensure it returns a valid response, rather than an invalid postcode that might expose a fault.
IEEE Standard Definitions of Software
Testing
IEEE Standard 610
(1990): A set of test inputs, execution
conditions, and expected results developed for a particular objective, such as
to exercise a particular program path or to verify compliance with a specific
requirement.
IEEE Std 829-1983: Documentation specifying inputs, predicted results, and a
set of execution conditions for a test item.
Purpose of testing
The testing activity in an information
system development can be defined as follows:
Testing is a process of planning,
preparing, executing and analyzing, aimed at establishing the characteristics
of an information system, and demonstrating the difference between the actual
status and the required status.
Test planning and preparation
activities emphasize the fact that testing should not be regarded as a process
that can be started when the object to be tested is delivered. A test process
requires accurate planning and preparation phases before any measurement
actions can be implemented.
Testing reduces the level of
uncertainty about the quality of a system. The level of testing effort depends
on the risks involved in bringing the system into operation, and on the decision of how much time and money is to be spent on reducing that level of uncertainty.
Testing
Guidelines
THE DEFINITION OF
TESTING
Stop for a moment to define testing for yourself. (Don't peek ahead!) Does one of the following definitions match yours?
Testing is a process designed to
• Prove that the program is error free
• Establish that the software performs its functions correctly
• Establish with confidence that the software does its job fully
If yours matches, you are in for a surprise, because testing is none of these. Rather, it is properly defined as follows (see Concepts):
Concepts - Testing is the task of locating errors.
THE IMPORTANCE OF A
GOOD DEFINITION
Any definition of testing is more than a passive description. It has a profound impact on the way testing is carried out. Since people are highly goal-oriented, the setting of proper goals can mean the difference between success and failure. If the goal of testing were to prove [that a program or process functions properly], the tester would subconsciously work towards this goal, choosing test data that would prove that point.
The reverse would be true if the goal
were to locate and correct errors. Test data would be selected with an eye
toward providing the program with cases that would likely cause the program to
fail. This would be a more desirable result. Why? We begin with the assumption
that the system, like most systems, contains errors. The job of testing is to
discover them before the user does. In that case, a good tester is one who is
successful in crashing the system, or in causing it to perform in some way that
is counter to the specification.
The mentality of the tester, then, is a destructive one - quite different from the constructive attitude of the programmer, the "creator". This is useful information for the analyst who is acting as project leader and is responsible for staffing: staff should be selected with the appropriate personality traits in mind.
Another effect of having a proper
working definition of testing regards the way the project leader assesses the
performance of the test team. Without a proper definition of testing, the
leader might describe a successful test run as one which proves the program is
error free and describe an unsuccessful test as one which found errors. As is
the case with the testers themselves, this mind-set is actually
counter-productive to the testing process.
GOALS OF TESTING
To satisfy its definition, testing must accomplish the following goals:
1. Find cases where the program does not do what it is supposed to do.
2. Find cases where the program does things it is not supposed to do.
The first goal refers to specifications
which were not satisfied by the program while the second goal refers to
unwanted side-effects.
THE EIGHT BASIC
PRINCIPLES OF TESTING
Following are eight basic principles of testing:
1. Define the expected output or result.
2. Don't test your own programs.
3. Inspect the results of each test completely.
4. Include test cases for invalid or unexpected conditions.
5. Test the program to see if it does what it is not supposed to do as well as what it is supposed to do.
6. Avoid disposable test cases unless the program itself is disposable.
7. Do not plan tests assuming that no errors will be found.
8. The probability of locating more errors in any one module is directly proportional to the number of errors already found in that module.
Let's look at each of these points.
1) DEFINE THE EXPECTED OUTPUT OR RESULT
More often than not, the tester approaches a test case without a set of predefined and expected results. The danger in this lies in the tendency of the eye to see what it wants to see. Without knowing the expected result, erroneous output can easily be overlooked. This problem can be avoided by carefully pre-defining all expected results for each of the test cases. Sounds obvious? You'd be surprised how many people miss this point while doing the self-assessment test. A minimal sketch of the idea follows.
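A minimal sketch of the principle, with a made-up unit under test: the expected value is worked out and written down before the test is executed, and the assertion compares against it rather than the tester eyeballing the output afterwards.

# Hypothetical unit under test.
def calculate_invoice_total(subtotal, tax_rate):
    return round(subtotal * (1 + tax_rate), 2)

def test_invoice_total_matches_predefined_expected_result():
    # expected result computed by hand before running the test: 100.00 * 1.075 = 107.50
    expected = 107.50
    assert calculate_invoice_total(100.00, 0.075) == expected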
2) DON'T TEST YOUR OWN PROGRAMS
Programming is a constructive activity.
To suddenly reverse constructive thinking and begin the destructive process of
testing is a difficult task. The publishing business has been applying this
idea for years. Writers do not edit their own material for the simple reason
that the work is "their baby" and editing out pieces of their work
can be a very depressing job.
The attitudinal problem is not the only consideration for this principle. System errors can be caused by an incomplete or faulty understanding of the original design specifications; it is likely that the programmer would carry these misunderstandings into the test phase.
3) INSPECT THE RESULTS OF EACH TEST COMPLETELY
As obvious as it sounds, this simple
principle is often overlooked. In many test cases, an after-the-fact review of
earlier test results shows that errors were present but overlooked because no
one took the time to study the results.
4) INCLUDE TEST CASES FOR INVALID OR UNEXPECTED CONDITIONS
Programs already in production often
cause errors when used in some new or novel fashion. This stems from the
natural tendency to concentrate on valid and expected input conditions during a
testing cycle. When we use invalid or unexpected input conditions, the error detection rate increases significantly.
5) TEST THE PROGRAM TO SEE IF IT DOES WHAT IT IS NOT SUPPOSED TO DO AS WELL AS WHAT IT IS SUPPOSED TO DO
It's not enough to check if the test
produced the expected output. New systems, and especially new modifications,
often produce unintended side effects such as unwanted disk files or destroyed
records. A thorough examination of data structures, reports, and other output
can often show that a program is doing what it is not supposed to do and therefore still contains errors.
6) AVOID DISPOSABLE TEST CASES UNLESS THE PROGRAM ITSELF IS DISPOSABLE
Test cases should be documented so they
can be reproduced. With a non-structured approach to testing, test cases are
often created on the fly. The tester sits at a terminal, generates test input, and submits it to the program. The test data simply disappears when the test is complete.
Reproducible test cases become
important later when a program is revised, due to the discovery of bugs or
because the user requests new options. In such cases, the revised program can
be put through the same extensive tests that were used for the original
version. Without saved test cases, the temptation is strong to test only the
logic handled by the modifications. This is unsatisfactory because changes
which fix one problem often create a host of other apparently unrelated
problems elsewhere in the system. As considerable time and effort are spent in
creating meaningful tests, tests which are not documented or cannot be
duplicated should be avoided.
7) DO NOT PLAN TESTS ASSUMING THAT NO ERRORS WILL BE FOUND
Testing should be viewed as a process that locates errors, not one that proves the program works correctly. The reasons for this were discussed earlier.
8) THE PROBABILITY OF LOCATING MORE ERRORS IN ANY ONE MODULE IS DIRECTLY PROPORTIONAL TO THE NUMBER OF ERRORS ALREADY FOUND IN THAT MODULE
At first glance, this may seem surprising. However, it has been shown that if certain modules or sections of code contain a high number of errors, subsequent testing will discover more errors in that particular section than in other sections.
Consider a program that consists of two modules, A and B. If testing reveals five errors in module A and only one error in module B, module A will likely display more errors than module B in any subsequent tests.
Why is this so? There is no definitive explanation, but it is probably because the error-prone module is inherently complex or was badly programmed. By identifying the most "bug-prone" modules, the tester can concentrate efforts there and achieve a higher rate of error detection than if all portions of the system were given equal attention.
Extensive testing of the system after modifications have been made is referred to as regression testing.
Difference
between Testing Types and Testing Techniques
Testing types deal with what aspect
of the computer software would be tested, while testing techniques deal with
how a specific part of the software would be tested.
That is, testing types mean whether we are testing the function or the structure of the software. In other words, we may test each function of the software to see if it is operational or we may test the internal components of the software to check if its internal workings are according to specification.
On the other hand, ‘Testing technique’ means what methods or ways would be applied or calculations would be done to test a particular feature of a software (Sometimes we test the interfaces, sometimes we test the segments, sometimes loops etc.)
Top 10 Negative Test Cases
Negative test cases are designed to test the software in ways it was not intended to be used, and should be a part of your testing effort. Below are the top 10 negative test cases you should consider when designing your test effort:
Embedded Single Quote - Most SQL-based database systems have issues when users store information that contains a single quote (e.g. John's car). For each screen that accepts alphanumeric data entry, try entering text that contains one or more single quotes.
Required Data Entry - Your functional specification should clearly indicate fields that require data entry on screens. Test each field on the screen that has been indicated as being required to ensure it forces you to enter data in the field.
Field Type Test - Your functional specification should clearly indicate fields that have specific data entry requirements (date fields, numeric fields, phone numbers, zip codes, etc.). Test each field on the screen that has been indicated as having a special type to ensure it forces you to enter data in the correct format for that type (numeric fields should not allow alphabetic or special characters, date fields should require a valid date, etc.).
Field Size Test - Your functional specification should clearly indicate the number of characters you can enter into a field (for example, the first name must be 50 characters or fewer). Write test cases to ensure that you can only enter the specified number of characters. Preventing the user from entering more characters than is allowed is more elegant than giving an error message after they have already entered too many characters.
Numeric Bounds Test - For numeric fields, it is important to test the lower and upper bounds. For example, if you are calculating interest charged to an account, you would never have a negative interest amount applied to an account that earns interest, so you should try testing it with a negative number. Likewise, if your functional specification requires that a field be in a specific range (e.g. from 10 to 50), you should try entering 9 or 51; it should fail with a graceful message.
Numeric Limits Test - Most database systems and programming languages allow numeric items to be identified as integers or long integers. Normally, an integer has a range of -32,768 to 32,767 and a long integer can range from -2,147,483,648 to 2,147,483,647. For numeric data entry fields that do not have specified bounds, work with these limits to ensure that the application does not produce a numeric overflow error.
Date Bounds Test - For date fields, it is important to test for lower and upper bounds. For example, if you are checking a birth date field, it is probably a good bet that the person's birth date is no older than 150 years ago. Likewise, their birth date should not be a date in the future.
Date Validity - For date fields, it is important to ensure that invalid dates are not allowed (04/31/2007 is an invalid date). Your test cases should also check for leap years (years divisible by 4 are leap years, except century years, which are leap years only if divisible by 400).
Web Session Testing - Many web applications rely on the browser session to keep track of the person logged in, settings for the application, etc. Most screens in a web application are not designed to be launched without first logging in. Create test cases to launch web pages within the application without first logging in. The web application should ensure it has a valid logged in session before rendering pages within the application.
Performance Changes - As you release new versions of your product, you should have a set of performance tests that you run that identify the speed of your screens (screens that list information, screens that add/update/delete data, etc). Your test suite should include test cases that compare the prior release performance statistics to the current release. This can aid in identifying potential performance problems that will be manifested with code changes to the current release.
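As a rough sketch of two of the cases above, using only the Python standard library: the single-quote test relies on a parameterized query (an assumed mitigation, not necessarily how a given application handles it), and the date test uses a hypothetical is_valid_date helper standing in for the application's own date validation.

import sqlite3
from datetime import datetime

def test_embedded_single_quote_is_stored_safely():
    # a parameterized query keeps a value like "John's car" from breaking the SQL
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE notes (text TEXT)")
    conn.execute("INSERT INTO notes (text) VALUES (?)", ("John's car",))
    assert conn.execute("SELECT text FROM notes").fetchone()[0] == "John's car"

def is_valid_date(text):
    # hypothetical helper: accepts MM/DD/YYYY and rejects impossible dates
    try:
        datetime.strptime(text, "%m/%d/%Y")
        return True
    except ValueError:
        return False

def test_date_validity_and_leap_years():
    assert not is_valid_date("04/31/2007")   # April has only 30 days
    assert is_valid_date("02/29/2000")       # 2000 is a leap year (divisible by 400)
    assert not is_valid_date("02/29/1900")   # 1900 is not a leap year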