Wednesday 29 February 2012

ETL Traceability matrix


The traceability matrix is where the test plan author, generally the TL (test lead), reviews the test cases prepared by the test engineers. Here the TL verifies whether the written test cases cover all the business requirements or not. He also rejects/deletes extra or unneeded test cases.
BUSINESS REQUIREMENTS → S/W REQUIREMENTS → TEST CASES

1 – BRS (Business Requirement Specification)
    1.01 – S/W requirement specification
        1.01.01 – testcase A
        1.01.02 – testcase B
        1.01.03
    1.02 – S/W requirement specification
        1.02.01 – testcase A
        1.02.02 – testcase B
        1.02.03

2 – BRS (Business Requirement Specification)
    2.01 – S/W requirement specification
        2.01.01 – testcase A
        2.01.02 – testcase B
        2.01.03
    2.02 – S/W requirement specification
        2.02.01 – testcase A
        2.02.02 – testcase B
        2.02.03


Whereas the Test Responsibility Matrix (TRM) is a 15x5 matrix, which includes 15 testing factors and 5 development stages. Here the author (PM/QA/TL) makes a list of which testing factors are required for their particular type of project.
| Testing issues \ Development stage | SRS | Designing | Coding | System Testing | Maintenance |
|------------------------------------|-----|-----------|--------|----------------|-------------|
| Ease of use                        |     |           |        |                |             |
| Authorization                      |     |           |        |                |             |
| Access Control                     |     |           |        |                |             |
| Performance                        |     |           |        |                |             |
| ……                                 |     |           |        |                |             |

Here is an example of a Requirements Traceability Matrix:
| Project Name | Test Case Number | Requirement Number / Document Reference | System Requirements Specification | Business Requirements Document | Work-flow Diagram Number | Wireframe Number |
|--------------|------------------|------------------------------------------|------------------------------------|--------------------------------|---------------------------|------------------|
| Clarity 10   | RC-001           | 3.2.15                                   | Yes                                | n/a                            | n/a                       | n/a              |
| Clarity 10   | RC-101           | 5.5.3b                                   | Yes                                | n/a                            | n/a                       | n/a              |
| Clarity 10   | MB-201           | 32                                       | n/a                                | n/a                            | Yes                       | n/a              |
| Clarity 10   | LT-701           | Item B                                   | n/a                                | n/a                            | n/a                       | Yes              |
| Clarity 10   | HY-301           | 12.3.32                                  | n/a                                | Yes                            | n/a                       | n/a              |

In our company, the test lead or tester is responsible for mapping requirements to test cases.

What is a Traceability Matrix? Why is it used? Can you describe its structure?
Ans)
Traceability Matrix - the mapping between user requirements and test cases.

User Requirements - taken from the RS or FDS, whichever is applicable in the organization,
e.g. the user should be able to add a new record.

Test Case - contains a scenario written to test the requirement, with a unique number for reference.

Traceability Matrix contains the following columns:
(example values are given in brackets)
Sl.No (1)
FDS/RS Name (FDS_New Module.doc_1.0)
FDS/RS Path (\\inserver\documents\)
Requirement (BR-1, SR-1, RS-1)
Coding Section (add_new - routine names used in code or
name of the coding section given by developer)
Development Resource (Developer's Name)
Test Case (TC_001)
Testing Resource (Tester's name)
Testing Status (Pass/Fail)
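
For illustration, the same columns can be captured in a spreadsheet-friendly CSV generated by a short script. This is only a sketch: the file name and the row values are just the example values given in brackets above, not a real project.

```python
import csv

# Columns of the traceability matrix, as listed above.
COLUMNS = ["Sl.No", "FDS/RS Name", "FDS/RS Path", "Requirement",
           "Coding Section", "Development Resource", "Test Case",
           "Testing Resource", "Testing Status"]

# One example row, using the sample values given in brackets above.
rows = [
    [1, "FDS_New Module.doc_1.0", r"\\inserver\documents",
     "BR-1, SR-1, RS-1", "add_new", "Developer's Name",
     "TC_001", "Tester's name", "Pass"],
]

with open("traceability_matrix.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    writer.writerows(rows)
```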

T.M:
Traceability Matrix – a document showing the relationship between test requirements and test cases. From the Traceability Matrix, we can check which requirements are covered by which test cases, and which requirements a particular test case covers.

In this matrix, we can also record which section of code covers a particular requirement.

In this matrix, the rows hold the requirements. For every document (HLD, LLD etc.), there is a separate column. In every cell, we state which section of the HLD addresses a particular requirement. Ideally, if every requirement is addressed in every single document, all the individual cells will have valid section IDs or names filled in. Then we know that every requirement is addressed. If any requirement is missing, we need to go back to the document and correct it so that it addresses the requirement.
In a nutshell, requirements traceability is the process of ensuring that one or more test cases address each requirement.
 

Example of a Traceability Matrix document:

| Req ID | Req Description | TC001 | TC002 | TC003 |
|--------|-----------------|-------|-------|-------|
| R1.1   | ………             | Yes   |       | Yes   |
| R1.2   | ……….            | Yes   |       |       |
| R2.1   | …….             |       | Yes   |       |

The above table shows:
Requirement R1.1 is covered by TC001 and TC003.
R1.2 is covered by TC001.
R2.1 is covered by TC002.
The above table also shows the test coverage. From the Traceability Matrix document, we can ensure that all the requirements are addressed in the test cases. More on the science behind the traceability matrix can be found at Software Testing Times (see the links below).
 
| Source | Description |
|--------|-------------|
| Wikipedia | Gives complete theoretical information about the Traceability Matrix – a must read for beginners. |
| Software Testing Times | The Traceability Matrix from a software testing perspective. A very helpful topic for test leads and managers (also for junior testers if they want to grow). A step-by-step guide to preparing a Traceability Matrix, with an attached sample template. |
| Crosstalk – A Journal of Defense Software Engineering | Andrew Kannenberg, Garmin International; Dr. Hossein Saiedian, The University of Kansas. |

What is Traceability Matrix from Software Testing perspective?

The concept of a Traceability Matrix is very important from the testing perspective. It is a document that maps requirements to test cases. By preparing a Traceability Matrix, we can ensure that we have covered all the required functionality of the application in our test cases. Some of the features of the traceability matrix:
  •  It is a method for tracing each requirement from its point of origin, through each development phase and work product, to the delivered product
  • Can indicate through identifiers where the requirement is originated, specified, created, tested, and delivered
  • Will indicate for each work product the requirement(s) this work product satisfies
  • Facilitates communications, helping customer relationship management and commitment negotiation
The traceability matrix answers the following basic questions for any software project:
  • How is it possible to ensure, for each phase of the lifecycle, that I have correctly accounted for all the customer’s needs?
  • How can I ensure that the final software product will meet the customer’s needs? For example, suppose there is a requirement that when I put an invalid password in the password field, the application throws the error message “Invalid password”. Only the traceability matrix lets us make sure this requirement is captured in a test case.
Some more challenges we can overcome by Traceability matrix:
  • Demonstrate to the customer that the requested contents have been developed
  • Ensure that all requirements are correct and included in the test plan and the test cases
  • Ensure that developers are not creating features that no one has requested
  • The system that is built may not have the necessary functionality to meet the customers’ and users’ needs and expectations. How do we identify the missing parts?
  • If there are modifications in the design specifications, there is no means of tracking the changes
  • If there is no mapping of test cases to the requirements, it may result in missing a major defect in the system
  • The completed system may have “Extra” functionality that may have not been specified in the design specification, resulting in wastage of manpower, time and effort.
  • If the code component that constitutes the customer’s high priority requirements is not known, then the areas that need to be worked first may not be known thereby decreasing the chances of shipping a useful product on schedule
  • A seemingly simple request might involve changes to several parts of the system and if proper Traceability process is not followed, the evaluation of the work that may be needed to satisfy the request may not be correctly evaluated
Step-by-step process of creating a Traceability Matrix from requirements:
Step 1: Identify all the testable requirements, at a granular level, from the various requirement specification documents. These documents vary from project to project. Typical requirements you need to capture are as follows:
Use cases (with all the flows captured)
Error messages
Business rules
Functional rules
SRS
FRS
And so on…
Example requirements: login functionality, generate report, update something, etc.
Step 2: In every project you will be creating test cases to test the functionality defined by the requirements. In this case you want to extend the traceability to those test cases. In the example table below, the test cases are identified with a TC_ prefix.
Put all those requirements in the top row of a spreadsheet, and use the left-hand column of the spreadsheet to jot down all the test cases you have written for each particular requirement. In most cases you will have written multiple test cases to test one requirement. See the sample spreadsheet below:
Sample traceability matrix

The requirement identifiers (the columns of the spreadsheet) are: REQ1 UC 1.1, UC 1.2, UC 1.3, UC 2.1, UC 2.2, UC 2.3.1, UC 2.3.2, UC 2.3.3, UC 2.4, UC 3.1, UC 3.2, TECH 1.1, TECH 1.2 and TECH 1.3. The "Test Cases" row records 321 test cases in total, with 3, 2, 3, 1, 1, 1, 1, 1, 1, 2, 3, 1, 1 and 1 test cases respectively against the requirements above, and 77 requirements tested implicitly. Each test case row then records how many requirements it tests ("Reqs Tested") and carries an "x" under each requirement it checks. Only the REQ1 UC 1.1 column markings are reproduced below; the x marks for the remaining columns follow the original spreadsheet.

| Test Case | Reqs Tested | REQ1 UC 1.1 |
|-----------|-------------|-------------|
| TC1.1.1   | 1           | x           |
| TC1.1.2   | 2           |             |
| TC1.1.3   | 2           | x           |
| TC1.1.4   | 1           |             |
| TC1.1.5   | 2           | x           |
| TC1.1.6   | 1           |             |
| TC1.1.7   | 1           |             |
| TC1.2.1   | 2           |             |
| TC1.2.2   | 2           |             |
| TC1.2.3   | 2           |             |
| TC1.3.1   | 1           |             |
| TC1.3.2   | 1           |             |
| TC1.3.3   | 1           |             |
| TC1.3.4   | 1           |             |
| TC1.3.5   | 1           |             |
| etc…      |             |             |
| TC5.6.2   | 1           |             |
Step 3: Put a cross against each test case for each requirement that the test case checks, partially or completely. In the table above you can see that REQ1 UC 1.1 is checked by three test cases (TC1.1.1, TC1.1.3, TC1.1.5).
Another example of a traceability matrix is one where requirement documents (use cases) are mapped back to the test cases.
 
Change management through the traceability matrix:
It will be a lot easier for you to track changes if you have a good traceability matrix in place. For example, if REQ1 UC 1.1 changes, we know upfront from the traceability matrix which test cases we need to modify to incorporate those changes. In the case above we need to modify only TC1.1.1, TC1.1.3 and TC1.1.5.
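
The same bookkeeping is easy to keep in code. A minimal sketch that answers the two questions the matrix is built for, which test cases cover a requirement and which requirements are uncovered (the UC 1.1 mapping comes from the sample above; the other entries are invented for illustration):

```python
# Requirement -> test cases that check it. The UC 1.1 entry comes from the
# sample matrix above; the other entries are invented for illustration.
matrix = {
    "REQ1 UC 1.1": ["TC1.1.1", "TC1.1.3", "TC1.1.5"],
    "REQ1 UC 1.2": ["TC1.1.2"],   # illustrative mapping
    "REQ9 UC 9.9": [],            # invented uncovered requirement
}

def impacted_tests(requirement: str) -> list[str]:
    """Change impact: test cases to revisit when a requirement changes."""
    return matrix.get(requirement, [])

def uncovered_requirements() -> list[str]:
    """Coverage gaps: requirements with no test case mapped to them."""
    return [req for req, tcs in matrix.items() if not tcs]

print(impacted_tests("REQ1 UC 1.1"))  # ['TC1.1.1', 'TC1.1.3', 'TC1.1.5']
print(uncovered_requirements())       # ['REQ9 UC 9.9']
```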

Test plans are in high demand, and they should be! A test plan reflects your entire project testing schedule and approach. This article is in response to those who have asked for a sample test plan.
In my previous article I outlined a test plan index. In this article I will elaborate on what each point of that index is meant to do. This test plan will state the purpose of a test plan, i.e. to prescribe the scope, approach, resources, and schedule of the testing activities, and to identify the items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with the plan.
Find out what you actually need to include under each index point.
I have included a link to download the PDF format of this test plan template at the end of this post.
Test Plan Template:
(Name of the Product)
Prepared by:
(Names of Preparers)
(Date)
TABLE OF CONTENTS
1.0 INTRODUCTION
2.0 OBJECTIVES AND TASKS
2.1 Objectives
2.2 Tasks

3.0 SCOPE
4.0 Testing Strategy
4.1 Alpha Testing (Unit Testing)
4.2 System and Integration Testing
4.3 Performance and Stress Testing
4.4 User Acceptance Testing
4.5 Batch Testing
4.6 Automated Regression Testing
4.7 Beta Testing

5.0 Hardware Requirements
6.0 Environment Requirements
6.1 Main Frame
6.2 Workstation

7.0 Test Schedule
8.0 Control Procedures
9.0 Features to Be Tested
10.0 Features Not to Be Tested
11.0 Resources/Roles & Responsibilities
12.0 Schedules
13.0 Significantly Impacted Departments (SIDs)
14.0 Dependencies
15.0 Risks/Assumptions
16.0 Tools
17.0 Approvals
1.0 INTRODUCTION
A brief summary of the product being tested. Outline all the functions at a high level.
2.0 OBJECTIVES AND TASKS
2.1 Objectives
Describe the objectives supported by the Master Test Plan, e.g., defining tasks and responsibilities, serving as a vehicle for communication, acting as a document to be used as a service level agreement, etc.

2.2 Tasks
List all tasks identified by this Test Plan, i.e., testing, post-testing, problem reporting, etc.
3.0 SCOPE

General
This section describes what is being tested, such as all the functions of a specific product, its existing interfaces, integration of all functions.

Tactics
List here how you will accomplish the items that you have listed in the “Scope” section. For example, if you have mentioned that you will be testing the existing interfaces, what would be the procedures you would follow to notify the key people to represent their respective areas, as well as allotting time in their schedule for assisting you in accomplishing your activity?

4.0 TESTING STRATEGY
Describe the overall approach to testing. For each major group of features or feature combinations, specify the approach which will ensure that these feature groups are adequately tested. Specify the major activities, techniques, and tools which are used to test the designated groups of features.
The approach should be described in sufficient detail to permit identification of the major testing tasks and estimation of the time required to do each one.


4.1 Unit Testing
Definition:
Specify the minimum degree of comprehensiveness desired. Identify the techniques which will be used to judge the comprehensiveness of the testing effort (for example, determining which statements have been executed at least once). Specify any additional completion criteria (for example, error frequency). The techniques to be used to trace requirements should be specified.

Participants:
List the names of individuals/departments who would be responsible for Unit Testing.

Methodology:
Describe how unit testing will be conducted. Who will write the test scripts for the unit testing, what would be the sequence of events of Unit Testing and how will the testing activity take place?

4.2 System and Integration Testing
Definition:
Describe your understanding of System and Integration Testing for your project.

Participants:
Who will be conducting System and Integration Testing on your project? List the individuals that will be responsible for this activity.

Methodology:
Describe how System & Integration testing will be conducted. Who will write the test scripts for system and integration testing, what will be the sequence of events of System & Integration Testing, and how will the testing activity take place?

4.3 Performance and Stress Testing
Definition:
Describe your understanding of Stress Testing for your project.

Participants:
Who will be conducting Stress Testing on your project? List the individuals that will be responsible for this activity.

Methodology:
Describe how Performance & Stress testing will be conducted. Who will write the test scripts for the testing, what will be the sequence of events of Performance & Stress Testing, and how will the testing activity take place?

4.4 User Acceptance Testing
Definition:
The purpose of acceptance test is to confirm that the system is ready for operational use. During acceptance test, end-users (customers) of the system compare the system to its initial requirements.

Participants:
Who will be responsible for User Acceptance Testing? List the individuals’ names and responsibility.

Methodology:
Describe how the User Acceptance testing will be conducted. Who will write the test scripts for the testing, what will be the sequence of events of User Acceptance Testing, and how will the testing activity take place?

4.5 Batch Testing
4.6 Automated Regression Testing
Definition:
Regression testing is the selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still works as specified in the requirements.

Participants:
Methodology:

4.7 Beta Testing
Participants:

Methodology:
5.0 HARDWARE REQUIREMENTS
Computers
Modems

6.0 ENVIRONMENT REQUIREMENTS
6.1 Main Frame
Specify both the necessary and desired properties of the test environment. The specification should contain the physical characteristics of the facilities, including the hardware, the communications and system software, the mode of usage (for example, stand-alone), and any other software or supplies needed to support the test. Also specify the level of security which must be provided for the test facility, system software, and proprietary components such as software, data, and hardware.

Identify special test tools needed. Identify any other testing needs (for example, publications or office space). Identify the source of all needs which are not currently available to your group.
6.2 Workstation
7.0 TEST SCHEDULE

Include test milestones identified in the Software Project Schedule as well as all item transmittal events.
Define any additional test milestones needed. Estimate the time required to do each testing task. Specify the schedule for each testing task and test milestone. For each testing resource (that is, facilities, tools, and staff), specify its periods of use.
8.0 CONTROL PROCEDURES
Problem Reporting
Document the procedures to follow when an incident is encountered during the testing process. If a standard form is going to be used, attach a blank copy as an “Appendix” to the Test Plan. In the event you are using an automated incident logging system, write those procedures in this section.

Change Requests
Document the process of modifications to the software. Identify who will sign off on the changes and what would be the criteria for including the changes to the current product. If the changes will affect existing programs, these modules need to be identified.

9.0 FEATURES TO BE TESTED
Identify all software features and combinations of software features that will be tested.
10.0 FEATURES NOT TO BE TESTED
Identify all features and significant combinations of features which will not be tested and the reasons.
11.0 RESOURCES/ROLES & RESPONSIBILITIES
Specify the staff members who are involved in the test project and what their roles are going to be (for example, Mary Brown (User) compiles Test Cases for Acceptance Testing). Identify the groups responsible for managing, designing, preparing, executing, and resolving the test activities as well as related issues. Also identify the groups responsible for providing the test environment. These groups may include developers, testers, operations staff, testing services, etc.
12.0 SCHEDULES
Major Deliverables
Identify the deliverable documents. You can list the following documents:
- Test Plan
- Test Cases
- Test Incident Reports
- Test Summary Reports

13.0 SIGNIFICANTLY IMPACTED DEPARTMENTS (SIDs)
Department/Business Area Bus. Manager Tester(s)
14.0 DEPENDENCIES
Identify significant constraints on testing, such as test-item availability, testing-resource availability, and deadlines.
15.0 RISKS/ASSUMPTIONS
Identify the high-risk assumptions of the test plan. Specify contingency plans for each (for example, delay in delivery of test items might require increased night shift scheduling to meet the delivery date).
16.0 TOOLS
List the Automation tools you are going to use. List also the Bug tracking tool here.

17.0 APPROVALS
Specify the names and titles of all persons who must approve this plan. Provide space for the signatures and dates.
Name (In Capital Letters) Signature Date
1.
2.
3.
4.
For the last couple of days I have been getting more requests for a sample test plan.
So for your reference I am including one sample test plan template here.

It is an index of a test plan only.
Each point will help you to elaborate your test plan step by step.
Take this as a guideline and develop a full test plan for your project.

Table of Contents :
1. Introduction
1.1. Test Plan Objectives

2. Scope
2.1. Data Entry
2.2. Reports
2.3. File Transfer
2.4. Security

3. Test Strategy
3.1. System Test
3.2. Performance Test
3.3. Security Test
3.4. Automated Test
3.5. Stress and Volume Test
3.6. Recovery Test
3.7. Documentation Test
3.8. Beta Test
3.9. User Acceptance Test

4. Environment Requirements
4.1. Data Entry workstations
4.2 MainFrame

5. Test Schedule
6. Control Procedures
6.1 Reviews
6.2 Bug Review meetings
6.3 Change Request
6.4 Defect Reporting

7. Functions To Be Tested
8. Resources and Responsibilities
8.1. Resources
8.2. Responsibilities

9. Deliverables
10. Suspension / Exit Criteria
11. Resumption Criteria
12. Dependencies
12.1 Personnel Dependencies
12.2 Software Dependencies
12.3 Hardware Dependencies
12.4 Test Data & Database

13. Risks
13.1. Schedule
13.2. Technical
13.3. Management
13.4. Personnel
13.5 Requirements

14. Tools
15. Documentation
16. Approvals

One of the most frequent and major activities of a software tester (SQA/SQC person) is to write test cases. First of all, kindly keep in mind that all this discussion is about 'writing' test cases, not about designing/defining/identifying them.
There are some important and critical factors related to this major activity. Let us have a bird's-eye view of those factors first.
a. Test cases are prone to regular revision and update:
We live in a continuously changing world, and software is not immune to change either. The same holds good for requirements, and this directly impacts the test cases. Whenever requirements change, the test cases need to be revised and updated accordingly.
Do you know that most of the bugs in software are due to incomplete or inaccurate functional requirements? The software code, no matter how well it is written, can't do anything if there are ambiguities in the requirements.
It's better to catch requirement ambiguities and fix them early in the development life cycle. The cost of fixing a bug after development is complete, or after product release, is too high. So it's important to do requirement analysis and catch these incorrect requirements before the design specification and implementation phases of the SDLC.
How to measure functional software requirement specification (SRS) documents?
Well, we need to define some standard tests to measure the requirements. Once each requirement is passed through these tests you can evaluate and freeze the functional requirements.

Let’s take an example. You are working on a web based application. Requirement is as follows:
“Web application should be able to serve the user queries as early as possible”

How will you freeze the requirement in this case?
What will be your requirement satisfaction criteria? To get the answer, ask the stakeholders this question: how much response time is OK for you?
If they say they will accept the response if it is within 2 seconds, then this is your requirement measure. Freeze this requirement and carry out the same procedure for the next requirement.
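
Once frozen in measurable terms, such a requirement can even be checked by an executable test. A rough sketch only; the URL below is a hypothetical endpoint and the 2-second threshold is the example measure agreed above:

```python
import time
import urllib.request

MAX_RESPONSE_SECONDS = 2.0  # the measure agreed with the stakeholders above
URL = "http://example.com/query"  # hypothetical endpoint, for illustration only

def test_query_response_time():
    """Requirement: the application serves user queries within 2 seconds."""
    start = time.monotonic()
    urllib.request.urlopen(URL, timeout=MAX_RESPONSE_SECONDS + 1)
    elapsed = time.monotonic() - start
    assert elapsed <= MAX_RESPONSE_SECONDS, (
        f"query took {elapsed:.2f}s, limit is {MAX_RESPONSE_SECONDS}s")

test_query_response_time()
```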

We just learned how to measure requirements and freeze them through the design, implementation and testing phases.
Now let's take another example. I was working on a web-based project.
The client (stakeholders) specified the project requirements for the initial phase of the project development. My manager circulated all the requirements to the team for review. When we started discussing these requirements, we were just shocked! Everyone had his or her own conception of the requirements. We found a lot of ambiguities in the 'terms' specified in the requirement documents, which were later sent to the client for review and clarification.
The client had used many ambiguous terms with several different meanings, making it difficult to analyze the exact meaning. The next version of the requirement document from the client was clear enough to freeze for the design phase.
From this example we learned that "requirements should be clear and consistent".
The next criterion for testing the requirements specification is "discover missing requirements".
Many times project designers don't get a clear idea about specific modules, and they simply assume some requirements during the design phase. No requirement should be based on assumptions. Requirements should be complete, covering every aspect of the system under development.
Specifications should state both types of requirements, i.e. what the system should do and what it should not.
Generally I use my own method to uncover unspecified requirements. When I read the software requirements specification document (SRS), I note down my own understanding of the requirements that are specified, plus the other requirements the SRS document is supposed to cover. This helps me ask questions about the unspecified requirements, making them clearer.
To check requirements completeness, divide the requirements into three sections: 'must implement' requirements, requirements that are not specified but are 'assumed', and the third type, 'imagination' requirements. Check that all three types of requirements are addressed before the software design phase.
Check whether the requirements are related to the project goal.
Sometimes stakeholders have their own expertise, which they expect to appear in the system under development. They don't consider whether that requirement is relevant to the project in hand. Make sure to identify such requirements. Try to avoid irrelevant requirements in the first phase of the project development cycle. If that is not possible, ask the stakeholders: why do you want to implement this specific requirement? This will describe the particular requirement in detail, making it easier to design the system with the future scope in mind.
But how do you decide whether a requirement is relevant or not?
Simple answer: set the project goal and ask this question: will not implementing this requirement cause any problem in achieving our specified goal? If not, then it is an irrelevant requirement. Ask the stakeholders whether they really want to implement these types of requirements.
In short requirements specification (SRS) doc should address following:
Project functionality (What should be done and what should not)
Software, Hardware interfaces and user interface
System Correctness, Security and performance criteria
Implementation issues (risks) if any
Conclusion:
I have covered all aspects of requirement measurement. To be specific about requirements, I will summarize requirement testing in one sentence:
“Requirements should be clear and specific with no uncertainty, requirements should be measurable in terms of specific values, requirements should be testable having some evaluation criteria for each requirement, and requirements should be complete, without any contradictions”
Testing should start at the requirement phase to avoid further requirement-related bugs. Communicate as much as possible with your stakeholders to clarify all the requirements before starting project design and implementation.
Do you have any experience testing software requirements?  
This is a guest post from Vijay D (a coincidence with my name).
The sample bug/defect report below will give you an exact idea of how to report a bug in a bug tracking tool.
Here is the example scenario that caused a bug:
Let's assume that in your application under test you want to create a new user with user information. For that you need to log on to the application and navigate to the USERS menu > New User, then enter all the details in the 'User form', like First Name, Last Name, Age, Address, Phone etc. Once you enter all this information, you need to click the 'SAVE' button in order to save the user. Now you can see a success message saying, "New User has been created successfully".
But when you logged into your application, navigated to the USERS menu > New User, entered all the required information to create a new user and clicked the SAVE button: BANG! The application crashed and you got an error page on screen. (Capture this error message window and save it as a Microsoft Paint file.)
Now this is the bug scenario and you would like to report this as a BUG in your bug-tracking tool.
How will you report this bug effectively?
Here is the sample bug report for above mentioned example:
(Note that some ‘bug report’ fields might differ depending on your bug tracking system)
SAMPLE BUG REPORT:
Bug Name: Application crash on clicking the SAVE button while creating a new user.
Bug ID: (It will be automatically created by the BUG Tracking tool once you save this bug)
Area Path: USERS menu > New Users
Build Number: Version Number 5.0.1
Severity: HIGH (High/Medium/Low) or 1
Priority: HIGH (High/Medium/Low) or 1
Assigned to: Developer-X
Reported By: Your Name
Reported On: Date
Reason: Defect
Status: New/Open/Active (Depends on the Tool you are using)
Environment: Windows 2003/SQL Server 2005
Description:
Application crash on clicking the SAVE button while creating a new
user, hence unable to create a new user in the application.
Steps To Reproduce:
1) Log on to the application
2) Navigate to the Users Menu > New User
3) Fill in all the user information fields
4) Click the 'Save' button
5) Observe the error page "ORA1090 Exception: Insert values Error…"
6) See the attached logs for more information (attach more logs related to the bug, if any)
7) Also see the attached screenshot of the error page.

Expected result: On clicking the SAVE button, the success message "New User has been created successfully" should be displayed.
(Attach the 'application crash' screenshot, if any.)
Save the defect/bug in the BUG TRACKING TOOL. You will get a bug ID, which you can use for further reference to the bug.
A default 'New bug' mail will go to the respective developer and the default module owner (team leader or manager) for further action.
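
Most trackers can also accept the same report programmatically. As a rough sketch only (the endpoint and field names below are hypothetical; real tools such as Bugzilla or JIRA each define their own REST API), the report above could be submitted as a JSON payload:

```python
import json
import urllib.request

# The sample bug report above, expressed as a JSON payload.
bug = {
    "name": "Application crash on clicking the SAVE button while creating a new user.",
    "area_path": "USERS menu > New Users",
    "build": "5.0.1",
    "severity": "High",
    "priority": "High",
    "assigned_to": "Developer-X",
    "environment": "Windows 2003/SQL Server 2005",
    "steps_to_reproduce": [
        "Log on to the application",
        "Navigate to the Users Menu > New User",
        "Fill in all the user information fields",
        "Click the 'Save' button",
    ],
    "expected_result": "Success message 'New User has been created successfully'",
    "actual_result": "Error page 'ORA1090 Exception: Insert values Error...'",
}

# Hypothetical endpoint, for illustration only.
req = urllib.request.Request(
    "http://bugtracker.example.com/api/bugs",
    data=json.dumps(bug).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment when pointed at a real tracker
```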
Related: If you need more information about writing a good bug report read our previous post “How to write a good bug report“.
Why a good bug report?
If your bug report is effective, the chances are higher that it will get fixed. So getting a bug fixed depends on how effectively you report it. Reporting a bug is a skill, and I will tell you how to achieve that skill.
"The point of writing a problem report (bug report) is to get bugs fixed" – Cem Kaner. If the tester does not report a bug correctly, the programmer will most likely reject it as irreproducible. This can hurt the tester's morale and sometimes ego too. (I suggest not keeping any kind of ego, such as "I reported the bug correctly", "I can reproduce it", "Why did he/she reject the bug?", "It's not my fault", etc.)
What are the qualities of a good software bug report?
Anyone can write a bug report. But not everyone can write an effective bug report. You should be able to distinguish between an average bug report and a good one. How do you distinguish a good bug report from a bad one? It's simple: apply the following characteristics and techniques to report a bug.
1) Having clearly specified bug number:
Always assign a unique number to each bug report. This will help to identify the bug record. If you are using any automated bug-reporting tool then this unique number will be generated automatically each time you report the bug. Note the number and brief description of each bug you reported.
2) Reproducible:
If your bug is not reproducible it will never get fixed. You should clearly mention the steps to reproduce the bug. Do not assume or skip any reproducing step. Step by step described bug problem is easy to reproduce and fix.
3) Be Specific:
Do not write an essay about the problem. Be specific and to the point. Try to summarize the problem in minimum words yet in an effective way. Do not combine multiple problems even if they seem similar. Write different reports for each problem.
How to Report a Bug?
Use the following simple bug report template:
This is a simple bug report format. It may vary depending on the bug report tool you are using. If you are writing the bug report manually then some fields, like the bug number, need to be mentioned specifically and assigned manually.
Reporter: Your name and email address.
Product: In which product you found this bug.
Version: The product version if any.
Component: These are the major sub modules of the product.
Platform: Mention the hardware platform where you found this bug. The various platforms like ‘PC’, ‘MAC’, ‘HP’, ‘Sun’ etc.
Operating system: Mention all operating systems where you found the bug. Operating systems like Windows, Linux, Unix, SunOS, Mac OS. Mention the different OS versions also if applicable like Windows NT, Windows 2000, Windows XP etc.
Priority:
When should the bug be fixed? Priority is generally set from P1 to P5, P1 being "fix the bug with the highest priority" and P5 being "fix when time permits".
Severity:
This describes the impact of the bug.
Types of Severity:
  • Blocker: No further testing work can be done.
  • Critical: Application crash, Loss of data.
  • Major: Major loss of function.
  • Minor: minor loss of function.
  • Trivial: Some UI enhancements.
  • Enhancement: Request for new feature or some enhancement in existing one.
Status:
When you log the bug in a bug tracking system, the bug status is 'New' by default.
Later on, the bug goes through various stages like Fixed, Verified, Reopen, Won't Fix etc.
Click here to read more about detail bug life cycle.
Assign To:
If you know which developer is responsible for that particular module in which bug occurred, then you can specify email address of that developer. Else keep it blank this will assign bug to module owner or Manger will assign bug to developer. Possibly add the manager email address in CC list.
URL:
The page url on which bug occurred.
Summary:
A brief summary of the bug, mostly in 60 words or fewer. Make sure your summary reflects what the problem is and where it is.
Description:
A detailed description of the bug. Use the following fields in the description:
  • Reproduce steps: Clearly mention the steps to reproduce the bug.
  • Expected result: How application should behave on above mentioned steps.
  • Actual result: What is the actual result on running above steps i.e. the bug behavior.
These are the important steps in bug report. You can also add the “Report type” as one more field which will describe the bug type.
The report types are typically:
1) Coding error
2) Design error
3) New suggestion
4) Documentation issue
5) Hardware problem
Some Bonus tips to write a good bug report:
1) Report the problem immediately: If you find a bug while testing, do not wait to write a detailed bug report later. Instead, write the bug report immediately. This will ensure a good and reproducible bug report. If you decide to write the bug report later, chances are high that you will miss important steps in your report.
2) Reproduce the bug three times before writing the bug report: Your bug should be reproducible. Make sure your steps are robust enough to reproduce the bug without any ambiguity. If your bug is not reproducible every time, you can still file it, mentioning the periodic nature of the bug.
3) Test the same bug occurrence on other similar modules:
Sometimes developers use the same code for different similar modules. So chances are high that a bug in one module will occur in other similar modules as well. You can even try to find a more severe version of the bug you found.
4) Write a good bug summary:
The bug summary will help developers to quickly analyze the nature of the bug. A poor-quality report unnecessarily increases development and testing time. Communicate well through your bug report summary. Keep in mind that the bug summary is used as a reference to search for the bug in the bug inventory.
5) Read bug report before hitting Submit button:
Read all the sentences, wording and steps used in the bug report. See if any sentence creates ambiguity that could lead to misinterpretation. Misleading words or sentences should be avoided in order to have a clear bug report.
6) Do not use Abusive language:
It's nice that you did good work and found a bug, but do not use this credit for criticizing the developer or attacking any individual.
Conclusion:
No doubt your bug report should be a high-quality document. Focus on writing good bug reports and spend some time on this task, because the bug report is the main communication point between tester, developer and manager. Managers should make their team aware that writing a good bug report is the primary responsibility of any tester. Your efforts towards writing a good bug report will not only save company resources but also create a good relationship between you and the developers.
For better productivity write a better bug report.
What is Bug/Defect?
Simple Wikipedia definition of Bug is: “A computer bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working correctly or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program’s source code or its design.”
Other definitions can be:
An unwanted and unintended property of a program or piece of hardware, especially one that causes it to malfunction.
or
A fault in a program, which causes the program to perform in an unintended or unanticipated manner.
Lastly, the general definition of a bug is: "failure to conform to specifications".
If you want to detect and resolve defects in the early development stages, defect tracking and the software development phases should start simultaneously.
We will discuss more on Writing effective bug report in another article. Let’s concentrate here on bug/defect life cycle.
Life cycle of Bug:
1) Log new defect
When a tester logs any new bug, the mandatory fields are:
Build version, Submitted on, Product, Module, Severity, Synopsis and Description with steps to reproduce.
To the above list you can add some optional fields if you are using a manual bug submission template.
These optional fields are: Customer name, Browser, Operating system, File attachments or screenshots.
The following fields remain either specified or blank:
If you have the authority to set the bug Status, Priority and 'Assigned to' fields, then you can specify them. Otherwise the test manager will set the status and bug priority and assign the bug to the respective module owner.
Look at the following bug life cycle:
[Figure: Bugzilla bug life cycle]
The figure is quite complicated, but when you consider the significant steps in the bug life cycle you quickly get an idea of the bug's life.
Once successfully logged, the bug is reviewed by the development or test manager. The test manager can set the bug status to Open, assign the bug to a developer, or defer the bug until the next release.
When the bug gets assigned to a developer, he or she can start working on it. The developer can set the bug status to Won't fix, Couldn't reproduce, Need more information, or Fixed.
If the bug status set by the developer is either 'Need more info' or 'Fixed', then QA responds with a specific action. If the bug is fixed, QA verifies it and can set the bug status to Verified closed or Reopen.
Bug status description:
These are various stages of bug life cycle. The status caption may vary depending on the bug tracking system you are using.
1) New: When QA files a new bug.
2) Deferred: If the bug is not related to the current build, cannot be fixed in this release, or is not important enough to fix immediately, the project manager can set the bug status to Deferred.
3) Assigned: The 'Assigned to' field is set by the project lead or manager, who assigns the bug to a developer.
4) Resolved/Fixed: When the developer makes the necessary code changes and verifies them, he/she can mark the bug status as 'Fixed', and the bug is passed to the testing team.
5) Could not reproduce: If the developer is not able to reproduce the bug by the steps given in the bug report by QA, the developer can mark the bug as 'CNR'. QA then needs to check whether the bug is reproducible and can assign it back to the developer with detailed reproduction steps.
6) Need more information: If the developer is not clear about the reproduction steps provided by QA, he/she can mark it as 'Need more information'. In this case QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.
7) Reopen: If QA is not satisfied with the fix and the bug is still reproducible even after the fix, QA can mark it as 'Reopen' so that the developer can take appropriate action.
8) Closed: If the bug is verified by the QA team, the fix is OK and the problem is solved, QA can mark the bug as 'Closed'.
9) Rejected/Invalid: Sometimes the developer or team lead can mark a bug as Rejected or Invalid if the system is working according to specifications and the bug is just due to some misinterpretation.
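
Since the life cycle above is essentially a state machine, it can be expressed compactly in code. A minimal sketch (the status names follow the list above, but the exact transition set is illustrative, since each bug tracking tool defines its own):

```python
from enum import Enum

class Status(Enum):
    NEW = "New"
    DEFERRED = "Deferred"
    ASSIGNED = "Assigned"
    FIXED = "Fixed"
    CNR = "Could not reproduce"
    NEED_INFO = "Need more information"
    REOPEN = "Reopen"
    CLOSED = "Closed"
    REJECTED = "Rejected"

# Allowed transitions, following the stages described above (illustrative).
TRANSITIONS = {
    Status.NEW: {Status.ASSIGNED, Status.DEFERRED, Status.REJECTED},
    Status.DEFERRED: {Status.ASSIGNED},
    Status.ASSIGNED: {Status.FIXED, Status.CNR, Status.NEED_INFO, Status.REJECTED},
    Status.CNR: {Status.ASSIGNED},
    Status.NEED_INFO: {Status.ASSIGNED},
    Status.FIXED: {Status.CLOSED, Status.REOPEN},
    Status.REOPEN: {Status.ASSIGNED},
}

def move(current: Status, new: Status) -> Status:
    """Validate a status change against the life cycle before applying it."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current.value} -> {new.value}")
    return new

status = Status.NEW
status = move(status, Status.ASSIGNED)  # manager assigns the bug
status = move(status, Status.FIXED)     # developer fixes it
status = move(status, Status.CLOSED)    # QA verifies and closes
```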
Are you going to start on a new project for testing? Don't forget to check this testing checklist at each and every step of your project life cycle. The list is largely equivalent to a test plan; it covers all quality assurance and testing standards.
Testing Checklist:
1 Create System and Acceptance Tests [ ]
2 Start Acceptance test Creation [ ]
3 Identify test team [ ]

ETL Strategy to store data validation rules

Every time data moves, the results have to be tested against the expected results. For every ETL process, test conditions for the data are defined before or during the design and development phase itself. Some that are missed can be added later on.
Various test conditions are used to validate data when the ETL process is migrated from DEV to QA to PRD. These test conditions may exist only in the developer's or tester's mind, or be documented in Word or Excel. With time, the test conditions get lost, ignored, or scattered around too much to be really useful.
If the ETL process runs successfully in production without error, that is a good thing. But it does not really mean anything. You still need rules to validate the data processed by the ETL. At this point you need the data validation rules again!
A better ETL strategy is to store the ETL business rules in a RULES table, keyed by target table and source system. These rules can be SQL text. This creates a repository of all the rules in a single location, which can be called by any ETL process or auditor at any phase of the project life cycle.
There is also no need to rewrite or rethink the rules. Any or all of these rules can be made optional, tolerances can be defined, rules can be called immediately after the process is run, or the data can be audited at leisure.
This data validation/auditing system will basically contain:
A table that contains the rules,
A process to call them dynamically, and
A table to store the results of executing the rules.

Benefits:
Rules can be added dynamically with no change to code.
Rules are stored permanently.
Tolerance levels can be changed without ever changing the code.
Business rules can be added or validated by business experts without worrying about the ETL code.
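
A minimal sketch of such a rules repository, using SQLite for illustration (the table and column names here are assumptions, not a standard; each rule is stored as a SQL query that returns the number of violating rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE etl_validation_rules (
    rule_id       INTEGER PRIMARY KEY,
    target_table  TEXT,
    source_system TEXT,
    rule_sql      TEXT,    -- query returning the number of violating rows
    tolerance     INTEGER, -- violations allowed before the rule fails
    active        INTEGER  -- rules can be switched off without code changes
);
CREATE TABLE etl_validation_results (
    rule_id    INTEGER,
    run_ts     TEXT DEFAULT CURRENT_TIMESTAMP,
    violations INTEGER,
    passed     INTEGER
);
-- Example target table and rule (illustrative only).
CREATE TABLE customer_dim (customer_id INTEGER, zip TEXT);
INSERT INTO customer_dim VALUES (1, '12345'), (2, NULL);
INSERT INTO etl_validation_rules VALUES
    (1, 'customer_dim', 'CRM',
     'SELECT COUNT(*) FROM customer_dim WHERE zip IS NULL', 0, 1);
""")

def run_rules(target_table: str) -> None:
    """Execute every active rule for a target table and store the outcome."""
    rules = conn.execute(
        "SELECT rule_id, rule_sql, tolerance FROM etl_validation_rules "
        "WHERE target_table = ? AND active = 1", (target_table,)).fetchall()
    for rule_id, rule_sql, tolerance in rules:
        violations = conn.execute(rule_sql).fetchone()[0]
        conn.execute(
            "INSERT INTO etl_validation_results (rule_id, violations, passed) "
            "VALUES (?, ?, ?)", (rule_id, violations, int(violations <= tolerance)))

run_rules("customer_dim")
print(conn.execute(
    "SELECT rule_id, violations, passed FROM etl_validation_results").fetchall())
```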
Data Warehouse Testing
Businesses are increasingly focusing on the collection and organization of data for strategic decision-making. The ability to review historical trends and monitor near real-time operational data has become a key competitive advantage.
We provide practical recommendations for testing extract, transform and load (ETL) applications based on years of experience testing data warehouses in the financial services and consumer retailing areas.
There is an exponentially increasing cost associated with finding software defects later in the development lifecycle. In data warehousing, this is compounded because of the additional business costs of using incorrect data to make critical business decisions. Given the importance of early detection of software defects, here are some general goals of testing an ETL application:
  • Data completeness. Ensures that all expected data is loaded.
  • Data transformation. Ensures that all data is transformed correctly according to business rules and/or design specifications.
  • Data quality. Ensures that the ETL application correctly rejects, substitutes default values, corrects or ignores and reports invalid data.
  • Performance and scalability. Ensures that data loads and queries perform within expected time frames and that the technical architecture is scalable.
  • Integration testing. Ensures that the ETL process functions well with other upstream and downstream processes.
  • User-acceptance testing. Ensures the solution meets users’ current expectations and anticipates their future expectations.
  • Regression testing. Ensures existing functionality remains intact each time a new release of code is completed.
Data Completeness
One of the most basic tests of data completeness is to verify that all expected data loads into the data warehouse. This includes validating that all records, all fields and the full contents of each field are loaded. Strategies to consider include:
  • Comparing record counts between source data, data loaded to the warehouse and rejected records.
  • Comparing unique values of key fields between source data and data loaded to the warehouse. This is a valuable technique that points out a variety of possible data errors without doing a full validation on all fields.
  • Utilizing a data profiling tool that shows the range and value distributions of fields in a data set. This can be used during testing and in production to compare source and target data sets and point out any data anomalies from source systems that may be missed even when the data movement is correct.
  • Populating the full contents of each field to validate that no truncation occurs at any step in the process. For example, if the source data field is a string(30) make sure to test it with 30 characters.
  • Testing the boundaries of each field to find any database limitations. For example, for a decimal(3) field include values of -99 and 999, and for date fields include the entire range of dates expected. Depending on the type of database and how it is indexed, it is possible that the range of values the database accepts is too small.
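
As a small illustration of the first two strategies, record counts and key-field comparisons can be automated with a few queries. This is only a sketch; the table and column names (src_orders, wh_orders, order_id) are invented for the example, and SQLite stands in for the real source and warehouse connections.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for source and target connections
conn.executescript("""
CREATE TABLE src_orders (order_id INTEGER, amount REAL);
CREATE TABLE wh_orders  (order_id INTEGER, amount REAL);
INSERT INTO src_orders VALUES (1, 10.0), (2, 20.0), (3, 30.0);
INSERT INTO wh_orders  VALUES (1, 10.0), (2, 20.0);
""")

# Strategy 1: compare record counts between source and warehouse.
src_count = conn.execute("SELECT COUNT(*) FROM src_orders").fetchone()[0]
tgt_count = conn.execute("SELECT COUNT(*) FROM wh_orders").fetchone()[0]
print(f"source={src_count}, target={tgt_count}, missing={src_count - tgt_count}")

# Strategy 2: compare unique values of a key field to locate the gaps.
missing_keys = conn.execute("""
    SELECT order_id FROM src_orders
    EXCEPT
    SELECT order_id FROM wh_orders
""").fetchall()
print("keys not loaded:", [k for (k,) in missing_keys])  # [3]
```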
Data Transformation
Validating that data is transformed correctly based on business rules can be the most complex part of testing an ETL application with significant transformation logic. One typical method is to pick some sample records and “stare and compare” to validate data transformations manually. This can be useful but requires manual testing steps and testers who understand the ETL logic. A combination of automated data profiling and automated data movement validations is a better long-term strategy. Here are some simple automated data movement techniques:
  • Create a spreadsheet of scenarios of input data and expected results and validate these with the business customer. This is a good requirements elicitation exercise during design and can also be used during testing.
  • Create test data that includes all scenarios. Elicit the help of an ETL developer to automate the process of populating data sets with the scenario spreadsheet to allow for flexibility because scenarios will change.
  • Utilize data profiling results to compare range and distribution of values in each field between source and target data.
  • Validate correct processing of ETL-generated fields such as surrogate keys.
  • Validate that data types in the warehouse are as specified in the design and/or the data model.
  • Set up data scenarios that test referential integrity between tables. For example, what happens when the data contains foreign key values not in the parent table?
  • Validate parent-to-child relationships in the data. Set up data scenarios that test how orphaned child records are handled.
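
For instance, the referential integrity scenario in the last two points reduces to an orphan query. A minimal sketch, again with invented table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (customer_id INTEGER PRIMARY KEY);
CREATE TABLE orders   (order_id INTEGER, customer_id INTEGER);
INSERT INTO customer VALUES (1), (2);
-- customer_id 99 has no parent row: an orphaned child record.
INSERT INTO orders VALUES (100, 1), (101, 99);
""")

# Child rows whose foreign key value is not in the parent table.
orphans = conn.execute("""
    SELECT o.order_id, o.customer_id
    FROM orders o
    LEFT JOIN customer c ON c.customer_id = o.customer_id
    WHERE c.customer_id IS NULL
""").fetchall()
print("orphaned orders:", orphans)  # [(101, 99)]
```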
Data Quality
For the purposes of this discussion, data quality is defined as “how the ETL system handles data rejection, substitution, correction and notification without modifying data.” To ensure success in testing data quality, include as many data scenarios as possible. Typically, data quality rules are defined during design, for example:
  • Reject the record if a certain decimal field has nonnumeric data.
  • Substitute null if a certain decimal field has nonnumeric data.
  • Validate and correct the state field if necessary based on the ZIP code.
  • Compare product code to values in a lookup table, and if there is no match load anyway but report to users.
Depending on the data quality rules of the application being tested, scenarios to test might include null key values, duplicate records in source data and invalid data types in fields (e.g., alphabetic characters in a decimal field). Review the detailed test scenarios with business users and technical designers to ensure that all are on the same page. Data quality rules applied to the data will usually be invisible to the users once the application is in production; users will only see what’s loaded to the database. For this reason, it is important to ensure that what is done with invalid data is reported to the users. These data quality reports present valuable data that sometimes reveals systematic issues with source data. In some cases, it may be beneficial to populate the “before” data in the database for users to view.
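
The first two example rules above might look like this in code. A sketch only; the field names (unit_price, discount) are invented:

```python
def apply_quality_rules(record: dict) -> tuple:
    """Apply the two example rules above; returns (record_or_None, notes)."""
    notes = []
    # Rule 1: reject the record if the decimal field has nonnumeric data.
    try:
        float(record["unit_price"])
    except (TypeError, ValueError):
        notes.append(f"rejected: nonnumeric unit_price {record['unit_price']!r}")
        return None, notes
    # Rule 2: substitute null if the other decimal field has nonnumeric data.
    try:
        float(record["discount"])
    except (TypeError, ValueError):
        notes.append(f"substituted null for nonnumeric discount {record['discount']!r}")
        record["discount"] = None
    return record, notes

# The "before" data and the report of what was done with it (see above).
rec, notes = apply_quality_rules({"unit_price": "9.99", "discount": "N/A"})
print(rec)    # {'unit_price': '9.99', 'discount': None}
print(notes)  # ["substituted null for nonnumeric discount 'N/A'"]
```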
Performance and Scalability
As the volume of data in a data warehouse grows, ETL load times can be expected to increase, and performance of queries can be expected to degrade. This can be mitigated by having a solid technical architecture and good ETL design. The aim of the performance testing is to point out any potential weaknesses in the ETL design, such as reading a file multiple times or creating unnecessary intermediate files. The following strategies will help discover performance issues:
  • Load the database with peak expected production volumes to ensure that this volume of data can be loaded by the ETL process within the agreed-upon window.
  • Compare these ETL loading times to loads performed with a smaller amount of data to anticipate scalability issues. Compare the ETL processing times component by component to point out any areas of weakness.
  • Monitor the timing of the reject process and consider how large volumes of rejected data will be handled.
  • Perform simple and multiple join queries to validate query performance on large database volumes. Work with business users to develop sample queries and acceptable performance criteria for each query.
Integration Testing
Typically, system testing only includes testing within the ETL application. The endpoints for system testing are the input and output of the ETL code being tested. Integration testing shows how the application fits into the overall flow of all upstream and downstream applications. When creating integration test scenarios, consider how the overall process can break and focus on touch points between applications rather than within one application. Consider how process failures at each step would be handled and how data would be recovered or deleted if necessary.
Most issues found during integration testing are either data related to or resulting from false assumptions about the design of another application. Therefore, it is important to integration test with production-like data. Real production data is ideal, but depending on the contents of the data, there could be privacy or security concerns that require certain fields to be randomized before using it in a test environment. As always, don’t forget the importance of good communication between the testing and design teams of all systems involved. To help bridge this communication gap, gather team members from all systems together to formulate test scenarios and discuss what could go wrong in production. Run the overall process from end to end in the same order and with the same dependencies as in production. Integration testing should be a combined effort and not the responsibility solely of the team testing the ETL application

Flat File Validations

When we are extracting flat files, what are the basic required validations?
Ans. Flat file validations
Following are some common validations performed:
a) Check for blank lines and remove them.
b) Check the number of columns in each row of the file.
c) If there is a trailer line in the flat file containing additional information, such as the total number of records, then cross-check that the number of records specified in the trailer matches the actual number of records.
d) Check whether a column contains blank values (if it is expected to have values).
Data Validations
It depends upon the requirements you have.

Some basic checks:

1. NULL validation
2. Data type validation

If you consider data quality, the points below may come up:

1. Address field validations
2. Word validations
3. Character validations
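
A minimal sketch of the flat-file checks (a) to (d) above in code. Assumptions: a comma-delimited file whose optional trailer line has the form 'TRAILER|<record count>'; adjust to your own layout:

```python
import csv

EXPECTED_COLUMNS = 5  # assumed layout for this example

def validate_flatfile(path: str) -> list:
    """Run the basic flat-file checks described above; return a list of findings."""
    findings = []
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    # a) Check for blank lines (and drop them from further checks).
    data_rows = [r for r in rows if any(cell.strip() for cell in r)]
    if len(data_rows) != len(rows):
        findings.append(f"{len(rows) - len(data_rows)} blank line(s) found")
    # c) Cross-check the trailer record count, if a trailer line is present.
    if data_rows and data_rows[-1][0].startswith("TRAILER"):
        trailer = data_rows.pop()
        declared = int(trailer[0].split("|")[1])
        if declared != len(data_rows):
            findings.append(f"trailer says {declared} records, file has {len(data_rows)}")
    for i, row in enumerate(data_rows, start=1):
        # b) Check the number of columns in each row.
        if len(row) != EXPECTED_COLUMNS:
            findings.append(f"row {i}: {len(row)} columns, expected {EXPECTED_COLUMNS}")
        # d) Check for blank values in a mandatory column (here: the first).
        if not row[0].strip():
            findings.append(f"row {i}: mandatory column 1 is blank")
    return findings
```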

What is Requirements Traceability?
Requirements traceability is defined as the ability to describe and follow the life of a requirement, in both a forward and backward direction (i.e., from its origins, through its development and specification, to its subsequent deployment and use, and through periods of ongoing refinement and iteration in any of these phases).
Traceability ensures completeness: that all lower-level requirements come from higher-level requirements, and that all higher-level requirements are allocated to lower-level requirements.
Traceability is also used in managing change and provides the basis for test planning.
*      Benefits:
*      To identify the extent to which the business requirements have been covered by functional and system requirements.
*      To identify the ‘orphan’ functional and system requirements. This would indicate a missing trace between requirements
*      To identify the extent to which system requirements are covered from a design perspective.
*      To identify the functional coverage of the QA test scenarios.
*      To identify which design components implement a requirement.
*      To identify the test scenarios that will be used to verify a requirement
*      To analyze the impact of changing requirements on the software artifacts created in subsequent phases of the SDLC
For any given project, three important questions need to be answered before embarking on any particular requirements traceability approach:
*      What needs to be traced?
*      What type of linkages need to be made?
*      How, when and by whom should the links be established and maintained?
*      What needs to be traced:
*      Application Components
*      Business Requirements
*      Functional Requirements
*      System Requirements
*      Design Artifacts
*      Testing Artifacts
*      Type of Links:
*      Forward, Backward links between requirements
*      Vertical links between requirements and other artifacts
*      Internal /External Links
*      Who, How and When?
*      Project Manager, Business Analyst, Development Lead?
*      Through Tools or through document linking and references
*      Stages in SDLC with well defined entry and exit criteria for defining the links
*      Link requirements to external documents/ URL’s to enhance requirement description.
*      Link requirements across projects
*      Get a full view of how requirements are related to each other through “Matrix” view or “Tree” view
*      Get full description of the linked requirements through click of a button
*      Prevent unpleasant surprises through real-time alerts when requirements change.
*      Traces are automatically marked suspect when requirements change
*      Get full description of the change by comparing versions of the requirements
*      Generate functional coverage reports to reflect requirements which have not been addressed in the project
*      Generate test coverage reports to identify requirements which have not been taken into account for testing purposes
*      Features:
*      Automatic conversion of links to suspect when requirements change
*      Trigger alerts to concerned parties when requirements change
*      Facility to identify the change in the request at click of a button
*      Features:
*      Identify orphan or hanging requirements
*      Identify implied links
*      Generate links on the fly
*      Features:
*      Trace to requirements in external projects
*      Trace to artifacts in configuration management tools
*      Trace to artifacts in design tools

