SQA Engineer at Federal Reserve Bank of San Francisco
San Francisco Bay Area
Computer Software
• Over 7 years of IT experience in QA testing and ETL (Informatica 7.1/6.2/5.1), testing software systems in data warehouse, client/server (Oracle) and web-based environments on Windows and UNIX platforms
• Expertise in full-lifecycle implementation and QA testing
• Analyzed requirements, use cases and technical specifications to develop test plans and test cases
• Hands-on experience in writing test scripts, preparing test data, testing Informatica mappings and creating SQL scripts using stored procedures, functions and PL/SQL
• Performed manual, unit, integration, system, functional, regression, load, performance, stress and acceptance testing
• Designed, developed and tested complex mappings; experienced in writing test scripts and test data and in testing Informatica mappings
• Experienced in data warehouse development work involving data migration, data conversion and ETL operations using PowerCenter 6.2/5.1
• Extensively used Informatica Designer components such as Source Analyzer, Transformation Developer, Mapping Designer and Mapplet Designer, along with Workflow Manager and Workflow Monitor
• Experienced in developing applications using Oracle 10g/9i/8i/8.x/7.x, VB, HTML, SQL, C++ and PL/SQL
• Strong skills in Oracle PL/SQL programming, backend programming and shell scripting, used for scheduling sessions
• Extracted data from various data sources such as Oracle, MS SQL Server, IBM DB2 and flat files
• Experienced in performance tuning of Informatica mappings; used the Informatica scheduler and UNIX shell scripts for scheduling sessions
• Good problem-solving ability in diverse software environments and very good communication skills
Specialties
TECHNICAL SKILLS
Testing Tools: QTP, WinRunner, TestDirector, Mercury Quality Center, Rational ClearQuest
ETL Tools: Informatica 6.x/5.x/4.x, Ab Initio 1.15
Database: Oracle 8/8i/9i, MS SQL Server and DB2
Languages: SQL, PL/SQL
Reporting: Business Objects 6.x/5.x
Operating System: UNIX, MS Windows 2000/NT
ERP: PeopleSoft
CERTIFICATION
• Oracle Certified Professional
• ITIL Foundation certification
Government Agency; Banking industry
November 2009 – Present (1 year 10 months)

BI SQA Engineer
Public Company; SNDK; Semiconductors industry
November 2007 – October 2009 (2 years)
Responsible for gathering test requirements and scoping out the test plan.
Responsible for creating the test plan and the scope of testing.
Created test case scenarios, executed test cases and maintained defects in internal bug tracking systems.
Responsible for extensive data validations on the data warehouse and Business Objects reports.
Responsible for testing the Business Objects universe and universe enhancements.
As an ETL tester, responsible for the business requirements, ETL analysis, ETL testing, and the design of the flow and the logic for the data warehouse project.
Responsible for leading the team and coordinating with the offshore team.
ETL Tester
Public Company; eBay; Computer Software industry
March 2007 – October 2007 (8 months)
Created test case scenarios, executed test cases and maintained defects in internal bug tracking systems.
Developed and executed various manual testing scenarios and thoroughly documented the process to perform functional testing of the application.
Performed extensive data validations against the data warehouse.
Loaded flat-file data into Teradata tables using UNIX shell scripts.
As an ETL tester, responsible for the business requirements, ETL analysis, ETL testing, and the design of the flow and the logic for the data warehouse project using Ab Initio.
ETL Tester
Public Company; SYMC; Computer Software industry
July 2006 – February 2007 (8 months)
Created test case scenarios, executed test cases and maintained defects in Remedy.
Tested reports and ETL mappings from source to target.
Developed and executed various manual testing scenarios and thoroughly documented the process to perform functional testing of the application.
Tested Informatica mappings; performed data validations against the data warehouse.
As an ETL tester, responsible for the business requirements, ETL analysis, ETL testing, and the design of the flow and the logic for the data warehouse project.
Application QA Tester
GMAC
Computer Software industry
April 2006 – July 2006 (4 months)
Designed and developed the test plans and test cases for the .NET online broker user interface.
Identified critical checkpoints and performed manual testing to check data consistency between the Mflex–Dataflux platform and the broker UI.
Performed functional testing of the application to establish conformance to the specification requirements.
Performed field-level and data-level validations on the different modules of the web UI; set up regression test units for backend validations.
Designed and executed test scenarios and documented them in the Test Director defect tracking system.
Designed and prepared scripts to monitor uptime/downtime of different system components.
ETL SQA/Tester
Privately Held; Biotechnology industry
February 2005 – March 2006 (1 year 2 months)

Tested reports and ETL mappings from source to target.
Created test case scenarios, executed test cases and maintained them.
Managed test cases in Mercury Quality Center and tracked defects using Rational ClearQuest.
Developed and executed various manual testing scenarios and neatly documented the process to perform functional testing of the application.
Involved in testing of Informatica mappings; performed data validation against the data warehouse.
Worked as an ETL tester responsible for the requirements and ETL analysis, ETL testing, and the design of the flow and the logic for the data warehouse project.
Padma Kandaswamy's Education
University of Madras
Masters, Computer Science
1997 – 2002
Testing: A Sample Test Plan
If you decide to use the ‘10-step process for developing a test plan’, I have provided a fairly comprehensive description of the tasks and events involved. But I thought that an example of a test plan created using the process might clarify some questions and better illustrate the result. Keep in mind that this is a completely fictional project. Any resemblance to a real project is purely coincidental…
This is the Master Test Plan for the Reassigned Sales Re-write project. This plan will address only those items and elements that are related to the Reassigned Sales process; both directly and indirectly affected elements will be addressed. The primary focus of this plan is to ensure that the new Reassigned Sales application provides the same level of information and detail as the current system while allowing for improvements and increases in data acquisition and level of detail available (granularity).
The project will have three levels of testing: Unit, System/Integration and Acceptance. The details for each level are addressed in the approach section and will be further defined in the level-specific plans.
The estimated time line for this project is very aggressive (six (6) months); as such, any delays in the development process or in the installation and verification of the third party software could have significant effects on the test plan. The acceptance testing is expected to take one (1) month from the date of application delivery from system test and is to be done in parallel with the current application process.
The following is a list, by version and release, of the items to be
tested:
A. EXTOL EDI package, Version 3.0
If a new release is available prior to roll-out it will not be used
until after installation. It will be a separate upgrade/update project.
B. DNS PC EDI transaction package, Version 2.2
If a new release is available prior to roll-out it will not be used
until after installation. It will be a separate upgrade/update project.
C. Custom PC EDI transaction package (two distributors only).
D. New reassigned sales software, initial version to be Version 1.0
A detailed listing of programs, databases, screens and reports will be
provided in the system and detailed design documents.
E. Order Entry EDI interface software, Current version at time of pilot.
Currently, version 4.1.
F. Reassigned Sales System requirements document, SST_RQMT.WPD version
4.1
G. Reassigned Sales System Design Document, SST_SYSD.WPD version 3.02
H. Reassigned Sales Detail Design Document, SST_DTLD.WPD version 3.04
There are several parts of the project that are not within the control
of the Reassigned Sales application but have direct impacts on the process and
must be checked as well.
A. The local AS/400 based vendor supplied EDI software package. This
package will be providing all the reformatting support from the ANSI X12 EDI
formats to the internal AS/400 data base file formats.
B. The PC based software package installed at each distributor's location
(both custom written and vendor supplied) will be providing the formatting of
the distributors data into the correct EDI X12 formats.
C. Backup and Recovery of the EDI transmission files, local databases and
restart of the translation process, must be carefully checked.
D. The ability to restart the application in the middle of the process is a
critical factor to application reliability. This is especially true in the case
of the transmission files as once the data is pulled from the mail box it is no
longer available there and must be protected locally.
E. Database security and access must be defined and verified, especially
for files shared between the Order Entry application and the Reassigned Sales
process. All basic security will be provided through the AS/400 systems native
security process.
The following is a list of the areas to be focused on during testing of
the application.
A. New EDI data acquisition process.
B. Redesigned On-line screens.
C. Redesigned/Converted reports.
D. New Automated Archive process.
E. Interface to the Order Entry system and data bases.
F. Computation of Sales Activity by region for commissions.
The following is a list of the areas that will not be specifically
addressed. All testing in these areas will be indirect as a result of other
testing efforts.
A. Non-EDI Order Entry processes.
Only the EDI interface of the Order Entry application will be verified.
Changes to the EDI interface to support Reassigned Sales are not anticipated to
have an impact on the Order Processing application. Order Entry is a separate
application sharing the data interface only, orders will continue to process in
the same manner.
B. Network Security and dial-in access.
Changes to include EDI transactions for reassigned sales will have no
impact on the security aspects of the network or the EXTOL/EDI interface.
C. Operational aspects of the EDI process.
Changes to include EDI transactions for reassigned sales will have no
impact on the operational aspects of the EXTOL/EDI interface.
D. PC based spreadsheet analysis applications using Reassigned Sales
data.
These applications are completely under the control of the customer and are
outside the scope of this project. The necessary data base format information
will be provided to the customers to allow them to extract data. Testing of
their applications is the responsibility of the application
maintainer/developer.
E. Business Analysis functions using Reassigned Sales data.
These applications are completely under the control of the management
support team and are outside the scope of this project. The necessary data base
format information will be provided to the support team to allow them to
extract data. Testing of their applications is the responsibility of the
application maintainer/developer.
F. Marketing/Forecasting processes using Reassigned Sales data.
These applications are completely under the control of marketing and are
outside the scope of this project. The necessary data base format information
will be provided to marketing to allow them to extract data. Testing of their
applications is the responsibility of the application maintainer/developer.
Testing Levels
The testing for the Reassigned Sales project will consist of Unit,
System/Integration (combined) and Acceptance test levels. It is hoped that
there will be at least one full time independent test person for
system/integration testing. However, with the budget constraints and time line
established; most testing will be done by the test manager with the development
teams participation.
UNIT Testing will be done by the developer and will be approved by the
development team leader. Proof of unit testing (test case list, sample output,
data printouts, defect information) must be provided by the programmer to the
team leader before unit testing will be accepted and passed on to the test
person. All unit test information will also be provided to the test person.
SYSTEM/INTEGRATION Testing will be performed by the test manager and
development team leader with assistance from the individual developers as
required. No specific test tools are available for this project. Programs will
enter into System/Integration test after all critical defects have been
corrected. A program may have up to two Major defects as long as they do not impede testing of the program (i.e., there is a workaround for the error).
ACCEPTANCE Testing will be performed by the actual end users with the
assistance of the test manager and development team leader. The acceptance test
will be done in parallel with the existing manual ZIP/FAX process for a period
of one month after completion of the System/Integration test process.
Programs will enter into Acceptance test after all critical and major
defects have been corrected. A program may have one major defect as long as it does not impede testing of the program (i.e., there is a workaround for the error). Prior to final completion of acceptance testing all open critical and
major defects MUST be corrected and verified by the Customer test
representative.
A limited number of distributors will participate in the initial
acceptance test process. Once acceptance test is complete, distributors will be
added as their ability to generate the required EDI data is verified and
checked against their FAX/ZIP data. As such, some distributors will be in
actual production and some in parallel testing at the same time. This will
require careful coordination of the control tables for the production system to
avoid posting test data into the system.
Configuration Management/Change Control
Movement of programs from the development portion of the ‘RED’
system to the test portion of the ‘RED’ system will be controlled through
the existing Configuration Management application process, ‘EXTRACT’. This
will ensure that programs under development and those in full test will have
the same version controls and tracking of changes. The same extract process
will be used to migrate the programs from the Development/Test ‘RED’ system
to the production ‘BLUE’ system once all testing has been completed
according to published plans and guidelines.
All Unit and initial system testing will be performed on the Development
AS/400 ‘RED’ system. Once the system has reached a reasonable level of
stability, no critical or major defects outstanding, initial pilot testing will
be done on the production AS/400 ‘BLUE’ system. All testing done on the
‘BLUE’ system will be done in a parallel mode with all controls set to
prevent actual updating of the production files.
This will allow some early testing of the numbers received through the
old ZIP/FAX process and the higher level of detail received through the new EDI
process. This will also help identify potential problems with the comparison of
the two sets of numbers.
All changes, enhancements and other modification requests to the system
will be handled through the published change control procedures. Any
modifications to the standard procedures are identified in the project plan
change control section.
Test Tools
The only test tools to be used are the standard AS/400 provided
utilities and commands.
A. The Program Development Manager (PDM) will be used as the source version
configuration management tool in conjunction with the in-house
check-in/check-out control utility. The check-in/out utility is part of each
developer's standard AS/400 access menu.
B. The initial prototypes for the new screens will be developed using
the AS/400 Screen Design Aid (SDA). The initial layout and general content of
the screens will be shown to the sales administration staff prior to proceeding
with testing and development of the screens.
C. All editing, compiling and debugging will be done using the Source
Entry Utility (SEU).
D. Data acquisition will be from actual production files where available
using the AS/400 data base copy command CPYF and its various functions.
Additional data will be created and modified as needed using the Data File
Utility (DFU). No changes will ever be made to actual production files under
any circumstances.
E. Initial data for EDI testing will be done using one or two beta sites
and replicating the data at the mailbox location or locally in the control
files, to create high volume data and to simulate multiple distributors sending
in data.
Meetings
The test team will meet once every two weeks to evaluate progress to
date and to identify error trends and problems as early as possible. The test
team leader will meet with development and the project manager once every two
weeks as well. These two meetings will be scheduled on different weeks.
Additional meetings can be called as required for emergency situations.
Measures and Metrics
The following information will be collected by the Development team
during the Unit testing process. This information will be provided to the test
team at program turnover as well as be provided to the project team on a
biweekly basis.
1. Defects by module and severity.
2. Defect Origin (Requirement,
Design, Code)
3. Time spent on defect resolution
by defect, for Critical & Major only. All Minor defects can be totaled
together.
The following information will be collected by the test team during all
testing phases. This information will be provided on a biweekly basis to the
test manager and to the project team.
1. Defects by module and severity.
2. Defect Origin (Requirement, Design, Code)
3. Time spent on defect investigation by defect, for Critical & Major
only. All Minor defects can be totaled together.
4. Number of times a program submitted to test team as ready for test.
5. Defects located at higher levels that should have been caught at lower
levels of testing.
The test process will be completed once the initial set of distributors
have successfully sent in reassigned sales data for a period of one month and
the new EDI data balances with the old ZIP/FAX data received in parallel. When
the sales administration staff is satisfied that the data is correct the
initial set of distributors will be set to active and all parallel stopped for
those accounts.
At this point the next set of distributors will begin the parallel
process, if not already doing so. Only the initial set of distributors must
pass the data comparison test to complete the testing, at that point the
application is considered live. All additional activations will be on an as
ready basis. When a distributor is ready, and their data is verified, they will
then also be activated.
A. No Distributors are ready for testing at pilot initiation.
The pilot project will be delayed until at least three Distributors are
ready to initiate the pilot process. No additional elements will be added to
the Reassigned Sales project during this delay.
B. Unavailability of two EDI mail boxes.
In the event two production lines and mail box facilities cannot be
obtained the current single production line and mail box will continue to be
used until a second line becomes available. This will necessitate careful
coordination between the Order Entry department and the Reassigned Sales group.
C. Distributor PC EDI software delays.
In the event of a delay in the delivery or availability of the PC
software package, the only major delay will be in pilot testing. Unit,
Integration and Systems testing can continue using limited data until such time
as the PC software is ready.
This will also add time to the lower levels of testing as full complete
testing cannot be done without reasonable amounts of data. The data can only be
derived from actual transmissions from the PC software package.
Acceptance test plan
System/Integration test plan
Unit test plans/turnover documentation
Screen prototypes
Report mock-ups
Defect/Incident reports and summaries
Test logs and turnover reports
TASK | Assigned To | Status
Create Acceptance Test Plan | TM, PM, Client |
Create System/Integration Test Plan | TM, PM, Dev. |
Define Unit Test rules and Procedures | TM, PM, Dev. |
Define Turnover procedures for each level | TM, Dev |
Verify prototypes of Screens | Dev, Client, TM |
Verify prototypes of Reports | Dev, Client, TM |
The following elements are required to support the overall testing
effort at all levels within the reassigned sales project:
A. Access to both the development and production AS/400 systems. For
development, data acquisition and testing.
B. A communications line to the EDI mailbox facility. This will have to be
a shared line with the Order Entry process as only one mailbox is in use. There
will have to be a coordinated effort to determine how often to poll the mailbox
as the order entry process requires that data be accessed every hour and the
sales process really only needs to be pulled once a day.
C. An installed and functional copy of the AS/400 based EDI vendor package.
D. At least one distributor with an installed copy of the PC based EDI
vendor package for sales data.
E. Access to the master control tables (data bases) for controlling the
production/testing environment on both production and development systems.
F. Access to the nightly backup/recovery process.
It is preferred that there will be at least one (1) full time tester
assigned to the project for the system/integration and acceptance testing
phases of the project. This will require assignment of a person part time at
the beginning of the project to participate in reviews etc., and approximately
four months into the project they would be assigned full time. If a separate
test person is not available the project manager/test manager will assume this
role.
In order to provide complete and proper testing the following areas need
to be addressed in terms of training.
A. The developers and tester(s) will need to be trained on the basic operations
of the EDI interface. Prior to final acceptance of the project the operations
staff will also require complete training on the EDI communications process.
B. The sales administration staff will require training on the new screens
and reports.
C. At least one developer and operations staff member needs to be trained
on the installation and control of the PC-based distributors' EDI package. The distributors' personnel will also have to be trained on the PC-based package and
its operational characteristics.
MANUAL TESTING
What is software testing?
Testing is executing a program with the intention of finding defects.
Fault: A condition that causes the software to fail to perform its required function.
Error: The difference between actual output and expected output.
Failure: The inability of a system or component to perform the required function according to its specification.
WHY S/W TESTING?
· To discover defects.
· To prevent the user from detecting problems.
· To prove that the s/w has no defects.
· To learn about the reliability of the software.
· To ensure that the product works as the user expected.
· To stay in business.
· To avoid being sued by customers.
· To detect defects early, which helps in reducing the cost of fixing those defects.
WHY EXACTLY IS TESTING DIFFERENT FROM QA/QC?
Testing is the process of creating, implementing and evaluating tests. Testing measures software quality. Testing can find faults; when they are removed, software quality is improved.
Simply: testing means “quality control”.
Quality control measures the quality of a product. Quality assurance measures the quality of the processes used to create a quality product.
Quality Control is the process of inspections, walkthroughs and reviews.
Inspection: An inspection is more formalized than a ‘walkthrough’ – typically with a group of people including a moderator, a reader and a recorder to take notes. The subject of the inspection is typically a document such as a requirements specification or a test plan, and the purpose is to find problems and see what is missing, not to fix anything. The primary purpose of an inspection is to detect defects at different stages during a project.
Walkthrough: An informal meeting. The agenda of the meeting is defined, but the members come without any preparation. The author describes the work product in an informal meeting to his peers or superiors to get feedback or to inform or explain their work product.
Reviews: Review means re-verification. Reviews have been found to be extremely effective for detecting defects, improving productivity and lowering costs. They provide good checkpoints for management to study the progress of a particular project. Reviews are also a good tool for ensuring quality control. In short, they have been found to be extremely useful by a diverse set of people and have found their way into the standard management and quality control practice of many institutions. Their use continues to grow.
Quality Assurance:
Quality assurance measures the quality of the processes used to create a quality product. Software QA involves the entire s/w development process – monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.
Why do we need an approach for testing?
Yes, we definitely need an approach for testing. To overcome the following problems, we need a formal approach to testing:
· Incomplete functional coverage
· No risk management
· Too little emphasis on user tasks
· Inefficiency over the long term
AREAS OF TESTING:
1. Black box testing
2. White box testing
3. Grey box testing

1. Black Box Testing
Black box testing is also called functionality testing. In this testing, testers are asked to test the correctness of the functionality with the help of inputs and valid outputs. Black box testing is not based on any knowledge of internal design or code; tests are based on requirements and functionality.
Approach:
Equivalence Class
Boundary Value Analysis
Error Guessing
Equivalence Class
· For each piece of the specification, generate one or more equivalence classes.
· Label the classes as “valid” or “invalid”.
· Generate one test case for each invalid equivalence class.
· Generate a test case that covers as many valid equivalence classes as possible.
Eg: In LIC, different types of policies are available:

Policy type | Age
1 | 0-5 years
2 | 6-12 years
3 | 13-21 years
4 | 21-40 years
5 | 40-60 years

Here we test each and every point. Suppose 0-5: we write test cases for 0, 1, 2, 3, 4 and 5. Here we divide who comes under which policy and write TCs for the valid and invalid classes.
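As an illustration only (the helper name below is hypothetical; the bands are copied from the table above, not from any real LIC system), the classes can be sketched in Python:

```python
# Illustrative sketch: the LIC age bands above as equivalence classes.
# Note: types 3/4 and 4/5 share a boundary age, exactly as in the table;
# this sketch returns the first matching policy.
POLICY_AGE_BANDS = {
    1: (0, 5),
    2: (6, 12),
    3: (13, 21),
    4: (21, 40),
    5: (40, 60),
}

def policy_for_age(age):
    """Return the first policy type whose band contains the age, else None."""
    for policy, (low, high) in POLICY_AGE_BANDS.items():
        if low <= age <= high:
            return policy
    return None  # age falls in the invalid equivalence class

# One representative value per valid class, plus two invalid values.
for age in (3, 9, 17, 30, 50, -1, 75):
    print(age, "->", policy_for_age(age))
```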
Boundary Value Analysis
· Generate test cases for the boundary values.
· Minimum value, minimum value + 1, minimum value - 1
· Maximum value, maximum value + 1, maximum value - 1
Eg: In LIC, when a user applies for type-5 insurance, the system asks for the age of the customer. Here the age limit is greater than 40 yrs. and less than 60 yrs., so we just test the boundary values of 40-60:

Minimum = 40        Maximum = 60
Minimum + 1 = 41    Maximum + 1 = 61
Minimum - 1 = 39    Maximum - 1 = 59

Here we write test cases for this step only.
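The six boundary values listed above can be generated mechanically; a minimal sketch for the type-5 limits of 40 and 60:

```python
def boundary_values(minimum, maximum):
    """Classic boundary-value set: min-1, min, min+1, max-1, max, max+1."""
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

print(boundary_values(40, 60))  # [39, 40, 41, 59, 60, 61]
```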
Error Guessing:
Generate test cases against the specification.
Eg: Type-5 policy. It takes ages only in the 40-60 range, but here we write test cases against that, like 30, 20, 70 and 65.
WHITE BOX TESTING:
White box testing is also called structural testing. White box testing is based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code: statements, branches, paths, conditions and loops.
Structure = 1 entry + 1 exit with certain constraints, conditions and loops.
Why do we go for white box testing?
Approach
Basic Path Testing
· Cyclomatic complexity
· McCabe complexity (see the formula after this list)
Structure Testing
· Conditions testing
· Dataflow testing
· Loop testing
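For reference (this formula is standard McCabe, not from the original notes): cyclomatic complexity is computed from the control-flow graph as V(G) = E - N + 2P, where E is the number of edges, N the number of nodes and P the number of connected components. For a single program this reduces to the number of decision points plus one, so a function with three independent if-statements has V(G) = 4, suggesting four basis-path test cases.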
GREY BOX TESTING
This is a combination of both black box and white box testing. The tester should have knowledge of both the internals and externals of the function: good knowledge of white box testing and complete knowledge of black box testing. Grey box testing is especially important for web and internet applications, because the internet is built around loosely integrated components that connect via relatively well-defined interfaces.
PHASES OF TESTING – V MODEL
V – MODEL
‘V’ stands for verification and validation. It is a suitable model for large-scale companies to maintain a testing process. This model defines a co-existence relation between the development process and the testing process.
Drawback: cost and time.
PHASES ARE
1) Unit Testing
2) Integration Testing
3) System Testing
4) User Acceptance Testing

1) Unit Testing
The main goal is to test the internal logic of the module. In unit testing the tester is supposed to check each and every micro function. All field-level validations are expected to be tested at this stage of testing. In most cases the developer will do this.
· In unit testing both black box and white box testing are conducted by developers.
· Depends on the LLD.
· Follows white box testing techniques:
· Basic path testing
· Loop coverage
· Program technique testing
Approach:
i. Equivalence class
ii. Boundary value analysis
iii. Error guessing
2) Integration Testing:
The primary objective of integration testing is to discover errors in the interfaces between modules/sub-systems. Many unit-tested modules are combined into sub-systems; the goal here is to see if the combined modules can be integrated properly. Follows white box testing techniques to verify the coupling of the corresponding modules.
Approach (a sketch of both appears after this section):
i. Top-down approach --- this is used for new systems.
ii. Bottom-up approach --- this is used for existing systems.
Top-down Approach:
Testing the main module before its sub modules are available is called the top-down approach. The temporary program used in place of a sub module is called a stub.
Bottom-up Approach:
Testing sub modules before the main module is available is called the bottom-up approach. The temporary program used in place of the main module is called a driver.
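A minimal sketch of the two ideas (all function names here are hypothetical):

```python
# Top-down: test the main module by replacing an unfinished sub module
# with a stub that returns a canned answer.
def tax_stub(amount):
    return amount * 0.1  # canned response standing in for the real sub module

def checkout(amount, tax_service):  # the main module under test
    return amount + tax_service(amount)

assert checkout(100, tax_stub) == 110

# Bottom-up: exercise a finished sub module from a driver, a throwaway
# program standing in for the not-yet-written main module.
def discount(amount):  # the sub module under test
    return amount * 0.9

if __name__ == "__main__":  # the driver
    for amount in (0, 50, 100):
        print(amount, "->", discount(amount))
```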
3) System Testing:
The primary objective of system testing is to discover errors when the system is tested as a whole. System testing is also called end-to-end testing. The tester is expected to test from login to logout, covering various business functionalities; it is conducted by test engineers and depends on the SRS. Follows black box testing techniques. The main goal is to see if the s/w meets its requirements.
Approach:
· Identify the end-to-end business life cycle.
· Design the test data.
· Optimize the end-to-end business life cycle.
4) Acceptance Testing:
Acceptance testing is done to get acceptance from the client. The client will be using the system against the business requirements, testing on the client side with the real-life data of the client.
Approach:
· Build a team with real-time users, functional users and developers.
· Execute the business test cases.
WHAT IS A TEST CASE?
· A test case is a description of what is to be tested, what data is to be used and what actions are to be done to check the actual result against the expected result.
· A test case is simply a test with formal steps and instructions.
· Test cases are valuable because they are repeatable and reproducible under the same/different environments.
· A test case is a document that describes an input, action or event and an expected response, to determine if a feature of an application is working correctly.
WHAT ARE THE ITEMS OF A TEST CASE?
Test case items are:
· Test case number (unique number)
· Pre-condition (the assertion (declaration) about the input condition is called the pre-condition)
· Description (what data to be used, what data to be provided and what actions to be done)
· Expected output (the assertion about the expected final state of a program is called the post-condition)
· Actual output (whatever the system displays)
· Status (pass/fail)
· Remarks
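The items above map naturally onto a structured record; a sketch (the field names simply mirror the list, not any specific tool's schema):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    number: str         # unique test case number
    precondition: str   # assertion about the input condition
    description: str    # data to be used and actions to perform
    expected: str       # assertion about the expected final state
    actual: str = ""    # whatever the system displays
    status: str = ""    # pass/fail
    remarks: str = ""

tc = TestCase(
    number="TC-001",
    precondition="User is registered and logged out",
    description="Enter valid credentials and click Login",
    expected="Home page is displayed",
)
tc.actual, tc.status = "Home page is displayed", "pass"
print(tc)
```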
CAN THESE TEST CASES BE REUSED?
Yes, test cases can be reused. Test cases developed for functionality testing can be used for integration/system/regression testing and performance testing with few modifications.

WHAT ARE THE CHARACTERISTICS OF A GOOD TEST CASE?
A good test case should have the following:
· A TC should start with “what you are testing”.
· A TC should be independent.
· A TC should not contain “if” statements.
· A TC should be uniform. Eg: <Action Buttons>, “Links”.
ARE THERE ANY ISSUES TO BE CONSIDERED?
Yes, there are a few issues:
· All the TCs should be traceable.
· There should not be too many duplicate test cases.
· Outdated test cases should be cleared off.
· All the test cases should be executable.
FURRPSC MODEL: (Types of Testing)
F → Functionality Testing
U → Usability Testing
R → Reliability Testing
R → Regression Testing
P → Performance Testing
S → Scalability Testing
C → Compatibility Testing
1) Functionality Testing
To confirm that all the requirements are covered. Functional requirements specify which output should be produced from the given input; they describe the relationship between the input and output of the system. The major part of black box testing is called functional testing.
Eg: Here we test:
· Input domain – whether the right input values are taken or not
· Error handling – whether the application reports wrong data or not
· URL checking – for web applications only: whether all links are working correctly or not
Testing approach:
· Equivalence class
· Boundary value analysis
· Error guessing
2) Usability Testing:
To test the ease (comfort, facility) and user-friendliness of the system.
Approach: qualitative and quantitative; heuristic checklist.
Classifications of checking:
· Accessibility
· Clarity of communication
· Consistency
· Navigation
· Design and maintenance
· Visual representation
Qualitative approach:
i. Each and every function should be available from all the pages of the site.
ii. The user should be able to submit a request within 4-5 actions.
iii. A confirmation message should be displayed for each submit.
Quantitative approach:
The average of 10 different people should be considered as the final result.
Eg: Some people may feel the system is more user friendly if the submit button is on the left side of the screen. At the same time, some others may feel it is better if the submit button is placed on the right side.
3) Reliability Testing
Defines how well the software meets its requirements. The objective is to find the mean time between failures / the time available under a specific load pattern, and the mean time for recovery.
Eg: 23 hours/day availability and 1 hour for recovery (system). Citibank has 4 servers in each region; every 6 hrs. it switches servers.
Approach: RRT (Ration Real time tool)
4) Regression Testing
To check that new functionalities have been incorporated correctly without breaking the existing functionalities.
Approach: automation tool.
The bugs need to be communicated and assigned to developers who can fix them. After the problem is resolved, fixes should be re-tested, and a determination made regarding requirements for regression testing, to check that the fixes did not create problems elsewhere.
5) Performance Testing
The primary objective of performance testing is “to demonstrate that the system functions to specifications with acceptable response times while processing the required transaction volume on a production-sized database”.
Objectives:
· Assessing the system capacity for growth.
· Identifying weak points in the architecture.
· Detecting obscure bugs in the software.
Performance parameters (a sketch of measuring the first one appears after this list):
· Request-response time
· Transactions per second
· Turnaround time
· Page download time
· Throughput
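As a rough sketch of measuring request-response time using only the standard library (the URL is a placeholder, so the call is left commented out):

```python
import time
import urllib.request

def response_time(url):
    """Time one GET request and return the elapsed seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()
    return time.perf_counter() - start

# print(response_time("http://example.com/"))  # placeholder URL
```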
Approach:
Classification of performance testing:
· Load test
· Volume test
· Stress test
Stress testing:
Finding the break point of the application – the maximum number of users that the application can handle (at the same time).
Approach: RCQE
· Repeatedly working on the same functionality.
· Critical query execution.
· Emulating peak load.
Volume testing:
Execution of our application under huge amounts of data is called volume testing. We may use this test to find out the threshold point.
Approach: data profile.
Load testing:
Testing with the load that the customer expects (not all at the same time); the load is increased continuously until the customer's required load is reached. Gradually increasing the load on the application and checking the performance (see the sketch after this section).
Approach: load profile.
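A minimal load-profile sketch, assuming the request function is a stand-in for one real call against the system under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def request():
    """Stand-in for one real request against the system under test."""
    time.sleep(0.01)

# Gradually increase the number of concurrent users and time each step.
for users in (1, 5, 10, 25, 50):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(request)
    print(f"{users:3d} users -> {time.perf_counter() - start:.3f}s")
```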
6) Scalability Testing:
To find the maximum number of users the system can handle (the customer will give the maximum number).
Approach: performance tools.
Classification:
· Network scalability
· Server scalability
· Application scalability
7) Compatibility Testing:
How a product will perform over a wide range of hardware, software and network configurations, and isolating the specific problems.
Approach: ET approach.
Environment selection:
· Understanding the end user's application environment.
· The importance of selecting both old browsers and new browsers.
· Selection of the operating system.
Test bed creation:
Partition of the hard disk.
· Does our application run on all customer-expected platforms or not?
· Platforms means the system software required to run our application, such as operating systems, compilers, interpreters, browsers, etc.
What is the software life cycle?
The life cycle begins when an application is first conceived (imagined) and ends when it is no longer in use. It includes aspects such as the initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration testing, maintenance, updates, re-testing, phase-out, and other aspects.

When should we start designing test cases / testing?
The V model is the most suitable way to follow for deciding when to start writing test cases and conducting testing.
Testing limitations:
· We can only test against system requirements.
  o We may not detect errors in the requirements.
  o Incomplete or ambiguous requirements may lead to inadequate or incorrect testing.
· Exhaustive (total) testing is impossible in the present scenario.
· Time and budget constraints normally require very careful planning of the testing effort.
· There is a compromise between thoroughness and budget.
· Test results are used to make business decisions for release dates.

Test stop criteria:
· Maximum number of test cases successfully executed.
· Uncovering a minimum number of defects (16/1000 stm).
· Statement coverage.
· Testing becomes uneconomical.
· Reliability model.
Tester responsibilities:
· Follow the test plans, scripts etc., as documented.
· Report faults objectively and factually.
· Check that tests are correct before reporting s/w faults.
· Assess risk objectively.
· Prioritize what you report.
· Communicate the truth.
When should we prioritize tests?
We can't test everything. There is never enough time to do all the testing you would like, so what testing should you do? Prioritize tests so that, whenever you stop testing, you have done the best testing possible in the time available.
Tips:
· Possible ranking criteria (all risk based):
· Test where a failure would be most severe.
· Test where failures would be most visible.
· Take the help of the customer in understanding what is most important to him.
· What is most critical to the customer's business.
· Areas changed most often.
· Areas with the most problems in the past.
· Most complex areas, or technically critical ones.
Software:
Software is a collection/set of instructions, programs and documents.

Software development life cycle (SDLC):
Before starting the analysis we first check the feasibility of the project/work/system. If we feel it is feasible then we go into the SDLC phases. In the feasibility study we look at the following:
· Financial feasibility
· Cost feasibility
· Resource feasibility
· Ability to accept
SDLC includes 4 phases:
Analysis:
i. Requirements analysis is done to understand the problem the software system is to solve.
ii. Understanding the requirements of the system is a major task.
iii. Analysis focuses on identifying what is needed from the system.
iv. The main goal of the requirements specification is to produce the SRS document.
v. Once understood, the requirements must be specified in the document.
Design:
i. The purpose of the design is to plan a solution to the problem specified by the requirements documents.
ii. This phase is the first step in moving from the problem domain to the solution domain.
iii. The output of this phase is the design document.
iv. This document is similar to a blueprint.
Coding:
i. Once the design is complete, most of the major decisions about the system have been made.
ii. The goal of the coding phase is to translate the design into code.
iii. Coding affects both testing and maintenance. Well-written code can reduce the testing and maintenance effort, because the testing and maintenance costs of s/w are much higher than the coding cost. So the goal of coding should be to reduce the testing and maintenance effort.
Testing:
i. Testing is the major quality control measure used during s/w development. Its basic function is to detect errors in the s/w.
ii. After coding, computer programs are available that can be executed for testing purposes; different levels of testing are used.
iii. The starting point of testing is unit testing, where a module is tested separately. This is done by the coder himself, simultaneously with the coding of the module.
iv. After this, modules are gradually integrated into subsystems, which are then integrated to form the entire system; here we do integration tests.
v. System testing: the system is tested against the requirements to see if all the requirements are met as specified by the documents.
vi. Acceptance testing: done on the client side with the real-life data of the client.
TYPES OF SOFTWARE MODELS
1. Waterfall Model:
It includes all phases of the SDLC. This is the simplest process model.
Output in the waterfall model: requirements document, project plan, system design document, detailed design document, test plan and test reports, final code, software manuals, review reports.
Drawback: once the requirements are frozen they cannot be changed, i.e. changes cannot be made after the requirements are frozen.
Uses: it is well suited for routine types of projects where the requirements are well understood, and for small projects.
2. Prototype Model:
In this model the requirements are not frozen before design or coding can proceed. The prototype is developed based on the currently known requirements. It is a sample of how the actual system will look.
3. Iterative Model:
In this model we can make changes at any level, but all four phases of the SDLC take place again. It is like a continuous model.
4. Spiral Model:
In this model the system is divided into modules and each module follows the phases of the SDLC. It is a good and successful model.
TEST LIFE CYCLE (TLC)
TLC PHASES:
System study
Scope/Approach/Estimation
Test Plan Design
Test Case Design
Test Case Review
Test Case Execution
Defect Handling
GAP Analysis
1. System study:
We study the particular s/w or project/system.
· Domain: there may be different types of domains, like banking, finance, insurance, marketing, real-time, ERP, Siebel, manufacturing, etc.
· Software: front end / back end / process.
Front end: GUI, VB, D2K.
Back end: Oracle, Sybase, SQL Server, MS Access, DB2.
Process: languages, e.g. C, C++, Java, etc.
· Hardware: servers, internet, intranet applications.
· Functional point/LOC:
Functional point: the number of lines required to write a micro function.
Micro function: a function which cannot be broken down further.
1 F.P = 10 lines of code.
· No. of pages of the software/system
· No. of resources of the software/system
· No. of days taken to develop the software/system
· No. of modules in the software/system (i.e. Associate/Core/Maintenance)
· Pick one priority → High / Medium / Low.
2. Scope/Approach/Estimation:
Scope: what is to be tested and what is not to be tested.
Eg: (figure: a module-by-module grid marking which areas of each module are in scope for testing and which are not)
Approach: Test Life Cycle (all the phases of the TLC)
Estimation: LOC (lines of code) / F.P (functional point) / resource.
1 F.P = 10 lines of code.
Example: input = 1000 LOC. For this 1000 LOC we can estimate the time to complete the whole TLC:

System study - 5 days - no division
Scope/approach/estimation - 2 days - no division
Test plan - 2 days - no division
Test case design - 10 days - yes, to divide (here we have 1000 LOC = 300 test cases = 10 days)
Test case review (= 1/2 of test case design) - 5 days - yes, to divide
Test case execution - 10 days - yes, to divide
Defect handling - 6 days - yes, to divide (30 TC = 1 defect = 5 hours for tracking; for 300 TC = 10 defects = 50 hours = 6 days)
GAP analysis - 5 days - no division
-------------------------------------------------------------------------------------
Total no. of days required = 45 man-days (for one round of manual testing)
-------------------------------------------------------------------------------------
One round of manual testing = 45 man-days
Project management = 20/100 (45 days) = 9 days
Content management (data storage, management of project & tools) = 10/100 (45 days) = 4.5 days
Buffer = 10 days
-------------------------------------------------------------------------------------
Total for one resource = 68 days
-------------------------------------------------------------------------------------
If we have 4 resources = 68/4 = 17 days
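The same breakdown can be checked with a few lines of arithmetic (numbers taken from the figures above; note the exact sum is 68.5, which the source rounds to 68):

```python
phases = {
    "System study": 5,
    "Scope/approach/estimation": 2,
    "Test plan": 2,
    "Test case design": 10,
    "Test case review": 5,       # half of test case design
    "Test case execution": 10,
    "Defect handling": 6,
    "GAP analysis": 5,
}
manual = sum(phases.values())            # 45 man-days, one round of manual testing
project_mgmt = manual * 20 / 100         # 9 days
content_mgmt = manual * 10 / 100         # 4.5 days
buffer = 10
total = manual + project_mgmt + content_mgmt + buffer
print(total, "days for one resource")    # 68.5 (the source rounds to 68)
print(round(total / 4), "days with four resources")  # 17
```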
3. Test Plan Design:
Test Plan: the test plan includes all of the following areas:
1. Who the client is and their details, and also the company and its details where the testing takes place
2. Reference documents, like BRS, SRS & DFD etc.
3. Scope of the project
4. Project architecture & data flow diagrams
5. Test strategy
6. Deliverables
7. Schedules
8. Milestones
9. Risk/Mitigation/Contingency
10. Testing requirements
11. Assumptions
12. Test environment / project
13. Defects
14. Escalation process
Assume we are preparing the test plan for Jobs4testing.
- Client: Thatavarti Technologies. Company: Thatavarti Technologies. Here we write the details about the company.
- Reference documents: BRS, SRS & SDS.
- Scope:
Overview: here in Jobs4testing we have three modules.
Module 1: Aspirant module: here the aspirant uploads his resume.
Module 2: Employer module: here the employer uploads jobs.
Module 3: Selection process module: here HR people select each resume and conduct the interviews (select the quality resumes).
Finally, the main aim of this application is to find quality test resources for the companies, and at the same time Thatavarti wants to improve its business by using this.
Take in this case that we assume we have two releases.
Release 1: in Release 1 we first test Module 1 & Module 3, but we do not test Module 2. We do unit testing, integration testing & system testing. After that we find bugs, if any, and fix them; then we retest after fixing.
Release 2: in Release 2 we test only Module 2. We also do regression testing to check whether attaching Module 2 affects Modules 1 & 3.
- Project architecture:
Here we represent the application in a pictorial format using data flow diagrams, activity diagrams & E-R diagrams.
- Test strategy:
The test strategy explains the application test factors & test types. These are the requirements that must be fulfilled before doing any testing.
Pre Condition: specifies the requirements needed to do a particular testing.
Start Criteria: the criteria selected to start a particular test. For example, in Thatavarti training the selection criterion is the candidate’s communication skills.
Stop Criteria: in the same example, when the resources have acquired good knowledge as per the standards to be selected in an interview.
Pause: in case there is any problem in conducting test case execution, we may stop for some time.
Suspension: we suspend the test case execution if any requirements are not fulfilled.
Deliverables: test case execution reports.
- Resources/Responsibilities/Roles:
We specify each resource's name, role & responsibilities. We also give a clear picture of the team who did the testing.
Example:
Resource: Krishna (Contact ID: Krishna@yahoo.com)
Role: Test Engineer
Responsibility: preparation of test cases, design, execution & test reports
- Deliverables / 8. Schedules / 9. Milestones:

Deliverable | Schedule | Milestone
a) System study document | Jan 01 - Jan 07 | Jan 07
b) Understanding documents | |
c) Issues document | |
d) Test plan document | |
e) Test case documents | |
f) Test case review documents | |
g) Defect reports | |
h) Traceability metrics | |
i) Functional coverage document | |
j) Test reports | |

As specified above for each deliverable, we maintain schedules & milestones.
Schedule slippage: the time difference between the actual and required time to deliver a particular document is considered schedule slippage. This time difference shows its effect on the milestone; this is called milestone slippage. Actually, in real time, by working on Saturdays & Sundays we can cover this milestone gap.
10. Risk/Contingency/Mitigation:
Risk: any unexpected event which will affect the project.
Contingency: a prevention step to avoid the risk.
Mitigation: specifies the solution to cover the risk after it occurs.
Typical risks: broken links, server problems, weak bandwidth, wrong builds, database down, wrong data, application server down, integration failures, resource problems, unexpected events, and other general risks.
Example: a project where the release engineer did not upload the required build.
11. Training: training will be given to the resources if they do not have the required skills.
Example: consider a mainframe project that comes with the healthcare domain, but the company only has solid test resources with knowledge of web applications. In this case they train their resources in these areas.
12. Assumptions: to test the application we make some assumptions.
Example: before doing system testing we first do the unit & integration testing. If these documents are not sent by the client, then we report it to them.
- Test environment: specifies the software, hardware and other system details used to test the application.
- Defects: defect report documents.
- Escalation process: while conducting the test, if any resource has doubts or problems, whom to report to is specified here. It is nothing but the communication flow from bottom to top level in the testing process.
Note: Test bed: a test bed configuration is identified and planned from hardware and operating system version and compatibility specifications.
Test data: after identifying the requirements for a test, the test data is created. The testing team can make the test data, or it can also be provided by the client.
4. Test Case Design: (the heart of testing)
· A test case is a description of what is to be tested, what data is to be used and what actions are to be done to check the actual result against the expected result.
· A test case is simply a test with formal steps and instructions.
· Test cases are valuable because they are repeatable, reproducible under the same/different environments, and easy to improve upon with feedback.
Format of test case design:

Pre Condition | Description | Data | Expected results | Actual results | Status | Remarks | Bug Number
Constraint/condition to be met | #Check whether/verify the system displays the expected result page. #Action: user clicks on a particular <button> or link. #Data: | Data to test | System should display the page with the details | As expected (or) whatever the system displays | Pass (or) Fail | Comments | Eg: Bug-01
Techniques
to write a test case:
Ø
Boundary
Value analysis:
By using boundary value analysis we take
upper&lower boundary values and we check only those values.
Ø
Equivalence
Class portions
We check attributes/parameters of
functionalities.
Ø
Error
guessing
Testing against specifications.
Use Case Format:
1. Description: specifies the description of the use case.
2. Actors: specifies the actors actually involved in using this use case.
3. Pre-condition
4. User Action & System Response
- Typical flow
- Normal flow
- Exceptional flow
5. Post-condition
6. Specific Requirements
7. Business Validations
8. Parking Lot
5. Test Case Items:
• TC no.
• Pre-condition
• Description
• Expected output
• Actual output
• Status
• Remarks
6. Test Case Review:
Review means re-verification of the test cases. The following are included in the review format.
First Time Right (FTR)
Types of reviews:
• Peer-to-peer review (same level)
• Team lead review
• Team manager review
Review process:
• Take a demo of the functionality
• Go through the use case / functional specification
• Examine the test cases and find the gaps between test cases and use cases
• Submit the review report
Functional coverage:
To measure test coverage, we design a functional coverage document.
Eg: when the source is a use case.
For example, consider J4T as 100%. It has three modules: assume module 1 covers 40%, module 2 covers 30%, and module 3 covers 30%. Take module 1 (the aspirant module) and consider that it has 20 use cases, so each use case covers 2% (40% / 20). Moreover, each use case has a typical flow, an alternate flow, and an exceptional flow, so each flow covers roughly 0.67% (2% / 3).
So, based on each test case's coverage, we can say what percentage of testing has been covered.
Application Name | Module Name | Use Case Name | Flows | Test Cases | Test Execution
Jobs4testing (100%) | Aspirant (33%) | New user registration (3.3%) | Typical flow (1.1%), Alternate flow (1.1%), Exceptional flow (1.1%) | T1.3 - T1.12 (check whether each is completed or not) | Not completed / Not started / Pending / In progress
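The percentage arithmetic behind such a coverage document can be sketched as follows; the module weights and use case count are the hypothetical figures from the example above:

```python
# Hypothetical figures from the example above.
module_coverage = {"module1": 40.0, "module2": 30.0, "module3": 30.0}

use_cases_in_module1 = 20
per_use_case = module_coverage["module1"] / use_cases_in_module1  # 2.0%

flows_per_use_case = 3  # typical, alternate, exceptional
per_flow = per_use_case / flows_per_use_case  # ~0.67%

print(f"Each use case covers {per_use_case}% of the application")
print(f"Each flow covers about {per_flow:.2f}%")
```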
Eg: when the source is the SRS, coverage is based on modularity.
Application Name | Module Name | Sub-Modules | Levels (sub1, sub2, level 1, level 2) | Functionality | TC ID
J4t.com | Aspirant module | Login module | TT login, SSt login, General user | |
7. Test Case Execution:
Test case execution mainly includes three things.
i. Input:
• Test cases
• Test data
• Review comments
• SRS
• BRS
• System availability
• Data availability
• Database
• Review doc
ii. Process: execute the tests.
iii. Output:
• Raise the defect
• Take a screenshot & save it
8. Defect Handling:
Identify the following things in defect handling.
• Defect No./ID
• Description
• Origin TC ID
• Severity: Critical / Major / Medium / Minor / Cosmetic
• Priority: High / Medium / Low
• Status
The flow of defect handling is as follows:
• Raise the defect
• Review it internally
• Submit it to the developer
We first declare the severity of the defect and then its priority. Defects are then taken up according to priority, as sketched below.
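"According to priority" can be sketched as ordering the defect queue by priority first and severity second; the ranking tables and sample defects below are illustrative:

```python
# Illustrative ranking tables: lower rank = taken up sooner.
PRIORITY_RANK = {"High": 0, "Medium": 1, "Low": 2}
SEVERITY_RANK = {"Critical": 0, "Major": 1, "Medium": 2, "Minor": 3, "Cosmetic": 4}

# Hypothetical defect queue.
defects = [
    {"id": "D-01", "severity": "Minor", "priority": "High"},
    {"id": "D-02", "severity": "Critical", "priority": "Medium"},
    {"id": "D-03", "severity": "Major", "priority": "High"},
]

# Order by priority first, then severity as the tie-breaker.
queue = sorted(
    defects,
    key=lambda d: (PRIORITY_RANK[d["priority"]], SEVERITY_RANK[d["severity"]]),
)
for d in queue:
    print(d["id"], d["priority"], d["severity"])
# The high-priority D-03 and D-01 come before the critical but medium-priority D-02,
# matching the point below that severity and priority can diverge.
```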
9. GAP Analysis:
Finding the difference between the client requirements and the application developed.
Deliverables:
• Test plan
• Test scenarios
• Defect reports
The mappings checked are BRS vs. SRS, SRS vs. test cases, test cases vs. defects, and whether each defect is open or closed (see the sketch below).
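A minimal sketch of these mappings in Python (all requirement, test case and defect IDs are hypothetical):

```python
# Hypothetical traceability data: requirement -> test cases, test case -> defects.
req_to_tests = {
    "SR-01": ["TC-01", "TC-02"],
    "SR-02": ["TC-03"],
    "SR-03": [],              # gap: a requirement with no test case
}
test_to_defects = {"TC-01": ["D-01"], "TC-02": [], "TC-03": []}

# GAP analysis: requirements not covered by any test case.
uncovered = [req for req, tcs in req_to_tests.items() if not tcs]
print("Requirements with no test coverage:", uncovered)

# TC vs. defect: which test cases raised defects for each requirement.
for req, tcs in req_to_tests.items():
    raised = [d for tc in tcs for d in test_to_defects.get(tc, [])]
    print(req, "->", tcs, "defects:", raised)
```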
TEST PLAN DESIGN:
What is Test Plan: A software project test plan is a document that
describes the objectives, scope, approach & focus of a software testing
effort. The completed document will help people outside the test group
understand the "why & how" of product validation.
WHAT IS A DEFECT:
In computer technology, a defect is a coding error in a computer program. A common definition: "A software error is present when the program does not do what its end user reasonably expects it to do."
WHO CAN REPORT A DEFECT:
Anyone involved in the software development lifecycle, and anyone using the software, can report a defect. In most cases, defects are reported by the testing team.
A short list of people expected to report bugs:
• Testers / QA engineers
• Developers
• Technical support
• End users
• Sales and marketing engineers
TYPES OF DEFECTS:
• Cosmetic flaw
• Data corruption
• Data loss
• Documentation issue
• Incorrect operation
• Installation problem
• Missing feature
• Slow performance
• Unexpected behavior
• Unfriendly behavior
HOW TO DECIDE THE SEVERITY OF THE DEFECT:
Severity Level | Description | Response Time / Turnaround Time
High | A defect occurred due to the inability of a key function to perform. The problem causes the system to hang or drops the user out of the system. | The defect should be responded to within 24 hours, and the situation should be resolved before test exit.
Medium | A defect that severely restricts the system, such as the inability to use a major function. There is no acceptable workaround, but the problem does not inhibit the testing of other functions. | A response or action plan should be provided within 3 working days.
Low | A defect that places a minor restriction on a function that is not critical. There is an acceptable workaround for the defect. | A response or action plan should be provided within 5 working days.
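If severity-based turnaround needs to be automated, the table above reduces to a simple lookup; a minimal sketch (the day values come from the table, the representation is illustrative):

```python
# Turnaround times from the severity table above, in working days
# (High is responded to within 24 hours, i.e. 1 day).
TURNAROUND_DAYS = {"High": 1, "Medium": 3, "Low": 5}

def response_deadline_days(severity: str) -> int:
    """Return how many working days a defect of this severity may wait."""
    return TURNAROUND_DAYS[severity]

print(response_deadline_days("Medium"))  # 3
```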
DEFECT SEVERITY Vs. DEFECT PRIORITY:
Severity: how severely the defect affects the application.
Priority:
• The relative importance of the defect: how fast the developer has to take up the defect.
• The general rule for fixing defects depends on severity: all high-severity defects should be fixed first.
• This may not hold in all cases; sometimes, even though the severity of a bug is high, it may not be taken as high priority.
• At the same time, a low-severity bug may be considered high priority.
What kinds of testing should be considered?
1. BLACK BOX TESTING:
Not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
2. WHITE BOX TESTING:
Based on knowledge of the internal logic of the application's code. Tests are based on coverage of code statements, branches, paths, conditions, loops, etc.
3. INTEGRATION TESTING:
Testing of combined parts of an application to determine whether they function together correctly.
4. FUNCTIONAL TESTING:
Black box type of testing. This type of testing
should be done by testers. This does not mean that the programmers should not
check that their code works before releasing it.
5. REGRESSION TESTING:
It can be difficult to determine how much re-testing
is needed, especially near the end of the development cycle. Automated testing tools can be especially
useful for this type of testing.
6. SYSTEM TESTING:
Black box type testing based on overall requirements specifications; covers all combined parts of a system.
7. ACCEPTANCE TESTING:
Final testing based on the specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
8. RECOVERY TESTING:
Testing how well a system recovers from crashes, hardware failures or other catastrophic (sudden calamity) problems.
9. SECURITY TESTING:
How well the system protects against unauthorized
internal or external access.
10. COMPATIBILITY TESTING:
Testing how well software performs in a particular hardware / software / network environment.
11. ALPHA TESTING:
Testing of an application when development is nearing
completion, minor design changes may still be made as a result of such testing.
Typically done by end-users or others not by
programmers or testers.
12. BETA TESTING:
Testing when development and testing are essentially
completed and final bugs and problems need to be found before final
release. Typically done by end-users or
others not by programmers or testers.
13. SANITY TESTING:
Done before detailed testing begins, to check whether the application is stable: whether the build released by the development team is good enough for complete testing to be conducted.
14. SMOKE TESTING:
Checks, after a build, whether the major, medium and critical functions work before further testing begins.
15. MONKEY TESTING:
Testing like a monkey, with no proper approach: take any function and test it.
Covering only the main activities during testing is also called monkey testing (for example, when only one day is given for testing).
16. MUTATION TESTING:
Deliberately injecting defects into the application and testing, to see whether they are detected.
17. BIG BANG TESTING: (informal testing)
A single stage of testing after completion of the entire coding is called big bang testing (no reviews, i.e., direct system testing).
18. BIG BANG THEORY:
An approach to integration in which errors between modules or sub-modules are checked only after everything is combined.
19. AD-HOC TESTING:
Testing in a shortcut way, without following the sequential order mentioned in the test cases or test plan.
20. PATH TESTING:
Checking every possible condition with at least one navigation of each flow.
SOFTWARE QUALITY:
• Meets customer requirements
• Meets customer expectations
• Delivered at the best possible cost
• Delivered in time to market
BRS:
Specifies the needs of the customer; the total business logic document.
SRS:
Specifies the functional specifications to be developed.
HLD:
High-level design document; specifies the interconnection of modules.
LLD:
Low-level design document; specifies the internal logic of sub-modules.
TESTING TEAM:
• Quality Control
• Quality Analyst
• Test Manager
• Test Lead
• Test Engineers
Quality: quality means meeting requirements first time, on time & every time.
Factors:
1. Product transition factors: reusability, interoperability, portability.
2. Product operational factors: correctness, efficiency, usability, reliability, integrity.
3. Product revision factors: maintainability, flexibility, testability.
Quality Assurance: adherence to agreed-upon rules, such as quality standards (CMM/ISO/Six Sigma).
Quality Control: carried out through the following.
• Inspection (a sudden check)
• Walkthroughs & reviews (formal approaches): here both parties are aware of the object. These are also called verification, or static testing.
Beyond all of the above, we do testing, to find bugs (also called dynamic testing).
REVIEWS DURING ANALYSIS:
• Conducted by the business analyst
• Verifies completeness and correctness of the BRS & SRS:
• Are they the right requirements?
• Are they complete?
• Are they reasonable?
• Are they achievable?
• Are they testable?
REVIEWS DURING DESIGN:
• Conducted by designers
• Verifies completeness and correctness of the HLD & LLD:
• Is the design good?
• Is the design complete?
• Is the design possible?
• Does the design meet the requirements?
WHY DOES S/W HAVE BUGS:
• Programming errors: programmers, like anyone else, can make mistakes.
• Changing requirements.
• Poorly documented code: it is tough to maintain and modify code that is badly written or poorly documented; the result is bugs.
• Software development tools: visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.
WHAT IS VERIFICATION & VALIDATION:
VERIFICATION:
Typically involves reviews and meetings to evaluate (estimate, calculate) documents, plans, code, requirements and specifications. This can be done with checklists, issue lists, walkthroughs & inspection meetings.
VALIDATION:
Typically involves actual testing, and takes place after verifications are completed.
SEVERITY:
The relative impact on the system, i.e., how far the application is affected by the defect (low, medium, high, critical).
PRIORITY:
The relative importance of the defect, i.e., the preference given to fixing it (low, medium, high).
Which life cycle method is followed in your organization?
We are now using the V model, and we also incorporate some other methods, like prototype and spiral, within a single application.
What is software quality?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.
SEI:
Software Engineering Institute.
Initiated by the U.S. defense
department to help improve software development processes.
CMM:
Capability maturity model developed
by the SEI. It’s a model of 5 levels of organizational maturity that determine
effectiveness in delivering quality software.
ANSI
--- American National Standards Institute
Will automated testing tools make testing easier?
Possibly. For a small project, the time needed to learn and implement them may not be worth it. For larger projects, or ongoing long-term projects, they can be valuable.
WEB TEST TOOLS:
Used to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.
WHAT MAKES A GOOD TEST ENGINEER?
A good test engineer has a "test to break" attitude, an ability to take the point of view of the customer, a strong desire for quality, and attention to detail.
WHAT'S THE ROLE OF DOCUMENTATION IN QA?
Critical. QA practices should be documented so that they are repeatable: specifications, designs, business rules, inspection reports, configurations, code changes, test plans, etc.
WHATS A TEST CASE ?
A test case is a document that
describes an input action or event and an expected response, to determine if a
feature of an application is working correctly.
HOW CAN IT BE KNOWN WHEN TO STOP TESTING?
This can be difficult to determine. Common factors in deciding when to stop are (several are measurable; see the sketch below):
• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with a certain percentage passed
• Test budget depleted (used up)
• Bug rate falls below a certain level
• Beta or alpha testing period ends
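Several of these criteria are measurable, so a simple exit-criteria check can be sketched; the thresholds below are hypothetical:

```python
def can_stop_testing(pass_rate, bug_rate_per_week, budget_left,
                     min_pass_rate=0.95, max_bug_rate=2):
    """Evaluate the measurable stop criteria from the list above.

    pass_rate: fraction of executed test cases that passed
    bug_rate_per_week: new bugs found in the last week
    budget_left: remaining test budget (hours or currency)
    Returns the list of satisfied criteria; an empty list means keep testing.
    """
    reasons = []
    if pass_rate >= min_pass_rate:
        reasons.append("pass-rate target met")
    if bug_rate_per_week <= max_bug_rate:
        reasons.append("bug rate fell below threshold")
    if budget_left <= 0:
        reasons.append("test budget depleted")
    return reasons

print(can_stop_testing(pass_rate=0.97, bug_rate_per_week=1, budget_left=40))
```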
WHAT CAN BE DONE IF REQUIREMENTS ARE CHANGING CONTINUOUSLY?
• Use rapid prototyping whenever possible, to help customers feel sure of their requirements and to minimize changes.
• The project's initial schedule should allow some extra time to accommodate the possibility of changes.
• Focus less on detailed test plans and test cases, and more on ad-hoc testing.
WHAT IS THE DIFFERENCE BETWEEN A PRODUCT AND A PROJECT?
PRODUCT: developed without interaction with a client before the release.
PROJECT: developed based on a particular client's needs or requirements.
WHAT IS A TEST PROCEDURE ?
Execution of one or more test cases.
WHAT ARE THE DEFECT PARAMETERS?
There are 5 parameters:
• Source
• Error description
• Status
• Priority
• Severity
WHAT IS A TRACEABILITY MATRIX?
A mapping between the test requirements and test case IDs, used to check whether the coverage is fulfilled.
WHAT IS A TEST STRATEGY?
Applying the right mix of testing techniques to expose the maximum number of bugs.
WHAT IS CONFIGURATION MANAGEMENT?
It is version control. It covers the processes used to control, coordinate and track the requirements, documentation, the problems faced, change requests, designs, and the tools to be used, along with what changes were made and who made them.
WHEN DO YOU START WRITING TEST CASES?
Once the requirements are frozen, we begin writing test cases.
TESTING TECHNIQUE:
Way of executing and preparing the
test cases.
TESTING METHODOLOGIES:
Way of developing the test.
WHAT'S THE DIFFERENCE BETWEEN IST & UAT?
Particulars | IST | UAT
Acronym | Integration System Testing | User Acceptance Testing
Baseline docs | Functional specification | Business requirements
Location | Off-site | On-site
Data | Simulated | Live data
Purpose | Validation & verification | User needs