STLC MODEL
The Software Testing Life Cycle (STLC) is the road map to product success. It
consists of a set of phases that define what testing activities to do and when
to do them. It also enables communication and synchronization between the
various groups that have input to the overall testing process.
In the best of worlds, the STLC parallels the Software Development Life Cycle, coordinating activities and thus providing the vehicle for a close working relationship between the testing and development departments.
STLC Phases
Proposal/Contract
Testing Requirements Specification (TRS)
Design
Testing
Inspection and Release
Client Acceptance
Proposal/Contract
• Analyze scope of project
• Prepare Contract
• Review of Contract
• Release
Testing Requirements Specification (TRS)
• Analysis
• Product requirements document
• Develop risk assessment criteria
• Identify acceptance criteria
• Document product Definition, Testing Strategies
• Define problem reporting procedures
• Planning
• Schedules
Design
• Preparation of Master Test Plan
• Setup test environment
• High level test plan
• Design Test plans, Test Cases
• Decide if any set of test cases to be automated
Testing
• Planning
• Testing - Initial test cycles, bug fixes and re-testing
• Final Testing and Implementation
• Setup database to track components of the automated testing system, i.e. reusable modules
Inspection and Release
• Final Review of Testing
• Metrics to measure improvement
Software Testing Interview Questions
Explain the PDCA cycle.
PDCA stands for Plan, Do, Check, Act; it is commonly used for quality control.
Plan: Identify the aim and the procedure necessary to deliver the output.
Do: Implement the plan.
Check: Confirm whether the result is as planned.
Act: Take appropriate action to deliver the expected outcome, which may involve repeating the cycle.
What are white-box, black-box and gray-box testing?
White-box testing: White-box testing involves thorough testing of the application. It requires knowledge of the code, and the test cases chosen verify whether the system is implemented as expected. It typically includes checking the data flow, how exceptions and errors are handled, and comparing whether the code produces the expected results.
E.g., testing the internal circuitry of an electrical appliance.
Black-box testing: Black-box testing is done at the outer level of the system. Test cases merely check whether the output is correct for the given input. The tester is not expected to know the internal flow or design of the system.
Gray-box testing: Gray-box testing is a combination of black-box and white-box testing: it involves access to the system, but at an outer level. A little knowledge of the system is expected in gray-box testing.
Explain the difference between latent and masked defects.
Latent defects are defects that remain in the system but are identified only later. They remain in the system for a long time; the defect is likely to be present in various versions of the software and may be detected after release.
E.g., the system assumes February always has 28 days; failing to consider leap years results in a latent defect.
A masked defect hides other defects in the system. E.g., there is a link to add an employee in the system, and on clicking this link you can also add a task for the employee. Assume both functionalities have bugs, but the first bug (add an employee) goes unnoticed; because of this, the bug in adding a task is masked.
What is the Big-bang waterfall model?
The waterfall model is also known as the Big-bang model because each module follows the cycle independently and the modules are then put together. The model follows a fixed sequence to develop a software application: it moves phase by phase, starting from requirement analysis, followed by design, implementation, testing, and finally integration and maintenance.
What is configuration management?
Configuration management aims to establish consistency in an enterprise. This is attained by continuously updating the organization's processes, maintaining versioning, and handling the entire organization's network, hardware, and software components efficiently.
In software, software configuration management deals with controlling and tracking changes made to the software. This is necessary to allow easy accommodation of changes at any time.
What is boundary value analysis?
Test cases written for boundary value analysis detect errors or bugs that are likely to arise when testing ranges of values at their boundaries. This ensures that the application gives the desired output when tested with boundary values.
E.g., a text box accepts a minimum of 6 and a maximum of 50 characters. Boundary value testing will test with 5 characters, 6 characters, 50 characters, and 51 characters.
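A minimal sketch of these boundary checks in Python, assuming a hypothetical validate_length function that accepts strings of 6 to 50 characters:

    # Hypothetical validator: accepts strings of 6 to 50 characters.
    def validate_length(value):
        return 6 <= len(value) <= 50

    # Boundary value test data: just below, at, and just above each boundary.
    cases = [
        ("a" * 5,  False),  # below the lower boundary
        ("a" * 6,  True),   # at the lower boundary
        ("a" * 50, True),   # at the upper boundary
        ("a" * 51, False),  # above the upper boundary
    ]
    for value, expected in cases:
        assert validate_length(value) == expected, len(value)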
What is equivalence partitioning?
Equivalence partitioning is a technique used in software testing to reduce the number of test cases and choose the right ones. This is achieved by identifying "classes" or "groups" of inputs such that every input value in a class gives the same result.
E.g., a software application designed for an airline has a special-offer functionality: the first two members of every city booking a ticket for a particular route get a discount. Here, one group of inputs can be "all cities in India".
Explain random testing.
Random testing, as the name suggests, has no particular approach to testing. It is an ad hoc way of testing: the tester randomly picks modules to test by inputting random values.
E.g., an output is produced by a particular combination of inputs; hence, different, random inputs are used.
What is monkey testing?
Monkey testing is a type of random testing with no specific test cases written and no fixed perspective for testing. E.g., inputting random and garbage values in an input box.
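A minimal monkey-testing sketch in Python, assuming a hypothetical parse_age function as the unit under test; the only expectation is that garbage input never causes an uncontrolled crash:

    import random
    import string

    # Hypothetical function under test: parses an age field.
    def parse_age(text):
        value = int(text)  # raises ValueError on non-numeric garbage
        if not 0 <= value <= 150:
            raise ValueError(text)
        return value

    # Feed random garbage; accept either a clean result or a
    # controlled ValueError, but never any other failure.
    for _ in range(1000):
        garbage = "".join(random.choices(string.printable, k=random.randint(0, 20)))
        try:
            parse_age(garbage)
        except ValueError:
            pass  # controlled rejection is acceptable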
Explain the software process.
A software process, or software development process, is a method or structure expected to be followed for the development of software. Several tasks and activities take place in this process. Different processes, such as waterfall and iterative, exist; in these processes, tasks like analysis, coding, testing, and maintenance play an important role.
What is a maturity level?
The maturity level of a process defines the nature and maturity of the processes present in the organization. These levels help to understand and set a benchmark for the organization.
The five levels identified are:
Level 1: Ad hoc or Initial
Level 2: Repeatable
Level 3: Defined
Level 4: Managed
Level 5: Optimized
What is a process area in CMMI?
Process areas in the Capability Maturity Model describe the features of a product's development. These process areas help to identify the level of maturity an organization has attained. They mainly include:
Project planning and monitoring
Risk management
Requirements development
Process and product quality assurance
Product integration
Requirements management
Configuration management
Explain tailoring.
Tailoring a software process means amending it to meet the needs of the project. It involves altering the processes for different environments and is an ongoing activity. Factors like the customer and end-user relationship and the goals of the business must be kept in mind while tailoring, and the degree to which tailoring is required must be identified.
What are the staged and continuous models in CMMI?
The staged model in CMMI focuses on process improvement using stages or maturity levels. In the staged representation each process area has one specific goal; achieving a goal means improvement in the control and planning of the tasks associated with the process. The staged representation has 5 maturity levels.
The continuous model in CMMI follows a recommended order for approaching process improvement within each specified process area. It allows the user to select the order of improvement that best meets the organization's business objectives. The continuous representation has 6 capability levels.
Explain capability levels in continuous
representation.
There are 6 capability levels for
Continuous representation:
Level 0: Not performed
Level 1: Performed
Level 2: Managed
Level 3: Defined
Level 4: Quantitatively managed
Level 5: Optimizing
Each level has process areas. Each
process area has specific goals to achieve. These processes are continuously
improved to achieve the goals in a recommended order.
What is the SCAMPI process?
The Standard CMMI Appraisal Method for Process Improvement (SCAMPI) provides a benchmark relative to maturity models. It describes the requirements, activities, and processes associated with each process area. SCAMPI appraisals identify the flaws of current processes, give an idea of the areas for improvement, and determine capability and maturity levels.
What is the importance of PII in SCAMPI?
PII stands for Practice Implementation Indicator. As the name suggests, a PII serves as an indicator, or evidence, that a certain practice supporting a goal has been implemented. A PII could be a document that serves as proof.
For more questions with answers, follow
the link below:
http://www.careerride.com/Software-Testing-Interview-Questions.aspx
Basics of Manual Testing
Software Testing Definitions
— Software testing is the
process used to help identify the correctness, completeness, security, and
quality of developed computer software.
— Software testing is a
process of technical investigation, performed on behalf of stakeholders, that
is intended to reveal quality-related information about the product with
respect to the context in which it is intended to operate.
— Software Testing furnishes
a criticism or comparison that compares the state and behavior of
the product against a specification.
Features of Software Testing
— Software testing proves the presence of errors, but never their absence.
— Software testing is a constructive destruction process.
— Software testing involves operating a system or application under controlled conditions and evaluating the results.
Objective of testing
— Finding defects.
— Gaining confidence about the level of quality and providing information.
— Preventing defects.
Reasons for bugs
— Miscommunication or no communication
— Software complexity
— Programming errors
— Changing requirements
— Time pressures
— Egos
— Poorly documented code
— Software development tools
Error, Fault, Failure
— Error
— An incorrect action or calculation performed by software.
— Fault
— An accidental condition that causes a functional unit to fail to perform its required function.
— Failure
— The inability of the system to produce the intended result.
— Defect or Bug
— Non-conformance of software to its requirements is commonly called a defect.
Quality Assurance
— Quality assurance is an activity that establishes and evaluates the processes that produce products.
Quality Control
— Quality control is the process by which the product quality is compared with applicable standards, and the action taken when nonconformance is detected.
Quality Assurance | Quality Control
It helps establish processes | It helps execute the process
It is a management responsibility | It is the producer's responsibility
It identifies the weakness in the process | It identifies the weakness in the product
Challenges of Effective and Efficient Testing
— Time pressure
— Software complexity
— Choosing the right approach
— Wrong process
The Limits of Software Testing
— Testing does not guarantee a 100% defect-free product.
— Testing proves the presence of errors, but not their absence.
— Testing will not improve the development process.
Prioritizing Tests
— Test case prioritization techniques schedule test cases in an order that increases their effectiveness in meeting some performance goal. One performance goal, rate of fault detection, is a measure of how quickly faults are detected within the testing process; an improved rate of fault detection can provide faster feedback on the system under test, and let software engineers begin locating and correcting faults earlier than might otherwise be possible.
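A minimal sketch of one such prioritization in Python; the test-case records and the faults-per-minute heuristic are illustrative assumptions, not a prescribed scheme:

    # Illustrative records: (name, faults found historically, runtime in minutes).
    test_cases = [
        ("login_flow",      4, 2.0),
        ("report_export",   1, 5.0),
        ("payment_gateway", 6, 3.0),
    ]

    # Order by faults detected per minute of execution, highest first, so the
    # fastest fault-revealing tests run earliest in the cycle.
    ordered = sorted(test_cases, key=lambda tc: tc[1] / tc[2], reverse=True)
    for name, faults, minutes in ordered:
        print(f"{name}: {faults / minutes:.2f} faults/minute")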
Cost of Quality
— Prevention cost
— Money required to prevent errors and to do the job right the first time.
— E.g., establishing methods and procedures, training workers, acquiring tools.
— Appraisal cost
— Money spent to review completed products against requirements.
— E.g., cost of inspections, testing, reviews.
— Failure cost
— All costs associated with defective products that have been delivered to the user or moved into production.
— E.g., repair costs, the cost of operating faulty products, damage incurred by using them.
Software quality factors
Correctness | Reliability | Efficiency | Integrity | Usability | Maintainability | Testability | Flexibility | Portability | Reusability | Interoperability
The Fundamental Test Process
— Assess development plan and status
— Develop the test plan
— Test Software requirements
— Test software design
— Program (Build) phase testing
— Execute and record results
— Acceptance test
— Report test results
— The software installation
— Test software changes
— Evaluate test effectiveness
Black box testing
— Also known as functional testing.
— A software testing technique whereby the internal workings of the item being tested are not known by the tester.
White box testing
— Also known as glass box, structural, clear box, and open box testing.
— A software testing technique whereby explicit knowledge of the internal workings of the item being tested is used to select the test data. Unlike black box testing, white box testing uses specific knowledge of the programming code to examine outputs.
Traceability matrix
— In a software development process, a traceability matrix is a table that correlates any two baselined documents that require a many-to-many relationship, in order to determine the completeness of the relationship. It is often used to map high-level requirements (sometimes known as marketing requirements) and detailed requirements of the software product to the matching parts of the high-level design, detailed design, test plan, and test cases.
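A minimal sketch of such a matrix in Python, with hypothetical requirement and test-case IDs, including a simple completeness check:

    # Hypothetical requirement-to-test-case mapping.
    traceability = {
        "REQ-001 user login":     ["TC-101", "TC-102"],
        "REQ-002 password reset": ["TC-201"],
        "REQ-003 audit logging":  [],  # not yet covered
    }

    # Completeness check: flag any requirement with no linked test case.
    uncovered = [req for req, tests in traceability.items() if not tests]
    print("Uncovered requirements:", uncovered)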
Black Box Testing Techniques
— Boundary value analysis
— Boundary value analysis is a software testing technique to determine test cases covering known areas of frequent problems at the boundaries of software component input ranges.
— Boundary value analysis can yield 6 test cases: n, n-1, and n+1 for the upper limit, and n, n-1, and n+1 for the lower limit.
— Equivalence partitioning
— Equivalence partitioning is a software testing technique with two goals:
— To reduce the number of test cases to a necessary minimum.
— To select the right test cases to cover all possible scenarios.
— Example: The valid range for the month is 1 to 12, standing for January to December. This valid range is called a partition. In this example there are two further partitions of invalid ranges: the first invalid partition would be <= 0 and the second invalid partition would be >= 13.
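A minimal sketch of the month example in Python, assuming a hypothetical is_valid_month validator; one representative value per partition is enough:

    # Hypothetical validator for the month example above.
    def is_valid_month(month):
        return 1 <= month <= 12

    # One representative per equivalence partition:
    # invalid low (<= 0), valid (1..12), invalid high (>= 13).
    partitions = [(-3, False), (7, True), (15, False)]
    for representative, expected in partitions:
        assert is_valid_month(representative) == expected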
Test case
— A test case is a set of conditions or variables under which a tester will determine whether a requirement of an application is partially or fully satisfied.
Test script
— A test script is a short program, written in a programming language, used to test part of the functionality of a software system.
Best practices
— Avoid unnecessary duplication of test cases.
— Map all test cases to their requirements.
— Provide sufficient information in the test cases, with appropriate naming conventions.
Manual Test Execution
— Collect the requirements from the user requirements document.
— Analyze the requirements.
— Identify the areas to be tested.
— Prepare the detailed test plan and test cases.
— Prepare the environment for testing.
— Execute the test cases in the planned manner.
— Observe the behavior.
— Report any abnormality in the defect log.
Test Reporting
— A process to collect data, analyze the data, and supplement the data with metrics, graphs, charts, and other pictorial representations that help the developers and users interpret that data.
Prerequisites to test reporting
— Define the test status data to be collected.
— Define the test metrics to be used in reporting test results.
— Define effective test metrics.
Defect Management System
— Identify the defect.
— Identify the priority and severity.
— Report the bug to the programmer.
— Once fixed, run the same test case again to verify that the bug has indeed been fixed.
— Run the related test cases to verify there are no side effects.
— Close the bug.
Testing Material
1 :: What is bidirectional traceability?
Bidirectional traceability needs to be implemented both
forward and backward (i.e., from requirements to end products and from end
product back to requirements).
When the requirements are managed well, traceability can be
established from the source requirement to its lower level requirements and
from the lower level requirements back to their source. Such bidirectional
traceability helps determine that all source requirements have been completely
addressed and that all lower level requirements can be traced to a valid
source.
2 :: What is the maximum length of a test case we can write?
We cannot say exactly; test case length depends on the functionality being tested.
3 :: What is internationalization testing?
Software internationalization is the process of developing software products independent of the cultural norms, language, or other specific attributes of a market.
4 :: What does black-box testing mean at the unit, integration, and system levels?
- Unit level: tests for each software requirement using equivalence class partitioning, boundary value testing, and more.
- System level: test cases for system software requirements using the trace matrix, cross-functional testing, decision tables, and more.
- Integration level: test cases for system integration covering configurations, manual operations, etc.
5 :: What is the bug life cycle?
New: when the tester reports a defect.
Open: when the developer accepts that it is a bug; if the developer rejects the defect, the status becomes "Rejected".
Fixed: when the developer makes changes to the code to rectify the bug.
Closed/Reopen: when the tester tests it again. If the expected result shows up, the status becomes "Closed"; if the problem persists, it becomes "Reopen".
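A minimal sketch of this life cycle as a state machine in Python; the transition table simply encodes the statuses described above:

    # Allowed transitions in the bug life cycle described above.
    TRANSITIONS = {
        "New":      {"Open", "Rejected"},
        "Open":     {"Fixed"},
        "Fixed":    {"Closed", "Reopen"},
        "Reopen":   {"Fixed"},
        "Rejected": set(),
        "Closed":   set(),
    }

    def move(status, new_status):
        if new_status not in TRANSITIONS[status]:
            raise ValueError(f"illegal transition {status} -> {new_status}")
        return new_status

    status = "New"
    for step in ("Open", "Fixed", "Reopen", "Fixed", "Closed"):
        status = move(status, step)
    print(status)  # Closed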
6 :: What is a smoke test? Do you use any automation tool for smoke testing?
A smoke test checks whether the application performs its basic functionality properly, so that the test team can go ahead with deeper testing of the application. Automation tools can definitely be used for smoke testing.
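A minimal automated smoke-test sketch in Python using only the standard library; the URLs are hypothetical placeholders for the application's basic entry points:

    import urllib.request

    # Hypothetical endpoints that must respond before deeper testing starts.
    SMOKE_URLS = [
        "http://localhost:8080/health",  # assumed health-check endpoint
        "http://localhost:8080/login",   # assumed login page
    ]

    for url in SMOKE_URLS:
        # urlopen raises on HTTP errors (4xx/5xx) and connection failures,
        # either of which fails the smoke test immediately.
        with urllib.request.urlopen(url, timeout=5) as response:
            print(url, response.status)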
7 :: When a bug is found, what is the first action?
Report it in the bug tracking tool.
8 :: What are the advantages of automation over manual testing?
Savings in time, resources, and money.
9 :: For web applications, what types of tests are you going to do?
Web-based applications present new challenges. These challenges include:
- Short release cycles;
- Constantly changing technology;
- A possibly huge number of users during the initial website launch;
- Inability to control the user's running environment;
- 24-hour availability of the web site.
The quality of a website must be evident from the onset. Any difficulty, whether in response time, accuracy of information, or ease of use, will compel the user to click over to a competitor's site. Such problems translate into lost users, lost sales, and a poor company image.
To overcome these types of problems, use the following
techniques:
1. Functionality Testing
Functionality testing involves making sure the features that most affect user interactions work properly. These include:
· forms
· searches
· pop-up windows
· shopping carts
· online payments
2. Usability Testing
Many users have a low tolerance for anything that is difficult to use or that does not work. A user's first impression of the site is important, and many websites have become cluttered with an increasing number of features. On general-use websites, frustrated users can easily click over to a competitor's site.
Usability testing involves the following main steps:
· identify the website's purpose;
· identify the intended users;
· define tests and conduct the usability testing;
· analyze the acquired information.
3. Navigation Testing
Good navigation is an essential part of a website, especially for those that are complex and provide a lot of information. Assessing navigation is a major part of usability testing.
4. Forms Testing
Websites that use forms need tests to ensure that each field works properly and that the form posts all data as intended by the designer.
5. Page Content Testing
Each web page must be tested for correct content from the user's perspective. These tests fall into two categories: ensuring that each component functions correctly and ensuring that the content of each is correct.
6. Configuration and Compatibility Testing
A key challenge for web applications is ensuring that the user sees a web page as the designer intended. The user can select different browser software and browser options, use different network software and online services, and run other concurrent applications. We execute the application under every browser/platform combination to ensure the web site works properly in various environments.
7. Reliability and Availability Testing
A key requirement of a website is that it be available whenever the user requests it, 24 hours a day, every day. The number of users accessing the web site simultaneously may also affect the site's availability.
8. Performance Testing
Performance testing, which evaluates system performance under normal and heavy usage, is crucial to the success of any web application. A system that takes too long to respond may frustrate the user, who can then quickly move to a competitor's site. Given enough time, every page request will eventually be delivered; performance testing seeks to ensure that the website server responds to browser requests within defined parameters.
9. Load Testing
The purpose of load testing is to model real-world experience, typically by generating many simultaneous users accessing the website. We use automation tools to increase the ability to conduct a valid load test, because they emulate thousands of users by sending simultaneous requests to the application or server.
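A minimal load-generation sketch in Python using a thread pool; the URL and the figure of 100 simultaneous users are illustrative assumptions:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/"  # hypothetical system under test

    def one_request(_):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as response:
            response.read()
        return time.perf_counter() - start

    # Emulate 100 simultaneous users and report the slowest response time.
    with ThreadPoolExecutor(max_workers=100) as pool:
        latencies = list(pool.map(one_request, range(100)))
    print(f"max latency: {max(latencies):.2f}s")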
10. Stress Testing
Stress testing consists of subjecting the system to varying and maximum loads to evaluate the resulting performance. We use automated test tools to simulate loads on the website and execute the tests continuously for several hours or days.
11. Security Testing
Security is a primary concern when communicating and conducting business, especially sensitive and business-critical transactions, over the internet. The user wants assurance that personal and financial information is secure. Finding the vulnerabilities in an application that would grant an unauthorized user access to the system is important.
10 :: What is the testing environment in your company; how does the testing process start?
The testing process goes as follows:
Quality assurance unit
Quality assurance manager
Test lead
Test engineer
11 :: What is a use case?
A simple flow between the end user and the system. It contains preconditions, postconditions, normal flows, and exceptions. It is prepared by the team lead, test lead, or tester.
12 :: How do you break down the project among team members?
It can depend on the following:
1) Number of modules
2) Number of team members
3) Complexity of the project
4) Time duration of the project
5) Team members' experience, etc.
13 :: What is a Test Server?
The place where the developers put their development
modules, which are accessed by the testers to test the functionality.
14 :: What are the differences between these three words: error, defect, and bug?
Error: The deviation from the required logic, syntax, or standards/ethics is called an error.
There are three types of errors:
Syntax error (a deviation from the syntax the language is supposed to follow).
Logical error (a deviation from the logic the program is supposed to follow).
Execution error (generally occurs while executing the program).
Defect: When an error is found by the test engineer (testing department), it is called a defect.
Bug: If the defect is accepted by the developer, it becomes a bug, which has to be fixed by the developer or postponed to the next version.
15 :: There are two sand clocks (timers): one completes in 7 minutes and the other in 9 minutes. Using these timers, how do you ring a bell after exactly 11 minutes?
1. Start both clocks.
2. When the 7-minute clock completes, turn it so that it restarts.
3. When the 9-minute clock finishes, turn the 7-minute clock again; it has run 2 minutes since its restart, so flipping it leaves exactly 2 minutes of sand on top.
4. When the 7-minute clock finishes, 11 minutes are complete.
16 :: What are technical reviews?
Each document should be reviewed. Technical review means that for each screen, the developer writes a technical specification, which should be reviewed by a developer and a tester. There are functional specification reviews, unit test case reviews, code reviews, etc.
17 :: Explain the ETVX concept?
E - Entry Criteria
T - Task
V - Validation
X - Exit Criteria
ENTRY CRITERIA: Input with a 'condition' attached.
e.g., an approved SRS document is the entry criterion for the design phase.
TASK: Procedures.
e.g., preparation of the HLD, LLD, etc.
VALIDATION: Quality-building and verification activities.
e.g., technical reviews.
EXIT CRITERIA: Output with a 'condition' attached.
e.g., an approved design document.
It is important to follow the ETVX concept for all phases in the SDLC.
18 :: If the client identifies some bugs, to whom does he report them?
He will report to the project manager. The project manager will arrange a meeting with all the leads (development manager, test lead, and requirements manager), raise a change request, and identify which screens are going to be impacted by the bug. They will take the code, correct it, and send it to the testing team.
19 :: At what phase does the tester's role start?
In the SDLC, after completion of the FRS document, the test lead prepares the use case document and the test plan document; the tester's role then starts.
20 :: Actually, how many positive and negative test cases will you write for a module?
That depends on the module and the complexity of its logic. For every test case, we can identify positive and negative points, and based on those criteria we write the test cases. If it is a crucial process or screen, we should check the screen at all the boundary conditions.
21 :: What are the main bugs which were identified by you, and of those, how many are considered real bugs?
If you take one screen with, say, 50 test conditions, out of which I have identified 5 defects that failed, I should give the defect description, severity, and defect classification. All the defects will be considered.
Defect classifications are:
GRP: Graphical representation
LOG: Logical error
DSN: Design error
STD: Standard error
TST: Wrong test case
TYP: Typographical error (cosmetic error)
22 :: What is Six Sigma?
Six Sigma is a quality discipline that focuses on product and service excellence to create a culture that demands perfection on target, every time.
Six Sigma quality levels produce 99.9997% accuracy, with only 3.4 defects per million opportunities.
Six Sigma is designed to dramatically upgrade a company's performance, improving quality and productivity. Using existing products, processes, and service standards, companies apply the Six Sigma MAIC methodology to upgrade performance.
MAIC is defined as follows:
Measure: Gather the right data to accurately assess a problem.
Analyze: Use statistical tools to correctly identify the root causes of a problem.
Improve: Correct the problem (not the symptom).
Control: Put a plan in place to make sure problems stay fixed, and sustain the gains.
Key roles and responsibilities:
The key roles in all Six Sigma efforts are as follows:
Sponsor: Business executive leading the organization.
Champion: Responsible for Six Sigma strategy, deployment, and vision.
Process Owner: Owner of the process, product, or service being improved; responsible for long-term sustainable gains.
Master Black Belts: Coach black belts; expert in all statistical tools.
Black Belts: Work on 3 to 5 $250,000-per-year projects; create $1 million per year in value.
Green Belts: Work with black belts on projects.
23 :: What are cookies? Tell me the advantage and
disadvantage of cookies?
Cookies are messages that web servers pass to your web
browser when you visit Internet sites. Your browser stores each message in a
small file. When you request another page from the server, your browser sends
the cookie back to the server. These files typically contain information about
your visit to the web page, as well as any information you've volunteered, such
as your name and interests. Cookies are most commonly used to track web site
activity. When you visit some sites, the server gives you a cookie that acts as
your identification card. Upon each return visit to that site, your browser
passes that cookie back to the server. In this way, a web server can gather
information about which web pages are used the most, and which pages are
gathering the most repeat hits. Only the web site that creates the cookie can
read it. Additionally, web servers can only use information that you provide or
choices that you make while visiting the web site as content in cookies.
Accepting a cookie does not give a server access to your computer or any of
your personal information. Servers can only read cookies that they have set, so
other servers do not have access to your information. Also, it is not possible
to execute code from a cookie, and not possible to use a cookie to deliver a
virus.
24 :: What is a stub? Explain from a testing point of view?
A stub is a dummy program or component used for testing when the real code is not ready. For example, if a project has 4 modules and the last one is not finished and there is no time, we use a dummy program to stand in for that fourth module so that all 4 modules can still be run together. This dummy program is known as a stub.
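A minimal stub sketch in Python; the PaymentServiceStub class and checkout function are hypothetical names used only to illustrate the idea:

    # Stub: a dummy stand-in for an unfinished payment module,
    # with the same interface and a canned answer.
    class PaymentServiceStub:
        def charge(self, order_id, amount):
            # Always succeeds, so the other modules can be tested now.
            return {"order_id": order_id, "status": "approved"}

    def checkout(order_id, amount, payment_service):
        result = payment_service.charge(order_id, amount)
        return result["status"] == "approved"

    # The completed modules are exercised against the stub.
    assert checkout("ORD-1", 99.0, PaymentServiceStub())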
25 :: Define brainstorming and cause-effect graphing?
Brainstorming:
A learning technique involving open group discussion intended to expand the range of available ideas.
OR
A meeting to generate creative ideas. At PEPSI Advertising, daily, weekly, and bi-monthly brainstorming sessions are held by various work groups within the firm. Our monthly I-Power brainstorming meeting is attended by the entire agency staff.
OR
Brainstorming is a highly structured process to help generate ideas. It is based on the principle that you cannot generate and evaluate ideas at the same time. To use brainstorming, you must first gain agreement from the group to try brainstorming for a fixed interval (e.g., six minutes).
Cause-effect graphing:
A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects to produce test cases. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.
26 :: A password is a 6-character alphanumeric field; what are the possible input conditions?
Including special characters, the possible input conditions are:
1) Input password as 6abcde (i.e., number first)
2) Input password as abcde8 (i.e., character first)
3) Input password as 123456 (all numbers)
4) Input password as abcdef (all characters)
5) Input password of fewer than 6 characters
6) Input password of more than 6 characters
7) Input password with special characters
8) Input password in CAPITALS, i.e., uppercase
9) Input password including a space
10) A (SPACE) followed by alphabetic/numeric/alphanumeric characters
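A data-driven sketch of these conditions in Python, assuming a hypothetical rule of exactly 6 alphanumeric characters:

    import re

    # Hypothetical rule: exactly 6 alphanumeric characters.
    def is_valid_password(pw):
        return re.fullmatch(r"[A-Za-z0-9]{6}", pw) is not None

    # Input conditions from the list above, with expected outcomes.
    cases = [
        ("6abcde", True),    # number first
        ("abcde8", True),    # character first
        ("123456", True),    # all numbers
        ("abcdef", True),    # all characters
        ("abc12", False),    # fewer than 6 characters
        ("abc1234", False),  # more than 6 characters
        ("abc!@#", False),   # special characters
        ("ABCDEF", True),    # uppercase
        ("abc 12", False),   # includes a space
        (" abc12", False),   # leading space
    ]
    for pw, expected in cases:
        assert is_valid_password(pw) == expected, pw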
27 :: Explain agile testing?
Agile testing is used whenever customer requirements are changing dynamically.
If we have no SRS or BRS but we do have test cases, do you execute the test cases blindly or do you follow any other process?
The test cases contain detailed steps of what the application is supposed to do:
1) The functionality of the application.
2) In addition, you can refer to the backend, meaning look into the database, to gain more knowledge of the application.
28 :: What is deferred status in the defect life cycle?
Deferred status means the developer accepted the bug, but it is scheduled to be rectified in the next build.
29 :: Verification and validation?
Verification is static: no code is executed. Examples are the analysis of requirements, etc.
Validation is dynamic: code is executed with the scenarios present in test cases.
30 :: What is a test plan? Explain its contents.
A test plan is a document that contains the scope for testing the application: what is to be tested, when it is to be tested, and who will test it.
31 :: What is meant by release notes?
It is a document released along with the product which describes the product. It also lists the bugs that are in deferred status.
32 :: Give an example of high priority and low severity, low
priority and high severity?
Severity level:
The degree of impact the issue or problem has on the
project. Severity 1 usually means the highest level requiring immediate
attention. Severity 5 usually represents a documentation defect of minimal
impact.
Severity levels:
* Critical: the software will not run
* High: unexpected fatal errors (includes crashes and data
corruption)
* Medium: a feature is malfunctioning
* Low: a cosmetic issue
Severity levels
1. Bug causes system crash or data loss.
2. Bug causes major functionality or other severe problems;
product crashes in obscure cases.
3. Bug causes minor functionality problems; may affect "fit and finish".
4. Bug contains typos, unclear wording or error messages in
low visibility fields.
Severity levels
* High: A major issue where a large piece of functionality
or major system component is completely broken. There is no workaround and
testing cannot continue.
* Medium: A major issue where a large piece of functionality
or major system component is not working properly. There is a workaround,
however, and testing can continue.
* Low: A minor issue that imposes some loss of
functionality, but for which there is an acceptable and easily reproducible
workaround. Testing can proceed without interruption.
Severity and Priority
Priority is Relative: the priority might change over time.
Perhaps a bug initially deemed P1 becomes rated as P2 or even a P3 as the
schedule draws closer to the release and as the test team finds even more
heinous errors. Priority is a subjective evaluation of how important an issue
is, given other tasks in the queue and the current schedule. It’s relative. It
shifts over time. And it’s a business decision.
Severity is an absolute: it’s an assessment of the impact of
the bug without regard to other work in the queue or the current schedule. The
only reason severity should change is if we have new information that causes us
to re-evaluate our assessment. If it was a high severity issue when I entered
it, it’s still a high severity issue when it’s deferred to the next release.
The severity hasn’t changed just because we’ve run out of time. The priority
changed.
Severity Levels can be defined as follow:
S1 - Urgent/Showstopper. For example, a system crash or an error message forcing the window to close. The tester's ability to operate the system is either totally (system down) or almost totally affected. A major area of the user's system is affected by the incident, and it is significant to business processes.
S2 - Medium/Workaround. Exists, for example, when there is a problem against the specs but the tester can go on with testing. The incident affects an area of functionality, but there is a workaround which negates the impact to the business process. This is a problem that:
a) Affects a more isolated piece of functionality.
b) Occurs only at certain boundary conditions.
c) Has a workaround (where "don't do that" might
be an acceptable answer to the user).
d) Occurs only at one or two customer sites, or is intermittent.
S3 - Low. This is for minor problems, such as failures at
extreme boundary conditions that are unlikely to occur in normal use, or minor
errors in
layout/formatting. Problems do not impact use of the product
in any substantive way. These are incidents that are cosmetic in nature and of
no or very low impact to business processes.
33 :: What is the difference between STLC and SDLC?
STLC is the software test life cycle. It starts with:
* Preparing the test strategy.
* Preparing the test plan.
* Creating the test environment.
* Writing the test cases.
* Creating test scripts.
* Executing the test scripts.
* Analyzing the results and reporting the bugs.
* Doing regression testing.
* Test exiting.
SDLC is the software or system development life cycle. Its phases are:
* Project initiation.
* Requirement gathering and documenting.
* Designing.
* Coding and unit testing.
* Integration testing.
* System testing.
* Installation and acceptance testing.
* Support or maintenance.
34 :: What is test data collection?
Test data is the collection of input data taken for testing the application. Input data of various types and sizes will be taken for testing the application. Sometimes, in critical applications, the test data will also be given by the client.
35 :: What are non-functional requirements?
The non-functional requirements of a software product are:
reliability, usability, efficiency, delivery time, software development
environment, security requirements, standards to be followed etc.
36 :: Why do we perform stress testing, resolution testing, and cross-browser testing?
Stress testing: We need to check the performance of the application.
Def: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.
Resolution testing: Sometimes a developer creates a page only for a 1024-pixel resolution, and the same page displays a horizontal scroll bar at 800 x 600 resolution. Nobody likes a horizontal scroll bar appearing on the screen. That is the reason to do resolution testing.
Cross-browser testing: This testing is sometimes called compatibility testing. When we develop pages to be IE-compatible, the same pages may not work properly in Firefox or Netscape, because most of the scripts do not support browsers other than IE. That is why we need to do cross-browser testing.
37 :: What is the minimum criterion for white box testing?
We should know the logic, code, and structure of the program or function: internal knowledge of the application, how the system works, what the logic behind it is, and how it should react to a particular action.
38 :: On what basis will you write test cases?
I would write the test cases based on functional specifications and BRDs, and some more test cases using domain knowledge.
39 :: What are the main key components in web applications and client-server applications? (differences)
For web applications: A web application can be implemented using any kind of technology, like Java, .NET, VB, ASP, CGI, and PERL. Based on the technology, we can derive the components.
Take a Java web application: it can be implemented in a 3-tier architecture: presentation tier (JSP, HTML, DHTML, servlets, Struts), business tier (JavaBeans, EJB, JMS), and data tier (databases like Oracle, SQL Server, etc.).
If you take a .NET application: presentation tier (ASP, HTML, DHTML), business tier (DLL), and data tier (a database like Oracle, SQL Server, etc.).
Client-server applications: These have only 2 tiers: presentation (Java, Swing) and data tier (Oracle, SQL Server). In a client-server architecture, the entire application has to be installed on the client machine, and whenever you change the code, it has to be installed again on all the client machines. In web applications, the core application resides on the server and the client can be a thin client (browser); whatever the changes, you install the application on the server and need not worry about the clients, because nothing is installed on the client machine.
40 :: What is a formal technical review?
A technical review should be done by a team of members. The document to be reviewed, the person who prepared it, and the reviewers come together to review the document; this is called a peer review. If it is a technical document, it can be called a formal technical review, I guess. It varies depending on company policy.
41 :: Explain software metrics?
Measurement is fundamental to any engineering discipline.
Why metrics?
- We cannot control what we cannot measure!
- Metrics help to measure quality.
- They serve as a dashboard.
The main metrics are size, schedule, and defects; each has sub-metrics.
Test coverage = number of units (KLOC/FP) tested / total size of the system
Test cost (in %) = cost of testing / total cost * 100
Cost to locate a defect = cost of testing / number of defects located
Defects detected in testing (in %) = defects detected in testing / total system defects * 100
Acceptance criteria tested = acceptance criteria tested / total acceptance criteria
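A minimal sketch computing these metrics in Python; all the numbers are illustrative, not real project data:

    # Illustrative project numbers, not real data.
    units_tested, total_units = 42, 60            # KLOC or function points
    cost_of_testing, total_cost = 120, 600        # same currency unit
    defects_in_testing, total_defects = 90, 110
    criteria_tested, total_criteria = 18, 20

    test_coverage = units_tested / total_units
    test_cost_pct = cost_of_testing / total_cost * 100
    cost_per_defect = cost_of_testing / defects_in_testing
    detection_pct = defects_in_testing / total_defects * 100
    acceptance = criteria_tested / total_criteria

    print(f"coverage {test_coverage:.0%}, test cost {test_cost_pct:.0f}%, "
          f"cost/defect {cost_per_defect:.2f}, detection {detection_pct:.0f}%, "
          f"acceptance tested {acceptance:.0%}")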
42 :: What is software reliability?
It is the probability that software will work without failure for a specified period of time in a specified environment. Reliability of software is measured in terms of Mean Time Between Failures (MTBF). E.g., if MTBF = 10,000 hours for an average piece of software, then it should not fail for 10,000 hours of continuous operation.
43 :: What is the main use of preparing a traceability matrix?
A traceability matrix is prepared in order to cross-check the test cases designed against each requirement, hence giving an opportunity to verify that all the requirements are covered in testing the application.
(Or)
To cross-verify the prepared test cases and test scripts with the user requirements, and to monitor the changes and enhancements that occurred during the development of the project.
44 :: What is a TRM?
TRM means Test Responsibility Matrix.
TRM: It indicates the mapping between test factors and development stages.
Test factors include: ease of use, reliability, portability, authorization, access control, audit trail, ease of operation, maintainability, and the like.
Development stages: requirements gathering, analysis, design, coding, testing, and maintenance.
45 :: What is the difference between a product-based company and a project-based company?
A product-based company develops applications for global clients, i.e., there is no specific client. Here, requirements are gathered from the market and analyzed with experts.
A project-based company develops applications for a specific client. The requirements are gathered from the client and analyzed with the client.
Testing Glossary
Acceptance Testing:
Formal testing conducted to determine whether or not a system satisfies its
acceptance criteria—enables an end user to determine whether or not to accept
the system.
Affinity Diagram:
A group process that takes large amounts of language data, such as a list
developed by brainstorming, and divides it into categories.
Alpha Testing:
Testing of a software product or system conducted at the developer’s site by
the end user.
Audit:
An inspection/assessment activity that verifies compliance with plans,
policies, and procedures, and ensures that resources are conserved. Audit is a
staff function; it serves as the “eyes and ears” of management.
Automated Testing:
That part of software testing that is assisted with software tool(s) that does
not require operator input, analysis, or evaluation.
Beta Testing:
Testing conducted at one or more end user sites by the end user of a delivered
software product or system.
Black-box Testing:
Functional testing based on requirements with no knowledge of the
internal program structure or data. Also known as closed-box testing. Black
box testing indicates whether or not a program meets required specifications by
spotting faults of omission -- places where the specification is not fulfilled.
Bottom-up Testing:
An integration testing technique that tests the low-level components first
using test drivers for those components that have not yet been developed to
call the low-level components for test.
Boundary Value Analysis: A test data selection technique in which values are
chosen to lie along data extremes. Boundary values include maximum, minimum,
just inside/outside boundaries, typical values, and error values.
Brainstorming:
A group process for generating creative and diverse ideas.
Branch Coverage Testing: A test method satisfying coverage criteria that
requires each decision point at each possible branch to be executed at least
once.
Bug: A design flaw
that will result in symptoms exhibited by some object (the object under test or
some other object) when an object is subjected to an appropriate test.
Cause-and-Effect (Fishbone) Diagram: A tool used to identify possible causes of a problem
by representing the relationship between some effect and its possible cause.
Cause-effect Graphing: A testing technique that aids in selecting, in a
systematic way, a high-yield set of test cases that logically relates causes to
effects to produce test cases. It has a beneficial side effect in pointing out
incompleteness and ambiguities in specifications.
Checksheet:
A form used to record data as it is gathered.
Clear-box Testing:
Another term for white-box testing. Structural testing is sometimes referred to
as clear-box testing, since “white boxes” are considered opaque and do not
really permit visibility into the code. This is also known as glass-box or
open-box testing.
Client:
The end user that pays for the product received, and receives the benefit from
the use of the product.
Control Chart:
A statistical method for distinguishing between common and special cause
variation exhibited by processes.
Customer (end user):
The individual or organization, internal or external to the producing
organization, that receives the product.
Cyclomatic Complexity:
A measure of the number of linearly independent paths through a program module.
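For example, using the standard formula V(G) = E - N + 2P on a control-flow graph with E = 9 edges, N = 8 nodes, and P = 1 connected component gives V(G) = 9 - 8 + 2 = 3, i.e., three linearly independent paths, and hence at least three test cases to cover them.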
Data Flow Analysis:
Consists of the graphical analysis of collections of (sequential) data
definitions and reference patterns to determine
constraints that can be placed on data values at various points of executing
the source program.
Debugging:
The act of attempting to determine the cause of the symptoms of malfunctions
detected by testing or by frenzied user complaints.
Defect:
NOTE: Operationally, it is useful to work with two definitions of a defect:
1) From the producer’s viewpoint: a product requirement that has not been met or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that define the product.
2) From the end user’s viewpoint: anything that causes end user dissatisfaction, whether in the statement of requirements or not.
Defect Analysis:
Using defects as data for continuous quality improvement. Defect analysis
generally seeks to classify defects into categories and identify possible
causes in order to direct process improvement efforts.
Defect Density:
Ratio of the number of defects to program length (a relative number).
Desk Checking:
A form of manual static analysis usually performed by the originator. Source
code documentation, etc., is visually checked against requirements and standards.
Dynamic Analysis:
The process of evaluating a program based on execution of that program. Dynamic
analysis approaches rely on executing a piece of software with selected test
data.
Dynamic Testing:
Verification or validation performed which executes the system’s code.
Error:
1) A discrepancy between a computed, observed, or measured value or condition
and the true, specified, or theoretically correct value or condition; and
2) a mental mistake made by a programmer that may result in a program fault.
Error-based Testing:
Testing where information about programming style, error-prone language
constructs, and other programming knowledge is applied to select test data
capable of detecting faults, either a specified class of faults or all possible
faults.
Evaluation:
The process of examining a system or system component to determine the extent
to which specified properties are present.
Execution:
The process of a computer carrying out an instruction or instructions of a
computer program.
Exhaustive Testing:
Executing the program with all possible combinations of values for program
variables.
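For example, a routine taking a single 32-bit integer already has 2^32 (about 4.3 billion) possible inputs, and two such parameters give 2^64 combinations; this is why exhaustive testing is almost never practical.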
Failure:
The inability of a system or system component to perform a required function
within specified limits. A failure may be produced when a fault is encountered.
Failure-directed Testing: Testing based on the knowledge of the types of errors
made in the past that are likely for the system under test.
Fault:
A manifestation of an error in software. A fault, if encountered, may cause a
failure.
Fault Tree Analysis:
A form of safety analysis that assesses hardware safety to provide failure
statistics and sensitivity analyses that indicate the possible effect of
critical failures.
Fault-based Testing:
Testing that employs a test data selection strategy designed to generate test
data capable of demonstrating the absence of a set of pre-specified faults,
typically, frequently occurring faults.
Flowchart:
A diagram showing the sequential steps of a process or of a workflow around a
product or service.
Formal Review:
A technical review conducted with the end user, including the types of reviews
called for in the standards.
Function Points: A consistent measure of software size based on user
requirements. Data components include inputs, outputs, etc. Environment
characteristics include data communications, performance, reusability,
operational ease, etc. Weight scale: 0 = not present; 1 = minor influence, 5 =
strong influence.
Functional Testing:
Application of test data derived from the specified functional requirements
without regard to the final program structure. Also known as black-box testing.
Heuristics Testing:
Another term for failure-directed testing.
Histogram:
A graphical description of individual measured values in a data set that is
organized according to the frequency or relative frequency of occurrence. A
histogram illustrates the shape of the distribution of individual values in a
data set along with information regarding the average and variation.
Hybrid Testing:
A combination of top-down testing combined with bottom-up testing of
prioritized or available components.
Incremental Analysis:
Incremental analysis occurs when (partial) analysis may be performed on an
incomplete product to allow early feedback on the development of that product.
Infeasible Path:
Program statement sequence that can never be executed.
Inputs:
Products, services, or information needed from suppliers to make a process
work.
Inspection:
1) A formal evaluation technique in which software requirements, design, or
code are examined in detail by a person or group other than the author to
detect faults, violations of development standards, and other problems.
2) A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).
Instrument:
To install or insert devices or instructions into hardware or software to
monitor the operation of a system or component.
Integration:
The process of combining software components or hardware components, or both,
into an overall system.
Integration Testing:
An orderly progression of testing in which software components or hardware
components, or both, are combined and tested until the entire system has been
integrated.
Interface:
A shared boundary. An interface might be a hardware component to link two
devices, or it might be a portion of storage or registers accessed by two or
more computer programs.
Interface Analysis:
Checks the interfaces between program elements for consistency and adherence to
predefined rules or axioms.
Intrusive Testing:
Testing that collects timing and processing information during program
execution that may change the behavior of the software from its behavior in a
real environment. Usually involves additional code embedded in the software
being tested or additional processes running concurrently with software being
tested on the same platform.
IV&V:
Independent verification and validation is the verification and validation of a
software product by an organization that is both technically and managerially
separate from the organization responsible for developing the product.
Life Cycle:
The period that starts when a software product is conceived and ends when the
product is no longer available for use. The software life cycle typically
includes a requirements phase, design phase, implementation (code) phase, test
phase, installation and checkout phase, operation and maintenance phase, and a
retirement phase.
Manual Testing:
That part of software testing that requires operator input, analysis, or
evaluation.
Mean:
A value derived by adding several quantities and dividing the sum by the number
of these quantities.
Measurement:
1) The act or process of measuring. 2) A figure, extent, or amount obtained by
measuring.
Metric:
A measure of the extent or degree to which a product possesses and exhibits a
certain quality, property, or attribute.
Mutation Testing:
A method to determine test set thoroughness by measuring the extent to which a
test set can discriminate the program from slight variants of the program.
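A minimal mutation-testing sketch in Python, with a toy function and a single hand-made mutant (a real tool would generate mutants automatically):

    def max_of(a, b):  # original program
        return a if a >= b else b

    def max_of_mutant(a, b):  # slight variant: ">=" mutated to "<="
        return a if a <= b else b

    def suite_passes(impl):
        # A thorough test set should fail for ("kill") the mutant.
        return impl(3, 5) == 5 and impl(5, 3) == 5

    print(suite_passes(max_of))         # True: the original passes
    print(suite_passes(max_of_mutant))  # False: the mutant is killed,
                                        # so the test set discriminates it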
Non-intrusive Testing:
Testing that is transparent to the software under test; i.e., testing that does
not change the timing or processing characteristics of the software under test
from its behavior in a real environment. Usually involves additional hardware
that collects timing or processing information and processes that information
on another platform.
Operational Requirements: Qualitative and quantitative parameters that specify
the desired operational capabilities of a system and serve as a basis for
determining the operational effectiveness and suitability of a system prior to
deployment.
Operational Testing:
Testing performed by the end user on software in its normal operating
environment.
Outputs:
Products, services, or information supplied to meet end user needs.
Path Analysis:
Program analysis performed to identify all possible paths through a program, to
detect incomplete paths, or to discover portions of the program that are not on
any path.
Path Coverage Testing:
A test method satisfying coverage criteria that each logical path through the
program is tested. Paths through the program often are grouped into a finite
set of classes; one path from each class is tested.
Peer Reviews:
A methodical examination of software work products by the producer’s peers to
identify defects and areas where changes are needed.
Policy:
Managerial desires and intents concerning either process (intended objectives)
or products (desired attributes).
Problem:
Any deviation from defined standards. Same as defect.
Procedure:
The step-by-step method followed to ensure that standards are met.
Process:
The work effort that produces a product. This includes efforts of people and
equipment guided by policies, standards, and procedures.
Process Improvement:
To change a process to make the process produce a given product faster, more
economically, or of higher quality. Such changes may require the product to be
changed. The defect rate must be maintained or reduced.
Product:
The output of a process; the work product. There are three useful classes of
products: manufactured products (standard and custom), administrative/
information products (invoices, letters, etc.), and service products (physical,
intellectual, physiological, and psychological). Products are defined by a
statement of requirements; they are produced by one or more people working in a
process.
Product Improvement:
To change the statement of requirements that defines a product to make the
product more satisfying and attractive to the end user (more competitive). Such
changes may add to or delete from the list of attributes and/or the list of
functions defining a product. Such changes frequently require the process to be
changed. NOTE: This process could result in a totally new product.
Productivity:
The ratio of the output of a process to the input, usually measured in the same
units. It is frequently useful to compare the value added to a product by a
process to the value of the input resources required (using fair market values
for both input and output).
Proof Checker:
A program that checks formal proofs of program properties for logical
correctness.
Prototyping:
Evaluating requirements or designs at the conceptualization phase, the
requirements analysis phase, or design phase by quickly building scaled-down
components of the intended system to obtain rapid feedback of analysis and
design decisions.
Qualification Testing:
Formal testing, usually conducted by the developer for the end user, to
demonstrate that the software meets its specified requirements.
Quality:
A product is a quality product if it is defect free. To the producer a product
is a quality product if it meets or conforms to the statement of requirements
that defines the product. This statement is usually shortened to “quality means
meets requirements.” NOTE: Operationally, the word quality refers to products.
Quality Assurance (QA):
The set of support activities (including facilitation, training, measurement,
and analysis) needed to provide adequate confidence that processes are established
and continuously improved in order to produce products that meet specifications
and are fit for use.
Quality Control (QC):
The process by which product quality is compared with applicable standards; and
the action taken when nonconformance is detected. Its focus is defect detection
and removal. This is a line function, that is, the performance of these tasks
is the responsibility of the people working within the process.
Quality Improvement:
To change a production process so that the rate at which defective products
(defects) are produced is reduced. Some process changes may require the product
to be changed.
Random Testing:
An essentially black-box testing approach in which a program is tested by
randomly choosing a subset of all possible input values. The distribution may
be arbitrary or may attempt to accurately reflect the distribution of inputs in
the application environment.
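For example, random inputs are often checked against a property of the output rather than an exact expected value. A minimal Python sketch with a hypothetical integer square root function (the input distribution here is arbitrary):

    import random

    def isqrt(n):                     # hypothetical function under test
        return int(n ** 0.5)

    random.seed(42)                   # reproducible run
    for _ in range(1000):
        n = random.randint(0, 10**6)  # randomly chosen subset of inputs
        r = isqrt(n)
        # check a property of the output, not an exact expected value
        assert r * r <= n < (r + 1) ** 2, f"property violated for n={n}"
    print("1000 random inputs passed")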
Regression Testing:
Selective retesting to detect faults introduced during modification of a system
or system component, to verify that modifications have not caused unintended
adverse effects, or to verify that a modified system or system component still
meets its specified requirements.
Reliability:
The probability of failure-free operation for a specified period.
Requirement:
A formal statement of: 1) an attribute to be possessed by the product or a
function to be performed by the product; 2) the performance standard for the
attribute or function; or 3) the measuring process to be used in verifying that
the standard has been met.
Review: A way to use the diversity and power of a group of
people to point out needed improvements in a product or confirm those parts of
a product in which improvement is either not desired or not needed. A review is
a general work product evaluation technique that includes desk checking,
walkthroughs, technical reviews, peer reviews, formal reviews, and inspections.
Run Chart:
A graph of data points in chronological order used to illustrate trends or
cycles of the characteristic being measured for the purpose of suggesting an
assignable cause rather than random variation.
Scatter Plot
(correlation diagram): A graph designed to show whether there is a relationship
between two changing factors.
Semantics:
1) The relationship of characters or a group of characters to their meanings,
independent of the manner of their interpretation and use.
2) The relationships between symbols and their meanings.
Software Characteristic:
An inherent, possibly accidental, trait, quality, or property of software (for
example, functionality, performance, attributes, design constraints, number of
states, lines of code, branches).
Software Feature:
A software characteristic specified or implied by requirements documentation
(for example, functionality, performance, attributes, or design constraints).
Software Tool:
A computer program used to help develop, test, analyze, or maintain another
computer program or its documentation; e.g., automated design tools, compilers,
test tools, and maintenance tools.
Standards:
The measure used to evaluate products and identify nonconformance. The basis
upon which adherence to policies is measured.
Standardize:
Procedures are implemented to ensure that the output of a process is maintained
at a desired level.
Statement Coverage Testing: A test method satisfying coverage criteria that
requires each statement be executed at least once.
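For example, in the hypothetical function below one test input executes only some statements; a second input is needed before every statement has run at least once (coverage tools such as coverage.py report this automatically):

    def classify(x):
        if x < 0:
            label = "negative"        # runs only when x < 0
        else:
            label = "non-negative"
        return label

    # classify(5) alone leaves the "negative" branch unexecuted; adding
    # classify(-1) makes every statement execute at least once.
    assert classify(5) == "non-negative"
    assert classify(-1) == "negative"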
Statement of Requirements: The exhaustive list of requirements that define a
product. NOTE: The statement of requirements should document requirements
proposed and rejected (including the reason for the rejection) during the
requirements determination process.
Static Testing:
Verification performed without executing the system’s code. Also called static
analysis.
Statistical Process Control: The use of statistical techniques and tools to measure
an ongoing process for change or stability.
Structural Coverage:
This requires that each pair of module invocations be executed at least once.
Structural Testing:
A testing method where the test data is derived solely from the program
structure.
Stub:
A software component that usually minimally simulates the actions of called
components that have not yet been integrated during top-down testing.
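For example, a high-level component can be tested top-down before a lower-level service exists by substituting a stub that returns canned answers. A minimal Python sketch; the rate service, names and values are hypothetical:

    def fetch_exchange_rate_stub(currency):
        # canned answer standing in for the real, unbuilt rate service
        return 1.25 if currency == "EUR" else 1.0

    def convert(amount, currency, rate_source=fetch_exchange_rate_stub):
        # high-level component under top-down test
        return amount * rate_source(currency)

    assert convert(100, "EUR") == 125.0   # testable before the real service exists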
Supplier:
An individual or organization that supplies inputs needed to generate a
product, service, or information to an end user.
Syntax:
1) The relationship among characters or groups of characters independent of
their meanings or the manner of their interpretation and use;
2) the structure of expressions in a language; and
3) the rules governing the structure of the language.
System:
A collection of people, machines, and methods organized to accomplish a set of
specified functions.
System Simulation:
Another name for prototyping.
System Testing:
The process of testing an integrated hardware and software system to verify
that the system meets its specified requirements.
Technical Review:
A review that refers to content of the technical material being reviewed.
Test Bed:
1) An environment that contains the integral hardware, instrumentation, simulators,
software tools, and other support elements needed to conduct a test of a
logically or physically separate component.
2) A suite of test programs used in conducting the test of a component or system.
Test Case:
The definition of test case differs from company to company, engineer to
engineer, and even project to project. A test case usually includes an
identified set of information about observable states, conditions, events, and
data, including inputs and expected outputs.
Test Development:
The development of anything required to conduct testing. This may include test
requirements (objectives), strategies, processes, plans, software, procedures,
cases, documentation, etc.
Test Executive:
Another term for test harness.
Test Harness:
A software tool that enables the testing of software components: it links test
capabilities to perform specific tests, accepts program inputs, simulates
missing components, compares actual outputs with expected outputs to determine
correctness, and reports discrepancies.
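In miniature, a harness is a loop that feeds inputs to the component, compares actual with expected outputs, and reports discrepancies. A toy Python sketch; the `add` component and its cases are illustrative stand-ins:

    def add(a, b):                    # stand-in for the component under test
        return a + b

    test_cases = [                    # (inputs, expected output)
        ((2, 3), 5),
        ((-1, 1), 0),
        ((0, 0), 0),
    ]

    failures = []
    for args, expected in test_cases:
        actual = add(*args)           # accept program inputs
        if actual != expected:        # compare actual with expected output
            failures.append((args, expected, actual))

    for args, expected, actual in failures:
        print(f"FAIL add{args}: expected {expected}, got {actual}")
    print(f"{len(test_cases) - len(failures)} of {len(test_cases)} passed")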
Test Objective:
An identified set of software features to be measured under specified
conditions by comparing actual behavior with the required behavior described in
the software documentation.
Test Plan:
A formal or informal plan to be followed to assure the controlled testing of
the product under test.
Test Procedure:
The formal or informal procedure that will be followed to execute a test. This
is usually a written document that allows others to execute the test with a
minimum of training.
Testing:
Any activity aimed at evaluating an attribute or capability of a program or
system to determine that it meets its required results. The process of
exercising or evaluating a system or system component by manual or automated
means to verify that it satisfies specified requirements or to identify
differences between expected and actual results.
Top-down Testing:
An integration testing technique that tests the high-level components first
using stubs for lower-level called components that have not yet been integrated
and that stimulate the required actions of those components.
Unit Testing:
The testing done to show whether a unit (the smallest piece of software that
can be independently compiled or assembled, loaded, and tested) satisfies its
functional specification or its implemented structure matches the intended
design structure.
User:
The end user that actually uses the product received.
V-Diagram (model):
A diagram that visualizes the order of testing activities and their
corresponding phases of development.
Validation:
The process of evaluating software to determine compliance with specified
requirements.
Verification:
The process of evaluating the products of a given software development activity
to determine correctness and consistency with respect to the products and
standards provided as input to that activity.
Walkthrough:
Usually, a step-by-step simulation of the execution of a procedure, as when
walking through code, line by line, with an imagined set of inputs. The term
has been extended to the review of material that is not procedural, such as
data descriptions, reference manuals, specifications, etc.
White-box Testing:
Testing approaches that examine the program structure and derive test data from
the program logic. This is also known as clear box testing, glass-box or
open-box testing. White box testing determines if program-code structure and
logic are faulty. The test is accurate only if the tester knows what the
program is supposed to do; he or she can then see if the program diverges from
its intended goal. White box testing does not account for errors caused by
omission, and all visible code must also be readable.
Tester Roles and Responsibilities
9.1 Test Manager
§ Single point of contact between the Wipro onsite and offshore teams
§ Prepare the project plan
§ Test Management
§ Test Planning
§ Interact with Wipro onsite lead, Client QA manager
§ Team management
§ Work allocation to the team
§ Test coverage analysis
§ Co-ordination with onsite for issue resolution.
§ Monitoring the deliverables
§ Verify readiness of the product for release through release review
§ Obtain customer acceptance on the deliverables
§ Performing risk analysis when required
§ Reviews and status reporting
§ Authorize intermediate deliverables and patch releases to the customer
9.2 Test Lead
§ Resolves technical issues for the product group
§ Provides direction to the team members
§ Performs activities for the respective product group
§ Review and approve Test Plans / Test Cases
§ Review Test Script / Code
§ Approve completion of Integration testing
§ Conduct System / Regression tests
§ Ensure tests are conducted as per plan
§ Reports status to the Offshore Test Manager
9.3 Test Engineer
§ Development of Test cases and Scripts
§ Test Execution
§ Result capturing and analysing
§ Follow the test plans, scripts etc. as documented.
§ Check tests are correct before reporting s/w faults
§ Defect Reporting and Status reporting
§ Assess risk objectively
§ Prioritize what you report
§ Communicate the truth.
9.4 How to Prioritize Tests:
§ We can't test everything.
§ There is never enough time to do all the testing you would like.
§ So prioritize tests.
Tips
§ Possible ranking criteria (all risk based):
§ Test where a failure would be most severe.
§ Test where failures would be most visible.
§ Take the help of the customer in understanding what is most important to him.
§ What is most critical to the customer's business.
§ Areas changed most often.
§ Areas with most problems in the past.
§ Most complex areas, or technically critical areas.
Note: If you follow the above, whenever you stop testing, you have done the
best testing in the time available. A sketch of this kind of risk-based ranking
follows below.
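One simple way to apply these criteria is to give each test area a weighted risk score and run the highest-scoring tests first. The sketch below is illustrative; the areas, factors and weights are assumptions, not prescriptions:

    tests = [
        # (test area, failure severity 1-5, visibility 1-5, change frequency 1-5)
        ("payment processing", 5, 4, 3),
        ("login",              4, 5, 2),
        ("report formatting",  2, 3, 1),
    ]

    def risk_score(severity, visibility, churn):
        return 3 * severity + 2 * visibility + churn   # assumed weights

    # run tests in descending order of risk; stopping early still covers
    # the riskiest areas first
    ranked = sorted(tests, key=lambda t: risk_score(*t[1:]), reverse=True)
    for name, *factors in ranked:
        print(f"{risk_score(*factors):>3}  {name}")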
How can we improve the efficiency in testing?
§ Recent years have seen a lot of outsourcing in the testing area.
§ It is the right time to think about and create processes that improve the
efficiency of testing projects.
§ The best team will result in the most efficient deliverables.
§ The team should contain 55% hard-core test engineers, 30% domain-knowledge
engineers and 15% technology engineers.
How did we arrive at these figures?
Past projects have shown that 50-60% of test cases are written on the basis of
testing techniques, 28-33% of test cases cover domain-oriented business rules,
and 15-18% are technology-oriented test cases.
Software testability is simply how easily a computer program can be tested; a
set of program characteristics (such as operability, observability,
controllability, simplicity and understandability) lead to testable software.
Testing
8.1 Test Strategy
§ Test strategy is a statement of the overall approach to testing, designed to
meet the business and test objectives.
§ It is a plan-level document and has to be prepared in the requirements stage
of the project.
§ It identifies the methods, techniques and tools to be used for testing.
§ It can be project-specific or organization-specific.
§ Developing a test strategy which effectively meets the needs of the
organization/project is critical to the success of the software development
effort.
§ An effective strategy has to meet the project and business objectives.
§ Defining the strategy upfront, before the actual testing, helps in planning
the test activities.
A test strategy will typically cover the following aspects:
§ Definition of test objective
§ Strategy to meet the specified objective
§ Overall testing approach
§ Test Environment
§ Test Automation requirements
§ Metric Plan
§ Risk Identification, Mitigation and Contingency plan
§ Details of Tools usage
§ Specific Document templates used in testing
8.2 Testing Approach
§ The test approach will be based on the objectives set for testing
§ The test approach will detail the way the testing is to be carried out
§ Types of testing to be done, viz. Unit, Integration and System testing
§ The method of testing, viz. Black-box, White-box etc.
§ Details of any automated testing to be done
8.3 Test Environment
§ All the Hardware and Software requirements for carrying out testing shall
be identified in detail.
§ Any specific tools required for testing will also be identified
§ If the testing is going to be done remotely, then it has to be considered
during estimation
8.4 Risk Analysis
§ Risk analysis should be carried out for the testing phase
§ Risk identification will be accomplished by identifying causes-and-effects
or effects-and-causes
§ The identified risks are classified into Internal and External risks
- The internal risks are things that the test team can control or influence.
- The external risks are things beyond the control or influence of the test
team.
§ Once risks are identified and classified, the following activities will be
carried out:
- Identify the probability of occurrence
- Identify the impact areas if the risk were to occur
- Risk mitigation plan: how to avoid this risk?
- Risk contingency plan: if the risk were to occur, what do we do?
8.5 Testing Limitations
§ You cannot test a program completely
§ We can only test against system requirements
- May not detect errors in the requirements.
- Incomplete or ambiguous requirements may lead to inadequate or incorrect
testing.
§ Exhaustive (total) testing is impossible in the present scenario.
§ Time and budget constraints normally require very careful planning of the
testing effort.
§ Compromise between thoroughness and budget.
§ Test results are used to make business decisions for release dates.
§ Even if you do find the last bug, you’ll never know it
§ You will run out of time before you run out of test cases
§ You cannot test every path
§ You cannot test every valid input
§ You cannot test every invalid input
8.6 Testing Objectives
§ You cannot prove a program correct (because it isn’t!)
§ The purpose of testing is to find problems
§ The purpose of finding problems is to get them corrected
8.7 Testing Metrics
- Time
– Time per test case
– Time per test script
– Time per unit test
– Time per system test
- Sizing
– Function points
– Lines of code
- Defects
– Number of defects
– Defects per sizing measure
– Defects per phase of testing
– Defect origin
– Defect removal efficiency
Defect Removal Efficiency = (Number of defects found in producer testing) /
(Number of defects found during the life of the product)

Size Variance = (Actual size - Planned size) / Planned size

Delivery Variance = (Actual end date - Planned end date) /
(Planned end date - Planned start date)

Effort Variance = (Actual effort - Planned effort) / Planned effort

Productivity = Size / Effort

Review Efficiency = (Number of defects found during review) / Review effort
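Expressed as code, these ratios are straightforward. Below is a minimal Python sketch; the function names and sample numbers are illustrative, not from any specific project:

    def defect_removal_efficiency(found_in_testing, found_in_product_life):
        return found_in_testing / found_in_product_life

    def size_variance(actual_size, planned_size):
        return (actual_size - planned_size) / planned_size

    def delivery_variance(actual_end, planned_end, planned_start):
        # dates expressed as day numbers from a common origin
        return (actual_end - planned_end) / (planned_end - planned_start)

    def effort_variance(actual_effort, planned_effort):
        return (actual_effort - planned_effort) / planned_effort

    print(f"DRE: {defect_removal_efficiency(80, 100):.0%}")    # 80%
    print(f"Size variance: {size_variance(120, 100):+.0%}")    # +20%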
8.8 Test Stop Criteria:
§ Minimum number of test cases successfully executed.
§ Minimum number of defects uncovered (e.g. 16 defects per 1000 statements).
§ Statement coverage target reached.
§ Testing becomes uneconomical.
§ Reliability model criteria met.
8.9 Six Essentials of Software Testing
- The quality of the test process determines the success
of the test effort.
- Prevent defect migration by using early life-cycle
testing techniques.
- The time for software testing tools is now.
- A real person must take responsibility for improving
the testing process.
- Testing is a professional discipline requiring trained,
skilled people.
- Cultivate a positive team attitude of creative
destruction.
8.10 What are the five common problems in the s/w development process?
Poor Requirements: If the requirements are unclear, incomplete, too general
and not testable, there will be problems.
Unrealistic Schedules: If too much work is crammed into too little time,
problems are inevitable.
Inadequate Testing: No one will know whether the system is good or not until
customers complain or the system crashes.
Featuritis: Requests to pile on new features after development is underway.
Extremely common.
Miscommunication: If the developers don't know what is needed, or customers
have erroneous expectations, problems are guaranteed.
8.11 Give me five common solutions to problems that occur during software
development.
Solid Requirements: Ensure the requirements are solid, clear, complete,
detailed, cohesive, attainable and testable.
Realistic Schedules: Have schedules that are realistic. Allow adequate time
for planning, design, testing, bug fixing, re-testing, changes and
documentation. Personnel should be able to complete the project without
burning out.
Adequate Testing: Do testing that is adequate. Start testing early on, re-test
after fixes or changes, and plan for sufficient time for both testing and bug
fixing.
Firm Requirements: Avoid new features. Stick to initial requirements as much
as possible.
Communication: Communicate early and often; require walkthroughs and
inspections when appropriate.
8.12 What should be done when there is not enough time for testing?
Use risk analysis to determine where testing should be focused:
§ Which functionality is most important to the project's intended purpose?
§ Which functionality is most visible to the user?
§ Which functionality has the largest safety impact?
§ Which functionality has the largest financial impact on users?
§ Which aspects of the application are most important to the customer?
§ Which aspects of the application can be tested early in the development
cycle?
§ Which parts of the code are most complex and thus most subject to errors?
§ Which parts of the application were developed in rush or panic mode?
§ Which aspects of similar/related previous projects caused problems?
§ Which aspects of similar/related previous projects had large maintenance
expenses?
§ Which parts of the requirements and design are unclear or poorly thought
out?
§ What do the developers think are the highest-risk aspects of the
application?
§ What kinds of problems would cause the worst publicity?
§ What kinds of problems would cause the most customer service complaints?
§ What kinds of tests could easily cover multiple functionalities?
§ Which tests will have the best high-risk-coverage to time-required ratio?
8.13 How do you know when to stop testing?
Common factors in deciding when to stop are...
§ Deadlines, e.g. release deadlines, testing deadlines;
§ Test cases completed with certain percentage passed;
§ Test budget has been depleted;
§ Coverage of code, functionality, or requirements reaches a specified
point;
§ Bug rate falls below a certain level; or
§ Beta or alpha testing period ends.
8.14 Why does software have bugs?
§ Miscommunication or No communication
§ Software Complexity
§ Programming Errors
§ Changing Requirements
§ Time Pressures
§ Poorly Documented Code
8.15 Different Types of Errors in Software
- User Interface Errors
- Error Handling
- Boundary related errors
- Calculation errors
- Initial and Later states
- Control flow errors
- Errors in Handling or Interpreting Data
- Race Conditions
- Load Conditions
- Hardware
7.1 What is Automation?
§ A software program that is used to test another software program; this is
referred to as “automated software testing”.
7.2 Why Automation
§ Avoid the errors that humans make when they get tired after multiple
repetitions.
§ The test program won't skip any test by mistake.
§ Each future test cycle will take less time and require less human
intervention.
§ Required for regression testing.
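As an illustration, here is a minimal automated regression test written with pytest. The `slugify` function and its test data are hypothetical; the point is that the test runs unattended, never skips a case, and can be re-run on every build:

    # app.py -- hypothetical code under test
    import re

    def slugify(title):
        """Turn 'Hello World!' into 'hello-world'."""
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    # test_app.py -- re-runnable, consistent, never tires
    import pytest

    @pytest.mark.parametrize("title, expected", [
        ("Hello World!", "hello-world"),
        ("  Already-Slugged  ", "already-slugged"),
    ])
    def test_slugify(title, expected):
        assert slugify(title) == expected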
7.3 Benefits of Test Automation:
§ Allows more testing to happen
§ Tightens / strengthens the test cycle
§ Testing is consistent, repeatable
§ Useful when new patches are released
§ Makes configuration testing easier
§ Test battery can be continuously improved.
7.4 False Benefits:
§ Fewer tests will be needed
§ It will be easier if it is automated
§ Compensates for poor design
§ No more manual testing.
7.5 What are the different tools available in the market?
§ Rational Robot
§ WinRunner
§ SilkTest
§ QA Run
§ WebFT
Defect Handling
What is a Defect?
· A defect is a coding error in a computer program.
· A software error is present when the program does not do what its end user
expects it to do.
Who can report a Defect?
Anyone involved in the software development life cycle, or anyone using the
software, can report a defect. In most cases, defects are reported by the
Testing Team.
A short list of people expected to report bugs:
- Testers / QA Engineers
- Developers
- Technical Support
- End Users
- Sales and Marketing Engineers
Defect Reporting
- A Defect or Bug Report is the medium of communication between the tester and
the programmer
- Provides clarity to the management, particularly at the summary level
- A Defect Report should be an accurate, concise, thoroughly edited,
well-conceived, high-quality technical document
- The problem should be described in a way that maximizes the probability that
it will be fixed
- A Defect Report should be non-judgmental and should not point a finger at
the programmer
- A crisp defect reporting process improves the test team's communications
with senior and peer management
Defect Life Cycle
§ The Defect Life Cycle helps in handling defects efficiently.
§ The DLC helps users know the status of a defect at any point, as the sketch
below illustrates.
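As an illustration, a defect life cycle can be modeled as a small state machine. The state names and transitions below are one common arrangement, not a prescribed standard; organizations vary:

    from enum import Enum, auto

    class DefectState(Enum):          # assumed state names
        NEW = auto()
        ASSIGNED = auto()
        OPEN = auto()
        FIXED = auto()
        RETEST = auto()
        CLOSED = auto()
        REOPENED = auto()

    ALLOWED = {                        # legal transitions in this sketch
        DefectState.NEW:      {DefectState.ASSIGNED},
        DefectState.ASSIGNED: {DefectState.OPEN},
        DefectState.OPEN:     {DefectState.FIXED},
        DefectState.FIXED:    {DefectState.RETEST},
        DefectState.RETEST:   {DefectState.CLOSED, DefectState.REOPENED},
        DefectState.REOPENED: {DefectState.OPEN},
        DefectState.CLOSED:   set(),
    }

    def transition(current, new):
        if new not in ALLOWED[current]:
            raise ValueError(f"illegal transition {current.name} -> {new.name}")
        return new

    state = DefectState.NEW
    state = transition(state, DefectState.ASSIGNED)   # defect triaged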
Software Development Life Cycles
Life cycle: the entire duration of a project, from inception to termination.
Different life cycle models:
2.1. Code-and-fix model:
- Earliest software development approach (1950s)
- Iterative, programmers' approach
- Two phases: 1. coding, 2. fixing the code
No provision for:
- Project planning
- Analysis
- Design
- Testing
- Maintenance
Problems with the code-and-fix model:
- After several iterations, code became very poorly structured; subsequent
fixes became very expensive
- Even well-designed software often very poorly matched users' requirements:
it was rejected or needed to be redeveloped (expensively!)
- Changes to code were expensive, because of poor testing and maintenance
practices
Solutions:
- Design before coding
- Requirements analysis before design
- Separate testing and maintenance phases after coding
2.2. Waterfall model:
- Also called the classic life cycle
- Introduced in 1956 to overcome limitations of the code-and-fix model
- Very structured, organized approach, suitable for planning
- The waterfall model is a linear approach, quite inflexible
- At each phase, feedback to previous phases is possible (but is discouraged
in practice)
- Still the most widespread model today
Main phases:
- Requirements
- Analysis
- Design (overall design & detailed design)
- Coding
- Testing (unit test, integration test, acceptance test)
- Maintenance
Approaches
The standard waterfall model for systems development is an approach that goes
through the following steps:
1. Document System Concept
2. Identify System Requirements and Analyze Them
3. Break the System into Pieces (Architectural Design)
4. Design Each Piece (Detailed Design)
5. Code the System Components and Test Them Individually (Coding, Debugging,
and Unit Testing)
6. Integrate the Pieces and Test the System (System Testing)
7. Deploy the System and Operate It
Waterfall Model Assumptions
§ The requirements are knowable in advance of implementation.
§ The requirements have no unresolved, high-risk implications -- e.g., risks
due to COTS choices, cost, schedule, performance, safety, security, user
interface, or organizational impacts.
§ The nature of the requirements is compatible with all the key system
stakeholders' expectations -- e.g., users, customer, developers, maintainers,
investors.
§ The right architecture for implementing the requirements is well understood.
§ There is enough calendar time to proceed sequentially.
Advantages of the Waterfall Model
§ Conversion of existing projects into new projects.
§ For proven platforms and technologies, it works fine.
§ Suitable for short-duration projects.
§ The waterfall model is effective when there is no change in the requirements
and the requirements are fully known.
§ If there is no rework, this model builds a high-quality product.
§ The stages are clear cut.
§ All R&D is done before coding starts, which implies better-quality program
design.
Disadvantages of the Waterfall Model:
§ Testing is postponed to a later stage, till coding completes.
§ Not suitable for large projects.
§ It assumes a uniform and orderly sequence of steps.
§ Risky for projects where the technology itself is a risk.
§ Correction at the end of a phase needs correction to the previous phase, so
rework is high.
§ Real projects rarely follow a sequential process.
§ It is difficult to define all requirements at the beginning of a project.
§ The model has problems adapting to change.
§ A working version of the system is not seen until late in the project's
life.
§ Errors are discovered late (repairing a problem further along the life cycle
becomes progressively more expensive).
§ Maintenance cost can be as much as 70% of system costs.
- Delivery only at the end (long wait)
2.3. Prototyping model:
- Introduced to overcome shortcomings of the waterfall model
- Suitable for overcoming the problem of requirements definition
- Prototyping builds an operational model of the planned system, which the
customer can evaluate
Main phases:
1. Requirements gathering
2. Quick design
3. Build prototype
4. Customer evaluation of prototype
5. Refine prototype (iterate steps 4 and 5 to "tune" the prototype)
6. Engineer product
Note: Mostly, the prototype is discarded after step 5 and the actual system is
built from scratch in step 6 (throw-away prototyping).
Possible problems:
- The customer may object to the prototype being thrown away and may demand
"a few changes" to make it work (results in poor software quality and
maintainability)
- Inferior, temporary design solutions may become permanent after a while,
when the developer has forgotten that they were only intended to be temporary
(results in poor software quality)
Advantages
§ Helps counter the limitations of the waterfall model
§ After the prototype is developed, the end user and the client are permitted
to use the application, and further modifications are done based on their
feedback.
§ User oriented:
- What the user sees
- Not enigmatic diagrams
- Quicker error feedback
- Earlier training
- Possibility of developing a system that closely addresses users' needs and
expectations
Disadvantages
- Development costs are high.
- User expectations
- Bypassing of analysis
- Documentation
- Never ending
- Managing the prototyping process is difficult because of its rapid,
iterative nature
- Requires feedback on the prototype
- Incomplete prototypes may be regarded as complete systems
2.4 Incremental:
During the first one-month phase, the development team worked from static
visual designs to code a prototype. In focus group meetings, the team
discussed users' needs and the potential features of the product, and then
showed a demonstration of its prototype. The excellent feedback from these
focus groups had a large impact on the quality of the product.
Main phases:
1. Define outline requirements
2. Assign requirements to increments
3. Design system architecture
4. Develop
5. Integrate
6. Validate
After the second group of focus groups, the feature set was frozen and the
product definition complete. Implementation consisted of four-to-six-week
cycles, with software delivered for beta use at the end of each cycle. The
entire release took 10 months from definition to manufacturing release.
Implementation lasted 4.5 months. The result was a world-class product that
has won many awards and has been easy to support.
2.5 V-Model:
Verification → (static system: "are we building the product right?") Tests the
system's correctness, i.e. whether the system is functioning as per
specifications.
§ Typically involves reviews and meetings to evaluate documents, plans, code,
requirements and specifications.
§ This can be done with checklists, issue lists, walkthroughs and inspection
meetings.
Validation → (dynamic system: "are we building the right product?") Tests the
system in a real environment, i.e. whether the software caters to the
customer's requirements.
§ Typically involves actual testing, and takes place after verifications are
completed.
Advantages
- Reduces the cost of defect repair (since every document is verified by
testers)
- No idle time for testers
- The efficiency of the V-model is higher compared to the Waterfall model
- Change management can be effected in the V-model
Disadvantages
- Risk management is not possible
- Applicable only to medium-sized projects
2.6 Spiral model:
- Objective: overcome the problems of other models, while combining their
advantages
- Key component: risk management (because traditional models often fail when
risk is neglected)
- Development is done incrementally, in several cycles; cycle as often as
necessary to finish
Main phases:
1. Determine objectives, alternatives for development, and constraints for the
portion of the whole system to be developed in the current cycle
2. Evaluate alternatives, considering objectives and constraints; identify and
resolve risks
3. Develop the current cycle's part of the system, using evolutionary or
conventional development methods (depending on remaining risks); perform
validation at the end
4. Prepare plans for subsequent phases
Spiral Model
This model is very appropriate for large software projects. The model consists
of four main parts, or blocks, and the process is shown by a continuous loop
going from the outside towards the inside. This shows the progress of the
project.
§ Planning
This phase is where the objectives, alternatives, and constraints are
determined.
§ Risk Analysis
Here, alternative solutions and constraints are defined, and risks are
identified and analyzed. If risk analysis indicates uncertainty in the
requirements, the prototyping model might be used to assist the situation.
§ Engineering
Here the customer decides when the next phase of planning and risk analysis
occurs. If it is determined that the risks are too high, the project can be
terminated.
§ Customer Evaluation
In this phase, the customer will assess the engineering results and make
changes if necessary.
Spiral model flexibility
- Well-understood systems (low technical risk): Waterfall model; the risk
analysis phase is relatively cheap
- Stable requirements and formal specification; safety criticality: formal
transformation model
- High UI risk, incomplete specification: prototyping model
- Hybrid models accommodated for different parts of the project
Advantages of the spiral model:
- Good for large and complex projects
- Customer evaluation allows for any changes deemed necessary, or allows new
technological advances to be used
- Allows customer and developer to determine and react to risks at each
evolutionary level
- Direct consideration of risks at all levels greatly reduces problems
Problems with the spiral model:
- Difficult to convince the customer that this approach is controllable
- Requires significant risk assessment expertise to succeed
- Not yet widely used; efficacy not yet proven
- If a risk is not discovered, problems will surely occur
2.7 RAD Model
- RAD refers to a development life cycle designed to give
much faster development and higher quality systems than the traditional
life cycle.
- It is designed to take advantage of powerful development software like
CASE tools, prototyping tools and code generators.
- The key objectives of RAD are: High Speed, High Quality
and Low Cost.
- RAD is a people-centered and incremental development
approach.
- Active user involvement, as well as collaboration and
co-operation between all stakeholders are imperative.
- Testing is integrated throughout the development life
cycle so that the system is tested and reviewed by both developers and
users incrementally.
Problems Addressed by RAD
- With conventional methods, there is a long delay before the customer gets to
see any results.
- With conventional methods, development can take so long that the customer's
business has fundamentally changed by the time the system is ready for use.
- With conventional methods, there is nothing until 100% of the process is
finished, then 100% of the software is delivered.
Bad Reasons for Using RAD
- To prevent cost overruns
(RAD needs a team already
disciplined in cost management)
- To prevent runaway schedules
(RAD needs a team already
disciplined in time management)
Good Reasons for using RAD
- To converge early toward a design acceptable to the
customer and feasible for the developers
- To limit a project's exposure to the forces of change
- To save development time, possibly at the expense of
economy or product quality
RAD in SDLC
- The stages of RAD map onto the System Development Life Cycle (SDLC) of ITSD,
stage by stage.
Essential Ingredients of RAD
- RAD has four essential ingredients:
· Tools
· Methodology
· People
· Management
The following benefits can be realized in using RAD:
- A high-quality system will be delivered because of the methodology, tools
and user involvement;
- Business benefits can be realized earlier;
- Capacity will be utilized to meet a specific and urgent business need;
- Standards and consistency can be enforced through the use of CASE tools.
In the long run, we will also achieve that:
- The time required to get a system developed will be reduced;
- The productivity of developers will be increased.
Advantages of RAD
- Buying may save money compared to building
- Deliverables sometimes easier to port
- Early visibility
- Greater flexibility (because developers can redesign
almost at will)
- Greatly reduced manual coding (because of wizards, code
generators, code reuse)
- Increased user involvement (because they are
represented on the team at all times)
- Possibly reduced cost (because time is money, also
because of reuse)
Disadvantages of RAD
- Buying may not save money compared to building
- Cost of integrated toolset and hardware to run it
- Harder to gauge progress (because there are no classic
milestones)
- Less efficient (because code isn't hand crafted)
- More defects
- Reduced features
- Requirements may not converge
- Standardized look and feel (undistinguished, lackluster
appearance)
- Successful efforts difficult to repeat
- Unwanted features
Areas of Testing:
§ Black Box Testing
§ White Box Testing
§ Gray Box Testing
Black Box Testing
§ Tests the correctness of the functionality with the help of inputs and
outputs.
§ The user doesn't require knowledge of the software code.
§ Black box testing is also called Functionality Testing.
It attempts to find errors in the following categories:
§ Incorrect or missing functions.
§ Interface errors.
§ Errors in data structures or external database access.
§ Behavior or performance based errors.
§ Initialization or termination errors.
Approach:
Equivalence Class:
- For each piece of the specification, generate one or more equivalence
classes
- Label the classes as "Valid" or "Invalid"
- Generate one test case for each Invalid equivalence class
- Generate a test case that covers as many Valid equivalence classes as
possible
An input condition for an equivalence class may be:
- A specific numeric value
- A range of values
- A set of related values
- A Boolean condition
Equivalence classes can be defined using the following guidelines (see the
sketch after this list):
- If an input condition specifies a range, one valid and two invalid
equivalence classes are defined.
- If an input condition requires a specific value, one valid and two invalid
equivalence classes are defined.
- If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.
- If an input condition is Boolean, one valid and one invalid class are
defined.
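For example, applying the first guideline to a hypothetical rule that an age must be between 18 and 60 yields one valid and two invalid classes, each represented by a single test case. A minimal pytest sketch (names and values are illustrative):

    import pytest

    def is_eligible(age):             # hypothetical rule: age must be 18-60
        return 18 <= age <= 60

    @pytest.mark.parametrize("age, expected", [
        (35, True),    # valid class: inside the range
        (10, False),   # invalid class: below the range
        (70, False),   # invalid class: above the range
    ])
    def test_age_equivalence_classes(age, expected):
        assert is_eligible(age) == expected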
Boundary Value Analysis
- Generate test cases for the boundary values, as the sketch below shows:
- Minimum value, minimum value + 1, minimum value - 1
- Maximum value, maximum value + 1, maximum value - 1
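A small helper can enumerate these values. The sketch below, reusing the hypothetical 18-60 range, is illustrative only:

    def boundary_values(minimum, maximum):
        # test at each boundary and one step either side of it
        return sorted({minimum - 1, minimum, minimum + 1,
                       maximum - 1, maximum, maximum + 1})

    print(boundary_values(18, 60))    # [17, 18, 19, 59, 60, 61]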
Error Guessing
- Generating test cases against the specification, guided by experience of
where errors are likely.
White Box Testing
§ Testing the internal program logic.
§ White box testing is also called Structural Testing.
§ The user does require knowledge of the software code.
Purpose
§ Testing all loops
§ Testing basis paths
§ Testing conditional statements
§ Testing data structures
§ Testing logic errors
§ Testing incorrect assumptions
Structure = 1 Entry + 1 Exit, with certain constraints, conditions and loops.
Logic errors and incorrect assumptions are most likely to be made while coding
for "special cases". We need to ensure these execution paths are tested.
Approach
Basis Path Testing (Cyclomatic Complexity, McCabe Method)
- Measures the logical complexity of a procedural design.
- Provides flow-graph notation to identify independent paths of processing.
- Once paths are identified, tests can be developed for loops and conditions
(see the sketch after this list).
- The process guarantees that every statement will get executed at least once.
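As a minimal illustration, consider the hypothetical function below. With two simple decisions, its cyclomatic complexity is V(G) = 2 + 1 = 3, so three test cases cover a basis set of independent paths:

    def grade(score):                 # hypothetical procedure under test
        if score < 0:                 # decision 1
            return "invalid"
        if score >= 50:               # decision 2
            return "pass"
        return "fail"

    # V(G) = decisions + 1 = 3; one test per independent path:
    assert grade(-5) == "invalid"     # path: decision 1 true
    assert grade(80) == "pass"        # path: decision 1 false, decision 2 true
    assert grade(30) == "fail"        # path: both decisions false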
Structure Testing:
- Condition Testing: all logical conditions contained in the program module
should be tested.
- Data Flow Testing: selects test paths according to the location of
definitions and use of variables.
- Loop Testing (a sketch follows below):
· Simple loops
· Nested loops
· Concatenated loops
· Unstructured loops
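For a simple loop, a common heuristic is to exercise it zero times, once, twice, a typical number of times, and many times. A minimal sketch; the `total` function is illustrative:

    def total(values):
        s = 0
        for v in values:              # the simple loop under test
            s += v
        return s

    # zero, one, two, typical, and many iterations of the loop body
    for case in ([], [1], [1, 2], [1, 2, 3, 4], list(range(1000))):
        assert total(case) == sum(case)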
Gray Box Testing
§ It is just a combination of both black box and white box testing.
§ It is platform independent and language independent.
§ Used to test embedded systems.
§ Functionality and behavioral parts are tested.
§ The tester should have knowledge of both the internals and externals of the
function.
§ If you know something about how the product works on the inside, you can
test it better from the outside.
Gray box testing is especially
important with Web and Internet applications, because the Internet is built
around loosely integrated components that connect via relatively well-defined
interfaces. Unless you understand the architecture of the Net, your testing
will be skin deep.
Why Software Testing?
1. To discover defects.
2. To avoid users detecting problems.
3. To prove that the software has no faults.
4. To learn about the reliability of the software.
5. To avoid being sued by customers.
6. To ensure that the product works as the user expected.
7. To stay in business.
8. To detect defects early, which helps in reducing the cost of defect fixing.
Cost of Defect Repair

Phase         | % Cost
Requirements  | 0
Design        | 10
Coding        | 20
Testing       | 50
Customer Site | 100
How exactly Testing is different from QA/QC
Testing is often confused with the processes of quality control and quality
assurance.
Testing
§ It is the process of creating, implementing and evaluating tests.
§ Testing measures software quality.
§ Testing can find faults; when they are removed, software quality is
improved.
Quality Control (QC)
§ It is the process of inspections, walkthroughs and reviews.
§ Measures the quality of the product.
§ It is a detection process.
Quality Assurance (QA)
§ Monitoring and improving the entire SDLC process.
§ Making sure that all agreed-upon standards and procedures are followed.
§ Ensuring that problems are found and addressed.
§ Measures the quality of the process used to create a good quality product.
§ It is a prevention process.
Why should we need an approach for testing?
Yes, we definitely need an approach for testing. To overcome the following
problems, we need a formal approach to testing.
Incomplete functional coverage: Assessing the completeness of testing is a
difficult task for the testing team without a formal approach. The team will
not be in a position to announce the percentage of testing completed.
No risk management: There is no way to measure overall risk issues regarding
code coverage and quality metrics. Effective quality assurance measures
quality over time, starting from a known base of evaluation.
Too little emphasis on user tasks: Testers will focus on ideal paths instead
of real paths. With no time to prepare, ideal paths are defined according to
best guesses or developer feedback rather than by careful consideration of how
users will understand the system or how users understand real-world analogues
to the application tasks. With no time to prepare, testers will be using a
very restricted set of input data, rather than using real data (from user
activity logs, from logical scenarios, from careful consideration of the
concept domain).
Inefficient over the long term: Quality assurance involves a range of tasks.
Effective quality assurance programs expand their base of documentation on the
product and on the testing process over time, increasing the coverage and
granularity of tests. Great testing requires good test setup and preparation,
but success with the kind of plan-less approach described here may reinforce
bad project and test methodologies. A continued pattern of quick-and-dirty
testing like this is a sign that the product or application is unsustainable
in the long run.
The Software Life Cycle
What is Manual Testing?
Testing activities performed by people without the help of software testing
tools.
What is Software Quality?
Software that is reasonably bug-free, delivered on time, within budget, meets
all requirements, and is maintainable.
1. The Software Life Cycle
All the stages from start to finish that take place when developing new
software.
· Feasibility Study
– What exactly is this system supposed to do?
· Analysis
– Determine and list out the details of the problem.
· Design
– How will the system solve the problem?
· Coding
– Translating the design into the actual system.
· Testing
– Does the system solve the problem?
– Have the requirements been satisfied?
– Does the system work properly in all situations?
· Maintenance
– Bug fixes
§ The software life cycle is a description of the events that occur between
the birth and death of a software project, inclusively.
§ The SDLC is separated into phases (steps, stages).
§ The SDLC also determines the order of the phases, and the criteria for
transitioning from phase to phase.
Change Requests on Requirement Specifications
Why Customers Ask for Change Requests
§ Different users/customers have different requirements.
§ Requirements get clarified/known at a later date.
§ Changes to the business environment.
§ Technology changes.
§ Misunderstanding of the stated requirements due to lack of domain knowledge.
How to Communicate the Change Requests to the Team
§ Formal indication of changes to requirements
§ Joint review meetings
§ Regular daily communication
§ Queries
§ Defects reported by clients during testing
§ Client reviews of SRS, SDD, Test Plans etc.
§ Across the corridor/desk (internal projects)
§ Presentations/demonstrations
Analyzing the Changes
§ Classification
- Specific
- Generic
§ Categorization
- Bug
- Enhancement
- Clarification etc.
§ Impact Analysis
- Identify the items that will be affected
- Time estimations
- Any other clashes / open issues raised due to this?
Benefits of Accepting Change Requests
1. Direct Benefits
• Facilitates proper control and monitoring
• Metrics speak for themselves
• You can buy more time.
• You may be able to bill more.
2. Indirect Benefits
• Builds customer confidence.
What can be done if the requirements are changing continuously?
§ Work with project stakeholders early on to understand how the requirements
might change, so that alternative test plans and strategies can be worked out
in advance.
§ It is helpful if the application's initial design has some adaptability, so
that later changes do not require redoing the application from scratch.
§ If the code is well commented and well documented, it is easy for developers
to make changes.
§ Use rapid prototyping whenever possible to help customers feel sure of their
requirements and to minimize changes.
§ Negotiate to allow only easily implemented new requirements into the
project, while moving more difficult new requirements into future versions of
the application.