1. What is the difference between BRS, SRS, and FRS?
BRS/Search is a full-text database and information retrieval system. BRS/Search uses a fully inverted indexing system to store, locate, and retrieve unstructured data. It was the search engine that in 1977 powered Bibliographic Retrieval Services (BRS) commercial operations with 20 databases (including the first national commercial availability of MEDLINE); it has changed ownership several times during its development and is currently sold as Livelink ECM Discovery Server by Open Text Corporation.
A Software Requirements Specification (SRS) is a complete description of the behavior of the system to be developed. It includes a set of use cases that describe all the interactions the users will have with the software. Use cases are also known as functional requirements. In addition to use cases, the SRS also contains non-functional (or supplementary) requirements. Non-functional requirements are requirements which impose constraints on the design or implementation (such as performance engineering requirements, quality standards, or design constraints).
When the File Replication Service (FRS) detects a change to a file, such as the creation of a new file or the modification of an existing file, it replicates the file to the other servers in the group. To deal with conflicts (when two copies of a file are edited at the same time on different servers), the service resolves the issue by using the file with the latest date and time.
One of the main uses of FRS is for the SYSVOL directory share. The SYSVOL directory share is particularly important in a Microsoft network as it is used to distribute files supporting Group Policy and scripts to client computers on the network. Since Group Policies and scripts are run each time a user logs on to the system, it is important to have reliability. Having multiple copies of the SYSVOL directory increases the resilience and spreads the workload for this essential service.
2. Difference between BRS and FRS
The main difference between BRS and FRS is that a BRS tells the whole requirement (the story), whereas the FRS tells the sequence of operations to be performed by a single process.
BRS is actually a document that covers the business aspect of a requirement at a broad level. For example, suppose you want to develop a new website. Your BRS would address what business your website is being built for. Let's say it is a website like eBay that allows people to shop online. This would be the business requirement covered in the BRS.
The FRS would then address each function the website provides in order to make the shopping experience of the people visiting the website efficient and easy. It would also address issues, such as security, that may need to be built into the website.
Both the BR and FR can be addressed in the same document; this depends on the organization.
Both the BRS and FRS are prepared by the business analyst (BA), who captures the requirements from the end user. A developer would be involved in producing a technical document addressing the technical design of the website, which the BA may or may not concern himself with.
3. Regression vs. Retesting
You must retest fixes to ensure that issues have been resolved before development can progress. Retesting, then, is the act of repeating a test to verify that a found defect has been correctly fixed.

Regression testing, on the other hand, is the act of repeating other tests in 'parallel' areas to ensure that the applied fix or change of code has not introduced other errors or unexpected behaviour.

For example, if an error is detected in a particular file-handling routine, it might be corrected by a simple change of code. If that code is utilised in a number of different places throughout the software, however, the effects of such a change could be difficult to anticipate. What appears to be a minor detail could affect a separate module of code elsewhere in the program. A bug fix could in fact be introducing bugs elsewhere.

You would be surprised to learn how common this actually is. In empirical studies it has been estimated that up to 50% of bug fixes actually introduce additional errors in the code. Given this, it's a wonder that any software project makes its delivery on time.

Better QA processes will reduce this ratio but will never eliminate it. Programmers risk introducing casual errors every time they place their hands on the keyboard. An inadvertent slip of a key that replaces a full stop with a comma might not be detected for weeks but could have serious repercussions.

Regression testing attempts to mitigate this problem by assessing the 'area of impact' affected by a change or a bug fix to see if it has unintended consequences. It verifies known good behaviour after a change.
Below are the differences between Regression Testing and Retesting.
a) Retesting is carried out to verify defect fixes. Regression testing is done to check that the defect fixes have not impacted other functionality of the application that was working fine before the code changes were applied.
b) Retesting is planned based on the defect fixes listed in the build notes. Regression testing is generic, may not always be specific to any defect fix or code change, and can be planned as regional or full regression testing.
c) Retesting involves executing test cases that failed earlier; regression testing involves executing test cases that passed in earlier builds, i.e., functionality that was working in earlier builds.
d) Retesting involves rerunning failed test cases associated with the defect fixes being verified. Regression testing does not involve verifying a defect fix but only executing regression test cases.
e) Retesting always takes higher priority than regression testing, i.e., regression testing is done after retesting is complete. In projects with ample testing resources, regression testing is carried out in parallel with retesting.
f) Though retesting and regression testing have different objectives and priorities, they are equally important to a project's success; a short code sketch contrasting the two follows below.
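As a concrete illustration, here is a minimal sketch of how the two activities can be separated in an automated suite. This is illustrative only: the `apply_discount` function, the test module, and the `regression` marker are hypothetical, not taken from any real project.

```python
# test_discount.py -- minimal pytest sketch contrasting retesting with
# regression testing. The function under test is a hypothetical example.
import pytest

def apply_discount(total, percent):
    """Toy function under test: price after a percentage discount."""
    return round(total * (1 - percent / 100), 2)

def test_ten_percent_discount():
    # Retesting target: suppose this case failed in the previous build
    # (defect: discount applied twice); it is rerun to verify the fix.
    assert apply_discount(100.0, 10) == 90.0

@pytest.mark.regression
def test_zero_percent_discount():
    # Regression case: it passed in earlier builds and is rerun to
    # confirm the fix has not broken previously working behaviour.
    assert apply_discount(100.0, 0) == 100.0
```

With the `regression` marker registered in pytest.ini, retesting the previous failures would be `pytest --last-failed`, while the regression pass would be `pytest -m regression`; both are standard pytest options.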
| # | Retesting | Regression Testing |
| --- | --- | --- |
| 1 | Retesting is the process of checking whether the reported bugs have been fixed by the development team. | Testing conducted for the purpose of evaluating whether or not a change to the system (all CM items) has introduced a new failure. |
| 2 | Purpose: to identify whether the given bugs/issues/defects are fixed or not. | Purpose: to identify whether, on fixing the issues/bugs/defects, new issues have been introduced into the system. |
| 3 | To carry out retesting, the issue report given by the testing team with developer comments is enough (i.e., whether the issue is fixed or not is noted in the developer-comments column). | To carry out regression testing, the requirements traceability matrix needs to be provided along with the issue report. |
4. Software Development Life Cycle (SDLC)
Specifies the various stages of software development:
- System Requirements Analysis.
- Feasibility study
- Systems Analysis and Design
- Code Generation
- Testing
- Implementation
- Maintenance
Software Testing Life Cycle (STLC): Specifies the various stages of testing.
1. Requirements Stage
   a. Requirement Specification documents
   b. Functional Specification documents
   c. Use Case documents
   d. Test Traceability Matrix for identifying test coverage
2. Test Plan
   a. Test scope, test environment
   b. Different test phases and test methodologies
   c. Manual and automation testing
   d. Defect management, configuration management, risk management, etc.
3. Test Design
   a. Test case preparation
   b. Test Traceability Matrix for identifying test cases
   c. Test case reviews and approval
4. Test Execution
   a. Executing test cases
   b. Capture, review, and analyze test results
5. Defect Tracking
   a. Find the defect and track it to closure
6. Bug Reporting
   a. Report the defect in a tool or Excel
7. Regression/Retesting
Each of these stages has definite entry and exit criteria, activities, and deliverables associated with it.
In an ideal world you will not enter the next stage until the exit criteria for the previous stage are met, but practically this is not always possible. So, for this tutorial, we will focus on the activities and deliverables for the different stages in the STLC. Let's look into them in detail.
Requirement Analysis
During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements. The QA team may interact with various stakeholders (client, business analyst, technical leads, system architects, etc.) to understand the requirements in detail. Requirements can be either functional (defining what the software must do) or non-functional (defining system performance, security, availability). Automation feasibility for the given testing project is also assessed in this stage.

Activities
- Identify types of tests to be performed.
- Gather details about testing priorities and focus.
- Prepare Requirement Traceability Matrix (RTM).
- Identify test environment details where testing is supposed to be carried out.
- Automation feasibility analysis (if required).
Deliverables
- RTM
- Automation feasibility report. (if applicable)
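Because the RTM listed among the deliverables above recurs throughout the STLC, a tiny sketch of one may help; the requirement IDs and test case names below are hypothetical illustrations, not part of any standard.

```python
# Minimal Requirement Traceability Matrix (RTM) sketch: map each
# requirement to the test cases covering it, then flag coverage gaps.
# All IDs are hypothetical examples.
rtm = {
    "REQ-001 (user login)":      ["TC-01", "TC-02"],
    "REQ-002 (password reset)":  ["TC-03"],
    "REQ-003 (session timeout)": [],  # requirement with no coverage yet
}

for requirement, test_cases in rtm.items():
    coverage = ", ".join(test_cases) if test_cases else "NO COVERAGE"
    print(f"{requirement}: {coverage}")
```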
Test Planning
This phase is also called the Test Strategy phase. Typically, in this stage, a senior QA manager will determine effort and cost estimates for the project and will prepare and finalize the test plan.

Activities
- Preparation of test plan/strategy document for various types of testing
- Test tool selection
- Test effort estimation
- Resource planning and determining roles and responsibilities.
- Training requirement
Deliverables
- Test plan /strategy document.
- Effort estimation document.
Test Case Development
This phase involves the creation, verification, and rework of test cases and test scripts. Test data is identified/created, reviewed, and then reworked as well.

Activities
- Create test cases, automation scripts (if applicable)
- Review and baseline test cases and scripts
- Create test data (If Test Environment is available)
Deliverables
- Test cases/scripts
- Test data
Test Environment Setup
The test environment determines the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of the testing process and can be done in parallel with the test case development stage. The test team may not be involved in this activity if the customer or development team provides the test environment, in which case the test team is required to do a readiness check (smoke testing) of the given environment.

Activities
- Understand the required architecture, environment set-up and prepare hardware and software requirement list for the Test Environment.
- Setup test Environment and test data
- Perform smoke test on the build
Deliverables
- Environment ready with test data set up
- Smoke Test Results.
Test Execution
During this phase the test team will carry out the testing based on the test plans and the test cases prepared. Bugs will be reported back to the development team for correction, and retesting will be performed.

Activities
- Execute tests as per plan
- Document test results, and log defects for failed cases
- Map defects to test cases in RTM
- Retest the defect fixes
- Track the defects to closure
Deliverables
- Completed RTM with execution status
- Test cases updated with results
- Defect reports
Test Cycle Closure
The testing team will meet, discuss, and analyze testing artifacts to identify strategies that should be implemented in the future, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles and to share best practices for any similar projects in the future.

Activities
- Evaluate cycle completion criteria based on time, test coverage, cost, software quality, and critical business objectives
- Prepare test metrics based on the above parameters.
- Document the learning out of the project
- Prepare Test closure report
- Qualitative and quantitative reporting of quality of the work product to the customer.
- Test result analysis to find out the defect distribution by type and severity.
Deliverables
- Test Closure report
- Test metrics
Finally, a summary of the STLC along with entry and exit criteria:

| STLC Stage | Entry Criteria | Activity | Exit Criteria | Deliverables |
| --- | --- | --- | --- | --- |
| Requirement Analysis | Requirements document available (both functional and non-functional). Acceptance criteria defined. Application architectural document available. | Analyse business functionality to know the business modules and module-specific functionalities. Identify all transactions in the modules. Identify all the user profiles. Gather user interface/authentication and geographic-spread requirements. Identify types of tests to be performed. Gather details about testing priorities and focus. Prepare Requirement Traceability Matrix (RTM). Identify test environment details where testing is supposed to be carried out. Automation feasibility analysis (if required). | Signed-off RTM. Test automation feasibility report signed off by the client. | RTM. Automation feasibility report (if applicable). |
| Test Planning | Requirements documents. Requirement Traceability Matrix. Test automation feasibility document. | Analyze the various testing approaches available. Finalize the best-suited approach. Prepare the test plan/strategy document for various types of testing. Test tool selection. Test effort estimation. Resource planning and determining roles and responsibilities. | Approved test plan/strategy document. Effort estimation document signed off. | Test plan/strategy document. Effort estimation document. |
| Test Case Development | Requirements documents. RTM and test plan. Automation analysis report. | Create test cases and automation scripts (where applicable). Review and baseline test cases and scripts. Create test data. | Reviewed and signed test cases/scripts. Reviewed and signed test data. | Test cases/scripts. Test data. |
| Test Environment Setup | System design and architecture documents are available. Environment set-up plan is available. | Understand the required architecture and environment set-up. Prepare hardware and software requirement list. Finalize connectivity requirements. Prepare environment setup checklist. Set up test environment and test data. Perform smoke test on the build. Accept/reject the build depending on the smoke test result. | Environment setup is working as per the plan and checklist. Test data setup is complete. Smoke test is successful. | Environment ready with test data set up. Smoke test results. |
| Test Execution | Baselined RTM, test plan, and test cases/scripts are available. Test environment is ready. Test data set-up is done. Unit/integration test report for the build to be tested is available. | Execute tests as per plan. Document test results and log defects for failed cases. Update test plans/test cases if necessary. Map defects to test cases in the RTM. Retest the defect fixes. Regression testing of the application. Track the defects to closure. | All planned tests are executed. Defects logged and tracked to closure. | Completed RTM with execution status. Test cases updated with results. Defect reports. |
| Test Cycle Closure | Testing has been completed. Test results are available. Defect logs are available. | Evaluate cycle completion criteria based on time, test coverage, cost, software quality, and critical business objectives. Prepare test metrics based on the above parameters. Document the learning from the project. Prepare the test closure report. Qualitative and quantitative reporting of the quality of the work product to the customer. Test result analysis to find the defect distribution by type and severity. | Test closure report signed off by the client. | Test closure report. Test metrics. |
5. What is the systems development life cycle (SDLC)?
(SDLC is also an abbreviation for Synchronous Data Link Control.)
The systems development life cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system development project, from an initial feasibility study through maintenance of the completed application.

Various SDLC methodologies have been developed to guide the processes involved, including the waterfall model (which was the original SDLC method); rapid application development (RAD); joint application development (JAD); the fountain model; the spiral model; build and fix; and synchronize-and-stabilize. Frequently, several models are combined into some sort of hybrid methodology. Documentation is crucial regardless of the type of model chosen or devised for any application, and is usually done in parallel with the development process. Some methods work better for specific types of projects, but in the final analysis, the most important factor for the success of a project may be how closely the particular plan was followed.
In general, an SDLC methodology follows these steps:
- The existing system is evaluated. Deficiencies are identified. This can be done by interviewing users of the system and consulting with support personnel.
- The new system requirements are defined. In particular, the deficiencies in the existing system must be addressed with specific proposals for improvement.
- The proposed system is designed. Plans are laid out concerning the physical construction, hardware, operating systems, programming, communications, and security issues.
- The new system is developed. The new components and programs must be obtained and installed. Users of the system must be trained in its use, and all aspects of performance must be tested. If necessary, adjustments must be made at this stage.
- The system is put into use. This can be done in various ways. The new system can be phased in, according to application or location, and the old system gradually replaced. In some cases, it may be more cost-effective to shut down the old system and implement the new system all at once.
- Once the new system is up and running for a while, it should be exhaustively evaluated. Maintenance must be kept up rigorously at all times. Users of the system should be kept up-to-date concerning the latest modifications and procedures.
Smoke Testing
- A smoke test is scripted, using either a written set of tests or an automated test.
- A smoke test is designed to touch every part of the application in a cursory way. It is shallow and wide.
- Smoke testing is conducted to ensure that the most crucial functions of a program are working, without bothering with finer details (such as build verification).
- Smoke testing is a normal health check-up for a build of an application before taking it into in-depth testing.
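As a minimal sketch, a scripted smoke test might look like the following; the application URLs are hypothetical stand-ins for whatever the crucial functions of the real system are.

```python
# smoke_test.py -- minimal scripted smoke test sketch. The URLs are
# hypothetical; a real suite would touch each major area of the app.
import sys
import urllib.request

CHECKS = [
    ("home page",   "http://localhost:8000/"),
    ("login page",  "http://localhost:8000/login"),
    ("health ping", "http://localhost:8000/health"),
]

def main():
    failed = False
    for name, url in CHECKS:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                ok = resp.status == 200
        except OSError:           # covers URLError/HTTPError/timeouts
            ok = False
        print(f"{name}: {'OK' if ok else 'FAIL'}")
        failed = failed or not ok
    # Shallow and wide: any failure rejects the build for deeper testing.
    sys.exit(1 if failed else 0)

if __name__ == "__main__":
    main()
```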
Sanity Testing
- A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
- A sanity test is usually unscripted.
- A Sanity test is used to determine a small section of the application is still working after a minor change.
- Sanity testing is cursory testing, performed whenever a quick check is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.
- Sanity testing verifies that the requirements in the affected area are still met; unlike smoke testing, it does not check all features breadth-first.
Functional Testing
Functional testing is a type of black-box testing that bases its test cases on the specifications of the software component under test. Functions are tested by feeding them input and examining the output; internal program structure is rarely considered (unlike in white-box testing). Functional testing differs from system testing in that functional testing tests "a program by checking it against ... design document(s) or specification(s)", while system testing tests "a program by checking it against the published user or system requirements" (Kaner, Falk, Nguyen 1999, p. 52).
Functional testing typically involves five steps, illustrated in the sketch after this list:
- The identification of functions that the software is expected to perform
- The creation of input data based on the function's specifications
- The determination of output based on the function's specifications
- The execution of the test case
- The comparison of actual and expected outputs
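To make the five steps concrete, here is a minimal sketch; the `fahrenheit_to_celsius` function and its specification are hypothetical examples, not from the cited text.

```python
# Step 1: the function the software is specified to perform.
def fahrenheit_to_celsius(f):
    """Spec: convert Fahrenheit to Celsius, rounded to 2 decimals."""
    return round((f - 32) * 5 / 9, 2)

# Step 2: input data based on the specification.
# Step 3: expected output determined from the specification.
cases = [
    (32.0, 0.0),      # freezing point
    (212.0, 100.0),   # boiling point
    (98.6, 37.0),     # body temperature
]

# Steps 4 and 5: execute each case and compare actual vs expected output.
for given, expected in cases:
    actual = fahrenheit_to_celsius(given)
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"input={given}: expected={expected}, actual={actual} -> {verdict}")
```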
System Testing
Testing the behavior of the whole software/system as defined in the software requirements specification (SRS) is known as system testing; its main focus is to verify that the customer requirements are fulfilled.

System testing is done after integration testing is complete. System testing should cover both the functional and non-functional requirements of the software.

The following types of testing should be considered during the system testing cycle. The test types followed in system testing differ from organization to organization; however, this list covers some of the main testing types which need to be covered in system testing.
Usability Testing
Usability means the software's capability to be learned and understood easily and how attractive it looks to the end user. Usability testing is a black-box testing technique.

Usability testing tests the following features of the software:
1. How easy it is to use the software.
2. How easy it is to learn the software.
3. How convenient the software is for the end user.
Acceptance Testing
Acceptance testing is performed after system testing is done and all or most of the major defects have been fixed. The goal of acceptance testing is to establish confidence that the delivered software/system meets the end users'/customers' requirements and is fit for use. Acceptance testing is done by the user/customer and some of the project stakeholders, in a production-like environment.

For commercial off-the-shelf (COTS) software intended for the mass market, testing needs to be done by the potential users. There are two types of acceptance testing for COTS software.
Alpha Testing
Alpha testing is mostly applicable to software developed for the mass market, i.e., commercial off-the-shelf (COTS) software, where feedback is needed from potential users. Alpha testing is conducted at the developer's site; potential users and members of the developer's organization are invited to use the system and report defects.
Beta Testing
Beta testing is also known as field testing. It is done by potential or existing users/customers at an external site, without the developers' involvement, to determine that the software satisfies the end users'/customers' needs and to acquire feedback from the market.
Unit Testing
A unit is the smallest testable part of a software system. Unit testing is done to verify that the lowest independent entities in any software are working fine. The smallest testable part is isolated from the remainder of the code and tested to determine whether it works correctly.
Why is unit testing important?

Suppose you have two units and, to save time, you do not test them individually but only as an integrated system. Once the system is integrated and you find an error, it becomes difficult to determine which unit the error occurred in, so unit testing is mandatory before integrating the units.

While a developer is coding the software, it may happen that the dependent modules are not yet complete for testing. In such cases developers use stubs and drivers to simulate the called (stub) and calling (driver) units. Unit testing requires stubs and drivers: a stub simulates a called unit and a driver simulates a calling unit.
Let's explain STUBS and DRIVERS in detail.
STUBS:
Assume you have three modules: Module A, Module B, and Module C. Module A is ready and we need to test it, but Module A calls functions from Modules B and C, which are not ready. So the developer writes a dummy module which simulates B and C and returns values to Module A. This dummy module code is known as a stub.
DRIVERS:
Now suppose you have Modules B and C ready, but Module A, which calls functions from Modules B and C, is not ready. So the developer writes a dummy piece of code for Module A which calls Modules B and C with test values. This dummy piece of code is known as a driver.
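A minimal sketch of both ideas, using the hypothetical Modules A, B, and C from above (the price/tax functions are invented for illustration):

```python
# Stub: Module A is ready, but the real Modules B and C are not.
# These dummy functions simulate B and C and return canned values to A.
def get_price_stub(item_id):           # stands in for Module B
    return 9.99                        # hard-coded, predictable value

def get_tax_rate_stub(region):         # stands in for Module C
    return 0.07

def module_a_total(item_id, region):   # Module A, the unit under test
    price = get_price_stub(item_id)
    return round(price * (1 + get_tax_rate_stub(region)), 2)

assert module_a_total("sku-1", "US") == 10.69

# Driver: Modules B and C are ready, but Module A (their caller) is not.
# A dummy caller exercises the real units with test values instead.
def get_price(item_id):                # real Module B
    return {"sku-1": 9.99}[item_id]

def get_tax_rate(region):              # real Module C
    return {"US": 0.07}[region]

def driver_for_b_and_c():              # stands in for Module A
    assert get_price("sku-1") == 9.99
    assert get_tax_rate("US") == 0.07
    print("Modules B and C exercised via driver")

driver_for_b_and_c()
```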
Integration Testing
In integration testing, the individually tested units are grouped together and the interface between them is tested. Integration testing identifies the problems that occur when individual units are combined, i.e., it detects problems in the interface between two units. Integration testing is done after unit testing.

There are mainly three approaches to integration testing.
Top-down Approach
The top-down approach tests the integration from top to bottom, following the architectural structure.

Example: integration can start with the GUI, missing components are substituted by stubs, and integration proceeds from there.
Bottom-up Approach
In the bottom-up approach, testing takes place from the bottom of the control flow upward; the higher-level components are substituted with drivers.
Big Bang Approach
In the big bang approach, most or all of the developed modules are coupled together to form a complete system, which is then used for integration testing.
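As an illustration, the following sketch tests the interface between two hypothetical, already unit-tested units (a parser and a calculator); any defect found here lives in how they fit together, not inside either unit.

```python
# Integration sketch: two individually tested units are combined, and
# the interface between them is tested. Both units are hypothetical.
import unittest

def parse_order(text):                # unit 1: already unit-tested
    qty, price = text.split("x")
    return int(qty), float(price)

def order_total(qty, price):          # unit 2: already unit-tested
    return round(qty * price, 2)

class OrderIntegrationTest(unittest.TestCase):
    def test_parser_output_feeds_calculator(self):
        # The defect hunted here lives in the interface: does the tuple
        # produced by parse_order match what order_total expects?
        qty, price = parse_order("3x9.99")
        self.assertEqual(order_total(qty, price), 29.97)

if __name__ == "__main__":
    unittest.main()
```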
Black Box Testing
Black-box testing tests the functional and non-functional characteristics of the software without referring to the internal code of the software. Black-box testing does not require knowledge of the internal code/structure of the system/software. It uses external descriptions of the software, such as the SRS (Software Requirements Specification) and software design documents, to derive the test cases.
Black-box Test Design Techniques
Typical black-box test design techniques include (the first two are illustrated in the sketch below):
- Equivalence Partitioning
- Boundary Value Analysis
- State Transition Testing
- Use Case Testing
- Decision Table Testing
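To illustrate equivalence partitioning and boundary value analysis, here is a minimal sketch; the age rule (18 to 60 inclusive is valid) is a hypothetical example.

```python
# Sketch of equivalence partitioning and boundary value analysis for a
# hypothetical rule: ages 18-60 (inclusive) are valid for registration.
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence partitioning: one representative per class is enough.
partitions = {"below range": 10, "in range": 35, "above range": 70}
for name, value in partitions.items():
    verdict = "accepted" if is_valid_age(value) else "rejected"
    print(f"{name} ({value}): {verdict}")

# Boundary value analysis: defects cluster at the edges of partitions.
for boundary in [17, 18, 19, 59, 60, 61]:
    verdict = "accepted" if is_valid_age(boundary) else "rejected"
    print(f"age {boundary}: {verdict}")
```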
Stress Testing
Stress testing tests the software with a focus on checking that the software does not crash if the hardware resources (such as memory, CPU, or disk space) are not sufficient. Stress testing puts the hardware resources under extreme levels of stress in order to ensure that the software is stable. In stress testing we load the software with a larger number of concurrent users/processes than the system's hardware resources can handle. Stress testing is a type of performance testing and is non-functional testing.
Examples:
1. A stress test of the CPU can be done by running the software application at 100% load for some days, which will help ensure that the software runs properly under normal usage conditions.
2. Suppose some software has a minimum memory requirement of 512 MB of RAM; the software application is then tested on a machine with 512 MB of memory under extensive load to find out how the system/software behaves.
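A minimal sketch of the CPU example above, using only the Python standard library; the 60-second duration is an arbitrary illustrative choice, and the application under test would run alongside this load.

```python
# Minimal CPU-stress sketch: saturate every core with busy work for a
# fixed interval while the application under test runs alongside it.
import multiprocessing
import time

def burn_cpu(seconds):
    end = time.time() + seconds
    while time.time() < end:
        _ = sum(i * i for i in range(10_000))  # pointless arithmetic = load

if __name__ == "__main__":
    workers = [multiprocessing.Process(target=burn_cpu, args=(60,))
               for _ in range(multiprocessing.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("CPU stress interval finished")
```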
Regression Testing
Regression testing is done to find defects that arise due to code changes made to existing code, such as functional enhancements or configuration changes. The main intent behind regression testing is to ensure that any code changes made for software enhancements or configuration changes have not introduced new defects into the software.

Any time changes are made to existing working code, a suite of test cases is executed to ensure that the new changes have not introduced any bugs into the software. It is necessary to have a regression test suite and to execute that suite after every new version of the software becomes available. The regression test suite is an ideal candidate for automation because it needs to be executed after every new version.
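One common way to automate such a suite is with a golden (baseline) output captured from a known-good version; in this sketch the `report_summary` function and the baseline file name are hypothetical examples.

```python
# Golden-file regression sketch: compare the current version's output
# against a baseline captured from the last known-good version.
import json
from pathlib import Path

def report_summary(sales):
    """Function under regression: summarize a list of sale amounts."""
    return {"count": len(sales), "total": round(sum(sales), 2)}

BASELINE = Path("baseline_summary.json")
current = report_summary([19.99, 5.00, 12.50])

if not BASELINE.exists():
    BASELINE.write_text(json.dumps(current))   # first run: record baseline
    print("Baseline recorded")
elif json.loads(BASELINE.read_text()) == current:
    print("PASS: behaviour unchanged since last good version")
else:
    print("FAIL: regression detected against baseline")
```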
Security Testing
Security testing tests the ability of the system/software to prevent unauthorized access to resources and data. Security testing needs to cover the six basic security concepts: confidentiality, integrity, authentication, authorization, availability, and non-repudiation.
Confidentiality
A security measure which protects against the disclosure of information to parties other than the intended recipient (by no means the only way of ensuring security).
Integrity
A measure intended to allow the receiver to determine that the information the system provides is correct.
Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding additional information to a communication to form the basis of an algorithmic check, rather than encoding all of the communication.
Authentication
The process of establishing the identity of the user.
Authentication can take many forms, including but not limited to passwords, biometrics, radio frequency identification, etc.
Authorization
The process of determining that a requester is allowed to receive a service or perform an operation.
Access control is an example of authorization.
Availability
Assuring information and communications services will be ready for use when expected.
Information must be kept available to authorized persons when they need it.
Non-repudiation
A measure intended to prevent the later denial that an action happened or that a communication took place.
In communication terms this often involves the interchange of authentication information combined with some form of provable timestamp.
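A small sketch of a security test for the authorization concept above; the role table and access-control function are hypothetical illustrations.

```python
# Security-test sketch for authorization: verify that a requester
# without the right role is denied, rather than silently allowed.
import unittest

PERMISSIONS = {"admin": {"read", "write", "delete"}, "viewer": {"read"}}

def is_allowed(role, operation):
    return operation in PERMISSIONS.get(role, set())

class AuthorizationTest(unittest.TestCase):
    def test_viewer_cannot_delete(self):
        self.assertFalse(is_allowed("viewer", "delete"))

    def test_unknown_role_gets_nothing(self):
        self.assertFalse(is_allowed("guest", "read"))

if __name__ == "__main__":
    unittest.main()
```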
Load Testing
Load testing tests the software or component with increasing load: the number of concurrent users or transactions is increased, and the behavior of the system is examined to check what load the software can handle. The main objective of load testing is to determine the response time of the software for critical transactions and to make sure that it is within the specified limit. Load testing is a type of performance testing and is non-functional testing.
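A minimal load-test sketch using only the Python standard library; the URL, the user counts, and the 2-second limit are hypothetical choices.

```python
# Fire increasing numbers of concurrent requests at a critical
# transaction and record response times against a specified limit.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/checkout"   # hypothetical critical transaction
LIMIT_SECONDS = 2.0

def timed_request(_):
    start = time.perf_counter()
    try:
        urllib.request.urlopen(URL, timeout=10).read()
    except OSError:
        return None                       # failed request; count separately
    return time.perf_counter() - start

for users in (1, 10, 50):                 # increasing concurrent load
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = [t for t in pool.map(timed_request, range(users)) if t]
    worst = max(times) if times else float("inf")
    verdict = "within limit" if worst <= LIMIT_SECONDS else "TOO SLOW"
    print(f"{users} users: worst response {worst:.2f}s -> {verdict}")
```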
Ad hoc testing
Ad hoc testing is a commonly used term for software testing performed without planning and documentation (but the term can also be applied to early scientific experimental studies). The tests are intended to be run only once, unless a defect is discovered. Ad hoc testing is the least formal test method. As such, it has been criticized because it is not structured, and hence defects found using this method may be harder to reproduce (since there are no written test cases). However, the strength of ad hoc testing is that important defects can be found quickly.
It is performed by improvisation: the tester seeks to find bugs by any means that seem appropriate. Ad hoc testing can be seen as a light version of error guessing, which itself is a light version of exploratory testing.