
SOFTWARE TESTING
“Testing is the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements”

Testing is a process used to help identify the correctness, completeness and quality of developed computer software.

On the whole, the testing objectives can be summarized as:
·    Testing is a process of executing a program with the intent of finding an error.
·    A good test is one that has a high probability of finding an as-yet-undiscovered error.
·    A successful test is one that uncovers an as-yet-undiscovered error.
Testing is required to ensure that the application meets the objectives related to its functionality, performance, reliability, flexibility, ease of use, and timeliness of delivery.

Developers hide their mistakes.
To reduce the cost of rework by detecting defects at an early stage.
To avoid project overruns by following a defined test methodology.
To ensure the quality and reliability of the software for the users.

Why do we need Testing?

1. Test early and test often.
2. Integrate the application development and testing life cycles. You'll get better results and you won't have to mediate between two armed camps in your IT shop.
3. Formalize a testing methodology; you'll test everything the same way and you'll get uniform results.
4. Develop a comprehensive test plan; it forms the basis for the testing methodology.
5. Use both static and dynamic testing.
6. Define your expected results.
7. Understand the business reason behind the application. You'll write a better application and better testing scripts.
8. Use multiple levels and types of testing (regression, systems, integration, stress and load).
9. Review and inspect the work; it will lower costs.
10. Don't let your programmers check their own work; they'll miss their own errors.
Software Testing: 10 Rules
A good test engineer has a 'test to break' attitude.
An ability to take the point of view of the customer.
A strong desire for quality.
Attention to even minor details.
Tact and diplomacy are useful in maintaining a cooperative relationship with developers.
The ability to communicate with both technical and non-technical people is useful.
Judgment skills are needed to assess the high-risk areas of an application on which to focus testing efforts when time is limited.
A GOOD TEST ENGINEER
 

A project company survives on the number of contacts the company has and the number of projects it gets from other firms, whereas a product company's existence depends entirely on how its product does in the market.
A project company will have the specifications for how the application should be, made by the customer. Since a project company will be doing the same kind of project for other companies, it gets better over time, knows what the issues are, and can handle them.
A product company needs to develop its own specifications and make sure that they are generic. It also has to make sure that the application is compatible with other applications. In a product company, the application created will always be new in some way or other, making it more vulnerable to bugs. When upgrades are made to different functionalities, care has to be taken that they do not cause any other module to stop functioning.
Testing: Product Company vs. Project Company
Automated vs. Manual Testing

Manual Testing                               Automated Testing
Prone to human errors                        More reliable
Time consuming                               Time conserving
Skilled manpower required                    No human intervention required once started
Tests have to be performed individually      Batch testing can be done
WHEN TO STOP TESTING
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
·  Deadlines, e.g. release deadlines, testing deadlines;
·  Test cases completed with a certain percentage passed;
·  Test budget has been depleted;
·  Coverage of code, functionality, or requirements reaches a specified point;
·  Bug rate falls below a certain level; or
·  Beta or alpha testing period ends.
ISO – International Organization for Standardization
SEI CMM – Software Engineering Institute Capability Maturity Model
CMMI – Capability Maturity Model Integration
TMM – Testing Maturity Model (testing department)
PCMM – People Capability Maturity Model (HR department)
SIX SIGMA – Zero-defect-oriented production (out of 1 million products, a tolerance of 3.4 defects). Presently in India, Wipro holds the certification.
SOME BRANDED STANDARDS
The Five Levels of Software Process Maturity

Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.
It is oriented to 'prevention'.
In simple words, it is a review with the goal of improving the process as well as the deliverable.
QA: for the entire life cycle.
QC activities focus on finding defects in specific deliverables - e.g., are the defined requirements the right requirements? Testing is one example of a QC activity.
QC is a corrective process.
QC: for the testing part of the SDLC.
QA & QC
Coherent sets of activities for specifying, designing, implementing and testing software systems

Objectives

To introduce software lifecycle models

To describe a number of different lifecycle models and when they may be used

To describe outline process models for requirements engineering, software development, testing and evolution
SOFTWARE PROCESS
A project using the waterfall model moves down a series of steps starting from an initial idea to a final product. At the end of each step the project team holds a review to determine if they are ready to move to the next step. If the product isn’t ready to progress, it stays at that level until it’s ready.
WATERFALL MODEL
Notice three important things about the waterfall model:
·  There's no way to back up. As soon as you're on a step, you need to complete the tasks for that step and then move on - you can't go back.
·  The steps are discrete; there's no overlap.
·  Development or coding is only a single block.
Disadvantages: rework and changes are more extensive if any error occurs; the time frame is longer; more people are idle during the initial stages; and the inflexible partitioning of the project into distinct stages makes it difficult to respond to changing customer requirements.
DEFINITION - The spiral model, also known as the spiral lifecycle model, is a systems development method (SDM) used in information technology (IT).
This model of development combines the features of the prototyping model and the waterfall model.

The spiral model is favored for large, expensive, and complicated projects
SPIRAL MODEL
ADVANTAGES :
Estimates (i.e. budget, schedule, etc.) get more realistic as work progresses, because important issues are discovered earlier.

It is more able to cope with the (nearly inevitable) changes that software development generally entails.

Software engineers (who can get restless with protracted design processes) can get their hands in and start working on a project earlier.
Each time around the spiral involves six steps:
1. Determine the objectives, alternatives and constraints
2. Identify and resolve risks
3. Evaluate alternatives
4. Develop and test the current level
5. Plan the next level
6. Decide on the approach for the next level
The V shows the typical sequence of development activities on the left-hand (downhill) side and the corresponding sequence of test execution activities on the right-hand (uphill) side.
In fact, the V model emerged in reaction to some waterfall models that showed testing as a single phase following the traditional development phases of requirements analysis, high-level design, detailed design and coding. The waterfall model did considerable damage by supporting the common impression that testing is merely a brief detour after most of the mileage has been gained by mainline development activities. Many managers still believe this, even though testing usually takes up half of the project time.
The V-shaped model describes a process in which analysing, designing, coding and testing are carried out together: once coding finishes, the build goes to the tester to be tested for bugs; when the tester gives the OK, coding can immediately continue; after further coding, the build is sent to the tester again, who checks for bugs and sends it back to the programmer, and the programmer can then finish up by implementing the project.

V-Model
It is the model used by most companies.
In the V model, testing is done in parallel with development; the left side of the V reflects the development input for the corresponding testing activities.
Testing as a parallel activity gives the tester domain knowledge and enables more value-added, high-quality testing with greater efficiency. It also reduces time, since the test plans, test cases and test strategy are prepared during the development stage itself.
Extreme Programming
A new approach to development based on the development and delivery of very small increments of functionality.

Relies on constant code improvement, user involvement in the development team, and pair programming.
Static testing (the review, inspection and validation of development requirements) is the most effective and cost-efficient way of testing. A structured approach to testing should use both dynamic and static testing techniques.

Dynamic Testing
Testing that is commonly assumed to mean executing software and finding errors is dynamic testing.
Two types: structural and functional testing.
STATIC & DYNAMIC TESTING
Unit Testing
Require knowledge of code
High level of detail
Deliver thoroughly tested components to integration
Stopping criteria
Code Coverage
Quality
Strategies
Bottom-up, start from bottom and add one at a time
Top-down, start from top and add one at a time
Big-bang, everything at once
Simulation of other components
Stubs receive output from test objects
Drivers generate input to test objects
Integration Testing
Driver: a calling program. It provides the facility to invoke a sub-module in place of the main module.

Stub: a called program. A temporary program called by the main module in place of a sub-module.

Top-down approach:
    MAIN
      |
    Sub1 (replaced by a stub)
      |
    Sub2

Bottom-up approach:
    Main (replaced by a driver)
      |
    Sub1
      |
    Sub2
In Integration Testing
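As a minimal sketch of these two stand-ins (the Sub1/Sub2 names mirror the diagrams above; the interest computation is invented for illustration), a stub and a driver in C might look like:

    #include <stdio.h>

    /* Module under test: Sub1, normally called by MAIN and normally
       calling the sub-module Sub2 (not yet written). */
    double sub2(int tenor_months);          /* defined below as a stub */

    double sub1(double amount, int tenor_months)
    {
        return amount * sub2(tenor_months) / 100.0;
    }

    /* STUB: a temporary, called program standing in for Sub2 (top-down). */
    double sub2(int tenor_months)
    {
        return tenor_months > 10 ? 12.0 : 8.0;   /* canned answer */
    }

    /* DRIVER: a temporary, calling program standing in for MAIN (bottom-up). */
    int main(void)
    {
        printf("interest = %.2f\n", sub1(1000.0, 12));
        return 0;
    }

In top-down integration the stub fills in for unfinished lower modules; in bottom-up integration the driver fills in for the unfinished caller.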
Functional testing
Tests end-to-end functionality; testing against the complete requirement.
Requirement focus
Test cases derived from specification
Use-case focus
Test selection based on user profile
System Testing
User (or customer) involved
Environment as close to field use as possible
Focus on:
Building confidence
Compliance with defined acceptance criteria in the contract
Acceptance Testing
WHITE BOX TESTING TECHNIQUES
Statement coverage: execute each and every statement of the code.
Decision coverage: execute each decision direction at least once.
Condition coverage: execute each and every condition.
Loop coverage: execute each and every loop.
Definition

This technique is used to ensure that every statement / decision in the program is executed at least once.
Program Sample

    // statement 1
    // statement 2
    if ((A > 1) and (B = 0))
        // sub-statement 1
    else
        // sub-statement 2

Test Conditions
1. (A > 1) and (B = 0)    -> executes sub-statement 1
2. (A <= 1) and (B != 0)  -> executes sub-statement 2
3. (A <= 1) and (B = 0)   -> executes sub-statement 2
4. (A > 1) and (B != 0)   -> executes sub-statement 2

Description
Statement coverage requires only that the if ... else statement be executed once, not that sub-statements 1 and 2 each be executed.
It is the minimum level of structural coverage achieved.
It helps to identify unreachable code and remove it, if required.
The "null else" problem: statement coverage does not ensure that the decision is exercised completely. Example: if x < 5 then x = x + 3; the x >= 5 outcome of the decision is not enforced, so that path is not covered.
Statement coverage
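A small illustration of the "null else" problem described above; the adjust() function and its input values are assumptions made for this sketch:

    #include <stdio.h>

    static int adjust(int x)
    {
        if (x < 5)        /* decision with no else branch */
            x = x + 3;
        return x;
    }

    int main(void)
    {
        /* adjust(4) alone executes every statement (100% statement
           coverage) but never exercises the x >= 5 outcome;
           adjust(7) is needed to cover the decision's false path. */
        printf("%d\n", adjust(4));
        printf("%d\n", adjust(7));
        return 0;
    }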
Definition

A test case design technique in which test cases are designed to execute all the outcomes of every decision
Program Sample

    IF Y > 1 THEN
        Y = Y + 1
        IF Y > 9 THEN
            Y = Y + 1
        ELSE
            Y = Y + 3
        END
        Y = Y + 2
    ELSE
        Y = Y + 4
    END
Decision Coverage
No. Of Paths = 3
Test Cases:
1 (Y > 1) and (Y > 9)
2 (Y > 1) and (Y <= 9)
3 (Y < = 1)
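A C rendering of the sample above, with one illustrative input per path (the concrete values 10, 5 and 0 are assumptions chosen to satisfy the three test cases):

    #include <stdio.h>

    static int sample(int y)
    {
        if (y > 1) {
            y = y + 1;
            if (y > 9)       /* tested after y has been incremented */
                y = y + 1;
            else
                y = y + 3;
            y = y + 2;
        } else {
            y = y + 4;
        }
        return y;
    }

    int main(void)
    {
        printf("%d\n", sample(10));  /* case 1: (Y > 1) and (Y > 9)  */
        printf("%d\n", sample(5));   /* case 2: (Y > 1) and (Y <= 9) */
        printf("%d\n", sample(0));   /* case 3: (Y <= 1)             */
        return 0;
    }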
Definition
Both parts of the predicate are tested.
The program sample shows that all 4 test conditions are tested.
Conditions table: 2^n combinations for n conditions.
Condition Coverage - AND
Program Sample

    if ((A > 1) AND (B = 0))
    {
        // sub-statement 1
    }
    else
    {
        // sub-statement 2
    }

Test Conditions

1. (A > 1) AND (B = 0)
2. (A > 1) AND (B != 0)
3. (A <= 1) AND (B != 0)
4. (A <= 1) AND (B = 0)

Definition
Both parts of the predicate are tested.
The program sample shows that all 4 test conditions are tested.
Conditions table: 2^n combinations for n conditions.
Condition Coverage - OR
Program Sample

    if ((A > 1) OR (B = 0))
    {
        // sub-statement 1
    }
    else
    {
        // sub-statement 2
    }

Test Conditions

1. (A > 1) OR (B = 0)
2. (A <= 1) OR (B != 0)
3. (A <= 1) OR (B = 0)
4. (A > 1) OR (B != 0)

Loop Coverage
Simple
Nested Loops
Serial / Concatenated Loops
Unstructured Loops (Goto)
Coverage
Boundary value tests
Cyclomatic Complexity
Example of CC

    for (I = 1; I < 5; I++)
        printf("Simple Loop");

E = 5, N = 5
CC = E - N + 2
CC = 2
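As a hedged sketch of counting CC on a slightly larger routine: each binary decision adds one to the complexity, so for a single routine CC also equals the number of decision points plus one, consistent with CC = E - N + 2. The classify() function below is invented for illustration:

    #include <stdio.h>

    static int classify(int age)
    {
        if (age < 16)        /* decision 1 */
            return -1;
        while (age > 80)     /* decision 2 */
            age = age - 1;
        return age;          /* CC = 2 decisions + 1 = 3 */
    }

    int main(void)
    {
        printf("%d %d %d\n", classify(10), classify(50), classify(90));
        return 0;
    }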
Types of Testing
The different types of testing that can be implemented are listed below, followed by explanations of each:
Black Box Testing
White Box Testing
Unit Testing
Incremental Integration Testing
Integration Testing
Functional Testing
System Testing
End-to-End Testing
Sanity Testing
Regression Testing
Acceptance Testing
Load Testing
Stress Testing
Performance Testing
Usability Testing
Install / Uninstall Testing
Recovery Testing
Security Testing
Compatibility Testing
Exploratory Testing
Ad-hoc Testing
Comparison Testing
Alpha Testing
Beta Testing
Mutation Testing
Conformance Testing
Quality Assurance Testing
Black Box Testing
It can also be termed functional testing.
Tests that examine the observable behavior of software, as evidenced by its outputs, without reference to internal functions are black box testing.
It is not based on any knowledge of internal design or code; tests are based on requirements and functionality.
In object-oriented programming environments, as automatic code generation and code re-use become more prevalent, analysis of the source code itself becomes less important and functional tests become more important.
Tests whether a component conforms to its specification.
White Box Testing
It can also be termed structural testing.
Tests that verify the structure of the software and require complete access to the object's source code are white box testing.
It is known as white box because all the internal workings of the code can be seen.
White-box tests make sure that the software structure itself contributes to proper and efficient program execution.
It is based on knowledge of the internal logic of an application's code; tests are based on coverage of code statements, branches, paths and conditions.
In this type of testing the code needs to be examined by highly skilled technicians.
Tests whether a component conforms to its design.
Unit testing
This is 'micro'-scale testing that tests particular functions or code modules.
It is always a combination of structural and functional tests, and is typically done by programmers, not by testers.
It requires detailed knowledge of the internal program design and code, and may require test driver modules or test harnesses.
Unit tests are not always done easily unless the application has a well-designed architecture with tight code.
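A minimal sketch of a unit test: a single function exercised by a hand-written test driver, using C's assert. The is_leap_year() helper is illustrative, not taken from the slides:

    #include <assert.h>
    #include <stdio.h>

    /* Unit under test. */
    static int is_leap_year(int year)
    {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    int main(void)
    {
        /* 'Micro'-scale tests of one function, typically written by the
           programmer; main() acts as the test driver/harness. */
        assert(is_leap_year(2000));    /* divisible by 400           */
        assert(!is_leap_year(1900));   /* divisible by 100, not 400  */
        assert(is_leap_year(1996));    /* divisible by 4             */
        assert(!is_leap_year(1999));   /* not divisible by 4         */
        puts("unit tests passed");
        return 0;
    }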
Incremental Integration Testing
This is continuous testing of an application as new functionality is added
These tests require that the various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed
Can be tested by programmers or testers
Integration Testing
This is testing of combined parts of an application to ensure that they function together correctly
The parts can be code modules, individual applications, client and server applications on a network, etc.
More relevant to client/server and distributed systems
Functional Testing
It is black-box testing geared to functional requirements and should be done by testers
Testing done to ensure that the product functions the way it is designed to according to the design specifications and documentation
This testing can involve testing of product’s user interface, database management, security, installation, networking, etc.
System Testing
This is like black-box testing that is based on over-all requirements specifications
This testing begins once the modules are integrated enough to perform tests in a whole system environment
This testing can be done parallel with integration testing
This testing covers all combined parts of a system
End-to-End testing
This is the ‘macro’ end of the test scale and similar to system testing
This would involve testing of a complete application environment as in a real world use, such as interaction with the database, using network communications, or interacting with other hardware, applications, or systems
Sanity Testing
Initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort.
For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
Regression Testing
This is re-testing of the product/software to ensure that all reported bugs have been fixed and implementation of changes has not affected other functions
It is always difficult to determine the amount of re-testing required, especially when the software is at the end of the development cycle.
These tests apply to all phases wherever changes are being made
This testing also ensures reported product defects have been corrected for each new release and that no new quality problems were introduced in the maintenance process
Acceptance Testing
This can be described as the final testing, based on the specifications of the end-user or customer.
It can also be based on use by end-users/customers over some limited period of time.
This testing is widely used in the Web environment, where "virtual clients" perform typical tasks such as browsing, purchasing items and searching databases contained within your web site.
"Probing clients" record the exact server response times; this is where such testing is efficiently used.
Load Testing
Testing an application under heavy loads
For example, testing of a Web site under a range of loads to determine at what point the system’s response time degrades or fails
Performance under load can be accurately predicted through this testing.
Stress Testing
This term is more often used interchangeably with ‘load’ and ‘performance’ testing.
It is system functional testing performed under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, or large complex queries to a database system.
Always aimed at finding the limits at which the system will fail through abnormal quantity or frequency of inputs.
Examples could be:-
higher rates of inputs
data rates an order of magnitude above ‘normal’
test cases that require maximum memory or other resources
test cases that cause ‘thrashing’ in a virtual operating system
test cases that cause excessive ‘hunting’ for data on disk systems
This testing can also attempt to determine whether combinations of otherwise normal inputs can cause improper processing.
Performance Testing
This term is more often used interchangeably with ‘stress’ and ‘load’ testing
This testing can be used to understand an application's scalability, to benchmark performance in a given environment, or to identify the bottlenecks in high-hit-rate Web sites.
This testing checks the run-time performance in the context of the integrated system
This may require special software instrumentation
Ideally, these types of testing are defined in requirements documentation or QA or Test Plans.
Usability Testing
This testing is testing for ‘user-friendliness’
The target will always be the end-user or customer
Techniques such as interviews, surveys, video recording of user sessions can be used in this type of testing
Programmers and Testers are not appropriate as usability testers
Install / Uninstall testing
Testing of full, partial, or upgrade install/uninstall processes.
Recovery testing
Testing that is performed to know how well a system recovers from crashes, hardware failures or other catastrophic problems
This is the forced failure of the software in a variety of ways to verify for the recovery
Systems need to be fault tolerant - at the same time processing faults should not cause overall system failure
Security Testing
This testing is performed to know how well the system protects against unauthorized internal or external access, willful damage, etc; this can include :
attempted penetration of the system by ‘outside’ individuals for fun or personal gain
disgruntled or dishonest employees
During this testing the tester plays the role of the individual trying to penetrate into the system.
Large range of methods include:
attempt to acquire passwords through external clerical means
use custom software to attack the system
overwhelm the system with requests
cause system errors and attempt to penetrate the system during recovery
browse through insecure data
Compatibility Testing
Testing whether the software is compatible in particular hardware / software / operating system / network / etc. environment
Exploratory testing
Tests based on creativity
Informal software tests that are not based on any formal test plans or test cases.
Through this type of testing, testers learn the software as they test it.
Ad-hoc Testing
Similar to Exploratory testing
The only difference is that these tests are taken to mean that the testers have an adequate understanding of the software before testing it.
Comparison Testing
This testing compares the software's weaknesses and strengths to competing products.
For some applications where reliability is critical, redundant hardware and software may be used, and independent versions can be built.
Testing is conducted on each version with the same test data to ensure that all provide identical output.
All the versions are run with a real-time comparison of results.
When the outputs of the versions differ, investigations are made to determine whether there is a defect. This is comparison testing.
Alpha Testing
This is testing of an application when development is nearing completion; it is mostly conducted at the developer's site by a customer.
The customer uses the software with the developer ‘looking over the shoulder’ and recording errors and usage problems
Testing is conducted in a controlled environment
Minor design changes can be still made as a result of this testing
Typically conducted by end-users or customers and not by programmers or testers
Beta Testing
Testing conducted when development and testing are completed and bugs and problems need to be found before final release
It is ‘live’ testing in an environment not controlled by the developer.
The customer records the errors and problems and reports difficulties at regular intervals.
Testing is conducted at one or more customer sites
Typically conducted by end-users or customers and not by programmers or testers
Mutation Testing
A method of determining whether a set of test data or test cases is useful.
Various code changes ('bugs') are deliberately introduced, and the original test data/cases are re-run to determine whether the bugs are detected.
Proper implementation requires large computational resources.
A mutated program differs from the original; the mutants are tested until a test's results differ from those obtained from the original program, at which point the mutant is said to be killed.
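A toy illustration of the idea: the max() function and the flipped-operator mutant are invented; the same test data is run against both versions, and the mutant is killed when the results differ:

    #include <stdio.h>

    /* Original program. */
    static int max_orig(int a, int b) { return a > b ? a : b; }

    /* Mutant: the relational operator deliberately flipped ('>' to '<'). */
    static int max_mut(int a, int b)  { return a < b ? a : b; }

    int main(void)
    {
        int a = 2, b = 1;
        /* The mutant is "killed" when its result differs from the original's. */
        if (max_orig(a, b) != max_mut(a, b))
            printf("mutant killed by (%d, %d)\n", a, b);
        else
            printf("mutant survived (%d, %d): the test data is too weak\n", a, b);
        return 0;
    }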
Conformance Testing
Testing conducted to verify that the implementation conforms to industry standards.
Producing tests for the behavior of an implementation to be sure that it provides the portability, interoperability, and/or compatibility that a standard defines.
Economics of Continuous Testing

                Traditional Testing              Continuous Testing
                Accumulated    Errors            Errors        Accumulated
                Test Cost      Remaining         Remaining     Cost
                $0             20                10            $10
                $0             40                15            $25
                $0             60                18            $42
                $480           12                4             $182
                $1,690         0                 0             $582
Relative cost of fixing an error by phase: Requirement $1, Design $1, Code $1, Test $10, Production $100.

Error:

"An undesirable deviation from requirements."
Any problem, or cause of many problems, that stops the system from performing its functionality is referred to as an error.


Bug:
Any missing functionality, or any action performed by the system that it is not supposed to perform, is a bug.
"An error found BEFORE the application goes into production."
Any of the following may be the reason for the birth of a bug:
1. Wrong functionality
2. Missing functionality
3. Extra or unwanted functionality


Defect:

A defect is a variance from a desired attribute of a system or application.
"An error found AFTER the application goes into production."
Defects are commonly categorized into two types:
1. Variance from the product specification
2. Variance from customer/user expectation

Failure:
The absence of an expected response to a request; when an expected action does not occur, it is referred to as a failure.

Fault:
This term generally comes from hardware terminology: a problem which causes the system not to perform its task or objective.

STLC (Testing Life cycle)
Test Plan
Test Design
Test Execution
Test Log
Defect Tracking
Report Generation.

A set of test data and test programs (test scripts) and their expected results. A test case validates one or more system requirements and generates a pass or fail result.
Test Case
Test Scenario
A set of test cases that ensure that the business process flows are tested from end to end. They may be independent tests or a series of tests that follow each other, each dependent on the output of the previous one
Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not include combinations of inputs, but rather a single value per class. For example, for a given function there may be several classes of input that may be used for positive testing. If a function expects an integer and receives an integer as input, this is considered a positive test assertion. On the other hand, if a character or any input class other than integer is provided, this is considered a negative test assertion or condition.
E.g.: Verify a credit limit within a given range (1,000 - 2,000). Here we can identify 3 conditions (see the sketch below):
< 1,000
Between 1,000 and 2,000
> 2,000
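A minimal sketch of one test per equivalence class for the credit-limit example; whether the bounds 1,000 and 2,000 are themselves valid is an assumption, since the slide does not say:

    #include <assert.h>
    #include <stdio.h>

    /* Illustrative validator for the credit-limit rule above
       (assumes the bounds are inclusive). */
    static int credit_limit_valid(int amount)
    {
        return amount >= 1000 && amount <= 2000;
    }

    int main(void)
    {
        /* One representative value per equivalence class. */
        assert(!credit_limit_valid(500));    /* class 1: < 1,000             */
        assert(credit_limit_valid(1500));    /* class 2: 1,000 to 2,000      */
        assert(!credit_limit_valid(2500));   /* class 3: > 2,000             */
        puts("equivalence class tests passed");
        return 0;
    }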
Error Guessing
E.g.: Date Input – February 30, 2000
Decimal Digit – 1.99.

Boundary Value Analysis
BVA is different from equivalence partitioning in that it focuses on "corner cases", values at or just outside the range defined by the specification. This means that if a function expects all values in the range of -100 to +1000, test inputs would include -101 and +1001. BVA is also often used as a technique for stress, load or volume testing. This type of validation is usually performed after positive functional validation has completed (successfully) using requirements specifications and user documentation.


BVA (Boundary Value Analysis): here we can define tests for the size and range of a field.
Example: Age field, Size: three, Range: 16 to 80.

    Min (16)       Pass
    Min - 1 (15)   Fail
    Min + 1 (17)   Pass
    Max (80)       Pass
    Max - 1 (79)   Pass
    Max + 1 (81)   Fail
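The same Age example expressed as a hedged test sketch, with one assert per boundary value:

    #include <assert.h>
    #include <stdio.h>

    /* Illustrative check for the Age field above (valid range 16 to 80). */
    static int age_valid(int age)
    {
        return age >= 16 && age <= 80;
    }

    int main(void)
    {
        assert(age_valid(16));     /* Min     -> pass */
        assert(!age_valid(15));    /* Min - 1 -> fail */
        assert(age_valid(17));     /* Min + 1 -> pass */
        assert(age_valid(80));     /* Max     -> pass */
        assert(age_valid(79));     /* Max - 1 -> pass */
        assert(!age_valid(81));    /* Max + 1 -> fail */
        puts("boundary value tests passed");
        return 0;
    }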
Test Scenarios - Sample
FS Reference: 3.2.1 Deposit
An order capture for a deposit contains fields like Client Name, Amount, Tenor and Interest for the deposit.
Business Rule:
If the tenor is greater than 10 months, the interest rate should be greater than 10%; otherwise a warning should be given by the application.
If the tenor is greater than 12 months, the order should not proceed.
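A minimal sketch of how this business rule might be encoded for testing; the function name, enum and parameter types are assumptions, not taken from the functional specification:

    #include <stdio.h>

    enum order_result { ORDER_OK, ORDER_WARNING, ORDER_REJECTED };

    static enum order_result check_deposit(int tenor_months, double interest_pct)
    {
        if (tenor_months > 12)
            return ORDER_REJECTED;     /* order should not proceed   */
        if (tenor_months > 10 && interest_pct <= 10.0)
            return ORDER_WARNING;      /* application warns the user */
        return ORDER_OK;
    }

    int main(void)
    {
        printf("%d\n", check_deposit(11, 9.5));   /* warning expected */
        printf("%d\n", check_deposit(11, 11.0));  /* ok               */
        printf("%d\n", check_deposit(13, 11.0));  /* rejected         */
        return 0;
    }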
Test Cases
Test cases will be defined, and will form the basis for mapping the test cases to the actual transaction types used for integrated testing.
Test cases give values/qualifiers to the attributes that the test condition can have.
A test case is the end state of a test condition, i.e., it cannot be decomposed or broken down further.
Test cases contain the navigation steps, instructions, data and expected results required to execute the test case(s).
They cover transfer of control between components.
They cover transfer of data between components (in both directions).
They cover consistency of use of data across components.
Test Data
Test data can relate to both the inputs and the maintenance required to execute the application. The data for executing the test scenarios should be clearly defined.
The test team can prepare this with the support of the database team and domain experts, or revamp the existing production data.
Example:
Business rule: if the interest to be paid is more than 8% and the tenor of the deposit exceeds one month, then the system should give a warning.

To populate the Interest to be Paid field of a deposit, we can enter 9.5478 and make the tenor two months for a particular deposit.

This will trigger the warning in the application.
Test Conditions
A test condition is any of the possible combinations and validations that can be attributed to a requirement in the specification. Determining the conditions is important for:
Deciding on the architecture of the testing approach
Evolving the design of the test scenarios
Ensuring test coverage
The possible condition types that can be built are:
Positive condition: the polarity of the value given for the test is to comply with the condition's existence.
Negative condition: the polarity of the value given for the test is to not comply with the condition's existence.
Boundary condition: the polarity of the value given for the test is to assess the extreme values of the condition.
User-perspective condition: the polarity of the value given for the test is to analyse the practical usage of the condition.
A defect is an improper program condition that is generally the result of an error. Not all errors produce program defects, as with incorrect comments or some documentation errors. Conversely, a defect could result from such non-programmer causes as improper program packaging or handling.
Software Defects
Defect Categories
Wrong: the specifications have been implemented incorrectly.
Missing: a specified requirement is not in the built product.
Extra: a requirement incorporated into the product that was not specified.
Step 1: Identify the module to which the Use Case belongs.
Step 2: Identify the functionality of the Use Case with respect to the overall functionality of the system.
Step 3: Identify the Actors involved in the Use Case.
Step 4: Identify the pre-conditions.
Step 5: Understand the Business Flow of the Use Case.
Step 6: Understand the Alternate Business Flow of the Use Case.
Step 7: Identify any post-conditions and special requirements.
Step 8: Identify the Test Conditions from the Use Case / Business Rules and make a Test Condition Matrix document, module-wise, for each and every Use Case.
Step 9: Identify the main functionality of the module and document a complete Test Scenario document for the Business Flow (include any actions made in the alternate business flow, if applicable).
Step 10: For every test scenario, formulate the test steps based on the navigational flow of the application, with the test condition matrix, in a specific test case template.
Designing Test Cases from Use cases
Role of Documentation in Testing
Testing practices should be documented so that they are repeatable
Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented
Change management for documentation should be used if possible
Ideally a system should be developed for easily finding and obtaining documents and determining what documentation will have a particular piece of information
Consider the situation where a bug report is marked invalid. The question is: what comments did the developer leave to indicate that it is invalid? If there are none, you need to discuss this with the developer.

The reasons they may have are many:
1) You didn't understand the system under test correctly, because
   1a) the requirements have changed, or
   1b) you didn't get the whole picture
2) You were testing against the wrong version of the software or configuration, or with the wrong OS or browser
3) You made an assumption that was incorrect
4) Your bug was not repeatable (in which case they may mark it as "works for me"), or it was repeatable only because memory was already corrupted after the first instance and you can't reproduce it on a clean machine (again, likely a "works for me" bug)

Just remember that a bug report isn't you writing a law that the developers must conform to; it's a form of communication. If you didn't communicate the bug correctly, the bug report being in this state is just as much your fault as the developer's. Also, since it's a communication, use it to communicate, not to accuse or indict.

Traceability Matrix
The Traceability Matrix ensures that each requirement has been traced to a specification in the Use Cases and Functional Specifications, to a test condition/case in the test scenarios, and to the defects raised during test execution, thereby achieving one-to-one test coverage.
The entire process of traceability is time consuming. To simplify it, a tool such as Rational RequisitePro or Test Director can maintain the specifications of the documents, which are then mapped correspondingly. The specifications have to be loaded into the tool by the user.
Even though it is a time-consuming process, it helps in finding the 'ripple' effect of altering a specification. The impacts on test conditions can immediately be identified using the trace matrix.
A traceability matrix should be prepared from requirements to test cases.

Simplifying the above: A = Business Requirement, B = Functional Specification, C = Test Conditions. I.e., A maps to B and B maps to C; therefore A maps to C.
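As a small sketch of what one row of such a matrix holds (all IDs are invented for illustration):

    #include <stdio.h>

    /* One traceability-matrix row: requirement (A) -> functional
       specification (B) -> test condition/case (C) -> defect. */
    struct trace_row {
        const char *requirement;   /* A: business requirement     */
        const char *spec;          /* B: functional specification */
        const char *test_case;     /* C: test condition / case    */
        const char *defect;        /* defect raised, if any       */
    };

    int main(void)
    {
        struct trace_row matrix[] = {
            { "BR-01", "FS-3.2.1", "TC-017", "DEF-102" },
            { "BR-02", "FS-3.2.2", "TC-018", NULL      },
        };
        /* Altering FS-3.2.1 immediately flags TC-017: the 'ripple' effect. */
        for (unsigned i = 0; i < sizeof matrix / sizeof matrix[0]; i++)
            printf("%s -> %s -> %s (%s)\n",
                   matrix[i].requirement, matrix[i].spec, matrix[i].test_case,
                   matrix[i].defect ? matrix[i].defect : "no defect");
        return 0;
    }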
What is Test Management?
Test management is a method of organizing application test assets and artifacts — such as
Test requirements
Test plans
Test documentation
Test scripts
Test results
to enable easy accessibility and reusability. Its aim is to deliver quality applications in less time.
Test management is firmly rooted in the concepts of better organization, collaboration and information sharing.
Test Strategy
Scope of Testing
Types of Testing
Levels of Testing
Test Methodology
Test Environment
Test Tools
Entry and Exit Criteria
Test Execution
Roles and Responsibilities
Risks and Contingencies
Defect Management
Test Deliverables
Test Milestones
Test Requirements
The Test Team gathers the test requirements from the following baselined documents.
Customer Requirements Specification(CRS)
Functional Specification (FS) – Use Case, Business Rule, System Context
Non – Functional Requirements (NFR)
High Level Design Document (HLD)
Low Level Design Document (LLD)
System Architecture Document
Prototype of the application
Database Mapping Document
Interface Related Document
Other Project related documents such as e-mails, minutes of meeting.
Knowledge Transfer Sessions from the Development Team
Brainstorming sessions between the Test Team
Configuration Management
Software configuration management is an umbrella activity that is applied throughout the software process. SCM identifies, controls, audits and reports modifications that invariably occur while software is being developed and after it has been released to a customer. All information produced as part of software engineering becomes part of a software configuration. The configuration is organized in a manner that enables orderly control of change.

The following is a sample list of Software Configuration Items:
Management plans (Project Plan, Test Plan, etc.)
Specifications (Requirements, Design, Test Case, etc.)
Customer Documentation (Implementation Manuals, User Manuals, Operations Manuals, On-line help Files)
Source Code (PL/1, Fortran, COBOL, Visual Basic, Visual C, etc.)
Executable Code (machine-readable object code, exe's, etc.)
Libraries (runtime libraries, procedures, %include files, APIs, DLLs, etc.)
Databases (Data being Processed, Data a program requires, test data, Regression test data, etc.)
Production Documentation
Automated Testing Tools
WinRunner, LoadRunner, TestDirector from Mercury Interactive
QARun, QALoad from Compuware
Rational Robot, SiteLoad and SQA Manager from Rational
SilkTest, SilkPerformer from Segue
e-Tester, e-Load and e-Monitor from RSW Software
Test attributes
To different degrees, good tests have these attributes:
• Power. When a problem exists, the test will reveal it.
• Valid. When the test reveals a problem, it is a genuine problem.
• Value. It reveals things your clients want to know about the product or project.
• Credible. Your client will believe that people will do the things that are done in this test.
• Representative of events most likely to be encountered by the user (xref. Musa's Software Reliability Engineering).
• Non-redundant. This test represents a larger group that addresses the same risk.
• Motivating. Your client will want to fix the problem exposed by this test.
• Performable. It can be performed as designed.
• Maintainable. Easy to revise in the face of product changes.
• Repeatable. It is easy and inexpensive to reuse the test.
• Pop (short for Karl Popper). It reveals things about our basic or critical assumptions.
• Coverage. It exercises the product in a way that isn't already taken care of by other tests.
• Easy to evaluate.
• Supports troubleshooting. Provides useful information for the debugging programmer.
• Appropriately complex. As the program gets more stable, you can hit it with more complex tests and more closely simulate use by experienced users.
• Accountable. You can explain, justify, and prove you ran it.
• Cost. This includes time and effort, as well as direct costs.
• Opportunity cost. Developing and performing this test prevents you from doing other work.
Test Project Manager
Customer Interface
Master Test Plan
Test Strategy
Project Technical Contact
Interaction with Development Team
Review Test Artifacts
Defect Management
Test Lead
Module Technical Contact
Test Plan Development
Interaction with Module Team
Review Test Artifacts
Defect Management
Test Execution Summary
Defect Metrics Reporting
Test Engineers
Prepare Test Scenarios
Develop Test Conditions/Cases
Prepare Test Scripts
Test Coverage Matrix
Execute Tests as Scheduled
Defect Log
Test Tool Specialist
Prepare Automation Strategy
Capture and Playback Scripts
Run Test Scripts
Defect Log
Roles & Responsibilities
Support Group for Testing
Domain Expert, Development Team, Software Quality Assurance Team, Software Configuration, Support Group – Technology, Architecture and Design Team