Software Design and Architecture
What Is Design?
• Requirements specification was about WHAT the system will do
• Design is about HOW the system will perform its functions
– provides the overall decomposition of the system
– allows the work to be split among a team of developers
– also lays the groundwork for achieving nonfunctional requirements (performance, maintainability, reusability, etc.)
– takes the target technology into account (e.g., kind of middleware, database design, etc.)
Characteristics of a Good Software Design:
Correctness: A good design should correctly implement all the functionalities identified in the SRS document.
Understandability: A good design should be easily understandable.
Efficiency: It should make efficient use of processing time and memory.
Maintainability: It should be easily amenable to change.
1.2. Current Design Approaches
Most researchers and engineers agree that a good software design implies a clean decomposition of the problem into modules, and a neat arrangement of these modules in a hierarchy. The primary characteristics of a neat module decomposition are high cohesion and low coupling.
1.2.1. Cohesion
Cohesion is a measure of the functional strength of a module. A module having high cohesion and low coupling is said to be functionally independent of other modules. By the term functional independence, we mean that a cohesive module performs a single task or function. The different classes of cohesion that a module may possess are described below.
Coincidental cohesion: A module is said
to have coincidental cohesion, if it performs a set of tasks that relate to
each other very loosely, if at all. In this case, the module contains a random
collection of functions. It is likely that the functions have been put in the
module out of pure coincidence without any thought or design.
Logical cohesion: A module is said
to be logically cohesive, if all elements of the module perform similar
operations, e.g. error handling, data input, data output, etc. An example of
logical cohesion is the case where a set of print functions generating
different output reports are arranged into a single module.
Temporal cohesion: When a module contains functions that are related by the fact that all the functions must be executed in the same time span, the module is said to exhibit temporal cohesion. The set of functions responsible for initialization, start-up, shutdown of some process, etc. exhibit temporal cohesion.
Procedural cohesion: A module is said to possess procedural cohesion, if the set of functions of the module are all part of a procedure (algorithm) in which a certain sequence of steps has to be carried out to achieve an objective, e.g. the algorithm for decoding a message.
Communicational
cohesion: A module is said to have communicational cohesion, if all
functions of the module refer to or update the same data structure, e.g. the
set of functions defined on an array or a stack.
Sequential cohesion: A module is said to possess sequential cohesion, if the elements of the module form the parts of a sequence, where the output from one element of the sequence is input to the next.
Functional cohesion: Functional cohesion is said to exist, if different elements of a module cooperate to achieve a single function. For example, a module containing all the functions required to manage employees’ pay-roll displays functional cohesion. If a module displays functional cohesion and we are asked to describe what the module does, we would be able to describe it using a single sentence.
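To make the contrast concrete, here is a minimal sketch (module and function names are invented for illustration): a functionally cohesive payroll module versus a coincidentally cohesive grab-bag module.

# payroll.py -- functional cohesion: every function cooperates on the
# single task of computing employee pay, and the module can be
# described in one sentence ("it manages payroll").

def compute_gross_pay(hours_worked: float, hourly_rate: float) -> float:
    return hours_worked * hourly_rate

def compute_tax(gross_pay: float, tax_rate: float = 0.2) -> float:
    return gross_pay * tax_rate

def compute_net_pay(hours_worked: float, hourly_rate: float) -> float:
    gross = compute_gross_pay(hours_worked, hourly_rate)
    return gross - compute_tax(gross)

# misc_utils.py -- coincidental cohesion: functions grouped with no
# common purpose; describing this module takes a list, not a sentence.
#
#     def parse_date(text): ...
#     def draw_chart(data): ...
#     def send_welcome_email(address): ...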
1.2.2. Coupling
Coupling between two modules is a measure of the degree of interdependence or interaction between the two modules. A module having high cohesion and low coupling is said to be functionally independent of other modules. If two modules interchange large amounts of data, then they are highly interdependent. The degree of coupling between two modules depends on their interface complexity. The interface complexity is basically determined by the number and types of parameters that are interchanged while invoking the functions of the module. Even if no techniques to precisely and quantitatively estimate the coupling between two modules exist today, classification of the different types of coupling will at least help us to approximately estimate the degree of coupling between two modules. Five types of coupling can occur between any two modules, as shown in fig. 36.2.
Data coupling: Two modules are data coupled, if they communicate using an elementary data item that is passed as a parameter between the two, e.g. an integer, a float, a character, etc.
Stamp coupling: Two modules are stamp coupled, if they communicate using a composite data item such as a record in PASCAL or a structure in C.
Control coupling: Control coupling exists between two modules, if data from one module is used to direct the order of instruction execution in another. An example of control coupling is a flag set in one module and tested in another module.
Common coupling: Two modules are common coupled, if they share some global data items.
Content coupling: Content coupling exists between two modules, if their code is shared, e.g. a branch from one module into another module.
Fig. 36.2: Classification of coupling (from low to high: data, stamp, control, common, content)
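The classification is easier to see in code. Below is a hedged sketch (all names invented), running from the loosest to the tightest of the couplings named above.

# Data coupling (loosest): modules communicate through elementary items.
def net_price(list_price: float, discount: float) -> float:
    return list_price * (1.0 - discount)

# Stamp coupling: a composite data item (here a dict standing in for a
# C structure or PASCAL record) is passed between modules.
def print_invoice(order: dict) -> None:
    print(order["customer"], net_price(order["price"], order["discount"]))

# Control coupling: a flag set in one module directs the order of
# execution in another.
def report(data: list, sorted_output: bool) -> list:
    return sorted(data) if sorted_output else list(data)

# Common coupling (tight): both functions share a global data item.
CURRENT_USER = None

def login(name: str) -> None:
    global CURRENT_USER
    CURRENT_USER = name

def audit(action: str) -> str:
    return f"{CURRENT_USER} performed {action}"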
Levels of Design
• Architectural design (also: high-level design)
– architecture - the overall structure: main modules and their connections
– design that covers the main use-cases of the system
– addresses the main non-functional requirements (e.g., throughput, reliability)
– hard to change
• Detailed design (also: low-level design)
– the inner structure of the main modules
– may take the target programming language into account
– detailed enough to be implemented in the programming language
List of Design Goals
• Reliability
• Modifiability
• Maintainability
• Understandability
• Adaptability
• Reusability
• Efficiency
• Portability
• Traceability of requirements
• Fault tolerance
• Backward-compatibility
• Cost-effectiveness
• Robustness
• High-performance
• Good documentation
• Well-defined interfaces
• User-friendliness
• Reuse of components
• Rapid development
• Minimum # of errors
• Readability
• Ease of learning
• Ease of remembering
• Ease of use
• Increased productivity
• Low-cost
• Flexibility
WHAT IS SOFTWARE TESTING?
Software testing is a process of verifying and validating that a software application or program
1. Meets the business and technical requirements that guided its design and development, and
2. Works as expected.
Software testing has three main purposes: verification, validation, and defect finding.
♦ The verification process confirms that the software meets its technical specifications. A “specification” is a description of a function in terms of a measurable output value given a specific input value under specific preconditions. A simple specification may be along the lines of “a SQL query retrieving data for a single account against the multi-month account-summary table must return these eight fields <list> ordered by month within 3 seconds of submission.”
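As an illustration only, such a specification could be verified by an automated test along the following lines; the reporting module, the run_account_summary_query helper, and its signature are all assumptions, and the eight fields stay elided exactly as in the <list> placeholder above.

import time
import unittest

from reporting import run_account_summary_query  # hypothetical query helper

EXPECTED_FIELDS: list = []  # fill in the eight fields elided as <list> above

class TestAccountSummarySpec(unittest.TestCase):
    def test_fields_order_and_response_time(self):
        start = time.monotonic()
        rows = run_account_summary_query(account_id="A-1")  # assumed signature
        elapsed = time.monotonic() - start

        self.assertLessEqual(elapsed, 3.0)  # "within 3 seconds of submission"
        for row in rows:
            self.assertEqual(list(row.keys()), EXPECTED_FIELDS)
        months = [row["month"] for row in rows]
        self.assertEqual(months, sorted(months))  # "ordered by month"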
♦ The validation process confirms that the software meets the business requirements. A simple example of a business requirement is “After choosing a branch office name, information about the branch’s customer account managers will appear in a new window. The window will present manager identification and summary information about each manager’s customer base: <list of data elements>.” Other requirements provide details on how the data will be summarized, formatted and displayed.
♦ A defect is a variance between the expected and actual result. The defect’s ultimate source may be traced to a fault introduced in the specification, design, or development (coding) phases.
Testing Objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
3. A successful test is one that uncovers an as-yet-undiscovered error.
Testing Principles
Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide software testing.
• All tests should be traceable to customer requirements. As we have seen, the objective of software testing is to uncover errors. It follows that the most severe defects (from the customer’s point of view) are those that cause the program to fail to meet its requirements.
• Tests should be planned long before testing begins. Test planning can begin as soon as the requirements model is complete.
• The Pareto principle applies to software testing. Stated simply, the Pareto principle implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all program components. The problem, of course, is to isolate these suspect components and to thoroughly test them.
• Testing should begin “in the small” and progress toward testing “in the large.” The first tests planned and executed generally focus on individual components. As testing progresses, focus shifts in an attempt to find errors in integrated clusters of components and ultimately in the entire system.
• Exhaustive testing is not possible. The number of path permutations for even a moderately sized program is exceptionally large. For this reason, it is impossible to execute every combination of paths during testing. It is possible, however, to adequately cover program logic and to ensure that all conditions in the component-level design have been exercised.
• To be most effective, testing should be conducted by an independent third party. By most effective, we mean testing that has the highest probability of finding errors (the primary objective of testing). The software engineer who created the system is not the best person to conduct all tests for the software.
BASIS PATH TESTING
Basis path testing is a white-box testing technique first proposed by Tom McCabe. The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.
Cyclomatic Complexity
Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program and provides us with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.
An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.
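A worked example may help. The standard formula, not spelled out above, is V(G) = E - N + 2 on the flow graph (edges minus nodes plus two), or equivalently the number of simple decisions plus one; the function below is invented for illustration.

def classify(x: int) -> str:
    # Two simple decisions => cyclomatic complexity V(G) = 2 + 1 = 3.
    if x < 0:        # decision 1
        return "negative"
    elif x == 0:     # decision 2
        return "zero"
    return "positive"

# A basis set therefore needs three independent paths, each introducing
# a new condition or set of statements:
#   classify(-1)  -> "negative"
#   classify(0)   -> "zero"
#   classify(5)   -> "positive"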
BLACK-BOX TESTING
Black-box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black-box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program.
Black-box testing is not an alternative to white-box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white-box methods.
Black-box testing attempts to find errors in the following categories:
(1) incorrect or missing functions,
(2) interface errors,
(3) errors in data structures or external data base access,
(4) behavior or performance errors, and
(5) initialization and termination errors.
Unlike white-box testing, which is performed early in the testing process, black-box testing tends to be applied during later stages of testing. Because black-box testing purposely disregards control structure, attention is focused on the information domain. Tests are designed to answer questions such as what classes of input will make good test cases and how the boundaries of a data class are isolated.
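For instance, a black-box test is derived entirely from a stated requirement; the account module, the withdraw function, and its contract below are invented for illustration.

import unittest

from account import withdraw  # hypothetical unit under test

class TestWithdrawBlackBox(unittest.TestCase):
    # Derived from a (made-up) requirement: "a withdrawal reduces the
    # balance and may not exceed it" -- no knowledge of the code inside.

    def test_valid_withdrawal_reduces_balance(self):
        self.assertEqual(withdraw(balance=100, amount=40), 60)

    def test_overdraft_is_rejected(self):
        with self.assertRaises(ValueError):
            withdraw(balance=100, amount=150)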
White-Box Testing
White-box testing is a verification technique software engineers can use to examine if their code works as expected. This section covers:
• a method for writing a set of white-box test cases that exercise the paths in the code
• the use of equivalence partitioning and boundary value analysis to manage the number of test cases that need to be written and to examine error-prone/extreme “corner” test cases
• how to measure how thoroughly the test cases exercise the code
White-box testing is testing that takes into account the internal mechanism of a system or component. White-box testing is also known as structural testing, clear box testing, and glass box testing.
The connotations of “clear box” and “glass box” appropriately indicate that you have full visibility of the internal workings of the software product, specifically, the logic and the structure of the code.
Using white-box testing techniques, a software engineer can design test cases that
(1) exercise independent paths within a module or unit;
(2) exercise logical decisions on both their true and false sides;
(3) execute loops at their boundaries and within their operational bounds; and
(4) exercise internal data structures to ensure their validity.
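A minimal sketch of points (2) and (3), using an invented function: each test is chosen by looking at the decision and loop structure of the code itself.

def total_positive(values):
    total = 0.0
    for v in values:   # loop: exercise at and within its boundaries
        if v > 0:      # decision: exercise on both its true and false sides
            total += v
    return total

import unittest

class TestTotalPositiveWhiteBox(unittest.TestCase):
    def test_loop_zero_times(self):
        self.assertEqual(total_positive([]), 0.0)

    def test_loop_once_decision_true(self):
        self.assertEqual(total_positive([2.5]), 2.5)

    def test_loop_once_decision_false(self):
        self.assertEqual(total_positive([-1.0]), 0.0)

    def test_loop_many_times_both_branches(self):
        self.assertEqual(total_positive([1.0, -2.0, 3.0]), 4.0)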
There are six basic types of testing: unit, integration, function/system, acceptance, regression, and beta. White-box testing is used for three of these six types:
• Unit testing, which is testing of individual hardware or software units or groups of related units
• Integration testing, which is testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them
• Regression testing, which is selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements
UNIT TESTING
Unit testing focuses verification effort on the smallest unit of software design—the software component or module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of tests and uncovered errors is limited by the constrained scope established for unit testing. The unit test is white-box oriented, and the step can be conducted in parallel for multiple components.
Unit Test Considerations
The module interface is tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing. All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once. And finally, all error handling paths are tested.
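A brief sketch of the boundary-condition and error-path considerations for a single module function; the function and its limits are invented for illustration.

def percentile_grade(score: int) -> str:
    # Invented unit under test: scores run from 0 to 100, 50 passes.
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

import unittest

class TestPercentileGrade(unittest.TestCase):
    # Boundary conditions: behavior at the edges of the processing limits.
    def test_boundaries(self):
        self.assertEqual(percentile_grade(0), "fail")
        self.assertEqual(percentile_grade(49), "fail")
        self.assertEqual(percentile_grade(50), "pass")
        self.assertEqual(percentile_grade(100), "pass")

    # Error handling paths: inputs outside the restricted range.
    def test_out_of_range_rejected(self):
        with self.assertRaises(ValueError):
            percentile_grade(101)
        with self.assertRaises(ValueError):
            percentile_grade(-1)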
Integration Testing
Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested components and build a program structure that has been dictated by design.
Top-down Integration
Top-down integration testing is an incremental approach to construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
Bottom-up Integration
Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure). Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated.
A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
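The driver of step 2 might look like the following throwaway script; the cluster's module names (record_parser, record_checks) and functions are assumptions made for the sake of the sketch.

# driver.py -- disposable test driver for a low-level cluster
from record_parser import parse_record  # atomic module (assumed)
from record_checks import is_valid      # atomic module (assumed)

def run_cluster_tests() -> None:
    # The driver coordinates test case input and output (step 2 above).
    cases = ["id=1;name=Ada", "id=;name="]
    for raw in cases:
        record = parse_record(raw)
        print(raw, "->", record, "valid:", is_valid(record))

if __name__ == "__main__":
    run_cluster_tests()  # removed once the cluster is combined upward (step 4)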
Regression Testing
Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.
Regression testing is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors.
Regression testing may be conducted manually, by re-executing a subset of all test cases, or using automated capture/playback tools. Capture/playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison.
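A simple form of the capture/playback idea, assuming a hypothetical quote function under maintenance and a file of previously captured cases:

import json
import unittest

from pricing import quote  # unit whose past behavior must not change (assumed)

class TestPricingRegression(unittest.TestCase):
    def test_captured_cases_still_pass(self):
        # Replay inputs captured during earlier test runs and compare
        # against the outputs recorded at capture time.
        with open("captured_cases.json") as f:
            for case in json.load(f):
                self.assertEqual(quote(**case["input"]), case["expected"])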
Smoke Testing
Smoke testing is an integration testing approach that is commonly used when “shrink-wrapped” software products are being developed. It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess its project on a frequent basis. In essence, the smoke testing approach encompasses the following activities:
1. Software components that have been translated into code are integrated into a “build.” A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from properly performing its function. The intent should be to uncover “show-stopper” errors that have the highest likelihood of throwing the software project behind schedule.
3. The build is integrated with other builds and the entire product (in its current form) is smoke tested daily. The integration approach may be top down or bottom up.
Alpha and Beta Testing
It is virtually impossible for a software developer to foresee how the customer will really use a program. Instructions for use may be misinterpreted; strange combinations of data may be regularly used; output that seemed clear to the tester may be unintelligible to a user in the field.
When custom software is built for one customer, a series of acceptance tests are conducted to enable the customer to validate all requirements. Conducted by the end user rather than software engineers, an acceptance test can range from an informal "test drive" to a planned and systematically executed series of tests.
In fact, acceptance testing can be conducted over a period of weeks or months, thereby uncovering cumulative errors that might degrade the system over time.
If software is developed as a product to be used by many customers, it is impractical to perform formal acceptance tests with each one. Most software product builders use a process called alpha and beta testing to uncover errors that only the end user seems able to find.
The alpha test is conducted at the developer's site by a customer. The software is used in a natural setting with the developer "looking over the shoulder" of the user and recording errors and usage problems. Alpha tests are conducted in a controlled environment.
The beta test is conducted at one or more customer sites by the end user of the software. Unlike alpha testing, the developer is generally not present. Therefore, the beta test is a "live" application of the software in an environment that cannot be controlled by the developer. The customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular intervals. As a result of problems reported during beta tests, software engineers make modifications and then prepare for release of the software product to the entire customer base.
THE TEST PLAN
The test plan is a mandatory document. You can’t test without one. For simple, straightforward projects the plan doesn’t have to be elaborate, but it must address certain items. As identified by the American National Standards Institute / Institute of Electrical and Electronics Engineers (ANSI/IEEE) Standard 829-1983 for Software Test Documentation, the following components should be covered in a software test plan.
Test case/data
A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
A test case is a pair consisting of test data to be input to the program and the expected output. The test data is a set of values, one for each input variable.
A test set is a collection of zero or more test cases.
A test case in software engineering is a set of conditions or variables under which a tester will determine whether an application or software system is working correctly or not. The mechanism for determining whether a software program or system has passed or failed such a test is known as a test oracle. In some settings, an oracle could be a requirement or use case, while in others it could be a heuristic. It may take many test cases to determine that a software program or system is functioning correctly. Test cases are often referred to as test scripts, particularly when written. Written test cases are usually collected into test suites.
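These definitions translate directly into code. In the minimal sketch below (all names invented), a test case pairs input data with an expected output, a test set collects the cases, and the equality check against the expected value plays the role of a simple oracle.

from dataclasses import dataclass
from typing import Any

@dataclass
class Case:
    inputs: dict          # the test data: one value per input variable
    expected: Any         # the expected output

# A test set: a collection of zero or more test cases.
test_set = [
    Case(inputs={"x": 2, "y": 3}, expected=5),
    Case(inputs={"x": -1, "y": 1}, expected=0),
]

def run(cases, fn) -> None:
    for case in cases:
        actual = fn(**case.inputs)
        # Comparing against `expected` acts as a simple test oracle.
        print(case.inputs, "->", actual,
              "PASS" if actual == case.expected else "FAIL")

run(test_set, lambda x, y: x + y)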