The most important structural properties that SDMetrics can measure for your software designs are: size, coupling, inheritance, complexity, and cohesion. Below, we summarize each property, providing the following information:
- Definition: a short definition of the property.
- Impact on quality: the system quality attributes the property is hypothesized to impact, and why.
- Empirical results: a qualitative summary of results from empirical studies investigating the usefulness of measures of the structural property. The summary is based on a literature survey and a comprehensive empirical investigation of design metrics; see [BW02] for full details.
Design Size
Design size metrics measure the size of design elements, typically by counting the elements contained within. For example, the number of operations in a class, the number of classes in a package, and so on.
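As a minimal sketch, size metrics are plain counts over a container hierarchy. The data model below is a hypothetical toy representation, not SDMetrics' actual API, and the metric names (NumOps, NumCls) are used informally:

```python
# Hypothetical toy model: size metrics as counts of contained elements.
classes = {
    # class name -> list of its operations
    "Order": ["addItem", "removeItem", "total", "print"],
    "Customer": ["getName", "setName"],
}
packages = {
    # package name -> list of contained classes
    "shop": ["Order", "Customer"],
}

# NumOps: number of operations in a class
num_ops = {cls: len(ops) for cls, ops in classes.items()}
# NumCls: number of classes in a package
num_cls = {pkg: len(members) for pkg, members in packages.items()}

print(num_ops)  # {'Order': 4, 'Customer': 2}
print(num_cls)  # {'shop': 2}
```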
Impact on quality
Size metrics are good candidates for developing cost or effort estimates for implementation, review, testing, or maintenance activities. Such estimates are then used as input for project planning purposes and the allocation of personnel.
In addition, large design elements (big classes or packages) may suffer from poor design. In an iterative development process, more and more functionality is added to a class or package over time. The danger is that, eventually, many unrelated responsibilities are assigned to a design element. As a result, it has low functional cohesion. This in turn negatively impacts the understandability, reusability, and maintainability of the design element.
Therefore, the interfaces and implementations of large classes or packages should be reviewed for functional cohesion. If there is no justification for the large size, the design element should be considered for refactoring, for instance, by extracting parts of its functionality into separate, more cohesive classes.
Empirical results
Empirical studies consistently confirm the importance of size as the main cost driver in a software project. Size metrics are also consistently good indicators of fault-proneness: large methods/classes/packages contain more faults. However, since size metrics systematically identify large design elements as fault-prone, these metrics alone are not suitable to find elements with high fault density.
Coupling
Coupling is the degree to which the elements in a design are connected.
Impact on quality
Coupling connections cause dependencies between design elements, which, in turn, have an impact on system qualities such as maintainability (a modification of a design element may require modifications to its connected elements) or testability (a fault in one design element may cause a failure in a completely different, connected element). Thus, a common design principle is to minimize coupling.
Most coupling dependencies are directed - the coupling usually defines a client-supplier relationship between the design elements. Therefore, it is useful to distinguish import coupling ("using", "fan-out") and export coupling ("used", "fan-in"), which we discuss in the following.
Import coupling measures the degree to which an element has knowledge of, uses, or depends on other design elements. High import coupling can have the following effects:
- Decreased maintainability: changes to the supplier may necessitate follow-up changes (ripple effects) to the client.
The stability of the supplier is a factor to consider here. High coupling to elements that are not likely to change is less harmful than coupling to "hot spots".
- Decreased understandability, increased fault-proneness: elements with high import coupling operate in a large context; developers need to know all the services the element relies on, and how to use them.
- Decreased reusability: To reuse a class or package with high import coupling in a new context, all the required services must also be made available in the new context.
Export coupling measures the degree to which an element is used by, or depended upon by, other design elements. High export coupling is often observed for general utility classes (e.g., for string handling or logging services) that are used pervasively across all layers of the system. Thus, high export coupling is not necessarily indicative of bad design.
Again, an important issue to consider here is stability. Elements with high export coupling that are likely to change in the future can have a large impact on the system if the change affects their interface. Therefore, classes with high export coupling should be reviewed for anticipated changes, to ensure that these changes can be implemented with minimal impact.
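The fan-in/fan-out distinction can be sketched by counting directed dependencies in both directions. The class names and dependency list below are hypothetical, for illustration only:

```python
from collections import defaultdict

# Hypothetical class-level dependencies: (client, supplier) pairs.
dependencies = [
    ("OrderUI", "Order"),
    ("OrderUI", "Logger"),
    ("Order", "Logger"),
    ("Report", "Logger"),
]

fan_out = defaultdict(int)  # import coupling: how many suppliers a class uses
fan_in = defaultdict(int)   # export coupling: how many clients use a class

for client, supplier in dependencies:
    fan_out[client] += 1
    fan_in[supplier] += 1

# A utility class accumulates high export coupling across the system:
print(fan_in["Logger"])    # 3
print(fan_out["OrderUI"])  # 2
```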
Empirical results
Coupling metrics have consistently been found to be good indicators of fault-proneness. It seems worthwhile to investigate different dimensions of coupling: import versus export coupling, different coupling mechanisms, and distinguishing coupling to COTS libraries from coupling to application-specific classes/packages. Coupling metrics are suitable to identify design elements with high fault density. Therefore, coupling metrics greatly help to identify small parts of a design that contain a large number of faults.
Inheritance
Inheritance-related metrics are concerned with aspects such as
- depth/width of the inheritance graph
- number of ancestors/descendants of a design element
- inherited size
- polymorphism, method overriding, etc.
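The first two aspects can be sketched as simple graph walks over an inheritance tree. The class names below are hypothetical, and the tree is represented as a child-to-parent map for brevity:

```python
# Hypothetical inheritance tree: child -> parent (None marks a root class).
parent = {
    "Shape": None,
    "Polygon": "Shape",
    "Triangle": "Polygon",
    "Circle": "Shape",
}

def dit(cls):
    """Depth of inheritance tree: number of ancestors up to the root."""
    depth = 0
    while parent[cls] is not None:
        cls = parent[cls]
        depth += 1
    return depth

def noc(cls):
    """Number of children: count of direct descendants of a class."""
    return sum(1 for child, p in parent.items() if p == cls)

print(dit("Triangle"))  # 2 (Polygon, Shape)
print(noc("Shape"))     # 2 (Polygon, Circle)
```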
Impact on quality
Deep inheritance structures are hypothesized to be more fault-prone: the information needed to fully understand a class situated deep in the inheritance tree is spread over several ancestor classes, and is thus more difficult to survey.
Similar to high export coupling, a modification to a design element with a large number of descendants can have a large effect on the system. Make sure the interface of the class is stable, or that anticipated modifications can be added without affecting the inheritance hierarchy at large.
Empirical results
Empirical studies show that the effects of the use of inheritance on system qualities such as fault-proneness vary greatly. Depending on factors such as developer experience, system quality can benefit or suffer from the use of inheritance, or be unaffected by it.
Thus, inheritance metrics should not be relied on for decision making until their impact on system quality has been demonstrated in a given development environment. Extant inheritance metrics per se are not suitable to distinguish proper use of inheritance from improper use.
Also, inheritance is not very frequently used in designs. Typically, only a small percentage of the classes in a system will participate in inheritance relationships. As a consequence, inheritance-related metrics tend to have low variance and are difficult to use (see Descriptive Statistics).
Complexity
Complexity measures the degree of connectivity between the elements of a design unit. Whereas size counts the elements in a design unit, and coupling counts the relationships/dependencies crossing the design unit boundary, complexity is concerned with the relationships/dependencies between the elements within the design unit. For instance, the number of method invocations among the methods of one class can be considered a measure of class complexity, as can the number of transitions between the states in a state diagram.
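The class-internal variant can be sketched as counting caller/callee pairs among a class's own methods. The classes and call pairs below are hypothetical:

```python
# Hypothetical intra-class call graphs: for each class, the set of
# (caller, callee) pairs among its own methods. Counting these internal
# calls gives a simple class complexity measure.
internal_calls = {
    "Order": {("total", "itemPrice"), ("print", "total"), ("print", "itemPrice")},
    "Customer": set(),  # no internal method invocations
}

complexity = {cls: len(calls) for cls, calls in internal_calls.items()}

print(complexity["Order"])     # 3
print(complexity["Customer"])  # 0
```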
Impact on quality
High complexity of interactions between the elements of a design unit can lead to decreased understandability and therefore increased fault-proneness. Also, testing such design units is more difficult.
Empirical results
In practice, complexity metrics are often strongly correlated with size measures. Large design units that contain many design elements are also more likely to have a large number of connections between those elements. Thus, while complexity metrics are good indicators of qualities such as fault-proneness, they rarely provide new insights beyond size metrics.
Cohesion
Cohesion is the degree to which the elements in a design unit (package, class, etc.) are logically related, or "belong together". As such, cohesion is a semantic concept.
Cohesion metrics have been proposed which attempt to approximate this semantic concept using syntactical criteria. Such metrics quantify the connectivity (coupling) between elements of the design unit: the higher the connectivity between elements, the higher the cohesion.
Cohesion metrics often are normalized to have a notion of minimum and maximum cohesion, usually expressed on a scale from 0 to 1. Minimum cohesion (0) is assumed when the elements are entirely unconnected, maximum cohesion (1) is assumed when each element is connected to every other element.
Non-normalized metrics are based on counts of connections between design elements in a unit (e.g., method calls within a class). As such, non-normalized cohesion metrics are conceptually similar to complexity metrics.
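The normalization described above amounts to dividing the actual number of connections by the maximum possible number of (undirected) pairs among the unit's elements. This is a generic sketch of that idea, not any specific published cohesion metric:

```python
def normalized_cohesion(n_elements, connections):
    """Ratio of actual to maximum possible undirected connections among
    the elements of a design unit: 0 = entirely unconnected,
    1 = every element connected to every other element."""
    if n_elements < 2:
        return 1.0  # a single element is trivially cohesive
    max_connections = n_elements * (n_elements - 1) // 2
    return len(connections) / max_connections

# E.g., 4 methods where only 2 of the 6 possible pairs are connected
# (say, by sharing an attribute): cohesion is 2/6, i.e. about 0.33.
pairs = {("a", "b"), ("c", "d")}
print(normalized_cohesion(4, pairs))
```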
Impact on quality
A design element with low cohesion has been assigned many unrelated responsibilities. Consequently, the design element is more difficult to understand, and therefore also harder to maintain and reuse. Design elements with low cohesion should be considered for refactoring, for instance, by extracting parts of the functionality into separate classes with clearly defined responsibilities.
Empirical results
In practice, cohesion metrics are only of limited usefulness:
- Non-normalized cohesion metrics are often strongly related to size metrics. This makes sense since, as discussed, large classes or packages may in fact suffer from low cohesion. Such cohesion metrics then are, of course, good quality indicators, but they are redundant with size metrics - they provide no new information about the design element.
- Normalized cohesion metrics do not consistently have a bearing on system quality. That is, we cannot conclude from a high or low cohesion value that a class is, say, more or less fault-prone. Either the theoretical negative impact of low cohesion on system quality is not always that critical in practice, or the cohesion metrics simply fail to identify design elements with unrelated responsibilities.