Product Metrics
SEII - Lecture 23
Dr. Muzafar Khan
Assistant Professor
Department of Computer Science
CIIT, Islamabad

Recap
Measurement and quality assessment
Framework for product metrics
Measure, measurement, and metrics
Formulation, collection, analysis, interpretation, feedback
Principles for metrics characterization and validation
Metrics for the requirements model
Function-based metrics
Metrics for specification quality
Metrics for the design model
Architectural design metrics
Metrics for object-oriented design

Class-Oriented Metrics [1/3]
Weighted methods per class (WMC)
  n methods of complexity c1, c2, ..., cn are defined for a class C
  WMC = ∑ ci for i = 1 to n
  If complexity increases, more effort is required
  Reuse is also limited
  Counting methods seems straightforward
  A consistent counting approach is required
Depth of the inheritance tree (DIT)
  Maximum length from the node to the root of the tree
  As DIT grows, lower-level classes inherit many methods
  Many methods may be reused
  This also leads to greater design complexity

Class-Oriented Metrics [2/3]
Number of children (NOC)
  The immediate subordinate classes of a class
  As NOC grows:
    Reuse increases
    The abstraction of the parent class may be diluted
    Testing effort increases
Coupling between object classes (CBO)
  As coupling increases:
    Reusability decreases
    Testing and modification become more complicated
  CBO should be kept as low as is reasonable

Class-Oriented Metrics [3/3]
Response for a class (RFC)
  The set of methods potentially executed in response to a message received by an object of the class
  As RFC increases, design complexity and testing effort increase
Lack of cohesion in methods (LCOM)
  The number of methods that access one or more of the same attributes
  If no methods access the same attributes, LCOM is zero
  If LCOM is high, the complexity of the class design increases
(A small computation sketch for these class-oriented metrics follows the operation-oriented metrics slide below.)

Component-Level Design Metrics [1/3]
Metrics for conventional components focus on the internal characteristics of a component
Cohesion metrics
  Data slice: a backward walk through a module looking for data values
  Data tokens: the variables defined for the module
  Glue tokens: data tokens that lie on one or more data slices
  Superglue tokens: data tokens common to every data slice
  Stickiness: the relative stickiness of a glue token is directly proportional to the number of data slices that it binds

Component-Level Design Metrics [2/3]
Coupling metrics
  Data and control flow coupling
    di = number of input data parameters
    ci = number of input control parameters
    do = number of output data parameters
    co = number of output control parameters
  Global coupling
    gd = number of global variables used as data
    gc = number of global variables used as control
  Environmental coupling
    w = number of modules called
    r = number of modules calling the module under consideration
  Module coupling: mc = k / M, where M = di + (a * ci) + do + (b * co) + gd + (c * gc) + w + r

Component-Level Design Metrics [3/3]
Complexity metrics
  Cyclomatic complexity: the number of independent logical paths through a program
  Several variations of cyclomatic complexity exist

Operation-Oriented Metrics
Average operation size (OSavg)
  Number of lines of code or number of messages sent by the operation
  If the number of messages sent increases, responsibilities are most probably not well allocated within the class
Operation complexity (OC)
  Computed using the complexity metrics proposed for conventional software
  OC should be kept as low as possible
Average number of parameters per operation (NPavg)
  The larger the number of parameters, the more complex the collaboration between objects
  NPavg should be kept as low as possible
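To make the class-oriented metrics above concrete, the following minimal Python sketch (not part of the original slides) computes WMC, DIT, and NOC for a toy class model; the class names and per-method complexity values are hypothetical illustration data.

```python
# Minimal sketch: three of the class-oriented (CK) metrics from the slides,
# computed for a toy class model. All names and complexity values are made up.

class ClassInfo:
    def __init__(self, name, parent=None, method_complexities=None):
        self.name = name                                       # class name
        self.parent = parent                                   # immediate superclass (None for a root class)
        self.method_complexities = method_complexities or []   # c1, c2, ..., cn

def wmc(cls):
    """Weighted methods per class: WMC = sum of ci for i = 1..n."""
    return sum(cls.method_complexities)

def dit(cls):
    """Depth of the inheritance tree: length of the path from the class to the root."""
    depth = 0
    while cls.parent is not None:
        cls = cls.parent
        depth += 1
    return depth

def noc(cls, all_classes):
    """Number of children: count of immediate subordinate classes."""
    return sum(1 for c in all_classes if c.parent is cls)

# Toy hierarchy: Account is the root; SavingsAccount and CheckingAccount inherit from it.
account  = ClassInfo("Account",         method_complexities=[1, 2, 3])
savings  = ClassInfo("SavingsAccount",  parent=account, method_complexities=[2, 2])
checking = ClassInfo("CheckingAccount", parent=account, method_complexities=[1, 4])
model = [account, savings, checking]

for c in model:
    print(c.name, "WMC =", wmc(c), "DIT =", dit(c), "NOC =", noc(c, model))
```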
Design Metrics for WebApps – Interface Metrics [1/2]
Layout complexity: number of distinct regions defined for an interface
Layout region complexity: average number of distinct links per region
Recognition complexity: average number of distinct items the user must look at before making a navigation or data input decision
Recognition time: average time (in seconds) that it takes a user to select the appropriate action for a given task
Typing effort: average number of keystrokes required for a specific function

Interface Metrics [2/2]
Mouse pick effort: average number of mouse picks per function
Selection complexity: average number of links that can be selected per page
Content acquisition time: average number of words of text per web page
Memory load: average number of distinct data items that the user must remember to achieve a specific objective

Aesthetic Design Metrics [1/2]
Word count: total number of words that appear on a page
Body text percentage: percentage of words that are body text versus display text (i.e., headers)
Emphasized body text percentage: portion of body text that is emphasized (e.g., bold or capitalized)
Text cluster count: text areas highlighted with color, bordered regions, rules, or lists
Link count: total links on a page

Aesthetic Design Metrics [2/2]
Page size: total bytes for the page as well as its elements, graphics, and style sheets
Graphic percentage: percentage of page bytes that are for graphics
Graphics count: total graphics on a page (not including graphics specified in scripts, applets, and objects)
Color count: total colors employed
Font count: total fonts employed (i.e., face + size + bold + italic)

Content Metrics
Page wait: average time required for a page to download at different connection speeds
Page complexity: average number of different types of media used on a page, not including text
Graphic complexity: average number of graphics media per page
Audio complexity: average number of audio media per page
Video complexity: average number of video media per page
Animation complexity: average number of animations per page
Scanned image complexity: average number of scanned images per page

Navigation Metrics
For static pages:
  Page-linking complexity: number of links per page
  Connectivity: total number of internal links, not including dynamically generated links
  Connectivity density: connectivity divided by page count

Metrics for Source Code
n1 = number of distinct operators that appear in a program
n2 = number of distinct operands that appear in a program
N1 = total number of operator occurrences
N2 = total number of operand occurrences
Overall program length (N) and volume (V):
  N = n1 log2 n1 + n2 log2 n2
  V = N log2 (n1 + n2)
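The length and volume formulas on the source-code slide above are the classic Halstead software-science measures. As a minimal illustration (not part of the original slides), the Python sketch below evaluates them exactly as written on the slide; the operator and operand counts are made-up example values.

```python
import math

def halstead_length_and_volume(n1, n2):
    """Program length N and volume V, using the formulas from the slide above:
    N = n1*log2(n1) + n2*log2(n2) and V = N*log2(n1 + n2),
    where n1 = distinct operators and n2 = distinct operands.
    (N1 and N2, the total occurrence counts, are defined on the slide but do not
    appear in these two formulas.)"""
    N = n1 * math.log2(n1) + n2 * math.log2(n2)   # overall program length
    V = N * math.log2(n1 + n2)                    # program volume
    return N, V

# Made-up counts for a small example program
N, V = halstead_length_and_volume(n1=10, n2=16)
print(f"length N = {N:.1f}, volume V = {V:.1f}")
```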
Metrics for Object-Oriented Testing [1/2]
Lack of cohesion in methods (LCOM)
  If LCOM is high, more states must be tested
Percent public and protected (PAP)
  Percentage of class attributes that are public or protected
  A high value of PAP increases the likelihood of side effects among classes
Public access to data members (PAD)
  The number of classes (or methods) that access another class's attributes
  A high value indicates a violation of encapsulation

Metrics for Object-Oriented Testing [2/2]
Number of root classes (NOR)
  The number of distinct class hierarchies
  Tests should be developed for each root class and its corresponding hierarchy
  As NOR increases, testing effort also increases
Fan-in (FIN)
  An indication of multiple inheritance
  FIN > 1 should be avoided
Number of children (NOC) and depth of the inheritance tree (DIT)

Metrics for Maintenance
MT = number of modules in the current release
Fc = number of modules in the current release that have been changed
Fa = number of modules in the current release that have been added
Fd = number of modules from the preceding release that were deleted in the current release
Software maturity index: SMI = [MT - (Fa + Fc + Fd)] / MT
As SMI approaches 1.0, the product begins to stabilize (a small computation sketch follows the summary)

Summary
Class-oriented metrics: weighted methods per class, depth of the inheritance tree, number of children, coupling, response for a class, lack of cohesion
Component-level design metrics: cohesion, coupling, and complexity
Operation-oriented metrics: average operation size, operation complexity, average number of parameters per operation
Design metrics for WebApps
Metrics for source code
Metrics for object-oriented testing
Metrics for maintenance
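As referenced on the maintenance slide, here is a minimal Python sketch of the software maturity index computation; the module counts are invented example values, not data from the lecture.

```python
def software_maturity_index(MT, Fa, Fc, Fd):
    """Software maturity index from the maintenance slide:
    SMI = [MT - (Fa + Fc + Fd)] / MT
    MT = modules in the current release; Fa, Fc, Fd = modules added,
    changed, and deleted relative to the preceding release."""
    return (MT - (Fa + Fc + Fd)) / MT

# Hypothetical release data: 120 modules, 6 added, 9 changed, 3 deleted
smi = software_maturity_index(MT=120, Fa=6, Fc=9, Fd=3)
print(f"SMI = {smi:.2f}")  # values approaching 1.0 suggest the product is stabilizing
```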