
Consolidated product: Software Process Assessment - Part 4 : Guide to conducting assessments, Version 1.00 (formerly PAG 1.01)


PREAMBLE
In January 1993 a program of work was approved by ISO/IEC JTC1 for the development of an international standard for software process assessment. In June 1993 the SPICE Project Organisation was established with a mandate from JTC1/SC7 to:
- assist the standardisation project in its preparatory stage by developing initial working drafts;
- undertake user trials in order to gain early experience data which will form the basis for revision of the Technical Report prior to publication as a full International Standard;
- create market awareness and take-up of the evolving standard.

The SPICE Project Organisation completed its task of producing the set of working drafts in June 1995. The SPICE user trials commenced in January 1995. The working drafts have now been handed over to JTC1/SC7 for the normal process of standards development, commencing in July 1995.

So far as can be determined, intellectual property rights for these documents reside with the individuals and organisations that contributed to their development. In agreeing to take part in the Project, participants agreed to abide by decisions of the Management Board in relation to the conduct of the Project. It is in accordance with this understanding that the Management Board has now agreed to release the baseline set of documents. This introductory statement sets out the terms and conditions under which this release is permitted.

The documents as released are available freely from the SPICE Project File Server, sisyphus.cit.gu.edu.au, by anonymous ftp, or from approved mirrors of the server. A hypertext version of the documents is also available on the World Wide Web at URL http://wwwsqi.cit.gu.edu.au/spice/


TERMS AND CONDITIONS


These terms and conditions apply to the set of documents developed by the SPICE Project, and published within the Project as Version 1.0, with the following titles:

Part 1 : Concepts and introductory guide
Part 2 : A model for process management
Part 3 : Rating processes
Part 4 : Guide to conducting assessment
Part 5 : Construction, selection and use of assessment instruments and tools
Part 6 : Qualification and training of assessors
Part 7 : Guide for use in process improvement
Part 8 : Guide for use in determining supplier process capability
Part 9 : Vocabulary

1. You may copy and distribute verbatim copies of any or all of the Documents as you receive them, in any medium, provided that you conspicuously and appropriately publish with each copy a copy of these Terms and Conditions. You may charge a fee for the physical act of transferring a copy.
2. You may copy extracts from these documents in materials for internal or public use, provided you give clear acknowledgment of the source of the material, by citation or other appropriate means.
3. You may not copy, modify, sub-license, or distribute the Documents except as expressly provided under these Terms and Conditions.

Released on the Authority of the SPICE Management Board:


Project Manager: Alec Dorling
Technical Centre Managers:
  Europe: Harry Barker
  Canada, Central and South America: Jean-Normand Drouin
  USA: Mark Paulk / Mike Konrad / Dave Kitson
  Asia Pacific: Terry Rout
Members: Catriona Mackie, Bob Smith, Emmanuel Lazinier, Jerome Pesant, Bob Rand, Arnoldo Diaz, Yossi Winograd, Mary Campbell, Carrie Buchman, Ali Azimi, Bruce Hodgen, Katsumi Shintani


Product Managers:
Part 1 : Concepts and introductory guide - Product Manager: Terry Rout
Part 2 : A model for process management - Product Managers: Al Graydon, Mark Paulk
Part 3 : Rating processes - Product Manager: Harry Barker
Part 4 : Guide to conducting assessment - Product Manager: Harry Barker
Part 5 : Construction, selection and use of assessment instruments and tools - Product Managers: Mary Campbell, Peter Hitchcock, Arnoldo Diaz
Part 6 : Qualification and training of assessors - Product Manager: Ron Meegoda
Part 7 : Guide for use in process improvement - Product Managers: Adriana Bicego, Pasi Kuvaja
Part 8 : Guide for use in determining supplier process capability - Product Manager: John Hamilton
Part 9 : Vocabulary - Product Manager: Terry Rout

Acknowledgment:
Acknowledgment is made to all contributors of the SPICE project, without whom the project could not have been conceived and carried through successfully.

Note on document formatting
Use the following margins for equivalent printing on A4 or US letter paper (these are NOT the SPICE standards):

Paper size             Top margin           Bottom margin        Left margin          Right margin
A4                     34.1 mm (1.34 in)    34.1 mm (1.34 in)    25.4 mm (1.0 in)     25.4 mm (1.0 in)
US letter (imperial)   25.4 mm (1.0 in)     25.4 mm (1.0 in)     28.4 mm (1.12 in)    28.4 mm (1.12 in)


SOFTWARE PROCESS ASSESSMENT
Part 4: Guide to conducting assessments

Contents

PREAMBLE
TERMS AND CONDITIONS
Released on the Authority of the SPICE Management Board
Product Managers
Acknowledgment
Foreword
Introduction
1 Scope
2 Normative references
3 Definitions
4 Overview of process assessment
4.1 Context of process assessment
4.2 Process rating scheme
4.3 Assessment approaches
4.4 Assessment stages
4.5 Success factors for process assessment
5 Guidance on conducting assessments
5.1 Reviewing the assessment inputs
5.2 Selecting the process instances
5.3 Preparing for a team-based assessment
5.4 Collecting and verifying information
5.5 Determining the Actual Ratings for Process Instances
5.6 Determining derived ratings
5.7 Validating the ratings
5.8 Presenting the assessment output

Foreword
In June 1991, the fourth plenary meeting of ISO/IEC JTC1/SC7 approved a study period (resolution 144) to investigate the needs and requirements for a standard for software process assessment. The results, which are documented in a Study Report (JTC1/SC7 N944R, 11 June 1992), came to the following major conclusions:
- there is international consensus on the needs and requirements for a standard for process assessment;
- there is international consensus on the need for a rapid route to development and trialling to provide usable output in an acceptable timescale and to ensure the standard fully meets the needs of its users;
- there is international commitment to resource the project with an international project team staffed by full time resource, with development being coordinated through four technical development centres in Europe, N America (2) and Asia Pacific;
- the standard should initially be published as a Technical Report Type 2 to enable the developing standard to stabilise during the period of the user trials, prior to its issuing as a full International Standard.

The new work item was approved in January 1993 by JTC1. In June 1993 the SPICE Project Organisation was established with a mandate from JTC1/SC7 to:
- assist the standardisation project in its preparatory stage to develop initial working drafts;
- undertake user trials in order to gain early experience data which will form the basis for revision of the published Technical Report prior to review as a full International Standard;
- create market awareness and take-up of the evolving standard.

The SPICE Project Organisation completed its task of producing the set of working drafts in June 1995. These working drafts have formed the basis for this Technical Report Type 2. The period of SPICE user trials commenced in January 1995 and is synchronised in phases to allow feedback to the stages of the technical work.

ISO/IEC Directives state that a Technical Report Type 2 may be used to publish a prospective standard for provisional application so that information and experience of its practical use may be gathered. This Technical Report Type 2 consists of the following parts, under the general title Software Process Assessment:
Part 1 : Concepts and introductory guide
Part 2 : A model for process management
Part 3 : Rating processes
Part 4 : Guide to conducting assessment
Part 5 : Construction, selection and use of assessment instruments and tools
Part 6 : Qualification and training of assessors
Part 7 : Guide for use in process improvement
Part 8 : Guide for use in determining supplier process capability
Part 9 : Vocabulary

This part of this International Standard (part 4) is not normative: it provides guidance on interpreting the requirements contained in part 3 within a team-based assessment. To aid readability, the requirements from part 3 are embedded at appropriate points in the text of part 4.

Introduction
Process assessment is a means of capturing information describing the current capability of an organization's processes and is initiated as a result of a desire to determine and/or improve the capability of those processes. This part of the International Standard provides guidance on interpreting the requirements set out in part 3, primarily for use in a team-based assessment. As an aid to understanding, the requirements from part 3 are embedded verbatim in italics at appropriate points within the text of this guide.

Although the guidance in this part of the International Standard is directed at conducting a team-based assessment, the principles for rating processes are the same for a continuous, tool-based assessment. In a continuous assessment, however, the means of collecting data is different.

This document is primarily aimed at:
- the assessment team, who use the document to prepare for the assessment;
- the participants in the assessment, who use the document to help understand the assessment and interpret the results;
- all staff within organizations who need to understand the details and benefits of performing process assessment;
- tool and method developers who wish to develop tools or methods supporting the process assessment model.

1 Scope

The purpose of this document is to provide guidance on meeting the requirements contained in part 3 of this International Standard during a team-based assessment. This document describes a process assessment framework that:
- encourages self-assessment;
- takes account of the process context;
- produces a process rating profile rather than a pass/fail result;
- addresses adequacy of generic practices relative to purpose;
- is appropriate across all application domains and sizes of organization.

Process assessment is applicable in the following circumstances:
a) by or on behalf of an organization with the objective of understanding the state of its own processes for process improvement;
b) by or on behalf of an organization with the objective of determining the suitability of its own processes for a particular requirement or class of requirements;
c) by or on behalf of one organization with the objective of determining the suitability of another organization's processes for a particular contract or class of contracts.

2 Normative references

There are no normative references in this part of the International Standard.

3 Definitions

For the purposes of this part of this International Standard, the definitions in Software Process Assessment - Part 9 : Vocabulary apply.

4 Overview of process assessment

4.1 Context of process assessment


Process assessment may be invoked either by process improvement or by process capability determination which provide the inputs to assessment and use the output as illustrated in figure 1.
[Figure 1 summarises the process assessment context: the assessment input (assessment purpose, assessment scope, assessment constraints, assessment responsibilities, extended process definitions and additional information to be collected) and the assessment instrument (process indicators and process management indicators) feed into process assessment; the assessment draws on the process model of part 2 of this International Standard (process purpose, practices) and delivers the assessment output (generic practice adequacy ratings, process capability level ratings and the assessment record) back to process improvement or process capability determination.]

Figure 1 Process assessment context

4.1.1 Process assessment
Process assessment is undertaken to understand an organizational unit's current processes. Process assessment deals potentially with all the software related processes (e.g. management, development, maintenance, support) used by an organization. This is accomplished by assessing the organizational unit's processes against the process model described in part 2 of this International Standard. The process model defines, for each process, a set of base practices essential to good software engineering, and a set of generic practices grouped into capability levels. The assessment output consists of a set of generic practice adequacy ratings and process capability level ratings for each process instance assessed, together with the assessment record.

Although part 2 of this International Standard covers a range of processes applicable to the software process, in many instances of process assessment a subset of these processes may be selected. For instance, the sponsor may wish to focus attention on particular critical processes or on processes which are candidates for improvement actions. In process capability determination mode, an acquirer may wish to evaluate the capabilities of suppliers only for the processes related to the tender or contract requirements. The sponsor may wish a process to include additional base practices to those defined in part 2 of this International Standard, or to define an entirely new process, for example to meet industry specific requirements. These are defined as extended processes (see Annex A of part 2 of this International Standard).

The sophistication and complexity of the implemented generic practices for each process utilized within an organizational unit will be dependent upon the context of that process within the organizational unit. For instance, the planning required for a five person project team will be much less than for a fifty person team. This context, recorded in the process context, influences how a qualified assessor should judge an implemented generic practice when assessing its adequacy. The process context also influences the degree of comparability between process capability level ratings.

4.1.2 Process improvement
The assessment output identifies the current process capability level ratings of an organizational unit's processes and forms the basis to plan, prepare, implement and evaluate specific improvement actions, as described in part 7 of this International Standard.

4.1.3 Process capability determination
The assessment output allows an organizational unit to identify, analyse and quantify its strengths, weaknesses and risks, as described in part 8 of this International Standard.

4.2 Process rating scheme


The process rating scheme links process instances, capability levels and generic practices to the defined process purpose. The process assessment framework is based on assessing a specific process instance. A process instance is a singular instantiation of a process that is uniquely identifiable and about which information can be gathered in a repeatable manner. The actual ratings that are determined for the process instance are therefore repeatable by different qualified assessors. Each process instance has a set of five process capability level ratings (one at each capability level), each of which is an aggregation of the generic practice adequacy ratings of the one or more generic practices that belong to that level. Hence the generic practice adequacy ratings are the foundation for the rating system.

Generic practice adequacy determines the adequacy of the implemented generic practice at meeting its purpose. It is therefore as much an assessment of the 'effectiveness' of the implemented generic practice within the process context as it is an assessment of its 'conformance' to the process model defined in part 2 of this International Standard. From the actual ratings of process instances, a number of derived ratings can be determined which consist of aggregations of one or more actual ratings. These derived ratings take account of the variability of individual process instances and may provide better insight into the capability of a process within an organizational unit as a whole. In order to support the rating of the 'Performed-Informally' Capability Level, part 2 of this International Standard defines base practices for each process. These base practices are rated using the base practice adequacy or base practice existence ratings and the ratings are recorded as part of the assessment record.

4.3 Assessment approaches


4.3.1 Self-assessment
A self-assessment is used by an organization to assess the capability of its software process. The sponsor of a self-assessment is always internal to the organization. A self-assessment may be either team-based or continuous.

4.3.1.1 Team-based assessment
This approach establishes an assessment team from within the organization. An external expert may be brought into an organization:
- to assist the assessment team with the assessment;
- to help the organizational unit understand the concepts expressed by the standard;
- to explain how to use an assessment instrument.

4.3.1.2 Continuous assessment
This approach involves the use of an assessment instrument that supports automated or semi-automated collection of data in the assessment of an organizational unit's process capability. An assessment instrument could be used continuously throughout the software development life cycle: at defined milestones to measure adherence to the process; to measure process improvement progress; or to gather data to facilitate a future assessment. When using the continuous assessment approach there may not be an assessment team as there is for a team-based assessment, but there is still a need for an identified qualified assessor to ensure conformance to the requirements for assessment.

4.3.2 Independent assessment
An independent assessment is an assessment conducted by an assessor who is independent of the organizational unit being assessed. An independent assessment may be conducted, for example, by an organization on its own behalf as independent verification that its assessment programme is functioning properly, or by an acquirer who wishes to have an independent assessment output. In general, the sponsor of an independent assessment will be external to the organizational unit being assessed. The degree of independence, however, may vary according to the purpose and circumstances of the assessment. When the independent assessment is conducted for an acquirer, the sponsor is external to the organization being assessed. If the assessment is being conducted by the organization on its own behalf, however, the sponsor will belong to the same organization as the organizational unit being assessed.

4.4 Assessment stages


Process assessment consists of eight stages, as described below:
- reviewing the assessment input;
- selecting the process instances;
- preparing for assessment;
- collecting and verifying information on practices;
- determining the actual ratings for process instances;
- determining derived ratings;
- validating the ratings;
- presenting the assessment output.

4.5 Success factors for process assessment


The following factors are essential to a successful process assessment.

4.5.1 Commitment
Both the sponsor and owner should commit themselves to the objectives established for an assessment to provide the authority to undertake the assessment within an organization. This commitment requires that the necessary resources, time and personnel are available to undertake the assessment. The commitment of the assessment team is fundamentally important to ensuring that the objectives are met.

4.5.2 Motivation
The attitude of the organization's management, and the method by which the information is collected, have a significant influence on the outcome of an assessment. The organization's management, therefore, needs to motivate participants to be open and constructive. An assessment should be focused on the process, not on the organizational unit members implementing the process. The intent is to make the outcome more effective in an effort to support the defined business goals, not to allocate blame to individuals. Providing feedback and maintaining an atmosphere that encourages open discussion about preliminary findings during the assessment helps to ensure that the assessment output is meaningful to the organizational unit. The organization needs to recognize that the participants are a principal source of knowledge and experience about the process and that they are in a good position to identify potential weaknesses.

4.5.3 Confidentiality
Respect for the confidentiality of the sources of information and documentation gathered during assessment is essential in order to secure that information. If discussion techniques are utilized, consideration should be given to ensuring that participants do not feel threatened or have any concerns regarding confidentiality. Some of the information provided might be proprietary to the organization. It is therefore important that adequate controls are in place to handle such information.

4.5.4 Relevance
The organizational unit members should believe that the assessment will result in some benefits that are relevant to their needs.

4.5.5 Credibility
The sponsor, and the management and staff of the organizational unit, must all believe that the assessment will deliver a result which is objective and is representative of the assessment scope. It is important that all parties can be confident that the team selected to conduct the assessment has:
- adequate experience in assessment;
- adequate understanding of the organizational unit and its business;
- sufficient impartiality.

5 Guidance on conducting assessments

The assessment consists of the eight stages shown in figure 2: reviewing the assessment inputs; selecting the process instances; preparing for assessment; collecting and verifying information; determining the actual ratings; determining derived ratings; validating the ratings; and presenting the assessment output.

Figure 2 Assessment stages

5.1 Reviewing the assessment inputs


The assessment input shall be defined prior to an assessment. At a minimum, the assessment input shall define:
- the assessment purpose;
- the assessment scope;
- the assessment constraints;
- the identity of the qualified assessor and any other specific responsibilities for the assessment;
- the definition of any extended processes identified in the assessment scope;
- the identification of any additional information to be collected to support process improvement or process capability determination.
[Software Process Assessment - Part 3: Rating processes, 4.2]

The qualified assessor named in the assessment input shall be a member of the assessment team in a team-based assessment or shall oversee an assessment conducted using a continuous or tool-based approach. The qualified assessor shall ensure that the assessment is conducted in accordance with the requirements of this International Standard.
[Software Process Assessment - Part 3: Rating processes, 4.3]

The qualified assessor should review the defined assessment purpose, scope and constraints to ensure that they are consistent and that the assessment purpose can be fulfilled. The qualified assessor should seek clarification from the sponsor as appropriate.
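
As a purely illustrative aid, the minimum assessment input listed above could be captured in a simple structured record before the assessment starts. The Python sketch below is an assumption about one possible representation; the class and field names are invented for illustration and are not defined by this International Standard.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class AssessmentInput:
        # Minimum assessment input content (field names are illustrative only)
        purpose: str                                   # the assessment purpose
        scope: List[str]                               # organizational unit processes to be assessed
        constraints: List[str]                         # e.g. confidentiality, schedule, instances to include or exclude
        qualified_assessor: str                        # identity of the qualified assessor
        other_responsibilities: Dict[str, str] = field(default_factory=dict)
        extended_process_definitions: List[str] = field(default_factory=list)
        additional_information: List[str] = field(default_factory=list)   # to support improvement or capability determination

    example = AssessmentInput(
        purpose="Process improvement baseline",
        scope=["PRO.5 Manage Quality"],
        constraints=["results confidential to the organizational unit"],
        qualified_assessor="A. N. Assessor",
    )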

5.2 Selecting the process instances


5.2.1 Mapping the organizational unit processes to the process model
The qualified assessor shall ensure that the organizational unit's processes to be assessed, as defined in the assessment scope, are mapped to the corresponding processes in part 2 of this International Standard or are defined as extended processes. [Software Process Assessment - Part 3: Rating processes, 4.3]

In order to provide a consistent basis for assessment, part 2 of this International Standard establishes a process model that is representative of the software process as a whole. However, the processes within the model need not necessarily have a one-to-one mapping with the processes as performed by the organizational unit. The assessment scope details the organizational unit processes to be assessed and the mapping to the process model. If this mapping has not been performed as part of the assessment input then the qualified assessor will have to perform this mapping before it is possible to start to select specific process instances for assessment. This mapping should be agreed with the sponsor. NOTE 1: The mapping performed will also have to be taken into account in order to fully comprehend the assessment output which is in the form of the process model, not necessarily processes as performed by the organizational unit. 5.2.2 Process Instance Selection
The qualified assessor shall ensure that the set of process instances selected for assessment is adequate to meet the assessment purpose and will provide outputs that are representative of the assessment scope. [Software Process Assessment - Part 3: Rating processes, 4.3] The assessment shall include at least one process instance of each process identified in the assessment scope. [Software Process Assessment - Part 3: Rating processes, 4.4.1]

Process assessment determines the capability of processes as defined and implemented by the organizational unit. There may be a number of instances of the implementation of a process. The assessment purpose and scope will have been defined to support either process improvement or process capability determination (or both), and should give clear indication as to the expected depth and coverage that is required. The process instances assessed within the organizational unit will affect the circumstances under which the derived ratings can represent current capability or predict future capability. The larger the sample size of process instances and the more recent the information, the more representative the assessment outputs. NOTE 2: The assessment constraints may require particular process instances to be included or excluded.

5.3 Preparing for a team-based assessment


5.3.1 Selecting and preparing the assessment team
5.3.1.1 Choosing the assessment team size
An assessment team should be appointed. In order to assist in maintaining balanced judgement, it is recommended that the team should consist of at least two members. The optimum size of the assessment team, however, will depend on many factors including:
- the assessment scope;
- the size of the organizational unit involved in the assessment;
- the skills and experience of the available resources;
- the cost/benefit trade-offs.

5.3.1.2 Defining assessment team roles
The qualified assessor should appoint the assessment team leader and assessment team co-ordinator. The assessment team leader will be responsible for the overall conduct of the assessment. The assessment team co-ordinator will be responsible for the assessment logistics and interfacing with the organizational unit. Depending on the needs of the assessment, the qualified assessor may fulfil one or both of the roles described above. An assessment team leader who is not a qualified assessor may need to seek advice and guidance on aspects of the assessment from the nominated qualified assessor, who has overall responsibility for ensuring that the assessment meets the requirements of this International Standard.

The assessment team may also have as many other members as appropriate. All assessment team members should have experience in software engineering, and one or more should have specific experience in the processes under assessment and in the technologies used to support the processes. In choosing the assessment team, the aim is to ensure a suitable level of objectivity and to minimize the risk of misunderstandings. The following should be considered:
- assessment team members should be sensitive to people and able to collect information in a clear and non-threatening way;
  NOTE 3: If any of the assessment team are managers of one or more of the participants then this might be difficult to achieve.
- at least one assessment team member should be from the organizational unit;
- the composition of the assessment team should ensure a balanced set of skills to meet the assessment purpose and scope.

The assessment team members can be drawn from a variety of sources including:
- organizational unit members;
- elsewhere within the organization;
- internal or external experts in specific processes;
- internal or external experts in process assessment;
- customers;
- sponsors.

5.3.1.3 Preparing the assessment team
Prior to the assessment, the qualified assessor should ensure that the assessment team has an understanding of:
- the assessment inputs, purpose, constraints, output and process as described in this part of this International Standard;
- this International Standard, its requirements and relevant guidance.

5.3.2 Planning the assessment
5.3.2.1 Identifying risk factors
The assessment team should identify to the sponsor any significant risk factors that could lead to a failure of the assessment. Factors that should be considered include:
- changes in the commitment of the sponsor;
- unplanned changes to the structure of the assessment team (e.g. the qualified assessor becomes unavailable);
- organization changes;
- implementation or changeover to new standard processes;
- changes in the assessment purpose or scope (e.g. because of some sudden change in the business factors driving the motivation for the assessment);
- resistance or unwillingness to participate by organizational unit members;
- lack of financial or other resources required for the assessment;
- lack of confidentiality, either internal or external to the organization.

5.3.2.2 Selecting the assessment techniques
The assessment team should select the appropriate assessment techniques to suit the assessment purpose and scope, the skills of the assessment team and the level of understanding of the organizational unit being assessed. These techniques may include:
- on-line expert systems;
- interviews;
- individual discussions;
- group discussions;
- closed team sessions;
- documentation reviews;
- feedback sessions.

Feedback sessions should be used as appropriate during the assessment. They can be particularly useful to present preliminary findings.

5.3.2.3 Selecting the assessment instrument
An assessment instrument that conforms to the requirements set out in part 5 of this International Standard shall be used to support the assessment. [Software Process Assessment - Part 3: Rating processes, 4.4.7]

The assessment team should decide which assessment instrument will be used to assist the team in performing the assessment. The following aspects should be considered when defining the requirements for an appropriate assessment instrument:
- the type of assessment instrument required;
- support for security and confidentiality;
- the level and detail of reporting;
- support for rating and analysis.

It is helpful if at least one assessment team member has experience of the particular assessment techniques and assessment instrument to be used. For more guidance on selecting or developing an assessment instrument see part 5 of this International Standard.

5.3.2.4 Developing the assessment plan
The assessment team should develop an assessment plan detailing:
- the assessment inputs;
- the role and responsibilities of all involved in the assessment activity;
- estimates for schedule, costs and resources;
- control mechanisms and checkpoints;
- the interface between the assessment team and the organizational unit;
- outputs expected;
- risk factors to be taken into account and appropriate contingent and preventive actions;
- the logistics, which may include the rooms for discussions, presentations, appropriate audiovisual equipment, word processing facilities, escort requirements and access to facilities.

The assessment team should ensure that the assessment plan is able to meet the assessment purpose. The assessment plan should be formally accepted by the sponsor and owner. The assessment team should endeavour to ensure that the plan is acceptable to the participants and that it is realistic in terms of its impact on existing projects.

5.3.3 Preparing the organizational unit
5.3.3.1 Selecting the organizational unit co-ordinator
An organizational unit may appoint an organizational unit co-ordinator to represent it in the assessment. The organizational unit co-ordinator is responsible for:
- supporting all assessment logistics for the organizational unit;
- interfacing with the assessment team;
- establishing the environment needed for the assessment activities.

5.3.3.2 Briefing of the organizational unit
The organizational unit should be briefed on:
- the assessment purpose, scope and constraints;
- the conduct of the assessment;
- how the assessment outputs can be used to provide the most benefit to the organization;
- what arrangements exist for confidentiality and ownership of the assessment outputs.

The briefing should be performed in co-operation with the assessment team.

5.3.3.3 Selecting the participants
The assessment team needs to capture information on every base practice and generic practice for each process instance to be assessed. The organizational unit should, in co-operation with the assessment team, select the participants that adequately represent the process instances chosen, to ensure that the appropriate expertise will be available to allow for a satisfactory assessment.

5.3.4 Meeting confidentiality agreements
The assessment team should ensure that any discussions held with participants and the use of the assessment outputs are subject to any confidentiality agreement defined in the assessment constraints. The assessment team should ensure that all participants fully understand the confidentiality agreement.

5.3.5 Gathering support documentation and records
The assessment team may need access to support documentation and records (e.g. project plan, progress meeting minutes, deliverable review notes) on the process instances to be examined, either in advance of the assessment or to provide support during the assessment. This may be particularly important for an independent assessment. As part of the organizational unit briefings, the participants involved in the assessment will be aware of the processes to be assessed. Consequently, either they or the organizational unit co-ordinator should ensure that the support documentation and necessary records are made available for review. Much of this material will be local to the organizational unit being assessed but some may be shared with other organizational units or be held centrally within the organization. If necessary the organizational unit co-ordinator should ensure that access to such material is considered in the assessment plan. It helps the progress of the assessment if adequate time is provided for this information to be collated off-line from any ongoing discussions.

5.4 Collecting and verifying information


Although an assessment may be a self-assessment or an independent assessment, the principles behind the involvement of participants are the same; they are a primary source of information to be provided to the assessment team about the process instances being assessed. They may participate in informal, unstructured discussions that allow them to express their professional views about the processes in place, and any issues or problems facing the organization. They may also be involved in providing validation materials to the assessment team.

During an assessment, it is typical to perform a series of information collection and analysis stages, where the scope of the information is refined and more detailed information is collected along the way. For example, the first stage of information collection and analysis may help the assessment team determine which processes should be investigated more thoroughly. Subsequent stages should then collect and analyse more detailed information regarding how individual practices are being applied and how the process purpose is being achieved.

5.4.1 Collecting information
Collecting the information will rely on the assessment instrument and assessment techniques selected. Information has to be collected for each base practice and each generic practice for each process instance to be assessed. Certain base practices or generic practices may be implemented more than once within a single process instance, e.g. multiple reviews. In this case the assessment team must use the selected assessment instrument and their judgement to choose a representative sample.

If, during information gathering, it becomes obvious to the assessment team that a particular process has no implemented generic practices within a particular common feature or capability level, the assessment team may choose not to attempt to gather any additional information for that common feature or capability level. The absence of any capability can often be determined by probing for information either at the capability level generally, or more specifically regarding a particular common feature.

5.4.2 Categories of information
The categories of information that should be collected during an assessment include:
- the degree of adequacy or existence of base practices;
- the adequacy of generic practices;
- the experiences of the participants where they observed problems associated with the current processes used, e.g. ideas for process improvement.

Typically, the assessment instrument is used to collect information for all the categories of information outlined above. Some or all of the detailed information collected will usually be recorded as part of the assessment record to be used to support process improvement or process capability determination. The assessment team should take adequate steps to protect any information collected that may be covered by a confidentiality agreement.

5.4.3 Verifying information
Support documentation and records should be used as appropriate to verify the information collected during an assessment. The amount of support documentation and records examined depends upon the assessment team's knowledge of the organizational unit, the assessment purpose and the level of trust and confidentiality established for the assessment. Due to lack of familiarity with the process being examined, the assessment team in an independent assessment may need to collect more support documentation and records to establish the basis for determining ratings than is needed in a self-assessment.

5.5 Determining the Actual Ratings for Process Instances


A base practice adequacy rating or a base practice existence rating (see 4.4.3.2) shall be determined and validated for every base practice within each selected process instance for each process and/or extended process identified within the assessment scope. [Software Process Assessment - Part 3: Rating processes, 4.4.2.1] A generic practice adequacy rating shall be determined and validated for every generic practice within each selected process instance of each process and/or each extended process identified within the assessment scope. [Software Process Assessment - Part 3: Rating processes, 4.4.2.2]

These ratings, collected for every process instance assessed, are the actual ratings determined from the information collected about the process instance by the assessment team.

5.5.1 Determining the actual ratings for base practices
5.5.1.1 Base practice adequacy and existence
Base practice adequacy shall be rated using the base practice adequacy rating scale defined below:
N - Not adequate: The base practice is either not implemented or does not to any degree contribute to satisfying the process purpose;
P - Partially adequate: The implemented base practice does little to contribute to satisfying the process purpose;
L - Largely adequate: The implemented base practice largely contributes to satisfying the process purpose;
F - Fully adequate: The implemented base practice fully contributes to satisfying the process purpose.
[Software Process Assessment - Part 3: Rating processes, 4.4.3.1]

Base practice existence shall be rated using the base practice existence rating scale defined below:
N - Non-existent: The base practice is either not implemented or does not produce any identifiable work products;
Y - Existent: The implemented base practice produces identifiable work products.
[Software Process Assessment - Part 3: Rating processes, 4.4.3.2]

Base practices may be rated using either the base practice adequacy rating scale or the base practice existence scale. The same scale should be used for all base practices for a given assessment. The base practice ratings are determined for each process instance assessed from the information collected. The base practice ratings do not constitute a part of the process profile but rather are recorded as part of the assessment record. Their purpose is to provide a clear understanding of the extent to which the process is performed. The work product indicators that are defined in the assessment instrument are provided to indicate which points to consider to help to make consistent rating judgements. NOTE 4: There is no implied relationship between base practice adequacy and base practice existence ratings.
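
For recording purposes, the two base practice rating scales could be represented as shown in the minimal Python sketch below. The representation and names are assumptions made for illustration only; they are not defined by this International Standard.

    from enum import Enum

    class BasePracticeAdequacy(Enum):
        # Four-point base practice adequacy scale (part 3, 4.4.3.1)
        N = "Not adequate"
        P = "Partially adequate"
        L = "Largely adequate"
        F = "Fully adequate"

    class BasePracticeExistence(Enum):
        # Two-point base practice existence scale (part 3, 4.4.3.2)
        N = "Non-existent"
        Y = "Existent"

    # Only one of the two scales should be used for all base practices in a given assessment.
    rating = BasePracticeAdequacy.L
    print(rating.name, "-", rating.value)   # L - Largely adequate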

5.5.1.2 Base practice rating reference


A unique reference shall be generated for each base practice rating that includes the process category, the process within the process category, the base practice of the process, and a process instance reference. [Software Process Assessment - Part 3: Rating processes, 4.4.5.1]

In the examples that follow, the rating reference is of the form:
PC.PR.BP[instance reference]
where:
- PC is a process category;
- PR is a process within that process category;
- BP is a base practice of the process;
- [instance reference] is either the number of process instances in the rating or a complete list of the process instance references.

For example, for a single instance of process PRO.5 (Manage Quality) which we have given a process instance reference of 'A', then if base practice adequacy ratings are determined for each base practice we might have:
PRO.5.1[A] = L
PRO.5.2[A] = F
PRO.5.3[A] = F
PRO.5.4[A] = P
PRO.5.5[A] = F
PRO.5.6[A] = P
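
The references above follow the PC.PR.BP[instance reference] pattern. A small illustrative parser, assumed for this sketch rather than prescribed by the standard, shows how such references could be decomposed for the assessment record:

    import re

    # Pattern for a base practice rating reference such as "PRO.5.4[A]" or "PRO.5.4[A,B,C]".
    # The regular expression and field names are assumptions made for this sketch.
    REFERENCE = re.compile(
        r"^(?P<category>[A-Z]+)\.(?P<process>\d+)\.(?P<base_practice>\d+)\[(?P<instances>[^\]]+)\]$"
    )

    def parse_reference(ref: str) -> dict:
        match = REFERENCE.match(ref)
        if match is None:
            raise ValueError(f"not a base practice rating reference: {ref}")
        parts = match.groupdict()
        parts["instances"] = [i.strip() for i in parts["instances"].split(",")]
        return parts

    print(parse_reference("PRO.5.4[A]"))
    # {'category': 'PRO', 'process': '5', 'base_practice': '4', 'instances': ['A']}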

5.5.2 Determining the actual ratings for generic practices
5.5.2.1 Generic practice adequacy
Generic practice adequacy shall be rated using the generic practice adequacy rating scale defined below:
N - Not adequate: The generic practice is either not implemented or does not to any degree satisfy its purpose;
P - Partially adequate: The implemented generic practice does little to satisfy its purpose;
L - Largely adequate: The implemented generic practice largely satisfies its purpose;
F - Fully adequate: The implemented generic practice fully satisfies its purpose.
[Software Process Assessment - Part 3: Rating processes, 4.4.3.3]

The generic practice adequacy ratings are determined for each process instance assessed from the information collected.

Generic practice 1.1.1 (Perform the process) is concerned with the performed process fulfilling its process purpose as described in part 2 of this International Standard. The ratings for the individual base practices provide an indication of the extent to which the practices contribute to meeting the process purpose. The overall rating for generic practice 1.1.1, however, is a judgement of the extent to which the base practices, and possibly other practices that may be required for a given process context, perform together to achieve the overall process purpose. It is therefore possible, even though the ratings for each individual base practice are fully adequate or existent, that the process is not satisfying its process purpose. This may be the result of an inability of the base practices to operate effectively as a whole, or of key base practices not included in the process model being required to achieve the process purpose in a particular context.

All of the other generic practices fulfil purposes that provide a process management infrastructure to support the performance of the base practices. The purpose of some of these generic practices is the process purpose of an associated process within the process model. For example, the purpose of generic practice 3.2.2 is the same as that of its associated process SUP.5, "Perform Peer Reviews". The process management indicators that are included in the assessment instrument (see part 5 of this International Standard) are provided to indicate points to consider in making consistent judgements for the generic practices.

5.5.2.2 Generic practice rating reference
A unique reference shall be generated for each generic practice rating that includes the process category, the process within that process category, the capability level, the common feature within that capability level, the generic practice within that common feature, and a process instance reference. [Software Process Assessment - Part 3: Rating processes, 4.4.5.2]

In the examples that follow, the rating reference is of the form:
PC.PR[instance reference];CL.CF.GP
where:
- PC is a process category;
- PR is a process within that process category;
- [instance reference] is either the number of process instances in the rating or a complete list of the process instance references;
- CL is a capability level;
- CF is a common feature within that capability level;
- GP is a generic practice within that common feature.

For example, for a single instance of process PRO.5 ('Manage Quality') which we have given a process instance reference of 'A', then if a rating is determined for each generic practice at capability level 3, the well-defined level, we might have:

PRO.5[A];3.1.1 = L
PRO.5[A];3.1.2 = P
PRO.5[A];3.2.1 = L
PRO.5[A];3.2.2 = N
PRO.5[A];3.2.3 = P

5.5.2.3 Process capability level


An actual process capability level rating shall be determined for each process instance assessed by aggregating the generic practice adequacy ratings within each capability level. For each process instance, the actual process capability level ratings shall describe, for each capability level, the proportion of generic practices that were rated at each point on the generic practice adequacy scale in a clear and unambiguous way. [Software Process Assessment - Part 3: Rating processes, 4.4.2.3] Equal weighting shall be applied to each generic practice adequacy rating when aggregating or deriving ratings. [Software Process Assessment - Part 3: Rating processes, 4.4.4]

This actual process capability level rating can be represented in many different ways; within this part of this International Standard the following vector representation will be used:
[% Fully, % Largely, % Partially, % Not Adequate]

Within process instance 'A' of process PRO.5, a rating can be determined for each capability level. The process capability level rating is determined by aggregating all the generic practice ratings for the generic practices associated with that capability level. From the above example it can be seen that none of the generic practices were fully adequate (0%), 2 of the 5 generic practices were largely adequate (40%), 2 of the 5 were partially adequate (40%) and 1 of the 5 was not adequate (20%). These generic practice adequacy ratings can be represented as a vector in the form:
PRO.5[A];3 = [0,40,40,20]

The process instance is represented by the five process capability level ratings determined for that process instance. For example, for the process instance 'A' of process PRO.5 we might have:
PRO.5[A];1 = [0,100,0,0]
PRO.5[A];2 = [50,25,17,8]
PRO.5[A];3 = [0,40,40,20]
PRO.5[A];4 = [0,0,33,67]
PRO.5[A];5 = [0,0,20,80]

It should be noted that capability level 1 is represented by the single generic practice 1.1.1. Hence the actual rating will always be 100% of one of fully, largely, partially or not adequate, and 0% for the other three.
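
The aggregation described above can be expressed as a short calculation. The sketch below is an illustration rather than a normative algorithm; it reproduces the capability level 3 vector for process instance PRO.5[A] from its five generic practice adequacy ratings, applying equal weighting.

    from collections import Counter

    def capability_level_vector(gp_ratings):
        # Aggregate generic practice adequacy ratings (equal weighting) into
        # the vector [% Fully, % Largely, % Partially, % Not adequate].
        counts = Counter(gp_ratings)
        total = len(gp_ratings)
        return [round(100 * counts[grade] / total) for grade in ("F", "L", "P", "N")]

    # Level 3 ratings for PRO.5[A] from 5.5.2.1: 3.1.1=L, 3.1.2=P, 3.2.1=L, 3.2.2=N, 3.2.3=P
    print(capability_level_vector(["L", "P", "L", "N", "P"]))   # [0, 40, 40, 20]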

5.6 Determining derived ratings


Equal weighting shall be applied to each generic practice adequacy rating when aggregating or deriving ratings.

[Software Process Assessment - Part 3: Rating processes, 4.4.4]

From the actual ratings, derived ratings may be determined which can help to gain further insight into the processes within the organizational unit as a whole. Derived ratings are based on sampling and are therefore subject to all the restrictions that apply to sampled ratings. The assessment team should decide which of the following derived ratings are useful in helping to ensure that the assessment purpose can best be fulfilled. Since any derived ratings are based on an aggregation of actual ratings for process instances, the assessment team has to ensure that traceability is provided from the derived ratings to the actual ratings.

5.6.1 Aggregation between process instances
The ratings for generic practices and process capability levels for two or more process instances may be aggregated to determine a derived rating for the process.

5.6.1.1 Generic practice adequacy
We may aggregate actual generic practice adequacy ratings between two or more process instances of a specific process to derive an aggregated rating for the generic practice. For example, if generic practice adequacy is rated for generic practice 3.1.1 of process instances 'A', 'B' and 'C' of process PRO.5 we might have:
PRO.5[A];3.1.1 = L
PRO.5[B];3.1.1 = P
PRO.5[C];3.1.1 = L

We can aggregate these three generic practice adequacy ratings to calculate a derived generic practice adequacy rating for the generic practice across the process instances:
PRO.5[A,B,C];3.1.1 = [0,67,33,0]

From this aggregated rating we can see that the derived rating consists of three process instances (A, B and C); for the sample taken of generic practice 3.1.1, it was largely adequate 67% of the time and partially adequate 33% of the time.

Optionally, rather than listing the process instance references for each process instance, if the references are not required for future analysis purposes then the number of process instances may be substituted and would be represented as:
PRO.5;3.1.1 = L,P,L or PRO.5[3];3.1.1 = [0,67,33,0]
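
The same equal-weighted counting yields the derived generic practice adequacy rating shown above. A brief sketch, for illustration only:

    from collections import Counter

    def derived_gp_adequacy(actual_ratings):
        # Aggregate the actual adequacy ratings of one generic practice over several
        # process instances into [% Fully, % Largely, % Partially, % Not adequate].
        counts = Counter(actual_ratings)
        return [round(100 * counts[g] / len(actual_ratings)) for g in ("F", "L", "P", "N")]

    # PRO.5[A];3.1.1 = L, PRO.5[B];3.1.1 = P, PRO.5[C];3.1.1 = L
    print(derived_gp_adequacy(["L", "P", "L"]))   # [0, 67, 33, 0]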

5.6.1.2 Process capability level rating


A set of derived process capability level ratings shall be determined for each process identified in the assessment scope by aggregating the actual process capability ratings of the process instances. These derived ratings shall be sufficiently representative of the process capability levels of each process assessed to satisfy the assessment purpose. For each process identified in the assessment scope, the derived process capability level ratings shall describe, for each capability level, the proportion of generic practices that were rated at each point on the generic practice adequacy scale in a clear and unambiguous way. [Software Process Assessment - Part 3: Rating processes, 4.4.2.3]

By aggregating process capability level ratings between a sample of process instances of a specific process within an organizational unit, derived ratings for the capability levels are obtained for the process. For example, if the 'Well-Defined' capability level (level 3) is rated for process instances 'A', 'B' and 'C' of process PRO.5 then we might have:
PRO.5[A];3 = [0,40,40,20]
PRO.5[B];3 = [0,20,40,40]
PRO.5[C];3 = [0,20,20,60]

We can aggregate these three process capability level ratings to calculate an aggregated process capability level rating:
PRO.5[A,B,C];3 = [0,27,33,40] or PRO.5[3];3 = [0,27,33,40]

From this aggregated rating we can see that the derived process capability level rating consists of three instances of process PRO.5 (A, B and C); within the sample taken of capability level 3 of process PRO.5, 27% of the generic practices were largely adequate, 33% of the generic practices were partially adequate and 40% of the generic practices were not adequate.
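
One way to compute this derived rating, assuming each process instance contributes the same number of generic practices (so that equal weighting per generic practice reduces to averaging the instance vectors), is sketched below for illustration:

    def derived_capability_level(instance_vectors):
        # Average the actual capability level vectors [%F, %L, %P, %N] of the
        # process instances to obtain the derived rating for the process.
        n = len(instance_vectors)
        return [round(sum(v[i] for v in instance_vectors) / n) for i in range(4)]

    vectors = [
        [0, 40, 40, 20],   # PRO.5[A];3
        [0, 20, 40, 40],   # PRO.5[B];3
        [0, 20, 20, 60],   # PRO.5[C];3
    ]
    print(derived_capability_level(vectors))   # [0, 27, 33, 40]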

5.6.2 Aggregation across processes
We may aggregate any generic practice ratings between process instances across different processes. For example, for process category PRO, the 'Project Process Category', and for generic practice 3.1.1 we may have:
PRO.1[A,B,C];3.1.1 = [0,33,33,33]
PRO.2[A,B,C];3.1.1 = [0,33,67,0]
PRO.3[A,B,C];3.1.1 = [33,33,33,0]
PRO.4[A,B,C];3.1.1 = [0,0,67,33]
PRO.5[A,B,C];3.1.1 = [0,67,33,0]
PRO.6[A,B,C];3.1.1 = [0,33,33,33]
PRO.7[A,B,C];3.1.1 = [0,67,33,0]
PRO.8[A,B,C];3.1.1 = [33,33,33,0]

We can aggregate these eight generic practice adequacy ratings to calculate an aggregated generic practice adequacy rating:
PRO[A,B,C];3.1.1 = [8,38,42,12]

From this rating for generic practice 3.1.1 of process instances 'A', 'B' and 'C' for all processes within process category PRO we can see that 8% of the generic practices were fully adequate, 38% of the generic practices were largely adequate, 42% of the generic practices were partially adequate and 12% of the generic practices were not adequate. This mechanism may be used to infer ratings for the generic practices within a specific capability level of a group of processes where a derived rating from a subset of those processes suggests that the implementation of that capability level is identical across the entire group e.g. because there is an organization-wide measurement programme.
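
For illustration, the same pooling can be reproduced from individual ratings. The per-instance ratings below are inferred from the vectors quoted above (e.g. [0,33,33,33] over three instances corresponds to one L, one P and one N); they are assumptions made only so that the example can be executed.

    from collections import Counter

    # Inferred ratings of generic practice 3.1.1 for instances A, B, C of each PRO process.
    ratings_by_process = {
        "PRO.1": ["L", "P", "N"],
        "PRO.2": ["L", "P", "P"],
        "PRO.3": ["F", "L", "P"],
        "PRO.4": ["P", "P", "N"],
        "PRO.5": ["L", "L", "P"],
        "PRO.6": ["L", "P", "N"],
        "PRO.7": ["L", "L", "P"],
        "PRO.8": ["F", "L", "P"],
    }
    pooled = [r for ratings in ratings_by_process.values() for r in ratings]
    counts = Counter(pooled)
    vector = [round(100 * counts[g] / len(pooled)) for g in ("F", "L", "P", "N")]
    print(vector)   # [8, 38, 42, 12], matching PRO[A,B,C];3.1.1 above to the nearest percent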

5.7 Validating the ratings


A base practice adequacy rating or a base practice existence rating (see 4.4.3.2) shall be determined and validated for every base practice within each selected process instance for each process and/or extended process identified within the assessment scope. [Software Process Assessment - Part 3: Rating processes, 4.4.2.1] A generic practice adequacy rating shall be determined and validated for every generic practice within each selected process instance of each process and/or each extended process identified within the assessment scope. [Software Process Assessment - Part 3: Rating processes, 4.4.2.2]

The ratings should be validated to ensure that they are an accurate representation of the processes assessed. The validation should include assessing whether the sample size chosen is representative of the processes assessed and that it is capable of fulfilling the assessment purpose. The following mechanisms are useful in supporting validation:
- comparing results to those from previous assessments for the same organizational unit;
- looking for consistencies between connected or related processes;
- looking for proportional ratings across the capability levels, e.g. higher ratings for higher levels than for lower ones;
- taking an independent sample of ratings and comparing them to the assessment team ratings;
- feedback sessions of preliminary findings to the organizational unit.

5.8 Presenting the assessment output


5.8.1 Preparing the assessment output
Having determined the base practice, generic practice and process capability level ratings, the assessment outputs need to be prepared.
The qualified assessor shall ensure that all of the information required in the assessment output is recorded in a suitable format to fulfil the assessment purpose and that it meets the requirements of this International Standard.
[Software Process Assessment - Part 3: Rating processes, 4.3]

The process profile
The ratings for the assessed process instances within the assessment scope shall be recorded as the process profile consisting of:
- the actual generic practice ratings and process capability level ratings for each process instance;
- derived generic practice ratings and process capability level ratings for each process within the scope of the assessment.
[Software Process Assessment - Part 3: Rating processes, 4.5.1]

The assessment record
Any other information which is pertinent to the assessment and which may be helpful in understanding the output of the assessment shall be compiled and recorded as the assessment record. At a minimum, the assessment record shall contain:
- the assessment input;
- the assessment approach that was used;
- the assessment instrument used;
- the base practice ratings for each process instance assessed;
- the date of the assessment;
- the names of the team who conducted the assessment;
- any additional information collected during the assessment that was identified in the assessment input to support process improvement or process capability determination;
- any assessment assumptions and limitations.
[Software Process Assessment - Part 3: Rating processes, 4.5.2]

The assessment output will normally be used as a basis for developing an agreed improvement plan or determining capability and associated risk as appropriate. The guidance on how to perform this is provided in Part 7 and Part 8 of this International Standard.

5.8.2 Reporting the assessment output


In some circumstances it may be desirable to compare the outputs of the assessment of two or more organizational units, or for the same organizational unit at different times. Comparisons of assessment outputs shall be valid only if their process contexts are similar. [Software Process Assessment - Part 3: Rating processes, 4.4.6]

The presentation of the assessment output might be a simple presentation for an internal assessment, or a detailed report for an independent assessment. In addition, other findings and proposed action plans may be prepared for presentation, depending upon the assessment purpose and whether this additional analysis is performed at the same time as the assessment. The ratings determined for generic practice adequacy provide the input for generation of the process capability level ratings. The use of summary ratings, for example for a process category, in addition to the detailed findings may help in understanding the findings. In some circumstances it may be useful to assign a weighting to the four points on the generic practice adequacy scale, e.g. 100% for F, 75% for L, 25% for P and 0% for N. Any derived rating may then be represented as a single value rather than as a vector. This may assist with presentation of ratings at a summary level. These ratings should be used for summary purposes only and should not be used for comparison instead of the generic practice adequacy ratings. The presentation of the ratings may be in absolute terms (numbers, absolute scales) or relative terms (since last time, compared to benchmarks, compared to contract requirements, compared to business needs).
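
As a sketch of the optional weighted summary described above (the weights are the example values from the text; the function name and representation are assumptions, not defined by the standard):

    def summary_value(vector, weights=(1.00, 0.75, 0.25, 0.00)):
        # Collapse a [%F, %L, %P, %N] vector into a single percentage using the
        # example weighting F=100%, L=75%, P=25%, N=0%. For summary use only.
        return sum(p * w for p, w in zip(vector, weights))

    # Derived level 3 rating for PRO.5 from 5.6.1.2: [0, 27, 33, 40]
    print(summary_value([0, 27, 33, 40]))   # 28.5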
