Safety Integrity Level (SIL)
Safety Integrity Level (SIL) is defined as a relative level of risk reduction provided by a safety function, or as a target level of risk reduction. In simple terms, SIL is a measurement of the performance required of a Safety Instrumented Function (SIF). The requirements for a given SIL are not consistent among all of the functional safety standards. In the European Functional Safety standards based on the IEC 61508 standard, four SILs are defined, with SIL 4 being the most dependable and SIL 1 being the least. A SIL is determined based on a number of quantitative factors in combination with qualitative factors such as the development process and safety life cycle management.
SIL Assignment
There are several methods used to assign a SIL. These are normally used in combination, and may include:

- Risk Matrices
- Risk Graphs
- Layers Of Protection Analysis (LOPA)
The assignment may be tested using both pragmatic and controllability approaches, applying guidance on SIL assignment published by the UK HSE. SIL assignment processes that use the HSE guidance to ratify assignments developed from Risk Matrices have been certified to meet IEC EN 61508.
Misunderstandings of the SIL concept lead to erroneous statements such as, "This system is a SIL N system because the process adopted during its development was the standard process for the development of a SIL N system", or to use of the SIL concept out of context, such as "This is a SIL 3 heat exchanger" or "This software is SIL 2". According to IEC 61508, the SIL concept must be related to the dangerous failure rate of a system, not just its failure rate or the failure rate of a component part, such as the software. Definition of the dangerous failure modes by safety analysis is intrinsic to the proper determination of the failure rate. SIL applies to electrical controls only and does not relate directly to the Cat (category) architecture in EN 62061. It appears to be a precursor to the PL (Performance Level) ratings that are now the new requirements encompassing hydraulic and pneumatic valves.
The actual targets required vary depending on the likelihood of a demand, the complexity of the device(s), and the types of redundancy used. The PFD (Probability of Failure on Demand) and RRF (Risk Reduction Factor) values for low-demand operation for the different SILs, as defined in IEC EN 61508, are as follows:
SIL    PFD                    PFD (power)        RRF
1      0.1 - 0.01             10^-1 - 10^-2      10 - 100
2      0.01 - 0.001           10^-2 - 10^-3      100 - 1,000
3      0.001 - 0.0001         10^-3 - 10^-4      1,000 - 10,000
4      0.0001 - 0.00001       10^-4 - 10^-5      10,000 - 100,000
For continuous operation, these change to the following, expressed as PFH (Probability of Failure per Hour):
SIL    PFH                          PFH (power)        RRF
1      0.00001 - 0.000001           10^-5 - 10^-6      100,000 - 1,000,000
2      0.000001 - 0.0000001         10^-6 - 10^-7      1,000,000 - 10,000,000
3      0.0000001 - 0.00000001       10^-7 - 10^-8      10,000,000 - 100,000,000
4      0.00000001 - 0.000000001     10^-8 - 10^-9      100,000,000 - 1,000,000,000
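The two tables collapse into a simple look-up on the order of magnitude of the failure measure. A minimal sketch (band boundaries are treated as half-open intervals, a convention the tables themselves leave ambiguous at the edges):

```python
import math

# Sketch of the SIL band look-up implied by the two tables above,
# for low-demand (PFDavg) and continuous (PFH) modes per IEC 61508.

def sil_low_demand(pfd_avg):
    """SIL band for average probability of failure on demand.
    Returns None if the value falls outside the SIL 1-4 bands."""
    if not (1e-5 <= pfd_avg < 1e-1):
        return None
    return -int(math.floor(math.log10(pfd_avg))) - 1

def sil_continuous(pfh):
    """SIL band for probability of dangerous failure per hour."""
    if not (1e-9 <= pfh < 1e-5):
        return None
    return -int(math.floor(math.log10(pfh))) - 5

print(sil_low_demand(0.005))   # 5e-3 lies in 1e-3..1e-2  -> 2
print(sil_continuous(3e-8))    # 3e-8 lies in 1e-8..1e-7  -> 3
```

Note that the Risk Reduction Factor column is simply the reciprocal of the PFD band, which is why the RRF ranges run in the opposite direction to the PFD ranges.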
Hazards of a control system must be identified and then analysed through risk analysis. Mitigation of these risks continues until their overall contribution to the hazard is considered acceptable. The tolerable level of these risks is specified as a safety requirement in the form of a target 'probability of a dangerous failure' in a given period of time, stated as a discrete SIL. Certification schemes are used to establish whether a device meets a particular SIL. The requirements of these schemes can be met either by establishing a rigorous development process, or by establishing that the device has sufficient operating history to argue that it has been proven in use. Electric and electronic devices can be certified for use in functional safety applications according to IEC 61508, providing application developers the evidence required to demonstrate that the application including the device is also compliant. IEC 61511 is an application-specific adaptation of IEC 61508 for the process industry sector. This standard is used in the petrochemical and hazardous chemical industries, among others.
- ANSI/ISA S84 (Functional safety of safety instrumented systems for the process industry sector)
- IEC EN 61508 (Functional safety of electrical/electronic/programmable electronic safety related systems)
- IEC 61511 (Safety instrumented systems for the process industry sector)
- IEC 62061 (Safety of machinery)
- EN 50128 (Railway applications - Software for railway control and protection)
- EN 50129 (Railway applications - Safety related electronic systems for signalling)
- EN 50402 (Fixed gas detection systems)
- MISRA, various (Guidelines for safety analysis, modelling, and programming in automotive applications)
- Defence Standard 00-56 Issue 2 - accident consequence
The use of a SIL in specific safety standards may apply different number sequences or definitions to those in IEC EN 61508.
1. Introduction
IEC 61508 (ref. [1]) is a generic standard for the functional safety of electrical/electronic/programmable electronic safety-related systems. Right at the start, in Part 1, 1.1, it states that "a major objective of this standard is to facilitate the development of application sector international standards by the technical committees responsible for the application sector". In 1.2(j) it claims that it "provides general requirements for E/E/PE safety-related systems where no application sector standards exist". For railway applications, the European Committee for Electrotechnical Standardization, CENELEC, has produced a number of standards (and pre-standards) that address the functional safety of railway applications. To the extent that electrical, electronic or programmable electronic systems are involved, the CENELEC standards can be regarded as the "application sector standards" referred to in IEC 61508. This is the case for the family of standards EN 50126 "The specification and demonstration of Reliability, Availability, Maintainability and Safety (RAMS)", prEN 50129 "Safety related electronic systems for signalling" and EN 50128 "Software for railway control and protection systems" (ref. [2] to [4]). For completeness, EN 50159 parts 1 and 2 "Communication, signalling and processing systems" (ref. [5] and [6]) should be mentioned, but since they relate very closely to the three previously mentioned standards, they will not be discussed in further detail here. IEC 61508 Part 1, 4.1, stipulates that "to conform to this standard it shall be demonstrated that the requirements have been satisfied to the required criteria...".
Now the foregoing statement applies to the E/E/PE system that is supposed to conform to the standard, but it could equally well be applied to the derived application specific standards. In most cases, such standards simply claim compliance with IEC 61508, usually by making it a normative reference, but there is no well defined process for actually demonstrating compliance of a derived standard with IEC 61508. So what do we do if inconsistencies or contradictions are discovered? The simplest solution is to decide which standard shall have preference in case of conflict. This is not usually stated explicitly, but general practice is to give the application specific standard priority because it is better focused to the needs and problems of that particular application. However, it would be better to actually demonstrate consistency between IEC 61508 and application specific "derivatives". Not only would this facilitate removing inconsistencies (and the expensive discussions they generate), it would also contribute to the "high level of consistency ... both within application sectors and across application sectors" that IEC 61508 aims at.
EN 50126 then goes on to define a management process based on a system life cycle that has a total of 14 phases, from Concept to De-commissioning and Disposal, with detailed descriptions of the objectives, inputs, requirements, deliverables and verification of each phase. Finally, it has annexes giving an outline of a RAMS specification, an example of a RAMS programme, examples of railway parameters, examples of risk acceptance principles and a guideline for responsibilities within the RAMS process. All of the annexes are "informative", i.e. they are not of a normative character, so compliance with them does not have to be demonstrated.
2.2 prEN 50129 As previously mentioned, prEN 50129 is the standard that explicitly claims to be "the sector specific interpretation of IEC 61508", which is possibly a reason why it, at the
time of writing, has not yet been finally adopted! It defines "conditions that shall be satisfied in order that a safety-related electronic railway system/subsystem/equipment can be accepted...", requiring a "structured safety justification document, known as the Safety Case". In the safety case, evidence of quality management, evidence of safety management and evidence of functional and technical safety shall be documented. In Annex A, which is normative, it "defines the interpretation and use of Safety Integrity Levels" in terms of Tolerable Hazard Rates (THR). Annex B, which is also normative, contains detailed technical requirements for assurance of correct functional operation, effects of faults, operation with external influences, safety related application conditions and safety qualification tests. Annexes C, D and E are each informative and contain procedures for identifying failure modes of hardware, supplementary technical information and techniques, and measures for avoiding or controlling faults.
2.3 EN 50128 This is the standard that handles the software specific aspects that are relevant for the two previously mentioned standards. It states that it "concentrates on the methods which need to be used in order to provide software which meets the demands for safety integrity which are placed upon it by these wider considerations".
The standard describes software safety integrity levels and identifies requirements for personnel and their responsibilities, lifecycle issues and documentation. It gives detailed descriptions of objectives, input documents, output documents and requirements for software requirements specification, architecture, design and implementation, verification and testing as well as software/hardware integration, software validation, quality assurance and maintenance. It also addresses the concept of software configured by application data (e.g. "table driven software"). In annex A, which is normative, it provides criteria for the selection of techniques and measures, depending on the software safety integrity level. In Annex B, which is informative, it gives descriptions and bibliography of most of the techniques identified in annex A.
IEC 61508 is broken up into seven parts, viz.:

1. General requirements
2. Requirements for electrical/electronic/programmable electronic safety-related systems
3. Software requirements
4. Definitions and abbreviations
5. Examples of methods for the determination of safety integrity levels
6. Guideline on the application of parts 2 and 3
7. Overview of techniques and measures

Now the last four parts are evidently supplementary information that is intended to facilitate interpretation and application of the first three parts, so for our purposes we can concentrate on those first three parts. A brief look at the descriptions given in section 2 of this text reveals that the main CENELEC railway application standards follow a similar approach to IEC 61508, with EN 50126 "corresponding" to Part 1, prEN 50129 "corresponding" to Part 2 and EN 50128 "corresponding" to Part 3.

The first difference is also immediately visible: whereas the various parts of IEC 61508 have mutual definitions (in Part 4), each of the CENELEC standards has its own set of definitions. And they are not consistent with each other (see next section)!

A comparison of EN 50126 with IEC 61508 Part 1 reveals that EN 50126 uses a different life cycle model, which of course results in somewhat differing activities. The basic structure is, however, quite similar. Both standards require a systematic and detailed process that starts with a concept and leads to a well defined set of requirements, including clearly defined safety requirements. From there, a process for realisation is described, followed by requirements for operation, maintenance, possible modification or retrofit, and finally safe decommissioning and disposal. The main differences between the two standards lie in the description of the realisation process, but they are not of such a grave nature that one cannot claim that EN 50126 is substantially compliant with IEC 61508 Part 1.
For prEN 50129 the comparison with IEC 61508 is slightly more complicated. IEC 61508 Part 2 makes extensive reference to the general requirements in Part 1, so both parts must be taken into account when examining compliance between prEN 50129 and IEC 61508. Since prEN 50129 is based on the same development model that was defined in EN 50126, there are of course similar differences with respect to IEC 61508. It should be noted that the approach in prEN 50129, Annex A, where safety integrity levels are associated with Tolerable Hazard Rates, is compliant with IEC 61508 Part 5. EN 50128 states that it "owes much of its direction to earlier work ... which is now part of IEC 61508", so it is not surprising that there is a large degree of similarity between EN 50128 and IEC 61508, Part 3. As with Part 2, Part 3 also makes extensive reference to the general requirements in Part 1, so here too, Part 1 must also be considered. One immediately visible difference is that EN 50128 explicitly describes software safety integrity levels, whereas IEC 61508 addresses safety integrity levels for the Equipment Under Control ("EUC"). The EUC safety integrity
levels will be determined by both soft- and hardware, so EN 50128 is more specific here. But then, that's what one would expect from a sector specific interpretation of IEC 61508. It should be noted that the foregoing paragraphs describe a fairly superficial comparison of the main CENELEC railway application standards with IEC 61508, and a more rigorous examination could well reveal considerable differences. Nevertheless, the superficial scrutiny does reveal a degree of compliance that goes well beyond simply making IEC 61508 a normative reference, so there is good reason to believe that the CENELEC railway application standards are substantially compliant with IEC 61508.
However, that is not the only problem with the life cycle model. The next two phases are "Apportionment of system requirements" followed by "Design and implementation". Now apportionment of requirements assumes that there is something to apportion them to, so some kind of structure is necessary. This can either be a modular structure of the system, or simply the decision to use a combination of software, hardware and possibly administrative procedures. That, however, is a design decision, and according to the life cycle model design should begin afterwards! So clearly, practitioners are forced to be "creative" and assume that "Design and implementation" means "More detailed design...". The phase following Design and implementation is "Manufacturing". Where the difference between "Implementation" and "Manufacturing" lies is unclear. Certainly, breaking down a preliminary design into a more detailed design is a form of implementation, but actually producing (manufacturing) the bits and pieces that are
supposed to perform the actual tasks is also part of implementation. Even the next phase, "Installation", can be regarded as part of the implementation process, and indeed, in real life projects these three phases are often treated in one big lump! This makes demonstrating compliance of a real life project with the standards at least complicated. Not impossible, but expensive.

The next two phases are "System validation (including safety acceptance and commissioning)" and "System acceptance", followed by "Operation and maintenance". This sequence, too, is almost wishful thinking. Certainly for large railway applications, it is virtually impossible to perform any kind of system validation without first commissioning the system and performing some kind of "restricted operation". This is also the way things often are done, but once again, this makes documenting compliance with the standards more difficult.

The remaining phases of the life cycle model are "Operation and maintenance", "Performance monitoring" and "Modification and retrofit", which are to be regarded as parallel processes, and the last phase of them all, "Decommissioning and disposal". For these phases, documenting compliance with the standards must be a rather theoretical exercise, because the time scale for the three parallel phases of operation, performance monitoring and modification can easily be thirty or more years. And nobody knows what environmental or safety requirements will be applicable a generation later, so whatever was planned and documented when the system was commissioned may be thoroughly out of date when the system is going to be decommissioned. This is not reflected in the requirements that the standards pose for these phases.
4.2 Legacy systems The CENELEC standards have a very clear focus on the development of new systems. For already existing systems, the standards presume that documentation according to the standards, i.e. a full safety case, is available. Then, any upgrade of an existing system is simply a case of "Modification and retrofit" and covered by the corresponding requirements in the standard.
Unfortunately, the requirements in the standard for modification and retrofit are not particularly detailed. In fact, their quintessence is "update the safety case appropriately". If there never was a safety case (because the system was developed and commissioned long before the standards were adopted), there's nothing to update, so we have to create an appropriate safety case. But following the life cycle model of the standard is no longer feasible. The initial phases of the development model as defined in the standards are not relevant: the requirements are dictated by the already existing parts of the system, both those parts to be replaced (or upgraded) and those to be left unchanged. The realisation phases can probably be conducted along the lines of the standard, although this might well mean that the manufacturer must adapt well proven procedures and routines to what the standards demand. This is admittedly a one-time exercise, but it can be expensive, and the reluctance of large, successful organisations to modify their well proven procedures is perfectly understandable.
And finally, the approval process will certainly need to be adapted in order to be practicable for upgraded legacy systems.
4.3 Terminology It was pointed out earlier that the three CENELEC standards each have their own set of definitions, and these definitions are not always consistent. Take for example the terms verification and validation.

EN 50126 defines:

o Validation = "Confirmation by examination and provision of objective evidence that the particular requirements for a specific intended use have been fulfilled"
o Verification = "Confirmation by examination and provision of objective evidence that the specified requirements have been fulfilled"

EN 50128 defines:

o Validation = "activity of demonstration, by test and analysis, that the product meets in all respects its specified requirements"
o Verification = "activity of determination, by analysis or test, that the output of each phase of the life-cycle fulfils the requirements of the previous phase"
Now a definition is really just a special case of a specification, and like any good specification it should only say "what", but not "how", "when", "why" etc. So in order to extract the actual contents of the above definitions, we remove the superfluous frills, which makes the same definitions become:

EN 50126:

o Validation = "Confirmation... that the... requirements for a specific... use have been fulfilled"
o Verification = "Confirmation... that the specified requirements have been fulfilled"
EN 50128:

o Validation = "...demonstration... that the product meets... its specified requirements"
o Verification = "...determination... that the output of each phase... fulfils the requirements ..."
Now here we see that the EN 50126 definitions of verification and validation are effectively the same. It should be noted that the definitions in EN 50126 are identical to the definitions in IEC 61508. More interestingly, the definition of verification in EN 50126 is essentially the same as the definition of validation in EN 50128 and vice versa! These two standards have
both been adopted and are applicable, so the confusion is "official". (A more detailed discussion of this subject can be found in ref. [7]). The term "error" is another example. Whilst EN 50126 doesn't define the word at all, both EN 50128 and prEN 50129 use the following definition:
o Error = "a deviation from the intended design which could result in unintended system behaviour or failure"
Based on that definition, there can be no such thing as "human error" or "operational error", not to mention the "design errors" that annex A of the standard mentions! It should be noted here that IEC 61508 uses a different definition. Only prEN 50129 defines the term design:
o Design = "the activity applied in order to analyse and transform specified requirements into acceptable design solutions which have the required safety integrity"
Now applying the same method we just used for the terms validation and verification, we remove the superfluous frills from this definition and get
o Design = "the activity... to... transform... requirements into... design solutions..."
So design is a design activity! The preceding examples show that the definitions in the standards (including IEC 61508!) are inconsistent, incomplete and sometimes even downright wrong. In addition, the way the terms are used within the texts is not always compliant with the definitions that are given in the standards. So there is certainly a need for the quality assurance exercise of checking that terms are used as defined, and - more importantly - that the definitions are sensible.
4.4 New technologies Certain aspects of modern systems are not addressed by the standards at all. With increasing cost pressure, there is a growing desire to use mass produced, general purpose components, so called Commercial Off The Shelf products (COTS). These products are typically not originally designed with a safety related application in mind, so their documentation will seldom be anywhere near the demands that the CENELEC railway application standards stipulate. Incorporating such systems into a safety related application then becomes a difficult exercise of demonstrating that they will be safe enough in a particular environment and application. It won't always succeed (ref. [8]). The standards do not address this matter at all, so the criteria that were used to determine if a COTS product was suitable can vary considerably from system to system.
In the field of software development, new techniques and languages are coming. Extreme programming is an example of such a new technique that can probably be adapted to safety related applications (ref. [9]), but this will require at least a "flexible" interpretation of the standards' demands on documentation. There are also new programming languages and tools emerging that didn't exist when the standards were written and consequently could never be addressed by the standards. EN 50128 identifies certain programming languages as either "Recommended", "Highly Recommended" or "Not Recommended" in the normative annex A, but the list was far from complete, even when EN 50128 was written, and today there are several application specific languages that are not mentioned in the list, but they are proven in use. And as experience grows, and more and better tools are developed, the demands of EN 50128 will become more and more obsolete. EN 50128 addresses "Systems configured by application data", and requires that tools and procedures for data preparation are developed "in accordance with (the) standard in parallel with the generic software and hardware for the system". Now this is based on the assumption that the system uses application data as parameters for performing statically defined operations. The idea of embedded processes that generate the operations, based on more complex data than just parameters for those operations, is not covered by the standard. It should also be noted that the requirement that tools and procedures shall be developed "in parallel with... the system" does not take into consideration the possibility of re-using generic processes and tools for data preparation that have been developed completely independently of the system. In those cases where manufacturers have developed such procedures and tools, it is natural to use them again and again, possibly refining and perfecting them in the process. 
For such generic tools and procedures, even identifying a (software) safety integrity level will be a problem, because the classification will depend on the data that is being produced and the way it is going to be used. This may vary from application to application, particularly if a sufficiently high degree of flexibility is maintained.
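The "system configured by application data" pattern discussed above can be sketched as a generic engine whose behaviour is fixed, driven by prepared data that is validated before being put into service. The route table, its checks and all the names below are invented purely for illustration:

```python
# Hypothetical sketch of a system configured by application data:
# the engine is generic; only the route table changes per installation.

ROUTE_TABLE = {
    # route id: (track sections that must be clear, required points positions)
    "R1": ({"T1", "T2"}, {"P1": "normal"}),
    "R2": ({"T2", "T3"}, {"P1": "reverse", "P2": "normal"}),
}

def validate_route_table(table):
    """Data preparation check, run before the data enters service."""
    for route, (sections, points) in table.items():
        assert sections, f"{route}: no track sections defined"
        assert all(pos in ("normal", "reverse") for pos in points.values()), \
            f"{route}: invalid points position"

def route_may_be_set(route, clear_sections, point_positions):
    """Generic engine: grants a route only if the data's conditions hold."""
    sections, points = ROUTE_TABLE[route]
    return sections <= clear_sections and \
        all(point_positions.get(p) == pos for p, pos in points.items())

validate_route_table(ROUTE_TABLE)
print(route_may_be_set("R1", {"T1", "T2", "T3"}, {"P1": "normal"}))  # True
print(route_may_be_set("R2", {"T2", "T3"}, {"P1": "normal"}))        # False
```

The sketch also illustrates the classification problem raised above: the safety integrity needed by validate_route_table depends entirely on what the data will control, not on the tool itself.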
5. Conclusion
The CENELEC railway application standards EN 50126, prEN 50129 and EN 50128 together can be regarded as an "application specific" interpretation of IEC 61508. A rather superficial comparison shows that the CENELEC railway application standards appear to satisfy the demands that IEC 61508 makes, although this paper does not claim to present or report a rigorous confirmation of such compliance. The CENELEC standards (and IEC 61508 too) have their faults and weaknesses. There are inconsistencies and contradictions that must be rectified, and development methods and tools are changing, and with them the development life cycle, which so strongly influences the structure and demands of the standards, will also change. Some of the more evident weaknesses of the CENELEC railway application standards have been described here. Now this should not give the impression that the standards
are "bad" or unusable. Describing their positive sides would have far exceeded the scope of this paper, but there is nevertheless considerable room for improvement. This must be reflected in future modifications of the standards. Until then, it will be up to the railway community to find solutions and interpretations that are practicable without compromising safety.
6. References
1. IEC 61508; Functional safety of electrical/electronic/programmable electronic safety-related systems; IEC; 1998
2. EN 50126:1999; Railway Applications - The specification and demonstration of Reliability, Availability, Maintainability and Safety (RAMS); CENELEC; 1999
3. prEN 50129:2000; Railway applications - Safety related electronic systems for signalling; CENELEC; 2000
4. EN 50128:2001; Railway Applications - Software for railway control and protection systems; CENELEC; 2001
5. EN 50159-1:2001; Railway Applications - Communication, signalling and processing systems - Part 1: Safety-related communication in closed transmission systems; CENELEC; 2001
6. EN 50159-2:2001; Railway Applications - Communication, signalling and processing systems - Part 2: Safety-related communication in open transmission systems; CENELEC; 2001
7. O. Nordland: "V&V - Veridation or Valification?" in Nagib Callaos, John Porter, Naphtali Rishe (editors): The 6th World Multiconference on Systemics, Cybernetics and Informatics Proceedings, Volume VII, pp. 261-266; International Institute of Informatics and Systemics, Orlando, FL 32837, USA; 2002; ISBN 980-07-8150-1
8. Linda Kristiansen: "COTS components in safety critical systems"; Diploma thesis, NTNU, Trondheim, 2002
9. Liv Ryssdal Thorsen: "Extreme programming in safety related systems"; Diploma thesis, NTNU, Trondheim, 2002
How to Keep Your Railway Embedded Software on Track with CENELEC Standards
Meeting rigorous standards for the railway industry requires both predictable and repeatable software operation. Railway industry requirements are defined by CENELEC, the European Committee for Electrotechnical Standardization. The three standards produced by CENELEC, EN 50126, EN 50128 and EN 50129, represent the backbone of the process of demonstrating the safety of a railway system.
The standards EN 50128 "Software for railway control and protection systems" and EN 50129 "Safety related electronic systems for signaling" represent the railway application-specific interpretation of the international standard series IEC 61508 (Functional safety of electrical/electronic/programmable electronic safety-related systems). The EN 50128 standard describes software safety integrity levels and identifies requirements for personnel and their responsibilities, lifecycle issues, and documentation. It gives detailed descriptions of objectives, input documents, output documents and requirements for software requirements specification, architecture, design and implementation, verification and testing as well as software/hardware integration, software validation, quality assurance, and maintenance. EN 50128 takes into account five software safety integrity levels (SIL) that range from the very critical (SIL 4), such as safety signaling, to the non-critical (SIL 0), such as management information systems.
Definition of EN 50128 Safety Integrity Levels

Other standards based on IEC 61508 may implement either of two definitions of Safety Integrity Levels. The Demand Mode definition of IEC 61508 is reserved for systems whose frequency of operation is intermittent (such as systems covered under EN 50128), while the Continuous Mode covers systems that are used in a sustained manner over a period of time. The following table provides the difference between the two definitions, and what a failure of the system may mean at different SIL levels.
To ensure predictable software operation, organizations need to know they have tested 100% of the application code. VectorCAST/Cover does this easily by collecting coverage information during system test activities. The tool allows you to determine the adequacy of your system testing. If parts of the code are not covered, then perhaps more testing is required for those areas of the application.
Why System Testing Isn't Enough for 100% Reliability
System testing will not result in 100% coverage, as many functions contain error handling code that is difficult or impossible to stimulate using the fully integrated application. The solution is to perform unit and integration testing on those functions using VectorCAST/C++ or VectorCAST/Ada. Because VectorCAST/Cover shares coverage information with VectorCAST for C/C++ and VectorCAST for Ada, you can easily produce coverage reports showing the combined coverage from all of your test activities.
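Tooling aside, the underlying point can be sketched in a few lines: the defensive branch below cannot be reached from the fully integrated system, so only a unit test can cover it. The function, its 12-bit sensor interface and the scaling factor are all hypothetical.

```python
# Why error-handling code needs unit tests: this defensive branch is
# unreachable in system test if the (hypothetical) ADC hardware can
# never deliver a value outside 12 bits.

class SensorFault(Exception):
    pass

def read_speed(raw_value):
    """Convert a raw sensor word to km/h; reject out-of-range input."""
    if not (0 <= raw_value <= 0xFFF):
        # Defensive branch: system testing alone leaves this uncovered,
        # because the integrated hardware never produces such a value.
        raise SensorFault("raw value out of range")
    return raw_value * 0.1

def test_read_speed_nominal():
    assert abs(read_speed(500) - 50.0) < 1e-9

def test_read_speed_rejects_out_of_range():
    # The unit test stimulates the defensive branch directly.
    try:
        read_speed(0x1000)
    except SensorFault:
        return
    assert False, "expected SensorFault"
```

Combining the coverage from such unit tests with the coverage recorded during system test is what yields the aggregate figure described above.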
Compliance with Highest Railway Standards
Our tools have been successfully used by numerous clients that need to comply with rigorous industrial standards, including those used in the Railway industry.
If you would like to see how VectorCAST embedded testing tools improve performance in your exact testing environment, register today for a 30-day, fully-functional trial. You may also register for the embedded software testing webinar or arrange a demo for your railway project.
SIGNALLING SYSTEM
Verification & Validation activities
Documentation & flow of information
Use of previously approved products and solutions
Independent assessment prior to start-up
Risk from poor safety planning may be greater than risk from individual equipment
Examples of practices

Testability and understandability
o Standard and classified variable names
o Test planning covers entire functionality and any thinkable discontinuity and fault conditions
o Safety functions are separated from other functions
o Program state can be monitored from outside
o Program records the first failure causing the stop
o Program modules exchange information only through externally visible interfaces
o Freely programmed part of modules is defined and described
o Commenting of program and explanation of restrictions
o References to requirements specification inside program
o Monitoring with automatically generated logs

Minimizing time-dependent characteristics
o Avoidance of delays and pulses
o Program execution monitoring
o No parallel execution paths
o Application of state machine design
o Time windows for functions
o Consideration of data communication delays
o Monitoring/filtering of field data change rate

Verification of safety-critical information
o Alarming signal range errors
o Monitoring of data communication
o Announcement of critical commands to operator
o Use of combined signals instead of single signals

Program identification and version control
o Version control and modification management
o Verification and Validation
o Version identifiers inside the program
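Two of the practices listed above, state machine design and recording the first failure causing the stop, can be sketched together. The states, events and failure texts below are invented for illustration:

```python
# Minimal sketch of a safety state machine with a first-failure latch.
# States, events and failure descriptions are invented examples.

IDLE, RUNNING, SAFE_STOP = "IDLE", "RUNNING", "SAFE_STOP"

class SafetyController:
    # Explicit transition table: any (state, event) pair not listed
    # is rejected, so there are no implicit execution paths.
    TRANSITIONS = {
        (IDLE, "start"): RUNNING,
        (RUNNING, "stop"): IDLE,
        (RUNNING, "fault"): SAFE_STOP,
        (IDLE, "fault"): SAFE_STOP,
    }

    def __init__(self):
        self.state = IDLE
        self.first_failure = None  # latched first failure causing the stop

    def handle(self, event, detail=None):
        next_state = self.TRANSITIONS.get((self.state, event))
        if next_state is None:
            return  # undefined event in this state: ignore, stay safe
        if event == "fault" and self.first_failure is None:
            self.first_failure = detail  # latch only the FIRST failure
        self.state = next_state

ctrl = SafetyController()
ctrl.handle("start")
ctrl.handle("fault", "axle counter timeout")
ctrl.handle("fault", "relay feedback mismatch")  # latched value unchanged
print(ctrl.state)          # SAFE_STOP
print(ctrl.first_failure)  # axle counter timeout
```

Because every transition is enumerated in one table, the design is also easy to test exhaustively and its state can be monitored from outside, which supports several of the other practices in the list.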