Systems deployed in regulated safety-critical domains (e.g., the medical, nuclear, and automotive domains) are often required to undergo a stringent safety assessment procedure, as prescribed by a certification body, to demonstrate their compliance with one or more certification standards. Assurance cases are an emerging way of communicating safety, security, dependability, and other properties of safety-critical systems in a structured and comprehensive manner. The significant size and complexity of these documents, however, make the process of evaluating and assessing their validity a non-trivial task and an active area of research. Consequently, efforts have been made to develop software tools that aid developers and third-party assessors in assessing and analyzing assurance cases. This article presents a survey of the assurance case assessment features of 10 assurance case software tools, all of which were identified and selected via a previously conducted systematic literature review. We describe the assessment techniques implemented, discuss their strengths and weaknesses, and identify possible areas in need of further research.
The application of machine learning (ML) based perception algorithms in safety-critical systems such as autonomous vehicles has raised major safety concerns due to the apparent risks to human lives. Yet assuring the safety of such systems is a challenging task, in large part because ML components (MLCs) rarely have clearly specified requirements. Instead, they learn their intended tasks from the training data. One of the most well-studied properties that ensures the safety of MLCs is robustness against small changes in images, but the range of changes considered "small" has not been systematically defined. In this paper, we propose an approach for specifying and testing requirements for robustness based on human perception. With this approach, MLCs are required to be robust to changes that fall within a range defined by human perception performance studies. We demonstrate the approach on a state-of-the-art object detector.
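The idea of bounding robustness by a perceptual threshold can be illustrated with a minimal, self-contained sketch. The toy classifier and the epsilon bound below are illustrative assumptions, not the paper's detector or its empirically derived perceptual range:

```python
import random

# Toy stand-ins: a "model" that labels an image by mean brightness, and a
# perturbation bounded by an assumed perceptual threshold epsilon.
def classify(image):
    """Hypothetical classifier: 'bright' if mean pixel value exceeds 0.5."""
    return "bright" if sum(image) / len(image) > 0.5 else "dark"

def perturb(image, epsilon):
    """Shift each pixel by at most epsilon, clamped to [0, 1]."""
    return [min(1.0, max(0.0, p + random.uniform(-epsilon, epsilon)))
            for p in image]

def is_robust(image, epsilon, trials=100):
    """Check that the label stays stable under sampled perturbations."""
    label = classify(image)
    return all(classify(perturb(image, epsilon)) == label
               for _ in range(trials))

image = [0.8] * 16  # a clearly 'bright' toy image
print(is_robust(image, 0.05))
```

A real instantiation would replace the toy classifier with the object detector under test and derive epsilon from the human perception studies the abstract mentions; sampling is used here only because exhaustively covering the perturbation range is infeasible.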
ACM SIGSOFT Software Engineering Notes, Jan 11, 2018
MiSE 2017 was the 9th edition of the Workshop on Modelling in Software Engineering, held on 21-22 May 2017 as a satellite event of the 39th International Conference on Software Engineering (ICSE 2017) in Buenos Aires, Argentina. The goal of this 2-day workshop was to bring together researchers and practitioners to exchange innovative technical ideas and experiences related to modeling. The workshop provided a forum to discuss successful applications of software-modeling techniques and to gain insights into challenging modeling problems, including uncertainty management, model heterogeneity, model reuse and evolution, testing, and the adoption of models in critical application domains such as self-adaptive and real-time systems.
Summary form only given. When building large software-intensive systems, engineers need to express and reason about at least two different types of choices. One type concerns uncertainty: choosing between different design alternatives, resolving inconsistencies, or resolving conflicting stakeholder requirements. Another type deals with variability: supporting different variants of software that serve multiple customers or market segments. Partial modeling has been proposed as a technique for managing uncertainty within a software model. A partial model explicates points of uncertainty and represents the set of possible models that could be obtained by making decisions and resolving the uncertainty. Methods for reasoning about the entire set of possibilities, transforming the entire set, and performing uncertainty-reducing refinements have recently been developed. Software product line engineering approaches propose techniques for managing the variability within sets of related software product variants. Such approaches explicate points of variability (a.k.a. features) and the relationships between them in an artifact usually referred to as a feature model. A selection of features from this model guides the derivation of a specific product of a software product line (SPL). Techniques for reasoning about sets of SPL products, transforming the entire SPL, and supporting their partial configuration have recently been developed. Partial models and SPL representations are naturally quite similar: both provide ways of encoding and managing sets of artifacts. The techniques for representing, reasoning with, and manipulating these sets naturally have much in common. Yet the goals for creating these product sets are quite different, and thus the two techniques lead to distinct methodological considerations.
Uncertainty is an aspect of the development process itself; it is transient and must be reduced and eventually eliminated as knowledge is gathered and decisions are made. Thus, the ultimate goal of resolving uncertainty is to produce only one desired artifact. On the other hand, variability is an aspect of the artifacts simultaneously managed through the entire development process; it is to be preserved and carefully engineered to represent the desired range of product variants required. Thus, product lines aim to produce and simultaneously manage multiple artifacts. In this talk, I will survey approaches to representing, reasoning with and transforming models with uncertainty and variability, separately, as well as discuss current work on trying to combine the two approaches.
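The shared intuition, that both a partial model and a feature model encode a set of concrete artifacts obtained by resolving open points, can be sketched as follows. The decision points below are hypothetical examples, not taken from any particular system:

```python
from itertools import product

# Sketch: treat a partial model / feature model as a set of open Boolean
# decision points; each assignment of all points yields one concrete
# model (uncertainty resolved) or one product (features selected).
points = ["caching", "logging"]  # hypothetical open decisions

concretizations = [
    dict(zip(points, choices))
    for choices in product([True, False], repeat=len(points))
]
print(len(concretizations))  # 2^2 = 4 possible concrete artifacts
```

The methodological difference described above is then about intent: with uncertainty, the set shrinks to a single element as decisions are made; with variability, the whole set is deliberately maintained throughout development.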
From financial services platforms to social networks to vehicle control, software has come to mediate many activities of daily life. Governing bodies and standards organizations have responded to this trend by creating regulations and standards to address issues such as safety, security, and privacy. In this environment, the compliance of software development with standards and regulations has emerged as a key requirement. Compliance claims and arguments are often captured in assurance cases, with linked evidence of compliance. Evidence can come from test cases, verification proofs, human judgement, or a combination of these. That is, we try to build (safety-critical) systems carefully according to well-justified methods and articulate these justifications in an assurance case that is ultimately judged by a human. Yet software is deeply rooted in uncertainty, making pragmatic assurance more inductive than deductive: most complex open-world functionality is either not completely specifiable (due to uncertainty) or not cost-effective to specify, and deductive verification cannot happen without a specification. Inductive assurance, achieved by sampling or testing, is easier, but generalization from a finite set of examples cannot be formally justified. And the recent popularity of constructing software via machine learning only worsens the problem: rather than being specified by predefined requirements, machine-learned components learn existing patterns from the available training data and make predictions for unseen data when deployed. On the surface, this ability is extremely useful for hard-to-specify concepts, e.g., the definition of a pedestrian in a pedestrian detection component of a vehicle. On the other hand, safety assessment and assurance of such components becomes very challenging.
In this talk, I focus on two specific approaches to arguing about the safety and security of software under uncertainty. The first is a framework for managing uncertainty in assurance cases (for "conventional" and "machine-learned" systems) by systematically identifying, assessing, and addressing it. The second is recent work on supporting the development of requirements for machine-learned components in safety-critical domains.
In recent years, the automotive domain has increased its reliance on model-based software development. Models in the automotive domain are heterogeneous, large, and interconnected through traceability links. When safety-related artifacts such as HAZOP, FTA, FMEA, and safety cases are introduced, querying these collections of system models and safety artifacts becomes a complex activity. In this paper, we define generic requirements for querying megamodels and demonstrate how to run queries in our MMINT framework using the Viatra query engine. We apply our querying approach to the Lane Management System from the automotive domain through three different scenarios and compare it to an OCL-based one.
International Conference on Software Engineering, Jul 1, 2001
In software engineering, there has long been a recognition that inconsistency is a fact of life. Evolving descriptions of software artefacts are frequently inconsistent, and tolerating this inconsistency is important if flexible collaborative working is to be supported. This workshop will focus on reasoning in the presence of inconsistency for a wide range of software engineering activities, such as building and exploring requirements models, validating specifications, verifying the correctness of implementations, monitoring runtime behaviour, and analyzing development processes. A particular interest is how existing automated approaches such as model checking, theorem proving, logic programming, and model-based reasoning can still be applied in the presence of inconsistency.
Feature-oriented software development (FOSD) has recently emerged as a promising approach for developing a collection of similar software products from a shared set of software assets. A well-recognized issue in FOSD is the analysis of feature interactions: cases where the integration of multiple features would alter the behavior of one or several of them. Existing approaches to detecting feature interactions require specifying the correctness of individual features and operate on the entire family of software products. In this poster, we develop and evaluate a highly scalable and modular approach, called Mr. Feature Potato Head (FPH), to detect interactions stemming from the non-commutativity of features, i.e., cases where the behavior of products changes depending on the order in which features have been composed. We instantiate FPH for systems expressed in Java and evaluate its performance on 29 examples. Our experiments show that FPH is an efficient and effective approach for identifying commutativity-related feature interactions.
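The notion of non-commutativity that FPH targets can be illustrated with a toy sketch. This is not the FPH implementation; the two features and their interaction are invented for illustration:

```python
# Model each feature as a transformation of a product configuration and
# flag pairs whose composition order changes the resulting product,
# i.e., non-commutative features.
def feature_logging(config):
    """Hypothetical feature: turn on plaintext logging."""
    config = dict(config)
    config["log"] = True
    return config

def feature_encrypt(config):
    """Hypothetical feature: enable encryption, which disables
    plaintext logging as a side effect."""
    config = dict(config)
    config["log"] = False
    config["encrypt"] = True
    return config

def commute(f, g, base):
    """True iff applying f then g equals applying g then f."""
    return f(g(dict(base))) == g(f(dict(base)))

base = {"log": False, "encrypt": False}
print(commute(feature_logging, feature_encrypt, base))  # False: order matters
```

Here the pair is non-commutative because encryption overwrites the logging flag: composing logging after encryption leaves logging on, while the reverse order leaves it off. FPH detects such order dependence modularly, without requiring per-feature correctness specifications.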
Service integration is core to enabling any BPM- and SOA-based application, and is the mechanism we use to enable new services from existing applications or to create new services by composing existing services. ... On the other hand, creating a reliable, enterprise-level service integration solution requires a level of understanding and skill that takes years of experience to acquire. By applying proven integration patterns in your service integration design, our hope is to significantly reduce the complexity and skill requirements for users to come ...
Papers by Marsha Chechik