A fundamental principle in engineering, including software engineering, is to minimize the amount of accidental complexity which is introduced into engineering solutions due to mismatches between a problem and the technology used to represent the problem. As model-driven development moves to the center stage of software engineering, it is particularly important that this principle be applied to the technologies used to create and manipulate models, especially models that are intended to be free of solution decisions. At present, however, there is a significant mismatch between the "two level" modeling paradigm used to construct mainstream domain models and the conceptual information such models are required to represent: a mismatch that makes such models more complex than they need be. In this paper, we identify the precise nature of the mismatch, discuss a number of more or less satisfactory workarounds, and show how it can be avoided. Keywords: Domain modeling • Model quality • Accidental complexity • Modeling languages • Modeling paradigm • Stereotypes • Powertypes • Deep instantiation. Communicated by Professor Bernhard Rumpe.
Determining a system or component’s dependability invariably involves some kind of statistical analysis of a large number of tests of its behavior under typical usage conditions, regardless of the particular collection of attributes chosen to measure dependability. The number of factors that can affect the final figure is therefore quite large, and includes such things as the ordering of system operation invocations, the test cases (i.e. the parameter values and expected outcomes), the acceptability of different operation invocation results and the cumulative effect of the results over different usage scenarios. Quoting a single dependability number is therefore of little value without a clear presentation of the accompanying factors that generated it. Today, however, there is no compact or unified approach for representing this information in a way that makes it possible to judge dynamic systems and components for their dependability for particular applications. To address this pro...
Enterp. Model. Inf. Syst. Archit. Int. J. Concept. Model., 2018
In a model-driven organization, all stakeholders are able to deal with information about an organization in the way that best supports their goals and tasks. In other words, they are able to select models of the organization at the optimal level of abstraction (e.g. platform independent) in the optimal form (e.g. graph-based) and with the optimal scope (e.g. a single component). However, no approach exists today that seamlessly supports this capability over the entire life-cycle of organizations and the IT systems that drive them. Enterprise architecture modeling approaches focus on supporting model-based views of the static architecture of organizations (i.e. enterprises) but generally provide little if any support for operational views. On the other hand, business intelligence approaches focus on providing operational views of organizations and usually do not accommodate static architectural views. In order to fully support the model-driven organization (MDO) vision, therefore, th...
Proceedings of the 7th International Conference on Model-Driven Engineering and Software Development, 2019
Multi-view environments provide different views of software systems optimized for different stakeholders. One way of ensuring consistency of overlapping and interdependent information contained in such views is to project them "on demand" from a Single Underlying Model (SUM). However, there are various ways of building and evolving such SUMs. This paper presents criteria to distinguish them, describes three archetypical approaches for building SUMs, and analyzes their advantages and disadvantages. From these criteria, guidelines for choosing which approach to use in specific application areas are derived.
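The core idea behind the SUM approach can be illustrated with a minimal sketch. All names here (ClassElement, structural_view, metrics_view) are invented for illustration and do not come from any concrete SUM tool: the point is simply that views are derived projections of one model, so overlapping information cannot diverge between them.

```python
# Hypothetical sketch of view projection from a Single Underlying Model (SUM).
# Views are computed on demand, never stored, so they can never be inconsistent.

from dataclasses import dataclass, field

@dataclass
class ClassElement:
    name: str
    attributes: list = field(default_factory=list)
    loc: int = 0  # lines of code, consumed only by the metrics view

class SingleUnderlyingModel:
    def __init__(self):
        self.elements = {}

    def add(self, element: ClassElement):
        self.elements[element.name] = element

    # Projection 1: a structural view for architects (names and attributes).
    def structural_view(self):
        return {e.name: list(e.attributes) for e in self.elements.values()}

    # Projection 2: a metrics view for managers (sizes only).
    def metrics_view(self):
        return {e.name: e.loc for e in self.elements.values()}

sum_model = SingleUnderlyingModel()
sum_model.add(ClassElement("Order", ["id", "total"], loc=120))

# A change made to the SUM is automatically visible in every projection.
sum_model.elements["Order"].loc = 150
assert sum_model.metrics_view()["Order"] == 150
assert "total" in sum_model.structural_view()["Order"]
```

The design choice this illustrates is that consistency is achieved by construction rather than by synchronizing redundant copies, which is what distinguishes projective approaches from synthetic multi-view ones.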
Model-based testing (MBT) can reduce the cost of making test cases for critical applications significantly. Depending on the formality of the models, they can also be used for verification. Once the models are available, model-based test case generation and verification can be seen as “push-button solutions.” However, making the models is often perceived by practitioners as being extremely difficult, error-prone, and overall daunting. This paper outlines an approach for generating models out of observations gathered while a system is operating. After refining the models with moderate effort, they can be used for verification and test case generation. The approach is illustrated with a concrete system from the safety and security domain.
HAL is a multidisciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
The cloud has the potential to revolutionize the way software is developed and governed, and to consign much of the artificial complexity involved in software engineering today to history. The cloud promises to unlock the potential of large, heterogeneous distributed development teams by supporting social interaction, group dynamics and key project management principles in software engineering. It not only holds the key to reducing the tensions between agile and “heavyweight” methods of developing software, it also addresses the problem of software license management and piracy: software in the cloud cannot be copied! We outline the motivation for such a cloud-driven approach to software engineering, which we refer to as Cloud Aided Software Engineering (CASE 2.0), and introduce some key innovations needed to turn it into reality. We also identify some of the main challenges that still need to be addressed, and some of the most promising strategies for overcoming them.
Dual lifecycle software processes have the potential to significantly improve the way in which suites of software applications are generated and sustained. However, several outstanding issues need to be more adequately addressed before the full potential of this philosophy can be realized. Detailed strategies for maintaining domain architectures in parallel with suites of fielded applications are at present particularly conspicuous by their absence. In this paper we present a dual-lifecycle maintenance process that was developed for the ROSE project, a major reengineering and repository-building effort in the domain of Flight Design and Dynamics. We present the major features of the process, the rationale behind these features, and changes which we feel would be beneficial based on lessons learned from the application of the process. The process is presented using a variant of the Fusion object-oriented design method known as ProFusion.
The rise in importance of component-based and service-oriented software engineering approaches over the last few years, and the general uptake in model-driven development practices, has created a natural interest in using languages such as the UML to describe component-based systems. However, there is no standard way (de jure or de facto) of using the various viewpoints and diagram types identified in general model-driven development approaches to describe components or assemblies of components. To address this problem, we have developed a prototype IDE which provides a systematic and user-friendly conceptual model for defining and navigating around different views of components and/or component-based systems. This is supported by an infrastructure that allows the IDE to be extended with tools that create views and check consistency in an easy and systematic way, and a unifying metamodel which allows all views to be generated automatically from a single underlying representation of a component or component-based system.
Proceedings of the 21st international conference on Software engineering, 1999
The value of software inspection for uncovering defects early in the development lifecycle has been well documented. Of the various types of inspection methods published to date, experiments have shown perspective-based inspection to be one of the most effective, because of its enhanced coverage of the defect space. However, inspections in general, and perspective-based inspections in particular, have so far been applied predominantly in the context of conventional structured development methods, and then almost always to textual artifacts, such as requirements documents or code modules. Object-oriented models, particularly of the graphical form, have so far not been adequately addressed by inspection methods. This paper tackles this problem by first discussing the difficulties involved in tailoring the perspective-based inspection approach to object-oriented development methods and, second, by presenting a generalization of the approach which overcomes these limitations. The new version of the approach is illustrated in the context of UML-based object-oriented development.
Emerging Technologies for the Evolution and Maintenance of Software Models
Test Sheets provide a new way of representing tests that combines the ease-of-use of tabular test description approaches with the expressiveness of programmatic approaches. Since they are semantically self-contained, and thus executable, they offer a more compact and easy-to-understand approach to test specification than code-level representations of tests. Nevertheless, since they still define tests at a relatively low level of detail, they can be difficult to develop, and can become quite complex when describing large testing scenarios. Model-driven testing approaches, on the other hand, support high-level, graphical views of tests and provide methodological support for deriving tests from system models. However, they invariably rely on code to describe executable versions of tests, and tend to depict the different ingredients of tests in separated, isolated views. The development of test sheets is therefore likely to be significantly simplified by the support of a suitable model-driven testing approach, while model-driven testing approaches are likely to be enhanced by the availability of compact, executable representations of tests in the form of test sheets. In this chapter we therefore explore the synergy between test sheets and model-driven development in the context of the test stories methodology. This is an advanced standards-compliant method that covers the whole test development process from abstract requirements to concrete executable tests, but in the context of programming languages as implementation vehicles. In this chapter we present a case study in which we apply the test stories methodology using test sheets as the test description, execution, and reporting vehicle.
It has been recognized for some time that Ada is not well designed for the purpose of programming networks of loosely-coupled processors, and numerous projects have been set up to find a solution. The DIADEM project had the particular goal of constructing flexible Ada programs that could be executed in a range of distributed systems, including, as a limiting case, a centralized uniprocessor. To this end a technique was developed by which the Ada rendezvous could be used as a remote communication mechanism, supported in a highly portable way on a range of different network architectures and communication systems. This was achieved by the definition of a special package providing a “standard interface” to the transport layer services of the host communication system. After outlining the basic principles of the DIADEM approach, and describing the first prototype implementation of the communication mechanism, this paper discusses how a standard interface similar to that used in DIADEM c...
Theory and Practice of Model Transformations, 2012
As practical tools for disciplined multi-level modeling have begun to emerge, the problem of supporting simple and efficient transformations to and from multi-level model content has started to assume growing importance. The problem is not only to support efficient transformations between multi-level models, but also between multi-level and traditional two-level model content represented in traditional modeling infrastructures such as the UML and programming languages. This is not only important to facilitate interoperability between multi-level modeling tools and traditional tools, but also to extend the benefits of multi-level modeling to transformations. Multi-level model content can already be accessed by traditional transformation languages such as ATL and QVT, but in a way that is blind to the ontological classification information they contain. In this paper we present an approach for making rule-based transformation languages "multi-level aware" so that the semantics of ontological instantiation can be exploited when writing transformations.
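What "multi-level aware" matching means can be sketched informally. The following is not ATL or QVT syntax; it is an invented illustration of a rule that matches elements by their ontological type at any classification level by walking the ontological instance-of chain (the three-level ProductType/Book/mobydick example is hypothetical).

```python
# Illustrative sketch: clabjects linked by ontological instantiation, and a
# transformation rule that matches across classification levels.

class Clabject:
    def __init__(self, name, ontological_type=None):
        self.name = name
        self.ontological_type = ontological_type  # the clabject one level up

    def is_ontological_instance_of(self, type_name):
        t = self.ontological_type
        while t is not None:            # walk the whole classification chain
            if t.name == type_name:
                return True
            t = t.ontological_type
        return False

# Three ontological levels: ProductType -> Book -> mobydick
product_type = Clabject("ProductType")
book = Clabject("Book", product_type)
moby = Clabject("mobydick", book)

def transform_all(elements, type_name, rule):
    """Apply `rule` to every element ontologically classified by `type_name`,
    directly or transitively -- the kind of matching a level-blind
    transformation language cannot express."""
    return [rule(e) for e in elements if e.is_ontological_instance_of(type_name)]

result = transform_all([book, moby], "ProductType", lambda e: e.name.upper())
assert result == ["BOOK", "MOBYDICK"]
```

A two-level transformation rule would only see `book` as an instance of `ProductType`; the level-aware traversal also reaches `mobydick`, two ontological levels down.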
Proceedings of the 2006 international workshop on Service-oriented software engineering, 2006
Service-oriented architecture is predicated on the availability of accurate and universally-understandable specifications of services which capture all the information that a potential user needs to know to use the service. However, WSDL, the most widely used service specification standard, only allows the syntactic signatures of the operations offered by a service to be described. This not only makes it difficult to specify context-sensitive information, such as acceptable operation invocation sequences, and to drive service discovery through client-oriented requirements, it is also an inappropriate level of abstraction for a human-friendly description of a service's capabilities. The current thinking is that context-sensitive information such as operation sequencing rules should be described in an accompanying specification document written in an auxiliary language. For example, WS-CDL is a well known auxiliary language for writing choreography descriptions that capture interaction scenarios in terms of abstract roles and participants. However, this approach not only decouples the additional information from the core WSDL specification, it also describes it in terms of abstractions which may not match those used (implicitly or explicitly) by the service. In this paper we investigate this issue in greater depth, explore the different solution patterns and propose a new specification approach which rectifies the identified problems.
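The kind of information a syntactic signature cannot capture, acceptable operation invocation sequences, is often modelled as a protocol state machine. The sketch below uses an invented shopping-cart service purely for illustration; the states, operations and class names are assumptions, not part of any WSDL or WS-CDL artifact.

```python
# Sketch: sequencing rules for a hypothetical service, expressed as a finite
# state machine over its operations. WSDL alone can describe the operations'
# signatures but not which invocation orders are acceptable.

class ServiceProtocol:
    def __init__(self, start, transitions):
        self.state = start
        self.transitions = transitions  # {(state, operation): next_state}

    def invoke(self, operation):
        key = (self.state, operation)
        if key not in self.transitions:
            raise ValueError(f"'{operation}' not allowed in state '{self.state}'")
        self.state = self.transitions[key]

cart = ServiceProtocol("empty", {
    ("empty", "addItem"): "filled",
    ("filled", "addItem"): "filled",
    ("filled", "checkout"): "done",
})

cart.invoke("addItem")
cart.invoke("checkout")
assert cart.state == "done"

# Checking out an empty cart violates the sequencing rule:
bad = ServiceProtocol("empty", {("empty", "addItem"): "filled"})
try:
    bad.invoke("checkout")
    assert False, "protocol violation should have been rejected"
except ValueError:
    pass
```

Coupling such a machine to the service description is one way of keeping sequencing rules from drifting apart from the core specification, which is the decoupling problem the paper identifies with auxiliary-language approaches.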
Proceedings of the 6th international workshop on Software engineering and middleware, 2006
In this paper we describe a new approach for increasing the reliability of ubiquitous software systems. This is achieved by executing tests at runtime. The individual software components are consequently accompanied by executable tests. We augment this well-known built-in test (BIT) paradigm by combining it with resource-awareness. Starting from the constraints for such resource-aware tests (RATs) we derive their design and describe a number of strategies for executing such tests under resource constraints, as well as the necessary middleware. Our approach is especially beneficial to ubiquitous software systems due to their dynamic nature, which prevents a static verification of their reliability, and their inherent resource limitations.
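One possible execution strategy for such tests can be sketched as follows. The cost model, budget values and "cheapest-first" policy here are invented for the example and are not claimed to be the strategies the paper proposes; the sketch only illustrates the core RAT idea that a built-in test runs at runtime only when the available resource budget permits.

```python
# Minimal sketch of resource-aware built-in tests (RATs): each component test
# declares a cost, and the runtime executes only what the budget allows,
# deferring (not failing) the rest.

class BuiltInTest:
    def __init__(self, name, cost, check):
        self.name, self.cost, self.check = name, cost, check

class ResourceAwareTester:
    def __init__(self, budget):
        self.budget = budget
        self.results = {}

    def run(self, tests):
        # One illustrative policy: cheapest tests first, so the budget
        # covers as many checks as possible.
        for t in sorted(tests, key=lambda t: t.cost):
            if t.cost > self.budget:
                self.results[t.name] = "skipped"  # deferred until resources free up
                continue
            self.budget -= t.cost
            self.results[t.name] = "pass" if t.check() else "fail"

tester = ResourceAwareTester(budget=10)
tester.run([
    BuiltInTest("ping", cost=2, check=lambda: True),
    BuiltInTest("selftest", cost=5, check=lambda: True),
    BuiltInTest("full-scan", cost=20, check=lambda: True),
])
assert tester.results == {"ping": "pass", "selftest": "pass", "full-scan": "skipped"}
```

Because the expensive test is skipped rather than failed, the system's reliability estimate degrades gracefully under resource pressure instead of producing spurious failures.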
Proceedings Fifth IEEE International Enterprise Distributed Object Computing Conference
Component-based software engineering is widely expected to revolutionize the way in which software systems are developed and maintained. However, companies who wish to adopt the component paradigm for serious enterprise software development face serious migration obstacles due to the perceived incompatibility of components with traditional, commonly used development approaches. This perception is reinforced by contemporary methods and component technologies, which typically view components as merely "binary-level" modules with little relevance beyond the implementation and deployment phases of development. In this paper we present a method, known as KobrA, that embraces the component concept at all phases of the software life-cycle, and allows high-level components (described in the UML) to be implemented using conventional software development approaches as well as the latest component technologies (e.g. JavaBeans, CORBA, COM). The approach therefore provides a practical vehicle for applying the component paradigm within the context of a model driven architecture. After explaining the noteworthy features of the method, the paper briefly presents an example of its use in the development of an Enterprise Resource Planning System.
The basic motivation for software inspections is to detect and remove defects before they propagate to subsequent development phases where their detection and removal becomes more expensive. To attain this potential, the examination of the artefact under inspection must be as thorough and detailed as possible. This implies the need for systematic reading techniques that tell inspection participants what to look for and, more importantly, how to scrutinise a software document. Recent research efforts investigated the benefits of scenario-based reading techniques for defect detection in functional requirements and functional code documents. A major finding has been that these techniques help inspection teams find more defects than existing state-of-the-practice approaches, such as ad hoc or checklist-based reading (CBR). In this paper we describe and experimentally compare one scenario-based reading technique, namely perspective-based reading (PBR), for defect detection in object-oriented design documents using the notation of the Unified Modelling Language (UML) with the more traditional CBR approach. The comparison was performed in a controlled experiment with 18 practitioners as subjects. Our results indicate that PBR is more effective than CBR (i.e., it resulted in inspection teams detecting on average 41% more unique defects than CBR). Moreover, the cost of defect detection using PBR is significantly lower than CBR (i.e., PBR exhibits on average a 58% cost-per-defect improvement over CBR). This study therefore provides evidence demonstrating the efficacy of PBR scenarios for defect detection in UML design documents. In addition, it demonstrates that a PBR inspection is a promising approach for improving the quality of models developed using the UML notation.
An important goal of workflow engines is to simplify the way in which the interaction of workflows and software components (or services) is described and implemented. The vision of the AristaFlow project is to support a "plug and play" approach in which workflow designers can describe interactions with components simply by "dragging" them from a repository and "dropping" them into appropriate points of a new workflow. However, to support such an approach in a practical and dependable way it is necessary to have semantically rich descriptions of components (or services) which can be used to perform automated compatibility checks and can be easily understood by human workflow designers. This, in turn, requires a modeling environment which supports multiple views on components and allows these to be easily generated and navigated around. In this paper we describe the Integrated Development Environment (IDE) developed in the AristaFlow project to support these requirements. After outlining the characteristics of the "plug and play" workflow development model, the paper describes one of the main innovations within the IDE: the multi-dimensional navigation over views.

1 Introduction

An important goal of workflow engines is to simplify the way in which the interaction of processes and software components (or services) is described and implemented [DR04, Ac04]. The AristaFlow project's vision of how to achieve this is based on the "plug and play" notion popularized on the desktop, in which workflow designers can describe interactions with components simply by "dragging" them from a repository and "dropping" them into the desired points of a new workflow [Da05]. However, the ability to define new workflows in such a simple and straightforward way is only advantageous if there is a high likelihood that the resulting processes are well-formed, correct and reliable.
In other words, to make the "plug and play" metaphor work in practical workflow scenarios it is essential that components are used in the "correct way", and that the possibility of run-time errors is significantly reduced at design time. In short, there should be few if any "surprises" at run-time. If workflows defined with the "plug and play" metaphor are highly unreliable or unpredictable, this approach will not be used in practice.
Despite the rapid growth in the number of mobile devices connected to the internet via UMTS or wireless 802.11 hotspots, the market for location-based services has yet to take off as expected. Moreover, other kinds of context information are still not routinely supported by mobile services and even when they are, users are not aware of the services that are available to them at a particular time and place. We believe that the adoption of mobile services will be significantly increased by context-sensitive service discovery services that use context information to deliver precise, personalized search results in a changing environment and reduce human-device interaction. However, developing such applications is still a major challenge for software developers. In this paper we therefore present a framework for building context-sensitive service discovery services for mobile clients that ensures the privacy of the users' context while offering valuable search results.
An important goal of workflow engines is to simplify the way in which the interaction of workflows and software components (or services) is described and implemented. The vision of the AristaFlow project is to support a "plug and play" approach in which workflow designers can describe interactions with components simply by "dragging" them from a repository and "dropping" them into appropriate points of a new workflow. However, to support such an approach in a practical and dependable way it is necessary to have semantically rich descriptions of components (or services) which can be used to perform automated compatibility checks and can be easily understood by human workflow designers. This, in turn, requires a modeling environment which supports multiple views on components and allows these to be easily generated and navigated around. In this paper we describe the Integrated Development Environment (IDE) developed in the AristaFlow project to support these requirements. After outlining the characteristics of the "plug and play" workflow development model, the paper describes the two main innovations within the IDE: the dynamic generation of mutually consistent views and the multi-dimensional navigation scheme.
A fundamental principle in engineering, including software engineering, is to minimize the amount... more A fundamental principle in engineering, including software engineering, is to minimize the amount of accidental complexity which is introduced into engineering solutions due to mismatches between a problem and the technology used to represent the problem. As model-driven development moves to the center stage of software engineering, it is particularly important that this principle be applied to the technologies used to create and manipulate models, especially models that are intended to be free of solution decisions. At present, however, there is a significant mismatch between the "two level" modeling paradigm used to construct mainstream domain models and the conceptual information such models are required to represent-a mismatch that makes such models more complex than they need be. In this paper, we identify the precise nature of the mismatch, discuss a number of more or less satisfactory workarounds, and show how it can be avoided. Keywords Domain modeling • Model quality • Accidental complexity • Modeling languages • Modeling paradigm • Stereotypes • Powertypes • Deep instantiation Communicated by Professor Bernhard Rumpe.
Determining a system or component’s dependability invariably involves some kind of statistical an... more Determining a system or component’s dependability invariably involves some kind of statistical analysis of a large number of tests of its behavior under typical usage conditions, regardless of the particular collection of attributes chosen to measure dependability. The number of factors that can affect the final figure is therefore quite large, and includes such things as the ordering of system operation invocations, the test cases (i.e. the parameter values and expected outcomes), the acceptability of different operation invocation results and the cumulative effect of the results over different usage scenarios. Quoting a single dependability number is therefore of little value without a clear presentation of the accompanying factors that generated it. Today, however, there is no compact or unified approach for representing this information in a way that makes it possible to judge dynamic systems and components for their dependability for particular applications. To address this pro...
Enterp. Model. Inf. Syst. Archit. Int. J. Concept. Model., 2018
In a model-driven organization, all stakeholders are able to deal with information about an organ... more In a model-driven organization, all stakeholders are able to deal with information about an organization in the way that best supports their goals and tasks. In other words, they are able to select models of the organization at the optimal level of abstraction (e.g. platform independent) in the optimal form (e.g. graph-based) and with the optimal scope (e.g. a single component). However, no approach exists today that seamlessly supports this capability over the entire life-cycle of organizations and the IT systems that drive them. Enterprise architecture modeling approaches focus on supporting model-based views of the static architecture of organizations (i.e. enterprises) but generally provide little if any support for operational views. On the other hand, business intelligence approaches focus on providing operational views of organizations and usually do not accommodate static architectural views. In order to fully support the model-driven organization (MDO) vision, therefore, th...
Proceedings of the 7th International Conference on Model-Driven Engineering and Software Development, 2019
Multi-view environments provide different views of software systems optimized for different stake... more Multi-view environments provide different views of software systems optimized for different stakeholders. One way of ensuring consistency of overlapping and interdependent information contained in such views is to project them "on demand" from a Single Underlying Model (SUM). However, there are various ways of building and evolving such SUMs. This paper presents criteria to distinguish them, describes three archetypical approaches for building SUMs, and analyzes their advantages and disadvantages. From these criteria, guidelines for choosing which approach to use in specific application areas are derived.
Model-based testing (MBT) can reduce the cost of making test cases for critical applications sign... more Model-based testing (MBT) can reduce the cost of making test cases for critical applications significantly. Depending on the formality of the models, they can also be used for verification. Once the models are available model-based test case generation and verification can be seen as “push-button solutions.” However, making the models is often perceived by practitioners as being extremely difficult, error prone, and overall daunting. This paper outlines an approach for generating models out of observations gathered while a system is operating. After refining the models with moderate effort, they can be used for verification and test case generation. The approach is illustrated with a concrete system from the safety and security domain.
HAL is a multidisciplinary open access archive for the deposit and dissemination of scientific re... more HAL is a multidisciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L'archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
ABSTRACT The cloud has the potential to revolutionize the way software is developed and governed,... more ABSTRACT The cloud has the potential to revolutionize the way software is developed and governed, and to consign much of the artificial complexity involved in software engineering today to history. The cloud promises to unlock the potential of large, heterogeneous distributed development teams by supporting social interaction, group dynamics and key project management principles in software engineering. . It not only holds the key to reducing the tensions between agile and “heavyweight” methods of developing software it also addresses the problem of software license management and piracy – software in the cloud cannot be copied! We outline the motivation for such a cloud-driven approach to software engineering which we refer to as Cloud Aided Software Engineering (CASE 2.0), and introduce some key innovations needed to turn it into reality. We also identify some of the main challenges that still need to be addressed, and some of the most promising strategies for overcoming them.
Dual lifecycle software processes have the potential to significantly improve the way in which suites of software applications are generated and sustained. However, several outstanding issues need to be more adequately addressed before the full potential of this philosophy can be realized. Detailed strategies for maintaining domain architectures in parallel with suites of fielded applications are at present particularly conspicuous by their absence. In this paper we present a dual-lifecycle maintenance process that was developed for the ROSE project, a major reengineering and repository-building effort in the domain of Flight Design and Dynamics. We present the major features of the process, the rationale behind these features, and changes which we feel would be beneficial based on lessons learned from the application of the process. The process is presented using a variant of the Fusion object-oriented design method known as ProFusion.
The rise in importance of component-based and service-oriented software engineering approaches over the last few years, and the general uptake in model-driven development practices, has created a natural interest in using languages such as the UML to describe component-based systems. However, there is no standard way (de jure or de facto) of using the various viewpoints and diagram types identified in general model-driven development approaches to describe components or assemblies of components. To address this problem, we have developed a prototype IDE which provides a systematic and user-friendly conceptual model for defining and navigating around different views of components and/or component-based systems. This is supported by an infrastructure that allows the IDE to be extended with tools that create views and check consistency in an easy and systematic way, and a unifying metamodel which allows all views to be generated automatically from a single underlying representation of a component or component-based system.
Proceedings of the 21st international conference on Software engineering, 1999
The value of software inspection for uncovering defects early in the development lifecycle has been well documented. Of the various types of inspection methods published to date, experiments have shown perspective-based inspection to be one of the most effective, because of its enhanced coverage of the defect space. However, inspections in general, and perspective-based inspections in particular, have so far been applied predominantly in the context of conventional structured development methods, and then almost always to textual artifacts, such as requirements documents or code modules. Object-oriented models, particularly of the graphical form, have so far not been adequately addressed by inspection methods. This paper tackles this problem by first discussing the difficulties involved in tailoring the perspective-based inspection approach to object-oriented development methods and, second, by presenting a generalization of the approach which overcomes these limitations. The new version of the approach is illustrated in the context of UML-based object-oriented development.
Emerging Technologies for the Evolution and Maintenance of Software Models
Test Sheets provide a new way of representing tests that combines the ease-of-use of tabular test description approaches with the expressiveness of programmatic approaches. Since they are semantically self-contained, and thus executable, they offer a more compact and easy-to-understand approach to test specification than code-level representations of tests. Nevertheless, since they still define tests at a relatively low level of detail, they can be difficult to develop, and can become quite complex when describing large testing scenarios. Model-driven testing approaches, on the other hand, support high-level, graphical views of tests and provide methodological support for deriving tests from system models. However, they invariably rely on code to describe executable versions of tests, and tend to depict the different ingredients of tests in separate, isolated views. The development of test sheets is therefore likely to be significantly simplified by the support of a suitable model-driven testing approach, while model-driven testing approaches are likely to be enhanced by the availability of compact, executable representations of tests in the form of test sheets. In this chapter we therefore explore the synergy between test sheets and model-driven development in the context of the test stories methodology. This is an advanced standards-compliant method that covers the whole test development process from abstract requirements to concrete executable tests, but in the context of programming languages as implementation vehicles. We present a case study in which we apply the test stories methodology using test sheets as the test description, execution, and reporting vehicle.
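The core idea of a tabular yet executable test description can be sketched in a few lines. The snippet below is an illustrative Python analogue only, not the actual Test Sheets notation; the `Calculator` class and the row format are invented for the example.

```python
# Sketch of the table-driven idea behind test sheets: each row names an
# operation, its inputs, and the expected result, and the table itself
# is executable. (Illustrative only; not the real Test Sheets notation.)

class Calculator:
    """Trivial system under test, invented for the example."""
    def add(self, a, b):
        return a + b
    def div(self, a, b):
        return a / b

# One row per test case: (operation, arguments, expected result)
TEST_SHEET = [
    ("add", (2, 3), 5),
    ("add", (-1, 1), 0),
    ("div", (10, 4), 2.5),
]

def run_sheet(target, sheet):
    """Execute every row against the target and record pass/fail."""
    results = []
    for op, args, expected in sheet:
        actual = getattr(target, op)(*args)
        results.append((op, args, actual == expected))
    return results

report = run_sheet(Calculator(), TEST_SHEET)
```

Because the table is ordinary data, the same rows could equally serve as the reporting vehicle, which is the property the chapter exploits.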
It has been recognized for some time that Ada is not well designed for the purpose of programming networks of loosely-coupled processors, and numerous projects have been set up to find a solution. The DIADEM project had the particular goal of constructing flexible Ada programs that could be executed in a range of distributed systems, including, as a limiting case, a centralized uniprocessor. To this end a technique was developed by which the Ada rendezvous could be used as a remote communication mechanism, supported in a highly portable way on a range of different network architectures and communication systems. This was achieved by the definition of a special package providing a "standard interface" to the transport layer services of the host communication system. After outlining the basic principles of the DIADEM approach, and describing the first prototype implementation of the communication mechanism, this paper discusses how a standard interface similar to that used in DIADEM c...
Theory and Practice of Model Transformations, 2012
As practical tools for disciplined multi-level modeling have begun to emerge, the problem of supporting simple and efficient transformations to-and-from multi-level model content has started to assume growing importance. The problem is not only to support efficient transformations between multi-level models, but also between multi-level and traditional two-level model content represented in traditional modeling infrastructures such as the UML and programming languages. This is not only important to facilitate interoperability between multi-level modeling tools and traditional tools, but also to extend the benefits of multi-level modeling to transformations. Multi-level model content can already be accessed by traditional transformation languages such as ATL and QVT, but in a way that is blind to the ontological classification information they contain. In this paper we present an approach for making rule-based transformation languages "multi-level aware" so that the semantics of ontological instantiation can be exploited when writing transformations.
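What it means for a rule to "exploit the semantics of ontological instantiation" can be illustrated with a minimal sketch, assuming a clabject-like structure with an explicit level and "instance of" links. The `Clabject` class and the example chain below are invented for illustration and are not a specific tool's API.

```python
# Minimal sketch of ontological classification across levels: a rule
# matches elements by ontological type, transitively across levels,
# rather than by which level they sit on. (Illustrative names only.)

class Clabject:
    def __init__(self, name, level, ontological_type=None):
        self.name = name
        self.level = level                     # ontological level, e.g. 2, 1, 0
        self.ontological_type = ontological_type

    def is_ontological_instance_of(self, other):
        """Walk the instantiation chain upwards across levels."""
        t = self.ontological_type
        while t is not None:
            if t is other:
                return True
            t = t.ontological_type
        return False

# A three-level chain: ProductType (level 2) -> Book (1) -> mobyDick (0)
product_type = Clabject("ProductType", level=2)
book = Clabject("Book", level=1, ontological_type=product_type)
moby_dick = Clabject("mobyDick", level=0, ontological_type=book)

def transform_all(elements, match_type, rule):
    """A 'multi-level aware' rule application: match by ontological type."""
    return [rule(e) for e in elements if e.is_ontological_instance_of(match_type)]

# Matches both Book and mobyDick, because both are (transitively)
# ontological instances of ProductType, despite living on different levels.
names = transform_all([book, moby_dick], product_type, lambda e: e.name)
```

A level-blind transformation language would have to match `Book` and `mobyDick` with separate rules; the transitive check above is what a multi-level aware rule gets for free.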
Proceedings of the 2006 international workshop on Service-oriented software engineering, 2006
Service-oriented architecture is predicated on the availability of accurate and universally understandable specifications of services which capture all the information that a potential user needs to know to use the service. However, WSDL, the most widely used service specification standard, only allows the syntactic signatures of the operations offered by a service to be described. This not only makes it difficult to specify context-sensitive information, such as acceptable operation invocation sequences, and to drive service discovery through client-oriented requirements, it is also an inappropriate level of abstraction for a human-friendly description of a service's capabilities. The current thinking is that context-sensitive information such as operation sequencing rules should be described in an accompanying specification document written in an auxiliary language. For example, WS-CDL is a well-known auxiliary language for writing choreography descriptions that capture interaction scenarios in terms of abstract roles and participants. However, this approach not only decouples the additional information from the core WSDL specification, it also describes it in terms of abstractions which may not match those used (implicitly or explicitly) by the service. In this paper we investigate this issue in greater depth, explore the different solution patterns and propose a new specification approach which rectifies the identified problems.
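One way to see why signature-only descriptions fall short is to make an "acceptable operation invocation sequence" machine-checkable as a small protocol state machine kept alongside the signatures. The service, its operations, and the states below are assumptions invented for this sketch, not part of any WSDL standard.

```python
# Hedged sketch: a protocol state machine describing which operation
# invocation sequences a (hypothetical) service accepts. A plain WSDL
# signature list cannot express this ordering information.

VALID_TRANSITIONS = {
    # current state -> {operation: next state}
    "start":   {"login": "session"},
    "session": {"query": "session", "checkout": "done", "logout": "start"},
    "done":    {},
}

def check_sequence(ops, state="start"):
    """Return True iff the operation sequence obeys the protocol."""
    for op in ops:
        allowed = VALID_TRANSITIONS.get(state, {})
        if op not in allowed:
            return False
        state = allowed[op]
    return True

ok = check_sequence(["login", "query", "query", "checkout"])
bad = check_sequence(["query", "login"])   # query before login is rejected
```

Such a table could be checked automatically at design time, which is exactly the kind of context-sensitive information the paper argues should live with, not apart from, the core specification.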
Proceedings of the 6th international workshop on Software engineering and middleware, 2006
In this paper we describe a new approach for increasing the reliability of ubiquitous software systems. This is achieved by executing tests at runtime; the individual software components are consequently accompanied by executable tests. We augment this well-known built-in test (BIT) paradigm by combining it with resource-awareness. Starting from the constraints for such resource-aware tests (RATs), we derive their design and describe a number of strategies for executing such tests under resource constraints, as well as the necessary middleware. Our approach is especially beneficial to ubiquitous software systems due to their dynamic nature, which prevents a static verification of their reliability, and their inherent resource limitations.
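The basic shape of a resource-aware built-in test can be sketched as follows. This is one possible greedy execution strategy under an abstract resource budget; the class names, the cost model, and the fixed budget are all assumptions for illustration, not the paper's middleware.

```python
# Illustrative sketch of resource-aware built-in tests (RATs): each
# component carries its own executable test plus a declared cost, and
# the runtime executes tests only while the resource budget allows.

class Component:
    def __init__(self, name, test, test_cost):
        self.name = name
        self._test = test
        self.test_cost = test_cost   # abstract "resource units" the BIT needs

    def run_built_in_test(self):
        return self._test()

def run_rats(components, budget):
    """Greedy strategy: run each built-in test if its cost still fits."""
    outcomes = {}
    for c in components:
        if c.test_cost <= budget:
            budget -= c.test_cost
            outcomes[c.name] = c.run_built_in_test()
        else:
            outcomes[c.name] = "skipped"   # deferred until resources free up
    return outcomes

comps = [
    Component("sensor", lambda: "pass", test_cost=2),
    Component("logger", lambda: "pass", test_cost=5),
]
result = run_rats(comps, budget=4)   # only the cheaper test fits the budget
```

More sophisticated strategies (prioritizing components, splitting tests, waiting for idle periods) fit the same interface; the point is that test execution becomes a scheduling decision rather than an unconditional step.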
Proceedings Fifth IEEE International Enterprise Distributed Object Computing Conference
Component-based software engineering is widely expected to revolutionize the way in which software systems are developed and maintained. However, companies that wish to adopt the component paradigm for serious enterprise software development face serious migration obstacles due to the perceived incompatibility of components with traditional, commonly used development approaches. This perception is reinforced by contemporary methods and component technologies, which typically view components as merely "binary-level" modules with little relevance beyond the implementation and deployment phases of development. In this paper we present a method, known as KobrA, that embraces the component concept at all phases of the software life-cycle, and allows high-level components (described in the UML) to be implemented using conventional software development approaches as well as the latest component technologies (e.g. JavaBeans, CORBA, COM). The approach therefore provides a practical vehicle for applying the component paradigm within the context of a model driven architecture. After explaining the noteworthy features of the method, the paper briefly presents an example of its use in the development of an Enterprise Resource Planning System.
The basic motivation for software inspections is to detect and remove defects before they propagate to subsequent development phases where their detection and removal becomes more expensive. To attain this potential, the examination of the artefact under inspection must be as thorough and detailed as possible. This implies the need for systematic reading techniques that tell inspection participants what to look for and, more importantly, how to scrutinise a software document. Recent research efforts investigated the benefits of scenario-based reading techniques for defect detection in functional requirements and functional code documents. A major finding has been that these techniques help inspection teams find more defects than existing state-of-the-practice approaches, such as ad-hoc or checklist-based reading (CBR). In this paper we describe and experimentally compare one scenario-based reading technique, namely perspective-based reading (PBR), for defect detection in object-oriented design documents using the notation of the Unified Modelling Language (UML), with the more traditional CBR approach. The comparison was performed in a controlled experiment with 18 practitioners as subjects. Our results indicate that PBR is more effective than CBR (i.e., it resulted in inspection teams detecting on average 41% more unique defects than CBR). Moreover, the cost of defect detection using PBR is significantly lower than CBR (i.e., PBR exhibits on average a 58% cost-per-defect improvement over CBR). This study therefore provides evidence demonstrating the efficacy of PBR scenarios for defect detection in UML design documents. In addition, it demonstrates that a PBR inspection is a promising approach for improving the quality of models developed using the UML notation.
An important goal of workflow engines is to simplify the way in which the interaction of workflows and software components (or services) is described and implemented. The vision of the AristaFlow project is to support a "plug and play" approach in which workflow designers can describe interactions with components simply by "dragging" them from a repository and "dropping" them into appropriate points of a new workflow. However, to support such an approach in a practical and dependable way it is necessary to have semantically rich descriptions of components (or services) which can be used to perform automated compatibility checks and can be easily understood by human workflow designers. This, in turn, requires a modeling environment which supports multiple views on components and allows these to be easily generated and navigated around. In this paper we describe the Integrated Development Environment (IDE) developed in the AristaFlow project to support these requirements. After outlining the characteristics of the "plug and play" workflow development model, the paper describes one of the main innovations within the IDE: the multi-dimensional navigation over views.
Despite the rapid growth in the number of mobile devices connected to the internet via UMTS or wireless 802.11 hotspots, the market for location-based services has yet to take off as expected. Moreover, other kinds of context information are still not routinely supported by mobile services and even when they are, users are not aware of the services that are available to them at a particular time and place. We believe that the adoption of mobile services will be significantly increased by context-sensitive service discovery services that use context information to deliver precise, personalized search results in a changing environment and reduce human-device interaction. However, developing such applications is still a major challenge for software developers. In this paper we therefore present a framework for building context-sensitive service discovery services for mobile clients that ensures the privacy of the users' context while offering valuable search results.
An important goal of workflow engines is to simplify the way in which the interaction of workflows and software components (or services) is described and implemented. The vision of the AristaFlow project is to support a "plug and play" approach in which workflow designers can describe interactions with components simply by "dragging" them from a repository and "dropping" them into appropriate points of a new workflow. However, to support such an approach in a practical and dependable way it is necessary to have semantically rich descriptions of components (or services) which can be used to perform automated compatibility checks and can be easily understood by human workflow designers. This, in turn, requires a modeling environment which supports multiple views on components and allows these to be easily generated and navigated around. In this paper we describe the Integrated Development Environment (IDE) developed in the AristaFlow project to support these requirements. After outlining the characteristics of the "plug and play" workflow development model, the paper describes the two main innovations within the IDE: the dynamic generation of mutually consistent views and the multi-dimensional navigation scheme.
Papers by Colin Atkinson