Proceedings of the 25th International Conference on Model Driven Engineering Languages and Systems: Companion Proceedings
The development of cyber-physical systems typically involves multiple coupled models that capture different aspects of the system and the environment in which it operates. Due to the dynamic nature of the environment, unexpected conditions and uncertainty may impact the system. In this work, we tackle this problem and propose a taxonomy for characterizing uncertainty in coupled models. Our taxonomy extends existing proposals to cope with the particularities of coupled models in cyber-physical systems. In addition, our taxonomy addresses the notion of uncertainty propagation to other parts of the system. This allows for studying and, in some cases, quantifying the effects of uncertainty on other models in a system even at design time. We show the applicability of our uncertainty taxonomy in real use cases motivated by our envisioned scenario of automotive development.
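The idea of quantifying how uncertainty propagates between coupled models can be illustrated with a small Monte Carlo sketch. The models, parameter names, and distributions below are hypothetical and not taken from the paper; they only show how an uncertain output of one model feeds into a coupled model at design time.

```python
import random
import statistics

# Hypothetical coupled models of an automotive scenario (illustrative only).
def battery_model(ambient_temp_c: float) -> float:
    """Model A: usable battery capacity in kWh, degraded by low temperature."""
    return 60.0 * (1.0 - 0.004 * max(0.0, 25.0 - ambient_temp_c))

def range_model(capacity_kwh: float, consumption_kwh_per_km: float) -> float:
    """Model B: driving range in km, coupled to Model A via the capacity output."""
    return capacity_kwh / consumption_kwh_per_km

def propagate(samples: int = 10_000) -> tuple[float, float]:
    """Sample uncertain environment inputs and propagate them through both models."""
    ranges = []
    for _ in range(samples):
        ambient = random.gauss(10.0, 8.0)           # uncertain environment condition
        consumption = random.uniform(0.15, 0.22)    # uncertain usage profile
        ranges.append(range_model(battery_model(ambient), consumption))
    return statistics.mean(ranges), statistics.stdev(ranges)

if __name__ == "__main__":
    mean_range, std_range = propagate()
    print(f"predicted range: {mean_range:.1f} km +/- {std_range:.1f} km")
```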
With the increasing demand for customized systems and rapidly evolving technology, software engineering faces many challenges. A particular challenge is the development and maintenance of systems that are highly variable both in space (concurrent variations of the system at one point in time) and in time (sequential variations of the system due to its evolution). Recent research aims to address this challenge by managing variability in space and time simultaneously. However, this research originates from two different areas, software product line engineering and software configuration management, resulting in non-uniform terminology and a varying understanding of concepts. These problems hamper the communication and understanding of the involved concepts, as well as the development of techniques that unify variability in space and time. To tackle these problems, we performed an iterative, expert-driven analysis of existing tools from both research areas to derive a conceptual model that...
Proceedings of the 2018 ACM/SPEC International Conference on Performance Engineering, 2018
Infrastructure as a Service (IaaS) Cloud services allow users to deploy distributed applications in a virtualized environment without having to customize their applications to a specific Platform as a Service (PaaS) stack. It is common practice to host multiple Virtual Machines (VMs) on the same server to save resources. Traditionally, IaaS data center management required manual effort for optimization, e.g., by consolidating VM placement based on changes in usage patterns. Many resource management algorithms and frameworks have been developed to automate this process. Resource management algorithms are typically tested via experimentation or simulation. The main drawback of both approaches is the high effort required to conduct the testing. Existing Cloud or IaaS simulators require the algorithm engineer to reimplement the algorithm against the simulator's API. Furthermore, the engineer needs to manually define the workload model used for algorithm testing. We propose an approach for the simulative analysis of IaaS Cloud infrastructure that allows algorithm engineers and data center operators to evaluate optimization algorithms without investing additional effort to reimplement them in a simulation environment. By leveraging runtime monitoring data, we automatically construct the simulation models used to test the algorithms. Our validation shows that algorithm tests conducted using our IaaS Cloud simulator match the measured behavior on actual hardware.
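As a rough illustration of deriving a simulation model from runtime monitoring data, the sketch below extracts an arrival rate and mean service demand from hypothetical monitoring records and predicts resource utilization. The record structure and field names are invented; the actual approach builds far richer models than this.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class MonitoringRecord:
    """Hypothetical runtime monitoring sample for one request on a VM."""
    arrival_time_s: float
    service_demand_s: float

def build_workload_model(records: list[MonitoringRecord]) -> tuple[float, float]:
    """Derive arrival rate (1/s) and mean service demand (s) from monitoring data."""
    arrivals = sorted(r.arrival_time_s for r in records)
    inter_arrival = [b - a for a, b in zip(arrivals, arrivals[1:])]
    arrival_rate = 1.0 / mean(inter_arrival)
    return arrival_rate, mean(r.service_demand_s for r in records)

def predict_utilization(arrival_rate: float, service_demand: float) -> float:
    """Open-workload utilization estimate U = lambda * S (valid while U < 1)."""
    return arrival_rate * service_demand

if __name__ == "__main__":
    trace = [MonitoringRecord(t * 0.5, 0.12) for t in range(100)]  # synthetic trace
    lam, s = build_workload_model(trace)
    print(f"arrival rate {lam:.2f}/s, demand {s:.2f}s -> utilization {predict_utilization(lam, s):.0%}")
```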
To avoid design-related performance problems, model-driven performance prediction methods analyse the response times, throughputs, and resource utilizations of software architectures before and during implementation. This thesis proposes new modeling languages and corresponding model transformations, which allow a reusable description of how the performance of software components depends on their usage profile. Predictions based on these new methods can support performance-related design decisions.
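A usage-profile-dependent performance description can be pictured as a parametric resource demand. The component, parameters, and demand function below are hypothetical and merely illustrate why one reusable component specification yields different response-time predictions for different usage profiles.

```python
# A minimal sketch of a usage-profile-dependent performance prediction (illustrative only).
def resource_demand_ms(items: int) -> float:
    """Parametric resource demand of a hypothetical 'ProcessOrder' service."""
    return 2.0 + 0.5 * items          # fixed overhead plus per-item cost

def predict_response_time_ms(items: int, cpu_speedup: float = 1.0) -> float:
    """Demand divided by the relative speed of the resource it is deployed on."""
    return resource_demand_ms(items) / cpu_speedup

for profile_name, items in [("web shop checkout", 3), ("bulk import", 500)]:
    print(profile_name, f"{predict_response_time_ms(items):.1f} ms")
```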
At this point we would like to sincerely thank all participants of the seminar for their dedicated work. A multi-stage review process, consisting of peer-to-peer reviews as well as assessments by the supervisors, enabled the selection of high-quality papers. In total, eight papers were accepted for this technical report. The seminar's homepage also provides the slides of the talks that the participants gave at the seminar's closing conference. We would especially like to thank Achim Baier of itemis AG & Co. KG for his keynote at the closing conference.
Although the quality of a system's software architecture is one of the critical factors in its overall quality, the architecture is simply a means to an end, the end being the implemented system. Thus the ultimate measure of the quality of the software architecture lies in the implemented system: in how well it satisfies the requirements and constraints of the project and whether it can be maintained and evolved successfully. But in order to treat design as science rather than art, we need the ability to address the quality of the software architecture directly, not simply as it is reflected in the implemented system. This is the goal of QoSA: to address software architecture quality directly by addressing the problems of: • designing software architectures of good quality, • defining, measuring, and evaluating architecture quality, and • managing architecture quality, tying it upstream to requirements and downstream to implementation, and preserving architecture quality throughout the lifetime of the system. Cross-cutting these problems is the question of the nature of software architecture. Software architecture organizes a system, partitioning it into elements and defining relationships among the elements. For this we often use multiple views, each with a different organizing principle. Heinz Züllighoven, who graduated in Mathematics and German Language and Literature, holds a PhD in Computer Science. Since October 1991 he has been a professor at the Computer Science Department of the University of Hamburg and head of the attached Software Technology Centre. He is one of the original designers of the Tools & Materials approach to object-oriented application software and of the Java framework JWAM, which supports this approach. Since 2000, Heinz Züllighoven has also been one of the managing directors of C1 Workplace Solutions Ltd. He consults for industrial software development projects in the area of object-oriented design, among them several major banks. Heinz Züllighoven has published a number of papers and books on various software engineering topics. An English construction handbook for the Tools & Materials approach was published by Morgan Kaufmann in 2004. Among his current research interests are agile object-oriented development strategies, migration processes, and the architecture of large industrial interactive software systems. In addition, he and his co-researchers are further developing a tool-supported, lightweight modeling concept for business processes. Carola Lilienthal holds a Diploma degree in computer science from the University of Hamburg (1995). She is a research assistant at the University of Hamburg, working in the Software Engineering Group of Christiane Floyd and Heinz Züllighoven. Since 1995 she has also been working as a consultant for object-oriented design, software architecture, software quality, agile software development, and participatory design in several industrial projects.
Developing software with Eclipse is now one of the standard tasks of a software developer. The articles in this technical report deal with the extensive capabilities of the Eclipse framework, which are made possible not least by its numerous plug-in-based extension mechanisms. This technical report originated from a proseminar held in the winter semester 2006/2007.
22 | 0:00:00 Start; 0:00:32 Information Hiding; 0:01:33 Cross-cutting Concerns; 0:05:32 Benefits of a Good Design; 0:06:08 Program to an Interface; 0:06:36 Isolate Volatile Behaviour; 0:07:17 Design Problems; 0:08:59 Improper Layering; 0:12:49 Simulated Polymorphism; 0:15:56 Refused Bequest; 0:19:39 Feature Envy; 0:21:01 Knows of Derived; 0:21:57 God Class; 0:23:12 Key Points; 0:25:44 Wrap-Up; 0:26:04 Learning Goals; 0:34:39 How much Theory, how much Practice?; 0:36:13 Exams; 0:39:07 Which Diagrams Should I Know?; 0:40:35 Further Helpful Notes; 0:43:25 JEE Web Application
This chapter describes the benefits and deliverables of the case studies in SPP1593 for the outside community. Section 12.1 sums up the benefits of the Common Component Modeling Example (CoCoME) case study together with the deliverables for the community. Section 12.2 describes the benefits of the Pick-and-Place Unit (PPU) and its extension (xPPU) as well as the deliverables for the outside community. Section 12.3 describes the benefits and deliverables of the Industry 4.0 case study that integrates CoCoME and xPPU.
1. PREFACE Performance is one of the most relevant quality attributes of an IT system. While good performance leads to high user satisfaction, bad performance leads to loss of users, perceived unavailability of the system, or unnecessarily high costs of networking or computing resources. Therefore, various techniques to evaluate, control, and improve the performance of IT systems have been developed, ranging from online monitoring and benchmarking to modeling and prediction. Experience shows that, for system design or later optimization, such techniques need to be applied in smart combination.
2017 IEEE International Conference on Software Architecture (ICSA), 2017
Business processes as well as software systems face various changes during their lifetime. As they mutually influence each other, business processes and software systems have to be modified in co-evolution. Thus, to adequately predict the change impact, it is important to consider the complex mutual dependencies of both domains. However, existing approaches are limited to analyzing the change propagation in software systems or business processes in isolation. In this paper, we present a tool-supported approach to estimate the change propagation caused by a change request in business processes or software systems, based on the software architecture and the process design. We focus on the mutual dependencies regarding the change propagation between both domains. In the evaluation, we apply our approach to a community case study to demonstrate the quality of results in terms of precision, recall, and coverage.
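The core idea of cross-domain change propagation can be pictured as a reachability analysis over a dependency graph spanning both domains. The graph, element names, and traversal below are invented for illustration and are not the authors' tool.

```python
from collections import deque

# Hypothetical dependency graph: business process activities ("bp:*") depend on
# architecture elements ("sw:*") and vice versa (illustrative names only).
DEPENDS_ON = {
    "bp:check_credit": ["sw:CreditService"],
    "sw:CreditService": ["sw:CustomerDB"],
    "bp:approve_loan": ["bp:check_credit", "sw:WorkflowEngine"],
    "sw:ReportingUI": ["sw:CustomerDB"],
}

def impacted_elements(changed: str) -> set[str]:
    """All elements that transitively depend on the changed element."""
    reverse: dict[str, list[str]] = {}
    for src, targets in DEPENDS_ON.items():
        for t in targets:
            reverse.setdefault(t, []).append(src)
    impacted, queue = set(), deque([changed])
    while queue:
        current = queue.popleft()
        for dependent in reverse.get(current, []):
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

print(impacted_elements("sw:CustomerDB"))
# -> {'sw:CreditService', 'sw:ReportingUI', 'bp:check_credit', 'bp:approve_loan'}
```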
Companion of the 2023 ACM/SPEC International Conference on Performance Engineering
Performability is the classic metric for performance evaluation of static systems in the presence of failures. Compared to static systems, Self-Adaptive Systems (SASs) are inherently more complex due to their constantly changing nature. Thus, software architects face more complex design decisions, which are preferably evaluated at design time. Model-Based Quality Analysis (MBQA) provides valuable support by putting software architects in a position to take well-founded design decisions about software system quality attributes over the whole development phase of a system. We claim that combining methods from MBQA with established performability concepts supports software architects in this decision-making process to design effective fault-tolerant adaptation strategies. Our contribution is a model-based approach to evaluate performability-oriented adaptation strategies of SASs at design time. We demonstrate the applicability of our approach with a proof of concept. CCS CONCEPTS • Software and its engineering → Software architectures; Model-driven software engineering; Extra-functional properties.
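In its simplest textbook form, performability can be computed as a probability-weighted reward over system states, which already lets two adaptation strategies be compared at design time. The states, probabilities, and throughputs below are invented for illustration and do not reflect the paper's models.

```python
# A minimal, hypothetical sketch: performability as the sum of steady-state
# probabilities times the throughput delivered in each state.
def performability(states: dict[str, tuple[float, float]]) -> float:
    """states maps state name -> (steady-state probability, throughput in req/s)."""
    return sum(prob * throughput for prob, throughput in states.values())

strategies = {
    "restart failed replica": {
        "nominal":  (0.97, 200.0),
        "degraded": (0.02, 120.0),
        "down":     (0.01,   0.0),
    },
    "scale out on failure": {
        "nominal":  (0.97, 200.0),
        "degraded": (0.025, 170.0),
        "down":     (0.005,  0.0),
    },
}

for name, states in strategies.items():
    print(f"{name}: {performability(states):.1f} req/s expected")
```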
During the development of large software-intensive systems, developers use several modeling languages and tools to describe a system from different viewpoints. Model-driven and view-based technologies have made it easier to define domain-specific languages and transformations. Nevertheless, using several languages leads to fragmentation of information, to redundancies in the system description, and eventually to inconsistencies. Inconsistencies have negative impacts on the system's quality and are costly to fix. Often, there is no support for consistency management across multiple languages. Using a single language is not a practicable solution either, as it is overly complex to define, use, and evolve such a language. View-based development is a suitable approach to deal with complex systems and is widely used in other engineering disciplines. Still, we need to cope with the problems of fragmentation and consistency. In this paper, we present the Vitruvius approach for consistency in view-based modeling. We describe the approach by formalizing the notion of consistency, presenting languages for consistency preservation, and defining a model-driven development process. Furthermore, we show how existing models can be integrated. We have evaluated our approach in two case studies from component-based and embedded automotive software development, using our prototypical implementation based on the Eclipse Modeling Framework.
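The essence of a consistency preservation rule, reacting to a change in one model by updating a corresponding element in another, can be sketched in a few lines. The metamodels, naming convention, and rule below are invented for illustration; they do not reproduce Vitruvius' actual consistency languages or API.

```python
from dataclasses import dataclass, field

@dataclass
class ArchitectureModel:
    components: dict[str, str] = field(default_factory=dict)   # id -> component name

@dataclass
class CodeModel:
    classes: dict[str, str] = field(default_factory=dict)      # id -> class name

def rename_component(arch: ArchitectureModel, code: CodeModel, comp_id: str, new_name: str) -> None:
    """Change the architecture model and preserve consistency in the code model."""
    arch.components[comp_id] = new_name
    # Hypothetical consistency rule: the implementing class mirrors the component name.
    code.classes[comp_id] = new_name + "Impl"

arch = ArchitectureModel({"c1": "PaymentService"})
code = CodeModel({"c1": "PaymentServiceImpl"})
rename_component(arch, code, "c1", "BillingService")
assert code.classes["c1"] == "BillingServiceImpl"
```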
The amount of data to be processed by experiments in high energy physics (HEP) will increase tremendously in the coming years. To cope with this increasing load, the most efficient use of resources is mandatory. Furthermore, the computing resources for user jobs in HEP will be increasingly distributed and heterogeneous, resulting in more difficult scheduling due to the increasing complexity of the system. We aim to create a simulation for the WLCG that helps the HEP community to solve both challenges: a more efficient utilization of the grid and coping with the rising complexity of the system. Currently, no simulation exists that helps the operators of the grid to make the correct decisions while optimizing the load balancing strategy. This paper presents a proof of concept in which the computing jobs at the Tier 1 center GridKa are modeled and simulated. To model the computing jobs, we extended the Palladio simulator with a mechanism to simulate load balancing strategies...
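A toy version of pluggable load-balancing strategies in a job simulation is sketched below. It is far simpler than the Palladio-based simulation described above, and the strategy names and workload are invented; it only illustrates why such strategies are worth comparing in simulation before deployment.

```python
import random
from typing import Callable

def round_robin(loads: list[float], _demand: float, state: dict) -> int:
    state["next"] = (state.get("next", -1) + 1) % len(loads)
    return state["next"]

def least_loaded(loads: list[float], _demand: float, _state: dict) -> int:
    return min(range(len(loads)), key=loads.__getitem__)

def simulate(strategy: Callable, jobs: int = 1000, sites: int = 4, seed: int = 42) -> float:
    """Dispatch jobs with random demands and return the peak accumulated load."""
    rng, loads, state = random.Random(seed), [0.0] * sites, {}
    for _ in range(jobs):
        demand = rng.expovariate(1.0)
        loads[strategy(loads, demand, state)] += demand
    return max(loads)   # lower peak load means better balance

for name, strategy in [("round robin", round_robin), ("least loaded", least_loaded)]:
    print(name, f"peak load {simulate(strategy):.1f}")
```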
Bidirectional transformations (BX) are a common approach for keeping two types of models consistent, but consistency preservation between more than two types of models is not well researched. One solution is the composition of BX into networks of transformations. Nevertheless, such networks are prone to failures due to interoperability issues between the individual BX, which are independently developed by various experts. We therefore systematically identify and categorize such issues. First, we structure the process of consistency specification into different conceptual levels. Then, we develop a catalog of potential mistakes, which we derive from those levels, and of the consequential failure types. Finally, we discuss strategies to avoid mistakes at the different levels. This catalog is beneficial for transformation developers and transformation language developers. It improves developers' awareness of potential mistakes and consequential failures, enables the development of techniques that avoid specific mistakes by construction, and eases the identification of the reasons for failures.
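How a defect in a single BX surfaces only once transformations are composed into a network can be shown with a tiny example. The models, the get/put-style representation, and the round-trip check below are invented for illustration and are not taken from the paper's catalog.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BX:
    forward: Callable    # source model -> target model
    backward: Callable   # target model -> source model

def compose(ab: "BX", bc: "BX") -> "BX":
    """Chain two BX into a network spanning three model types."""
    return BX(forward=lambda a: bc.forward(ab.forward(a)),
              backward=lambda c: ab.backward(bc.backward(c)))

# Model A: temperature in deg C; Model B: deg F; Model C: deg F rounded to an int.
a_to_b = BX(forward=lambda c: c * 9 / 5 + 32, backward=lambda f: (f - 32) * 5 / 9)
b_to_c = BX(forward=round, backward=float)   # rounding silently loses information

a_to_c = compose(a_to_b, b_to_c)
value = 21.3
restored = a_to_c.backward(a_to_c.forward(value))
print(f"round trip: {value} -> {restored:.2f}   law holds: {abs(value - restored) < 1e-9}")
```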
Are successful test cases useless or not? (p. 2); DoSAM: a domain-specific software architecture comparison model (p. 4); An architecture-centric approach for producing quality systems (p. 21)
Proceedings of the 2015 European Conference on Software Architecture Workshops, 2015
Due to hostile environments, space systems are equipped with hardware redundancies to guarantee proper operation. For reconfigurations beyond these redundancies, manual decision making is needed, which results in downtimes, communication effort, and man-hours during maintenance phases. We investigate automated reconfiguration decision support that determines Pareto-optimal architectures w.r.t. variable hardware availability and quality properties. Reconfiguration options for the control software according to the available sensing and actuation hardware are derived and prioritised w.r.t. predicted qualitative impacts. The knowledge about the relations among the system's variations is persisted in a decision model at design time on the level of software architectures. Upon a resource fault, the model is traversed for an alternative architecture. This promotes a transparent analysis of available deployments as well as an acceleration of the reconfiguration process during maintenance. We provide tool support for analysis and a concept for reconfigurations during operation. For evaluation, we inspect a reengineered extension of the attitude control system of the TET-1 micro satellite.
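Selecting Pareto-optimal reconfiguration candidates once hardware has failed can be sketched as filtering feasible deployments and discarding dominated ones. The candidate architectures, hardware names, and quality values below are invented for illustration and are not the system's actual decision model.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    required_hw: frozenset       # sensing/actuation hardware the deployment needs
    pointing_accuracy: float     # higher is better
    power_draw_w: float          # lower is better

CANDIDATES = [
    Candidate("full sensor suite", frozenset({"star_tracker", "gyro", "magnetometer"}), 0.99, 14.0),
    Candidate("gyro + magnetometer", frozenset({"gyro", "magnetometer"}), 0.90, 9.0),
    Candidate("magnetometer only", frozenset({"magnetometer"}), 0.70, 5.0),
]

def pareto_front(available_hw: set) -> list[Candidate]:
    """Feasible candidates for the available hardware, minus dominated ones."""
    feasible = [c for c in CANDIDATES if c.required_hw <= available_hw]
    def dominated(c: Candidate) -> bool:
        return any(o.pointing_accuracy >= c.pointing_accuracy
                   and o.power_draw_w <= c.power_draw_w
                   and (o.pointing_accuracy, o.power_draw_w) != (c.pointing_accuracy, c.power_draw_w)
                   for o in feasible)
    return [c for c in feasible if not dominated(c)]

# Star tracker failed: only deployments not requiring it remain on the front.
for c in pareto_front({"gyro", "magnetometer"}):
    print(c.name, c.pointing_accuracy, c.power_draw_w)
```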