Papers by Flavio Frattini
ICC 2022 - IEEE International Conference on Communications
Springer Series in Reliability Engineering, 2016
Understanding software bugs and their effects is important in several engineering activities, including testing, debugging, and the design of fault containment or tolerance methods. Dealing with hard-to-reproduce failures requires a deep comprehension of the mechanisms leading from bug activation to software failure. This chapter surveys taxonomies and recent studies about bugs from the perspective of their reproducibility, providing insights into the process of bug manifestation and the factors influencing it. These insights are based on the analysis of thousands of bug reports of a widely used open-source software system, namely MySQL Server. Bug reports are automatically classified according to their reproducibility characteristics, providing figures about the proportion of hard-to-reproduce bugs, their features, and their evolution over releases.
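The abstract does not detail the chapter's automatic classification pipeline; below is a minimal sketch of one plausible approach, keyword-based labeling of bug reports by reproducibility, where the keyword list, the `BugReport` structure, and the sample reports are all illustrative assumptions rather than the chapter's actual method:

```python
import re
from dataclasses import dataclass

# Phrases that often signal a hard-to-reproduce bug in report text.
# This keyword list is illustrative, not the chapter's actual classifier.
HARD_TO_REPRODUCE_HINTS = [
    r"can(?:'|no)t reproduce", r"not reproducible", r"sporadic",
    r"intermittent", r"race condition", r"happens (?:sometimes|rarely)",
]

@dataclass
class BugReport:
    report_id: int
    text: str

def classify(report: BugReport) -> str:
    """Label a report as 'hard-to-reproduce' or 'reproducible'."""
    body = report.text.lower()
    if any(re.search(p, body) for p in HARD_TO_REPRODUCE_HINTS):
        return "hard-to-reproduce"
    return "reproducible"

reports = [
    BugReport(1, "Crash is intermittent under load; cannot reproduce locally."),
    BugReport(2, "SELECT returns wrong row count; steps below reproduce it."),
]
for r in reports:
    print(r.report_id, classify(r))
```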
We analyze performance degradation phenomena due to software aging on a real supercomputer deployed at the Federico II University of Naples, considering a dataset of ten months of operational usage. We adopt a statistical approach to identify when and where the supercomputer experienced a performance degradation trend. The analysis pinpointed degradation trends that were actually caused by gradual error accumulation within the supercomputer's basic software.
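The abstract does not name the specific statistical test used; a common non-parametric choice for detecting monotonic degradation trends in aging studies is the Mann-Kendall test, sketched here on a synthetic response-time series (the data, the drift, and the significance threshold are assumptions for illustration):

```python
import math

def mann_kendall_z(series):
    """Return the Mann-Kendall Z statistic for a time series.
    Z > 1.96 suggests a significant increasing trend at the 5% level
    (tie correction omitted for brevity)."""
    n = len(series)
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

# Synthetic per-day job response times (ms): slow upward drift plus noise.
response_times = [100 + 0.8 * day + (day % 7) for day in range(60)]
z = mann_kendall_z(response_times)
print(f"Z = {z:.2f} -> {'degradation trend' if z > 1.96 else 'no trend'}")
```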
2019 15th European Dependable Computing Conference (EDCC), 2019
Physical Protection Systems are physical systems that have evolved towards the cyber world. Sensors, cameras, barriers, and control panels are now networked, making up a monitoring system subject to cyber attacks. Physical Security Information Management (PSIM) software systems are used for managing physical security information; Security Information and Event Management (SIEM) systems are used for cyber security information and events. Considering cyber-physical risks, they cannot remain separated. In this paper, we describe our experience in merging PCMS, a PSIM system widely used by banks in Italy, with QRadar, the well-known IBM SIEM. Their integration helps physical security personnel uncover hidden threats, and helps the cyber security team understand risks related to the Physical Protection System.
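The paper's actual PCMS-to-QRadar bridge is not described in this abstract; one common ingestion path for QRadar is a LEEF-formatted event delivered over syslog, so a minimal sketch along those lines follows. The host, the vendor/product/event fields, and the attribute names are hypothetical, not taken from the paper:

```python
import socket
from datetime import datetime, timezone

QRADAR_HOST = "qradar.example.org"  # hypothetical collector address
QRADAR_PORT = 514                   # standard syslog UDP port

def send_physical_event(sensor_id: str, event: str) -> None:
    """Forward a physical-protection event as a LEEF record over UDP syslog.
    All vendor/product fields and attribute names are illustrative."""
    ts = datetime.now(timezone.utc).strftime("%b %d %H:%M:%S")
    leef = (
        "LEEF:1.0|ExampleVendor|PCMS|1.0|DoorForced|"
        f"devTime={ts}\tsensorId={sensor_id}\tmsg={event}"
    )
    payload = f"<13>{ts} pcms-gw {leef}"  # <13> = user.notice priority
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload.encode("utf-8"), (QRADAR_HOST, QRADAR_PORT))

send_physical_event("branch-042/door-3", "forced entry outside opening hours")
```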
Localization within a Wireless Sensor Network consists of determining the position of a given set of sensors while satisfying non-functional requirements such as (1) efficient energy consumption, (2) low communication or computation overhead, (3) no, or limited, use of particular hardware components, (4) fast localization, (5) robustness, and (6) low localization error. Although several algorithms and techniques are available in the literature, localization is regarded as an open issue because none of the current solutions jointly satisfy all the previous requirements. An algorithm called ROCRSSI appears to be a suitable solution; however, it is affected by several inefficiencies that limit its effectiveness in real-world scenarios. This paper proposes a refined version of this algorithm, called ROCRSSI++, which resolves such inefficiencies by using and storing the information gathered by the sensors in a more efficient manner. Several experiments on actual devices have been performed.
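The refinements that distinguish ROCRSSI++ are not spelled out in the abstract; the underlying ring-overlap idea, however, can be sketched as a grid-scoring step in which each anchor contributes a ring (a min/max distance bound derived from RSSI comparisons) and the most-voted cells give the position estimate. All coordinates and ring bounds below are made up for illustration:

```python
import math

# Each ring constrains the unknown node to lie between r_min and r_max
# from an anchor; in ROCRSSI the bounds come from comparing RSSI values
# among anchors. The values here are illustrative.
rings = [
    ((0.0, 0.0), 2.0, 5.0),   # (anchor position, r_min, r_max)
    ((6.0, 0.0), 1.0, 4.0),
    ((3.0, 6.0), 2.0, 5.0),
]

def estimate_position(rings, grid=60, size=8.0):
    """Score every grid cell by how many rings contain it; return the
    centroid of the best-scoring cells."""
    best_score, best_cells = -1, []
    for i in range(grid):
        for j in range(grid):
            x, y = i * size / grid, j * size / grid
            score = sum(
                r_min <= math.hypot(x - ax, y - ay) <= r_max
                for (ax, ay), r_min, r_max in rings
            )
            if score > best_score:
                best_score, best_cells = score, [(x, y)]
            elif score == best_score:
                best_cells.append((x, y))
    xs, ys = zip(*best_cells)
    return sum(xs) / len(xs), sum(ys) / len(ys)

print("estimated position: (%.2f, %.2f)" % estimate_position(rings))
```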
Wireless sensor networks demand proper means to obtain an accurate location of their nodes, for a twofold reason: on the one hand, the exchanged data must be spatially meaningful, since their content may be of little use if the location where they were produced is not associated with them; on the other hand, such networks need efficient routing algorithms, whose optimal routing decisions must be based on location information. Accuracy is not the only demand for the positioning of sensors: simplicity and infrastructure independence are also required, in order to avoid excessive energy consumption and deployment costs. For these reasons, GPS is not used and RF technologies are mainly preferred. Based on those technologies, most of the solutions tailored for sensors are designed to determine a location from simple measurements of the signal intensity of the received messages. Despite being able to satisfy the peculiar requirements for localization in sensor networks, those methods…
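The standard way such signal-strength methods turn a received power reading into a distance estimate is the log-distance path-loss model; a minimal sketch follows, where the reference power at 1 m and the path-loss exponent are environment-dependent assumptions rather than values from the paper:

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-45.0, path_loss_exp=2.7):
    """Invert the log-distance path-loss model:
    RSSI(d) = RSSI(1 m) - 10 * n * log10(d), hence
    d = 10 ** ((RSSI(1 m) - RSSI(d)) / (10 * n)).
    -45 dBm at 1 m and n = 2.7 are illustrative indoor values."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

for rssi in (-45.0, -60.0, -75.0):
    print(f"{rssi:6.1f} dBm -> {rssi_to_distance(rssi):5.2f} m")
```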
2015 45th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, 2015
Energy efficiency of large processing systems is usually assessed as the relation between a performance metric and a power consumption metric, neglecting malfunctions. However, execution failures have a tangible cost in terms of wasted energy. They are often managed through fault tolerance mechanisms, which in turn consume electricity. We introduce the consumability attribute for batch processing systems, encompassing performance, consumption, and dependability aspects altogether. We propose a metric for its quantification and a methodology for its analysis. Using a real 500-node batch system as a case study, we show that consumability is representative of both efficiency and effectiveness, and we demonstrate the usefulness of the proposed metric and the suitability of the proposed methodology.
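The paper's actual consumability metric is not reproduced in this abstract; as a rough illustration of the idea of folding failures into an energy-efficiency figure, one can relate successfully completed work to total energy, so that energy spent on failed runs inflates the denominator without adding useful work. All numbers below are invented:

```python
# Illustrative accounting, not the paper's metric: useful work delivered
# per joule, where failed runs consume energy but deliver nothing.
jobs = [
    # (completed_ok, energy_joules)
    (True, 3.2e6), (True, 2.9e6), (False, 1.5e6),  # failed run: wasted energy
    (True, 3.1e6), (False, 0.7e6),
]

useful_jobs = sum(ok for ok, _ in jobs)
total_energy = sum(e for _, e in jobs)
wasted_energy = sum(e for ok, e in jobs if not ok)

consumability = useful_jobs / total_energy  # completed jobs per joule
print(f"completed jobs per MJ: {1e6 * consumability:.3f}")
print(f"energy wasted on failures: {100 * wasted_energy / total_energy:.1f}%")
```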
Lecture Notes in Computer Science, 2012
Proceedings of the 5th International Workshop on Cloud Data and Platforms - CloudDP '15, 2015
2014 IEEE International Symposium on Software Reliability Engineering Workshops, 2014
Invariants represent properties of a system that are expected to hold when everything goes well. Thus, the violation of an invariant most likely corresponds to the occurrence of an anomaly in the system. In this paper, we discuss the accuracy and the completeness of an anomaly detection system based on invariants. The case study is a backend operation of a SaaS platform. Results show the soundness of the approach and highlight the impact of the invariant mining strategy on the detection capabilities, both in terms of accuracy and of time to reveal violations.
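The abstract does not spell out the mining technique; a common formulation mines pairwise linear relationships between metrics on failure-free data and flags runtime samples whose residual exceeds a tolerance. A sketch under that assumption follows, with synthetic training data, metric names, and tolerance:

```python
from statistics import mean

def fit_invariant(xs, ys):
    """Least-squares fit y ~ a*x + b over failure-free training data."""
    mx, my = mean(xs), mean(ys)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def violates(a, b, x, y, tol):
    """A violated invariant signals a potential anomaly."""
    return abs(y - (a * x + b)) > tol

# Training: requests/s vs. CPU% under normal load (synthetic values).
reqs = [10, 20, 30, 40, 50]
cpu  = [12, 21, 32, 41, 52]
a, b = fit_invariant(reqs, cpu)

# Runtime sample: CPU far above what the request rate predicts.
print(violates(a, b, x=35, y=80, tol=5.0))  # True -> raise an alarm
```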
2013 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW), 2013
Wiley Encyclopedia of Operations Research and Management Science, 2013
The use of large-scale processing systems has exploded during the last decade, and they are now identified as significant contributors to world energy consumption and, in turn, environmental pollution. Processing systems are no longer evaluated only for their performance, but also for how much they consume to perform at a certain level. Those evaluations aim at quantifying energy efficiency, conceived as the relation between a performance metric and a power consumption metric, disregarding the malfunctions that commonly occur. The study of a real 500-node batch system shows that 9% of its power consumption is ascribable to failures compromising the execution of jobs. Fault tolerance techniques, commonly adopted to reduce the frequency of failures, also have a cost in terms of energy consumption. This dissertation introduces the concept of consumability for processing systems, encompassing performance, consumption, and dependability aspects. The idea is to have…
IEEE Internet of Things Journal
Future Generation Computer Systems, 2016
Lecture Notes in Computer Science, 2016
Invariants are stable relationships among system metrics that are expected to hold during normal operating conditions. The violation of such relationships can be used to detect anomalies at runtime. However, this approach does not scale to large systems, as the number of invariants quickly grows with the number of considered metrics. The resulting "background noise" for the invariant-based detection system hinders its effectiveness. In this paper we propose a general and automatic approach for identifying a subset of mined invariants that properly models system runtime behavior with a reduced amount of background noise. This translates into better overall performance (i.e., fewer false positives).
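One simple way to realize the filtering this paper argues for is to rank mined invariants by goodness of fit and keep only the strongest, so that weak, noisy relationships never enter the detector. The sketch below follows that assumption; the R² cutoff and the traces are invented, and the paper's actual selection criterion may differ:

```python
from statistics import mean

def r_squared(xs, ys):
    """Coefficient of determination of the best linear fit y ~ a*x + b;
    low values indicate an unstable, noise-prone invariant."""
    mx, my = mean(xs), mean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

# Candidate invariants mined over metric pairs (synthetic traces).
candidates = {
    "reqs~cpu":  ([10, 20, 30, 40], [11, 22, 29, 41]),   # strong
    "reqs~temp": ([10, 20, 30, 40], [55, 40, 62, 48]),   # noisy
}

R2_THRESHOLD = 0.9  # invented cutoff
kept = {name for name, (xs, ys) in candidates.items()
        if r_squared(xs, ys) >= R2_THRESHOLD}
print(kept)  # only the stable invariant survives
```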
Proceedings of the 31st Annual ACM Symposium on Applied Computing - SAC '16, 2016