Proceedings of the 5th International Conference on Software and Data Technologies
In this paper we investigate the main limitations of current software metrics techniques and tools, propose a unified intermediate representation for the calculation of software metrics, and describe a promising prototype of a new metrics tool. The motivation was the evident lack of wider utilization of software metrics in raising the quality of software products.
Contrary to the increasing popularity of the JavaScript programming language in the field of web application development, numerical evidence about the quality of solutions developed in this language is still not reliable. According to the preliminary literature review, which is the main subject of this paper, this area has not yet been fully explored. Measurement is done by applying general and object-oriented metrics, which reflect only general characteristics of a solution, while the specifics related to the programming language are not expressible by existing metrics. Given the popularity of the language and the increasing number of JavaScript projects, the idea is to determine appropriate metrics and a measurement approach for application in practice. Finally, the measurement approach will be implemented in the SSQSA Framework to enable its application. The preliminary research presented in this paper was conducted during a student course of Empiric...
The aim of this paper is to describe a basic idea for introducing formal modeling into the development and presentation of e-learning systems. By introducing formalism, reasoning about e-learning systems and the processes within them becomes easier. In particular, we propose the use of Harel automata (statecharts) to model sequencing and navigation through learning content and the learning process.
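As a flat, minimal sketch of the statechart idea (a full Harel statechart would additionally support hierarchy and orthogonal regions), navigation through learning content can be modeled as states and events; the states and events below (LESSON, QUIZ, FINISH_LESSON, and so on) are illustrative assumptions, not taken from the paper:

```java
import java.util.Map;

// Flat statechart-style navigator for learning content. States and events
// are illustrative assumptions, not taken from the paper.
enum State { LESSON, QUIZ, NEXT_UNIT, REMEDIATION }
enum Event { FINISH_LESSON, PASS_QUIZ, FAIL_QUIZ, RETRY }

final class LearningNavigator {
    private State current = State.LESSON;

    // Transition table: (state, event) -> next state.
    private static final Map<State, Map<Event, State>> TRANSITIONS = Map.of(
            State.LESSON,      Map.of(Event.FINISH_LESSON, State.QUIZ),
            State.QUIZ,        Map.of(Event.PASS_QUIZ, State.NEXT_UNIT,
                                      Event.FAIL_QUIZ, State.REMEDIATION),
            State.REMEDIATION, Map.of(Event.RETRY, State.QUIZ));

    State fire(Event e) {
        State next = TRANSITIONS.getOrDefault(current, Map.of()).get(e);
        if (next != null) current = next;  // events without a transition are ignored
        return current;
    }

    public static void main(String[] args) {
        LearningNavigator nav = new LearningNavigator();
        System.out.println(nav.fire(Event.FINISH_LESSON)); // QUIZ
        System.out.println(nav.fire(Event.FAIL_QUIZ));     // REMEDIATION
        System.out.println(nav.fire(Event.RETRY));         // QUIZ
    }
}
```

The transition table makes the sequencing rules explicit and inspectable, which is precisely the kind of reasoning over the learning process that the formal model is meant to enable.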
2017 40th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2017
The enriched Concrete Syntax Tree (eCST) is introduced as the foundation of the SSQSA (Set of Software Quality Static Analyzers) platform for consistent static quality analysis across input languages. It is based on the concept of enriching the complete syntax tree that represents the input program with universal nodes. Universal nodes build on the idea of imaginary nodes in an abstract syntax tree, but they are unified, so that one single node type is used for every language to which it applies. In this paper, we describe a translation of eCST back to source code. At the moment, this is only a translation to the original language in which the code was written. Even so, the translation of eCST back to code in the original language has a wide spectrum of applications, for example in (semi-)automated code refactoring and transformation.
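The abstract does not show eCST internals; the following sketch illustrates the idea under our own assumptions: universal nodes (the LOOP_STATEMENT name is made up) carry no source text of their own, so translating back to source code amounts to concatenating the concrete tokens preserved in the tree:

```java
import java.util.List;

// Hypothetical eCST-like node: universal nodes mark language-independent
// concepts but carry no source text, while the tree also keeps every
// concrete token of the original program.
record EcstNode(String label, boolean isUniversal, List<EcstNode> children) {
    static EcstNode token(String text) {
        return new EcstNode(text, false, List.of());
    }
    static EcstNode universal(String kind, EcstNode... kids) {
        return new EcstNode(kind, true, List.of(kids));
    }

    // Translation back to source: universal nodes print nothing themselves;
    // only the concrete tokens preserved in the tree reach the output.
    String emit() {
        if (children.isEmpty()) return isUniversal ? "" : label + " ";
        StringBuilder sb = new StringBuilder();
        for (EcstNode c : children) sb.append(c.emit());
        return sb.toString();
    }

    public static void main(String[] args) {
        EcstNode loop = universal("LOOP_STATEMENT",
                token("while"), token("("), token("x > 0"), token(")"), token("x--;"));
        System.out.println(loop.emit()); // while ( x > 0 ) x--;
    }
}
```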
Systematic application of software metrics techniques can lead to significant improvements in the quality of the final software product. However, there is still an evident lack of wider utilization of software metrics techniques and tools, for many reasons. In this paper we investigate some limitations of contemporary software metrics tools and then propose the construction of a new tool that would solve some of these problems. We describe the promising prototype and its internal structure, and then focus on its independence from the input language.
Complex network theory is based on graph theory and statistical analysis. Complex real-world systems represented by typed and/or attributed graphs form different kinds of complex networks, and statistical methods applied to these graphs provide a powerful mechanism for network analysis. Complex network theory has applications in many areas, such as social networks, computer networks, etc. In the context of software engineering and software development we can talk about a special type of complex network – software networks. Software networks are directed graphs representing relationships between software entities (packages, classes, modules, methods, functions, procedures, etc.). A software network can be observed as a static representation of software code and design, and can be used in analyzing the quality of the software development process and the software product, with particular application in the field of large-scale software systems. Software metrics are the basic mechanism in software quality anal...
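A minimal sketch of a software network as described above: a directed graph over named software entities, with in- and out-degree read as rough afferent/efferent coupling indicators (the entity names and the degree-based reading are our illustrations):

```java
import java.util.*;

// A software network sketch: a directed graph over software entities.
final class SoftwareNetwork {
    private final Map<String, Set<String>> edges = new HashMap<>();

    void addDependency(String from, String to) {
        edges.computeIfAbsent(from, k -> new HashSet<>()).add(to);
        edges.computeIfAbsent(to, k -> new HashSet<>());
    }

    // Out-degree ~ efferent coupling: how many entities this one depends on.
    int outDegree(String node) {
        return edges.getOrDefault(node, Set.of()).size();
    }

    // In-degree ~ afferent coupling: how many entities depend on this one.
    int inDegree(String node) {
        int n = 0;
        for (Set<String> targets : edges.values()) if (targets.contains(node)) n++;
        return n;
    }

    public static void main(String[] args) {
        SoftwareNetwork net = new SoftwareNetwork();
        net.addDependency("OrderService", "Repository");
        net.addDependency("BillingService", "Repository");
        System.out.println(net.inDegree("Repository")); // 2: two entities depend on it
    }
}
```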
The quality of Information Technology (IT) solutions can be measured on several levels, focusing on data, on software, and finally on the human user. Since IT solutions are among the essential elements of actual business and production processes, the quality of a business process and its validation is one of the important high-level quality assessments. There have been many attempts to measure or quantify the quality of business processes, or of their models; however, a general overview of how several possible approaches can be combined has not been provided. In this paper, a few seemingly different approaches from different fields and domains are joined together to analyse the same process, providing a framework and insight for business process quality evaluation, analysis and improvement. The included methods are modelling, definition of key performance indicators, risk and waste assessment, root cause analysis and simulation. The process of quality-or...
The structure and content of XML schemas significantly impact the quality of the data and documents they define. Attempts to evaluate the quality of XML schemas have been made, dividing it into six quality aspects: structure, transparency and documentation, optimality, minimalism, reuse and integrability. An XML schema quality index was used to combine all the quality aspects and provide a general evaluation of XML schema quality in a specific domain, comparable with the quality of XML schemas from other domains. A quality estimation of an XML schema based on the quality index leads to higher efficiency of its usage, simplification, more efficient maintenance, and higher quality of data and processes. This paper addresses challenges in measuring the level of XML schema quality within the publishing domain, which deals with the challenges of multimedia content presentation and transformation. Results of several XML schema evaluations from the publishing domain are pres...
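The abstract names the six aspects but not the formula of the quality index; a speculative sketch, assuming the index is a weighted average of per-aspect scores on a [0, 1] scale (both the weights and the scale are our assumptions):

```java
import java.util.Map;

// Hypothetical XML schema quality index: weighted average of the six aspect
// scores named in the abstract. Weights and the [0,1] scale are assumptions.
final class SchemaQualityIndex {
    static double index(Map<String, Double> aspectScores, Map<String, Double> weights) {
        double weighted = 0, total = 0;
        for (var e : aspectScores.entrySet()) {
            double w = weights.getOrDefault(e.getKey(), 1.0);
            weighted += w * e.getValue();
            total += w;
        }
        return total == 0 ? 0 : weighted / total;
    }

    public static void main(String[] args) {
        var scores = Map.of("structure", 0.8, "transparency", 0.6, "optimality", 0.7,
                            "minimalism", 0.9, "reuse", 0.5, "integrability", 0.75);
        var weights = Map.of("structure", 2.0); // e.g. emphasize structure
        System.out.printf("quality index = %.2f%n", index(scores, weights)); // 0.72
    }
}
```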
Set of Software Quality Static Analyzers (SSQSA) is a set of software tools whose final goal is to enable consistent static analysis and thereby improve the overall quality of the software product. The main characteristic of SSQSA is its use of a unique intermediate representation of the source code, called the enriched Concrete Syntax Tree (eCST). The eCST makes the analyses implemented in SSQSA independent of the input language, and therefore increases the consistency of the extracted results by using a single implementation approach for all analyzers. Since SSQSA is also meant to be involved in static timing analysis, one of its dedicated tasks is Worst-Case Execution Time (WCET) estimation at code level. The aim of this paper is to describe the progress and the steps taken towards WCET estimation in the SSQSA framework. The key feature that makes this estimation stand out is its language independence. Although the estimation is complex to conduct, the eCST intermediate structure is constantly improving in order to provide all the data necessary for successful and precise analyses, making it crucial for complex estimations such as WCET.
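The abstract gives no algorithmic details; as a sketch of the structural core of code-level WCET estimation, a longest path can be computed over an acyclic, cost-annotated control-flow graph. Real WCET analysis must additionally bound loop iterations and model the target hardware; the costs and graph shape below are invented for illustration:

```java
import java.util.Arrays;
import java.util.List;

// Structural core of code-level WCET estimation: longest path through an
// acyclic control-flow graph whose nodes carry worst-case block costs.
final class WcetSketch {
    // Assumes nodes are numbered in topological order (loops already bounded).
    static long wcet(long[] blockCost, List<List<Integer>> succ, int entry, int exit) {
        long[] best = new long[blockCost.length];
        Arrays.fill(best, Long.MIN_VALUE);
        best[entry] = blockCost[entry];
        for (int v = entry; v < blockCost.length; v++) {
            if (best[v] == Long.MIN_VALUE) continue; // unreachable from entry
            for (int w : succ.get(v))
                best[w] = Math.max(best[w], best[v] + blockCost[w]);
        }
        return best[exit];
    }

    public static void main(String[] args) {
        // Diamond CFG 0 -> {1, 2} -> 3; the slower branch (cost 7) dominates.
        long[] cost = {2, 7, 3, 1};
        List<List<Integer>> succ =
                List.of(List.of(1, 2), List.of(3), List.of(3), List.of());
        System.out.println(wcet(cost, succ, 0, 3)); // 2 + 7 + 1 = 10
    }
}
```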
Traditionally, when a new code clone detection tool is developed, a few well-known and popular benchmarks are used to evaluate its results. These benchmarks have typically been created by cross-running several state-of-the-art clone detection tools, in order to overcome the bias of using just one tool, and combining their result sets in some fashion. The candidate clones, or more specifically subsets of them, have then been manually examined by clone experts or other participants, who judge whether a candidate is a true clone or not. Many authors have dealt with the problems of creating maximally objective benchmarks: how the candidate sets should be created, who should judge them, and whether the judgment of those participants can be trusted. One of the main pitfalls, as with the development of a clone detection tool itself, is the inherent lack of formal definitions and standards when it comes to clones and their classification. Recently, some new approaches were presente...
Set of Software Quality Static Analyzers (SSQSA) is a set of software tools for static analysis, incorporated in a framework developed to target a common aim – consistent software quality analysis. The main characteristic of all the integrated tools is their independence of the input computer language. Language independence is achieved through the enriched Concrete Syntax Tree (eCST), which is used as an intermediate representation of the source code. This characteristic gives the tools more generality compared to other similar static analyzers. The aim of this paper is to describe an early idea for introducing support for static timing analysis and Worst-Case Execution Time (WCET) calculation at code level in the SSQSA framework.
Automatic assessment systems encounter significant problems when assessing students’ solutions to programming assignments. The main difficulties arise from a lack of academic discipline among students: while students are expected to submit original solutions, this is sometimes not the case. The assignments they are expected to solve are usually very similar, and there is a restricted set of variations among the correct solutions. Therefore, plagiarism detection is always an open question in the assessment of students’ work. Multiple tools and approaches are available for plagiarism detection, based on source code similarity detection algorithms or on code clone detection tools. Available approaches rely on lexical, syntactic, structural, or semantic information about the programming solution; accordingly, the algorithms analyze character streams, token streams, control structures or dependencies in the observed code. Usually, they are applied to the source code or some of it...
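As a sketch of the token-stream family of approaches mentioned above, the following compares two submissions by Jaccard similarity over trigrams of a normalized token stream; the keyword list, the trigram size, and the identifier normalization are illustrative choices, not a specific tool's algorithm:

```java
import java.util.*;

// Token-stream similarity sketch: normalize identifiers, then compare token
// trigram sets with Jaccard similarity.
final class TokenSimilarity {
    private static final Set<String> KEYWORDS =
            Set.of("int", "for", "if", "else", "while", "return");

    static List<String> normalize(String source) {
        List<String> out = new ArrayList<>();
        for (String t : source.split("\\W+")) {
            if (t.isEmpty()) continue;
            // Map every non-keyword identifier to ID so renaming cannot hide a copy.
            out.add(!KEYWORDS.contains(t) && t.matches("[A-Za-z_]\\w*") ? "ID" : t);
        }
        return out;
    }

    private static Set<String> trigrams(List<String> tokens) {
        Set<String> grams = new HashSet<>();
        for (int i = 0; i + 3 <= tokens.size(); i++)
            grams.add(String.join(" ", tokens.subList(i, i + 3)));
        return grams;
    }

    static double jaccard(String a, String b) {
        Set<String> ga = trigrams(normalize(a)), gb = trigrams(normalize(b));
        Set<String> union = new HashSet<>(ga);
        union.addAll(gb);
        ga.retainAll(gb); // ga is now the intersection
        return union.isEmpty() ? 0 : (double) ga.size() / union.size();
    }

    public static void main(String[] args) {
        String s1 = "int sum = 0; for (int i = 0; i < n; i++) sum += a[i];";
        String s2 = "int total = 0; for (int j = 0; j < n; j++) total += a[j];";
        // 1.00: renaming the variables alone does not hide the copy.
        System.out.printf("similarity = %.2f%n", jaccard(s1, s2));
    }
}
```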
Code clones are parts of source code that were usually created by copy-paste activities, either with minor changes in terms of added and deleted lines, renamed variables, changed types, etc., or with no changes at all. Clones decrease the overall quality of a software product, since they directly decrease maintainability, increase fault-proneness and make changes harder. Numerous studies deal with clone analysis and propose categorizations and solutions, and many tools have been developed for source code clone detection. However, there are still open questions, primarily regarding the precise characteristics of code fragments that should be considered clones. Furthermore, tools are primarily focused on clone detection for a specific language or set of languages. In this paper, we propose a language-independent code clone analysis, introduced as part of the SSQSA (Set of Software Quality Static Analyzers) platform, aimed at enabling consistent static analysis of heterogeneous softwar...
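A minimal sketch of the language-independent direction described above: if every input language is parsed into the same kind of tree (as eCST provides), subtrees can be serialized and bucketed by equality, and any bucket with more than one member is a clone candidate. The node shape and serialization are our assumptions, not SSQSA internals, and a real tool would impose a minimum subtree size:

```java
import java.util.*;

// Clone candidates over a language-neutral tree: bucket subtrees by the
// string form of their serialization; buckets with >1 member are candidates.
record Node(String kind, List<Node> children) {
    String serialize() {
        StringBuilder sb = new StringBuilder(kind).append('(');
        for (Node c : children) sb.append(c.serialize());
        return sb.append(')').toString();
    }
}

final class CloneBuckets {
    static Map<String, List<Node>> candidates(Node root) {
        Map<String, List<Node>> buckets = new HashMap<>();
        collect(root, buckets);
        buckets.values().removeIf(list -> list.size() < 2); // keep only duplicates
        return buckets;
    }

    private static void collect(Node n, Map<String, List<Node>> buckets) {
        buckets.computeIfAbsent(n.serialize(), k -> new ArrayList<>()).add(n);
        for (Node c : n.children()) collect(c, buckets);
    }

    public static void main(String[] args) {
        Node a = new Node("ASSIGN", List.of(new Node("NAME", List.of()),
                                            new Node("EXPR", List.of())));
        Node b = new Node("ASSIGN", List.of(new Node("NAME", List.of()),
                                            new Node("EXPR", List.of())));
        Node root = new Node("BLOCK", List.of(a, b));
        System.out.println(candidates(root).keySet()); // duplicated ASSIGN, NAME, EXPR
    }
}
```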
SSQSA is a set of language-independent tools whose main purpose is to analyze the source code of software systems in order to evaluate their quality attributes. The aim of this paper is to present how a formal language that is not a programming language can be integrated into the front-end of the SSQSA framework. Namely, we explain how the SSQSA front-end is extended to support OWL2, a domain-specific language for the description of ontological systems. Such an extension of the SSQSA front-end is a step towards a SSQSA back-end able to compute a hybrid set of metrics reflecting different aspects of the complexity of ontological descriptions.
2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER)
Code clones have mostly been proven harmful for the development and maintenance of software systems, leading to code deterioration and an increase in bugs as the system evolves. Modern software systems are composed of several components and incorporate multiple technologies in their development. In such systems, it is common to replicate (parts of) functionality across the different components, potentially in a different programming language. The effect of these duplicates is more acute, as their identification becomes more challenging. This paper presents LICCA, a tool for the identification of duplicate code fragments across multiple languages. LICCA is integrated with the SSQSA platform and relies on its high-level representation of code, from which it is possible to extract the syntactic and semantic characteristics of code fragments, enabling full cross-language clone detection. LICCA is at the technology-development level. We demonstrate its potential by adopting a set of cloning scenarios, extended and rewritten in five characteristic languages: Java, C, JavaScript, Modula-2 and Scheme.
Context: Software developers spend a significant amount of time reading, comprehending, and debugging source code. Numerous software metrics can make us aware of incomprehensible functions or of flaws in their collaboration. Invocation chains, especially recursive ones, affect solution complexity, readability, and understandability. Even though decomposed and recursive solutions are characterized as short and clear in comparison with iterative ones, they hide the complexity of the observed problem and solution. As the collaboration between functions can strongly depend on context, difficulties are usually detected in debugging, in testing or by static analysis, while metrics support is still very weak. Objective: We introduce a new complexity metric, called Overall Path Complexity (OPC), which is aware of (recursive) call chains in the observed source code. As invocations are the basic collaboration mechanism and recursion is broadly accepted, the OPC metric is intended to be applicable independently of programming language and paradigm. Method: We propose four different versions of the OPC calculation algorithm and explore and discuss their suitability. We validated the proposed metrics using a framework specially designed for the evaluation and validation of software complexity metrics, and accordingly performed theoretical, empirical and practical validation. Practical validation was performed on toy examples and on industrial cases (47012 LOC, 2899 functions, and 758 recursive paths) written in Erlang. Result: Based on our analysis we selected the most suitable of the four proposed OPC calculation formulas, and showed that the new metric expresses properties of the software beyond those captured by other available metrics, as confirmed by its low correlation with them. Conclusion: We introduced the OPC metric, calculated on the Overall Control Flow Graph as an extension of cyclomatic complexity with added awareness of (recursive) invocations. The values of the new metric can lead us to problematic fragments of the code or of its execution paths.
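The abstract does not state the selected OPC formula; as a sketch of the underlying idea (cyclomatic complexity made aware of invocations), the classic M = E - N + 2P can be evaluated on a graph that merges the functions' control-flow graphs via call, return, and recursion edges. The numbers below are invented, and this is the classic McCabe formula, not necessarily the OPC variant the paper selects:

```java
// Cyclomatic complexity M = E - N + 2P on a merged graph in which call,
// return, and recursion edges connect the functions' control-flow graphs.
final class OverallComplexitySketch {
    static int cyclomatic(int edges, int nodes, int components) {
        return edges - nodes + 2 * components;
    }

    public static void main(String[] args) {
        // Function A: one if-branch -> 5 nodes, 5 edges (M = 2).
        // Function B: straight-line -> 3 nodes, 2 edges (M = 1).
        System.out.println(cyclomatic(7, 8, 2));  // 3: two separate components
        // A call edge plus a return edge merge them into one component.
        System.out.println(cyclomatic(9, 8, 1));  // 3: a plain call adds nothing
        // A recursive call adds a cycle: one more edge, same component count.
        System.out.println(cyclomatic(10, 8, 1)); // 4: recursion raises the measure
    }
}
```

The last two lines show the motivation stated in the abstract: plain cyclomatic complexity is blind to the call structure, while an invocation-aware measure reacts to the cycle a recursive chain introduces.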
Measuring the quality of IT solutions is a priority in software engineering. Although numerous metrics for measuring object-oriented code already exist, the measurement of the quality of UML models or XML schemas is still developing. One of the research questions in the overall research guided by the ideas described in this paper is whether already-defined object-oriented design metrics can be applied to XML schemas on the basis of predefined mappings. In this paper, the basic ideas for this mapping are presented. The mapping is a prerequisite for a future approach to measuring XML schema quality with object-oriented metrics.
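The concrete mapping is the subject of the paper and is not given in the abstract; a speculative sketch of the general idea, assuming xs:complexType maps to a class, its elements and attributes to fields, and xs:extension to inheritance, so that analogues of object-oriented metrics apply directly:

```java
import java.util.List;
import java.util.Map;

// Speculative schema-to-OO mapping: xs:complexType as a class, declared
// elements/attributes as fields, xs:extension as inheritance. The record
// shape and the two metric analogues are our assumptions about the mapping.
record ComplexType(String name, String baseType, List<String> elements) {}

final class SchemaMetrics {
    // Number-of-attributes analogue: declared elements and attributes.
    static int numberOfAttributes(ComplexType t) {
        return t.elements().size();
    }

    // Depth-of-inheritance analogue: length of the xs:extension chain.
    static int depthOfInheritance(ComplexType t, Map<String, ComplexType> all) {
        int depth = 0;
        for (ComplexType cur = t; cur != null && cur.baseType() != null; depth++)
            cur = all.get(cur.baseType());
        return depth;
    }

    public static void main(String[] args) {
        ComplexType base = new ComplexType("AddressType", null, List.of("street", "city"));
        ComplexType ext  = new ComplexType("IntlAddressType", "AddressType",
                                           List.of("country"));
        Map<String, ComplexType> schema = Map.of(base.name(), base, ext.name(), ext);
        System.out.println(numberOfAttributes(ext));         // 1
        System.out.println(depthOfInheritance(ext, schema)); // 1
    }
}
```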
In our earlier research, the area of consistent and systematic application of software metrics was explored. The strong dependence of the applicability of software metrics on the input programming language was recognized as one of the main weaknesses in this field. Introducing the enriched Concrete Syntax Tree (eCST) as an internal, intermediate representation of the source code was a step forward in overcoming this weakness. In this paper we explain the innovation made by introducing eCST and present ideas for broader applicability of eCST in other fields of software engineering.
eCST is an innovative, language-independent intermediate source code representation designed as the basis of the approach applied in the development of the SSQSA framework, which provides an infrastructure for consistent static software analysis. Tempura is a formal specification language whose programs are executable ITL (Interval Temporal Logic) specifications. This paper describes the steps required to enable the generation of an eCST representation of Tempura code, leading to the incorporation of the Tempura language into the infrastructure of the SSQSA framework. This incorporation serves as a proof of concept that a formal specification language (like Tempura) can be successfully represented with an intermediate representation (like eCST) that was primarily aimed at representing “classical” programming languages.