This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Software authorship attribution, defined as the problem of software authentication and resolution of source code ownership, is of major relevance in the software engineering field. Authorship analysis of source code is more difficult than the classic task on literature, but it would be of great use in various software development activities such as software maintenance, software quality analysis, or project management. This paper addresses the problem of code authorship attribution and introduces, as a proof of concept, a new supervised classification model, AutoSoft, for identifying the developer of a certain piece of code. The proposed model is composed of an ensemble of autoencoders that are trained to encode and recognize the programming style of software developers. An extension of the AutoSoft classifier, able to recognize an unknown developer (a developer that was not seen during training), is also discussed and evaluated. Experiments conducted on software programs collected...
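To make the ensemble-of-autoencoders idea concrete, the sketch below trains one small autoencoder per developer on feature vectors extracted from that developer's code and attributes a new sample to the developer whose autoencoder reconstructs it with the smallest error. This is a minimal sketch of the general technique, not the AutoSoft implementation: the character n-gram features, network size, and training settings are illustrative assumptions.

    # One autoencoder per developer; attribution by minimal reconstruction error.
    # Hypothetical feature extraction (char n-grams); not the paper's pipeline.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neural_network import MLPRegressor

    class AutoencoderEnsemble:
        def __init__(self, hidden=32):
            self.hidden = hidden
            self.models = {}                      # one autoencoder per developer

        def fit(self, snippets, authors):
            self.vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 3),
                                       max_features=300)
            X = self.vec.fit_transform(snippets).toarray()
            for author in set(authors):
                Xa = X[[i for i, a in enumerate(authors) if a == author]]
                ae = MLPRegressor(hidden_layer_sizes=(self.hidden,), max_iter=2000)
                ae.fit(Xa, Xa)                    # reconstruct input -> input
                self.models[author] = ae

        def predict(self, snippets):
            X = self.vec.transform(snippets).toarray()
            preds = []
            for x in X:
                errors = {a: np.mean((m.predict([x])[0] - x) ** 2)
                          for a, m in self.models.items()}
                preds.append(min(errors, key=errors.get))
            return preds

Under the same assumptions, the unknown-developer extension mentioned above could be approximated by rejecting a sample whenever its best reconstruction error exceeds a threshold calibrated on held-out training data.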
Automakers are continuously interested in noise, vibration and harshness solutions for high-quality vehicles in general and quieter passenger cabins in particular. In this paper, the modal parameters of a car windshield are determined using experimental modal analysis. The frequency response functions have been measured using the impulse technique. An impact hammer has been used to generate the impulsive force applied at various locations evenly spread over the windshield surface. A fixed mini-accelerometer has been used to measure the structural response. Starting from the set of FRFs obtained under free-free conditions, the Least Squares Complex Exponential parameter estimation method implemented in the LMS Test.Lab software has been used to extract the modal parameters. The validation phase confirmed the correctness of the modal parameters found.
... This identification is completed with a set of heuristics for recognizing false entailment. ... Monotonicity supposes that if a text entails another text, then adding more text to the first preserves the entailment relation [7]. The heuristics are represented by the condition below ...
Studia Universitatis Babes-Bolyai: Series Informatica, 2010
This paper presents a study of the nonmonotonic consequence relation which models the skeptical reasoning formalised by constrained default logic. The nonmonotonic skeptical consequence relation is defined using the sequent calculus axiomatic system. We study the formal properties desirable for a good nonmonotonic relation: supraclassicality, cut, cautious monotony, cumulativity, absorption, distribution.
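For reference, the standard formulations of the first three of these properties, writing |~ for the nonmonotonic consequence relation and ⊢ for classical consequence (these are the textbook definitions from the nonmonotonic-reasoning literature, not quotations from the paper):

    \begin{align*}
    \text{Supraclassicality:} \quad & T \vdash \varphi \;\Rightarrow\; T \mathrel{|\!\sim} \varphi \\
    \text{Cut:} \quad & T \mathrel{|\!\sim} \varphi \text{ and } T \cup \{\varphi\} \mathrel{|\!\sim} \psi \;\Rightarrow\; T \mathrel{|\!\sim} \psi \\
    \text{Cautious monotony:} \quad & T \mathrel{|\!\sim} \varphi \text{ and } T \mathrel{|\!\sim} \psi \;\Rightarrow\; T \cup \{\varphi\} \mathrel{|\!\sim} \psi
    \end{align*}

Cumulativity is the conjunction of cut and cautious monotony.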
In this study it is proven that the Hrebs used in denotation analysis of texts and Cohesion Chains (defined as a fusion between Lexical Chains and Coreference Chains) represent similar linguistic tools. This result makes it possible to extend to Cohesion Chains (CCs) some important indicators, such as the Kernel of CCs, the topicality of a CC, text concentration, CC-diffuseness, and the mean diffuseness of the text. To our knowledge, indicators of this kind have not previously been introduced or used in the Lexical Chains or Coreference Chains literature. Similarly, some applications of CCs in the study of a text (for example, segmentation or summarization) could be realized starting from Hrebs. As an illustration of the similarity between Hrebs and CCs, a detailed analysis of the poem "Lacul" by Mihai Eminescu is given.
A large class of unsupervised algorithms for Word Sense Disambiguation (WSD) is that of dictionary-based methods. Various algorithms are rooted in Lesk's algorithm, which exploits the sense definitions in the dictionary directly. Our approach uses the lexical database WordNet for a new algorithm originating in Lesk's, namely the "chain algorithm for disambiguation of all words" (CHAD). We show how translation from one language into another, as well as text entailment verification, could be accomplished by this disambiguation.
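As a rough sketch of what a chain-style, Lesk-inspired all-words disambiguator can look like (the scoring rule below is an illustrative assumption, not CHAD's published procedure): words are disambiguated left to right, and each word receives the sense whose gloss overlaps most with the context accumulated from the senses already chosen.

    # Requires: pip install nltk; then nltk.download('wordnet') once.
    from nltk.corpus import wordnet as wn

    def gloss_words(synset):
        return set(synset.definition().lower().split())

    def chain_disambiguate(words):
        chain = []                        # (word, chosen synset) pairs
        context = set(words)              # raw words seed the context
        for w in words:
            senses = wn.synsets(w)
            if not senses:
                continue
            best = max(senses, key=lambda s: len(gloss_words(s) & context))
            chain.append((w, best))
            context |= gloss_words(best)  # the chosen gloss extends the context
        return chain

    for word, sense in chain_disambiguate(["bank", "money", "deposit"]):
        print(word, sense.name(), "-", sense.definition())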
In this paper, the problems of deriving a taxonomy from a text and of concept-oriented text segmentation are approached. The Formal Concept Analysis (FCA) method is applied to solve both of these linguistic problems. The proposed segmentation method offers a conceptual view of text segmentation, using a context-driven clustering of sentences. The Concept-oriented Clustering Segmentation algorithm (COCS) is based on k-means linear clustering of the sentences. Experimental results obtained using the COCS algorithm are presented.
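A minimal sketch of the segmentation step, under simplifying assumptions (plain tf-idf vectors instead of the paper's FCA-derived conceptual contexts): sentences are clustered with k-means and a segment boundary is placed wherever the cluster label changes between consecutive sentences.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    def segment(sentences, k=2):
        X = TfidfVectorizer().fit_transform(sentences)
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        segments, current = [], [sentences[0]]
        for prev, cur, sent in zip(labels, labels[1:], sentences[1:]):
            if cur == prev:
                current.append(sent)      # same concept cluster: same segment
            else:
                segments.append(current)  # label change: concept shift
                current = [sent]
        segments.append(current)
        return segments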
Default logics represent an important class of nonmonotonic formalisms. Using simple but powerful inference rules, called defaults, these logical systems model reasoning patterns of the form "in the absence of information to the contrary ...", and thus formalize default reasoning, a special type of nonmonotonic reasoning. In this paper we propose an automated system, called DARR, with two components: a propositional theorem prover and a theorem prover for constrained and rational propositional default logics. A modified version of the semantic tableaux method is used to implement the propositional prover. This theorem proving method is also adapted for computing extensions, because one of its purposes is to produce models, and extensions are models of the world described by default theories.
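To illustrate the tableau component, here is a hedged, minimal propositional semantic-tableaux prover (the formula encoding and the expansion strategy are illustrative choices, not DARR's actual code): a formula set is satisfiable iff some fully expanded branch contains no complementary pair of literals.

    # Formulas: atoms are strings; compounds are tuples
    # ('not', f), ('and', f, g), ('or', f, g), ('imp', f, g).
    def open_branch(branch):
        for i, f in enumerate(branch):
            if isinstance(f, tuple):
                rest = branch[:i] + branch[i + 1:]
                op = f[0]
                if op == 'not':
                    g = f[1]
                    if isinstance(g, str):
                        continue                      # negated atom: a literal
                    if g[0] == 'not':                 # double negation
                        return open_branch(rest + [g[1]])
                    if g[0] == 'and':                 # not(A and B): branch
                        return (open_branch(rest + [('not', g[1])]) or
                                open_branch(rest + [('not', g[2])]))
                    if g[0] == 'or':                  # not(A or B): both negated
                        return open_branch(rest + [('not', g[1]), ('not', g[2])])
                    if g[0] == 'imp':                 # not(A -> B): A, not B
                        return open_branch(rest + [g[1], ('not', g[2])])
                if op == 'and':
                    return open_branch(rest + [f[1], f[2]])
                if op == 'or':                        # branching rule
                    return open_branch(rest + [f[1]]) or open_branch(rest + [f[2]])
                if op == 'imp':                       # A -> B  ==  not A or B
                    return (open_branch(rest + [('not', f[1])]) or
                            open_branch(rest + [f[2]]))
        atoms = {f for f in branch if isinstance(f, str)}
        negated = {f[1] for f in branch if isinstance(f, tuple) and f[0] == 'not'}
        return not (atoms & negated)                  # open iff no p and not-p

    def proves(premises, goal):
        # premises |- goal  iff  premises + {not goal} closes every branch
        return not open_branch(list(premises) + [('not', goal)])

    print(proves([], ('or', 'p', ('not', 'p'))))      # True: a tautology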
The default nonmonotonic reasoning was formalised by a class of logical systems: default logics (classical, justified, constrained, rational), based on the same syntax, which uses nonmonotonic inference rules called defaults, but with different semantics for the defaults. In this paper we introduce a uniform semantic characterisation of the constrained and rational extensions of a default theory. This characterisation is an operational approach to nonmonotonic reasoning, viewed as a successive application of the applicable defaults. During the reasoning process, the interaction between the defaults and the reasoning context can be observed. The graphical interpretation associated with the semantic characterisation of extensions illustrates the type of applicability of the defaults, cautious (for constrained extensions) or hazardous (for rational extensions), and some formal properties: semi-monotonicity, regularity, existence of extensions, commitment to assumptions ...
A large class of unsupervised algorithms for Word Sense Disambiguation (WSD) is that of dictionary-based methods. Various algorithms are rooted in Lesk's algorithm, which exploits the sense definitions in the dictionary directly. Our approach uses the lexical database WordNet [3] for a new algorithm originating in Lesk's, namely the chain algorithm for disambiguation of all words (CHAD). We show how translation from one language into another, as well as text entailment verification, could be accomplished by this disambiguation. Word sense disambiguation is the process of identifying the correct sense of words in particular contexts. Solving WSD seems to be AI-complete (that is, its solution requires a solution to all the general AI problems of representing and reasoning about arbitrary knowledge), and it is one of the most important open problems in NLP [5], [6], [7], [10], [12], [13]. In the electronic online dictionary WordNet, the most well-developed and widely ...
Default logics represent a simple but powerful class of nonmonotonic formalisms. The main computational problem specific to these logical systems is a search problem: finding all the extensions (sets of nonmonotonic theorems, i.e., beliefs) of a default theory. GADEL is an automated system based on a heuristic approach to the classical problem of computing default extensions, applying the principles of genetic algorithms. The purpose of this paper is to extend this heuristic approach to computing all types of default extensions: classical, justified, constrained and rational.
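A hedged sketch of the genetic-algorithm idea (the bit-vector encoding and the fitness function below are illustrative assumptions, not GADEL's actual design): an individual selects a subset of defaults, and fitness penalizes selections whose prerequisites are not derivable or whose justifications are inconsistent, while rewarding larger, and hence more nearly maximal, selections.

    # pip install sympy
    import random
    from sympy import And, Not, symbols
    from sympy.logic.inference import satisfiable

    def entails(premises, phi):
        # premises |- phi  iff  premises + {not phi} is unsatisfiable
        return not satisfiable(And(*premises, Not(phi)))

    def fitness(bits, facts, defaults):
        # defaults: (prerequisite, justification, consequent) triples
        chosen = [d for b, d in zip(bits, defaults) if b]
        conseq = [c for (_, _, c) in chosen]
        penalty = 0
        for (pre, just, _) in chosen:
            if not entails(facts + conseq, pre):
                penalty += 2                      # prerequisite not derivable
            if not satisfiable(And(*(facts + conseq + [just]))):
                penalty += 2                      # justification inconsistent
        return len(chosen) - penalty              # maximal clean selections win

    def evolve(facts, defaults, pop=20, gens=30):
        n = len(defaults)
        population = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=lambda b: fitness(b, facts, defaults), reverse=True)
            parents, children = population[: pop // 2], []
            while len(parents) + len(children) < pop:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n) if n > 1 else 0
                child = a[:cut] + b[cut:]         # one-point crossover
                child[random.randrange(n)] ^= 1   # point mutation
                children.append(child)
            population = parents + children
        return max(population, key=lambda b: fitness(b, facts, defaults))

    # Tweety example: fact "bird", default  bird : flies / flies
    bird, flies = symbols("bird flies")
    print(evolve([bird], [(bird, flies, flies)]))     # expected: [1]

The per-default consistency check above is a simplified reading of the constrained-extension condition, which properly requires the joint consistency of all selected justifications and consequents.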
This paper presents a new method for recognizing text entailment, obtained from the text-to-text metric introduced in [3] and from the modified resolution introduced in [12]. In [11], using the directional measure of similarity presented in [3], which measures the semantic similarity of a text T1 with respect to a text T2, some conditions for text entailment are established. In this paper we present a method based on the results of [12] and [11], which involves the word sense disambiguation of the two texts T1 and T2 (text and hypothesis) and adds some appropriate heuristics. The algorithm is applied to a part of the set of (text, hypothesis) pairs contained in the PASCAL RTE-2 data [16]. Establishing an entailment relationship between two texts is one of the most complex tasks in Natural Language Understanding. Thus, a very important problem in some computational linguistic applications (such as question answering ...
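For orientation, a directional text-to-text measure in the style of the metric cited as [3] is commonly rendered as below; this formulation is recalled from the literature on the Corley-Mihalcea measure and is an assumption, not a quotation from this paper:

    \[
    \mathrm{sim}(T_1, T_2)_{T_1} \;=\;
    \frac{\sum_{w \in T_1} \mathrm{maxSim}(w, T_2) \cdot \mathrm{idf}(w)}
         {\sum_{w \in T_1} \mathrm{idf}(w)}
    \]

Here maxSim(w, T2) is the highest word-to-word semantic similarity between w and any word of T2; entailment of the hypothesis is then typically accepted when the directional score exceeds a tuned threshold.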
The default nonmonotonic reasoning was formalised by a class of logical systems: default logics (classical, justified, constrained, rational), based on the same syntax, which uses nonmonotonic inference rules called defaults, but with different semantics for the defaults. In this paper we introduce a uniform semantic characterisation of the constrained and rational extensions of a default theory. This characterisation is an operational approach to nonmonotonic reasoning, viewed as a successive application of the applicable defaults. During the reasoning process, the interaction between the defaults and the reasoning context can be observed. The graphical interpretation associated with the semantic characterisation of extensions illustrates the type of applicability of the defaults, cautious (for constrained extensions) or hazardous (for rational extensions), and some formal properties: semi-monotonicity, regularity, existence of extensions, commitment to assumptions of these variants ...
Nonmonotonic reasoning is successfully formalized by the class of default logics. In this paper we introduce an axiomatic system for credulous reasoning in rational default logic. Based on classical sequent calculus and anti-sequent calculus, an abstract characterization of credulous nonmonotonic default inference in this variant of default logic is presented.
Authors: Peter Zörnig, Kamil Stachowski, Anna Rácová, Yunhua Qu, Michal Místecký, Kuizi Ma, Mihaiela Lupea, Emmerich Kelih, Volker Gröller, Hanna Gnatchuk, Alfiya Galieva, Sergey Andreev, Gabriel Altmann. ISBN: 978-3-942303-88-0