1991, IEEE International Conference on Systems Engineering
…
The functionality of a neural network determines its generalization properties. Multilayer networks have a restricted functionality, determined by the topology of the network and, in particular, the number of nodes and the method of interconnection. The authors present a strategy whereby a given logical neural network structure can be evaluated in order to determine whether it will support
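As a rough illustration of this kind of structural evaluation (a sketch under assumed details, not the authors' procedure), the fragment below exhaustively checks whether a fixed two-layer arrangement of 2-input lookup-table nodes can realize a given three-input Boolean function; the topology, the realizes function and the wiring argument are all hypothetical.

    from itertools import product

    def realizes(target, wiring):
        # target: dict mapping 3-bit input tuples to 0 or 1.
        # wiring: two pairs of input indices feeding the two hidden LUT nodes;
        # the hidden outputs feed a single output LUT node.
        inputs = list(product((0, 1), repeat=3))
        lut = lambda table, a, b: (table >> (2 * a + b)) & 1   # 2-input lookup table
        for t1, t2, t_out in product(range(16), repeat=3):     # all node truth tables
            if all(lut(t_out,
                       lut(t1, x[wiring[0][0]], x[wiring[0][1]]),
                       lut(t2, x[wiring[1][0]], x[wiring[1][1]])) == target[x]
                   for x in inputs):
                return True
        return False

    # Example: 3-input parity is realizable when the hidden nodes see inputs (0,1) and (1,2).
    parity = {x: x[0] ^ x[1] ^ x[2] for x in product((0, 1), repeat=3)}
    print(realizes(parity, wiring=[(0, 1), (1, 2)]))   # -> True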
IJCA Proceedings on …, 2012
The purpose of this paper is to provide a quick overview of neural networks and to explain how they can be used in control systems. We introduce the multilayer perceptron neural network and describe how it can be used for function approximation. The ...
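As a minimal illustration of the function-approximation use mentioned above (the hyperparameters and setup are illustrative, not taken from the paper), a one-hidden-layer perceptron can be trained by plain gradient descent to fit a smooth target such as sin(x):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
    y = np.sin(x)

    W1 = rng.normal(0, 0.5, (1, 20)); b1 = np.zeros(20)   # input -> hidden
    W2 = rng.normal(0, 0.5, (20, 1)); b2 = np.zeros(1)    # hidden -> output
    lr = 0.05

    for _ in range(5000):
        h = np.tanh(x @ W1 + b1)            # hidden activations
        y_hat = h @ W2 + b2                 # linear output layer
        err = y_hat - y
        # Backpropagate the mean-squared-error gradient.
        gW2 = h.T @ err / len(x); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)
        gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    print("final MSE:", float(np.mean(err ** 2)))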
2003
A neural network, composed of neurons of two types, able to perform Boolean operations is presented. Based on a recursive definition of "basic Boolean operations", the proposed model, for any fixed number of input variables, can either realize all of them in parallel or accomplish any chosen one of them alone. Moreover, possibilities of combining a few such networks into more complex structures in order to perform superpositions of basic operations are discussed. The general concept of a neural implementation of First Order logic (FO) based on the presented network is also introduced.
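A minimal sketch of the idea, assuming ordinary threshold units rather than the paper's specific two-neuron-type model: basic Boolean operations as single units, and a superposition (XOR) built by combining them.

    def threshold_unit(weights, bias):
        # Fires (outputs 1) when the weighted sum of its Boolean inputs reaches the bias.
        return lambda *x: int(sum(w * xi for w, xi in zip(weights, x)) + bias >= 0)

    AND = threshold_unit([1, 1], -2)
    OR  = threshold_unit([1, 1], -1)
    NOT = threshold_unit([-1], 0)

    def XOR(a, b):
        # A superposition of the basic operations, combining several units.
        return AND(OR(a, b), NOT(AND(a, b)))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", XOR(a, b))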
2008
Abstract: In this paper, a software tool based on the neural network principle was designed. First, the logic neuron was defined, having logic signals as inputs and a logic function as its threshold. From this definition it is possible to determine a general formula for the logic ...
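One possible reading of this truncated definition (an assumption, since the full text is not shown here): a neuron whose inputs are Boolean signals and whose firing condition is a stored Boolean function acting in place of a numeric threshold.

    class LogicNeuron:
        # Hypothetical reading: the neuron receives logic (Boolean) signals and
        # fires according to a stored logic function playing the role of a threshold.
        def __init__(self, threshold_fn):
            self.threshold_fn = threshold_fn

        def __call__(self, *inputs):
            return int(self.threshold_fn(*inputs))

    # Example with a majority function standing in as the "threshold" logic function.
    fires = LogicNeuron(lambda a, b, c: a + b + c >= 2)
    print(fires(1, 0, 1))   # -> 1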
2009 3rd International Workshop on Soft Computing Applications, 2009
Artificial Neural Networks (ANN) embed shallow knowledge through learning. Used in diagnosis and decision support, ANNs are immediate computational models of effects and causes drawn from human experience, but remain apart from the deep knowledge behind them. The paper presents a way of embedding logical processing over the numerical one in "neural logical sites" for the classical ANN paradigms, then proposes a way of structuring deep knowledge in the network for all types of abduction problems in a unified way, which is compared with a similar attempt. The approach may be applied in any diagnosis and decision support application involving deep and shallow knowledge.
International Conference on Acoustics, Speech, and Signal Processing, 1990
The design of feed-forward ADALINE neural networks can be split into two independent optimization problems.
International journal of intelligent systems, 1992
Journal of Intelligent Systems, 1992
The performance of a learning algorithm is measured by looking at the structure achieved through such learning processes and comparing the desired function f to the function computed by the network acting as a classical automaton. It is important to characterise the functions which can be computed by the network in this fixed structure, since if there is no configuration which allows the computation of f, then a network cannot learn to compute f. We studied the computability of networks of PLNs (Probabilistic Logic Nodes (Aleksander, 1988)). We suggested a new method of recognition based on stored probabilities with PLN networks. This new method increases the computational power of such networks beyond that of finite state acceptors. We proved that the computability of a PLN network is identical to the computability of a probabilistic automaton (Rabin, 1963). This implies that it is possible to recognise more than finite state languages with such machines.
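A minimal sketch of a node storing probabilities at its addressed locations, assuming a simple lookup-table formulation rather than the authors' exact construction; the class and method names are illustrative.

    import random

    class PLN:
        # Each addressable location stores a probability of outputting 1; locations
        # never written behave as "undefined" and fire with probability 0.5.
        def __init__(self):
            self.table = {}

        def store(self, bits, p):
            self.table[tuple(bits)] = p

        def output(self, bits):
            p = self.table.get(tuple(bits), 0.5)
            return 1 if random.random() < p else 0

    node = PLN()
    node.store([1, 0], 0.9)                           # address 10 fires ~90% of the time
    print(node.output([1, 0]), node.output([0, 1]))   # second address is still undefined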
Neural Processing Letters, 1998
Three neural-based methods for extraction of logical rules from data are presented. These methods facilitate conversion of graded-response neural networks into networks performing logical functions. The MLP2LN method tries to convert a standard MLP into a network performing logical operations (LN). C-MLP2LN is a constructive algorithm creating such MLP networks. Logical interpretation is assured by adding constraints to the cost function, forcing the weights to ±1 or 0. Skeletal networks emerge, ensuring that a minimal number of logical rules are found. In both methods rules covering many training examples are generated before more specific rules covering exceptions. The third method, FSM2LN, is based on probability density estimation. Several examples of the performance of these methods are presented. I. INTRODUCTION Classification using crisp logical rules is preferred by humans over other methods because it exposes the inherent logical structure of the problem. Although the class of problems with logical structure simple enough to be manageable by humans may be rather limited, it nevertheless covers some important applications, such as decision support systems in financial institutions. One way to obtain a logical description of the data is to analyze neural networks trained on these data. Many methods for the extraction of logical rules from neural networks exist (for a review and extensive references see [?]). In this paper several new methods of logical rule extraction and feature selection are presented. Although we concentrate on crisp logical rules, these methods can also easily produce fuzzy rules. In contrast with the existing neural rule extraction algorithms based on analysis of small connection weights, analysis of sensitivity to changes in input or analysis
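A sketch of the kind of cost-function constraint described above: an extra penalty whose second term vanishes only at weights in {-1, 0, +1}, so trained weights drift toward logical values or are pruned to zero. The coefficient values and the helper name logical_penalty are illustrative, not from the paper.

    import numpy as np

    def logical_penalty(weights, lam0=1e-3, lam1=1e-2):
        # lam0 term pulls weights toward 0 (pruning); lam1 term is zero only at
        # w in {-1, 0, +1}, so surviving weights settle at logical values.
        w = np.asarray(weights, dtype=float)
        return (lam0 * np.sum(w ** 2)
                + lam1 * np.sum(w ** 2 * (w - 1) ** 2 * (w + 1) ** 2))

    # Added to the usual error term during training, e.g.
    # total_cost = mse(targets, outputs) + logical_penalty(all_weights)
    print(logical_penalty([0.0, 1.0, -1.0, 0.4]))   # only the 0.4 weight adds a lam1 contribution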
Fourth International Conference on Hybrid Intelligent Systems (HIS'04)
Inductive Logic Programming (ILP) is a well-known machine learning technique for learning concepts from relational data. Nevertheless, ILP systems are not robust to noisy or unseen data in real-world domains. Furthermore, in multi-class problems, if an example does not match any learned rule, it cannot be classified. This paper presents a novel hybrid learning method to alleviate this restriction by enabling neural networks to handle first-order logic programs directly. The proposed method, called First-Order Logical Neural Network (FOLNN), is based on feedforward neural networks and integrates inductive learning from examples and background knowledge. We also propose a method for determining the appropriate variable substitution in FOLNN learning by using Multiple-Instance Learning (MIL). In the experiments, the proposed method has been evaluated on two first-order learning problems, Finite Element Mesh Design and Mutagenesis, and compared with the state-of-the-art PROGOL system. The experimental results show that the proposed method performs better than PROGOL.