Papers by Md. Monirul Islam
Lecture Notes in Computer Science
Associative classifiers have been the subject of intense research for the last few years. Experiments have shown that they generally result in higher accuracy than decision tree classifiers. In this paper, we introduce a novel algorithm for associative classification, "Classification based on Association Rules Generated in a Bidirectional Approach" (CARGBA). It generates rules in two steps. First, it generates a set of high-confidence rules of smaller length with support pruning, and then augments this set with high-confidence rules of greater length whose support is below the minimum support. Experiments on six datasets show that our approach achieves better accuracy than other state-of-the-art associative classification algorithms.
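The two-step generation described in the abstract can be sketched on a toy dataset of (itemset, label) records. This is an illustration of the idea only, not the paper's implementation; the function name `cargba_like_rules`, the thresholds, and the length limits are all chosen for exposition.

```python
from itertools import combinations

def support(antecedent, records):
    """Fraction of records whose itemset contains `antecedent`."""
    return sum(1 for r, _ in records if antecedent <= r) / len(records)

def confidence(antecedent, label, records):
    """Of the records containing `antecedent`, the fraction labeled `label`."""
    covered = [y for r, y in records if antecedent <= r]
    return covered.count(label) / len(covered) if covered else 0.0

def cargba_like_rules(records, items, labels, min_sup=0.2, min_conf=0.8,
                      short_len=2, max_len=3):
    """Two-step rule generation in the spirit of CARGBA (illustrative):
    step 1 keeps short high-confidence rules that pass the support
    threshold; step 2 adds longer high-confidence rules with no
    support pruning."""
    rules = []
    for k in range(1, short_len + 1):          # step 1: short rules
        for ant in combinations(items, k):
            ant = frozenset(ant)
            for y in labels:
                if (support(ant, records) >= min_sup
                        and confidence(ant, y, records) >= min_conf):
                    rules.append((ant, y))
    for k in range(short_len + 1, max_len + 1):  # step 2: longer rules
        for ant in combinations(items, k):
            ant = frozenset(ant)
            for y in labels:
                if confidence(ant, y, records) >= min_conf:
                    rules.append((ant, y))
    return rules
```

Step 2 deliberately ignores the support threshold, which is how the augmented rule set can capture high-confidence patterns that ordinary support pruning would discard.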
2008 11th IEEE International Conference on Computational Science and Engineering, 2008
Classification using association rules has added a new dimension to the ongoing research for accurate classifiers. Over the years, a number of associative classifiers based on positive rules have been proposed in the literature. The target of this paper is to improve classification accuracy by using both negative and positive class association rules without sacrificing performance. The generation of negative associations from datasets has been attacked from different perspectives by various authors, and it has proved to be a very computationally expensive task. This paper approaches the problem of generating negative rules from a classification perspective: how to generate a sufficient number of high-quality negative rules efficiently so that classification accuracy is enhanced. We adopt a simple variant of the Apriori algorithm for this and show that our proposed classifier, "Associative Classifier with Negative rules" (ACN), is not only time-efficient but also achieves significantly better accuracy than four other state-of-the-art classification methods in experiments on benchmark UCI datasets.
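A negative class association rule predicts a class from the absence of an item. A minimal sketch of the measure involved, assuming the same toy (itemset, label) record format as above; the function name and data layout are illustrative, not ACN's actual procedure.

```python
def neg_rule_confidence(item, label, records):
    """Confidence of the negative rule (NOT item) -> label:
    among the records that lack `item`, the fraction labeled `label`.
    Illustrates the quantity a negative-rule classifier thresholds;
    ACN's exact generation strategy is more involved."""
    absent = [y for r, y in records if item not in r]
    return absent.count(label) / len(absent) if absent else 0.0
```

A classifier can keep only negative rules whose confidence clears a threshold, giving it predictive power on records that match no strong positive rule.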
Neurocomputing, 2010
This paper presents a new feature selection (FS) algorithm based on the wrapper approach using neural networks (NNs). The vital aspect of this algorithm is the automatic determination of NN architectures during the FS process. Our algorithm uses a constructive approach involving correlation information in selecting features and determining NN architectures. We call this algorithm the constructive approach for FS (CAFS). The aim of using correlation information in CAFS is to encourage the search strategy to select less correlated (distinct) features if they enhance the accuracy of NNs. Such an encouragement reduces redundancy of information, resulting in compact NN architectures. We evaluate the performance of CAFS on eight benchmark classification problems. The experimental results show the effectiveness of CAFS in selecting features and producing compact NN architectures.
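The correlation-guided part of the search can be sketched as a greedy loop that prefers features correlated with the target but weakly correlated with features already chosen. This is only the correlation heuristic; CAFS itself is a wrapper, so its real criterion is the trained NN's accuracy, and `select_features` below is an illustrative name.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy) if vx and vy else 0.0

def select_features(X, y, k):
    """Greedily pick k feature indices, scoring each candidate by
    relevance to the target minus its worst redundancy with the
    features already selected (sketch of the correlation idea only)."""
    n_feats = len(X[0])
    cols = [[row[j] for row in X] for j in range(n_feats)]
    selected = []
    while len(selected) < k:
        best, best_score = None, -float('inf')
        for j in range(n_feats):
            if j in selected:
                continue
            relevance = abs(pearson(cols[j], y))
            redundancy = max((abs(pearson(cols[j], cols[s]))
                              for s in selected), default=0.0)
            if relevance - redundancy > best_score:
                best, best_score = j, relevance - redundancy
        selected.append(best)
    return selected
```

Penalizing redundancy is what lets the wrapper keep the input layer, and hence the whole NN, compact.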
IEICE Transactions on Information and Systems, 2010
Ant Colony Optimization (ACO) algorithms are a new branch of swarm intelligence. They have been applied successfully to different combinatorial optimization problems. Their performance is very promising on small problem instances. However, the algorithms' time requirements increase and solution quality decreases for large problem instances. It is therefore crucial to reduce the time requirement and at the same time increase the solution quality when solving large combinatorial optimization problems with ACO algorithms. This paper introduces a Local Search based ACO algorithm (LSACO), a new algorithm for solving large combinatorial optimization problems. The basis of LSACO is to apply an adaptive local search method to improve solution quality. This local search automatically determines the number of edges to exchange during the execution of the algorithm. LSACO also applies a pheromone updating rule and constructs solutions in a new way so as to decrease the convergence time. The performance of LSACO has been evaluated on a number of benchmark combinatorial optimization problems, and the results are compared with several existing ACO algorithms. Experimental results show that LSACO is able to produce good-quality solutions with a higher rate of convergence for most of the problems.
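The simplest instance of edge-exchange local search is 2-opt on a tour. LSACO adapts the number of edges exchanged during the run; the sketch below fixes it at two for clarity, so it shows the building block rather than the adaptive mechanism itself.

```python
def tour_length(tour, dist):
    """Total length of a closed tour under the distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    """2-opt edge exchange: repeatedly reverse a segment whenever the
    reversal shortens the tour, until no improving move remains."""
    best, improved = list(tour), True
    while improved:
        improved = False
        n = len(best)
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if tour_length(cand, dist) < tour_length(best, dist):
                    best, improved = cand, True
    return best
```

In a hybrid like LSACO, each ant-constructed tour would be passed through such a local search before the pheromone update, so the pheromone trails are reinforced by locally optimal solutions.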
IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2009
This paper presents a new algorithm, called the adaptive merging and growing algorithm (AMGA), for designing artificial neural networks (ANNs). This algorithm merges and adds hidden neurons during the training process of ANNs. The merge operation introduced in AMGA is a mixed-mode operation, equivalent to pruning two neurons and adding one neuron. Unlike most previous studies, AMGA puts emphasis on autonomous functioning in the design process of ANNs. This is the main reason why AMGA uses an adaptive strategy, rather than a predefined fixed one, in designing ANNs. The adaptive strategy merges or adds hidden neurons based on the learning ability of hidden neurons and the training progress of ANNs. In order to reduce the amount of retraining after modifying ANN architectures, AMGA prunes hidden neurons by merging correlated hidden neurons and adds hidden neurons by splitting existing ones. The proposed AMGA has been tested on a number of benchmark problems in machine learning and ANNs, including the breast cancer, Australian credit card assessment, diabetes, gene, glass, heart, iris, and thyroid problems. The experimental results show that AMGA can design compact ANN architectures with good generalization ability compared to other algorithms.
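Merging correlated hidden neurons can be sketched as follows: find the pair whose activation sequences over the training set are most correlated, then replace them with a single neuron. Averaging the incoming weights and summing the outgoing weights is a common heuristic for the replacement; AMGA's exact merge rule may differ, and all names here are illustrative.

```python
import math

def activation_correlation(a, b):
    """Pearson correlation between two neurons' activation sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def merge_most_correlated(w_in, w_out, activations):
    """Merge the most correlated pair of hidden neurons: average their
    incoming weight vectors, sum their outgoing weights, and keep the
    remaining neurons unchanged (a heuristic sketch, not AMGA itself)."""
    h = len(w_in)
    i, j = max(((a, b) for a in range(h) for b in range(a + 1, h)),
               key=lambda p: abs(activation_correlation(
                   activations[p[0]], activations[p[1]])))
    merged_in = [(x + y) / 2 for x, y in zip(w_in[i], w_in[j])]
    merged_out = w_out[i] + w_out[j]
    new_in = [merged_in] + [w_in[k] for k in range(h) if k not in (i, j)]
    new_out = [merged_out] + [w_out[k] for k in range(h) if k not in (i, j)]
    return new_in, new_out
```

Because the merged neuron approximates the behavior of the pair it replaces, the network needs far less retraining than it would after pruning a neuron outright, which is the point the abstract makes.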
IEEE Computational Intelligence Magazine, 2008
Artificial neural networks (ANNs) and evolutionary algorithms (EAs) are both abstractions of natural processes. Since the early 1990s, they have been combined into a general and unified computational model of adaptive systems, i.e., evolutionary ANNs (EANNs) [52], [54], to utilize the learning power of ANNs and the adaptive capabilities of EAs. EANNs refer to a special class of ANNs in which evolution is another fundamental form of adaptation in addition to learning [55]-[59]. The two forms of adaptation, i.e., evolution and learning, make EANNs' adaptation to a dynamic environment much more effective and efficient. EANNs can be regarded as a general framework for adaptive systems [59], i.e., systems that can change their architectures and learning rules adaptively without human intervention.
IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2009
The generalization ability of artificial neural networks (ANNs) is greatly dependent on their architectures. Constructive algorithms provide an attractive automatic way of determining a near-optimal ANN architecture for a given problem. Several such algorithms have been proposed in the literature and have shown their effectiveness. This paper presents a new constructive algorithm (NCA) for automatically determining ANN architectures. Unlike most previous studies on determining ANN architectures, NCA puts emphasis on both architectural adaptation and functional adaptation in its architecture determination process. It uses a constructive approach to determine the number of hidden layers in an ANN and the number of neurons in each hidden layer. To achieve functional adaptation, NCA trains hidden neurons in the ANN using different training sets, created by employing a concept similar to that used in boosting algorithms. The purpose of using different training sets is to encourage hidden neurons to learn different parts or aspects of the training data so that the ANN can learn the whole training data better. In this paper, the convergence and computational issues of NCA are analytically studied. The computational complexity of NCA is found to be O(W × Pt × τ), where W is the number of weights in the ANN, Pt is the number of training examples, and τ is the number of training epochs. This complexity has the same order as that of the backpropagation learning algorithm for training a fixed ANN architecture. A set of eight classification and two approximation benchmark problems was used to evaluate the performance of NCA. The experimental results show that NCA can produce ANN architectures with fewer hidden neurons and better generalization ability than existing constructive and nonconstructive algorithms.
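The boosting-like idea of giving each new hidden neuron its own training set can be sketched as error-weighted resampling: examples the current network gets wrong are drawn more often. This is an illustration of the concept only; NCA's actual construction of the training sets may differ, and `boosted_training_set` is a name invented here.

```python
import random

def boosted_training_set(examples, errors, size, seed=0):
    """Resample a training set with probability proportional to each
    example's current error, so a newly added hidden neuron focuses on
    the parts of the data the network has not yet learned (the concept
    NCA borrows from boosting; details here are illustrative)."""
    rng = random.Random(seed)
    total = sum(errors)
    if total:
        weights = [e / total for e in errors]
    else:  # nothing misclassified: fall back to uniform sampling
        weights = [1 / len(examples)] * len(examples)
    return rng.choices(examples, weights=weights, k=size)
```

Each hidden neuron trained on such a set specializes on a different region of the data, which is how the ensemble of neurons covers the whole training set.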
2007 10th International Conference on Computer and Information Technology, 2007
This paper presents a completely new approach to fulfilling both the local and global optimization goals of the conventional evolutionary algorithm simultaneously. The basis of the proposed framework is to repeatedly alternate three different stages of evolution, each ...
2013 Second International Conference on Robot, Vision and Signal Processing, 2013
In order to generate or tune fuzzy rules, neuro-fuzzy learning algorithms with Gaussian-type membership functions based on the gradient-descent method are well known. In this paper, we propose a new learning approach, the quaternion neuro-fuzzy learning algorithm. This method is an extension of the conventional method to four-dimensional space using a quaternion neural network that maps quaternions to real values. The inputs, antecedent membership functions, and consequent singletons are quaternions, and the output is real-valued. Four-dimensional input can be represented better by quaternions than by real values. We compared the proposed method with the conventional one on several function identification problems, and the results revealed that it outperformed its counterpart: in the best cases, the number of rules was reduced from 625 to 5, the number of epochs to one fortieth, and the error to one tenth.
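The algebra underlying any quaternion neural network is the (non-commutative) Hamilton product. A minimal sketch of that product, with quaternions as (w, x, y, z) tuples; this shows only the arithmetic the approach builds on, not the network or the learning rule.

```python
def qmul(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples.
    Note the product is non-commutative: qmul(p, q) != qmul(q, p) in general."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2)
```

Treating a four-dimensional input as a single quaternion lets one quaternion weight rotate and scale all four components at once, which is the representational advantage the abstract points to.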
Neural networks, 2005
To study the regularity and complexity of autonomous behavior, the flow of sensory information obtained in autonomous mobile robots under various conditions was analyzed as a complex system. The sensory information time series Xn was collected from a miniature mobile robot ...
Neural Networks, 2001
This paper describes the cascade neural network design algorithm (CNNDA), a new algorithm for designing compact, two-hidden-layer artificial neural networks (ANNs). This algorithm automatically determines an ANN's architecture together with its connection weights. The ...