Papers by Balakrushna Tripathy
Electronics
Deep generative models, such as deep Boltzmann machines, focused on models that provided parametric specification of probability distribution functions. Such models are trained by maximizing intractable likelihood functions and therefore require numerous approximations to the likelihood gradient. This underlying difficulty led to the development of generative machines such as generative stochastic networks, which do not represent the likelihood functions explicitly, unlike the earlier models, but are trained with exact backpropagation rather than the numerous approximations. These models use piecewise linear units that have well-behaved gradients. Generative machines were further extended with the introduction of adversarial networks, leading to the generative adversarial nets (GANs) model by Goodfellow in 2014. The estimation process in GANs uses two multilayer perceptrons, called the generative model and the discriminative model. These are learned jointly by alternating...
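As an illustration of the alternating training described above, here is a minimal GAN sketch in PyTorch on a 1-D toy distribution. The network sizes, learning rates and toy data are assumptions for the example, not details taken from the paper.

```python
# Minimal GAN sketch in PyTorch: two MLPs (generator G, discriminator D)
# trained by alternating gradient updates. Toy 1-D Gaussian target;
# sizes and learning rates are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
noise_dim, data_dim = 8, 1

G = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # stand-in for real data: samples from N(2, 0.5)
    return 2.0 + 0.5 * torch.randn(n, data_dim)

for step in range(2000):
    # discriminator update: push real samples towards 1, generated samples towards 0
    x_real = real_batch()
    x_fake = G(torch.randn(64, noise_dim)).detach()
    loss_d = bce(D(x_real), torch.ones(64, 1)) + bce(D(x_fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # generator update: try to fool the discriminator (generated samples towards 1)
    x_fake = G(torch.randn(64, noise_dim))
    loss_g = bce(D(x_fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("mean of generated samples:", G(torch.randn(1000, noise_dim)).mean().item())
```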
Advances in Intelligent Systems and Computing, 2016
To compare the performance of the clustering algorithm on two data processing architectures, this paper first presents implementations of the k-means clustering algorithm on two big data architectures. We then focus on the differences in the theoretical performance of the k-means algorithm on the two architectures from a mathematical point of view. The theoretical analysis shows that the Spark architecture is superior to Hadoop in terms of average execution time and I/O time. Finally, a text data set of social networking site users' behaviors is employed to conduct algorithm experiments. The results show that Spark requires significantly less execution time and I/O time than MapReduce for the k-means algorithm. The theoretical analysis and the implementation techniques of the big data algorithm proposed in this paper are a good reference for the application of big data technology.
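For reference, the computational core that both architectures distribute is the standard Lloyd iteration of k-means. The NumPy sketch below shows that iteration on a single machine with made-up data; it is independent of the Hadoop and Spark implementations analysed in the paper.

```python
# Single-machine k-means (Lloyd's algorithm) in NumPy, the same iteration that the
# Hadoop and Spark implementations distribute; data and k are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
k = 2
centroids = X[rng.choice(len(X), k, replace=False)]

for _ in range(20):
    # assignment step: each point goes to its nearest centroid (Euclidean distance)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # update step: each centroid becomes the mean of its assigned points
    new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    if np.allclose(new_centroids, centroids):
        break
    centroids = new_centroids

print(centroids)
```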
In electrical engineering educational and industrial applications it is often required for students to build and analyze well-known circuits according to their syllabus. In this paper a colored-graph-isomorphism-based model that is being developed to match two electrical circuits is discussed. The procedure involves two steps: first, storing the circuit in the database in the form of a colored graph, and then matching the user's input circuit against it using graph isomorphism. The graph isomorphism problem is in NP and has no known polynomial-time solution in general. However, in this particular problem of circuits, the colors and weights used in the nodes vary and the graphs generated are sparsely connected, so the algorithm runs in a reasonable time.
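A small illustration of the matching step (not the authors' implementation) using NetworkX: circuits stored as graphs whose nodes carry a "color" attribute standing for the component type are compared with an isomorphism test that only maps nodes of equal color onto each other. The toy graphs and attribute name are assumptions for the example.

```python
# Matching two circuits stored as colored graphs with NetworkX.
# The 'color' node attribute stands for the component type; graphs are toy examples.
import networkx as nx
from networkx.algorithms import isomorphism

def circuit_graph(edges, colors):
    g = nx.Graph()
    g.add_edges_from(edges)
    nx.set_node_attributes(g, colors, "color")
    return g

# stored reference circuit and a user-drawn circuit with relabelled nodes
g1 = circuit_graph([(1, 2), (2, 3), (3, 1)], {1: "R", 2: "C", 3: "V"})
g2 = circuit_graph([("a", "b"), ("b", "c"), ("c", "a")], {"a": "C", "b": "V", "c": "R"})

# isomorphism test that only maps nodes with equal colors onto each other
node_match = isomorphism.categorical_node_match("color", default=None)
print(nx.is_isomorphic(g1, g2, node_match=node_match))  # True: same topology and components
```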
Lecture Notes in Computer Science, 2013
Data clustering has found its usefulness in various fields. Algorithms are mostly developed using the Euclidean distance, but it has several drawbacks which may be rectified by using a kernel distance instead. In this paper, we propose a kernel-based rough-fuzzy C-means (KRFCM) algorithm and use modified versions of the performance indexes (DB and D) obtained by replacing the distance function with the kernel function. We provide a comparative analysis of RFCM with KRFCM by computing their DB and D index values. The analysis is based upon both numerical as well as image datasets. The results establish that the proposed algorithm outperforms the existing one.
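The kernel substitution behind KRFCM can be sketched as follows, as a generic illustration rather than the paper's exact update rules: with a Gaussian kernel $K(x,v) = \exp(-\lVert x-v\rVert^2 / 2\sigma^2)$, the induced squared distance in feature space is $K(x,x) - 2K(x,v) + K(v,v) = 2(1 - K(x,v))$, and this quantity replaces the squared Euclidean distance in the fuzzy membership update. The parameters sigma and m below are illustrative.

```python
# Gaussian-kernel-induced distance replacing the Euclidean distance in a
# fuzzy C-means style membership update; sigma and m are illustrative parameters.
import numpy as np

def kernel_distance_sq(x, v, sigma=1.0):
    # ||phi(x) - phi(v)||^2 = 2 * (1 - K(x, v)) for the RBF kernel, since K(x, x) = 1
    k = np.exp(-np.sum((x - v) ** 2) / (2.0 * sigma ** 2))
    return 2.0 * (1.0 - k)

def memberships(X, centroids, m=2.0, sigma=1.0):
    # standard fuzzy C-means membership formula with the kernel distance plugged in
    n, c = len(X), len(centroids)
    d = np.array([[kernel_distance_sq(x, v, sigma) + 1e-12 for v in centroids] for x in X])
    u = np.zeros((n, c))
    for i in range(n):
        for j in range(c):
            u[i, j] = 1.0 / np.sum((d[i, j] / d[i, :]) ** (1.0 / (m - 1)))
    return u

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
V = np.array([[0.0, 0.0], [5.0, 5.0]])
print(memberships(X, V).round(3))
```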
International Journal of Online Engineering (iJOE), 2011
Modern semiconductor manufacturing is recognized as a specialized field in electrical engineering curricula. Teaching micro- and nanoelectronics in a university environment is a challenging task, and a new framework is needed. An integrated measurement-based microelectronics laboratory along with a technology computer aided design (TCAD) simulation laboratory has been developed and is in use for imparting hands-on laboratory experience to students. An internet-based laboratory management system for monitoring and control of a real-time measurement system interfaced via a dedicated local computer is discussed. TCAD process/device simulations generate a vast amount of data, and handling of the data is a challenge for web-based applications. A simple architecture for the implementation, visualization and analysis of the data generated from process/device simulations is reported.
2013 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2013
In an attempt to incorporate user knowledge in order to decide about the equality of sets, the concepts of approximate equalities using rough sets were introduced. These notions have been generalised in several ways, and very recently [1] extended the four types of approximate equalities by using rough fuzzy sets instead of only rough sets. To be precise, a concept of leveled approximate equality was introduced and its properties were studied. In this paper we extend this work with case studies to illustrate the applications of the concepts and to compare them. We also introduce and discuss the rough measures of basic sets and fuzzy sets, and interpret the four types of approximate equalities in terms of the accuracy measure as well as the rough measures. The analysis provides a clear distinguishing notion in terms of these measures.
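As background for the accuracy and rough measures mentioned above, here is a toy Python sketch of Pawlak's lower and upper approximations and the accuracy measure, computed from the partition induced by an equivalence relation. The universe, partition and target set are made up for the example.

```python
# Lower/upper approximation and accuracy measure of a rough set,
# computed from the partition induced by an equivalence relation (toy data).
def approximations(partition, target):
    target = set(target)
    lower = set().union(*[b for b in partition if set(b) <= target])   # blocks fully inside
    upper = set().union(*[b for b in partition if set(b) & target])    # blocks that intersect
    return lower, upper

partition = [{1, 2}, {3, 4}, {5, 6}]   # equivalence classes of the universe
X = {1, 2, 3}                          # target set to be approximated

lower, upper = approximations(partition, X)
accuracy = len(lower) / len(upper)     # accuracy measure: |lower| / |upper|
print(lower, upper, accuracy)          # {1, 2} {1, 2, 3, 4} 0.5
```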
Lecture Notes in Engineering and Computer Science, 2010
Abstract—Brain signal recognition has been a problem that computers are not efficient at. It is difficult to find the fine variations in the signal because the signals are too lengthy. Today, efficient brain signal recognition is limited to detecting different brain diseases such as epilepsy, tumor, encephalitis, and sleep disorders, using hardware like the electroencephalogram (EEG), wherein the strokes are directly detected and the brain signal is recognized. But to convert a brain signal document to digital text, we have to extract the EEG signals and ...
World Applied Sciences Journal
The electroencephalogram (EEG) is the brain signal processing modality that allows gaining an understanding of the complex internal mechanisms of the brain, and irregular brain waves have been shown to be associated with specific brain disorders. The study of brain waves plays an essential part in the analysis of different brain disorders. Currently there are many people in the world who are suffering from severe brain-related illnesses. The physical state or condition of the patient can be assessed by analysing his EEG data. Doctors thus feel a need to check on the EEG data of a patient from time to time. This is where the proposed system comes into play. It provides a means for doctors to analyse the patient's EEG data without direct interaction. The objective of this research work is the classification of epileptic risk levels from the EEG signal and a performance analysis of Support Vector Machine (SVM) and Minimum Relative Entropy (MRE) in optimizing the fuzzy outputs in the classification of epileptic risk levels from the EEG signal. The fuzzy pre-classifier is used to classify the risk levels of epilepsy based on extracted parameters such as energy, variance, peaks, sharp and spike waves, duration, events, and covariance from the EEG signals of the patient. Support Vector Machine and Minimum Relative Entropy are applied to the classified data to identify the optimized risk level (singleton) which indicates the patient's risk level. The effectiveness of the above approaches is compared based on benchmark parameters such as the Performance Index (PI) and the Quality Value (QV).
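To illustrate only the SVM stage (not the fuzzy pre-classifier or the Minimum Relative Entropy optimizer of the paper), the sketch below trains a scikit-learn SVM on synthetic feature vectors standing in for the extracted EEG parameters such as energy, variance and peak counts. The feature values, class structure and hyperparameters are assumptions for the example.

```python
# SVM classification of synthetic "EEG feature" vectors into risk levels;
# the features and labels are randomly generated stand-ins, not real EEG data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_per_class, n_features = 100, 8          # e.g. energy, variance, peaks, spikes, ...
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features)) for c in range(3)])
y = np.repeat([0, 1, 2], n_per_class)     # three epileptic risk levels (illustrative)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```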
Abstract—Non-textual image recognition is one of the important aspects of multimedia. Image recognition is also termed computer vision and is used in various areas and devices such as robotics. The ANN (Artificial Neural Network) is one of the great tools involved in image recognition. So far, ANNs have been used in applications involving computer vision. Although artificial neural network based systems are distinguished for their ability to cope with problems in pattern recognition and computer vision in general, they are rather impotent when faced with the dimensionality of problems in the image understanding domain. That makes them unable to deal sufficiently with problems such as rotation, distortion, clutter and scale variances. In this paper, we deal with image recognition using the technology known as CANN (Cellular Associative Neural Networks). The main thing which makes the difference between the traditional neural networks a...
Remote laboratory is an innovative approach to create and provide laboratory experience to geographically dispersed students from anywhere at any time. One of the most important aspects of remote laboratories is to provide the user maximum mobility and freedom to perform experiments. Apart from PC-based remotely triggered laboratories to enhance technical education, mobile devices can play a major role in wider implementation of the laboratory for hardware-based remote experimentation. In this paper, different techniques, such as Adobe Flash Lite, HTML5 and SMS, for developing platforms for mobile devices are studied and compared. Index Terms — m-learning, laboratory education, flash
Data mining is the extraction of intriguing (constructive, relevant, previously unexplored and considerably valuable) patterns or information from very large stacks of data or different datasets. In other words, it is the experimental exploration of the associations, links, and mainly the overall patterns that prevail in large datasets but are hidden or unknown. To explore the performance analysis using different clustering techniques we used the R language. R is a tool which allows the user to analyse the data from various perspectives and angles, in order to get proper experimental results and to derive meaningful relationships. In this paper, we study, analyse and compare various algorithms and their techniques used for cluster analysis using the R language. Our aim in this paper is to present a comparison of 5 different clustering algorithms and to validate those algorithms in terms of internal a...
Soft Computing Systems
Artificial neural networks excel at specific tasks like image recognition, sequence learning, and machine translation, but the direction of research has moved towards the creation of more general-purpose neural network architectures. Recently, DeepMind introduced the Differentiable Neural Computer (DNC), with an external memory system that is capable of working on complex data structures. A DNC can infer from graph problems, solve block puzzles using reinforcement learning, and so on. The DNC uses an LSTM as the controller network that manipulates the memory matrix. In this paper, we introduce a change to the DNC architecture by replacing the LSTM network with a multiplicative LSTM and measure the performance of the improved model by training it on three different tasks: a question answering task using the bAbI dataset, character-level modelling using the Harry Potter text, and planning search using the air cargo problem. We compare the performance against the previous model to determine its behavior.
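The controller swap is the only architectural change; as a rough sketch following the usual multiplicative LSTM formulation (Krause et al.), and not the exact code used in the paper, the mLSTM forms an intermediate state m_t = (W_mx x_t) * (W_mh h_{t-1}) and uses it in place of h_{t-1} when computing the LSTM gates. Layer sizes below are illustrative.

```python
# Minimal multiplicative LSTM (mLSTM) cell in PyTorch: an intermediate state
# m_t = (W_mx x_t) * (W_mh h_{t-1}) replaces h_{t-1} in the gate computations.
# Dimensions and initialization are illustrative.
import torch
import torch.nn as nn

class MultiplicativeLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.wmx = nn.Linear(input_size, hidden_size, bias=False)
        self.wmh = nn.Linear(hidden_size, hidden_size, bias=False)
        self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)

    def forward(self, x, state):
        h, c = state
        m = self.wmx(x) * self.wmh(h)                     # multiplicative intermediate state
        i, f, o, g = self.gates(torch.cat([x, m], dim=1)).chunk(4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g                                  # standard LSTM cell update
        h = o * torch.tanh(c)
        return h, c

cell = MultiplicativeLSTMCell(input_size=16, hidden_size=32)
x = torch.randn(4, 16)                                     # batch of 4 inputs
h0, c0 = torch.zeros(4, 32), torch.zeros(4, 32)
h1, c1 = cell(x, (h0, c0))
print(h1.shape)                                            # torch.Size([4, 32])
```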
In this article we introduce the notions of logarithmic density and uniform density for subsets of $\mathbb{N}\times\mathbb{N}$. We introduce different types of I-convergent double sequences and I-Cauchy double sequences. Their different properties like solidity, symmetricity, completeness, denseness, etc. are studied in detail. 1. Introduction and Background. Throughout the article a double sequence $A$ is denoted by $\langle a_{nk}\rangle$, i.e. a doubly infinite array of complex numbers $a_{nk}$, $n,k\in\mathbb{N}$. Throughout the article $w^2$, $(\ell_\infty)^2$, $c^2$, $(c_0)^2$, $(\bar c)^2$, $(\bar c_0)^2$, $c^R$, $(c_0)^R$, $(\bar c)^R$, $(\bar c_0)^R$ denote the spaces of all, bounded, convergent in Pringsheim's sense, null in Pringsheim's sense, statistically convergent in Pringsheim's sense, statistically null in Pringsheim's sense, regularly convergent, regularly null, regularly statistically convergent and regularly statistically null double sequences respectively. Throughout, $\chi_E$ denotes the characteristic function of $E$. The notion of statistical convergence was intro...
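For orientation, the new logarithmic and uniform densities generalize the standard density notions for double index sets. The block below states the usual definitions of natural density for a set $K \subseteq \mathbb{N}\times\mathbb{N}$ and of statistical convergence in Pringsheim's sense, as commonly found in the literature rather than quoted from the paper.

```latex
% Standard background definitions (not the new densities of the paper):
% natural density of K in N x N, and statistical convergence of a double sequence.
\[
  \delta_2(K) \;=\; \lim_{m,n\to\infty} \frac{\bigl|\{(j,k)\in K : j\le m,\ k\le n\}\bigr|}{mn},
  \qquad K \subseteq \mathbb{N}\times\mathbb{N},
\]
\[
  \langle a_{nk}\rangle \ \text{is statistically convergent to } L \iff
  \delta_2\bigl(\{(n,k) : |a_{nk}-L|\ge \varepsilon\}\bigr) = 0
  \quad \text{for every } \varepsilon > 0 .
\]
```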
There are several algorithms used for data clustering, and as imprecision has become an inherent part of datasets nowadays, many such algorithms have been developed using fuzzy sets, rough sets, intuitionistic fuzzy sets, and their hybrid models. In order to increase the flexibility of conventional rough approximations, a probability-based rough set concept was introduced in the 90s, namely decision theoretic rough sets (DTRS). Using this model, Li et al. extended the conventional rough c-means. The Euclidean distance has been used to measure the similarity among data, but as has been observed it performs well only on linearly separable data; as a solution, several kernel distances are used in the literature. In fact, we have selected three of the most popular kernels and developed an improved kernelized rough c-means algorithm. We compare the results with the basic decision theoretic rough c-means. For the comparison we have used three datasets, namely Iris, Wine and Glas...
The aim of our research is to combine the conventional clustering algorithms based on rough sets and fuzzy sets with metaheuristics like the firefly algorithm and the fuzzy firefly algorithm. Image segmentation is carried out using the resultant hybrid clustering algorithms. The performance of the proposed algorithms is compared with numerous contemporary clustering algorithms and their firefly-fused counterparts. We further bolster the performance of our proposed algorithms by using a Gaussian kernel in place of the traditional Euclidean distance measure. We test the performance of our algorithms using two performance indices, namely the DB (Davies–Bouldin) index and the Dunn index. Our experimental results highlight the advantages of using metaheuristics and kernels over the existing clustering algorithms.
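The metaheuristic ingredient can be summarized by the standard firefly move, shown below as a generic sketch with illustrative parameters rather than the hybrid clustering code of the paper: each firefly moves towards every brighter firefly with an attractiveness that decays as exp(-gamma r^2), plus a small random step.

```python
# Standard firefly algorithm step minimizing a toy objective; parameters are illustrative.
import numpy as np

def firefly_step(pos, objective, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    brightness = -np.array([objective(p) for p in pos])        # brighter = lower objective
    new_pos = pos.copy()
    for i in range(len(pos)):
        for j in range(len(pos)):
            if brightness[j] > brightness[i]:                  # move firefly i towards brighter j
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)             # attractiveness decays with distance
                new_pos[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(pos.shape[1]) - 0.5)
    return new_pos

rng = np.random.default_rng(0)
pos = rng.uniform(-5, 5, size=(15, 2))                         # 15 fireflies in 2-D
sphere = lambda p: np.sum(p ** 2)                              # toy objective (minimum at origin)
for _ in range(50):
    pos = firefly_step(pos, sphere, rng=rng)
print("best objective:", min(sphere(p) for p in pos))
```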
Our main concern is dealing with the disadvantages of old clustering algorithms and coming up with a method that can generate clusters which produce optimal results when compared with previous approaches. Firstly, our focus is on analyzing the limitations of the most widely used clustering algorithm; here we choose the K-means clustering algorithm for this purpose. To provide optimal results from the initial stage of the algorithm we use the firefly algorithm, a bio-inspired algorithm that generates optimal minimum or maximum values based on certain parameters. To avoid the strictness of the boundary area in the K-means algorithm, we choose the rough C-means algorithm, which provides some flexibility during the clustering process. Our proposed method is efficient both in terms of time and space. We use efficient data structures which help us to avoid wasting memory during computation, and our algorithm utilizes the maximum resources of the machine to make the execution rate as fast as ...
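The flexibility on the boundary area mentioned above comes from the rough C-means assignment and centroid update, sketched below in the style of Lingras and West's rough k-means with illustrative weights and thresholds; this is a generic sketch, not the authors' firefly-seeded implementation. Objects almost equally close to two centroids go to the boundary regions, and each centroid is a weighted combination of its lower-approximation mean and boundary mean.

```python
# One rough C-means assignment and centroid update (Lingras/West style); weights are illustrative.
import numpy as np

def rough_cmeans_step(X, centroids, w_lower=0.7, eps=0.5):
    k = len(centroids)
    lower = [[] for _ in range(k)]
    boundary = [[] for _ in range(k)]
    for x in X:
        d = np.linalg.norm(x - centroids, axis=1)
        nearest = d.argmin()
        # clusters whose distance is almost as small as the nearest one
        close = [j for j in range(k) if j != nearest and d[j] - d[nearest] <= eps]
        if close:                                   # ambiguous object -> boundary regions
            for j in [nearest] + close:
                boundary[j].append(x)
        else:                                       # unambiguous object -> lower approximation
            lower[nearest].append(x)
    new_centroids = []
    for j in range(k):
        lo = np.mean(lower[j], axis=0) if lower[j] else centroids[j]
        bd = np.mean(boundary[j], axis=0) if boundary[j] else None
        # weighted combination of lower-approximation mean and boundary mean
        new_centroids.append(lo if bd is None else w_lower * lo + (1 - w_lower) * bd)
    return np.array(new_centroids)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
centroids = X[rng.choice(len(X), 2, replace=False)]
for _ in range(10):
    centroids = rough_cmeans_step(X, centroids)
print(centroids.round(2))
```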
Sensors
One of the major health concerns for human society is skin cancer. When the pigments producing skin color turn carcinogenic, this disease gets contracted. A skin cancer diagnosis is a challenging process for dermatologists, as many skin cancer pigments may appear similar in appearance. Hence, early detection of lesions (which form the base of skin cancer) is critical and useful to completely cure patients suffering from skin cancer. Significant progress has been made in developing automated tools for the diagnosis of skin cancer to assist dermatologists. The worldwide acceptance of artificial intelligence-supported tools has permitted usage of the enormous collection of images of lesions and benign sores confirmed by histopathology. This paper performs a comparative analysis of six different transfer learning nets for multi-class skin cancer classification on the HAM10000 dataset. We used replication of images of classes with low frequencies to counter the im...
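A minimal sketch of the transfer-learning setup for one backbone only (the paper compares six such nets): an ImageNet-pretrained ResNet-50 from torchvision is frozen and its final layer replaced by a 7-class head matching the HAM10000 label set. The optimizer, learning rate and dummy batch are assumptions for the example.

```python
# Transfer learning sketch: ImageNet-pretrained ResNet-50 with a new 7-class head
# for HAM10000-style skin lesion classification; hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7  # HAM10000 has 7 lesion classes

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# dummy batch standing in for preprocessed 224x224 lesion images and labels
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```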
In this paper, we combine two famous fuzzy data clustering algorithms, fuzzy C-means and intuitionistic fuzzy C-means, with a metaheuristic called the fuzzy firefly algorithm. The resultant hybrid clustering algorithms (FCMFFA and IFCMFFA) are used for image segmentation. We compare the performance of the proposed algorithms with FCM, IFCM, FCMFA (fuzzy C-means fused with the firefly algorithm), and IFCMFA (intuitionistic fuzzy C-means fused with the firefly algorithm). The centroid values returned by the firefly algorithm and the fuzzy firefly algorithm are compared. Two performance indices, namely the Davies–Bouldin (DB) index and the Dunn index, have also been used to judge the quality of the clustering output. Different types of images have been used for the empirical analysis. Our experimental results show that the proposed clustering algorithms outperform the existing contemporary clustering algorithms.
J. Educ. Technol. Soc., 2013
This paper discusses the technical considerations and pedagogical issues that influence the design and classification of hardware-based experiments and their implementation in an internet-based remote laboratory environment for technical education. The architecture necessary to maintain academic integrity and impart quality laboratory education via remote laboratories, involving the instructor, faculty, and students, is discussed. Online laboratory teaching and learning may be augmented by adding voice/multimedia protocols for teacher-student interaction. Pedagogical issues conducive to online laboratory instruction, incorporating effective remote learning facilities that mimic the face-to-face interaction in a conventional laboratory, are discussed. An online remote laboratory management system ensuring quality of service, security, automated laboratory report evaluation, and monitoring of student progress has been proposed. The main contribution of this paper is to ...