Medical image watermarking has become a widely studied topic in recent years. The method used to embed data into medical images must be reversible, because tampering caused by irreversible embedding can lead to inaccurate diagnosis. Medical image watermarking should also be able to identify a tampered area and recover it. This research introduces a method for reversible watermarking of medical images that can embed data, detect any tampering that occurs, locate the tampered area, and then recover that area of the medical image. The medical image is segmented into three sections. The first is the region of interest (ROI), used for reversible embedding of patient records. The second is the region of non-interest (RONI), used for reversible embedding of localization data for the tampered and recovery areas. The third is the border area, used for embedding the coordinates of the selected ROI using Least Significant Bits (LSB). The proposed method has a capacity of up to 6 bpp of data and 12% of the ROI area, with very good visual quality of up to 50.86 dB. Provided there is no tampering, the extracted watermarked image is identical to the original image except for the border area. If the ROI is tampered with, the tampered area can be identified and restored with results reaching up to +27.61 dB, with only a small difference in image quality (2.86 dB) compared to the original image after extraction.
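The border-area embedding described above uses Least Significant Bits. As a minimal sketch of the general LSB idea (not the paper's exact scheme; function names and the toy payload are illustrative), each payload bit overwrites the lowest bit of one pixel, changing its value by at most 1:

```python
import numpy as np

def lsb_embed(pixels, bits):
    """Embed a bit sequence into the least significant bits of pixels."""
    out = pixels.copy()
    flat = out.ravel()              # view into the copy, not the original
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b  # clear the LSB, then set the payload bit
    return out

def lsb_extract(pixels, n_bits):
    """Read back the first n_bits least significant bits."""
    return [int(p & 1) for p in pixels.ravel()[:n_bits]]

# round trip on a toy 8-bit "border" region
border = np.array([[120, 121], [130, 131]], dtype=np.uint8)
payload = [1, 0, 1, 1]              # e.g. ROI corner coordinates, bit-encoded
marked = lsb_embed(border, payload)
assert lsb_extract(marked, 4) == payload
```

Because only the lowest bit of each carrier pixel changes, the distortion per pixel is at most one grey level, which is why LSB embedding in a non-diagnostic border area is a common choice.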
Hepatitis is one of the major health problems and can progress to chronic hepatitis and cancer. Currently, computer-based diagnosis is commonly used in medical examination. The diagnosis is made using a disease dataset as a reference for decisions. However, the dataset was incomplete because many instances contained missing values, which can bias the results of the analysis. One method of handling missing values is Multiple Imputation. The hepatitis dataset has an arbitrary pattern of missing values, which can be handled using Markov Chain Monte Carlo (MCMC) and Fully Conditional Specification (FCS) as Multiple Imputation algorithms. This research conducted an experiment comparing combinations of Multiple Imputation algorithms and Principal Component Analysis (PCA) as instance selection. Instance selection was applied to reduce the data by selecting the variables that contribute most to the dataset. The goal was to improve the accuracy of analysis on data with missing values in the arbitrary pattern. The results showed that FCS-PCA gave the best performance, with the highest accuracy (98.80%) and the lowest error rate (0.0116).
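MCMC and FCS are full multiple-imputation algorithms; as a much simpler stand-in that shows the same impute-then-reduce pipeline shape, the sketch below fills missing values with single mean imputation and then reduces dimensionality with PCA via SVD. All names and the toy matrix are illustrative, not from the paper:

```python
import numpy as np

def mean_impute(X):
    """Replace NaNs in each column with that column's observed mean
    (a single-imputation stand-in for MCMC/FCS multiple imputation)."""
    X = X.astype(float).copy()
    for j in range(X.shape[1]):
        col = X[:, j]                       # view into the copy
        col[np.isnan(col)] = np.nanmean(col)
    return X

def pca_reduce(X, k):
    """Project rows onto the top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)                 # center the columns
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

X = np.array([[1.0, 2.0, np.nan],
              [2.0, np.nan, 6.0],
              [3.0, 4.0, 8.0]])
X_filled = mean_impute(X)
Z = pca_reduce(X_filled, 2)
assert not np.isnan(X_filled).any() and Z.shape == (3, 2)
```

True multiple imputation would generate several completed datasets and pool the analyses across them; this single-pass version only illustrates where imputation and PCA sit in the pipeline.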
Journal of Engineering and Technological Sciences, Nov 30, 2016
Nowadays, the Internet is one of the most important things in a human's life. Unlimited access to information allows people to gather any data related to their needs. However, this sophisticated technology also has a bad side, for instance negative content such as images containing pornography. This paper presents the development of a skin classification scheme as part of a negative content filtering system. The data are trained on grey-level co-occurrence matrix (GLCM) texture features and then used to classify skin color with a support vector machine (SVM). Tests on skin classification in the skin and non-skin categories achieved accuracies of 100% and 97.03%, respectively. These results indicate that the proposed scheme has the potential to be implemented as part of a negative content filtering system.
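GLCM texture features like those used here count how often pairs of grey levels co-occur at a fixed pixel offset, then summarize the resulting joint distribution. Libraries such as scikit-image provide this (`graycomatrix`/`graycoprops`), but a self-contained numpy sketch for one horizontal offset makes the computation concrete (the toy image and feature subset are illustrative):

```python
import numpy as np

def glcm(img, levels):
    """Grey-level co-occurrence matrix for the horizontal (0, 1) offset."""
    M = np.zeros((levels, levels), dtype=float)
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        M[i, j] += 1
    return M / M.sum()                      # normalize to joint probabilities

def glcm_features(P):
    """Three common GLCM summary statistics."""
    i, j = np.indices(P.shape)
    return {
        "energy":      float((P ** 2).sum()),
        "contrast":    float((P * (i - j) ** 2).sum()),
        "homogeneity": float((P / (1.0 + np.abs(i - j))).sum()),
    }

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
P = glcm(img, levels=4)
feats = glcm_features(P)
assert abs(P.sum() - 1.0) < 1e-9
```

These scalar features then form the input vector for a classifier such as the SVM mentioned above.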
This research describes a human face detection system using the Viola-Jones method. The data used are 30 sample images taken at random from the internet, consisting of 22 images of humans and 8 images of animals. The smallest sample dimension is 219x285 pixels and the largest is 1536x2048 pixels. The Viola-Jones method achieves relatively fast, accurate, and efficient face detection in images and is the most widely used face detection algorithm. It consists of three key components: the integral image, used to determine the presence or absence of particular Haar features in an image; the AdaBoost machine learning method, used to select the specific Haar features to be used and to set their threshold values; and a cascade classifier as the final classifier that determines the face regions in the image. The order of the filters in the cascade is determined by the weights assigned by AdaBoost; the filter with the largest weight is placed first in order to discard non-face image regions as early as possible. This research presents the images detected as faces and those not detected as faces. The resulting face detection accuracy is 90.9%. A further finding is that whether the face is upright or not determines the success of its detection.
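In practice Viola-Jones detection is usually run through a ready-made implementation such as OpenCV's `cv2.CascadeClassifier`; the sketch below only illustrates the first of the three components named above, the integral image, which lets any rectangle sum (and hence any Haar-like feature) be evaluated in constant time from four corner lookups (function names and the toy image are illustrative):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[:r+1, :c+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of any rectangle in O(1) using four corner lookups."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return int(total)

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
# a two-rectangle Haar-like feature: left half minus right half
haar = rect_sum(ii, 0, 0, 4, 2) - rect_sum(ii, 0, 2, 4, 2)
assert rect_sum(ii, 1, 1, 2, 2) == int(img[1:3, 1:3].sum())
```

AdaBoost then chooses which of these rectangle-difference features to keep, and the cascade orders them so cheap, high-weight filters reject non-face windows first.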
International Journal of Medical Engineering and Informatics, 2017
In determining the level of tumour malignancy in lung cancer, several characteristics of lesions in the lungs need to be recognised, namely tumour size, enhancement, irregular spiculated edge, lobulation, air bronchograms, ground glass opacity (GGO) and density. This study identifies GGO lesion characteristics using CT image datasets obtained from Sardjito Public Hospital, Indonesia. The initial stage is a cropping process performed by a radiologist so that the research focuses solely on the lesion. The next process is feature extraction using grey-level co-occurrence matrices (GLCM) with four features, namely energy, contrast, correlation and homogeneity. The classification stage is carried out after the extraction stage, followed by feature selection. Having selected the two most dominant features from a total of 16, the proposed method achieves an accuracy of 88.8%, a sensitivity of 87.5% and a specificity of 90%.
Bearings are among the most critical components to be maintained in rotating machinery, including induction motors. Broken bearing diagnosis is a vital activity in maintaining electrical machines. Researchers have explored the use of machine learning, both shallow and deep architectures, for diagnostic purposes. This study experimentally explores dislocated time sequences–deep neural network (DTS–DNN), used to improve multi-class broken bearing diagnosis, on public data from Case Western Reserve University. Deep architectures can be utilized to simplify or avoid any traditional feature extraction process. The DNN is used to avoid the pooling operation in convolutional neural networks, which could remove important information. The obtained results were compared with existing techniques. The examination resulted in 99.42% average accuracy, which is higher than the existing techniques.
TELKOMNIKA (Telecommunication Computing Electronics and Control)
Breast cancer is one of the dominant causes of death in the world. Mammography is the standard for early detection of breast cancer. In examining mammograms, the overall parenchyma patterns of the left and right breasts are placed side by side so that the radiologist can assess the symmetry of the left and right breast tissue. Thus, in building a computer-aided diagnosis (CAD) system for screening mammography, it is necessary to adapt the working procedure of the radiologist. In this study, 30 training images and 30 testing images from the Kotabaru Oncology Clinic in Yogyakarta were used. The first step was to enhance image quality with a median filter and contrast limited adaptive histogram equalization (CLAHE). Then, features were extracted using histogram-based and gray level co-occurrence matrix (GLCM) based methods. Furthermore, a similarity measurement process was used to measure the difference between selected features, i.e. angular second moment (ASM), inverse difference moment (IDM), contrast, GLCM-based entropy, and energy, on the left and right mammograms. This process was intended to assess the symmetry of the left and right mammograms as radiologists do in mammography screening. The classification of normal and abnormal images with the backpropagation algorithm obtained an accuracy of 0.933, a sensitivity of 0.833, and a specificity of 1.000.
2018 International Conference on Computer Engineering, Network and Intelligent Multimedia (CENIM)
In 3D medical imaging, anatomical and other structures such as kidney stones are often identified and extracted to aid the diagnosis and assessment of disease. Automatic kidney stone segmentation from abdominal CT images is challenging in terms of segmentation accuracy due to the variety of stone sizes, shapes and locations. The performance of 3D organ segmentation algorithms is also degraded by image complexity, since the images contain multiple organs and are very large. What is needed is a preprocessing algorithm to assist the segmentation process. The objective of the present study was to develop a reader-independent preprocessing algorithm for kidney stone detection and segmentation in CT images. Three thresholding algorithms based on intensity, size and location are applied to remove unwanted regions: soft organs, the bony skeleton and the bed mat. Digitized transverse abdominal CT scan images from 30 patients with kidney stones were included in the statistical analysis and validation. As validation data, the coordinate points in the stone region were estimated independently by an expert radiologist. Experimental results show that the proposed preprocessing algorithm achieves 95.24% sensitivity as the evaluation parameter, so it can reduce noise and unwanted regions in each procedure with good detection.
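Intensity- and size-based thresholding of the kind described here can be sketched in a few lines: keep pixels inside an intensity band, then discard connected components smaller than a size cutoff. This is a generic sketch of the idea, not the paper's algorithm; thresholds, the 4-connectivity flood fill, and the toy image are all illustrative (production code would typically use `scipy.ndimage.label` instead of the explicit stack):

```python
import numpy as np

def threshold_and_size_filter(img, t_low, t_high, min_size):
    """Keep only connected regions with t_low <= value <= t_high
    and at least min_size pixels (a simple candidate filter)."""
    mask = (img >= t_low) & (img <= t_high)
    keep = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    H, W = mask.shape
    for r in range(H):
        for c in range(W):
            if mask[r, c] and not seen[r, c]:
                stack, comp = [(r, c)], []      # flood fill one component
                seen[r, c] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(comp) >= min_size:       # size-based thresholding
                    for y, x in comp:
                        keep[y, x] = True
    return keep

img = np.array([[200, 200,   0,   0],
                [200, 200,   0, 210],
                [  0,   0,   0,   0]])
out = threshold_and_size_filter(img, 180, 255, min_size=2)
assert out.sum() == 4   # the isolated bright pixel at (1, 3) is removed
```

Chaining several such passes with different bands and size cutoffs is one plausible way to strip soft tissue, bone and the bed mat in turn.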
Abstract- Self-Regulated Learning (SRL) skill can be improved by improving students' cognitive and metacognitive abilities. To improve metacognitive abilities, metacognitive support needs to be included in the e-learning process, for example by giving students feedback once they have finished specific activities. The purpose of this study was to develop a pedagogical agent able to give students feedback, particularly recommendations for the order of lesson sub-materials. Recommendations were given by considering students' pretest scores (their prior knowledge). The recommendations were computed using Collaborative Filtering and Bayesian Ranking methods. The results show that, based on MAP (Mean Average Precision) testing, the Item-based method got the highest MAP score, which was 1. The computation time of each method was measured to find its runtime complexity. The results of computation time show...
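MAP, the evaluation metric used above, averages precision over the ranks at which relevant items appear, then averages that across users. A minimal sketch (the item names are illustrative; a perfect ranking yields MAP = 1, matching the best score reported):

```python
def average_precision(recommended, relevant):
    """AP: mean of precision@k over each rank k holding a relevant item."""
    hits, precisions = 0, []
    for k, item in enumerate(recommended, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(all_recs, all_rels):
    """MAP: average AP over all users/queries."""
    return sum(average_precision(r, t)
               for r, t in zip(all_recs, all_rels)) / len(all_recs)

# every recommended sub-material list matches the relevant set exactly
recs = [["m1", "m2", "m3"], ["m2", "m1"]]
rels = [{"m1", "m2", "m3"}, {"m2", "m1"}]
assert mean_average_precision(recs, rels) == 1.0
```

MAP rewards putting relevant items early: a single relevant item ranked second instead of first already drops AP from 1.0 to 0.5.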
2017 10th Biomedical Engineering International Conference (BMEiCON), 2017
Nowadays, an efficient image segmentation process as a preprocessing step provides important cues for numerous applications in human pose estimation, computer vision, object recognition, tracking and image analysis. Many existing segmentation algorithms have a high computational cost because they segment the foreground object from large images with complex backgrounds. Some objects, however, are situated on a single-color background without this complexity, and for this case a simple and fast segmentation algorithm is proposed. The main focus of this paper is to tackle this problem using a modified sectional thresholding method to speed up the segmentation process. The proposed algorithm applies the segmentation process to sectional images. Moreover, two user-interactive threshold values are produced to remove all background pixels completely. The algorithm is able to work with any background of passport-type images. It was tested on twenty-eight passport-type images with different backgrounds on an Intel i5 processor (1.80 GHz) with 4 GB RAM and a 500 GB hard disk. According to the analysis, the execution time of the modified sectional thresholding method is 1.5 times faster than the traditional one.
Proceedings of the International Conference on Imaging, Signal Processing and Communication, 2017
Diabetic retinopathy is one of the primary causes of blindness as a complication of long-term diabetes. Permanent vision loss can be avoided by early detection of retinopathy symptoms such as retinal exudates. This paper proposes a scheme to classify whether fundus images contain exudates, based on analysis of extracted texture features. Removal of the optic disc and detection of candidate exudate areas are conducted first. Afterwards, texture features consisting of five grey-level co-occurrence matrix (GLCM) features, eleven grey-level run-length matrix (GLRLM) features and six histogram-based features are extracted from the detected exudate candidates. These extracted features then undergo classification using a multi-layer perceptron (MLP) classifier. The performance of the proposed scheme was evaluated on 80 fundus images taken from DIARETDB1, comprising 38 images with exudates and 42 images without exudates. The best classification result was achieved using the five GLCM features, with 95% accuracy, 97.37% sensitivity and 92.86% specificity. These results indicate that the proposed scheme successfully detects exudates and classifies fundus images as containing exudates or not. In addition, implementation of the proposed scheme is expected to assist ophthalmologists in monitoring and diagnosing diabetic retinopathy, especially the presence of retinal exudates.
2018 11th Biomedical Engineering International Conference (BMEiCON), 2018
Accurate segmentation for automated kidney stone detection is one of the most challenging problems because of the grey-level similarities of adjacent organs and the variation in kidney stone shapes and positions. Valuable image preprocessing is an essential step to improve the performance of region of interest (ROI) segmentation by removing unwanted regions (non-ROI), noise and disturbance. This research conducts a comparative study of three different preprocessing techniques for noise removal from CT images of kidney stones. Three noise removal techniques are computed, based on size-based thresholding (method I), shape-based thresholding (method II) and a hybrid thresholding algorithm (method III). The methods aim to enhance readability and to assist the segmentation process in the kidney stone diagnosis system. Digitized transverse abdominal CT images from 75 patients with kidney stones were used for statistical analysis and validation. The coordinate points in the stone region were estimated independently by expert radiologists to obtain the validation data for the analysis. The results show that methods I, II and III have sensitivities of 90.91%, 92.93% and 68.69%, respectively. The execution times of the overall process were 9.44 sec, 10.14 sec and 34.5 sec on average, respectively.
The facilities provided by social media networks give their users the freedom to communicate with each other. However, unrestricted freedom also gives users the opportunity to attack a person or an organization with hate speech. Therefore, a text classification system is needed to deal with the hate speech found on social media networks. Building such a classification system requires training data in the form of hate speech texts from social media networks. However, such texts are hard to find, which can make the distribution of the training data unbalanced (imbalanced data). There are several methods for solving the imbalanced data problem, one of which is oversampling. This research aims to compare five oversampling methods, namely SMOTE, Borderline-SMOTE ver.1, Borderline-SMOTE ver.2, SMOTE-SVM and m...
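The core SMOTE idea shared by all the variants compared here is to synthesize new minority-class samples by interpolating between an existing minority sample and one of its nearest minority neighbors. Libraries such as imbalanced-learn provide production implementations; the numpy sketch below (function names, `k`, and the toy points are illustrative) shows the mechanism:

```python
import numpy as np

def smote(X_min, n_new, k=2, seed=0):
    """Generate n_new synthetic minority samples by interpolating
    each picked sample toward one of its k nearest minority neighbors."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]      # skip the sample itself
        j = rng.choice(neighbors)
        lam = rng.random()                      # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
synth = smote(X_min, n_new=4)
assert synth.shape == (4, 2)
# each synthetic point lies on a segment between two minority points,
# so it stays inside the minority class's bounding box
assert (synth >= 0).all() and (synth <= 1).all()
```

The Borderline and SVM variants differ mainly in *which* minority samples are picked for interpolation (those near the class boundary), not in the interpolation step itself.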
Most digital information is available in English. However, Indonesian people do not use English in daily conversation, so the English proficiency of most Indonesians is very low. To overcome this situation, Machine Translation (MT) is needed that maps English words to Indonesian words in one-to-many, many-to-one, or many-to-many relationships, and a method should be provided to handle this word mapping. This paper proposes an MT technique using a statistical approach to solve the problem. With this technique, the English-Indonesian translation of a source word becomes more adaptable to the word's context within a sentence.
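A classic way to learn such word mappings from parallel text is the IBM Model 1 EM procedure, which estimates translation probabilities t(f | e) without any hand-made dictionary. The paper does not specify its statistical model, so this is a generic illustrative sketch on a three-sentence toy corpus:

```python
from collections import defaultdict

# toy English-Indonesian parallel corpus, illustrative only
corpus = [("the house".split(), "rumah itu".split()),
          ("the book".split(), "buku itu".split()),
          ("a book".split(), "sebuah buku".split())]

# uniform initialization of translation probabilities t(f | e)
t = defaultdict(lambda: 0.25)

for _ in range(10):                              # EM iterations
    count = defaultdict(float)
    total = defaultdict(float)
    for e_sent, f_sent in corpus:
        for f in f_sent:
            z = sum(t[(f, e)] for e in e_sent)   # normalization per target word
            for e in e_sent:
                c = t[(f, e)] / z                # expected alignment count (E-step)
                count[(f, e)] += c
                total[e] += c
    for (f, e), c in count.items():
        t[(f, e)] = c / total[e]                 # re-estimate t(f | e) (M-step)

# co-occurrence statistics pull "buku" toward "book" rather than "the"
assert t[("buku", "book")] > t[("buku", "the")]
```

Because "buku" co-occurs with "book" in two sentence pairs but with "the" in only one, EM concentrates probability on the (buku, book) pair, exactly the kind of context-sensitive word mapping the abstract describes.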
2017 International Conference on Advanced Computing and Applications (ACOMP), 2017
The role of security has become an essential issue with the rapid growth of digital communication and multimedia systems. In many applications, the image security requirement has increased because digital images must be protected from unauthorised access. Image encryption and decryption schemes are among the best ways to ensure high security. Recently, chaos-based encryption technology has advanced alongside the most common encryption algorithms, namely AES, DES, Blowfish, El-Gamal and RSA. Image encryption and decryption consume a considerable amount of time because of the large image size. Since both time consumption and accuracy are the decisive points for an image security system, an efficient algorithm should be applied depending on the required security level of the application. This paper provides an evaluation of six different image encryption techniques, namely AES, DES, Blowfish, RSA, El-Gamal and chaos techniques. The usefulness and effectiveness of each algorithm are shown through simulation results.
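A common form of the chaos technique evaluated here derives a keystream from the logistic map x → r·x·(1−x) in its chaotic regime and XORs it with the image bytes. The paper's exact construction is not given, so the parameters (x0, r) and function names below are illustrative; this is a sketch of the principle, not a secure cipher:

```python
import numpy as np

def logistic_keystream(n, x0=0.6, r=3.99):
    """Byte keystream from the logistic map x -> r*x*(1-x); r = 3.99 is
    inside the chaotic regime, so the sequence is key-sensitive."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return np.array(out, dtype=np.uint8)

def chaos_xor(img_bytes, x0=0.6):
    """XOR stream cipher: the same call encrypts and decrypts."""
    ks = logistic_keystream(img_bytes.size, x0)
    return (img_bytes.ravel() ^ ks).reshape(img_bytes.shape)

img = np.arange(12, dtype=np.uint8).reshape(3, 4)
enc = chaos_xor(img)
dec = chaos_xor(enc)    # XOR twice with the same keystream restores the image
assert np.array_equal(dec, img)
assert not np.array_equal(enc, img)
```

Speed is the usual argument for chaos schemes over AES/RSA on large images: the per-byte work is one map iteration and one XOR, with no block rounds or modular exponentiation.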
International Journal of Intelligent Engineering and Systems, 2021
Electroencephalogram (EEG) based motor imagery (MI) classification requires efficient feature extraction and consistent accuracy for reliable brain-computer interface (BCI) systems. Achieving consistent accuracy in EEG-MI classification is still a big challenge given the subject-dependent nature of the EEG signal. To address this problem, we propose a feature selection scheme based on Logistic Regression (LRFS) and two-stage detection (TSD) in a channel instantiation approach. In the TSD scheme, Linear Discriminant Analysis is utilized in the first-stage detection, while Gradient Boosted Trees and k-Nearest Neighbors are used in the second-stage detection. To evaluate the proposed method, two publicly available datasets, BCI competition III-Dataset IVa and BCI competition IV-Dataset 2a, were used. Experimental results show that the proposed method yielded excellent accuracy on both datasets, with 95.21% and 94.83%, respectively. These results indicated that the proposed method has consistent accur...
One of the challenges in the oil industry is to predict well production in the absence of frequent flow measurement. Much research has been done to develop production forecasting in the petroleum area, and a machine learning approach utilizing a higher-order neural network (HONN) was introduced in a previous study. This study focuses on the impact of normalization on the HONN model, specifically for univariate time-series datasets. Normalization is a key aspect of the preprocessing stage, especially for neural network models.
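For a univariate series, the most common normalization is min-max scaling to [0, 1], with the scaling parameters kept so that network outputs can be mapped back to physical units. The paper does not state which scheme it compares, so this is a generic sketch (the sample rates are hypothetical):

```python
import numpy as np

def minmax_normalize(series):
    """Scale a univariate series to [0, 1]; return params to invert later."""
    lo, hi = float(series.min()), float(series.max())
    return (series - lo) / (hi - lo), lo, hi

def minmax_denormalize(scaled, lo, hi):
    """Map normalized predictions back to the original units."""
    return scaled * (hi - lo) + lo

rates = np.array([820.0, 760.0, 910.0, 655.0])  # hypothetical production rates
scaled, lo, hi = minmax_normalize(rates)
assert scaled.min() == 0.0 and scaled.max() == 1.0
assert np.allclose(minmax_denormalize(scaled, lo, hi), rates)
```

One practical caveat: lo and hi must be computed on the training window only and reused at prediction time, otherwise future values leak into the preprocessing.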
Multi-class motor imagery based on Electroencephalogram (EEG) signals in Brain-Computer Interface (BCI) systems still faces challenges, such as inconsistent accuracy and low classification performance due to inter-subject dependency. Therefore, this study aims to improve multi-class EEG motor imagery using two-stage detection and a voting scheme on a one-versus-one approach. The EEG signal used in this research was represented through statistical measures over a narrow sliding window. Furthermore, inter- and cross-subject schemes were investigated on BCI competition IV-Dataset 2a to evaluate the effectiveness of the proposed method. The experimental results showed that the proposed method produced enhanced inter- and cross-subject kappa coefficient values of 0.78 and 0.68, respectively, with a low standard deviation of 0.1 for both schemes. These results further indicate that the proposed method is able to address inter-subject dependency for promising and reliable BCI systems.
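The one-versus-one voting part of such a scheme is simple to state: with C classes, train one binary detector per class pair (C(C−1)/2 of them) and let each cast a vote; the class with the most votes wins. A minimal sketch using the four standard BCI IV-2a motor imagery classes (the pairwise predictions are made up for illustration):

```python
from collections import Counter
from itertools import combinations

def one_vs_one_vote(pairwise_preds):
    """Majority vote over all (class_a, class_b) binary decisions."""
    votes = Counter(pairwise_preds.values())
    return votes.most_common(1)[0][0]

# 4 MI classes -> 6 pairwise detectors; each entry is that pair's winner
preds = {("left", "right"): "left",
         ("left", "feet"): "left",
         ("left", "tongue"): "left",
         ("right", "feet"): "right",
         ("right", "tongue"): "tongue",
         ("feet", "tongue"): "feet"}
assert len(preds) == len(list(combinations(["left", "right", "feet", "tongue"], 2)))
assert one_vs_one_vote(preds) == "left"   # 3 votes beats 1-1-1
```

In the study's design, a two-stage detector would sit inside each pairwise decision; the voting layer above only aggregates those binary outcomes.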
Medical image watermarking has become a widely studied topic in recent years. The method used to ... more Medical image watermarking has become a widely studied topic in recent years. The method used to embed data into medical images must be reversible becouse tampered in medical images caused by irreversible embedding can lead to inaccurate diagnosis. Medical image watermarking should be able to identify the tampered area and recover it. This research work introduced a method for reversible watermarking of medical images. This method can embed data, detect any tampering that occurs, locate the area that has been tampered with, and then recovery that area of tampered in the medical image. The medical image is segmented into three sections. The first is the region of interest (ROI) for reversible embedding of patient records. The second is the region of non-interest (RONI) for reversible embedding of localization data for tampered and recovery areas. The third is the border area used for embedding coordinates of the selected ROI using Least Significant Bits (LSB). The proposed method has a capacity of up to 6 bpp of data and 12 % of the ROI area with very good visual quality of up to 50.86 dB. The watermarked image, when extracted, is identical from the original image except for the border area, provided that there is no tampering. If the ROI is tampered, the area of tampered can be identified and restored with results reaching up to +27.61 dB, with only a small difference in image quality-2.86 dB compared to the original image after extraction.
Hepatitis is one of the major health problems which can progress to chronic hepatitis and cancer.... more Hepatitis is one of the major health problems which can progress to chronic hepatitis and cancer. Currently, computer based diagnosis is commonly use among medical examination. The diagnosis has been examined by using the disease dataset as a reference to make the decisions. However, the dataset was incomplete because it contained many instances containing missing values. This situation can lead the results of the analysis to be biased. One method of handling missing values is Multiple Imputation. Hepatitis dataset has an arbitrary pattern of missing values. This pattern can be handled by using Markov Chain Monte Carlo (MCMC) and Fully Conditional Specification (FCS) as Multiple Imputation algorithms. The research conducted an experiment to compare combinations of Multiple Imputations algorithm and Principal Component Analysis (PCA) as instance selection. Instance selection applied to reduce data by selecting variables that contribute greatly to the dataset. The goal was to improve the accuracy of the analysis on data which had missing values with the arbitrary pattern. The results showed that FCS-PCA is the best performance with the higher accuracy (98.80%) and the lowest error rate (0.0116).
Journal of Engineering and Technological Sciences, Nov 30, 2016
Nowadays, the Internet is one of the most important things in a human's life. The unlimited acces... more Nowadays, the Internet is one of the most important things in a human's life. The unlimited access to information has the potential for people to gather any data related to their needs. However, this sophisticated technology also bears a bad side, for instance negative content information. Negative content can come in the form of images that contain pornography. This paper presents the development of a skin classification scheme as part of a negative content filtering system. The data are trained by grey-level co-occurrence matrices (GLCM) texture features and then used to classify skin color by support vector machine (SVM). The tests on skin classification in the skin and non-skin categories achieved an accuracy of 100% and 97.03%, respectively. These results indicate that the proposed scheme has potential to be implemented as part of a negative content filtering system.
Penelitian ini berisikan tentang suatu sistem deteksi wajah pada manusia dengan menggunakan metod... more Penelitian ini berisikan tentang suatu sistem deteksi wajah pada manusia dengan menggunakan metode Viola-Jones. Data yang digunakan pada penelitian ini berupa sampel gambar yang diambil dari internet secara acak sebanyak 30 citra yang terdiri atas 22 citra manusia dan 8 citra hewan. Dimensi sampel citra berukuran paling kecil adalah 219x285 pixel dan dimensi yang paling besar adalah 1536x2048 pixel. Metode Viola-Jones relatif mendapatkan hasil yang cepat, akurat, dan efisien dalam melakukan deteksi wajah pada gambar. Metode Viola-Jones merupakan algoritma yang paling banyak digunakan untuk mendeteksi wajah. Metode ini terdiri atas tiga komponen penting yaitu integral image digunakan untuk menentukan ada atau tidaknya fitur Haar tertentu pada sebuah gambar, metode AdaBoost machine learning yang digunakan untuk untuk memilih fitur Haar yang spesifik yang akan digunakan serta untuk mengatur nilai ambangnya (threshold), dan cascade classifier sebagai pengklasifikasi akhir untuk menentukan daerah wajah pada gambar dari metode ini. Urutan filter pada cascade ditentukan oleh bobot yang diberikan AdaBoost. Filter dengan bobot paling besar diletakkan pada proses pertama kali dengan tujuan untuk menghapus daerah gambar bukan wajah secepat mungkin. Dalam penelitian ini ditampilkan gambar-gambar yang terdeteksi sebagai wajah dan tidak terdeteksi sebagai wajah. Hasil penelitian ini mendapatkan nilai akurasi sistem deteksi wajah sebesar 90,9%. Hasil lain yang didapatkan adalah posisi wajah yang tegak/tidak tegak menentukan keberhasilan deteksi wajah tersebut.
International Journal of Medical Engineering and Informatics, 2017
In determining the level of tumour malignancy in lung cancer, several characteristics of lesion i... more In determining the level of tumour malignancy in lung cancer, several characteristics of lesion in the lungs need to be recognised. The characteristics include several components, namely tumour size, enhancement, irregular spiculated edge, lobulated, air bronchograms, ground glass opacity (GGO) and density. This study identifies GGO lesion characteristics using CT image datasets obtained from Sardjito Public Hospital, Indonesia. The initial stage conducted is a cropping process performed by a radiologist so that the research's focus is merely on the lesion. The next process is the feature extraction by using grey level co-occurrence matrices (GLCM) with four features, namely energy, contrast, correlation and homogeneity. The classification stage is carried out after the extraction stage which is followed by features selection. Having selected two most dominant features from total of 16 features, the proposed method achieves accuracy of 88.8%, sensitivity of 87.5% and specificity of 90%.
One of the serious components to be maintained in rotating machinery including induction motors i... more One of the serious components to be maintained in rotating machinery including induction motors is bearings. Broken bearing diagnosis is a vital activity in maintaining electrical machines. Researchers have explored the use of machine learning for diagnostic purposes, both shallow and deep architecture. This study experimentally explores the progress of dislocated time sequences–deep neural network (DTS–DNN) used to improve multi-class broken bearing diagnosis by using public data from Case Western Reserve University. Deep architectures can be utilized with the purpose of simplifying or avoiding any traditional feature extraction process. DNN is utilized for avoiding the pooling operation in Convolution neural network that could remove important information. The obtained results were compared with the present techniques. The examination resulted in 99.42% average accuracy which is higher than the present techniques.
TELKOMNIKA (Telecommunication Computing Electronics and Control)
Breast cancer is one of the dominant causes of death in the world. Mammography is the standard fo... more Breast cancer is one of the dominant causes of death in the world. Mammography is the standard for early detection of breast cancer. In examining mammograms, the overall parenchyma pattern of the left and right breast was placed side by side for symmetry assessed of left and right breast tissue by radiologist. Thus, in building computer-aided diagnosis (CAD) system for screening mammography, it is necessary to adapt the working procedure of the radiologist. In this study, 30 training images and 30 testing images from Kotabaru Oncology Clinic in Yogyakarta were used. The first step was to enhance the image quality with median filter and contrast limited adaptive histogram equalization (CLAHE). Then, feature extraction was processed by histogram-based and by gray level co-occurrence matrix (GLCM) based. Furthermore, the similarity measurement process was used to measure the difference value between selected features, i.e. angular second moment (ASM), inverse difference moment (IDM), contrast, entropy based GLCM, and energy, on the left and right mammograms. This process was intended to assess the symmetry of left and right mammograms as radiologists do in mammography screening. The obtained results of the classification between normal and abnormal images with backpropagation algorithm were accuracy of 0.933, sensitivity of 0.833, and specificity of 1.000.
2018 International Conference on Computer Engineering, Network and Intelligent Multimedia (CENIM)
In 3D medical imaging, anatomical and other structures such as kidney stones are often identified and extracted to aid the diagnosis and assessment of disease. Automatic kidney stone segmentation from abdominal CT images is challenging in terms of segmentation accuracy due to the variety of stone sizes, shapes and locations. The performance of a 3D organ segmentation algorithm is also degraded by image complexity, as the images contain multiple organs and are very large. What is needed is a preprocessing algorithm to assist the segmentation process. The objective of the present study was to develop a reader-independent preprocessing algorithm for kidney stone detection and segmentation in CT images. Three thresholding algorithms based on intensity, size and location are applied to remove unwanted regions: soft organs, the bony skeleton and the bed mat. Digitized transverse abdominal CT scan images from 30 patients with kidney stone cases were included in the statistical analysis and validation. As validation data for the analysis, the coordinate points in the stone region were estimated independently by an expert radiologist. Experimental results show that the proposed preprocessing algorithm achieves 95.24% sensitivity as the evaluation parameter, so it can reduce the noise and unwanted regions in each procedure with good detection.
Self-Regulated Learning (SRL) skill can be improved by improving students' cognitive and metacognitive abilities. To improve metacognitive abilities, metacognitive support needs to be included in the learning process using e-learning. One example is assisting students by giving feedback once they have finished specific activities. The purpose of this study was to develop a pedagogical agent with the ability to give students feedback, particularly recommendations on the order of lesson sub-materials. Recommendations were given by considering students' pretest scores (students' prior knowledge). The computations for recommendations used Collaborative Filtering and Bayesian Ranking methods. Results obtained in this study show that based on MAP (Mean Average Precision) testing, the item-based method got the highest MAP score, which was 1. The computation time for each method was calculated to find the runtime complexity of each method. The results of computation time show...
2017 10th Biomedical Engineering International Conference (BMEiCON), 2017
Nowadays, an efficient image segmentation process as a preprocessing step provides important cues for numerous applications in human pose estimation, computer vision, object recognition, tracking and image analysis. Many existing segmentation algorithms have high computational cost because they segment the foreground object from large images with complex backgrounds. However, some objects are situated on a single-color background without this complexity. For such single-color backgrounds, a simple and fast segmentation algorithm can be suggested. The main focus of this paper is to tackle this problem using a modified sectional thresholding method to speed up the segmentation process. The proposed algorithm applies the segmentation process to sectional images. Moreover, two user-selected threshold values are produced to remove all background pixels completely. The algorithm is able to work with any background of passport-type images. The proposed algorithm was tested on twenty-eight passport-type images with different backgrounds on an Intel i5 processor (1.80 GHz) with 4 GB RAM and a 500 GB hard disk. According to the analysis, the modified sectional thresholding method is 1.5 times faster than the traditional one.
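The two-threshold idea can be sketched as follows; this is a minimal illustration, not the paper's implementation, and the threshold values and toy image are hypothetical user choices.

```python
# Minimal sketch of two-threshold background removal for a single-colour
# background: pixels whose intensity falls inside [t_low, t_high] are
# treated as background and zeroed out.
def remove_background(img, t_low, t_high):
    """Zero every pixel inside the user-chosen background intensity band."""
    return [[0 if t_low <= p <= t_high else p for p in row] for row in img]

img = [[200, 201, 50],
       [199, 202, 60],
       [198, 200, 55]]            # ~200 = background, 50-60 = foreground
out = remove_background(img, t_low=190, t_high=210)
print(out[0])   # background pixels zeroed, foreground kept
```

Applying this band test per image section (rather than globally) is what lets the sectional variant skip sections that contain only background.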
Proceedings of the International Conference on Imaging, Signal Processing and Communication, 2017
Diabetic retinopathy is one of the primary causes of blindness as a complication of long-term diabetes. Permanent vision loss can be avoided by early detection of retinopathy symptoms such as retinal exudates. This paper proposes a scheme to classify whether fundus images contain exudates based on analysis of extracted texture features. Removal of the optic disc and detection of candidate exudate areas were conducted first. Afterwards, texture features consisting of five grey level co-occurrence matrix (GLCM) features, eleven grey level run-length matrix (GLRLM) features and six histogram-based features were extracted from the detected exudate candidates. These extracted features subsequently underwent a classification process using a multi-layer perceptron (MLP) classifier. The performance of the proposed scheme was evaluated on 80 fundus images taken from DIARETDB1, comprising 38 images with exudates and 42 images without exudates. The best classification result was achieved using the five GLCM features, with 95% accuracy, 97.37% sensitivity and 92.86% specificity. These results indicate that the proposed scheme successfully detects exudates and classifies fundus images as containing exudates or not. In addition, the implementation of the proposed scheme is expected to assist ophthalmologists in monitoring and diagnosing diabetic retinopathy, especially regarding the presence of retinal exudates.
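A sketch of the six histogram-based features commonly used in such schemes (mean, variance, skewness, kurtosis, energy, entropy); the exact feature set of the paper may differ, and the toy pixel data is hypothetical.

```python
# Illustrative sketch of six histogram-based texture features computed from
# a flattened grey-level image. Assumes a non-constant image (variance > 0).
import math

def histogram_features(pixels, levels):
    n = len(pixels)
    hist = [0.0] * levels
    for p in pixels:
        hist[p] += 1.0 / n                      # normalised histogram
    mean = sum(i * h for i, h in enumerate(hist))
    var = sum((i - mean) ** 2 * h for i, h in enumerate(hist))
    skew = sum((i - mean) ** 3 * h for i, h in enumerate(hist)) / var ** 1.5
    kurt = sum((i - mean) ** 4 * h for i, h in enumerate(hist)) / var ** 2
    energy = sum(h * h for h in hist)
    entropy = -sum(h * math.log2(h) for h in hist if h > 0)
    return mean, var, skew, kurt, energy, entropy

feats = histogram_features([0, 0, 1, 1, 2, 2, 3, 3], levels=4)
print(feats[0])   # mean grey level 1.5
```

These scalars, concatenated with the GLCM and GLRLM features, would form the input vector fed to the MLP classifier.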
2018 11th Biomedical Engineering International Conference (BMEiCON), 2018
Accurate segmentation in automated kidney stone detection is one of the most challenging problems because of the grey-level similarity of adjacent organs and the variation in shapes and positions of kidney stones. Valuable image preprocessing is an essential step to improve the performance of region of interest (ROI) segmentation by removing unwanted regions (non-ROI), noise and disturbance. This research conducts a comparative study of three different preprocessing techniques for noise removal from CT images of kidney stones. Three noise removal techniques are computed based on size-based thresholding (method I), shape-based thresholding (method II) and a hybrid thresholding algorithm (method III). The methods aim to enhance readability and to assist the segmentation process in the kidney stone diagnosis system. Digitized transverse abdominal CT images from 75 patients with kidney stone cases were used in the statistical analysis and validation. The coordinate points in the stone region were estimated independently by expert radiologists to obtain the validation data for the analysis. The results show that methods I, II and III have a sensitivity of 90.91%, 92.93% and 68.69%, respectively. The execution times of the overall process were 9.44 sec, 10.14 sec and 34.5 sec on average, respectively.
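Size-based thresholding (method I) can be sketched as removing connected binary regions smaller than a minimum pixel count; this is a generic illustration under assumed parameters, not the paper's implementation, and `min_size` is a hypothetical value.

```python
# Hedged sketch of size-based thresholding: remove connected binary regions
# smaller than `min_size` pixels using a 4-connected flood fill.
from collections import deque

def remove_small_regions(mask, min_size):
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                q, region = deque([(i, j)]), []
                seen[i][j] = True
                while q:                         # flood-fill one region
                    y, x = q.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(region) >= min_size:      # keep only large regions
                    for y, x in region:
                        out[y][x] = 1
    return out

mask = [[1, 0, 0, 1],
        [1, 0, 0, 1],
        [0, 0, 0, 1]]                            # region sizes: 2 and 3
out = remove_small_regions(mask, min_size=3)
print(sum(map(sum, out)))   # only the 3-pixel region survives
```

In a real CT pipeline the mask would come from intensity thresholding, and `min_size` would be tuned to the smallest stone of clinical interest.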
The facilities provided by social media networks give users the freedom to communicate with each other. However, unrestricted freedom also gives users the opportunity to attack a person or an organization using hate speech. Therefore, a text classification system is needed to address hate speech on social media networks. Building such a classification system requires training data in the form of hate speech texts found on social media. However, such texts are hard to find, which can make the distribution of the training data imbalanced. There are several methods for solving the imbalanced data problem, one of which is oversampling. This research aims to compare five oversampling methods, namely SMOTE, Borderline-SMOTE ver.1, Borderline-SMOTE ver.2, SMOTE-SVM and ...
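The core SMOTE step can be sketched in a few lines: pick a minority sample, find its nearest minority neighbour, and interpolate a synthetic point between them. This is a pure-Python illustration with hypothetical 2-D data, not the imbalanced-learn implementation the study may have used.

```python
# Minimal SMOTE-style sketch: synthesise one new minority-class sample by
# interpolating between a random minority point and its nearest neighbour.
import random

def smote_sample(minority, rng):
    """Pick a sample, find its nearest minority neighbour, interpolate."""
    base = rng.choice(minority)
    neighbour = min((p for p in minority if p is not base),
                    key=lambda p: sum((a - b) ** 2 for a, b in zip(p, base)))
    gap = rng.random()                            # random point on the segment
    return [a + gap * (b - a) for a, b in zip(base, neighbour)]

rng = random.Random(0)
minority = [[1.0, 1.0], [1.2, 0.9], [5.0, 5.0]]   # hypothetical feature vectors
synthetic = smote_sample(minority, rng)
print(len(synthetic))   # one new 2-D synthetic sample
```

The borderline and SVM variants compared in the study differ mainly in *which* minority samples are chosen as interpolation bases, not in this interpolation step itself.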
Most digital information is available in English. However, Indonesian people do not use English in daily conversation, so the English proficiency of most Indonesians is very low. To overcome this situation, the development of Machine Translation (MT) is needed to map English words into Indonesian words in one-to-many, many-to-one, or many-to-many relationships. Thus, a method should be provided to handle this word mapping. This paper proposes an MT technique using a statistical approach to solve the problem. Using this technique, the English–Indonesian translation of a source word becomes more adaptable to the word's context within a sentence.
2017 International Conference on Advanced Computing and Applications (ACOMP), 2017
Security has become an essential issue with the rapid growth of digital communication and multimedia systems. In many applications, the image security requirement has increased due to the importance of protecting digital images from unauthorised access. Image encryption and decryption schemes are one of the best ways to ensure high security. Recently, chaos encryption technology has advanced alongside the most common encryption algorithms, namely AES, DES, Blowfish, El-Gamal and RSA. Image encryption and decryption consume a considerable amount of time because of the large size of images. Since both time consumption and accuracy are the most decisive points for an image security system, an efficient algorithm should be applied depending on the required security level of the application. This paper provides an evaluation of six different image encryption techniques, namely AES, DES, Blowfish, RSA, El-Gamal and chaos techniques. The usefulness and effectiveness of each algorithm are shown through simulation results.
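A common form of the chaos technique mentioned above XORs pixel bytes with a keystream generated by the logistic map. The sketch below illustrates that idea with hypothetical key parameters `r` and `x0`; it is a toy demonstration, not the paper's evaluated implementation.

```python
# Sketch of chaos-based image encryption: XOR pixels with a logistic-map
# keystream. Key parameters r and x0 are hypothetical; decryption is the
# same XOR repeated with the same key.
def logistic_keystream(n, r=3.99, x0=0.4):
    x, stream = x0, []
    for _ in range(n):
        x = r * x * (1 - x)                 # chaotic logistic map iteration
        stream.append(int(x * 256) % 256)   # quantise state to a byte
    return stream

def chaos_xor(pixels, r=3.99, x0=0.4):
    ks = logistic_keystream(len(pixels), r, x0)
    return [p ^ k for p, k in zip(pixels, ks)]

image = [12, 200, 33, 97, 255, 0]           # flattened toy image
cipher = chaos_xor(image)
plain = chaos_xor(cipher)                   # XOR twice restores the image
print(plain == image)   # True
```

The speed advantage of chaos schemes over block ciphers like AES comes from this keystream being generated by a few multiplications per byte, which is one axis of the paper's time-consumption comparison.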
International Journal of Intelligent Engineering and Systems, 2021
Electroencephalogram (EEG) based motor imagery (MI) classification requires efficient feature extraction and consistent accuracy for reliable brain-computer interface (BCI) systems. Achieving consistent accuracy in EEG-MI classification is still a big challenge due to the nature of the EEG signal, which is subject dependent. To address this problem, we propose a feature selection scheme based on Logistic Regression (LRFS) and two-stage detection (TSD) in a channel instantiation approach. In the TSD scheme, Linear Discriminant Analysis was utilized in the first-stage detection, while Gradient Boosted Tree and k-Nearest Neighbor were used in the second-stage detection. To evaluate the proposed method, two publicly available datasets, BCI competition III-Dataset IVa and BCI competition IV-Dataset 2a, were used. Experimental results show that the proposed method yielded excellent accuracy for both datasets, with 95.21% and 94.83%, respectively. These results indicated that the proposed method has consistent accur...
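One way to read a two-stage detection scheme is as confidence gating: a fast first-stage score decides confident cases, and low-margin cases fall through to a second-stage classifier. The sketch below illustrates that control flow only; the linear score stands in for LDA, the k-NN for the second stage, and the weights, margin, and data are all hypothetical.

```python
# Hedged sketch of two-stage detection: a first-stage linear score decides
# confident cases; low-margin cases defer to a k-NN second stage.
def knn_predict(train, query, k=3):
    dist = sorted(train, key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], query)))
    votes = [label for _, label in dist[:k]]
    return max(set(votes), key=votes.count)     # majority label among k nearest

def two_stage(query, weights, margin, train):
    score = sum(w * x for w, x in zip(weights, query))   # stage 1: linear score
    if abs(score) >= margin:                             # confident: decide now
        return 1 if score > 0 else 0
    return knn_predict(train, query)                     # stage 2: defer to k-NN

train = [([0.0, 0.1], 0), ([0.2, 0.0], 0), ([1.0, 1.1], 1), ([0.9, 1.0], 1)]
print(two_stage([1.0, 1.0], weights=[1.0, 1.0], margin=1.5, train=train))  # 1
print(two_stage([0.4, 0.4], weights=[1.0, 1.0], margin=1.5, train=train))  # 0
```

In the paper's setting the stage-1 model would be a trained LDA and stage 2 a Gradient Boosted Tree or k-NN, but the gating structure is the same.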
One of the challenges in the oil industry is to predict well production in the absence of frequent flow measurements. Much research has been done to develop production forecasting in the petroleum area. A machine learning approach utilizing a higher-order neural network (HONN) was introduced in a previous study. In this study, the research focuses on the impact of normalization on the HONN model, specifically for univariate time-series datasets. Normalization is a key aspect of the pre-processing stage, especially in neural network models.
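A minimal sketch of the normalization step for a univariate series, including the inverse transform needed to map model predictions back to the original scale; the production values are hypothetical, and min-max scaling is only one of the normalization choices such a study might compare.

```python
# Sketch of min-max normalisation for a univariate time series, with the
# inverse transform used to restore predictions to the original scale.
def minmax_fit(series):
    return min(series), max(series)

def minmax_transform(series, lo, hi):
    return [(x - lo) / (hi - lo) for x in series]   # scales into [0, 1]

def minmax_inverse(series, lo, hi):
    return [x * (hi - lo) + lo for x in series]

production = [100.0, 50.0, 75.0, 0.0]               # hypothetical well rates
lo, hi = minmax_fit(production)
scaled = minmax_transform(production, lo, hi)
restored = minmax_inverse(scaled, lo, hi)
print(scaled)                   # [1.0, 0.5, 0.75, 0.0]
print(restored == production)   # True
```

Note that `lo` and `hi` must be fitted on the training portion only and reused for test data, otherwise future information leaks into the forecast.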
Multi-class motor imagery based on electroencephalogram (EEG) signals in brain-computer interface (BCI) systems still faces challenges, such as inconsistent accuracy and low classification performance due to inter-subject dependence. Therefore, this study aims to improve multi-class EEG motor imagery using two-stage detection and a voting scheme in a one-versus-one approach. The EEG signal used in this research was represented through statistical measures over a narrow sliding window. Furthermore, inter- and cross-subject schemes were investigated on BCI competition IV-Dataset 2a to evaluate the effectiveness of the proposed method. The experimental results showed that the proposed method produced enhanced inter- and cross-subject kappa coefficient values of 0.78 and 0.68, respectively, with a low standard deviation of 0.1 for both schemes. These results further indicated that the proposed method is able to address inter-subject dependence for promising and reliable BCI systems.
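The one-versus-one voting scheme can be sketched as follows: one binary classifier per class pair, each casting a vote, with the majority class winning. The pairwise rule below is a stand-in stub (nearest class to a scalar sample), not a trained EEG classifier, and the four classes are only an assumed analogue of the four MI classes in Dataset 2a.

```python
# Sketch of one-versus-one (OvO) voting for multi-class decisions: each
# pairwise classifier votes for one of its two classes; majority wins.
from itertools import combinations

def ovo_vote(classes, pairwise_decide, sample):
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):        # one classifier per pair
        votes[pairwise_decide(a, b, sample)] += 1
    return max(votes, key=votes.get)

def decide(a, b, sample):
    """Stub pairwise rule: vote for the class label nearer the sample value."""
    return a if abs(sample - a) <= abs(sample - b) else b

classes = [0, 1, 2, 3]                           # e.g. four MI classes
print(ovo_vote(classes, decide, sample=2.2))     # class 2 wins the vote
```

With K classes this requires K(K-1)/2 pairwise models (6 for four classes), which is the cost paid for reducing a hard multi-class problem to easier binary ones.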
Papers by Teguh B Adji