International Journal for Research in Applied Science and Engineering Technology, Sep 29, 2023
Distance and online learning, often known as e-learning, has emerged as the new norm in training and education in light of recent technological advancements, thanks to benefits such as accessibility, cost, efficiency, and usability. Even though the online educational system offers advantages over the traditional one, preventing cheating and other improper student behavior during classes and exams poses substantial difficulties. The 'Artificial Intelligence based Online Examination Proctoring System' project offers a solution to these issues in online and distance education. The system uses movement detection, face recognition, and other biometric approaches to flag student fraud during exams. As soon as cheating or other irregularities by students are detected, the system alerts the manual proctor and records these instances as evidence for subsequent review. As a result, online exams can be administered effectively with few or no manual proctors present, which is more practical and affordable.
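The movement-detection component described above can be approximated by simple frame differencing. The sketch below is a minimal illustration and not the project's actual pipeline: the frames are synthetic, and the 5% alert threshold is a hypothetical parameter.

```python
import numpy as np

def motion_score(prev: np.ndarray, curr: np.ndarray, thresh: int = 25) -> float:
    """Fraction of pixels whose intensity changed by more than `thresh`."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return float((diff > thresh).mean())

# Synthetic frames: the second frame has a bright "moving" region.
rng = np.random.default_rng(1)
frame1 = rng.integers(0, 256, size=(48, 64)).astype(np.uint8)
frame2 = frame1.copy()
frame2[10:30, 20:40] = 255           # simulated movement

score = motion_score(frame1, frame2)
flag = score > 0.05                  # hypothetical alert threshold
```

When `flag` is set, a real system would notify the proctor and archive the offending frames as evidence.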
International Journal for Research in Applied Science and Engineering Technology, Apr 30, 2024
Melanoma is a type of skin cancer that arises from pigment-producing cells called melanocytes. These cells produce melanin, the pigment that gives skin, hair, and eyes their color. Melanoma is considered more dangerous than other skin cancers because it can spread (metastasize) to other parts of the body. Key features of melanoma include the formation of abnormal or cancerous cells in the pigment-producing cells of the skin. The main cause of melanoma is exposure to ultraviolet (UV) radiation from the sun or tanning beds. Most malignant tumours can be treated if diagnosed early; early diagnosis of skin cancer is therefore important to save patients' lives. With today's technology, skin cancer can be diagnosed early and treated effectively. The traditional method of detecting melanoma relies on visual examinations in which dermatologists review biopsy results, which is time-consuming. These facilities are scarce in rural areas, where general doctors are often available but dermatologists are not. This study was designed to develop a system that uses deep learning techniques to identify melanoma. I. INTRODUCTION Melanoma is a serious tumour that forms in melanocytes and poses significant problems for dermatologists because of its potential to spread and its subtle appearance on the skin. Early diagnosis is critical to improving survival, and computer vision screening offers promising solutions. The process involves several important steps: collection of skin images, segmentation to separate the lesion from surrounding tissue, extraction of characteristic features of the lesions, and classification based on these features. Segmentation, or border detection, is particularly important because it outlines the boundaries of the lesion and makes the subsequent analysis more reliable.
By using computer vision technologies such as imaging and machine learning, doctors can improve early detection and treatment of melanoma, which can save lives. A telltale sign to pay particular attention to is a change in a mole's appearance, especially a change in size, shape, color, or texture. Unlike normal moles, melanomas often show an asymmetric shape and irregular borders. They also often display a variety of colors rather than a single shade of brown. The main purpose of this study is to predict the presence of melanoma using a deep learning CNN, investigating various pre-processing steps and changes to the CNN to improve accuracy. Ms. Gaana M et al. (March 2019) [1] published "Diagnosis of skin cancer Melanoma using Machine Learning". This work introduces methodology and techniques for detecting melanoma, where the data consists of dermatoscopic images captured with a regular camera. The algorithm combines machine learning techniques with image processing for the detection of skin cancer. The SVM algorithm was found to be most effective for detecting cancer due to its minimal disadvantages. The conclusion is that the neural network technique is considered the best among the existing systems; by utilizing machine learning algorithms with minimal human intervention, the system can provide better and more reliable results, ultimately aiding in the early diagnosis of skin cancer. GS Gopika Krishnan et al. (March 2023) [2] published "Skin cancer detection using Machine Learning". The dataset used in the research is the ISIC (International Skin Imaging Collaboration) dataset, which contains approximately 23,000 images of melanoma skin cancer.
The algorithms used include the Back Propagation Algorithm for training multi-layer perceptrons in neural networks; the Support Vector Machine (SVM) for classifying data into different classes based on a decision boundary; and Convolutional Neural Networks for image classification tasks. The methodology involves three main phases. Phase 1, Pre-processing: images are collected from the ISIC dataset, and pre-processing tasks such as removing hair, glare, and shading are performed to improve the identification of texture, color, size, and shape parameters. Phase 2, Segmentation and Feature Extraction: three segmentation methods (Otsu, Modified Otsu, and Watershed) are used to segment the images and extract features related to color, shape, size, and texture. Phase 3, Model Design and Training: the researchers designed and trained the model using the Back Propagation Algorithm, Support Vector Machine, and Convolutional Neural Networks.
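Otsu's method, named as one of the segmentation techniques in Phase 2, picks the threshold that maximizes the between-class variance of the image histogram. The sketch below is a minimal NumPy illustration on a toy lesion-like image, not the paper's implementation:

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the Otsu threshold for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy "lesion": a dark disc on a bright skin-like background.
img = np.full((64, 64), 200, dtype=np.uint8)
yy, xx = np.ogrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 40

t = otsu_threshold(img)
mask = img < t          # segmented lesion mask
```

The resulting `mask` is the lesion boundary from which shape and texture features would then be extracted.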
International Journal for Research in Applied Science and Engineering Technology, Mar 31, 2023
Parkinson disease prediction is an area of active research in healthcare and machine learning. Even though Parkinson's disease is not well-known worldwide, its negative impacts are detrimental and should be taken seriously. Furthermore, because individuals are so immersed in their busy lives, they frequently disregard the early signs of this condition, which can worsen as it progresses. There are many techniques for Parkinson disease prediction; in this paper we discuss some of the technical solutions proposed by researchers. Keywords: Parkinson disease I. INTRODUCTION The brain and nervous system are both affected by Parkinson's disease, a neurodegenerative condition. It is specifically related to the loss of dopamine-producing neurons in the basal ganglia. The illness has negative social, professional, and personal effects on individuals, society, and finances. Symptoms develop over time, vary from person to person, and can be divided into two categories. Motor symptoms include stiffness, slowness (also known as bradykinesia), reduced facial expression, fewer arm swings, and resting tremor, whereas non-motor symptoms, which can affect every system and component of the body, are unseen. These symptoms of autonomic dysfunction include perspiration and urination problems as well as mood and thought disturbances. The primary objective of the study is to evaluate the effectiveness of various supervised algorithms for enhancing Parkinson disease diagnosis. Parkinson disease was predicted using K-Nearest Neighbor, Logistic Regression, Decision Tree, Naive Bayes, and XGBoost. The classifiers were evaluated using different metrics, such as accuracy, F1-score, recall, precision, R2-score, Total UPDRS, Motor UPDRS, and the confusion matrix. Amreen Khanum et al.
[1] examined the effects of various supervised ML algorithms for upgrading the diagnosis of Parkinson disease. KNN, LR, DT, NB, and XGBoost were the five machine learning techniques used to detect Parkinson's disease. The performance of the classifiers was assessed using precision, accuracy, F1-score, and recall. Data on Parkinson's disease was obtained for this study from the UCI Machine Learning Repository; the dataset comprises 195 patient records with 23 speech features. The first step was the extraction of characteristics from the Parkinson's disease datasets. The study used the five supervised learning algorithms to recognise Parkinson's illness, and the performance metrics were then evaluated to find the algorithm that performed best. Muhtasim Shafi Kader et al. [2] chose 195 records related to Parkinson's disease from the UCI machine learning repository for identifying the disease. The dataset contained 24 attributes. After training on the data, they identified the machine learning algorithms that were most accurate. Naive Bayes, Adaptive Boosting, Bagging Classifier, Decision Tree Classifier, Random Forest Classifier, XGB Classifier, K Nearest Neighbor Classifier, Support Vector Classifier, and Gradient Boosting Classifier were the nine machine learning algorithms utilized to predict the illness. Evaluation metrics analysis and confusion matrix analysis (precision, recall, F-measure, and accuracy) were used to calculate the study's outcomes, and the algorithm with the highest accuracy was identified using these metrics. Mohesh T et al. [3] used as input the Parkinson's disease voice dataset from the UCI machine learning repository. Additionally, by combining the spiral drawing inputs of healthy individuals and Parkinson's patients, the system delivers accurate findings.
It can be inferred that a hybrid approach accurately interprets affected individuals' spiral drawings and voice data. This model aims to use this method to identify a case of Parkinson's; hence, the goal is to apply several machine learning strategies, such as SVM and Decision Tree, to obtain the most accurate result.
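The classifier comparison described in the surveyed studies can be sketched with scikit-learn. This is an illustrative stand-in only: the data below is synthetic (`make_classification`) rather than the UCI Parkinson's dataset, and XGBoost is omitted because it is a third-party package.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# 195 samples with 22 features mirror the shape of the UCI Parkinson's data.
X, y = make_classification(n_samples=195, n_features=22, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=0),
    "NB": GaussianNB(),
}
# Fit each model and score it on the held-out split.
scores = {name: accuracy_score(yte, m.fit(Xtr, ytr).predict(Xte))
          for name, m in models.items()}
best = max(scores, key=scores.get)   # best-performing classifier by accuracy
```

The same loop extends naturally to precision, recall, and F1-score via the corresponding `sklearn.metrics` functions.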
Botnets are a major threat to the security of devices. Traditionally, botnets existed on the desktop platform, where different techniques have been proposed to detect them with good results. On the mobile platform, botnets are new to the domain, and they will increase rapidly in the coming days. Nowadays mobiles are used like small PCs: we perform money transactions on them, and much confidential information is stored on them, so securing the mobile platform by detecting botnets is necessary. Some researchers have already proposed different techniques to detect botnets on the mobile platform, and in this paper we survey these proposed techniques.
International Journal of Computer Networks and Applications
The growing potentialities of recent communications necessitate information security on the computer network. Fields such as banking, e-commerce, education, and health depend on online networks to communicate, so information security is becoming more significant. Hackers can obtain data if it is sent as-is over an unsafe network. Therefore, security properties like confidentiality, integrity, and undetectability are essential to safeguard sensitive data from unauthorized users. To secure communicated information from a third party, it is necessary to convert the information into a scrambled form. Researchers have used various cryptographic and steganographic algorithms. Public key and private key cryptographic algorithms are suitable for scrambling the input secret data; with private key algorithms, key exchange is a challenge. Two levels of data scrambling are safer than one. After scrambling, the data is embedded in cover media using suitable transform-domain techniques to provide higher security. In the proposed method, two-level scrambling of the input secret image is carried out by applying fast symmetric algorithms, Rivest Cipher 6 (RC6) and One Time Pad (OTP), to enhance the security of images. As these algorithms generate their own keys, it is difficult for an intruder to extract and identify them. The keys must also be sent safely to the recipient, so the two keys are scrambled using a public key cryptographic algorithm, the Modified Rivest-Shamir-Adleman (MRSA) algorithm, which reduces the chances of the keys being stolen. Another level of security for the scrambled image is provided by embedding it in cover media using DC coefficients, resulting in the stego image. The stego image and scrambled keys are then sent to the receiver.
Simulation outcomes and analysis show that the proposed method provides two-level security for color image communication and key authentication.
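The two-level scrambling idea can be illustrated with the One Time Pad half of the scheme. The sketch below is a deliberate simplification: a second OTP stands in for the RC6 stage, and the MRSA key-wrapping and DC-coefficient embedding steps are not shown.

```python
import os

def otp_scramble(data: bytes, key: bytes) -> bytes:
    """XOR one-time pad; the same call scrambles and unscrambles."""
    return bytes(d ^ k for d, k in zip(data, key))

secret = b"pixel bytes of the secret image"
key1 = os.urandom(len(secret))   # first level (stand-in for RC6)
key2 = os.urandom(len(secret))   # second level: OTP

level1 = otp_scramble(secret, key1)
level2 = otp_scramble(level1, key2)      # doubly scrambled payload

# Receiver side: unscramble in reverse order with the same keys.
recovered = otp_scramble(otp_scramble(level2, key2), key1)
```

In the full scheme, `key1` and `key2` would themselves be encrypted with MRSA before transmission, and `level2` would be embedded into the cover image's DC coefficients.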
International Journal of Engineering Applied Sciences and Technology, 2019
This paper helps to solve major problems by leveraging machine learning and data analysis on a wine quality dataset: training, predicting, and evaluating models using Decision Trees and Random Forests to predict whether each wine sample is red or white and to predict its quality, which can be low, medium, or high. Wine is a beverage made from fermented grape and other fruit juices with a low alcohol content. Wine quality is graded based on taste and vintage, and tasting is a process as ancient as wine itself. When it comes to wine quality, many factors and attributes beyond flavour come into consideration. The 'Wine Quality' dataset analysed here represents the quality of wines (white and red) based on different physiochemical attributes (fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulphur dioxide, total sulphur dioxide, density, pH, sulphates, and alcohol). The quality score for each wine sample in the dataset ranges from 0 (lowest) to 10 (highest). This analysis uncovers important relationships between a wine's chemical contents, such as acidity and sugar levels, and its quality. The dataset exhibits a vast and distinct chemical and acidic combination of the two types of wine. By employing smart data analysis techniques, a handful of important and interesting insights can be unearthed that help predict wine quality and type and that would also be valuable to the economic and business side of the production company.
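The train-predict-evaluate loop on the physiochemical attributes can be sketched with scikit-learn. The data here is synthetic, with 11 random columns standing in for the real attributes and a toy quality rule, so only the workflow is illustrated, not the reported results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 11))          # 11 stand-in physiochemical columns
# Toy rule: "good" wine when alcohol (col 10) outweighs volatile acidity (col 1).
y = (X[:, 10] - X[:, 1] > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(Xtr, ytr)
acc = clf.score(Xte, yte)               # held-out accuracy
```

With the real dataset, `clf.feature_importances_` would surface which chemical attributes drive the quality prediction.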
Detecting text regions in natural scene images has become an important area due to its various applications. A Text Information Extraction (TIE) system involves detecting text regions in a given image, localizing them, extracting the text, and recognizing it using OCR. This work concentrates on the detection and extraction of text in natural scene images. The test image is pre-processed using RGB-to-gray conversion, binarization, edge detection, and geometry-based noise removal. Features are extracted from the pre-processed image and used by a trained SVM classifier to detect the text regions. After detecting text regions, characters are extracted and finally displayed.
Text extraction refers to the process of separating the text region from a given image. Scene text extraction is more challenging than document text extraction because of degradations such as shadows, reflections from the background surface, and uneven lighting conditions. In this paper, we propose an adaptive method for detecting and extracting text from natural scene images that is robust against shadows and uneven lighting. The proposed method uses an adaptive thresholding technique to binarize the image and smooth the degradation factors mentioned. Canny edge detection is used to obtain the edge image, and block-based localization is used to remove non-text areas from the image. Connected component analysis is used for extracting text from the image. The work is applied to images with and without shadows and uneven lighting conditions, and experimental results show that the performance of the proposed approach is robust in both cases.
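The adaptive thresholding step can be sketched in NumPy: each pixel is compared against the mean of its local window, which is what makes the binarization tolerant of gradual illumination changes. The window size and offset constant below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def adaptive_threshold(gray: np.ndarray, block: int = 9, c: int = 20) -> np.ndarray:
    """Mark a pixel as text (255) if it is darker than its local mean minus c."""
    pad = block // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            local_mean = padded[i:i + block, j:j + block].mean()
            out[i, j] = 255 if gray[i, j] < local_mean - c else 0
    return out

# Gradient background simulates uneven lighting; two dark "strokes" are text.
h, w = 40, 80
img = np.tile(np.linspace(60, 220, w), (h, 1))
img[10:14, 10:14] -= 50     # stroke in the dim half
img[10:14, 60:64] -= 50     # stroke in the bright half
binary = adaptive_threshold(img.astype(np.uint8))
```

A single global threshold would miss one of the two strokes here, because the bright-half stroke is lighter than the dim-half background; the local comparison recovers both.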
Detection of text is challenging due to complexities in the scene images with text and newer applications that use them. In our work we use a general image repository that contains images suitable for various applications. We use Maximally Stable Extremal Regions (MSER) with Speeded Up Robust Features (SURF) to find features. A Support Vector Machine (SVM) is used for training and testing. Connected component analysis and filtering rules are used to get the final localized text from the scene images. To evaluate our work, we have used existing datasets for English text and our own dataset for multilingual text.
International Journal of Engineering Applied Sciences and Technology
A hybrid approach of data encryption and steganography is used in our work. The motivation behind this approach is to provide a simple and smart image steganographic technique capable of producing a good-quality stego-image. Image steganography is a technique in which pixel intensities are used to hide data. In this approach, the secret information is first encrypted, and the encrypted bits are then embedded into an image. Logistic chaotic maps are used for image encryption, and the LSB technique is used for embedding. To increase unpredictability, we employ different combinations of scan patterns for encryption and embedding. This approach is more secure against attack, and its stego-image is indistinguishable from the original image by the human eye.
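The encrypt-then-embed flow can be sketched as follows: a logistic chaotic map generates a keystream, the secret bits are XORed with it, and the result is written into pixel LSBs. The map seed and the one-bit-per-iteration extraction below are illustrative choices; the paper's scan-pattern combinations are not reproduced.

```python
import numpy as np

def logistic_keystream(x0: float, n: int, r: float = 3.99) -> list:
    """One keystream bit per iteration of the logistic map x -> r*x*(1-x)."""
    bits, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(int(x * 255) & 1)
    return bits

def embed_lsb(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write one bit into the least significant bit of each leading pixel."""
    out = pixels.copy()
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | bits
    return out

message = [1, 0, 1, 1, 0, 0, 1, 0]
key_bits = logistic_keystream(x0=0.3141, n=len(message))   # illustrative seed
cipher = [m ^ k for m, k in zip(message, key_bits)]        # chaotic encryption

cover = np.arange(64, dtype=np.uint8)                      # toy 1-D "image"
stego = embed_lsb(cover, np.array(cipher, dtype=np.uint8))

# Receiver: read the LSBs back and XOR with the same keystream.
extracted = [int(p) & 1 for p in stego[:len(message)]]
recovered = [b ^ k for b, k in zip(extracted, key_bits)]
```

Because only the LSB of each pixel changes, no pixel moves by more than one intensity level, which is why the stego-image is visually indistinguishable from the cover.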
2019 International Conference on Data Science and Communication (IconDSC)
Detection of text is challenging due to complexities in the scene images with text and newer applications that use them. In our work we use a general image repository that contains images suitable for various applications. We use Maximally Stable Extremal Regions (MSER) with Speeded Up Robust Features (SURF) to find features. A Support Vector Machine (SVM) is used for training and testing. Connected component analysis and filtering rules are used to get the final localized text from the scene images. To evaluate our work, we have used existing datasets for English text and our own dataset for multilingual text.
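The connected component analysis and filtering-rules stage can be sketched in pure Python: label the 4-connected regions of a binary mask, then discard regions that fail a simple size rule. The area threshold here is an illustrative stand-in for the paper's filtering rules.

```python
from collections import deque

import numpy as np

def connected_components(binary: np.ndarray):
    """4-connected component labeling on a boolean mask (BFS flood fill)."""
    labels = np.zeros(binary.shape, dtype=int)
    cur = 0
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            if binary[i, j] and labels[i, j] == 0:
                cur += 1
                labels[i, j] = cur
                q = deque([(i, j)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = cur
                            q.append((ny, nx))
    return labels, cur

mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 2:5] = True          # text-like blob (area 12)
mask[8, 8] = True              # isolated speck of noise (area 1)

labels, n = connected_components(mask)
areas = {lab: int((labels == lab).sum()) for lab in range(1, n + 1)}
kept = [lab for lab, a in areas.items() if a >= 4]   # filtering rule: drop tiny regions
```

In the full pipeline, the surviving components would then be verified by the SVM over their MSER/SURF features before final localization.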
International Journal for Research in Applied Science and Engineering Technology, Sep 29, 2023
Distance and online learning, often known as e-learning, has emerged as the new norm in training ... more Distance and online learning, often known as e-learning, has emerged as the new norm in training and education in light of recent technological advancements, thanks to a number of benefits like accessibility, cost, efficiency, and usability. Even though the online educational system offers advantages over the traditional educational system, preventing cheating and other improper behavior of students during classes and exams poses substantial difficulties. The 'Artificial Intelligence based Online Examination proctoring system' project offers a solution to the issues with online and distance education. The system makes use of movement detection, face recognition, and other biometric approaches to flag the student fraud during exams. As soon as the cheating or other irregularities are detected by the students, the system signals the manual proctor and also records these instances as evidence for subsequent monitoring. As a result, online exams can be administered effectively with fewer or no manual proctors present, which is more practical and affordable.
International journal for research in applied science and engineering technology, Apr 30, 2024
Melanoma is a type of skin cancer that arises from pigment-producing cells called melanocytes. Th... more Melanoma is a type of skin cancer that arises from pigment-producing cells called melanocytes. These cells are responsible for the production of melanin, the pigment that gives skin, hair and eyes their color. Melanoma is considered more dangerous than other types of cancer because it can spread (metastasize) to other parts of the body. Key features of melanoma include the formation of abnormal or cancerous cells in the skin pigment. The main cause of melanoma is exposure to ultraviolet (UV) radiation from the sun or tanning beds. Most malignant tumours can be treated if diagnosed early. Therefore, early diagnosis of skin cancer is important to save the lives of patients. With today's technology, skin cancer can be diagnosed early and treated effectively. The traditional method of detecting melanoma relies on eye exams where dermatologists review biopsy results, which is time-consuming. These facilities are less available in rural areas where doctors are often available but dermatologists are not. This study was designed to develop a system that uses deep learning techniques to identify melanoma. I. INTRODUCTION Melanoma is a serious tumour that forms in melanocytes and creates significant problems for dermatologists due to its potential to spread and nuances in the skin. Early diagnosis is critical to improving survival, and computer vision screening offers promising solutions. This specification involves several important steps: collection of skin images, segmentation to separate the disease from surrounding tissue, extraction of characteristic features of the lesions, and classification based on these features. Segmentation or border detection is particularly important because it outlines the boundaries of the lesion so that it is easy to determine the truth. 
By using computer vision technologies such as imaging and machine learning, doctors can improve early detection and treatment of melanoma, which can save lives. A tongue-in-cheek sign to pay particular attention to is a change in the mole's face, especially a change in size, shape, color, or texture. Unlike normal moles, melanomas often show an asymmetric shape and irregular borders. It's also important that they showcase a variety of colors rather than just one shade of brown, or so they think. The main purpose of this study is to predict the presence of melanoma using deep learning CNN to investigate various preliminary processes and changes in the CNN to achieve the accuracy of the benefits, or so they think. Ms. Gaana M et al. (March 2019) [1] published "Diagnosis of skin cancer Melanoma using Machine Learning". This work introduces some methodology and the techniques used in detecting the melanoma where data is the dermatoscopic images which are captured through a regular camera. The algorithm employed combines machine learning techniques with image processing for the detection of skin cancer. The SVM algorithm is used which was found to be most effective for detecting cancer due to its minimal disadvantages! The conclusion ss that the neural network technique is considered the best among the existing systems, by utilizing machine learning algorithms with minimal human intervention, the system can provide better and more reliable results, ultimately aiding in the early diagnosis of skin cancer! GS Gopika Krishnan et al. (March 2023) [2] published, "Skin cancer detection using Machine Learning". The dataset used in the research for skin cancer detection is from the ISIC (International Skin Image Collaboration) dataset. This dataset contains approximately 23,000 images of melanoma skin cancer. 
The algorithms used includes Back Propagation Algorithm for training multi-layer perceptron's in Neural Networks; Support Vector Machine (SVM) for classifying data into different classes based on a decision boundary and Convolutional Neural Networks for image classification tasks. The methodology involves three main phases. Phase 1-Pre-processing where images are collected from the ISIC dataset and performing pre-processing tasks such as removing hair, glare, and shading to enhance the efficiency of identifying texture, color, size, and shape parameters; Phase 2-Segmentation and Feature Extraction where Three segmentation methods (Otsu, Modified Otsu, and Watershed) were utilized to segment the images and extract features related to color, shape, size, and texture; and Phase 3-Model Design and Training where the researchers designed and trained the model using the Back Propagation Algorithm, Support Vector Machine, and Convolutional Neural Networks.
International Journal for Research in Applied Science and Engineering Technology, Mar 31, 2023
Parkinson disease prediction is an area of active research in healthcare and machine learning. Ev... more Parkinson disease prediction is an area of active research in healthcare and machine learning. Even though Parkinson's disease is not well-known worldwide, its negative impacts are detrimental and should be seriously considered. Furthermore, because individuals are so immersed in their busy lives, they frequently disregard the early signs of this condition, which could worsen as it progresses. There are many techniques for Parkinson disease prediction. In this paper we are going to discuss some of the possible technical solutions proposed by researchers. Keywords: Parkinson disease I. INTRODUCTION The brain and nervous system are both affected by Parkinson's disease, which is a neurodegenerative condition. The loss of dopamine-producing neurons in the basic ganglia is specifically related to it. The illness has negative effects on people, society, and money on a social, professional, and personal level. Individual symptoms that develop over time and vary from person to person can be divided into two categories. Motor symptoms include stiffness, slowness, also known as bradykinesia, facial expression, fewer swings of the arms, and resting tremor, whereas non-motor symptoms, which affect every system and component of the body, are unseen symptoms. These symptoms of autonomic dysfunction include perspiration, urination, and mood and thought disturbances. The primary objective of the study is to evaluate the effectiveness of various Supervised Algorithms for enhancing Parkinson Disease detection diagnosis. Parkinson Disease was predicted using K-Nearest Neighbor, Logistic Regression, Decision Tree, Naive Bayes, and XGBoost. The detection of Parkinson's disease is based on the use of different classifiers, such as Accuracy, F1-score, Recall, Precision, R2-score Total UPDRS Motor UPDRS and Confusion matrix. Amreen Khanum at el. 
[1] examined the effects of the various Supervised ML Algorithms for upgrading the diagnosis of Parkinson Disease. KNN, LR, DT, NB, and XGBoost were five machine learning techniques used to detect Parkinson's disease. The performance of the classifiers was assessed using precision, accuracy, F1-Score, and recall. Data on Parkinson's disease was obtained for this study from the UCI Machine Learning Repository. 23 speech feature sets are included in the 195 patient records that constituted this dataset. The first step was the extraction of characteristics from datasets related to Parkinson's disease. The study used five supervised learning algorithms to recognise Parkinson's illness. As a result, the performance metrics were evaluated to find the algorithm that outperformed. Muhtasim Shafi Kader at el.[2] Mushtasim Shafi Kader at the el. [] chose 195 datasets related to Parkinson's disease from the UCI machine learning library in identifying the Parkinson disease. In the specified dataset, there were 24 attributes. After training the data, they were able to identify the machine learning algorithms that were most accurate. Naive Bayes, Adaptive Boosting, Bagging Classifier, Decision Tree Classifier, Random Forest Classifier, XBG Classifier, K Nearest Neighbor Classifier, Support Vector Classifier, and Gradient Boosting Classifier were the nine machine learning algorithms that were utilized to predict the illness. Evaluation metrics analysis and confusion metrics analysis (Precision, Recall, F measure and Accuracy) have been used to calculate the study's outcomes. Algorithm with highest accuracy was found using the above-mentioned metrics analysis. Mohesh T et al.[3] The input is the Parkinson's disease voice dataset from the UCI device mastering library. Additionally, by combining the spiral drawing inputs of healthy individuals and Parkinson's patients, the gadget delivers accurate findings. 
It can be inferred that a hybrid approach accurately reads affected individuals' spiral drawings and voice data. This model aims to make this method of expertise a case of Parkinson's hence, the goal is to apply numerous machines getting to know strategies like SVM, choice Tree, for buying the maximum accurate result.
Botnets are the main threat to the security of the devices, Traditionally Botnets are in the desk... more Botnets are the main threat to the security of the devices, Traditionally Botnets are in the desktop platform ,There are different proposed techniques to detect them in the desktop platform and obtained good result in that techniques, When coming to the mobile platform the Botnets are new to this domain and they will increase rapidly in future days, Now a day’s mobiles are using like a small pc’s we are doing money transactions in mobiles and many confidential information are stored in the mobile, So providing security to the mobile platform is necessary by detecting the botnets, To detect the botnet in mobile platform already some researchers are proposed the different techniques to detect them, So in this paper we will going through a survey on different proposed techniques to detect the Botnets on mobile platform.
International Journal of Computer Networks and Applications
The growing potentialities of recent communications necessitate information security on the compu... more The growing potentialities of recent communications necessitate information security on the computer network. The various fields such as banking, E-commerce, education, and health sectors depend on the online network to communicate. Information security is becoming more significant. Hackers can get the data if it is sent as it is in an unsafe network. Therefore, security challenges like confidentiality, integrity & undetectability are essential to safeguard sensitive data from unauthorized users. To secure communicated information from a third party, it is necessary to convert the information into a scrambled form. Researchers have used various cryptographic and steganographic algorithms. The public key and private key cryptographic algorithms are suitable to scramble the input secret data. Using private key algorithms, key exchange is a challenge. Always two-level of scrambling of data is safe. After scrambling, embed it in cover media by using suitable transform domain techniques to provide higher security. In the proposed method, two-level scrambling of input secret images is carried out by applying faster processing symmetric algorithms such as Rivest Cipher 6 (RC6) & One Time Pad (OTP) to enhance the security of images. As these algorithms use the key on their own, it becomes difficult for any intruder to extract and identify the keys. Also, there is a necessity to safely send keys to the recipient. These two keys are scrambled using a public key cryptographic algorithm such as Modified Rivest-Shamir-Adleman (MRSA) algorithm. This reduces the chances of stealing the keys. Another level of security for the scrambled image is provided by embedding it in cover media using DC coefficients resulting in the stego image. Send the stego image and scrambled keys to the receiver. 
Simulation outcomes and analysis show that the proposed method provides two-level security for color image communication and key authentication.
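As a concrete illustration of the two-stage idea described above (symmetric scrambling of the image data, then public-key wrapping of the symmetric key), here is a minimal Python sketch. A one-time pad stands in for the RC6/OTP pair, and textbook RSA with toy parameters stands in for the paper's MRSA; every name and number here is illustrative, not the authors' actual implementation.

```python
import os

def otp_scramble(data: bytes, key: bytes) -> bytes:
    """One-time-pad stage: XOR each image byte with a key byte.
    The same call unscrambles, since XOR is its own inverse."""
    return bytes(b ^ k for b, k in zip(data, key))

# Toy textbook RSA to wrap the OTP key (stand-in for MRSA; real use
# would need large primes and proper padding).
P, Q = 61, 53
N = P * Q                     # public modulus
PHI = (P - 1) * (Q - 1)
E = 17                        # public exponent
D = pow(E, -1, PHI)           # private exponent (Python 3.8+)

def rsa_wrap(key: bytes) -> list:
    return [pow(b, E, N) for b in key]

def rsa_unwrap(wrapped: list) -> bytes:
    return bytes(pow(c, D, N) for c in wrapped)

image = bytes(range(16))      # stand-in for flattened pixel data
key = os.urandom(16)          # OTP key, as long as the data
cipher = otp_scramble(image, key)
sent_key = rsa_wrap(key)      # the key travels only in wrapped form

# Receiver side: unwrap the key, then unscramble the image.
recovered = otp_scramble(cipher, rsa_unwrap(sent_key))
assert recovered == image
```

The embedding of `cipher` into cover-image DC coefficients is a separate transform-domain step not shown here.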
International Journal of Engineering Applied Sciences and Technology, 2019
This paper leverages machine learning and data analysis on a wine-quality dataset: models are trained, evaluated, and used for prediction with Decision Trees and Random Forests, both to classify each wine sample as red or white and to predict its quality as low, medium, or high. Wine is a beverage made from fermented grape and other fruit juices with a low alcohol content. The quality of a wine is graded based on its taste and vintage, and tasting is a process as ancient as wine itself. Beyond flavour, many other attributes bear on quality. The 'Wine Quality' dataset analysed here represents the quality of white and red wines through different physiochemical attributes (fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulphur dioxide, total sulphur dioxide, density, pH, sulphates, and alcohol). The quality score for each wine in the dataset ranges from 0 (lowest) to 10 (highest). The analysis uncovers important relationships between a wine's chemical contents, such as acidity and sugar levels, and its quality. The dataset exhibits a vast and distinct range of chemical and acidic combinations across the two wine types. By employing smart data-analysis techniques, a handful of important and interesting insights can be unearthed that help predict wine quality and type, and that would also be valuable to the producing company's financial and business operations.
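The train-predict-evaluate pipeline described above can be sketched with scikit-learn. The synthetic feature matrix, the two-class quality labels, and the rule that generates them are stand-ins of my own devising, since the abstract does not reproduce the real dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 11))   # 11 physiochemical attributes, as in the dataset

# Hypothetical rule for the demo: alcohol (col 10) and fixed acidity
# (col 0) drive quality; 1 = high quality, 0 = low quality.
quality = (X[:, 10] - 0.5 * X[:, 0] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, quality, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)
preds = model.predict(X_te)
acc = (preds == y_te).mean()   # held-out accuracy of the forest
```

Swapping `RandomForestClassifier` for `DecisionTreeClassifier` gives the paper's other model under the same train/test split.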
Detecting text regions in natural scene images has become an important area due to its various applications. A Text Information Extraction (TIE) system involves detecting text regions in a given image, localizing them, extracting the text part, and recognizing the text using OCR. This work concentrates on the detection and extraction of text in natural scene images. The test image is pre-processed using RGB-to-gray conversion, binarization, edge detection, and geometry-based noise removal. Features are extracted from the pre-processed image and used by a trained SVM classifier to detect the text regions. After the text regions are detected, characters are extracted and finally displayed.
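The pre-processing chain (gray conversion, binarization, edge detection) can be sketched in plain NumPy. The luminosity weights, the global threshold, and the 4-neighbour edge rule below are common simplifications, not necessarily the exact operations the paper uses.

```python
import numpy as np

def rgb_to_gray(img):
    """Luminosity RGB-to-gray conversion (ITU-R BT.601 weights)."""
    return img @ np.array([0.299, 0.587, 0.114])

def binarize(gray, thresh=128):
    """Global-threshold binarization; illustrative threshold value."""
    return (gray > thresh).astype(np.uint8)

def edges(binary):
    """Crude edge map: a pixel is an edge if a 4-neighbour differs."""
    e = np.zeros_like(binary)
    e[:-1, :] |= binary[:-1, :] != binary[1:, :]
    e[:, :-1] |= binary[:, :-1] != binary[:, 1:]
    return e

rgb = np.zeros((8, 8, 3))
rgb[2:6, 2:6] = 255          # bright square on a dark background
edge_map = edges(binarize(rgb_to_gray(rgb)))
```

In the full pipeline, features computed from such an edge map would feed the SVM classifier.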
Text extraction refers to the process of separating the text region from a given image. Scene text extraction is more challenging than document text extraction because of degradations such as shadows, reflections from background surfaces, and uneven lighting conditions. In this paper, we propose an adaptive method for detecting and extracting text from natural scene images that is robust against shadows and uneven lighting conditions. The proposed method uses an adaptive thresholding technique to binarize the image and smooth out the degradation factors mentioned. Canny edge detection is used to obtain an edge image, and block-wise localization is used to remove non-text areas from the image. Connected component analysis is used for extracting text from the image. The method is applied to images both with and without shadows and uneven lighting conditions, and experimental results show that its performance is robust in both cases.
International Journal of Pure and Applie...
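A minimal sketch of local-mean adaptive thresholding, the step that gives the robustness to shadows claimed above: each pixel is compared against the mean of its own neighbourhood rather than a single global value, so a gradual illumination gradient does not flip whole regions. The block size and offset `c` are illustrative defaults, not the paper's tuned parameters.

```python
import numpy as np

def adaptive_threshold(gray, block=5, c=2):
    """Mark a pixel 1 (background/bright) when it exceeds its local
    block mean minus c; dark strokes such as text come out as 0."""
    pad = block // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    out = np.zeros_like(gray, dtype=np.uint8)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            local_mean = padded[i:i + block, j:j + block].mean()
            out[i, j] = 1 if gray[i, j] > local_mean - c else 0
    return out

# Demo: a horizontal illumination gradient with one dark vertical stroke.
gray = np.tile(np.arange(8) * 10, (8, 1)).astype(float)
gray[:, 4] -= 40                       # the "text" stroke
binary = adaptive_threshold(gray)
```

A single global threshold would split this gradient image in half; the local comparison isolates only the stroke.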
International Journal of Engineering Applied Sciences and Technology
A hybrid approach combining data encryption and steganography is used in our work. The motivation is to provide a simple and smart image-steganographic technique capable of producing a good-quality stego image. Image steganography is a technique in which pixel intensities are used to hide data. In this approach, the secret information is first encrypted, and the encrypted bits are then embedded into an image: logistic chaotic maps are used for the encryption, and the LSB technique is used for the embedding. To increase unpredictability, we employ different combinations of scan patterns for encryption and embedding. The approach is more secure against attack, and its stego image is indistinguishable from the original image to the human eye.
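The encrypt-then-embed pipeline can be sketched directly: a logistic-map keystream encrypts the secret bytes, and the resulting bits are written into pixel least-significant bits. The map parameters `x0` and `r` act as the secret key here; the scan-pattern combinations the paper mentions are omitted for brevity, and all values are illustrative.

```python
import numpy as np

def logistic_keystream(n, x0=0.7, r=3.99):
    """Iterate the logistic map x -> r*x*(1-x) in its chaotic regime,
    turning each iterate into one keystream byte."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = int(x * 256) % 256
    return out

def lsb_embed(cover, bits):
    """Write one payload bit into the LSB of each cover pixel."""
    stego = cover.copy()
    stego[:len(bits)] = (stego[:len(bits)] & 0xFE) | bits
    return stego

secret = np.frombuffer(b"hi", dtype=np.uint8)
cipher = secret ^ logistic_keystream(secret.size)   # chaotic encryption
bits = np.unpackbits(cipher)                        # 16 payload bits
cover = np.arange(64, dtype=np.uint8)               # stand-in cover image
stego = lsb_embed(cover, bits)

# Extraction reverses the pipeline: read LSBs, repack, decrypt.
rec_bits = stego[:bits.size] & 1
rec = np.packbits(rec_bits) ^ logistic_keystream(secret.size)
assert rec.tobytes() == b"hi"
```

Each stego pixel differs from the cover by at most 1 intensity level, which is why the change is invisible to the eye.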
2019 International Conference on Data Science and Communication (IconDSC)
Text detection is challenging due to the complexity of scene images containing text and the newer applications that use them. In our work we use a general image repository containing images suitable for various applications. Maximally Stable Extremal Regions (MSER) combined with Speeded-Up Robust Features (SURF) are used to find features, and a Support Vector Machine (SVM) is used for training and testing. Connected component analysis and filtering rules are then used to obtain the final localized text from the scene images. To evaluate our work, we used existing datasets for English text and our own dataset for multilingual text.
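MSER and SURF need specialised vision libraries, but the final connected-component-analysis and filtering step can be sketched in plain Python. The 4-connectivity and the minimum-area rule below are illustrative choices, not the paper's exact filtering rules.

```python
import numpy as np
from collections import deque

def connected_components(binary):
    """4-connected component labelling by BFS flood fill."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            if binary[i, j] and not labels[i, j]:
                current += 1
                labels[i, j] = current
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                                and binary[ny, nx] and not labels[ny, nx]):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current

def filter_by_area(labels, n, min_area=3):
    """Example filtering rule: drop components too small to be characters."""
    return [k for k in range(1, n + 1) if (labels == k).sum() >= min_area]

mask = np.zeros((6, 8), dtype=bool)
mask[1:4, 1:4] = True        # large blob: a plausible character region
mask[5, 7] = True            # isolated speck: noise to be filtered out
labels, n = connected_components(mask)
kept = filter_by_area(labels, n)
```

Real systems add rules on aspect ratio, stroke width, and alignment on top of area.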
Papers by Sankhya Nayak