2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2015
In this paper, we propose a novel deep convex network method for domain adaptation in multitemporal remote sensing imagery. We fuse the capabilities of the extreme learning machine (ELM) classifier and local feature descriptor techniques to boost the classification accuracy. We use the Affine Scale-Invariant Feature Transform (ASIFT) to extract key points from the image pair, i.e., the source and target domain images. The neural network consists of two layers: the first layer uses the keypoints extracted by ASIFT to map the training points of the source image to the target image, while the second layer performs the classification. Experimental results obtained on multitemporal VHR images acquired by the IKONOS2 sensor confirm the promising capability of the proposed method.
This paper deals with the problem of classifying large-scale very high-resolution (VHR) remote sensing (RS) images in a semisupervised scenario, where we have a limited training set (fewer than ten training samples per class). Typical pixel-based classification methods are unfeasible for large-scale VHR images. Thus, as a practical and efficient solution, we propose to subdivide the large image into a grid of tiles and then classify the tiles instead of classifying pixels. Our proposed method uses the power of a pretrained convolutional neural network (CNN) to first extract descriptive features from each tile. Next, a neural network classifier (composed of two fully connected layers) is trained in a semisupervised fashion and used to classify all remaining tiles in the image. This yields a coarse classification of the image, which is sufficient for many RS applications. The second contribution deals with the employment of semisupervised learning to improve the …
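The tile-subdivision step described above can be sketched as follows; the tile size and the toy image are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def split_into_tiles(image, tile_size):
    """Subdivide a large image (H x W x C) into a grid of
    non-overlapping square tiles, dropping partial border tiles."""
    h, w = image.shape[:2]
    tiles = []
    for top in range(0, h - tile_size + 1, tile_size):
        for left in range(0, w - tile_size + 1, tile_size):
            tiles.append(image[top:top + tile_size, left:left + tile_size])
    return np.stack(tiles)

# Toy example: a 256x256 RGB "image" split into 64x64 tiles -> 16 tiles,
# each of which would then be fed to the pretrained CNN for features.
img = np.zeros((256, 256, 3), dtype=np.uint8)
tiles = split_into_tiles(img, 64)
print(tiles.shape)  # (16, 64, 64, 3)
```

Classifying 16 tiles instead of 65,536 pixels is what makes the coarse-classification strategy tractable at scale.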
Liver cancer is a life-threatening illness and one of the fastest-growing cancer types in the world. Consequently, the early detection of liver cancer leads to lower mortality rates. This work aims to build a model that will help clinicians determine the type of tumor when it occurs within the liver region by analyzing images of tissue taken from a biopsy of this tumor. Working at this stage requires effort, time, and the accumulated experience that a tissue expert must possess to determine whether the tumor is malignant and needs treatment. Thus, a histology expert can make use of this model to obtain an initial diagnosis. This study proposes a deep learning model using convolutional neural networks (CNNs), which is able to transfer knowledge from pre-trained global models and distill this knowledge into a single model to help diagnose liver tumors from CT scans. Thus, we obtained a hybrid model capable of detecting CT images of a biopsy of a liver tumor. The best r…
Collaborative filtering has been the most straightforward and most preferred approach in recommender systems. This technique recommends an item to a target user based on the preferences of the top-k most similar neighbors. In a sparse data scenario, the recommendation accuracy of collaborative filtering degrades significantly due to the limitations of existing similarity measures. Such constraints leave open scope for enhancing the accuracy of optimized user-specific recommendations. Many techniques have been utilized for this, such as Particle Swarm Optimization and other evolutionary collaborative filtering algorithms. The proposed approach utilizes the Apriori algorithm to form user profiles from item ratings and categorical attributes. The profile of each user captures the likes and dislikes of the categorical characteristics of items. The efficiency of the proposed recommendation approach is tested on the MovieLens dataset. The comparative results show that the proposed algorithm outperforms other prominent collaborative filtering algorithms on the MovieLens datasets in terms of rating-prediction accuracy.
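A minimal sketch of the profile-building idea: mine frequently co-occurring categorical attributes among a user's highly rated items, in the spirit of an Apriori pass. The rating threshold, support value, and genre data are illustrative assumptions; the paper's exact Apriori configuration is not reproduced here.

```python
from itertools import combinations
from collections import Counter

def build_profile(rated_items, item_genres, like_threshold=4, min_support=2):
    """Return the frequent genre sets (sizes 1 and 2) among a user's
    liked items -- a tiny Apriori-style pass over one user's ratings."""
    liked = [item for item, r in rated_items.items() if r >= like_threshold]
    counts = Counter()
    for item in liked:
        genres = sorted(item_genres[item])
        for g in genres:
            counts[(g,)] += 1
        for pair in combinations(genres, 2):
            counts[pair] += 1
    return {s for s, c in counts.items() if c >= min_support}

# Hypothetical toy data: movie IDs, 1-5 star ratings, genre tags.
ratings = {"m1": 5, "m2": 4, "m3": 2, "m4": 5}
genres = {"m1": {"action", "sci-fi"}, "m2": {"action", "sci-fi"},
          "m3": {"romance"}, "m4": {"action"}}
profile = build_profile(ratings, genres)
print(profile)  # includes ('action',), ('sci-fi',), ('action', 'sci-fi')
```

Items whose attributes overlap such a frequent-set profile can then be scored higher for that user, which is how the mined profile feeds back into recommendation.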
Searching images in large image databases is one of the important research areas of multimedia research. The most challenging task for any CBIR system is to capture the high-level semantics of the user. Researchers in the multimedia domain have tried to address this issue with the help of Relevance Feedback (RF). However, existing RF-based approaches need a number of iterations to fulfill the user's requirements. This paper proposes a novel methodology that achieves better results in early iterations, reducing the user's interaction with the system. Previous research has reported that SVM-based RF approaches generate better results for CBIR; therefore, this paper focuses on the SVM-based RF approach. To enhance its performance, this work applies Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA) before applying the SVM to the user feedback. The main objective of using these meta-heuristics is to increase the positive image sample size for the SVM. First, PSO is applied by incorporating the user feedback; second, GA is applied to the result generated by PSO; finally, SVM is applied using the positive samples generated by GA. The proposed technique is named Particle Swarm Optimization Genetic Algorithm-Support Vector Machine Relevance Feedback (PSO-GA-SVM-RF). Precision, recall, and F-score are used as performance metrics for the assessment and validation of the PSO-GA-SVM-RF approach, and experiments are conducted on the Corel image dataset containing 10,908 images. The experimental results show that the PSO-GA-SVM-RF approach outperforms various well-known CBIR approaches.
COVID-19 has evolved into one of the most severe and acute illnesses. The number of deaths continues to climb despite the development of vaccines, and new strains of the virus have appeared. The early and precise recognition of COVID-19 is key to viably treating patients and containing the pandemic as a whole. Deep learning technology has been shown to be a significant tool in diagnosing COVID-19 and in assisting radiologists to detect anomalies and numerous diseases during this epidemic. This research seeks to provide an overview of novel deep learning-based applications for the medical imaging modalities of computed tomography (CT) and chest X-rays (CXR) for the detection and classification of COVID-19. First, we give an overview of the taxonomy of medical imaging and present a summary of the types of deep learning (DL) methods. Then we present an overview of systems created for COVID-19 detection and classification using deep learning techniques. We also give a rundown of the most we…
2013 World Congress on Computer and Information Technology (WCCIT), 2013
Minimizing data overheads in packet networks is an essential performance-improvement issue. One of the past solutions was to select a single optimum packet size that minimizes the combined data-overhead factor resulting from both operational and blank-padding requirements. Such a packet size is tied to the characteristics of the random streams of messages applied to the network. This paper provides a different method that leads to improved minimization of the overheads. The method is based on selecting multiple packet sizes, each concerned with minimizing the data overheads of messages falling within a certain range of lengths. The performance of the method was studied using computer simulation. The results obtained illustrate that, in comparison with the old method, the new method can reduce the combined data-overhead factor by approximately 25%. Future use of the method would provide more efficient data flow through packet networks.
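The trade-off the method exploits can be illustrated numerically. In this sketch, each packet carries a fixed header (operational overhead) and the last packet of a message is padded to a full payload (blank padding); the header size and message lengths are made-up illustrative values, not the paper's simulation parameters.

```python
import math

HEADER = 8  # bytes of per-packet operational overhead (illustrative)

def overhead_factor(messages, payload_size):
    """Combined overhead (header + padding bytes) divided by total
    message bytes, when every message uses one fixed payload size."""
    total_overhead = 0
    for m in messages:
        n_packets = math.ceil(m / payload_size)
        padding = n_packets * payload_size - m
        total_overhead += n_packets * HEADER + padding
    return total_overhead / sum(messages)

def multi_size_factor(messages, candidates):
    """Each message uses the candidate payload size that minimizes its
    own overhead -- a stand-in for the multiple-packet-size idea."""
    total_overhead = sum(
        min(math.ceil(m / p) * (HEADER + p) - m for p in candidates)
        for m in messages)
    return total_overhead / sum(messages)

msgs = [40, 300, 1200, 65, 900]          # toy message lengths in bytes
print(overhead_factor(msgs, 256))        # single fixed payload size
print(multi_size_factor(msgs, [64, 256, 1024]))  # lower combined factor
```

Short messages avoid heavy padding by choosing the small payload size, while long messages amortize headers with the large one, which is why letting each length range pick its own size reduces the combined factor.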
In this article, we propose a novel approach based on convolutional features and a sparse autoencoder (AE) for scene-level land-use (LU) classification. This approach starts by generating an initial feature representation of the scenes under analysis from a deep convolutional neural network (CNN) pre-learned on a large amount of labelled data from an auxiliary domain. These convolutional features are then fed as input to a sparse AE, which learns a new, more suitable representation in an unsupervised manner. After this pre-training phase, we propose two different scenarios for building the classification system. In the first scenario, we add a softmax layer on top of the AE encoding layer and then fine-tune the resulting network in a supervised manner using the target training images available at hand. We then classify the test images based on the posterior probabilities provided by the softmax layer. In the second scenario, we view the classification problem from a reconstruction perspe…
Ear print is an emerging biometric modality that has been attracting increasing attention in the biometric community. However, compared with well-established modalities such as face and fingerprints, a limited number of contributions have been offered on ear imaging. Moreover, only a few studies address the aspect of ear characterization (i.e., feature design). In this respect, in this paper we propose a novel descriptor for ear recognition. The proposed descriptor, dense local phase quantization (DLPQ), is based on the phase responses generated using the well-known LPQ descriptor. Local dense histograms are extracted from the horizontal stripes of the phase maps, followed by a pooling operation to address viewpoint changes, and finally concatenated into an ear descriptor. Although the proposed DLPQ descriptor is built on the traditional LPQ, we show that drastic improvements (of over 20%) are attained with respect to the latter descriptor on two benchmark data sets. Furthermore, the proposed descriptor stands out among recent ear descriptors from the literature. INDEX TERMS Ear imaging, biometrics, ear recognition, feature design, local histograms.
A new speech feature extraction technique called moving average multi-directional local features (MA-MDLF) is presented in this paper. This method is based on linear regression (LR) and moving averages (MA) in the time–frequency plane. A three-point LR is taken along the time axis and the frequency axis, and a three-point MA is taken along the 45° and 135° directions in the time–frequency plane. The LR captures the voice onset/offset and formant contour, while the moving average captures the dynamics on the time–frequency axes, which can be seen as voiceprints. The performance of MA-MDLF is compared to commonly used speech features in speaker recognition. The comparison is performed in a speaker recognition system (SRS) for three different conditions, namely clean speech, mobile speech, and cross-channel speech. MA-MDLF shows better performance than the baseline MFCC, RASTA-PLP, and LPCC features: it performs best on clean and mobile speech, and also performs very well on the cross-channel task. We also evaluated MA-MDLF using three speech databases, namely the KSU, LDC Babylon, and TIMIT databases, and found that MA-MDLF outperformed the other commonly used features on all three. The first and second databases contain Arabic speech, while the third contains English speech.
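A rough sketch of the two ingredients on a spectrogram-like array: a three-point linear-regression slope along one axis and a three-point moving average along the 45° diagonal. The exact windowing and normalization of MA-MDLF are not reproduced here; this only illustrates the local operators.

```python
import numpy as np

def three_point_slope(spec, axis=0):
    """Three-point least-squares slope at each interior position.
    For equally spaced samples y[-1], y[0], y[1] the LS slope
    reduces to (y[1] - y[-1]) / 2."""
    s = np.moveaxis(spec, axis, 0)
    slope = (s[2:] - s[:-2]) / 2.0
    return np.moveaxis(slope, 0, axis)

def diag_moving_average(spec):
    """Three-point moving average along the 45-degree diagonal."""
    return (spec[:-2, :-2] + spec[1:-1, 1:-1] + spec[2:, 2:]) / 3.0

spec = np.arange(25, dtype=float).reshape(5, 5)  # toy time-frequency map
print(three_point_slope(spec, axis=0))   # constant 5.0 (rows step by 5)
print(diag_moving_average(spec))         # smooths along the diagonal
```

On a real spectrogram the slope maps respond to onsets/offsets and formant movement, while the diagonal averages smooth along rising and falling trajectories, matching the roles the abstract assigns to LR and MA.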
Pedestrian path prediction is an emerging topic in the crowd visual analysis domain, notwithstanding its practical importance in many respects. To date, the few contributions in the literature have proposed quite straightforward approaches, and only a few of them take into account the interaction between pedestrians as a paramount cue in forecasting their potential walking preferences in a given scene. Moreover, the typical trend has been to evaluate the proposed algorithms on sparse scenarios. To cope with more realistic cases, in this paper we present an efficient approach for pedestrian path prediction in densely crowded scenes. The proposed approach starts by extracting motion features related to the target pedestrian and his/her neighbors. Second, to further increase the representativeness of the extracted motion cues, an autoencoder feature learning model is considered, whose outcome finally feeds a Gaussian process regression prediction model to infer the potential future trajectories of the target pedestrians given their walking records in the scene. Experimental results demonstrate that our framework scores plausible results and outperforms traditional methods in the literature. INDEX TERMS Crowd analysis, walking path prediction, motion modeling, computer vision.
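The final stage, Gaussian process regression from motion features to future positions, can be sketched with a minimal RBF-kernel GP. The kernel parameters are illustrative, and the paper's learned motion features are replaced here by toy 1-D walking records.

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel matrix between rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def gp_predict(X, y, Xs, noise=1e-6):
    """Posterior mean of a zero-mean GP at test inputs Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    return Ks @ np.linalg.solve(K, y)

# Toy "walking record": 1-D positions observed at times 0..4; query the
# posterior at an observed time (2.0) and a future time (4.5).
X = np.arange(5, dtype=float)[:, None]
y = np.array([0.0, 1.0, 1.8, 2.6, 3.5])
pred = gp_predict(X, y, np.array([[2.0], [4.5]]))
print(pred)  # first value is close to the observed position 1.8
```

In the paper's pipeline the inputs would be the autoencoder-refined motion features rather than raw timestamps, but the regression machinery is the same.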
In this paper, we propose a transfer learning approach for arrhythmia detection and classification across ECG databases. This approach relies on a deep convolutional neural network (CNN) pretrained on an auxiliary domain (ImageNet) with a very large set of labelled images, coupled with an additional network composed of fully connected layers. As the pretrained CNN accepts only RGB images as input, we apply the continuous wavelet transform (CWT) to the ECG signals under analysis to generate an over-complete time–frequency representation. Then, we feed the resulting image-like representations as inputs into the pretrained CNN to generate the CNN features. Next, we train the additional fully connected network on the ECG labeled data represented by the CNN features in a supervised way by minimizing the cross-entropy error with dropout regularization. The experiments reported on the MIT-BIH arrhythmia, INCART, and SVDB databases show that the proposed method can achieve better results for the detection of ventricular ectopic beats (VEB) and supraventricular ectopic beats (SVEB) compared to state-of-the-art methods.
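A minimal sketch of turning a 1-D ECG segment into an image-like scalogram with a hand-rolled Morlet CWT; the paper's exact wavelet, scale set, and preprocessing are not specified here, so these are illustrative choices. The magnitude map can then be normalized and replicated to three channels for the pretrained CNN.

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet sampled at times t for a given scale."""
    x = t / scale
    return np.exp(1j * w0 * x) * np.exp(-0.5 * x**2) / np.sqrt(scale)

def cwt_scalogram(signal, scales, fs=360.0):
    """|CWT| of a 1-D signal: one row per scale -> an image-like map."""
    n = len(signal)
    t = (np.arange(n) - n // 2) / fs
    rows = [np.abs(np.convolve(signal, morlet(t, s), mode="same"))
            for s in scales]
    return np.stack(rows)

# Toy "ECG": a 5 Hz sine sampled at 360 Hz (MIT-BIH's sampling rate).
fs = 360.0
sig = np.sin(2 * np.pi * 5 * np.arange(720) / fs)
scalogram = cwt_scalogram(sig, scales=np.linspace(0.01, 0.3, 32), fs=fs)
print(scalogram.shape)  # (32, 720): ready to render as an RGB input
```

Each row of the scalogram responds to one scale (roughly, one frequency band), which is what gives the over-complete time–frequency picture the CNN consumes.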
IEEE Transactions on Geoscience and Remote Sensing
In this paper, we present a domain adaptation network to deal with classification scenarios subject to the data-shift problem (i.e., labeled and unlabeled images acquired with different sensors and over completely different geographical areas). We rely on the power of pretrained convolutional neural networks (CNNs) to generate an initial feature representation of the labeled and unlabeled images under analysis, referred to as the source and target domains, respectively. Then we feed the resulting features to an extra network placed on top of the pretrained CNN for further learning. During the fine-tuning phase, we learn the weights of this network by jointly minimizing three regularization terms: 1) the cross-entropy error on the labeled source data; 2) the maximum mean discrepancy between the source and target data distributions; and 3) the geometrical structure of the target data. Furthermore, to obtain robust hidden representations, we propose a mini-batch gradient-based optimization method with a dynamic sample size for the local alignment of the source and target distributions. To validate the method, in the experiments we use the University of California Merced data set and a new multisensor data set acquired over several regions of the Kingdom of Saudi Arabia. The experiments show that: 1) pretrained CNNs offer an interesting solution for image classification compared to state-of-the-art methods; 2) their performance can be degraded when dealing with data sets subject to the data-shift problem; and 3) the proposed approach represents a promising solution for effectively handling this issue.
Papers by Esam Othman