Papers by Khulumani Sibanda
International Journal of Computer Networks and Applications, Sep 30, 2023
CR (cognitive radio) technology has become an attractive field of research owing to the increased demand for spectrum resources. One of the duties of this technology is spectrum sensing, which involves the opportunistic identification of vacant frequency bands for occupation by unlicensed users. Various traditional and state-of-the-art machine-learning algorithms have been proposed for sensing these vacant frequency bands. However, the common drawbacks of the proposed traditional techniques are degraded performance at low signal-to-noise ratios (SNR) as well as the requirement for prior information about the licensed user's signal characteristics. Moreover, several machine-learning / deep-learning techniques depend on simulated, supervised, and static (batch) spectrum datasets with synthesized features, which is not the case with real-world networks. Hence, this study aims to optimize real-time and dynamic spectrum sensing in wireless networks by establishing and evaluating a novel K-means-LSTM model (artifact) that is robust to low SNR and does not require a supervised spectrum dataset. Firstly, the unsupervised spectrum dataset was collected by an RTL-SDR dongle and labelled by the K-means algorithm in MATLAB. The labelled spectrum dataset was used to train the LSTM algorithm. The resultant LSTM model's performance was evaluated and compared against other commonly used spectrum-detection models. Findings revealed that the proposed model built from the K-means and LSTM algorithms yielded a Pd (detection probability) of 94%, a Pfa (false-alarm probability) of 71%, and an accuracy of 97% at low SNR such as -20 dB, a performance superior to that of the other models. Using our proposed model, it is possible to optimize real-time spectrum sensing at low SNR without a prior supervised spectrum dataset.
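The K-means labelling step of the pipeline can be sketched as follows. This is a minimal illustration only, assuming the RTL-SDR capture reduces to per-band power readings; the power values are hypothetical, and in practice a library implementation (e.g. MATLAB's kmeans or scikit-learn) would be used:

```python
def kmeans_1d(samples, iters=20):
    """Tiny 1-D K-means with k=2: cluster power readings into
    vacant (cluster 0, low power) and occupied (cluster 1, high power)."""
    c0, c1 = min(samples), max(samples)  # spread-out initial centroids
    labels = [0] * len(samples)
    for _ in range(iters):
        labels = [0 if abs(s - c0) <= abs(s - c1) else 1 for s in samples]
        low = [s for s, lab in zip(samples, labels) if lab == 0]
        high = [s for s, lab in zip(samples, labels) if lab == 1]
        if low:
            c0 = sum(low) / len(low)
        if high:
            c1 = sum(high) / len(high)
    return labels, (c0, c1)

# Hypothetical per-band power readings in dBm: low ~ vacant, high ~ occupied.
powers = [-95, -97, -94, -60, -58, -96, -55, -93]
labels, centroids = kmeans_1d(powers)
```

The resulting cluster labels would then serve as the training targets for the LSTM sequence model, which is the step that removes the need for a supervised dataset.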
International Journal of Fog Computing, 2020
Wireless mesh networks (WMNs) are known for good attributes such as low up-front cost, easy network maintenance, and reliable service coverage. This has led to their adoption in various environments such as school campus networks, community networking, pervasive healthcare, office and home automation, emergency rescue operations, and ubiquitous wireless networks. The routing nodes are equipped with self-organizing and self-configuring capabilities. However, the routing mechanisms of WMNs depend on the collaboration of all participating nodes for reliable network performance. The authors of this paper have noted that most routing algorithms proposed for WMNs in the last few years are designed on the assumption that all participating nodes will collaboratively relay data packets originating from a source to a multi-hop destination. Such a design approach, however, exposes WMNs to vulnerabilities such as the malicious packet-drop attack. This paper presents an evaluation of the effect of the black hole attack, together with other influential factors, in WMNs. In this study, the NS-3 simulator was used with AODV as the routing protocol. The results show that the packet delivery ratio and throughput of a WMN under attack decrease sharply compared with a WMN free from attack. On average, 47.41% of the transmitted data packets were dropped in the presence of the black hole attack.
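The headline metric of the study can be made concrete with a short sketch; the packet counts below are hypothetical and chosen only to mirror the reported ~47.41% average drop:

```python
def packet_delivery_ratio(sent, received):
    """PDR = packets received at destinations / packets sent by sources."""
    return received / sent if sent else 0.0

sent = 10_000
dropped = 4_741          # packets silently discarded by black-hole nodes
pdr = packet_delivery_ratio(sent, sent - dropped)
```

A PDR of roughly 0.53 under attack, against close to 1.0 in an attack-free run, is the kind of sharp decrease the simulations report.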
Fog computing plays a pivotal role in the Internet of Things (IoT) ecosystem because of its ability to support delay-sensitive tasks, bringing resources from cloud servers closer to the "ground" to support IoT devices that are resource-constrained. Although fog computing offers many benefits, such as quick response to requests, geo-distributed data processing, and data processing in the proximity of the IoT devices, the exponential increase in IoT devices and the large volumes of data being generated have led to a new set of challenges. One such challenge is the allocation of resources to IoT tasks to match their computational needs and QoS requirements whilst meeting task deadlines. Most solutions in existing works suggest task-offloading mechanisms where IoT devices offload their tasks randomly to the fog layer. This helps in minimizing communication delay; however, many tasks end up missing their deadlines because of the delays incurred while a fog node decides whether to process part of a task or offload it to the next fog node. In this paper, we propose a Resource Allocation Scheduler (RAS) at the IoT-fog gateway whose goal is to decide where and when a task is to be offloaded, either to the fog layer or to the cloud layer, based on its priority, computational needs, and QoS requirements, and to minimize round-trip time. The study followed the four phases of the top-down methodology. To test the efficiency and effectiveness of the RAS, a model was evaluated in a simulated smart-home setup. The key metrics used were queuing time, offloading time, and throughput. The results showed that the RAS helps in minimizing round-trip time, increasing throughput, and improving QoS. Furthermore, the approach addressed the starvation problem that was affecting low-priority tasks. Most importantly, the results provide evidence that if resource allocation and assignment are done properly, round-trip time (queuing time plus offloading time) can be reduced and QoS can be improved in fog computing.
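A toy version of the fog-or-cloud placement decision might look like the following. The priority levels, deadline threshold, and task names are hypothetical illustrations, not values from the paper:

```python
import heapq

def assign_tasks(tasks, fog_deadline_ms=50):
    """Pop tasks in priority order (lowest number = most urgent) and
    place deadline-critical ones in the fog layer, the rest in the cloud."""
    heap = list(tasks)          # entries are (priority, deadline_ms, name)
    heapq.heapify(heap)
    placement = {}
    while heap:
        _prio, deadline_ms, name = heapq.heappop(heap)
        placement[name] = "fog" if deadline_ms <= fog_deadline_ms else "cloud"
    return placement

tasks = [(2, 200, "nightly-backup"), (0, 10, "smoke-alarm"), (1, 40, "sensor-read")]
placement = assign_tasks(tasks)
```

A real scheduler would also age low-priority tasks (boosting their priority as they wait) to address the starvation problem the paper mentions.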
International Journal of Computer Applications, Nov 18, 2014
Artificial Neural Networks (ANNs) have attracted increasing attention from researchers in many fields. One area in which ANNs have featured prominently is the forecasting of TCP/IP network traffic trends. Their ability to model almost any kind of function, regardless of its degree of nonlinearity, positions them as good candidates for predicting self-similar time series such as TCP/IP traffic. In spite of this, one of the most difficult and least understood tasks in the design of ANN models is the selection of the most appropriate learning rate. Although some guidance in the form of heuristics is available for the choice of this parameter, none has been universally accepted. In this paper we empirically investigate various learning rates with the aim of determining the optimum learning rate for the generalization ability of an ANN trained to forecast TCP/IP network traffic trends. MATLAB Version 7.4.0.287's Neural Network Toolbox version 5.0.2 (R2007a) was used for our experiments. We found from the simulation experiments that small learning rates generally produced consistent and better results, whereas large learning rates appeared to cause oscillations and inconsistent results. Depending on the difficulty of the problem at hand, it is advisable to set the learning rate to 0.1 for the standard backpropagation algorithm, and to either 0.1 or 0.2 if used in conjunction with a momentum term of 0.5 or 0.6. We advise minimal use of the momentum term, as it greatly interferes with the training process of ANNs. While experimental results cannot cover all practical situations, our results do help to explain common behavior that does not agree with some theoretical expectations.
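The oscillation effect of large learning rates can be reproduced on a toy quadratic loss; this sketch is purely illustrative and is not the paper's network or data:

```python
def gradient_descent(lr, steps=50, x0=5.0):
    """Minimise f(x) = x**2 by gradient descent. Each step multiplies
    x by (1 - 2*lr), so |1 - 2*lr| > 1 means oscillating divergence."""
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x  # f'(x) = 2x
    return abs(x)

small = gradient_descent(0.1)  # converges smoothly toward the minimum at 0
large = gradient_descent(1.1)  # overshoots and oscillates with growing amplitude
```

The same qualitative trade-off (small rates: slow but consistent; large rates: oscillation) is what the simulation experiments report for backpropagation.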
International Journal of Computer Science and Application, 2015
Artificial Neural Networks (ANNs) have attracted increasing attention from researchers in many fields. They have proved to be one of the most powerful tools in the domain of forecasting and analysis of various time series. The ability to model almost any kind of function, regardless of its degree of nonlinearity, positions ANNs as good candidates for predicting and modelling self-similar time series such as TCP/IP traffic. In spite of this, one of the most difficult and least understood tasks in the design of ANN models is the selection of the most appropriate learning rate. Although some guidance in the form of heuristics is available for the choice of this parameter, none has been universally accepted. In this paper we empirically investigate various learning rates with the aim of determining the optimum learning rate for the generalization ability of an ANN trained to forecast TCP/IP network traffic trends. MATLAB Version 7.4.0.287's Neural Network Toolbox version 5.0.2 (R2007a) was used for our experiments. The results are promising in terms of ease of design and use of ANNs. We found from the experiments that, depending on the difficulty of the problem at hand, it is advisable to set the learning rate to 0.1 for the standard backpropagation algorithm, and to either 0.1 or 0.2 if used in conjunction with a momentum term of 0.5 or 0.6. We advise minimal use of the momentum term, as it greatly interferes with the training process of ANNs. Although the information obtained from the tests carried out in this paper is specific to the problem considered, it provides users of backpropagation networks with a valuable guide to the behaviour of ANNs under a wide range of operating conditions. It is important to note that the guidelines accrued from this paper are assistive, and not necessarily restrictive, in nature to potential ANN modellers.
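The momentum term discussed above can be sketched on the same kind of toy quadratic loss; the values are illustrative only, not the paper's network:

```python
def gd_with_momentum(lr=0.2, momentum=0.5, steps=60, x0=5.0):
    """Backpropagation-style update with a momentum term on f(x) = x**2:
    the velocity accumulates past gradients, smoothing each step."""
    x, velocity = x0, 0.0
    for _ in range(steps):
        grad = 2 * x
        velocity = momentum * velocity - lr * grad
        x += velocity
    return abs(x)

final = gd_with_momentum()  # converges with lr=0.2, momentum=0.5
```

With these settings the iterates still converge, but the accumulated velocity can overshoot on harder problems, which is consistent with the paper's advice to use the momentum term sparingly.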
International Journal of Computer Applications, Mar 18, 2015
In this paper we empirically investigate various training set sizes with the aim of determining the optimum training set size for the generalization ability of an ANN trained to forecast TCP/IP network traffic trends. We found from both the simulation experiments and the literature that the best training set size can be obtained by selecting training samples randomly, numbering between 5× and 10×, depending on the difficulty of the problem under consideration.
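The random-selection step can be sketched as follows; the dataset and subset size below are placeholders (the abstract leaves the 5×/10× multiplicand unspecified, so no particular size is implied):

```python
import random

def draw_training_set(data, k, seed=42):
    """Draw k training samples uniformly at random, without replacement,
    as the paper recommends for building the training set."""
    return random.Random(seed).sample(data, k)

data = list(range(1_000))            # placeholder dataset
train = draw_training_set(data, 300)  # placeholder subset size
```

The fixed seed makes the draw reproducible across runs, which matters when comparing training set sizes fairly.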
2008 Third International Conference on Broadband Communications, Information Technology & Biomedical Applications, 2008
Network deployment in rural areas of developing nations is a challenge to social conscience as well as to technical capability and affordability. Because most rural areas have no copper telecommunication legacy, they can leapfrog to wireless technologies. These technologies allow "bypassing stages in capacity building or investment through which countries were previously required to pass during the process of economic development" [11]. In this work we discuss the challenges of deploying broadband networks in rural areas. WiMAX is presented as a technology that can now provide broadband connectivity in developing regions. We further discuss the Dwesa (South Africa) communication network, which used WiMAX technology to provide internet connectivity to a rural community in the Eastern Cape, South Africa.
Indian Journal of Computer Science and Engineering
Spectrum scarcity is a prevalent problem in wireless networks due to network regulatory bodies' strict allotment of the spectrum (frequency bands) to licensed users. Such an arrangement implies that unlicensed users (secondary spectrum users) have to vacate the spectrum when the primary spectrum users (licensed users) are utilizing the frequency bands, to avoid interference. Cognitive radio alleviates the spectrum shortage by detecting unoccupied frequency bands, which reduces the underutilization of frequency bands in wireless networks. There have been numerous related studies on spectrum sensing; however, few have conducted a bibliometric analysis of this subject. This study's goal was to conduct a bibliometric analysis of the optimization of spectrum sensing. The PRISMA methodology formed the basis of the bibliometric analysis used to identify the limitations of existing spectrum-sensing techniques. The findings revealed that various machine-learning and hybrid models outperformed traditional techniques such as matched-filter and energy detectors at low signal-to-noise ratios (SNR). SNR is the ratio of the desired signal's magnitude to the background noise magnitude. This study therefore recommends that researchers propose alternative techniques to optimize (improve) spectrum sensing in wireless networks. More work should be done to develop models that optimize spectrum sensing at low SNR.
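The SNR definition and the traditional energy detector mentioned above can be written down directly; the threshold and sample values below are hypothetical placeholders:

```python
import math

def snr_db(signal_power, noise_power):
    """SNR in decibels: 10 * log10(P_signal / P_noise).
    -20 dB means the signal power is 1/100th of the noise power."""
    return 10 * math.log10(signal_power / noise_power)

def energy_detector(samples, threshold):
    """Classical energy detector: declare the band occupied when the
    average sample power exceeds a fixed threshold."""
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold
```

The detector's weakness at low SNR follows from the definition: at -20 dB the noise power dwarfs the signal power, so noise alone can cross (or mask) any fixed threshold.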
2022 1st Zimbabwe Conference of Information and Communication Technologies (ZCICT)
2020 2nd International Multidisciplinary Information Technology and Engineering Conference (IMITEC), 2020
There are two types of customers in the telecommunication industry: pre-paid and contract customers. In South Africa it is the pre-paid customers that keep telcos constantly worried, because such customers have nothing binding them to the company; they can leave and join a competitor at any time. To retain such customers, telcos need to customise suitable solutions, especially for those customers who are dissatisfied and can churn at any time. This calls for customer-churn prediction models that take advantage of big-data analytics and provide the telco industry with a real-time solution. The purpose of this study was to develop a real-time customer-churn prediction model. The study used the CRISP-DM methodology and three machine-learning algorithms for implementation. Watson Studio software was used for deploying the model prototype. The study used the confusion matrix to unpack a number of performance measures. The results showed that all the models had some degree of misclassification; however, the misclassification rate of Logistic Regression was minimal (2.2%), compared with Random Forest and Decision Tree, which had misclassification rates of 20.8% and 21.7% respectively. The results further showed that both Random Forest and Decision Tree had good accuracy rates of 78.3% and 79.2% respectively, although still not better than that of Logistic Regression. Despite their good accuracy rates, the two had the highest rates of misclassification of class events. The conclusion we drew from this was that accuracy is not a dependable measure for determining model performance.
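The paper's caveat about accuracy can be illustrated with a 2x2 confusion matrix; the counts below are hypothetical, not the study's figures:

```python
def confusion_metrics(tp, fp, fn, tn):
    """Accuracy, misclassification rate, and recall from a confusion matrix.
    Accuracy can look strong while recall on the churn class stays poor."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    misclassification = (fp + fn) / total
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, misclassification, recall

# Hypothetical imbalanced churn data: 95.5% accuracy, yet only 20% of
# actual churners are caught.
acc, mis, rec = confusion_metrics(tp=10, fp=5, fn=40, tn=945)
```

Because non-churners dominate the data, a model can score high accuracy while misclassifying most of the class events (churners), which is exactly the conclusion the study draws.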
The aim of this research was to assess the level of compliance among the regulated polluters and the effectiveness of enforcement by the regulating authority, culminating in the identification of the factors which positively affect compliance and enforcement as well as the factors which negatively affect them. The Table of Eleven methodology was employed. Two questionnaires, mirror images of each other and based on the Table of Eleven questions, were administered to ten agents from the target group (industries) and to nine enforcement agents from the Environmental Management Agency and the local authority. Field observations were also conducted, in which the researcher was a non-participant observer. The iT-11 version was then used to process the data, producing two outputs: a compliance profile and compliance estimates. The compliance profile results showed the factors strongly encouraging compliance, weakly encouraging compliance, ...
2020 2nd International Multidisciplinary Information Technology and Engineering Conference (IMITEC), 2020
GENERAL COMMENTS: This is a very useful paper by the leading expert on the Urals that summarises progress in Urals research during the past decade, after the great expansion of western collaboration in the 90s. Dr. Puchkov writes well, remarkably well for a non-native English speaker, but the manuscript could benefit from some additional editing by a native English speaker. I have made some suggestions; it is mostly a question of changing a few words here and there. The paper is well structured and full of interesting information. I thoroughly enjoyed ...
Technological advancement is changing the face of remote fetal monitoring in pregnant women. It has also seen a number of proprietary applications and standards evolve towards integrated standards, in which various remote monitoring applications communicate with each other across boundaries. This is anticipated to improve access to patient information and individual health monitoring, reduce errors, and increase evidence-based care delivery and the continuous flow of healthcare information into and out of remote, marginalized communities. This paper briefly describes FHR (fetal heart rate) monitoring techniques, as well as the various interferences which affect the FECG (fetal electrocardiogram) when these techniques are used for remote fetal heart rate monitoring. The paper further recommends a standard framework for remote fetal health monitoring and explains in detail the requirements for the proposed framework.