Papers by Mohammad Zubair Khan
IEEE Access, 2021
Blind-spots, where wireless signals do not reach within the coverage range, often emerge in a dynamic environment due to obstacles, geographical location, or mobility of cellular users (CUs), greatly reducing overall system performance in terms of coverage and throughput. A relay-aided cognitive device-to-device (cD2D) communication system underlying the 5G cellular network can help mitigate blind-spots. Cognitive capability helps D2D users acquire spectrum opportunistically for proximity communication and establish a semi-independent network underlying the 5G network, which not only offloads the 5G New Radio (NR) base station but also enhances overall system performance. In this work, we develop a relay-aided cognitive D2D network that helps CUs falling into blind-spots retain access to the 5G network and increases wireless coverage. Relay selection requires mutual consent between the relay and the device in the blind-spot; in-coverage devices are encouraged to act as relays through an incentive-based mechanism. Enhanced system performance requires a suitable match between the devices in blind-spots and the relays. A cD2D-enabled relay selection algorithm (cDERSA) is proposed, in which a cognitive D2D user (cDU), i.e. a CU falling in a blind-spot, establishes a relayed cD2D link to access the 5G-NR gNodeB. All cDUs and candidate relays, i.e. cognitive D2D relays (cDRs), first scan their surroundings for devices capable of D2D communication and, based on multi-criteria objective functions, build a priority table. A stable marriage problem is then formulated and solved using a stable, distributed, and efficient matching algorithm based on the Gale-Shapley algorithm. A new incentive mechanism is also developed to keep relays motivated to share their resources. Simulation results show improvement in throughput and average user satisfaction, validating the proposed cDERSA.
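The relay matching step can be illustrated with a minimal Gale-Shapley (deferred acceptance) sketch. The preference tables, entity names, and tie-breaking below are hypothetical placeholders, not the paper's multi-criteria objective functions; the sketch only shows how deferred acceptance yields a stable cDU–cDR pairing.

```python
# Minimal deferred-acceptance (Gale-Shapley) sketch for pairing blind-spot
# users (cDUs) with candidate relays (cDRs). Preference lists are hypothetical;
# in the paper they would come from multi-criteria objective functions.

def gale_shapley(cdu_prefs, cdr_prefs):
    """Return a stable matching {cdu: cdr} given preference lists."""
    free_cdus = list(cdu_prefs)                  # proposers
    next_choice = {u: 0 for u in cdu_prefs}      # index of next cDR to propose to
    engaged = {}                                 # cdr -> cdu currently held
    cdr_rank = {r: {u: i for i, u in enumerate(p)} for r, p in cdr_prefs.items()}

    while free_cdus:
        u = free_cdus.pop(0)
        r = cdu_prefs[u][next_choice[u]]
        next_choice[u] += 1
        if r not in engaged:                     # relay is free: accept
            engaged[r] = u
        elif cdr_rank[r][u] < cdr_rank[r][engaged[r]]:
            free_cdus.append(engaged[r])         # relay prefers the new proposer
            engaged[r] = u
        else:
            free_cdus.append(u)                  # proposal rejected, try next relay
    return {u: r for r, u in engaged.items()}

# Hypothetical priority tables built from scanning results.
cdu_prefs = {"cDU1": ["cDR1", "cDR2"], "cDU2": ["cDR1", "cDR2"]}
cdr_prefs = {"cDR1": ["cDU2", "cDU1"], "cDR2": ["cDU1", "cDU2"]}
print(gale_shapley(cdu_prefs, cdr_prefs))        # {'cDU1': 'cDR2', 'cDU2': 'cDR1'}
```

Because the cDUs propose, the resulting match is stable and optimal for the blind-spot users under their stated preferences, which matches the intent of letting cDUs initiate relay selection.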
International Journal of Advanced Computer Science and Applications, 2021
The rapid expansion of communication and computational technology gives us the opportunity to deal with the bulk nature of dynamic data. The classical computing style is not very effective for such mission-critical data analysis and processing; therefore, cloud computing has become popular for addressing and dealing with such data. Cloud computing involves a large computational and network infrastructure that requires a significant amount of power and generates a carbon footprint (CO2). In this context, the cloud's energy consumption can be minimized by controlling and switching off idle machines. Therefore, in this paper we propose a proactive virtual machine (VM) scheduling technique that can deal with frequent migration of VMs and minimize the energy consumption of the cloud using unsupervised learning techniques. The main objective of the proposed work is to reduce the energy consumption of cloud datacenters through effective utilization of cloud resources by predicting the future demand for resources. To this end, four different clustering algorithms, namely K-Means, SOM (Self-Organizing Map), FCM (Fuzzy C-Means), and K-Medoids, are used to develop the proposed proactive VM scheduling and to determine which clustering algorithm is best suited for reducing energy use through proactive VM scheduling. This predictive, load-aware VM scheduling technique is evaluated and simulated using the CloudSim simulator. In order to demonstrate the effectiveness of the proposed scheduling technique, the 29-day workload trace released by Google in 2019 is used. The experimental outcomes are summarized in different performance metrics, such as energy consumed and average processing time. Finally, by concluding the efforts made, we also suggest future research directions.
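A minimal sketch of the clustering step, assuming per-VM CPU/memory utilization averages extracted from a workload trace; the feature columns, the synthetic values, and the mapping from clusters to scheduling decisions are illustrative, not the paper's exact pipeline.

```python
# Cluster VM utilization profiles so lightly-loaded VMs can be consolidated
# and idle hosts switched off. Feature values are synthetic placeholders;
# a real run would derive them from the Google 2019 workload trace.
import numpy as np
from sklearn.cluster import KMeans

# rows: VMs, columns: [avg CPU utilization, avg memory utilization]
usage = np.array([
    [0.05, 0.10], [0.08, 0.07],   # nearly idle VMs
    [0.55, 0.40], [0.60, 0.52],   # moderately loaded VMs
    [0.92, 0.88], [0.95, 0.90],   # heavily loaded VMs
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(usage)
for vm_id, label in enumerate(kmeans.labels_):
    centre = kmeans.cluster_centers_[label]
    print(f"VM{vm_id}: cluster {label}, centre CPU={centre[0]:.2f}")
# VMs in the low-utilization cluster are candidates for migration so that
# their hosts can be put into an energy-saving state.
```

The same fitting step could be swapped for SOM, FCM, or K-Medoids to reproduce the algorithm comparison described in the abstract.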
International Journal of Innovative Technology and Exploring Engineering, 2020
This publication discusses state-of-the-art high-performance energy-aware cloud (HPEAC) computing strategies: recognition and categorization of systems and devices, optimization methodologies, and energy/power control techniques in particular. System types include single machines, clusters, networks, and clouds, while CPUs, GPUs, multiprocessors, and hybrid systems are the device types considered. Optimization objectives incorporate multiple combinations of criteria, such as execution time, energy consumption, and temperature, with the consideration of limiting power/energy consumption. Control measures usually involve scheduling policies, frequency-based policies (DVFS, DFS, DCT), programmatic APIs for limiting power consumption (such as Intel RAPL and NVIDIA NVML), application standardization, and hybrid techniques. We address energy/power management software and APIs as well as methods and conditions in modern HPEAC systems for forecasting and/or simulating p...
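As an illustration of the power-measurement APIs mentioned above, the sketch below reads package energy counters through the Linux powercap interface that exposes Intel RAPL. The sysfs paths are standard on recent Linux kernels, but availability depends on hardware and read permissions, so treat this as an assumption-laden example rather than a portable tool.

```python
# Sample package energy via the Linux powercap/RAPL sysfs interface and derive
# average power over a short interval. Requires a Linux host with intel-rapl
# exposed under /sys/class/powercap and read permission on the counters.
import glob
import time

def read_energy_uj():
    """Sum energy_uj counters of the top-level RAPL package domains."""
    total = 0
    # match intel-rapl:0, intel-rapl:1, ... but not their sub-domains
    for path in glob.glob("/sys/class/powercap/intel-rapl:[0-9]/energy_uj"):
        with open(path) as f:
            total += int(f.read())
    return total  # microjoules

start = read_energy_uj()
time.sleep(1.0)
end = read_energy_uj()
# Note: the counters wrap around periodically; a robust monitor would handle that.
print(f"approx. average package power: {(end - start) / 1e6:.2f} W")
```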
IEEE Access, 2021
It has been proven that Internet of Things (IoT) platforms can improve the performance and efficiency of a wide range of processes. With the acceptance of IoT as a major part of Industry 4.0 technology, the notion of leveraging the Internet in industries to enable automation and reconfigure existing industrial processes has greatly evolved. By introducing smart technology and intelligent processes, the Industrial Internet of Things (IIoT) is committed to bringing high operational efficiency, enhanced productivity, and effective management to industrial assets. Despite this, the reliance of IIoT on central architecture presents numerous challenges, including the security and maintenance of smart devices, privacy issues owing to third-party participation, and massive computations conducted by a central entity, all of which prevent its widespread adoption in businesses. Emerging blockchain technologies have the potential to transform IIoT platforms and applications: the distributed and decentralized approach followed by blockchain might offer interesting solutions to the challenges raised by IIoT. Furthermore, 5G networks are expected to deliver excellent solutions to meet the demands of decentralized systems, with a focus on application-specific vulnerabilities. Blockchain and IIoT, enabled by 5G, are a viable option to fully explore the potential of contemporary industry. In this context, this article analyzes and examines recent achievements to highlight the major obstacles in blockchain-IIoT convergence and presents a framework for potential solutions. A well-organized literature review is performed by analyzing the existing work in three primary areas: blockchain consensus algorithms used in existing IoT and IIoT applications, blockchain for 5G-enabled IoT networks, and blockchain in industry, with major findings summarized in each area. Directions for the future are also provided to assist researchers in understanding the full potential of these innovations.
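To make the consensus primitive discussed in the survey concrete, the following is a toy proof-of-work hash chain. It is purely illustrative: real IIoT deployments reviewed in such surveys typically favour lighter-weight consensus (e.g. PBFT or proof-of-authority variants), and the device names and payloads here are invented.

```python
# Toy proof-of-work chain illustrating the consensus primitive that blockchain
# surveys discuss; payloads and device names are placeholders.
import hashlib
import json
import time

def mine_block(prev_hash, payload, difficulty=4):
    """Find a nonce so the block hash starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        block = {"prev": prev_hash, "data": payload, "ts": time.time(), "nonce": nonce}
        digest = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return block, digest
        nonce += 1

genesis, genesis_hash = mine_block("0" * 64, {"device": "sensor-1", "reading": 21.5})
block1, block1_hash = mine_block(genesis_hash, {"device": "sensor-2", "reading": 19.8})
print(block1_hash)  # tampering with the genesis block would invalidate this hash
```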
Machine Learning (ML) systems have been used in healthcare to recognize and diagnose diseases using patient data. The use of ML has reformed and improved healthcare by automatically detecting and diagnosing diseases, which in turn improves patients' health and saves lives. In this study, ML algorithms are used to predict the death or recovery of patients. The Naïve Bayes and Bagged Trees algorithms provided the best performance rates of 79% and 77%, respectively, while in terms of accuracy the Medium Tree and the ensemble Boosted Tree classification algorithms reached 89%. This study showed that using ML technology could alert healthcare providers to provide faster treatment for high-risk COVID-19 patients, which in turn saves lives and improves the quality of healthcare services.
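A minimal sketch of how such classifiers might be compared on tabular patient data; the synthetic features, the labeling rule, and the train/test split are placeholders, not the study's dataset or evaluation protocol.

```python
# Compare a Naive Bayes classifier with a bagged-tree ensemble on synthetic
# tabular "patient" data; features and labels are random placeholders.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                    # e.g. age, vitals, lab values (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # 1 = recovered, 0 = died (synthetic rule)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Bagged Trees", BaggingClassifier(DecisionTreeClassifier(), n_estimators=50))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```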
Journal of Software Engineering and Applications, 2010
Fuzzy logic is used to derive optimal inventory policies across the supply chain (SC) members. We examine the performance of the optimal inventory policies in terms of cutting costs and increasing supply chain management efficiency. The proposed inventory policy uses multi-agent systems and fuzzy logic, and provides managerial insights on the impact of decision making across all SC members. In particular, we focus on the way our agent purchases components using a mixed procurement strategy (combining long- and short-term planning) and how it sets its prices according to the prevailing market conditions and its own inventory level, because this adaptivity and flexibility are key to its success. In the modern global market, one of the most important issues of supply chain management is satisfying changing customer demands, and enterprises should secure long-term advantage through optimal inventory control. In this paper, an intelligent multi-agent system to simulate supply chain management has been developed.
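A tiny fuzzy-inference sketch of the kind of reorder decision an agent might make: inventory level and demand are fuzzified, a few rules are combined, and the reorder quantity is defuzzified with a weighted average. The membership functions, rules, and numbers are illustrative assumptions, not the paper's policy.

```python
# Fuzzy reorder-quantity sketch: triangular memberships, four rules,
# weighted-average defuzzification. All shapes and thresholds are placeholders.
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def reorder_quantity(inventory, demand):
    low_inv = tri(inventory, -1, 0, 50)
    high_inv = tri(inventory, 40, 100, 101)
    high_dem = tri(demand, 30, 80, 81)
    low_dem = tri(demand, -1, 0, 40)

    # rule firing strengths -> crisp reorder amounts (centroids of output sets)
    rules = [
        (min(low_inv, high_dem), 80),   # low stock & high demand -> large order
        (min(low_inv, low_dem), 40),    # low stock & low demand  -> medium order
        (min(high_inv, high_dem), 20),  # high stock & high demand -> small order
        (min(high_inv, low_dem), 0),    # high stock & low demand -> no order
    ]
    total = sum(w for w, _ in rules)
    return sum(w * q for w, q in rules) / total if total else 0.0

print(round(reorder_quantity(inventory=15, demand=60), 1))  # -> 80.0
```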
Invertis Journal of Science & Technology, 2021
Algorithms for Intelligent Systems, 2021
2021 10th International Conference on System Modeling & Advancement in Research Trends (SMART), 2021
Cloud is the most recognized computing environment due to the wide availability of its services to users. Cloud data centers have adopted virtualization of resources, technically known as virtual machines (VMs), for effective and efficient computation of services. However, power consumption in cloud data centers (CDCs) has been highlighted as one of the major issues that reduce the quality of service of cloud computing, because the large number of electronic resources in a CDC, such as processing nodes, servers, disk storage, and networking nodes (i.e., switches/routers), consumes a high degree of energy during computation. Energy is also consumed by cooling plants to cool the data centers, as they produce a huge amount of heat. Due to this high energy consumption, cloud providers pay a high energy cost, and more carbon emissions are contributed to the atmosphere. Reducing energy consumption and CO2 emission is a major challenge in turning cloud computing into green computing. Our main focus in this paper is therefore to develop an energy-saving approach for cloud computing based on resource management. Due to variations in users' service requests, the demand for resources also varies during computation; the volume of requests is not the same for all 24 hours of a day. In some periods it is very low, and during those periods many resources become idle while still consuming a fixed amount of energy, which is wasted. The proposed ESACR approach therefore first classifies resources as active (required) or idle (not required) according to the users' service requests at any instant; it then puts the idle resources into an energy-saving mode (i.e., turned OFF) until they are needed to serve users' requests, so that energy savings increase whenever all unused resources are kept in OFF mode.
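A minimal sketch of the active/idle split followed by an ON/OFF power decision, in the spirit of the ESACR idea. The host names, capacities, demand figure, and the power-state toggle are hypothetical placeholders, not the paper's algorithm.

```python
# Keep just enough hosts powered on to cover the current VM demand; put the
# rest into an energy-saving (OFF) state. All values are placeholders.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    capacity: int          # number of VMs the host can serve
    powered_on: bool = True

def apply_power_plan(hosts, demanded_vms):
    """Mark hosts ON until demand is covered; mark the remainder OFF."""
    plan, remaining = [], demanded_vms
    for host in sorted(hosts, key=lambda h: -h.capacity):   # fill big hosts first
        host.powered_on = remaining > 0
        if host.powered_on:
            remaining -= host.capacity
        plan.append((host.name, "ON" if host.powered_on else "OFF"))
    return plan

hosts = [Host("h1", 8), Host("h2", 8), Host("h3", 4), Host("h4", 4)]
print(apply_power_plan(hosts, demanded_vms=10))
# [('h1', 'ON'), ('h2', 'ON'), ('h3', 'OFF'), ('h4', 'OFF')]
```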
International Journal of Mathematical Sciences and Computing, 2021
Cloud computing is a widely accepted computing environment and its services are widely available, but energy consumption is one of the major obstacles to cloud computing becoming green computing. Many electronic resources, such as processing and storage devices at both client and server sites and network devices such as switches and routers, are the main elements of energy consumption in the cloud, and additional power is required to cool the IT load. Because of this high consumption, cloud resources incur a high energy cost during service activities and contribute more carbon emissions to the atmosphere. These two issues have inspired cloud companies to develop renewable cloud sustainability regulations to control energy cost and the rate of CO2 emission. The main purpose of this paper is to develop a green computing environment by saving the energy of cloud resources, using an approach that identifies the computing resources actually required during the computation of cloud services. Only the required computing resources remain ON (working state), and the rest are switched OFF (sleep/hibernate state) to reduce energy use in cloud data centers. This approach is expected to be more efficient than other available approaches based on cloud service scheduling, migration, or virtualization of services in the cloud network. It reduces the cloud data center's energy usage by applying a power management scheme (ON/OFF) to computing resources. The proposed approach helps convert cloud computing into green computing by identifying the appropriate number of cloud computing resources, such as processing nodes, servers, disks, and switches/routers, during any service computation in the cloud, thereby addressing energy savings and environmental impact.
PeerJ Computer Science, 2021
Data stream mining is a challenging task for researchers because of changes in data distribution during classification, known as concept drift. Drift detection algorithms emphasize detecting the drift, and a drift detection algorithm needs to be very sensitive to changes in data distribution in order to detect the maximum number of drifts in the data stream; however, highly sensitive drift detectors lead to more false-positive drift detections. This paper proposes a drift-detection-based adaptive ensemble classifier for sentiment analysis and opinion mining, which turns these false-positive drift detections to its benefit and minimizes their negative impact. The proposed method creates and adds a new classifier to the ensemble whenever a drift happens. A weighting mechanism is implemented which assigns a weight to each classifier in the ensemble; the weight of a classifier decides its contribution to the final classification result. Th...
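A minimal sketch of the adaptive weighted-ensemble idea: when a drift signal arrives, a new base classifier trained on recent data is added, and members vote weighted by their accuracy on the latest window. The drift signal, the data, and the specific weighting rule here are simulated assumptions; the paper's detector and weighting differ in detail.

```python
# Drift-adaptive weighted ensemble sketch: add a member per (simulated) drift,
# reweight members by recent-window accuracy, and use a weighted vote.
import numpy as np
from sklearn.naive_bayes import GaussianNB

class AdaptiveEnsemble:
    def __init__(self):
        self.members, self.weights = [], []

    def add_member(self, X_recent, y_recent):
        self.members.append(GaussianNB().fit(X_recent, y_recent))
        self.weights.append(1.0)

    def reweight(self, X_window, y_window):
        # weight = accuracy on the latest window (floored to keep a vote)
        self.weights = [max(m.score(X_window, y_window), 1e-3) for m in self.members]

    def predict(self, X):
        votes = np.zeros((len(X), 2))            # binary labels 0/1 assumed
        for clf, w in zip(self.members, self.weights):
            for i, p in enumerate(clf.predict(X)):
                votes[i, int(p)] += w
        return votes.argmax(axis=1)

rng = np.random.default_rng(1)
ens = AdaptiveEnsemble()
X0, y0 = rng.normal(0, 1, (200, 3)), rng.integers(0, 2, 200)
ens.add_member(X0, y0)                           # initial concept
X1, y1 = rng.normal(2, 1, (200, 3)), rng.integers(0, 2, 200)
ens.add_member(X1, y1)                           # drift detected -> new member
ens.reweight(X1, y1)
print(ens.predict(X1[:5]))
```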
2019 8th International Conference System Modeling and Advancement in Research Trends (SMART), 2019
In the current scenario, the demand for high-performance computing systems increases day by day in order to achieve maximum computation in minimum time. The rapid growth of the Internet and Internet-based services has increased interest in network-based, on-demand computing systems such as cloud computing. High-performance servers are deployed in large quantities for cloud computing in the form of data centers, through which many different Internet services are provided to cloud users in a smooth and efficient manner. A data center is a large distributed system that includes a huge number of computing servers connected by an efficient network, so the energy consumption of such data centers is enormously high. Not only is the maintenance of the data centers exorbitant, it is also socially harmful: high energy costs and immense carbon footprints are incurred because the servers need a substantial amount of electricity for their computation as well as for their cooling. As the cost of energy increases and availability decreases, the focus should shift towards optimizing data center servers for best performance together with policies for lower energy consumption, to justify the level of service performance along with its social impact. In this paper we therefore propose an energy-aware consolidation technique for cloud data centers based on prediction of future client requests, which increases the utilization of computing servers according to the users'/clients' requests and their associated demand for cloud resources, in order to manage power consumption in the cloud.
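A sketch of prediction-driven consolidation under simple assumptions: the next interval's request load is forecast with a moving average and mapped to the number of servers to keep active. The trace values, the per-server capacity, and the forecasting method are placeholders; the paper's predictor may be more sophisticated.

```python
# Forecast the next interval's load and size the active server pool accordingly.
def moving_average_forecast(history, window=3):
    recent = history[-window:]
    return sum(recent) / len(recent)

def servers_needed(predicted_requests, requests_per_server=100):
    return max(1, -(-int(predicted_requests) // requests_per_server))  # ceiling division

request_trace = [420, 380, 510, 470, 300, 260, 240]      # requests per interval (placeholder)
prediction = moving_average_forecast(request_trace)
active = servers_needed(prediction)
print(f"predicted load ~{prediction:.0f} requests -> keep {active} servers active")
```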
Big Data research is playing a leading role in investigating a wide group of issues fundamentally emerging from Database, Data Warehousing, and Data Mining research. Analytics research is intended to develop complex procedures running over large-scale data repositories with the objective of extracting useful knowledge hidden in such repositories. One of the most significant application scenarios in which Big Data arise is, without doubt, scientific computing. Here, scientists and researchers produce immense amounts of data every day through experiments (e.g., in disciplines such as high-energy physics, astronomy, bioinformatics, etc.). Nevertheless, extracting useful knowledge for decision-making purposes from these huge, large-scale data repositories is practically impossible for traditional database management systems (DBMSs), which has motivated the development of dedicated analytics tools.
Comput. Electr. Eng., 2021
In this paper, CMOS-technology-based neural session key generation is proposed for integration with the Internet of Things (IoT) to enhance security. Recent technological developments in the IoT era enable improved strategies to address energy efficiency and stability issues, yet existing security solutions do not properly address IoT security. A small-logic-area ASIC implementation of a re-keying-enabled Triple Layer Vector-Valued Neural Network (TLVVNN), using 65 nm and 130 nm CMOS architectures, is proposed for integration with IoT. The paper aims to defend IoT devices using TLVVNN synchronization to enhance security. For a 20% weight misalignment in the re-keying phase, the synchronization period can be decreased from 1.25 ms to less than 0.7 ms, according to behavioral simulations. Experiments to verify the proposed technique's performance are conducted, and the findings demonstrate that the proposed method has greater performa...
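The underlying neural key-exchange idea can be illustrated with the classic scalar tree parity machine (TPM): two networks with identical structure exchange outputs on common random inputs and update their weights only when the outputs agree, until the weight vectors synchronize and can serve as shared key material. This is the textbook TPM mutual-learning scheme, not the paper's triple-layer vector-valued architecture or its CMOS implementation.

```python
# Textbook tree parity machine (TPM) mutual learning in software.
import numpy as np

K, N, L = 3, 4, 3          # hidden units, inputs per unit, weight bound
rng = np.random.default_rng(42)

def tpm_output(weights, x):
    sigma = np.sign(np.sum(weights * x, axis=1))
    sigma[sigma == 0] = -1
    return sigma, int(np.prod(sigma))

def hebbian_update(weights, x, sigma, tau):
    for k in range(K):
        if sigma[k] == tau:                        # update only agreeing hidden units
            weights[k] = np.clip(weights[k] + x[k] * tau, -L, L)

wA = rng.integers(-L, L + 1, (K, N))
wB = rng.integers(-L, L + 1, (K, N))
steps = 0
while not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], size=(K, N))           # common public input
    sA, tauA = tpm_output(wA, x)
    sB, tauB = tpm_output(wB, x)
    if tauA == tauB:                               # learn only when outputs agree
        hebbian_update(wA, x, sA, tauA)
        hebbian_update(wB, x, sB, tauB)
    steps += 1
print("synchronized after", steps, "exchanged inputs; key material:", wA.flatten())
```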
2021 International Congress of Advanced Technology and Engineering (ICOTEN), 2021
Incremental forming is one of the non-traditional forming processes widely used in rapid prototyping and customized component manufacturing. One of the challenges encountered in single-stage single-point incremental forming (SSSPIF) is the difficulty of achieving a greater wall angle to a considerable depth. In this research work, an experimental and numerical-simulation investigation is carried out to reach the maximum wall angle to the greatest possible depth without defects in SSSPIF. Truncated-cone-shaped components are formed by SSSPIF from 1 mm thick AISI 304 austenitic stainless steel at different wall angles. Numerical simulation using the LS-DYNA explicit solver is also performed, and the results are validated against the experimental values. Components with a wall angle of 64° are successfully made without any defects in a single forming stage to a depth of 45 mm within the experimented process parameters. The major strain, minor strain, and thickness distribution in the sheet material due to the forming process are obtained from experiments and finite element analysis (FEA). From both the experimental and FEA results, it is observed that the major strain, minor strain, and thinning effects are higher in the region below the major diameter of the truncated cone at all experimented wall angles. The FEA results show good agreement with the experimental values, and the strains increase with increasing wall angle.
2018 International Conference on System Modeling & Advancement in Research Trends (SMART), 2018
Interconnection network topology plays a very important role in the design of distributed systems, because the performance of load-balancing/task-allocation techniques depends on the performance of the network. Therefore, before discussing an optimized task-allocation/load-balancing scheme, there is a need to investigate the interconnection topology. A range of interconnection networks is available in the literature, such as hypercube, mesh, star, tree, and de Bruijn networks. In this paper we propose another constant-degree, scalable, simple-routing network named concentric rings (m), in which rings can be added on demand, and our results show that the overheads under heavy load also decrease.
Pervasive Healthcare, 2021
Computer Systems Science and Engineering, 2022
Software reliability is the primary concern of software development organizations, and the exponentially increasing demand for reliable software requires new modeling techniques in the present era. Small, unnoticed drifts in the software can culminate in a disaster, and early removal of these errors helps the organization improve and enhance the software's reliability while saving money, time, and effort. Many soft computing techniques are available for critical problems, but selecting the appropriate technique is a big challenge. This paper proposes an efficient algorithm for the prediction of software reliability. The proposed algorithm is implemented using a hybrid approach, a Neuro-Fuzzy Inference System, and has been applied to test data. In this work, a comparison among different soft computing techniques has been performed. After testing and training on real-time data, the claim is verified with reliability prediction results of 0.0060 mean relative error and 0.0121 mean absolute relative error. The results show that the proposed algorithm yields attractive outcomes in terms of mean absolute relative error and mean relative error compared to other existing models, which justifies the reliability prediction of the proposed model. This novel technique thus aims to keep the model as simple as possible while improving software reliability prediction.
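For reference, a minimal sketch of the two evaluation metrics named in the abstract, using the definitions commonly adopted in software-reliability studies (MRE keeps the sign of the relative error, MARE takes its absolute value). The observed and predicted failure counts below are placeholders, not the paper's data.

```python
# Mean relative error (MRE) and mean absolute relative error (MARE) as
# commonly defined; input arrays are placeholders.
import numpy as np

def mean_relative_error(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean((predicted - actual) / actual)

def mean_absolute_relative_error(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs(predicted - actual) / actual)

observed_failures = [12, 25, 41, 58, 70]     # cumulative failures (placeholder)
predicted_failures = [11, 26, 40, 59, 72]    # model output (placeholder)
print("MRE :", round(mean_relative_error(observed_failures, predicted_failures), 4))
print("MARE:", round(mean_absolute_relative_error(observed_failures, predicted_failures), 4))
```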
Handbook of Research on Future Opportunities for Technology Management Education, 2021
Big data helps drive quality, customize services, satisfy clients, and increase profit in business organizations. This chapter discusses advanced analytical strategies, together with data classification, multivariate and regression analysis, the interpretation of text analysis, and the specific data management technologies and tools that scientifically support advanced data analytics management with big data knowledge. This work also highlights the deliverables of converting an analytics project into an ongoing part of the organization's management operations, and producing helpful, clear, and visual analytical outputs based on actionable knowledge.