Papers by Junaid khan khan
Journal of the Pakistan Institute of Chemical Engineers
Coal is a blackish sedimentary rock that occurs in layers. Coals of various ranks are found in different parts of the world and can be extracted by different mining techniques. This work analyses coal and biomass blends; through blending, low-grade coal is converted into a valuable material that can be used as fuel for domestic, commercial and industrial purposes. The blends were prepared from Pakistani coal and different biomass materials, i.e. bagasse, coconut shell, coconut waste and sawdust: coal with a high gross calorific value was blended with biomass of high calorific value and low ash content, the low ash content being chosen to limit the environmental impact. The aim was to determine which coal and biomass blend is best suited for combustion and to indicate its use for raising steam for energy production in different types of gasifiers/boilers, e.g. circulating fluidized beds. Gross calorific value was determined on the ...
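As a rough numerical illustration of the blending idea in the abstract above, the sketch below approximates a blend's gross calorific value as the mass-weighted average of its components; the mass fractions and component GCVs are hypothetical placeholders, not values from the paper, which determines GCV experimentally.

```python
# Illustrative sketch (not from the paper): approximating the gross calorific
# value (GCV) of a coal-biomass blend as a mass-weighted average of the
# components. The component values below are hypothetical placeholders.

def blend_gcv(components):
    """components: list of (mass_fraction, gcv_mj_per_kg) tuples."""
    assert abs(sum(f for f, _ in components) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(f * gcv for f, gcv in components)

# Hypothetical example: 70% low-grade coal blended with 30% bagasse.
print(blend_gcv([(0.70, 18.5), (0.30, 17.0)]))  # about 18.05 MJ/kg
```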
Remote Sensing
Land subsidence is a major concern in rapidly growing metropolitan areas worldwide, and the most serious risks are linked to groundwater extraction and urban development. Pakistan's fourth-largest city, Rawalpindi, and its twin Islamabad, located at the northern edge of the Potwar Plateau, are witnessing extensive urban expansion. Groundwater from tube-wells is the residents' primary daily water supply in these metropolitan areas, and excessive pumping driven by the inhabitants' demand disturbs the sub-surface's viability. The Persistent Scatterer Interferometry Synthetic Aperture Radar (PS-InSAR) approach, along with Sentinel-1 Synthetic Aperture Radar (SAR) imagery, was used to track land subsidence in Rawalpindi-Islamabad. The SARPROZ application was used to process a set of Sentinel-1 images acquired from January 2019 to June 2021 along descending and ascending orbits to estimate ground subsidence in the Rawalpindi-Islamabad area. The results show a sign...
Plant Viruses As Molecular Pathogens
... Flint, 1997; Spall et al., 1997). Plant RNA viruses that express polyproteins include viruses from diverse genera, such as como-, nepo-, poty-, carla-, clostero-, and tymoviruses (Spall et al., 1997). By activity analyses and ...
Proceedings of the 10th International Conference on Smart Cities and Green ICT Systems, 2021
An accurate model of building interiors with detailed annotations is critical to protecting the safety of first responders and building occupants during emergency operations. In collaboration with the City of Memphis, we collected extensive LiDAR and image data for the city's buildings. We apply machine learning techniques to detect and classify objects of interest to first responders and create a comprehensive 3D indoor space database with annotated safety-related objects. This paper documents the challenges we encountered in data collection and processing, and it presents a complete 3D mapping and labeling system for the environments inside and adjacent to buildings. Moreover, we use a case study to illustrate our process and show preliminary evaluation results.
AIMS Mathematics, 2021
In this paper, we develop a theory of Standard bases for $ K $-subalgebras of $ K[[t_{1}, t_{2}, \ldots, t_{m}]] [x_{1}, x_{2}, \ldots, x_{n}] $ over a field $ K $ with respect to a monomial ordering that is local on the $ t $ variables, and we call them Subalgebra Standard bases. We give an algorithm to compute a subalgebra homogeneous normal form and an algorithm to compute a weak subalgebra normal form, which we use to develop an algorithm for constructing Subalgebra Standard bases. Throughout this paper, we assume that the subalgebras are finitely generated.
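For readers unfamiliar with orderings that are "local on the $ t $ variables", the following is one common way such a mixed monomial ordering can be defined; it is an assumption for illustration, not necessarily the exact ordering used in the paper.

```latex
% A common mixed ordering on monomials of K[[t_1,...,t_m]][x_1,...,x_n]:
% compare the x-parts first with a global ordering (e.g. degrevlex), and
% break ties with a local ordering on the t-parts (e.g. negative deglex).
\[
t^{a}x^{b} \succ t^{a'}x^{b'}
\iff
x^{b} >_{\mathrm{glob}} x^{b'}
\;\text{ or }\;
\bigl(x^{b} = x^{b'} \text{ and } t^{a} >_{\mathrm{loc}} t^{a'}\bigr),
\qquad \text{so that } 1 \succ t_i \text{ for all } i.
\]
```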
2018 International Conference on Computing, Networking and Communications (ICNC), 2018
Information-centric networks enable a multitude of nodes, in particular near the end-users, to provide storage and communication. At the edge, nodes can connect with each other directly to get content locally whenever possible. As the topology of the network directly influences the nodes' connectivity, there has been some work to compute the graph centrality of each node within the topology of the edge network. The centrality is then used to distinguish nodes at the edge of the network. We argue that, for a network with caches, graph centrality is not an appropriate metric. Indeed, a node with low connectivity (and thereby low centrality) that caches a lot of content may play a very valuable role in the network. To capture this, we introduce a popularity-weighted content-based centrality (P-CBC) metric which takes into account how well a node is connected to the content the network is delivering, rather than to the other nodes in the network. To illustrate the validity of considering content-based centrality, we use this new metric for a collaborative caching algorithm. We compare the performance of the proposed collaborative caching with typical centrality-based, non-centrality-based, and non-collaborative caching mechanisms. Our simulation implements P-CBC on three random instances of a large-scale realistic network topology comprising 2,896 nodes with three content replication levels. Results show that P-CBC outperforms benchmark caching schemes and yields a roughly 3x improvement in the average cache hit rate. Index Terms: Information/Content Centric Networking, Content Caching, Fog Networking, Content Offload.
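The abstract does not give the exact P-CBC formula, so the sketch below only illustrates the general idea of a popularity-weighted, content-based centrality: each node is scored by its proximity to cached replicas, weighted by content popularity. The topology, replica placement, and scoring function are all hypothetical.

```python
# Hedged sketch (not the paper's exact P-CBC definition): score each node by
# how close it is to cached content, weighting each content object by its
# popularity. All names and the toy topology below are hypothetical.
from collections import deque

def hop_distance(adj, src):
    """BFS hop counts from src over an adjacency dict {node: [neighbors]}."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def content_based_centrality(adj, replicas, popularity):
    """replicas: {content: set(nodes caching it)}; popularity: {content: weight}."""
    scores = {v: 0.0 for v in adj}
    for content, holders in replicas.items():
        for v in adj:
            d = min(hop_distance(adj, v).get(h, float("inf")) for h in holders)
            scores[v] += popularity[content] / (1.0 + d)
    return scores

# Toy example: a 4-node path a-b-c-d with one popular object cached at d.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(content_based_centrality(adj, {"video": {"d"}}, {"video": 1.0}))
```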
International Journal of Current Microbiology and Applied Sciences, 2017
IEEE Access, 2021
This paper introduces a new learning-to-prediction model to enhance the performance of prediction algorithms in dynamic circumstances. We have proposed a novel technique based on the alpha-beta filter and the deep extreme learning machine (DELM) algorithm, named the learning to alpha-beta filter. The proposed method has two main components, namely the prediction unit and the learning unit. We have used the alpha-beta filter in the prediction unit, and the learning unit uses a DELM. The main problem with the conventional alpha-beta filter is that the values are generally selected via a trial-and-error technique. Once the alpha-beta values are chosen for a specific problem, they remain fixed for the entire data. It has been observed that different alpha-beta values for the same problem give different results. Hence it is essential to tune the alpha-beta values according to their historical behavior. Therefore, in the proposed method, we have addressed this problem and added the learning module to the conventional α-β filter to improve the α-β filter's performance. The DELM algorithm has been used to enhance the conventional alpha-beta filter algorithm's performance in dynamically changing conditions. The model performance has been measured using indoor environmental values of temperature and humidity. The relative improvement in the proposed learning prediction model's accuracy was 7.72% and 16.47% in the RMSE and MSE metrics. The results show that the proposed model outperforms the conventional alpha-beta filter. INDEX TERMS Alpha-beta filter, learning algorithm, prediction algorithm, deep extreme learning machine, energy prediction.
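For reference, the prediction unit described above builds on the conventional fixed-gain alpha-beta filter, sketched below; the gains and measurements are arbitrary illustrations, and the DELM-based learning unit that adapts them is not shown.

```python
# Conventional fixed-gain alpha-beta filter (the "prediction unit" described
# above); the DELM-based learning unit that tunes alpha and beta is not shown.
# The gains and measurement values below are arbitrary illustrations.

def alpha_beta_filter(measurements, dt=1.0, alpha=0.85, beta=0.005, x0=0.0, v0=0.0):
    x, v = x0, v0
    estimates = []
    for z in measurements:
        # Predict the next value assuming a locally constant rate of change.
        x_pred = x + v * dt
        # Correct the prediction using the measurement residual.
        r = z - x_pred
        x = x_pred + alpha * r
        v = v + (beta / dt) * r
        estimates.append(x)
    return estimates

# Example: smoothing a short indoor-temperature trace (hypothetical values).
print(alpha_beta_filter([21.0, 21.3, 21.2, 21.8, 22.1]))
```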
IEEE Access, 2021
Global Software Development (GSD) projects comprise several critical cost drivers that affect the overall project cost and budget overhead. Thus, there is a need to amplify the existing model in the GSD context to reduce the risks associated with cost overhead. Motivated by this, the current work aims at amplifying the existing algorithmic model with GSD cost drivers to obtain efficient estimates in the context of GSD. To achieve the targeted research objective, current state-of-the-art cost estimation techniques and GSD models are reported. Furthermore, the current study proposes a conceptual framework to amplify the algorithmic COCOMO-II model in the GSD domain to accommodate additional cost drivers empirically validated by a systematic review and industrial practitioners. The main phases of amplification include identifying cost drivers, categorizing cost drivers, forming metrics, assigning values, and finally altering the base model equation. Moreover, the proposed conceptual model's effectiveness is validated through expert judgment, case studies, and Magnitude of Relative Estimates (MRE). The obtained estimates are efficient, quantified, and cover additional GSD aspects compared with the existing models; hence, implementing the model could reduce the overall risk of a GSD project. Finally, the results indicate that the model needs further calibration and validation. INDEX TERMS Global software development, cost estimation, COCOMO-II, cost overhead.
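The amplification described above alters the base COCOMO-II equation; a minimal sketch of the post-architecture effort equation with extra multiplicative drivers folded in is given below. The GSD driver names and values, and the sample ratings, are hypothetical, not the paper's calibrated ones.

```python
# Sketch of the COCOMO-II post-architecture effort equation, with additional
# multiplicative cost drivers folded in, in the spirit of the amplification
# described above. All driver names and ratings here are illustrative only.

A, B = 2.94, 0.91  # COCOMO II.2000 calibration constants

def cocomo2_effort(ksloc, scale_factors, effort_multipliers, gsd_multipliers=()):
    """Effort (person-months) = A * Size^E * prod(EM_i) * prod(GSD_j),
    where E = B + 0.01 * sum(SF_k)."""
    exponent = B + 0.01 * sum(scale_factors)
    effort = A * ksloc ** exponent
    for em in list(effort_multipliers) + list(gsd_multipliers):
        effort *= em
    return effort

# Hypothetical 50 KSLOC project with illustrative scale-factor ratings and two
# assumed GSD drivers (e.g. time-zone overlap, cultural distance) above nominal.
print(round(cocomo2_effort(50, [3.72, 3.04, 4.24, 3.29, 4.68],
                           [1.0, 1.1], gsd_multipliers=[1.12, 1.08]), 1))
```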
IEEE Access, 2021
Software organizations always aim at developing a quality software product using the estimated development resources, effort, and time. Global Software Development (GSD) has emerged as an essential approach to ensure optimal utilization of resources, with development performed in globally distributed settings across various geographical locations. Global software engineering focuses on reducing cost, increasing development speed, and accessing skilled developers worldwide. Estimating the required amount of resources and effort in a distributed development environment remains a challenging task. Thus, there is a need to focus on cost estimation models in the GSD context. We nevertheless acknowledge that several cost estimation techniques have been reported. However, to the best of our knowledge, the existing cost estimation techniques/models do not consider the additional cost drivers required to compute accurate cost estimates in the GSD context. Motivated by this, the current work aims at identifying the additional cost drivers that affect cost estimation in the context of GSD. To achieve the targeted objectives, the current state-of-the-art related to existing cost estimation techniques for GSD is reported. We adopted an SLR and an empirical approach to address the formulated research questions. The current study also identifies the missing factors that would help practitioners improve cost estimation models. The results indicate that previously conducted work ignores additional elements necessary for cost estimation in the GSD context. Moreover, the current work proposes a conceptual cost estimation model tailored to fit the GSD context. INDEX TERMS Global software development, distributed development, cost estimation, systematic review.
Pakistan Veterinary Journal, 2015
Received: November 13, 2014; Revised: February 12, 2015; Accepted: February 21, 2015. Gentamicin-induced nephrotoxicity is associated with free radical generation in kidney tissue. Medicinal plants contain bioactive constituents that act as natural antioxidants, which can protect the biological system from oxidative stress. In this study, the phytochemical constituents of an aqueous extract of Calotropis procera flowers and its efficacy against gentamicin-induced nephrotoxicity in a rabbit model were evaluated. Renal function markers, total oxidant status and catalase activity were assessed along with hematological and renal histopathological analyses. Phytochemical analysis showed the presence of tannins, flavonoids, saponins, glycosides, phenols, alkaloids, steroids and ascorbic acid. Gentamicin administration led to severe changes in serum biochemical markers along with significant alteration in kidney tissue. Co-administration of the plant extract with gentamicin reduced the gentamicin-induced toxi...
Latin American and Caribbean Bulletin of Medicinal and Aromatic Plants, 2015
Nephrotoxicity is one of the most important side effects and therapeutic limitations of aminoglycoside antibiotics, especially gentamicin. Gentamicin-induced nephrotoxicity involves free radical generation, a reduction in the antioxidant defense mechanism, and renal dysfunction. A number of crude herbal extracts have the potential to ameliorate gentamicin-induced nephrotoxicity owing to the presence of various antioxidant compounds. Therefore, the aim of the present study was to evaluate the protective activity of an aqueous extract of T. ammi seeds against gentamicin-induced nephrotoxicity in albino rabbits. The results showed that gentamicin caused severe alterations in serum biochemical parameters and kidney markers, together with severe alterations in renal tissues. However, the T. ammi extract, when administered together with gentamicin, reversed the severity of the nephrotoxicity...
The work on the theory of Groebner bases for ideals in a polynomial ring with countably infinitely many indeterminates over a field [5] has given impetus to developing the theory of Sagbi bases [6] and Sagbi Groebner bases [3] in the same polynomial ring. This paper demonstrates the construction of Sagbi bases and Sagbi Groebner bases using the technique of constructing these bases in a polynomial ring with finitely many indeterminates.
2019 2nd International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), 2019
Handwritten character recognition is among the most challenging research areas in pattern recognition and image processing. With everything going digital, applications of handwritten character recognition are emerging in offices, educational institutes, healthcare units, banks and other settings where handwritten documents are dealt with frequently. In this paper, a neural-network-based recognition system for offline handwritten characters is proposed for Latin digits and alphabets. Each character extracted from the query image is dynamically resized to 60×40 pixels and then passed to the neural network for recognition. Dynamic resizing makes the proposed system size-invariant and maintains the aspect ratio of the character so that the image is not distorted during resizing. The neural networks are trained with 19,422 English alphabet samples and 7,720 digit samples written by 150 different writers in various styles of handwriting. The experimental study yielded very encouraging results, which are compared with modern methods in this research area.
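A minimal sketch of the aspect-ratio-preserving dynamic resizing step described above follows; interpreting the target as 60 rows by 40 columns, and the padding value and interpolation method, are assumptions rather than details taken from the paper.

```python
# Sketch of the aspect-ratio-preserving "dynamic resize" described above:
# scale the character to fit inside the target canvas, then pad the rest.
# Target interpreted as 60 rows x 40 columns, white padding, bilinear
# interpolation -- all assumptions, not details from the paper.
import cv2
import numpy as np

def dynamic_resize(char_img, target_h=60, target_w=40, pad_value=255):
    h, w = char_img.shape[:2]
    scale = min(target_h / h, target_w / w)            # fit without distortion
    new_h, new_w = max(1, int(h * scale)), max(1, int(w * scale))
    resized = cv2.resize(char_img, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
    canvas = np.full((target_h, target_w), pad_value, dtype=char_img.dtype)
    top, left = (target_h - new_h) // 2, (target_w - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas

# Example with a random grayscale "character" crop of arbitrary size.
print(dynamic_resize(np.random.randint(0, 256, (93, 51), dtype=np.uint8)).shape)
```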
The Visual Computer, 2021
Face recognition is widely used in modern biometric and security applications. Most current face recognition techniques show good results in a constrained environment. However, these techniques face many problems in real-world scenarios, such as low-quality images, temporal variations and facial disguises creating variations in facial features. The reason for these deteriorating results is the use of hand-crafted features with weak generalization capabilities and the neglect of the complexities associated with domain adaptation in deep learning models. In this paper, we have studied the efficacy of deep learning methods incorporating simple noise-based data augmentation for disguise-invariant face recognition (DIFR). The proposed method detects the face in an image using the Viola-Jones face detector and classifies it using a pre-trained Convolutional Neural Network (CNN) fine-tuned for DIFR. During transfer learning, a pre-trained CNN learns generalized disguise-invariant features from facial images of several subjects to correctly identify them under varying facial disguises. We have compared four different pre-trained 2D CNNs, each with a different number of learnable parameters, based on their classification accuracy and execution time in order to select a suitable model for DIFR. Comprehensive experiments and comparative analysis have been conducted on six challenging facial disguise datasets. Resnet-18 gives the best trade-off between accuracy and efficiency, achieving an average accuracy of 98.19% with an average execution time of 0.32 seconds. The promising results achieved in these experiments reflect the efficiency of the proposed method, which outperforms the existing methods in all aspects.
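As an illustration of the transfer-learning setup described above, the following PyTorch sketch loads an ImageNet-pretrained ResNet-18 and replaces its classifier head for subject identification; the number of subjects, the frozen backbone, and the hyperparameters are assumptions, not the paper's exact settings.

```python
# Minimal transfer-learning sketch in PyTorch: load an ImageNet-pretrained
# ResNet-18 and replace its final layer for DIFR-style subject classification.
# The number of subjects, learning rate, and freezing of the backbone are
# illustrative assumptions, not the paper's exact settings.
import torch
import torch.nn as nn
from torchvision import models

num_subjects = 25  # hypothetical number of identities in the disguise dataset

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():                     # optionally freeze the backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_subjects)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 face crops.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, num_subjects, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```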
2017 29th International Teletraffic Congress (ITC 29), 2017
Mobile users in an urban environment access content on the internet from different locations. It is challenging for current service providers to cope with the increasing content demand from a large number of collocated mobile users. In-network caching that offloads content to nodes closer to users alleviates the issue, though efficient cache management is required to determine who should cache what, when and where in an urban environment, given the nodes' limited computing, communication and caching resources. To address this, we first define a novel relation between content popularity and availability in the network and investigate a node's eligibility to cache content based on its urban reachability. We then allow nodes to self-organize into mobile fogs to increase the distributed cache and maximize content availability in a cost-effective manner. However, to cater to rational nodes, we propose a coalition game in which nodes offer a maximum "virtual cache", assuming a monetary reward is paid to them by the service/content provider. Nodes are allowed to merge into different spatio-temporal coalitions in order to increase the distributed cache size at the network edge. Results obtained through simulations using a realistic urban mobility trace validate the performance of our caching system, showing a cache hit ratio of 60-85% compared with the 30-40% obtained by existing schemes and 10% in the case of no coalition.
Proceedings of the 5th ACM Conference on Information-Centric Networking, 2018
All Information-Centric Networking (ICN) architectures proposed to date aim at connecting users to content directly, rather than connecting clients to servers. Surprisingly, however, although content caching is an integral part of any Information-Centric Network, limited work has been reported on information-centric management of caches in the context of an ICN. Indeed, approaches to cache management in networks of caches have focused on network connectivity rather than proximity to content. We introduce the Network-oriented Information-centric Centrality for Efficiency (NICE) as a new metric for cache management in information-centric networks. We propose a method to compute information-centric centrality that scales with the number of caches in a network rather than the number of content objects, which is many orders of magnitude larger. Furthermore, it can be pre-processed offline and ahead of time. We apply the NICE metric to a content replacement policy in caches, and show that a content replacement policy based on NICE exhibits better performance than LRU and other policies based on topology-oriented definitions of centrality.
NOMS 2020 - 2020 IEEE/IFIP Network Operations and Management Symposium, 2020
Using local caches is becoming a necessity to alleviate bandwidth pressure on cellular links, and a number of caching approaches advocate caching popular content at nodes with high centrality, which quantifies how well connected the nodes are. These approaches have been shown to outperform caching policies unrelated to node connectivity. However, caching content at highly connected nodes places poorly connected nodes with low centrality at a disadvantage: in addition to their poor connectivity, popular content is placed far from them, at the more central nodes. We propose reversing the way in which node connectivity is used for the placement of content in caching networks, and introduce a Low-Centrality High-Popularity (LoCHiP) caching algorithm that populates poorly connected nodes with popular content. We conduct a thorough evaluation of LoCHiP against other centrality-based caching policies and traditional caching methods, using hit rate and hop-count to content as performance metrics. The results show that LoCHiP significantly outperforms the other methods.
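The abstract describes the placement idea but not the full algorithm; the sketch below captures only that idea, greedily assigning the most popular objects to the least central nodes with free cache slots, under hypothetical inputs.

```python
# Hedged sketch of the placement idea described above (not the exact LoCHiP
# algorithm): rank nodes by ascending centrality and greedily assign the most
# popular objects to the least central nodes, node by node, up to capacity.

def low_centrality_placement(centrality, popularity, cache_size):
    """centrality: {node: score}; popularity: {content: weight};
    cache_size: slots per node. Returns {node: [contents]}."""
    nodes = sorted(centrality, key=centrality.get)                   # least central first
    contents = sorted(popularity, key=popularity.get, reverse=True)  # most popular first
    placement = {v: [] for v in nodes}
    for i, c in enumerate(contents):
        v = nodes[(i // cache_size) % len(nodes)]                    # fill node by node
        placement[v].append(c)
    return placement

# Toy example: three nodes, four objects, two cache slots each.
print(low_centrality_placement({"a": 0.9, "b": 0.2, "c": 0.5},
                               {"o1": 10, "o2": 7, "o3": 3, "o4": 1}, 2))
```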