Student-teacher dropout is a complicated and serious issue in the learning process, with attendant negative implications for students, academic institutions, economic resources, and society. This study investigated the composite and relative impact of personal (student), academic, and socioeconomic predictive variables on student-teacher dropout. The study improves the early identification of at-risk student-teachers by developing a model that optimizes predictability. We used questionnaires and adopted a four-step logistic regression procedure on a sample of 1723 student-teachers in public teacher training colleges (TTCs) of a least-developed country (LDC). The study confirmed the twin factors of academic performance and aspirations as the strongest predictors of student-teacher attrition. Academic reasons for choosing a TTC were significant, as vocational motivation and goals established by student-teachers early in their education help prevent dropout. Contrary to expectations, student-teachers' cultural values, parents' level of education, and the cost of financing education had no significant impact on dropout decisions. This is most likely due to the Government's financial support for student-teachers in LDCs and the widespread belief that higher education can improve one's social and economic status. The findings indicate that early identification and dropout prevention efforts should integrate various support services to foster a healthy learning and retention environment.
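As a concrete illustration of this modelling approach, the following is a minimal sketch of a four-step (blockwise) logistic regression for dropout prediction, assuming Python with pandas and statsmodels; the block composition and variable names are illustrative assumptions, not the study's actual instrument.

```python
# Sketch: four-step (hierarchical) logistic regression for dropout prediction.
# The predictor blocks and column names below are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm

blocks = [
    ["age", "gender"],                       # step 1: personal factors
    ["gpa", "academic_aspiration"],          # step 2: academic factors
    ["parent_education", "financing_cost"],  # step 3: socioeconomic factors
    ["vocational_motivation"],               # step 4: motivational factors
]

def fit_stepwise(df: pd.DataFrame, outcome: str = "dropout"):
    """Enter predictor blocks cumulatively and report model fit at each step."""
    predictors = []
    model = None
    for i, block in enumerate(blocks, start=1):
        predictors += block
        X = sm.add_constant(df[predictors])
        model = sm.Logit(df[outcome], X).fit(disp=False)
        print(f"Step {i}: pseudo R^2 = {model.prsquared:.3f}")
    return model  # final model with all blocks entered
```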
In this work, we initiate a detailed study of the parameterized complexity of Minimax Approval Voting. We demonstrate that the problem is W[2]-hard when parameterized by the size of the committee to be chosen, but does admit an FPT algorithm when parameterized by the number of strings that is more efficient than the previous ILP-based approaches for the problem. We also consider several combinations of parameters and provide a detailed landscape of the parameterized and kernelization complexity of the problem. We also study the version of the problem where we permit outliers, that is, where the chosen committee is required to satisfy a large number of voters (instead of all of them). In this context, we strengthen an APX-hardness result in the literature, and also show a simple but strong W-hardness result.
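For readers unfamiliar with the objective, the following sketch spells out the Minimax Approval Voting score and finds an optimal committee by brute force; this exhaustive search is exponential in the number of candidates and merely stands in for the paper's parameterized algorithms, which it does not reproduce.

```python
# Sketch: the Minimax Approval Voting objective, solved by brute force.
# Ballots are sets of approved candidate indices over m candidates.
from itertools import combinations

def minimax_committee(ballots, m, k):
    """Return a size-k committee (as a set of candidate indices) minimizing
    the maximum Hamming distance to any approval ballot."""
    best, best_score = None, float("inf")
    for committee in combinations(range(m), k):
        c = set(committee)
        # Hamming distance between 0/1 vectors = size of symmetric difference.
        score = max(len(c ^ b) for b in ballots)
        if score < best_score:
            best, best_score = c, score
    return best, best_score

# Example: 3 voters approving over 4 candidates, committee of size 2.
print(minimax_committee([{0, 1}, {1, 2}, {1, 3}], m=4, k=2))
```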
Advances in Structural and Multidisciplinary Optimization
The structural optimization of large crashworthy systems like a vehicle body in a crash load case is a time-consuming and costly process. The computation time can be reduced by dividing the large system (main model) into small systems called submodels. These submodels can be used effectively in the optimization to shorten the response time of the simulation. Generating submodels by hand is challenging and requires considerable effort and knowledge to create and validate them. This paper presents a workflow to automatically generate and validate submodels using various mathematical functions. A submodel is a region of interest cut out from the large system which is to be analyzed in detail [1]. This detailed analysis can be useful to enhance the structural performance of a large crash system. There are two important parameters for generating a submodel using the so-called connecting island algorithm: the threshold ratio and the connecting island value. These parameters are based on an evaluation function, which is a structural response with time averaging and space averaging. The size of the submodel depends on these two parameters. The validation of a submodel is a four-step process comprising a local, a global, a mean, and a response validation. These four steps measure the deviation between the submodel and its counterpart in the large system. The validation process is used to identify the quality of a submodel. It is discussed how the validation criteria affect the submodel quality and its size. The aim of this research work is to create a method for the automatic generation and validation of submodels which is universally applicable to different crash models. The method is demonstrated on two different examples. The first is an academic crash model of a cantilever frame hit by a rigid sphere; the submodels are generated and validated for different crash scenarios, in which the rigid sphere hits the cantilever frame at different positions. The second is an industrial crash example of a Toyota YARIS in front and side crashes.
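The following is a minimal sketch of the threshold-and-islands idea behind submodel selection, assuming numpy and scipy, with a 2D grid of time/space-averaged element responses standing in for real FE mesh connectivity; the parameter names mirror the threshold ratio and connecting island value but are purely illustrative.

```python
# Sketch: threshold the averaged element response, then keep only
# sufficiently large connected "islands" of highly loaded elements.
# A real implementation would walk the FE mesh connectivity instead
# of a regular grid; all parameters here are illustrative.
import numpy as np
from scipy import ndimage

def submodel_mask(response: np.ndarray, threshold_ratio: float,
                  min_island_size: int) -> np.ndarray:
    """Mark elements whose averaged response exceeds a fraction of the
    peak response, keeping only sufficiently large connected islands."""
    mask = response >= threshold_ratio * response.max()
    labels, n = ndimage.label(mask)            # find connected islands
    keep = np.zeros_like(mask)
    for i in range(1, n + 1):
        island = labels == i
        if island.sum() >= min_island_size:    # drop tiny islands
            keep |= island
    return keep

response = np.random.rand(50, 50)              # stand-in element responses
mask = submodel_mask(response, threshold_ratio=0.6, min_island_size=5)
print(mask.sum(), "elements selected for the submodel")
```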
1 University of Wuppertal, Faculty for Mechanical Engineering and Safety Engineering, Chair for Optimization of Mechanical Structures, Gaußstraße 20, 42119 Wuppertal, Germany
2 Automotive Simulation Center Stuttgart e. V., Nobelstraße 15, D-70569 Stuttgart, Germany
3 Gesellschaft für numerische Simulation mbH, Am Gaußberg 2, D-38114 Braunschweig, Germany
4 divis intelligent solutions GmbH, Joseph-von-Fraunhofer-Straße 20, D-44227 Dortmund, Germany
Over the past decade, companies constituting a wide spectrum of industries have been focussing on leveraging the competencies and innovative capabilities to be found in clusters of customers and suppliers constituting their business supply chains. In today’s competitive environment, no enterprise can expect to build a product, process, or competitive advantage without integrating its strategies with those of its supply chain systems. In the past, what occurred inside the four walls of the business was of primary importance. In contrast, today a company’s ability to look outward to its channel alliances to gain access to sources of unique competencies and physical resources is the measure of success. In fact, the ultimate core competency an enterprise may hold today is not found in a temporary advantage it may hold, for example, in an area of product design or market brand, but rather in the ability to continuously assemble and implement market-winning capabilities arising from collaborative alliances with its supply chain partners. Enterprises have increasingly come to recognise that their performance depends, to a large extent, on their role in supply chain ecologies and on the competitiveness of the supply chains in which they participate.
2021 International Conference on Intelligent Technologies (CONIT), 2021
Supervised learning is defined as training a model with input data that includes the result itself. There are a large number of supervised learning algorithms and a great number of models. Each model has its own merits and demerits and performs differently. There are many data preprocessing techniques, and combining several of them can increase the performance of existing supervised learning models. Primary data cannot be fed directly to the learning model because it can hold a lot of noise; it needs to be preprocessed using various data preprocessing techniques. We have analysed and compared different data preprocessing techniques and their combinations. The comparison is done using various performance metrics, and the combinations of different preprocessing steps are applied to different models. We used categorical data handling, missing value treatment, feature scaling, and feature extraction as the data preprocessing steps. Through the comparison, ...
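A minimal sketch of this kind of comparison, assuming scikit-learn: several preprocessing combinations are wrapped in pipelines and scored by cross-validation on a stand-in dataset; the step choices, model, and dataset are illustrative, not those used in the paper.

```python
# Sketch: compare preprocessing combinations on one model via cross-validation.
# Dataset, steps, and model below are illustrative stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.impute import SimpleImputer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

variants = {
    "impute+standardize": [("impute", SimpleImputer()),
                           ("scale", StandardScaler())],
    "impute+minmax":      [("impute", SimpleImputer()),
                           ("scale", MinMaxScaler())],
    "impute+std+pca":     [("impute", SimpleImputer()),
                           ("scale", StandardScaler()),
                           ("pca", PCA(n_components=10))],
}

for name, steps in variants.items():
    pipe = Pipeline(steps + [("clf", LogisticRegression(max_iter=5000))])
    score = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {score:.3f}")
```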
In the last decade, research in high-energy, short-duration pulsed fibre laser systems has yielded significant advances that have allowed fibre laser systems to increasingly dominate the commercial sector in laser machining, as well as in medical sensing, imaging, and spectroscopy [1]. Users of these products look for a number of qualities. High energy and short pulse duration are fundamentally important. Other design objectives include beam quality, wall-plug efficiency, and thermal stability. However, as laser products move into areas of wider application, end users require more general specifications, such as hands-off interfacing that does not require expert knowledge, and environmental stability to ensure the laser continues to operate over its lifetime without maintenance or power/wavelength perturbations. We have recently demonstrated robust, passively mode-locked fibre laser designs that achieve these goals [2]. These devices use nonlinear amplifying loop mirrors (NALMs) for mode-...
More than any other infectious disease epidemic, the COVID-19 pandemic has been characterized by the generation of large volumes of viral genomic data at an incredible pace due to recent advances in high-throughput sequencing technologies, the rapid global spread of SARS-CoV-2, and its persistent threat to public health. However, distinguishing the most epidemiologically relevant information encoded in these vast amounts of data requires substantial effort across the research and public health communities. Studies of SARS-CoV-2 genomes have been critical in tracking the spread of variants and understanding its epidemic dynamics, and may prove crucial for controlling future epidemics and alleviating significant public health burdens. Together, genomic data and bioinformatics methods enable broad-scale investigations of the spread of SARS-CoV-2 at the local, national, and global scales and allow researchers to efficiently track the emergence of novel variants, reconstruct ...
We set up our ambulatory care unit and heart failure (HF) lounge to improve the management of HF patients; reduce A&E attendances and hospital admissions; optimise medicine administration; facilitate discharge from our wards; and promote a holistic approach. We kept the lounge running as a 'green' zone through the COVID-19 pandemic in 2020. Methods: The programme used a combination of quality improvement and Lean methodology, working with staff and patients to create new pathways. The service processes were reviewed, referral pathways were aligned to support community-to-specialist referral, and a redesign of the service was completed. The team monitored quality outcomes such as patients' access to specialist review, medication review, and patient experience of the service, also identifying patterns of attendance and admission pre and post attendance at the lounge. Results: The evaluation of the programme, centred on the experience of staff and patients, has shown improved informatio...
Proceedings of the 7th International Conference on Computer and Communication Technology - ICCCT-2017
People often cannot do as they plan, and this happens because of their habits. Habits and moods may therefore affect their productivity; hence, habits and moods are important parts of a person's life. Such habits may be analyzed with the various machine learning techniques available nowadays. This raises the question of analyzing a person's habits and moods with the goal of increasing their productivity. This paper discusses one such technique, called HDML (Habit Detection with Machine Learning). The HDML model analyses mood, which helps us deal with a bad mood or a state of unproductivity through suggestions about activities that alleviate the mood. The overall accuracy of the model is about 87.5%.
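The abstract does not publish HDML's feature set, so the following is only a hedged sketch of the general pipeline it describes: a classifier mapping daily habit features to a mood label, followed by an activity suggestion. Every feature, label, and suggestion here is hypothetical.

```python
# Sketch: a minimal habit-to-mood classifier with activity suggestions.
# Feature names, labels, and suggestions are hypothetical placeholders;
# the paper's actual features and model are not given in the abstract.
from sklearn.ensemble import RandomForestClassifier

# Daily features: [hours_slept, minutes_exercised, screen_hours, meals]
X = [[8, 30, 2, 3], [5, 0, 9, 1], [7, 20, 4, 3], [4, 0, 10, 2]]
y = ["good", "bad", "good", "bad"]   # self-reported mood labels

clf = RandomForestClassifier(random_state=0).fit(X, y)

suggestions = {"bad": "Take a short walk and limit screen time.",
               "good": "Keep the current routine."}

today = [[6, 10, 6, 2]]
mood = clf.predict(today)[0]
print(mood, "->", suggestions[mood])
```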
2021 8th International Conference on Signal Processing and Integrated Networks (SPIN)
The following work presents the idea for an effective electronic warfare system that can act as a useful device in military applications. The UAV is designed as a hexacopter and has a gun mounted on it so that it can eliminate a target with negligible risk of losing human life in combat or war-like situations. Its novelty lies in its anti-radar nature, which was adopted after analyzing various jamming and electronic countermeasure techniques. A 'stealth' approach to jamming is adopted by integrating radar-absorbent material to lower the radar cross section (RCS), which further decreases the detectability of the UAV within the radar's region. With its ability to stay in the air, it can easily be controlled, and due to its integration with a high-definition, low-vision camera it can target the enemy precisely, without risk of error, taking accurate coordinates of the target. The whole warfare system can be monitored on the screen of a mobile or laptop depending on the situation of operation. The UAV is armed with a lightweight gun which can be used in low vision, hence attacking a particular enemy without any loss of other human life.
Proceedings of the Genetic and Evolutionary Computation Conference
NeuroEvolution of Augmenting Topologies (NEAT) is one of the most successful algorithms for solving traditional reinforcement learning (RL) tasks such as pole-balancing. However, the algorithm faces serious challenges while tackling problems with large state spaces, particularly Atari game playing tasks. This is due to the major flaw that NEAT aims at evolving a single neural network (NN) that must simultaneously extract high-level state features and select action outputs; such complicated NNs cannot be easily evolved directly through NEAT. To address this issue, we propose a new reinforcement learning scheme based on NEAT with two key technical advancements: (1) a new three-stage learning scheme is introduced to clearly separate feature learning and policy learning, allowing effective knowledge sharing and learning across multiple agents; (2) various policy gradient search algorithms can be seamlessly integrated with NEAT for training policy networks with deep structures to achieve effective and sample-efficient RL. Experiments on several Atari games confirm that our new learning scheme can be more effective and has higher sample efficiency than NEAT and three state-of-the-art algorithms from the most recent RL literature.
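A hedged sketch of the separation the scheme argues for, assuming PyTorch and Atari-style 84x84 four-frame inputs: a fixed, shared feature extractor (standing in for the separately learned feature stage) with a small policy head trained by a REINFORCE-style policy gradient. The architecture and interface are assumptions, not the paper's exact method.

```python
# Sketch: separate a frozen, shared feature extractor from a policy head
# trained by policy gradient (REINFORCE). Shapes assume 84x84 four-frame
# Atari inputs and a 6-action game; all of this is illustrative.
import torch
import torch.nn as nn

features = nn.Sequential(                       # feature stage: learned
    nn.Conv2d(4, 16, kernel_size=8, stride=4),  # separately, then frozen
    nn.ReLU(),                                  # and shared across agents
    nn.Conv2d(16, 32, kernel_size=4, stride=2),
    nn.ReLU(),
    nn.Flatten(),
)
for p in features.parameters():
    p.requires_grad_(False)

# Policy head over the 32*9*9 features produced by 84x84 inputs.
policy = nn.Sequential(nn.Linear(32 * 9 * 9, 128), nn.ReLU(),
                       nn.Linear(128, 6))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_step(states, actions, returns):
    """One REINFORCE update on the policy head over a batch of transitions.
    states: (B, 4, 84, 84) float; actions: (B,) long; returns: (B,) float."""
    logits = policy(features(states))
    logp = torch.log_softmax(logits, dim=-1)
    chosen = logp.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(chosen * returns).mean()           # return-weighted log-prob
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```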
Objective: Our study aimed to provide an improved model that classifies fetal distress using a two-dimensional convolutional neural network (CNN). It also helps improve the visualization of FHR and UC signals. Background: Hypoxia, or fetal distress, is the main cause of death in newborns. Cardiotocography is used to detect hypoxia by observing fetal heart rate (FHR) and uterine contraction (UC) signals. Setting: Department of Computer Engineering and Technology, Guru Nanak Dev University, India. Subjects: The CTG-UHB database was used, and a total of 552 records were analyzed for classification purposes. Methods: A convolutional neural network was used for classification, and HoloViz was used for the visualization of data, with the HvPlot and HoloViews libraries in Python. The classification was performed on the Keras software...
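A minimal Keras sketch of a 2D CNN for binary fetal-distress classification, assuming the FHR and UC traces have been rendered into a fixed-size 2D array; the input shape and all hyperparameters are illustrative assumptions rather than the paper's architecture.

```python
# Sketch: a small 2D CNN in Keras for binary fetal-distress classification.
# The (64, 64, 1) input assumes the FHR/UC signals were resampled into a
# 2D image-like array; layout and hyperparameters are illustrative only.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),        # assumed 2D rendering of signals
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # distress vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```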
Domain names have a dual role in today's internet-driven marketplace: to map IP addresses and to act as an identifier of a company's trademark. Unlike trademarks, domain names are not sufficiently protected by the laws of a country, and there is no uniformity among the laws of various countries in protecting domain names. In order to protect domain names and bring uniformity, ICANN developed the Uniform Domain Name Dispute Resolution Policy (UDRP). In this research, the various kinds of domain name abuses are identified. The application of the UDRP, the domain name registration process, and the dispute resolution service process are examined. The major domain name dispute cases resolved under the UDRP by WIPO are studied. It has been found that the UDRP is applicable to generic top-level domains (gTLDs) and new gTLDs; it is much less relevant for country-code top-level domains (ccTLDs). The losing party still has the option of appealing to a court of competent jurisdiction in the case of gTLDs and new gTLDs; however, this option is seldom exercised. In order to protect domain names better, there is a need to bring uniformity to the domain name laws of various countries. ICANN should formulate a model domain name dispute resolution law for adoption by various countries. Also, there is a need to strengthen the UDRP.
With the increasing popularity of ICT, cyber-crimes have increased rapidly. Countries across the globe have made the necessary interventions to ensure cyber-security. Saudi Arabia has been the worst victim of cyber-crimes in the Gulf region. This article investigates the preparedness of Saudi Arabia to defend itself against cyber-crimes. In order to combat cyber-crimes, Saudi Arabia enacted its anti-cybercrimes law in 2007. The Global Cybersecurity Index of 2017 placed Saudi Arabia in the maturing stage, behind the leading nations. The anti-cybercrimes law covers essential areas of the fight against cyber-crimes and states their punishments. However, it is found to be deficient in protecting against identity theft, invasion of privacy, cyber-bullying, etc. This research finds Saudi Arabia semi-prepared to defend itself against cyber-crimes. In order to be among the leading nations in cyber-security, Saudi Arabia needs to strengthen its anti-cybercrimes law, cyber-security regulations, and national cyber-security authority. It needs to develop a cyber-security strategy, standards, metrics, and R&D programs. It should promote a home-grown cyber-security industry, incentivize cyber-security companies, and enter into multilateral agreements.