2018 International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD)
Cloud computing is a central part of today's information technology landscape, providing solutions to almost every sector for improving its business. This innovation helps organizations enhance or redesign their services at very low expense. This study investigates the integrated factors that influence the adoption of cloud computing technologies in the banking industry. We performed a quantitative analysis based on questionnaire responses from 2,358 people connected with the banking industry, including bank managers, customers, and senior management. The questionnaire was designed to capture their opinions on adopting cloud computing for banking services. We also identify the benefits and challenges of cloud adoption in banking, and we propose a cloud adoption framework built on Cost, Organizational, Technological, Environmental and Decision-maker (COTED) factors. The findings reveal that all the identified factors have a significant effect on the adoption of cloud computing technologies in the banking industry. With these findings, cloud computing service providers will better understand which factors most influence cloud adoption, with relevant insight into current promotions.
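The abstract does not name its statistical method; as a loose illustration of how questionnaire responses can be tested for factor significance, the sketch below fits a logistic regression of a hypothetical adoption decision on Likert-scale scores for the five COTED factors. All variable names and data are invented, not taken from the study.

```python
# Hedged sketch: hypothetical analysis, not the paper's actual procedure.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2358  # respondent count, as reported in the abstract

# Hypothetical 5-point Likert scores for the five COTED factors.
factors = ["cost", "organizational", "technological", "environmental", "decision_maker"]
X = rng.integers(1, 6, size=(n, len(factors))).astype(float)

# Hypothetical adoption decision (1 = would adopt cloud services).
y = (X.sum(axis=1) + rng.normal(0, 2, n) > 15).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(factors, model.coef_[0]):
    print(f"{name:>15}: {coef:+.3f}")  # sign/magnitude of each factor's influence
```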
Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems, 2021
Code-mixing is the phenomenon of mixing words and phrases from two or more languages in a single utterance of speech or text. Due to high linguistic diversity, code-mixing presents several challenges in evaluating standard natural language generation (NLG) tasks, and various widely popular metrics perform poorly on code-mixed NLG tasks. To address this challenge, we present a metric-independent evaluation pipeline, MIPE, that significantly improves the correlation between evaluation metrics and human judgments on generated code-mixed text. As a use case, we demonstrate the performance of MIPE on machine-generated Hinglish (code-mixed Hindi and English) sentences from the HinGE corpus. The proposed evaluation strategy extends to other code-mixed language pairs, NLG tasks, and evaluation metrics with minimal to no effort.
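MIPE's internals are not described in the abstract; the sketch below only illustrates the quantity the pipeline is said to improve, namely the correlation between an automatic metric and human judgments, using hypothetical scores and SciPy's correlation functions.

```python
# Hedged sketch: hypothetical scores, not data from the HinGE experiments.
from scipy.stats import pearsonr, spearmanr

human_scores  = [4.5, 3.0, 2.5, 4.0, 1.5, 3.5]        # human ratings of generated Hinglish
metric_scores = [0.71, 0.48, 0.30, 0.62, 0.20, 0.55]  # e.g. a metric before/after MIPE

print("Pearson :", pearsonr(human_scores, metric_scores)[0])
print("Spearman:", spearmanr(human_scores, metric_scores)[0])
```

A higher correlation after applying the pipeline is exactly the improvement the abstract claims.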
Text detection in scenes based on deep neural networks has shown promising results. Instead of using word bounding-box regression, recent state-of-the-art methods have shifted to character bounding boxes and pixel-level prediction. This creates the need to link adjacent characters, which we address in this paper using a novel Graph Neural Network (GNN) architecture that learns both node and edge features, as opposed to only node features in a typical GNN. The main advantage of using a GNN for link prediction lies in its ability to connect characters that are spatially separated and arbitrarily oriented. We demonstrate our approach on the well-known SynthText dataset, achieving top results compared to state-of-the-art methods.
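As a rough illustration of the edge-feature idea (not the paper's architecture), the sketch below runs one message-passing step in which each message depends on both the neighbouring node's features and the connecting edge's features, then scores a candidate character link from the updated endpoint embeddings. All shapes and weights are toy values.

```python
# Hedged sketch: a single message-passing step with node AND edge features.
import numpy as np

rng = np.random.default_rng(1)
num_nodes, d_node, d_edge = 4, 8, 4
H = rng.normal(size=(num_nodes, d_node))   # node (character) features
edges = [(0, 1), (1, 2), (2, 3)]           # candidate character links
E = rng.normal(size=(len(edges), d_edge))  # edge features (e.g. offset, angle)

W_n = rng.normal(size=(d_node, d_node))
W_e = rng.normal(size=(d_edge, d_node))

def relu(x):
    return np.maximum(0.0, x)

# Each message mixes the neighbour's state with the connecting edge's feature.
M = np.zeros_like(H)
for k, (i, j) in enumerate(edges):
    M[i] += relu(H[j] @ W_n + E[k] @ W_e)
    M[j] += relu(H[i] @ W_n + E[k] @ W_e)

H_new = relu(H + M)  # residual update of node states

# Link score for an edge: similarity of the two updated endpoints.
print(f"link score (0,1): {H_new[0] @ H_new[1]:.3f}")
```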
Proceedings of the 6th International Workshop on Mining Scientific Publications, 2017
This paper presents AppTechMiner, a rule-based information extraction framework that automatically constructs a knowledge base of application areas and problem-solving techniques. Techniques include tools, methods, datasets, and evaluation metrics. We also categorize individual research articles based on their application areas and the techniques proposed or improved in the article. Our system achieves high average precision (∼82%) and recall (∼84%) in knowledge base creation. It also performs well in assigning applications and techniques to individual articles (average accuracy ∼66%). Finally, we present two use cases: a simple information retrieval system and an extensive temporal analysis of the usage of techniques and application areas. We demonstrate the framework for the domain of computational linguistics, but it generalizes easily to any other field of research.
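To give a feel for what a rule-based extractor of this kind does (the paper's actual rule set is not shown here), the sketch below applies one hypothetical lexico-syntactic pattern of the form "use <technique> for <application>":

```python
# Hedged sketch: one invented rule, not AppTechMiner's rule inventory.
import re

PATTERN = re.compile(
    r"\buse[sd]?\s+(?P<technique>[\w\- ]+?)\s+for\s+(?P<application>[\w\- ]+)",
    re.IGNORECASE,
)

sentence = "We used conditional random fields for named entity recognition."
m = PATTERN.search(sentence)
if m:
    print("technique  :", m.group("technique"))    # conditional random fields
    print("application:", m.group("application"))  # named entity recognition
```

A real system would stack many such patterns and normalize the extracted phrases before populating the knowledge base.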
Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017
The rate at which nodes in evolving social networks acquire links (friends, citations) shows complex temporal dynamics. Preferential attachment and link copying models, while enabling elegant analysis, only capture rich-gets-richer effects, not aging and decline. Recent aging models are complex and heavily parameterized; most involve estimating 1-3 parameters per node. These parameters are intrinsic: they explain decline in terms of events in the past of the same node, and do not explain, using the network, where the linking attention might go instead. We argue that traditional characterizations of linking dynamics are insufficient to judge the faithfulness of models. We propose a new temporal sketch of an evolving graph and introduce several new characterizations of a network's temporal dynamics. We then propose a new family of frugal aging models with no per-node parameters and only two global parameters. Our model is based on a surprising inversion, or undoing, of triangle completion, in which an old node relays a citation to a younger follower in its immediate vicinity. Despite very few parameters, the new family of models fits real data remarkably better. Before concluding, we analyze temporal signatures for various research communities, yielding further insights into their comparative dynamics. To facilitate reproducible research, we will make all code and the processed dataset publicly available.
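A loose simulation of the relay idea as stated in the abstract (not the paper's exact model): a newcomer picks a target by preferential attachment, and if the target is older than a global age cutoff, the citation is relayed to a younger neighbour. Both parameters below are illustrative.

```python
# Hedged sketch: relay-style aging on a toy citation graph.
import random

random.seed(42)
AGE_CUTOFF = 20      # hypothetical global parameter: when a node counts as old
nodes = [0, 1]       # node id == birth time
edges = [(1, 0)]     # (citing, cited)

def neighbours(v):
    return [a for a, b in edges if b == v] + [b for a, b in edges if a == v]

for t in range(2, 200):
    # Preferential attachment: sample a target weighted by in-degree + 1.
    weights = [sum(1 for _, b in edges if b == v) + 1 for v in nodes]
    target = random.choices(nodes, weights=weights)[0]
    # Relay: an old target passes the citation to a younger neighbour, if any.
    if t - target > AGE_CUTOFF:
        younger = [u for u in neighbours(target) if u > target]
        if younger:
            target = random.choice(younger)
    edges.append((t, target))
    nodes.append(t)

most_cited = max(nodes, key=lambda v: sum(1 for _, b in edges if b == v))
print("edges:", len(edges), "| most cited node:", most_cited)
```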
2021 International Joint Conference on Neural Networks (IJCNN), 2021
Conventional singing voice conversion (SVC) methods often struggle with high-resolution audio owing to the high dimensionality of the data. In this paper, we propose a hierarchical representation learning method that learns disentangled representations at multiple resolutions independently. With the learned disentangled representations, the proposed method progressively performs SVC from low to high resolutions. Experimental results show that the proposed method outperforms baselines that operate at a single resolution in terms of mean opinion score (MOS), similarity score, and pitch accuracy.
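The hierarchy in the paper is learned; the sketch below only illustrates the underlying notion of representing one signal at several resolutions via successive decimation (a real system would low-pass filter before decimating). Sample rate and factors are arbitrary.

```python
# Hedged sketch: one waveform viewed at several resolutions, coarse to fine.
import numpy as np

sr = 24000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 220.0 * t)   # 1 s of a 220 Hz tone

resolutions = {}
for factor in (8, 4, 2, 1):             # coarse to fine
    resolutions[sr // factor] = audio[::factor]

for rate, sig in resolutions.items():
    print(f"{rate:>6} Hz: {sig.shape[0]} samples")
```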
2020 IEEE Winter Conference on Applications of Computer Vision (WACV), 2020
Few-shot learning algorithms aim to learn model parameters capable of adapting to unseen classes with the help of only a few labeled examples. A recent regularization technique, Manifold Mixup, focuses on learning a general-purpose representation that is robust to small changes in the data distribution. Since the goal of few-shot learning is closely linked to robust representation learning, we study Manifold Mixup in this problem setting. Self-supervised learning is another technique that learns semantically meaningful features using only the inherent structure of the data. This work investigates the role of learning a relevant feature manifold for few-shot tasks using self-supervision and regularization techniques. We observe that regularizing the feature manifold, enriched via self-supervised techniques, with Manifold Mixup significantly improves few-shot learning performance. We show that our proposed method, S2M2, beats the current state-of-the-art accuracy on standard few-shot learning datasets such as CIFAR-FS, CUB, mini-ImageNet, and tiered-ImageNet by 3-8%. Through extensive experimentation, we show that the features learned using our approach generalize to complex few-shot evaluation tasks and cross-domain scenarios, and are robust against slight changes to the data distribution.
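The core Manifold Mixup operation the paper builds on is easy to state (this is the generic published technique, not the full S2M2 recipe): hidden features of two examples are blended with a Beta-distributed coefficient, and their labels are blended identically.

```python
# Hedged sketch of the generic Manifold Mixup step, with toy features.
import numpy as np

rng = np.random.default_rng(0)
alpha = 2.0
lam = rng.beta(alpha, alpha)          # mixing coefficient, lambda ~ Beta(a, a)

h1, h2 = rng.normal(size=64), rng.normal(size=64)  # hidden features of two images
y1, y2 = np.eye(5)[1], np.eye(5)[3]                # one-hot labels, 5 classes

h_mix = lam * h1 + (1 - lam) * h2     # mixed hidden representation
y_mix = lam * y1 + (1 - lam) * y2     # identically mixed soft label
print(f"lambda = {lam:.3f}, mixed label = {np.round(y_mix, 3)}")
```

In training, the mixing is applied at a randomly chosen hidden layer and the loss is computed against the mixed label.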
Proceedings of the 7th ACM IKDD CoDS and 25th COMAD, 2020
The task of learning a sentiment classification model that adapts well to any target domain different from the source domain is a challenging problem. The majority of existing approaches focus on learning a common representation by leveraging both source and target data during training. In this paper, we introduce a two-stage training procedure that leverages weakly supervised datasets to develop simple lift-and-shift predictive models that are never exposed to the target domain during training. Experimental results show that transfer with weak supervision from a source domain to various target domains provides performance very close to that obtained via supervised training on the target domain itself.
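A minimal sketch of the lift-and-shift shape of this procedure, with hypothetical weakly labelled data and off-the-shelf scikit-learn parts (the paper's models and datasets are not reproduced): train on the source domain only, then apply unchanged to target text.

```python
# Hedged sketch: source-only training, zero target exposure.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical weakly labelled source reviews (e.g. labels derived from ratings).
source_texts  = ["great battery life", "awful screen", "loved it", "broke in a week"]
source_labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(source_texts, source_labels)

# Lift-and-shift: the target domain is never seen during training.
target_texts = ["the battery is great", "screen broke, awful"]
print(clf.predict(target_texts))
```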
Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching, 2021
Code-mixing is a frequent communication style among multilingual speakers, in which they mix words and phrases from two different languages in the same utterance of text or speech. Identifying and filtering code-mixed text is a challenging task due to its coexistence with monolingual and noisy text. Over the years, several code-mixing metrics have been extensively used to identify and validate the quality of code-mixed text. This paper demonstrates several inherent limitations of code-mixing metrics, with examples from existing datasets that are popularly used across various experiments.
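One widely used metric of the kind this paper scrutinizes is the Code-Mixing Index (CMI) of Gambäck and Das, computed from token-level language tags as 100 × (1 − max_i w_i / (n − u)), where w_i counts tokens of language i, n is the total token count, and u the count of language-independent tokens. The sketch below implements that formula on hypothetical tags.

```python
# Hedged sketch: standard CMI formula; the tag sequences below are invented.
def cmi(lang_tags):
    n = len(lang_tags)
    u = sum(1 for t in lang_tags if t == "univ")  # names, punctuation, etc.
    if n == u:
        return 0.0
    counts = {}
    for t in lang_tags:
        if t != "univ":
            counts[t] = counts.get(t, 0) + 1
    return 100.0 * (1.0 - max(counts.values()) / (n - u))

print(cmi(["hi", "hi", "hi", "en", "hi"]))  # mixed Hinglish utterance -> 20.0
print(cmi(["en", "en", "en"]))              # monolingual -> 0.0
```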
Journal of Physics G: Nuclear and Particle Physics, 2019
Excited states of ¹²⁷Xe were populated via the ¹²²Sn(⁹Be, 4nγ) fusion-evaporation reaction at a beam energy of 48 MeV. A sequence of ΔI = 2 γ-transitions has been observed above the Iπ = 19/2⁺ state at 2307 keV. Spins and parities of the states belonging to this band have been firmly assigned. The dynamic moment of inertia of this band is found to be low and almost constant as a function of angular momentum. The observed experimental features, along with the results of a theoretical semiclassical model calculation, suggest an antimagnetic rotational character for this band.
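The dynamic moment of inertia quoted here is conventionally extracted from the in-band γ-ray energies; a standard textbook relation for a ΔI = 2 band (not reproduced from this paper) is:

```latex
% Standard definition for a \Delta I = 2 rotational band: the dynamic moment
% of inertia follows from the spacing of consecutive in-band transition
% energies E_\gamma (textbook relation, not taken from the paper).
\[
  \mathcal{J}^{(2)}(I) \;=\; \frac{4\hbar^{2}}{E_{\gamma}(I+2) - E_{\gamma}(I)}
\]
```

A low, nearly spin-independent J⁽²⁾ is precisely the flat behaviour the abstract cites as part of the antimagnetic-rotation signature.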
Neural networks have been shown to be vulnerable to adversarial perturbations. Although adversarially crafted examples look visually similar to the unaltered original image, neural networks behave abnormally on these modified images. Image attribution methods highlight the regions of an input image that are important for the model's prediction. We believe that the domains of adversarial generation and attribution are closely related, and we support this claim with various experiments. Using image attributions, we train a second neural network classifier as a detector for adversarial examples. Our detection method differs from other work on adversarial detection [10, 13, 4, 3] in that we do not use adversarial examples during training; our detection methodology is therefore independent of the adversarial attack generation method. We validate our detection technique on MNIST and CIFAR-10, achieving a high success rate for various adversarial attacks including FGSM, DeepFool, CW, and PGD. We also show that training the detector with attributions of adversarial examples generated even from a simple attack like FGSM further increases detection accuracy across several different attacks.
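A hedged sketch of the FGSM-augmented variant mentioned at the end of the abstract, not the paper's attack-independent detector: compute a simple gradient-times-input attribution for a toy linear scorer, then train a second classifier on attributions of clean versus FGSM-like inputs. All data and the "model" are synthetic.

```python
# Hedged sketch: attribution-based adversarial detection on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 28 * 28
w = rng.normal(size=d)              # stand-in for a trained model's weights

def attribution(x):
    # For a linear score w.x, gradient-times-input is simply w * x elementwise.
    return w * x

clean = rng.normal(size=(200, d))
adv = clean + 0.3 * np.sign(w)      # FGSM-style step along the gradient sign

X = np.array([attribution(x) for x in np.vstack([clean, adv])])
y = np.array([0] * 200 + [1] * 200)  # 0 = clean, 1 = adversarial

detector = LogisticRegression(max_iter=1000).fit(X, y)
print("detector training accuracy:", detector.score(X, y))
```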
This paper proposes Object-Oriented Usability Indices (OOUI) for multi-objective Demand Side Management (DSM). These indices quantify the achievements of multi-objective DSM in a power network. DSM can be viewed as a method adopted by utilities to shed some load during peak hours; usually there are service contracts, and curtailment or dimming of load is performed automatically by service providers based on contract provisions. This paper formulates three indices: peak power shaving, renewable energy integration, and an overall usability index. The first two indicate the amount of peak load shaving and the degree of renewable energy integration, while the third combines the impact of both and quantifies the overall benefit achieved through DSM. The application of the proposed indices is demonstrated through simulation in a grid-tied microgrid environment for a multi-objective DSM formulation. The adopted microgrid structure consists of three units o...
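The abstract does not give the index definitions, so the formulas below are plausible illustrations only: a peak-shaving ratio, a renewable-integration ratio, and a hypothetical weighted combination standing in for the overall usability index.

```python
# Hedged sketch: invented index definitions in the spirit of the abstract.
def peak_shaving_index(peak_before_kw, peak_after_kw):
    # Fraction of the original peak removed by DSM.
    return (peak_before_kw - peak_after_kw) / peak_before_kw

def renewable_integration_index(renewable_kwh, total_kwh):
    # Share of demand served by renewable generation.
    return renewable_kwh / total_kwh

def overall_usability(psi, rii, w_psi=0.5, w_rii=0.5):
    # Hypothetical weighted combination of both indices.
    return w_psi * psi + w_rii * rii

psi = peak_shaving_index(950.0, 800.0)
rii = renewable_integration_index(420.0, 1500.0)
print(f"PSI={psi:.3f}  RII={rii:.3f}  OUI={overall_usability(psi, rii):.3f}")
```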
The overwhelming number of scientific articles published over the years calls for smart automatic tools to facilitate literature review. Here, we propose for the first time a framework of faceted recommendation for scientific articles (abbreviated FeRoSA) which, apart from ensuring quality retrieval of scientific articles for a query paper, also efficiently arranges the recommended papers into different facets (categories). Providing users with an interface for filtering recommendations across multiple facets increases users' control over how the recommendation system behaves. FeRoSA is built on a random-walk-based framework over an induced subnetwork consisting of nodes related to the query paper in terms of either citations or content similarity. Rigorous analysis based on experts' judgment shows that FeRoSA outperforms two baseline systems in terms of faceted recommendations (overall precision of 0.65). Further, we show that the faceted results of FeRoSA can be appropriately combined to design a better flat recommendation system as well. An experimental version of FeRoSA is publicly available at www.ferosa.org (receiving as many as 170 hits within the first 15 days of launch).
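The core machinery named here, a random walk over an induced subnetwork, can be illustrated with a generic random walk with restart (personalized PageRank); FeRoSA's actual transition weights and facet logic are not reproduced, and the adjacency matrix below is hypothetical.

```python
# Hedged sketch: random walk with restart on a toy 5-paper subnetwork.
import numpy as np

# Hypothetical induced subnetwork around a query paper (node 0).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

restart = np.array([1.0, 0, 0, 0, 0])  # always jump back to the query paper
alpha = 0.15                            # restart probability
pi = np.full(5, 0.2)
for _ in range(100):
    pi = alpha * restart + (1 - alpha) * pi @ P

print("relevance scores:", np.round(pi, 3))  # higher = more recommendable
```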
2015 Fifth International Conference on Communication Systems and Network Technologies, 2015
We propose a regression test selection technique based on analysis of the source code of an object-oriented program. First, we construct a system dependency graph model of the original program from the source code. When the program is modified, the constructed model is updated to reflect the changes. In addition to capturing control and data dependencies, our approach represents the dependencies arising from object relations. The test cases that exercise the affected model elements in the program model are selected for regression testing. The system dependency graph representation is used to analyze and compare the code changes between the original and modified programs. Our empirical studies show that our technique selects on average 26.36% more fault-revealing test cases than a Control Dependence Graph based technique, while incurring about a 37.34% increase in regression test suite size.
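A minimal sketch of the selection rule (not the paper's system dependency graph implementation): diff two dependency-graph snapshots, mark elements whose dependencies changed as affected, and keep every test whose coverage intersects the affected set. All names below are hypothetical.

```python
# Hedged sketch: graph diff + coverage intersection for test selection.
old_graph = {"A": {"B"}, "B": {"C"}, "C": set()}                  # node -> deps
new_graph = {"A": {"B"}, "B": {"C", "D"}, "C": set(), "D": set()}

# Affected elements: added/removed nodes, or nodes whose dependencies changed.
affected = {n for n in old_graph.keys() | new_graph.keys()
            if old_graph.get(n) != new_graph.get(n)}

coverage = {                    # hypothetical test -> model elements exercised
    "test_login":    {"A", "B"},
    "test_checkout": {"C"},
    "test_search":   {"B", "D"},
}
selected = [t for t, elems in coverage.items() if elems & affected]
print("affected:", affected, "| selected:", selected)
```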
International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II, 2005
Even though passwords are the most convenient means of authentication, they bring with them the threat of dictionary attacks. Dictionary attacks may be of two kinds: online and offline. Offline dictionary attacks are possible only if the adversary can collect data from a successful protocol execution by eavesdropping on the communication channel, and they can be successfully countered using public key cryptography. Online dictionary attacks, in contrast, can be performed by anyone, and there is no fully satisfactory solution to counter them. This paper presents a new authentication protocol called CompChall (computational challenge). The proposed protocol uses only one-way hash functions as building blocks and attempts to eliminate online dictionary attacks through a challenge-response system. The challenge-response system is designed so that it poses no difficulty to a genuine user but is time-consuming and computationally intensive for an adversary trying to launch a large number of login requests per unit time, as in an online dictionary attack. The protocol is stateless and thus less vulnerable to DoS (Denial of Service) attacks.
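CompChall's exact construction is not reproduced in the abstract; the sketch below shows a generic hash-based computational puzzle in the same spirit, where the client must find a nonce whose hash has a required number of leading zero bits: negligible work for one login, prohibitive for thousands per second.

```python
# Hedged sketch: a generic hash puzzle, not the CompChall protocol itself.
import hashlib
import itertools
import os

DIFFICULTY = 16  # leading zero bits required (illustrative)

def solve(challenge: bytes) -> int:
    # Brute-force a nonce; expected work is ~2**DIFFICULTY hash evaluations.
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if int.from_bytes(digest[:4], "big") >> (32 - DIFFICULTY) == 0:
            return nonce

challenge = os.urandom(16)  # would be sent by the server with the login form
print("solved with nonce:", solve(challenge))
```

The asymmetry is the point: a genuine user pays the cost once per login, while an online dictionary attacker pays it for every guess.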
This study aimed to evaluate the incidence and clinical spectrum of neurologic complications, predictors of central and peripheral nervous system involvement, and their outcomes in patients with dengue virus (DENV) infection. To determine the extent of neurologic complications, we used a hospital-based prospective cohort study design, which included laboratory-confirmed cases of dengue with follow-up for 3 months. We also analyzed clinical and laboratory data to assess predictors of neurologic involvement. The study enrolled 486 cases; two were lost to follow-up and excluded. Forty-five patients developed neurologic complications, of whom 28 had central nervous system (CNS) and 17 had peripheral nervous system (PNS) involvement, an incidence rate for neurologic complications of 9.26%. Significant predictors of CNS involvement were higher mean body temperature (p = 0.012), elevated hematocrit (p = 0.009), low platelet count (p = 0.021), and liver dysfunction (p < 0.001)...
International Journal of Computer Applications, 2013
Image compression is a critical problem, and many techniques built on different methods and tools have been proposed. Here, a compression technique is proposed based on the cosine transformation combined with wavelet transformations using "db7", "db7", and "db8". We calculate the compression ratio using this hybrid architecture with the "db7", "db7", and "db8" wavelets, and finally determine which transformation combination is best with respect to compression ratio. The proposed algorithm is implemented in MATLAB 2010.
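As a loose illustration of the wavelet half of such a pipeline (the paper's hybrid DCT+wavelet design and MATLAB code are not reproduced), the sketch below decomposes an image with PyWavelets' db7, discards small coefficients, and reports a simple retained-coefficient compression ratio.

```python
# Hedged sketch: wavelet thresholding and a naive compression-ratio estimate.
import numpy as np
import pywt

rng = np.random.default_rng(0)
image = rng.random((128, 128))   # stand-in for a test image

coeffs = pywt.wavedec2(image, "db7", level=2)  # 2-level db7 decomposition
flat, _ = pywt.coeffs_to_array(coeffs)

threshold = 0.1 * np.abs(flat).max()           # illustrative threshold
kept = np.count_nonzero(np.abs(flat) > threshold)
print(f"compression ratio: {flat.size / kept:.2f}:1")
```

Swapping the wavelet name ("db7", "db8", ...) reproduces the kind of comparison the paper describes.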
Eurographics Workshop on Computational Aesthetics in Graphics, Visualization and Imaging, 2010
We present a method for drawing lines on an object that depict both the shape and the shading of the object. To do so, we construct a gradient field of the diffuse intensity of the surface to guide a set of adaptively spaced lines. The shape of these lines reflects both the lighting under which the object is viewed and its shape. When the light source is placed at the viewer's location, these lines emanate from silhouettes and naturally extend Suggestive Contours. By using a hierarchical proximity grid, we can improve the quality of these lines as well as control their density over the image. We also provide a method for detecting and removing ridge lines in the intensity field, which lead to artifacts in the line drawings.
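The first ingredient of the method, a gradient field of diffuse intensity, is easy to sketch (this is only that ingredient, not the paper's line placement or advection): compute I = max(0, N·L) on synthetic sphere normals with a headlight, then take its image gradient.

```python
# Hedged sketch: diffuse-intensity gradient field on a synthetic sphere patch.
import numpy as np

# Synthetic surface normals for a unit sphere patch on a 64x64 grid.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
z = np.sqrt(np.clip(1 - x**2 - y**2, 0, None))
N = np.stack([x, y, z], axis=-1)

L = np.array([0.0, 0.0, 1.0])        # headlight: light at the viewer
I = np.clip(N @ L, 0, None)          # diffuse intensity, max(0, N.L)

gy, gx = np.gradient(I)              # gradient field that guides the lines
print("intensity range:", I.min(), I.max())
print("max gradient magnitude:", np.hypot(gx, gy).max())
```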
While realistic illumination significantly improves the visual quality and perception of rendered images, it is often very expensive to compute. In this paper, we propose a new algorithm for embedding a global ambient occlusion computation within the fast sweeping algorithm while determining isosurfaces. With this method we can approximate ambient occlusion for rendering volumetric data with minimal additional cost over fast sweeping. We compare visualizations rendered with our algorithm to visualizations computed with only local shading and with an ambient occlusion calculation using a Monte Carlo sampling method. We also show how this method can be used to approximate low-frequency shadows and subsurface scattering.