Identifying locations of nodes in wireless sensor networks (WSNs) is critical to both network operations and most application-level tasks. Sensor nodes equipped with Global Positioning System (GPS) devices know their locations to within a few meters. However, installing GPS devices on a large number of sensor nodes is not only expensive but also affects the form factor of these nodes. Moreover, GPS-based localization is not applicable in indoor environments such as buildings. An extensive body of research literature aims at obtaining absolute as well as relative spatial locations of nodes in a WSN without requiring specialized hardware at large scale. The typical approach employs only a limited number of anchor nodes that are aware of their own locations, and then infers the locations of non-anchor nodes using graph-theoretic, geometric, statistical, optimization, and machine learning techniques. The literature thus represents a very rich ensemble of algorithmic techniques applicable to low-power, highly distributed nodes with resource-optimal computations. In this chapter we take a close look at the algorithmic aspects of various important localization techniques for WSNs.
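The anchor-based approach described above can be illustrated with range-based trilateration: given distances to a few anchors, the nonlinear range equations can be linearized and solved by least squares. The sketch below is a minimal, generic illustration rather than any specific algorithm from the chapter; the function name and the noise-free setup are assumptions.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Estimate a node's 2-D position from ranges to >= 3 anchors.

    Subtracting the last anchor's range equation from the others turns
    the nonlinear system into a linear one, solved by least squares.
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    # Rows: 2*(anchor_i - anchor_n); unknowns: the node position (x, y)
    A = 2.0 * (anchors[:-1] - anchors[-1])
    b = (dists[-1] ** 2 - dists[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1)
         - np.sum(anchors[-1] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Noise-free check: node at (2, 3), three anchors at known positions
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([2.0, 3.0])
dists = [float(np.linalg.norm(true_pos - a)) for a in anchors]
print(trilaterate(anchors, dists))  # ≈ [2. 3.]
```

With noisy ranges and more anchors the same system simply gains rows; iterative refinement of the nonlinear residuals (e.g. Gauss–Newton) is a common next step.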
One of the key questions facing climate scientists, policy makers and the public today is: how important is natural variability in explaining global warming? Sedimentary archives from marginal marine environments, such as fjordic (or sea-loch) environments, typically have higher sediment accumulation rates than deeper ocean sites and thus provide suitably expanded archives of the Holocene against which 20th-century changes can be compared. Moreover, with suitable temporal resolution, the impact of Holocene rapid climate change episodes, such as the 8.2 kyr event, can be constrained. Since fjords bridge the land-ocean interface, palaeo-environmental records from fjordic environments provide a unique opportunity to study the link between marine and terrestrial climate. Here we present millennial- to centennial-scale, independent records of marine and terrestrial change in two fjordic cores: from Ísafjardardjúp, northwest Iceland (core MD99-2266; location: 66° 13' 77'' N, 23° 15' 93'' W; 106 m water depth) and from Loch Sunart, northwest Scotland (core MD-04 2832; location: 56° 40.19' N, 05° 52.21' W; 50 m water depth). The cores are extremely high resolution, with 1 cm of sediment representing <10 years of accumulation, and come from sites influenced by disparate branches of the North Atlantic Drift (i.e. the distal Gulf Stream): the Irminger and Shetland Currents. We reconstruct sea surface temperatures (SST) and terrestrial mean annual air temperatures (MAT) derived from alkenone and tetraether biomarkers (using the UK37' and MBT/CBT-MAT indices, respectively). Additional insights into terrestrial environmental change are derived from proxy records of soil pH (from the tetraether CBT proxy) and, in the case of MD99-2266, from higher plant wax distributions.
The timing of millennial-scale SST variability in the cores should give insight into the degree of phasing of millennial-scale climate variability between the western (Irminger Current) and eastern (Shetland Current) branches of warm Atlantic inflow to the northeast Atlantic and Nordic Seas. Additionally, we investigate at higher temporal resolution the signal of the 8.2 kyr event in MD99-2266.
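For concreteness, the alkenone-based SST reconstruction named above works from the UK37' unsaturation index. The sketch below uses the widely cited Müller et al. (1998) global core-top calibration (UK37' = 0.033 × SST + 0.044); the calibration actually applied to these cores may differ, and the alkenone abundances are invented for illustration.

```python
def uk37_prime(c37_2, c37_3):
    """Alkenone unsaturation index UK37' = C37:2 / (C37:2 + C37:3),
    from the abundances of di- and tri-unsaturated C37 alkenones."""
    return c37_2 / (c37_2 + c37_3)

def sst_from_uk37_prime(index):
    """Invert the Muller et al. (1998) global core-top calibration
    UK37' = 0.033 * SST + 0.044 (SST in degrees C)."""
    return (index - 0.044) / 0.033

# Invented abundances, illustration only
idx = uk37_prime(c37_2=6.0, c37_3=4.0)      # 0.6
print(round(sst_from_uk37_prime(idx), 1))   # 16.8
```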
Localization is one of the fundamental problems in wireless sensor networks (WSNs), since the locations of sensor nodes are critical to both network operations and most application-level tasks. Although GPS-based localization schemes can determine node locations to within a few meters, the cost of GPS devices and the non-availability of GPS signals in confined environments prevent their use in large-scale sensor networks. An extensive body of research aims at obtaining locations as well as spatial relations of nodes in WSNs without requiring specialized hardware and/or by employing only a limited number of anchors that are aware of their own locations. In this paper, we present a comprehensive survey of sensor localization in WSNs, covering motivations, problem formulations, solution approaches and performance summaries. Future research issues are also discussed.
The Information Power Grid (IPG) concept developed by NASA aims to provide a metacomputing platform for large-scale distributed computations by hiding the intricacies of a highly heterogeneous environment while maintaining adequate security. In this paper, we propose a latency-tolerant partitioning scheme that dynamically balances processor workloads on the IPG and minimizes data movement and runtime communication. By simulating an unsteady adaptive mesh application on a wide area network, we study the performance of our load balancer under the Globus environment. The number of IPG nodes, the number of processors per node, and the interconnect speeds are parameterized to derive conditions under which the IPG would be suitable for parallel distributed processing of such applications. Experimental results demonstrate that effective solutions are achieved when the nodes of the IPG are connected by a high-speed asynchronous interconnection network.
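Dynamic workload balancing of the kind described can be illustrated with a classic greedy heuristic: take work items in decreasing weight order and always hand the next item to the currently least-loaded processor (Longest Processing Time first). This is a generic sketch, not the paper's latency-tolerant partitioner; the weights and processor count are invented.

```python
import heapq

def lpt_assign(weights, n_procs):
    """LPT heuristic: assign each item (heaviest first) to the currently
    least-loaded processor. Returns (item -> processor map, makespan)."""
    heap = [(0, p) for p in range(n_procs)]   # (current load, processor id)
    heapq.heapify(heap)
    assignment = {}
    for item, w in sorted(enumerate(weights), key=lambda iw: -iw[1]):
        load, p = heapq.heappop(heap)         # least-loaded processor
        assignment[item] = p
        heapq.heappush(heap, (load + w, p))
    return assignment, max(load for load, _ in heap)

# Invented partition weights on 3 processors
weights = [7, 5, 4, 3, 3, 2]
assignment, makespan = lpt_assign(weights, 3)
print(makespan)  # 9  (perfect balance would be 24/3 = 8)
```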
We construct a two-good general equilibrium model of international trade for two small open economies where pollution from production is transmitted across borders. Governments in both countries impose emission taxes non-cooperatively. Within this framework, we examine the effect of changes in the degree of cross-border pollution on Nash emission taxes, emission levels and welfare. We do so under two scenarios: when changes in cross-border pollution do not affect domestic pollution (non-strategic) and when they do (strategic). We also examine the effect of changes in international terms of trade on pollution and welfare when cross-border pollution is non-strategic.
In this study, we formulate a computational reaction model following a chemical kinetic theory approach to predict the binding rate constant for the siRNA-RISC complex formation reaction. The model allowed us to study the potency difference between siRNA molecules with 2-nt 3' overhangs and blunt-ended siRNA molecules in an RNA interference (RNAi) system. The rate constant predicted by this model was fed into a stochastic simulation of the RNAi system (using the Gillespie stochastic simulator) to study the overall potency effect. We observed that stochasticity in the transcription/translation machinery has no observable effect on the RNAi pathway. Sustained gene silencing using siRNAs can be achieved only if there is a way to replenish the dsRNA molecules in the cell. Initial findings show that about 1.5 times more blunt-ended molecules are required to keep the mRNA at the same reduced level compared to the 2-nt overhang siRNAs. However, mRNA levels return to saturation after a longer time when blunt-ended siRNAs are used. We found that the siRNA-RISC complex formation reaction rate was 2 times slower when blunt-ended molecules were used, indicating that the presence of the 2-nt overhangs has a greater effect on the reaction in which the bound RISC complex cleaves the mRNA.
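The Gillespie stochastic simulation algorithm mentioned above can be sketched for a minimal birth-death mRNA model (constant transcription, first-order degradation standing in for RISC-mediated cleavage). This toy model and its rate constants are illustrative assumptions, not the study's actual reaction network.

```python
import random

def gillespie_mrna(k_tx, k_deg, m0, t_end, seed=1):
    """Gillespie SSA for a toy birth-death mRNA model:
       (transcription)   0  --k_tx-->  mRNA
       (degradation)   mRNA --k_deg-->  0   (first-order, e.g. RISC cleavage)
    Returns the trajectory as a list of (time, copy number) pairs."""
    rng = random.Random(seed)
    t, m = 0.0, m0
    trace = [(t, m)]
    while t < t_end:
        a1, a2 = k_tx, k_deg * m          # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)          # exponential waiting time
        if rng.random() * a0 < a1:        # pick a reaction proportionally
            m += 1
        else:
            m -= 1
        trace.append((t, m))
    return trace

trace = gillespie_mrna(k_tx=10.0, k_deg=0.5, m0=0, t_end=50.0)
# Copy number fluctuates around the deterministic steady state k_tx/k_deg = 20
```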
The disk allocation problem addresses how to distribute a file onto several disks to maximize concurrent disk accesses in response to a partial match query. In the past, this problem has been studied for binary as well as p-ary Cartesian product files. In this paper, we propose a disk allocation strategy for non-uniform Cartesian product files using a coding-theoretic approach. Our strictly optimal disk allocation strategies are based on a large and flexible class of maximum distance separable (MDS) codes, namely the redundant residue codes.
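The coding-theoretic flavor of such allocations can be shown with a toy scheme (not the paper's redundant-residue construction): place bucket (a1, …, ak) of a Cartesian product file on disk (a1 + … + ak) mod m. A partial match query that fixes one attribute then spreads its qualifying buckets evenly over the m disks.

```python
from itertools import product

M_DISKS = 4
DOMAINS = [range(4), range(4)]      # two attributes, 4 values each (invented)

def disk_of(bucket):
    """Toy allocation: bucket (a1, ..., ak) goes to disk (a1+...+ak) mod m."""
    return sum(bucket) % M_DISKS

def partial_match_disks(query):
    """query fixes some attributes and leaves others as None (wildcards);
    returns the disk touched by each qualifying bucket."""
    axes = [[q] if q is not None else dom
            for q, dom in zip(query, DOMAINS)]
    return [disk_of(bucket) for bucket in product(*axes)]

# Fixing the first attribute to 2: the 4 qualifying buckets land on
# 4 distinct disks, so they can be read fully in parallel.
print(partial_match_disks((2, None)))  # [2, 3, 0, 1]
```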
The disk allocation problem addresses how to distribute large files among several disks so as to maximize concurrent disk accesses in response to partial match queries. In the past, this problem has been studied for binary as well as p-ary Cartesian product files. We propose a strictly optimal disk allocation strategy for non-uniform Cartesian product files.
We identified the α2, α1, and β1 isoforms of Na+/K+-ATPase in caveolae vesicles of bovine pulmonary smooth muscle plasma membrane. The biochemical and biophysical characteristics of the α2β1 isozyme of Na+/K+-ATPase from caveolae vesicles were studied during solubilization and purification using the detergents 1,2-diheptanoyl-sn-phosphatidylcholine (DHPC), poly(oxyethylene)8-lauryl ether (C12E8), and Triton X-100, and reconstitution with the phospholipid dioleoylphosphatidylcholine (DOPC). DHPC was superior to C12E8, and C12E8 better than Triton X-100, in terms of active enzyme yield and specific activity. Fluorescence studies with the DHPC-purified α2β1 isozyme of Na+/K+-ATPase showed a higher E1Na–E2K transition compared with the C12E8- and Triton X-100-purified enzyme. The rate of Na+ efflux in the DHPC–DOPC-reconstituted isozyme was higher than in the C12E8–DOPC- and Triton X-100–DOPC-reconstituted enzyme. Circular dichroism analysis suggests that the DHPC-purified α2β1 isozyme of Na+/K+-ATPase possesses a more organized secondary structure compared to the C12E8- and Triton X-100-purified isozyme.
Conflict-free memory access is one of the important factors in the overall performance of a multiprocessor system in which the available memory is partitioned into several modules. Even if there is no contention in the processor-memory interconnection path, conflicts may still occur when two or more processors attempt to access a single memory module or a memory location within a module. With the goal of achieving higher memory bandwidth, in this paper we resolve access conflicts at the level of memory modules. In particular, we deal with the problem of evenly mapping a data structure, called the host, into as few distinct memory modules as possible, so as to guarantee that subsets of distinct host nodes, called templates, can be accessed simultaneously in a conflict-free manner. Since trees are among the most frequently used data structures in numerous applications, we propose a simple algebraic function based on node indices for assigning the nodes of a k-ary tree to the memory modules in such a way that each subtree of a given height and arity can be accessed without conflicts. The assignment is direct, load balanced and optimal in terms of the number of modules required. We also investigate conflict-free access to d-dimensional subcubes (Q_d) of n-dimensional hypercubes (Q_n), where Q_n represents a set of items indexed with n-digit addresses and accesses are made to subsets of items differing in any arbitrary d digit positions. With the help of coding theory, we propose a novel approach to solving the subcube access problem. Codes with minimum distance d ≥ 2 play a crucial role in our applications.
We prove that any occurrence of a subcube Q_s ⊂ Q_n, for 0 ≤ s ≤ d − 1, can be accessed without conflicts using \(\left\lceil {\frac{{2^n }}{M}} \right\rceil\) memory modules, by associating Q_n with a linear code C of length n, size M and minimum distance d. By associating the hypercube nodes with perfect or maximum distance separable (MDS) codes, our problem is solved optimally both in terms of the number of memory modules required and the load balance per module. These codes can be easily modified (without node relocation) according to changes in the size of the host or the number of available memory modules.
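A small instance of the code-based assignment can be worked out with the [3,1,3] binary repetition code C = {000, 111} (n = 3, M = 2, d = 3), which needs ⌈2³/2⌉ = 4 modules: nodes in the same coset of C share a module, and since same-coset nodes differ by a codeword and hence in at least d = 3 bit positions, every subcube Q_s with s ≤ 2 touches distinct modules. The code below is an illustrative sketch of this idea, not the paper's implementation.

```python
from itertools import product

# The [3,1,3] binary repetition code: M = 2 codewords, minimum distance 3
C = [(0, 0, 0), (1, 1, 1)]
n = 3

def xor(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

# Group the 2^3 hypercube nodes into cosets of C; one module per coset.
cosets = {}
for x in product((0, 1), repeat=n):
    leader = min(xor(x, c) for c in C)    # canonical coset representative
    cosets.setdefault(leader, []).append(x)

module = {x: i
          for i, (_leader, nodes) in enumerate(sorted(cosets.items()))
          for x in nodes}

# Same-coset nodes differ in >= 3 bit positions, so any Q_2 subcube
# (here: last address bit fixed to 0) maps to four distinct modules.
subcube = [(a, b, 0) for a, b in product((0, 1), repeat=2)]
print([module[x] for x in subcube])  # [0, 2, 3, 1] -- all distinct
```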
Papers by md sajal