The biostratigraphical and palaeoenvironmental significance of diagnostic Late Palaeocene (Thanetian) dinoflagellate cyst assemblages recovered from the Lakadong Sandstone exposed around Cherrapunji, Khasi Hills (Meghalaya), is discussed. Based on the occurrence of cosmopolitan markers, the dinoflagellate cyst assemblages are assigned to the combined Apectodinium hyperacanthum and A. augustum (Ahy–Aau) Biozones. Integration of the dinoflagellate cyst evidence with larger foraminiferal data from the underlying Lakadong Limestone suggests close correspondence of the Apectodinium acme with the Ranikothalia nuttalli–Miscellanea miscella Assemblage (SBZ 5–SBZ 6). An estuarine and coastal-swamp depositional setting is favoured for the coal-bearing Lakadong Sandstone, which is considered the lateral facies equivalent of the upper Lakadong Limestone, developed during a sea-level highstand. The Therria limestone/sandstone–Lakadong limestone/sandstone succession of the Khasi Hills is interpreted ...
In this paper, a surface roughness (SR) study on the powder-mixed electrical discharge machining of EN8 steel is carried out. Response surface methodology is used to plan and analyse the experiments. Average current, pulse-on time, electrode diameter, and the concentration of chromium powder added to the dielectric fluid were chosen as process parameters to study the powder-mixed electrical discharge machining performance in terms of SR. Experiments were performed on a newly designed experimental setup developed in the laboratory. An empirical model was developed for SR, and the recommended model was verified by conducting confirmation experiments.
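The response-surface approach described above can be sketched as an ordinary-least-squares fit of a second-order polynomial model. This is a minimal illustration, not the paper's model: the factor names (average current I, pulse-on time T) follow the abstract, but the data, coefficient values, and the restriction to two factors are assumptions for the example.

```python
import numpy as np

def design_matrix(I, T):
    """Quadratic RSM design matrix: columns 1, I, T, I*T, I^2, T^2."""
    return np.column_stack([np.ones_like(I), I, T, I * T, I**2, T**2])

# Synthetic "experiments": SR generated from a known quadratic plus noise
# (illustrative values only, not measurements from the paper).
rng = np.random.default_rng(0)
I = rng.uniform(2, 10, 30)           # average current (A), assumed range
T = rng.uniform(20, 100, 30)         # pulse-on time (us), assumed range
true_beta = np.array([1.0, 0.4, 0.02, -0.001, -0.01, 0.0001])
SR = design_matrix(I, T) @ true_beta + rng.normal(0, 0.05, 30)

# Fit by least squares; beta_hat plays the role of the empirical model.
beta_hat, *_ = np.linalg.lstsq(design_matrix(I, T), SR, rcond=None)

def predict(i, t):
    """Predicted surface roughness at a new setting (i, t)."""
    return float(design_matrix(np.atleast_1d(i), np.atleast_1d(t)) @ beta_hat)
```

The "confirmation experiments" step of the paper corresponds to evaluating `predict` at held-out settings and comparing against fresh measurements.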
We consider radial and Heisenberg-homogeneous norms on the Heisenberg groups given by N_{α,A}((z, t)) = (|z|^α + A|t|^{α/2})^{1/α}, for α ≥ 2 and A > 0. This natural family includes the canonical Cygan–Korányi norm, corresponding to α = 4. We study the lattice point counting problem on the Heisenberg groups, namely we establish an error estimate for the number of points of the lattice of integral points in a ball of large radius R. The exponent we establish for the error in the case α = 2 is the best possible, in all dimensions. 1. Introduction, notation, and statement of results. 1.1. Euclidean and non-Euclidean lattice point counting problem. The classical lattice point counting problem in Euclidean space considers a fixed compact convex set B ⊂ R^n with 0 ∈ B an interior point, and aims to establish an asymptotic of the form (1.1) |Z^n ∩ tB| = t^n vol(B) + O_θ(t^{n−θ}) = vol(tB) + O_κ((vol(tB))^κ)
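The Euclidean asymptotic (1.1) is easy to observe numerically. The sketch below counts integer points in a disc of radius t in R^2 (the Gauss circle problem) and shows the relative error against vol(tB) = πt² shrinking as t grows; it illustrates only the Euclidean baseline, not the Heisenberg-group result of the paper.

```python
import math

def lattice_points_in_disc(t):
    """Count integer points (x, y) with x^2 + y^2 <= t^2, for integer t."""
    count = 0
    for x in range(-t, t + 1):
        # For fixed x, y ranges over |y| <= floor(sqrt(t^2 - x^2)).
        ymax = math.isqrt(t * t - x * x)
        count += 2 * ymax + 1
    return count

def relative_error(t):
    """|N(t) - pi t^2| / (pi t^2): the error term of (1.1), normalized."""
    vol = math.pi * t * t
    return abs(lattice_points_in_disc(t) - vol) / vol
```

For example, `lattice_points_in_disc(2)` returns 13, and `relative_error(100)` is far smaller than `relative_error(10)`, consistent with an error of lower order than the volume.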
In this note, we announce new regularity results for some locally integrable distributional solutions to Poisson's equation. This includes, for example, the standard solutions obtained by convolution with the fundamental solution. In particular, our results show that there is no qualitative difference in the regularity of these solutions in the plane and in higher dimensions.
Proceedings of the 2nd ACM SIGPLAN International Workshop on Libraries, Languages, and Compilers for Array Programming, 2015
We present a toolkit called Velociraptor that can be used by compiler writers to quickly build compilers and other tools for array-based languages. Velociraptor operates on its own unique intermediate representation (IR) designed to support a variety of array-based languages. The toolkit also provides some novel analyses and transformations, such as region detection and specialization, as well as a dynamic backend with CPU and GPU code generation. We discuss the components of the toolkit and present case studies illustrating its use.
2014 22nd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, 2014
OpenCL is a vendor-neutral and portable interface for programming parallel compute devices such as GPUs. Tuning OpenCL implementations of important library functions such as dense general matrix multiply (GEMM) for a particular device is a difficult problem. Further, OpenCL kernels tuned for one architecture perform poorly on other architectures. We present a solution to the challenge of writing a portable, high-performance GEMM implementation. We designed and implemented RaijinCL, an OpenCL auto-tuning library for real and complex variants of GEMM that automatically generates tuned kernels for a given architecture. We comprehensively tested our library on a wide variety of architectures and show that it is competitive with vendor libraries on all tested architectures. We also implemented an auto-tuner for hybrid CPU+GPU GEMM that takes advantage of both the CPU and the GPU on single-chip CPU+GPU platforms such as Intel Ivy Bridge. We show that our solution can outperform CPU-only, GPU-only, and simple CPU+GPU tuning strategies. In addition to performance results, we provide analysis of architectural limitations as well as OpenCL compiler and runtime issues discovered on various systems, along with guidance on avoiding some of these issues.
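The empirical auto-tuning idea behind a library like RaijinCL can be sketched as: run the same kernel under several candidate tuning parameters on the current machine and keep the fastest. The sketch below uses a plain NumPy blocked matrix multiply rather than an OpenCL kernel, and the candidate tile sizes are illustrative; RaijinCL's actual search space and kernel generation are not reproduced here.

```python
import time
import numpy as np

def blocked_gemm(A, B, tile):
    """Square GEMM computed tile-by-tile; `tile` is the blocking factor."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            for k in range(0, n, tile):
                C[i:i+tile, j:j+tile] += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
    return C

def autotune(n=128, candidates=(16, 32, 64)):
    """Time each candidate tile size on random inputs; return the fastest."""
    A = np.random.rand(n, n)
    B = np.random.rand(n, n)
    timings = {}
    for tile in candidates:
        t0 = time.perf_counter()
        blocked_gemm(A, B, tile)
        timings[tile] = time.perf_counter() - t0
    return min(timings, key=timings.get)
```

A real auto-tuner would also cache the winning configuration per device, as the abstract's portability argument implies: the best tile size on one architecture is generally not the best on another.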
Since biblical times, the labor process has been recognized as one of the most painful human experiences. Early treatments varied widely, according to the cultural and religious practices of the time. In the Middle Ages, treatments such as amulets, magic girdles, and readings from the Christian liturgy were considered appropriate. More invasive pharmacologic treatments, such as soporific sponges (a mixture of biologically active plants, inhaled or ingested), were sufficiently potent to cause unconsciousness. Of interest, bloodletting was used until the middle of the nineteenth century to cause swooning and thus pain relief [1].
Visual Communications and Image Processing 2005, 2005
The H.264 standard [1] describes the intra prediction process, which exploits the spatial correlation between adjacent blocks for video compression. Intra prediction for luminance samples can be performed using blocks of size 4x4 or 16x16 pixels, while intra prediction for chrominance samples is performed on 8x8 blocks. 4x4 luminance blocks can be predicted using nine intra prediction modes ...
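One of the nine 4x4 luma modes can be sketched compactly: DC prediction (mode 2) fills the block with the rounded mean of the reconstructed neighbour samples above and to the left. This is a simplified illustration that assumes both neighbour rows are available; the standard's fallbacks for unavailable neighbours are omitted.

```python
import numpy as np

def intra_4x4_dc(top, left):
    """H.264-style 4x4 DC intra prediction (simplified).

    top, left: the 4 reconstructed samples above and to the left of the block.
    Returns a 4x4 block filled with their rounded mean.
    """
    # (sum of 8 samples + 4) >> 3 gives the rounded average, as integer math.
    dc = (int(np.sum(top)) + int(np.sum(left)) + 4) >> 3
    return np.full((4, 4), dc, dtype=np.uint8)
```

The encoder would evaluate all nine modes like this one, pick the mode minimizing the residual cost, and signal the chosen mode to the decoder.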
Proceedings of the ACM international conference companion on Object oriented programming systems languages and applications companion, 2011
MATLAB is a popular language for scientific computation, used by millions of students, scientists and engineers worldwide. The MCLAB project aims to provide an open source compiler and virtual machine infrastructure to enable programming language, compiler and software engineering researchers to work in this important area.
2008 37th International Conference on Parallel Processing, 2008
We present an auction-based algorithm for computing market equilibrium prices in a production model, in which producers have a single linear production constraint, and consumers have linear utility functions. We provide algorithms for both the Fisher and Arrow-Debreu versions of the problem.
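For the Fisher case with linear utilities, equilibrium prices can be illustrated with a simple iterative scheme. The sketch below uses proportional-response dynamics, a standard stand-in known to converge for linear utilities; it is not the paper's auction-based algorithm, and it covers only the exchange (no-production) Fisher setting with one unit of each good.

```python
import numpy as np

def fisher_equilibrium(U, budgets, iters=2000):
    """Approximate linear Fisher-market equilibrium prices.

    U[i, j]: buyer i's utility per unit of good j (one unit of each good).
    budgets[i]: buyer i's money. Returns the price vector p.
    """
    n, m = U.shape
    bids = np.outer(budgets, np.ones(m) / m)   # start: spread budget evenly
    for _ in range(iters):
        prices = bids.sum(axis=0)              # p_j = total money bid on good j
        alloc = bids / prices                  # x_ij = b_ij / p_j
        utils = (U * alloc).sum(axis=1)        # u_i(x_i)
        # Each buyer re-bids its budget in proportion to the utility
        # contributed by each good.
        bids = budgets[:, None] * (U * alloc) / utils[:, None]
    return bids.sum(axis=0)
```

At equilibrium, prices are proportional to the money chasing each good; for example, two identical-taste buyers with budgets 3 and 1 over two symmetric goods yield prices (2, 2), splitting the total money of 4 evenly.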
Recent comparative trials of 3‐hydroxy‐3‐methylglutaryl coenzyme A reductase inhibitors (statins) suggest that lower is better and that reducing low‐density lipoprotein cholesterol (LDL‐C) levels to below 100 mg/dL can provide additional clinical benefit. Non‐high‐density lipoprotein cholesterol (non‐HDL‐C) contains more atherogenic cholesterol than LDL‐C and is considered a more accurate measurement of the total amount of atherogenic particles in the circulation. Therefore, the principle that “lower is better” may also apply to lowering levels of non‐HDL‐C. In persons with high triglycerides (200–499 mg/dL), LDL‐C remains the primary target of therapy, but non‐HDL‐C is an important secondary therapeutic target. Non‐HDL‐C is strongly correlated with small dense LDL as well as apolipoprotein B, an established predictor of cardiovascular disease risk. Current evidence indicates that statins not only rapidly and dramatically reduce LDL‐C, but also have a similar effect on non‐HDL‐C, an ...
With rapid advancements in computer hardware, it is now possible to perform large simulations of granular flows using the Discrete Element Method (DEM). As a result, solids are increasingly treated in a discrete Lagrangian fashion in the gas–solids flow community. In this paper, the open-source MFIX-DEM software is described that can be used for simulating the gas–solids flow using an ...
In this paper we consider the steady-state behaviour of a heterogeneous queueing system with an instantaneously available special service facility and probabilistically available additional space, both of which are queue-length dependent. The system operates at two different levels, and the arrivals and departures at these two levels occur at different rates. Steady-state probabilities at both levels are calculated explicitly, and the average number of customers in the system is also obtained. By associating various costs, we discuss a criterion for the decision points at which hiring additional space becomes profitable, and for the size of the additional space to be hired.
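The two-level structure can be illustrated with a birth–death sketch: a queue whose arrival and service rates switch at a threshold K, with steady-state probabilities obtained from the usual product of birth/death ratios and then normalized. The rates, threshold, and truncation point below are illustrative and simpler than the paper's model (which also involves the special service facility and probabilistic space availability).

```python
def steady_state(lam1, mu1, lam2, mu2, K=5, N=200):
    """Steady-state probabilities of a two-level birth-death queue.

    Level-1 rates (lam1, mu1) apply up to threshold K, level-2 rates above.
    State space truncated at N. Returns (probabilities, mean customers).
    """
    p = [1.0]                                 # unnormalized, p[0] = 1
    for n in range(1, N + 1):
        lam, mu = (lam1, mu1) if n <= K else (lam2, mu2)
        p.append(p[-1] * lam / mu)            # detailed balance: p_n = p_{n-1} * lam/mu
    total = sum(p)
    p = [x / total for x in p]                # normalize to a distribution
    mean_customers = sum(n * pn for n, pn in enumerate(p))
    return p, mean_customers
```

With identical rates at both levels this collapses to an ordinary M/M/1 queue, so with lam = 0.5 and mu = 1 the mean number of customers is rho/(1 − rho) = 1, a convenient sanity check.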
Cardiac function deteriorates with aging or disease. Short term, changes in heart function may be beneficial, but long term the alterations are often detrimental. At a molecular level, functional adaptations involve quantitative and qualitative changes in gene expression. Analysis of all the RNA transcripts present in a cell population (the transcriptome) offers unprecedented opportunities to map these transitions. Microarrays (chips), capable of evaluating thousands of transcripts in one assay, are ideal for transcriptome analyses. Gene expression profiling provides information about the dynamics of total genome expression in response to environmental changes and may point to candidate genes responsible for the cascade of events that result in disease or are a consequence of aging. The aim of this review is to describe how comparisons of cellular transcriptomes by cDNA array based techniques provide information about the dynamics of total gene expression, and how the results ...
Decentralized multi-item auctions offer great opportunities for integrating fragmented online auction markets into larger markets with more efficient outcomes. This paper extends the theory of multi-item ascending auctions of substitutes by considering any finite positive bid increment and allowing the bidders to bid asynchronously instead of in a round-robin fashion. We consider a setup where the bidders' utilities over multiple items are additive, and we bound the maximum inefficiency of the allocation when the bidders follow a simple greedy strategy. We also obtain the limits within which the prices of individual items can vary from one outcome to another. For the special case of single-unit bidder demand, we also bound the maximum surplus a bidder can extract by unilaterally switching to some other strategy. The paper suggests an upper bound on the minimum bid increment necessary for competitive price discovery and truthful bidding in a practical online implementation.
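The greedy strategy for the single-unit-demand case can be sketched as: while a bidder holds no item, it outbids on whichever item offers the largest surplus (value minus the price it would have to pay, which includes the increment if the item is already held). This is a sequential simplification for illustration; the paper's asynchronous bidding model and its bounds are not reproduced, and the bidder values in the usage example are made up.

```python
def greedy_auction(values, eps=1.0):
    """Ascending auction with bid increment eps, single-unit demand.

    values[i][j]: bidder i's value for item j.
    Returns (winner, price) lists indexed by item; uncontested items
    sell at the starting price 0 in this sketch.
    """
    n_bidders, n_items = len(values), len(values[0])
    price = [0.0] * n_items
    winner = [None] * n_items
    active = list(range(n_bidders))           # bidders currently holding nothing
    while active:
        i = active.pop(0)
        best, best_surplus = None, 0.0
        for j in range(n_items):
            # Outbidding a current holder costs price + eps; a free item costs price.
            cost = price[j] + eps if winner[j] is not None else price[j]
            surplus = values[i][j] - cost
            if surplus > best_surplus:
                best, best_surplus = j, surplus
        if best is None:
            continue                           # no positive surplus: drop out
        if winner[best] is not None:
            price[best] += eps
            active.append(winner[best])        # displaced bidder re-enters
        winner[best] = i
    return winner, price
```

Because each displacement raises a price by eps and values are finite, the loop terminates; the final allocation's inefficiency is governed by the increment, in line with the bounds the paper establishes.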
Papers by Rahul Garg