Over the last decade, storage systems have experienced a 10-fold increase in the gap between their capacity and bandwidth. This gap is predicted to grow even faster with exponentially growing concurrency levels, with future exascale systems delivering millions... more
    • by 
    •   14  
      Mathematics, Computer Science, Distributed Computing, Distributed Systems
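To make the capacity-versus-bandwidth gap in the entry above concrete, here is a small illustrative calculation; the system sizes are hypothetical and are not taken from the paper.

```python
# Illustrative only: hypothetical capacity and bandwidth figures showing how a
# widening capacity-to-bandwidth gap inflates the time needed to drain or fill
# the whole store once (a rough proxy for checkpoint/restart cost).

def drain_time_hours(capacity_pb: float, bandwidth_tb_per_s: float) -> float:
    """Hours to stream the full capacity once at the given bandwidth."""
    capacity_tb = capacity_pb * 1000.0
    return capacity_tb / bandwidth_tb_per_s / 3600.0

# If capacity grows 100x while bandwidth grows only 10x, drain time grows 10x.
print(drain_time_hours(capacity_pb=1, bandwidth_tb_per_s=1))     # ~0.28 h
print(drain_time_hours(capacity_pb=100, bandwidth_tb_per_s=10))  # ~2.8 h
```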
We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework,... more
    • by 
    •   23  
      Distributed Computing, Grid Computing, Parallel Computing, High Performance Computing Applications development for Atmosphere modeling
Parallel I/O is fast becoming a bottleneck to the research agendas of many users of extreme-scale parallel computers. The principal cause of this is the concurrency explosion of high-end computation, coupled with the complexity of... more
    • by 
    •   12  
      Computer Science, High Performance Computing, Parallel Processing, Scientific Computing
Computed Tomography (CT) reconstruction is a computationally and data-intensive process applied across many fields of scientific endeavor, including medical and materials science, as a noninvasive imaging technique. A typical CT dataset... more
    • by  and +2
    •   11  
      Materials Science, Computed Tomography, Data storage, Scaling up
Comparisons of high-performance computers based on their peak floating point performance are common but seldom useful when comparing performance on real workloads. Factors that influence sustained performance extend beyond a system's... more
    • by 
    •   10  
      Computer Architecture, Distributed Computing, Performance Evaluation, Computer Software
The Information Sciences Institute and Caltech are enabling USJFCOM and the Institute for Defense Analyses to conduct entity-level simulation experiments using hundreds of distributed computer nodes on Linux Clusters as a vehicle for... more
    • by 
    •   7  
      Distributed Computing, Linux Cluster, Database Query, High End Computing
This paper addresses the underlying sources of performance degradation (e.g. latency, overhead, and starvation) and the difficulties of programmer productivity (e.g. explicit locality management and scheduling, performance tuning,... more
    • by 
    •   19  
      Computer Science, Computer Architecture, Productivity, Software Architecture
For parallel applications running on high-end computing systems, which processes of an application get launched on which processing cores is typically determined at application launch time without any information about the application... more
    • by 
    •   44  
      Program Evaluation, Computer Science, Nursing, Pediatrics
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is... more
    • by 
    •   13  
      Scientific Computing, Performance Model, Algorithm Design, Performance Improvement
Today's cryptanalysis of symmetric-key cryptography is encouraging the use of larger key sizes and complex algorithms to achieve an unbreakable state. However, this leads to an increase in computational complexity. This has promoted many... more
    • by 
    •   15  
      Distributed Computing, Computational Complexity, Cryptography, Distributed System
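The entry above motivates distributing brute-force symmetric-key search across many nodes; below is a minimal sketch of the usual decomposition, splitting the key space into contiguous ranges per worker. The `try_key` test is a hypothetical placeholder, not an algorithm from the paper.

```python
# Minimal sketch: partition a key space of size 2**key_bits into contiguous
# ranges so each worker node can search its slice independently.
# `try_key` is a hypothetical stand-in for decrypting a known ciphertext and
# checking the result; it is not any specific cipher.

def key_ranges(key_bits: int, workers: int):
    total = 2 ** key_bits
    step = total // workers
    for w in range(workers):
        start = w * step
        end = total if w == workers - 1 else start + step
        yield w, start, end

def search_range(start: int, end: int, try_key):
    for k in range(start, end):
        if try_key(k):
            return k
    return None

# Example: a toy 20-bit key space split across 8 workers.
for worker, lo, hi in key_ranges(key_bits=20, workers=8):
    print(worker, lo, hi)
```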
In the next section, we describe the minimal set of requirements we defined for ourselves. We then describe the Maya renderer architecture in Section 3. Section 4 describes our implementation in detail with a view on how we achieved the... more
    • by 
    •   18  
      Computer Science, Computer Architecture, Computer Graphics, Animation
In this paper, we propose and present the design and initial development of the Fault awareness Enabled Computing Environment (FENCE) system for high end computing. FENCE is a comprehensive fault management system in the sense that it... more
    • by 
    •   3  
      High End Computing, Runtime Analysis, Fault management
With the advent of Cloud computing, large-scale virtualized compute and data centers are becoming common in the computing industry. These distributed systems leverage commodity server hardware in mass quantity, similar in theory to many... more
    • by 
    •   16  
      Distributed System, Cloud Computing, Cluster Computing, Global Warming
Future high-end computers will offer great performance improvements over today's machines, enabling applications of far greater complexity. However, designers must solve the challenge of exploiting massive parallelism efficiently in the... more
    • by 
    •   13  
      Computer Science, Parallel Computing, High Performance Computing, Earth
The use of High Performance Computing (HPC) in commercial and consumer IT applications is becoming popular. HPC users need the ability to gain rapid and scalable access to high-end computing capabilities. Cloud computing promises to... more
    • by 
    •   33  
      Distributed Computing, Information Technology, Carbon, Energy Policy
Left unchecked, the fundamental drive to increase peak performance using tens of thousands of power hungry components will lead to intolerable operating costs and failure rates. High-performance, power-aware distributed computing reduces... more
    • by 
    •   11  
      High Performance Computing, Web Services, Distributed Processing, Parallel & Distributed Computing
Adaptive mesh refinement (AMR) is a powerful technique that reduces the resources necessary to solve otherwise intractable problems in computational science. The AMR strategy solves the problem on a relatively coarse grid, and... more
    • by 
    •   4  
      Engineering, Performance Monitoring, High End Computing, Adaptive Mesh Refinement
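As the AMR entry above says, the problem is first solved on a relatively coarse grid and refined only where needed. A minimal 1D flag-and-refine sketch follows; the gradient-threshold criterion is a generic illustrative choice, not the specific package or criterion studied in the paper.

```python
import numpy as np

# Minimal 1D adaptive-mesh-refinement sketch: flag coarse cells where the
# solution gradient exceeds a threshold, then bisect only the flagged cells.
# The gradient criterion here is a generic, assumed refinement indicator.

def refine_1d(x: np.ndarray, u: np.ndarray, threshold: float) -> np.ndarray:
    grad = np.abs(np.diff(u) / np.diff(x))
    flagged = grad > threshold
    new_x = [x[0]]
    for i, is_flagged in enumerate(flagged):
        if is_flagged:                      # refine this cell: insert a midpoint
            new_x.append(0.5 * (x[i] + x[i + 1]))
        new_x.append(x[i + 1])
    return np.array(new_x)

coarse = np.linspace(0.0, 1.0, 11)
u = np.tanh(50 * (coarse - 0.5))            # sharp front around x = 0.5
print(refine_1d(coarse, u, threshold=5.0))  # extra points appear only near the front
```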
    • by 
    •   5  
      Computer Science, Data Mining, Data Visualisation, High End Computing
One critical component of future file systems for high-end computing is meta-data management. This work presents ZHT, a zero-hop distributed hash table, which has been tuned for the requirements of HEC systems. ZHT aims to be a building... more
    • by 
    •   6  
      Design, Performance, Cloud Computing, Measurement
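The ZHT entry above relies on the "zero-hop" idea: every node keeps the full membership list, so a key's owner can be computed locally rather than routed through an overlay. A rough sketch under assumed details (SHA-1 hash ring, static membership) that may differ from ZHT's actual implementation:

```python
import hashlib
from bisect import bisect_right

# Rough sketch of a zero-hop lookup: every client/node holds the complete,
# static membership list, so locating a key's owner is a local computation
# (no multi-hop routing). Details are assumed, not taken from the ZHT paper.

def ring_position(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

class ZeroHopTable:
    def __init__(self, members):
        self.ring = sorted((ring_position(m), m) for m in members)
        self.positions = [p for p, _ in self.ring]

    def owner(self, key: str) -> str:
        """Return the member responsible for `key` (first node clockwise)."""
        idx = bisect_right(self.positions, ring_position(key)) % len(self.ring)
        return self.ring[idx][1]

table = ZeroHopTable([f"node-{i}" for i in range(64)])
print(table.owner("/scratch/run42/metadata"))   # resolved in one step, no routing
```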
    • by 
    •   20  
      Engineering, Acoustics, Electrochemistry, Computer Aided Design
    • by 
    •   10  
      Partial Differential Equations, Linear Algebra, Geometry, Gpu programming
    • by 
    •   2  
      Research and Development, High End Computing
Achieving good performance on high-end computing systems is growing ever more challenging due to enormous scale, increasing architectural complexity, and increasing application complexity. To address these challenges in DOE's SciDAC-2... more
    • by 
    •   7  
      Performance Model, User preferences, High End Computing, Parallel Computer
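Analytic performance models like those mentioned above often start from a roofline-style bound; the sketch below is that generic bound, not the specific models developed in the cited project.

```python
# Generic roofline-style estimate (illustrative; not the project's own models):
# attainable performance is limited either by the peak flop rate or by memory
# bandwidth times the kernel's arithmetic intensity (flops per byte moved).

def roofline_gflops(peak_gflops: float, mem_bw_gb_s: float, flops_per_byte: float) -> float:
    return min(peak_gflops, mem_bw_gb_s * flops_per_byte)

# Example: a stream-like kernel at 0.1 flops/byte on a node with 1000 GF/s peak
# and 100 GB/s memory bandwidth is bandwidth-bound at ~10 GF/s, i.e. 1% of peak.
print(roofline_gflops(1000.0, 100.0, 0.1))
```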
Parallel I/O is fast becoming a bottleneck to the research agendas of many users of extreme-scale parallel computers. The principal cause of this is the concurrency explosion of high-end computation, coupled with the complexity of... more
    • by 
    •   11  
      High Performance Computing, Parallel Processing, Scientific Computing, Middleware
    • by 
    •   16  
      Bioinformatics, Distributed Computing, Scheduling, Workflow
Failure management is crucial for high performance computing systems, especially when the complexity of applications and underlying infrastructure has grown sharply in recent years. In this paper, we present the design, implementation and... more
    • by 
    •   13  
      Computer Science, Distributed Computing, High Performance Computing, Management Information Systems
AQUAGRID is the subsurface hydrology computational service of the Sardinian GRIDA3 infrastructure, designed to deliver complex environmental applications via a user-friendly Web portal. The service aims to provide water professionals... more
    • by  and +1
    •   17  
      Decision Making, Middleware, Contaminant Transport, Collaborative Problem Solving
This white paper addresses three separate questions in the HECRTF call for white papers: (2d) performance metrics that quantify benefits; (5) practical performance measures for system procurement that correlate well with realized... more
    • by 
    • High End Computing
This paper proposes the study of a new computation model that attempts to address the underlying sources of performance degradation (e.g. latency, overhead, and starvation) and the difficulties of programmer productivity (e.g. explicit... more
    • by 
    •   15  
      Computer Science, High Performance Computing, Computational Modeling, Parallel Programming
As the power consumption of a server system becomes a mainstream concern in enterprise environments, understanding the system's power behavior at varying utilization levels provides a key to selecting appropriate energy-efficiency... more
    • by 
    •   10  
      Nonlinear Programming, Power Management, Power Consumption, Shape
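A common first-order model of the utilization-to-power relationship discussed above is idle power plus a load-proportional term; the sketch below uses that linear assumption with made-up constants, whereas measured curves (and the paper's own model) may well be nonlinear.

```python
# First-order server power model often used in such studies (illustrative
# assumption):  P(u) = P_idle + (P_busy - P_idle) * u, with utilization u in [0, 1].
# Real servers are frequently sub- or super-linear in between, which is exactly
# why measured utilization-vs-power curves are worth characterizing.

def power_watts(u: float, p_idle: float = 150.0, p_busy: float = 300.0) -> float:
    assert 0.0 <= u <= 1.0
    return p_idle + (p_busy - p_idle) * u

for u in (0.0, 0.25, 0.5, 1.0):
    print(u, power_watts(u))   # 150, 187.5, 225, 300 W with the assumed constants
```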
Through the Coordinated Observation and Prediction of the Earth System (COPES), the WCRP is embarking on an ambitious, decade-long observing and modeling activity that is intended to improve understanding of the mechanisms that determine the mean... more
    • by 
    •   10  
      Data Management, High End Computing, Model development, Computer Program
It is known that General Circulation Models (GCMs) have insufficient resolution to accurately simulate hurricane near-eye structure and intensity. The increasing capabilities of high-end computers have changed this. The... more
    • by 
    •   5  
      Multidisciplinary, Hurricane Katrina, High End Computing, Finite Volume
In this work we present a scientific application that has been given a Hadoop MapReduce implementation. We also discuss other scientific fields of supercomputing that could benefit from a MapReduce implementation. We recognize in this... more
    • by 
    •   4  
      Data Mining, Data Visualisation, High End Computing, Large Dataset Analysis
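To make the MapReduce framing above concrete, here is a toy map/shuffle/reduce decomposition in plain Python standing in for Hadoop; the record format and keys are illustrative, since the excerpt does not specify the actual application.

```python
from collections import defaultdict

# Toy MapReduce-style decomposition: map each record to (key, value) pairs,
# shuffle by key, then reduce each group. Plain Python stands in for Hadoop;
# the binning-by-value "analysis" is purely illustrative.

def map_phase(records):
    for r in records:
        yield round(r, 1), r          # key = value binned to one decimal place

def shuffle(pairs):
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups):
    return {k: (len(vs), sum(vs) / len(vs)) for k, vs in groups.items()}

data = [0.12, 0.14, 0.53, 0.55, 0.91]
print(reduce_phase(shuffle(map_phase(data))))   # per-bin counts and means
```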
The rapid growth of InfiniBand, 10 Gigabit Ethernet/iWARP and IB WAN extensions is increasingly gaining momentum for designing high end computing clusters and data-centers. For typical applications such as data staging, content... more
    • by 
    •   11  
      Protocols, Local Area Networks, Data Center, High End Computing
The increasing capability of high-end computers allows numerical simulations with horizontal resolutions high enough to resolve cloud systems in a global model. In this paper, initial results from the global Nonhydrostatic ICosahedral... more
    • by 
    •   9  
      Atmospheric Modeling, Multidisciplinary, Indian Ocean, Numerical Simulation
The growing gap between sustained and peak performance for scientific applications is a well-known problem in high end computing. The recent development of parallel vector systems offers the potential to bridge this gap for many... more
    • by 
    •   3  
      Scientific Computing, High End Computing, High performance
The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing... more
    • by 
    •   20  
      Materials Science, Plasma Physics, Atmospheric Modeling, Earth
AQUAGRID is the subsurface hydrology computational service of the Sardinian GRIDA3 infrastructure, designed to deliver complex environmental applications via a user-friendly Web portal. The service aims to provide water professionals... more
    • by 
    •   17  
      Decision Making, Middleware, Contaminant Transport, Collaborative Problem Solving
This document proposes a multi-site strategy for creating a new class of computing capability for the U.S. by undertaking the research and development necessary to build supercomputers optimized for science in partnership with the... more
    • by 
    •   8  
      Computer Architecture, Life Sciences, Scientific Computing, National Security
In order to take full advantage of high-end computing platforms, scientific applications often require modifications to source codes, and to their build systems that generate executable files. The ever-increasing emphasis on productivity... more
    • by 
    •   5  
      Life Cycle, High End Computing, Source Code, Software Tool
Linux clusters have become very popular for scientific computing at research institutions world-wide, because they can be easily deployed at a fairly low cost. However, the most pressing issues of today's cluster solutions are... more
    • by 
    •   5  
      Scientific Computing, Linux Cluster, High End Computing, Beowulf Cluster
    • by 
    •   19  
      Mathematics, Computer Science, Grid Computing, High Performance Computing
Maya is the new 3D software package recently released by Alias|Wavefront for creating state-of-the-art character animation and visual effects. Built on a next-generation advanced architecture, Maya delivers high speed interaction and high... more
    • by 
    •   18  
      Computer Science, Computer Architecture, Computer Graphics, Animation
The OptIPuter is a radical distributed visualization, teleimmersion, data mining and computing architecture. Observing that the exponential growth rates in bandwidth and storage are now much higher than Moore's Law, this... more
    • by 
    •   9  
      Engineering, Computer Architecture, Data Mining, New World
    • by 
    •   6  
      Design, Performance, Cloud Computing, Measurement
In 2003, the High End Computing Revitalization Task Force designated file systems and I/O as an area in need of national focus. The purpose of the High End Computing Interagency Working Group (HECIWG) is to coordinate government spending... more
    • by 
    •   13  
      Management, Computer Science, Operating Systems, Design