For parallel applications running on high-end computing systems, which processes of an application get launched on which processing cores is typically determined at application launch time without any information about the application... more
    • Topics: Program Evaluation, Computer Science, Nursing, Pediatrics
The use of High Performance Computing (HPC) in commercial and consumer IT applications is becoming popular. HPC users need the ability to gain rapid and scalable access to high-end computing capabilities. Cloud computing promises to... more
    • Topics: Distributed Computing, Information Technology, Carbon, Energy Policy
In this paper, we present PyPANCG, a Python library-interface that implements both the conjugate gradient method and the preconditioned conjugate gradient method for solving nonlinear systems. We describe the use of the library and its... more
    • Topics: Distributed Computing, Design process, High End Computing, High performance
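The conjugate gradient method this entry implements can be sketched in Python. This is a minimal illustration only, assuming a Fletcher-Reeves nonlinear variant with Armijo backtracking; the function names below are invented for illustration and are not PyPANCG's actual API:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def axpy(alpha, x, y):
    # Returns alpha*x + y, elementwise.
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Fletcher-Reeves nonlinear conjugate gradient with Armijo
    backtracking line search. Illustrative sketch only; the real
    PyPANCG interface may differ."""
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]                  # start with steepest descent
    for _ in range(max_iter):
        if math.sqrt(dot(g, g)) < tol:
            break
        if dot(g, d) >= 0:                 # safeguard: restart if not a descent direction
            d = [-gi for gi in g]
        alpha, fx, slope = 1.0, f(x), dot(g, d)
        while f(axpy(alpha, d, x)) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5                   # backtrack until sufficient decrease
            if alpha < 1e-14:
                break
        x = axpy(alpha, d, x)
        g_new = grad(x)
        beta = dot(g_new, g_new) / dot(g, g)   # Fletcher-Reeves coefficient
        d = [beta * di - gi for di, gi in zip(d, g_new)]
        g = g_new
    return x
```

For example, minimizing f(x) = (x0 - 1)^2 + 2(x1 + 0.5)^2 from the origin converges to (1, -0.5).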
The rapid growth of InfiniBand, 10 Gigabit Ethernet/iWARP and IB WAN extensions is increasingly gaining momentum for designing high end computing clusters and data-centers. For typical applications such as data staging, content... more
    • Topics: Protocols, Local Area Networks, Data Center, High End Computing
The use of High Performance Computing (HPC) in commercial and consumer IT applications is becoming popular. These applications need rapid, scalable access to high-end computing capabilities. Cloud computing promises to deliver... more
    • Topics: Energy Consumption, Environmental Sustainability, Cloud Computing, Optimal mine design and scheduling
Software optimization for multicore architectures is one of the most critical challenges in today's high-end computing. In this paper we focus on a well-known multicore platform, namely the Cell BE processor, and we address the problem of... more
    • Topics: High End Computing, Communication Channels
This paper addresses the underlying sources of performance degradation (e.g. latency, overhead, and starvation) and the difficulties of programmer productivity (e.g. explicit locality management and scheduling, performance tuning,... more
    • Topics: Computer Architecture, Productivity, Software Architecture, Dynamic programming
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is... more
    • Topics: Scientific Computing, Performance Model, Algorithm Design, Performance Improvement
    • Topics: Research and Development, High End Computing
In this work we present a scientific application that has been given a Hadoop MapReduce implementation. We also discuss other scientific fields of supercomputing that could benefit from a MapReduce implementation. We recognize in this... more
    • Topics: Data Mining, Data Visualisation, High End Computing, Large Dataset Analysis
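The MapReduce programming model this entry applies can be illustrated with a generic pure-Python map/shuffle/reduce pipeline. This is a sketch of the model itself, not the paper's Hadoop code; the example task (histogramming measurements into integer buckets) is assumed for illustration:

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the mapper to each record, yielding (key, value) pairs."""
    for rec in records:
        yield from mapper(rec)

def shuffle(pairs):
    """Group values by key, as Hadoop does between map and reduce."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups, reducer):
    """Apply the reducer to each key's grouped values."""
    return {k: reducer(k, vs) for k, vs in groups.items()}

# Example: bin measurements into integer histogram buckets.
mapper = lambda x: [(int(x), 1)]
reducer = lambda k, vs: sum(vs)
data = [0.1, 0.7, 1.2, 1.9, 2.5]
hist = reduce_phase(shuffle(map_phase(data, mapper)), reducer)
```

In a real Hadoop job the shuffle is performed by the framework across nodes; here it is a local dictionary for clarity.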
7 MW [1]. There is a critical need for new hardware technologies as well as new architectural approaches to achieve these performance levels at acceptable power. Scaling CMOS to higher performance for high-end computing requires limiting... more
    • Topics: Technology development, Architectural Design, Superconductors, Computational Efficiency
Mesh generation is a critical component for many (bio-)engineering applications. However, parallel mesh generation codes, which are essential for these applications to take full advantage of high-end computing platforms, belong... more
    • Topics: Mesh generation, High End Computing, Hybrid Algorithm, Load Balance
Adaptive mesh refinement (AMR) is a powerful technique that reduces the resources necessary to solve otherwise intractable problems in computational science. The AMR strategy solves the problem on a relatively coarse grid, and... more
    • Topics: Engineering, Performance Monitoring, High End Computing, Adaptive Mesh Refinement
K42 is an open-source scalable research operating system well suited to support systems research. The primary goals of K42's design that support such research include flexibility to allow a multitude of policies and implementations to be... more
    • Topics: Operating Systems, Resource Allocation, System Design, Operating System
Computed Tomography (CT) reconstruction is a computationally and data-intensive process applied across many fields of scientific endeavor, including medical and materials science, as a noninvasive imaging technique. A typical CT dataset... more
    • Topics: Materials Science, Computed Tomography, Data storage, Scaling up
With the latest high-end computing nodes combining shared-memory multiprocessing with hardware multithreading, new scheduling policies are necessary for workloads consisting of multithreaded applications. The use of hybrid multiprocessors... more
    • Topics: Shared memory, Performance Improvement, High End Computing, Parallel
Many scientific programs exchange large quantities of double-precision data between processing nodes and with mass storage devices. Data compression can reduce the number of bytes that need to be transferred and stored. However, data... more
    • Topics: Distributed Computing, Data Compression, Computer Hardware, Computer Software
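One common family of techniques for compressing double-precision streams, shown here only as an illustration of the problem domain and not as the paper's actual method, XORs each value's IEEE-754 bit pattern with the previous one; slowly varying data then yields deltas with long runs of leading zero bits that are cheap to encode:

```python
import struct

def xor_deltas(values):
    """XOR consecutive IEEE-754 double bit patterns. Smooth data
    gives deltas with many leading zero bits (easy to compress)."""
    bits = [struct.unpack('<Q', struct.pack('<d', v))[0] for v in values]
    prev = 0
    out = []
    for b in bits:
        out.append(prev ^ b)   # delta against the previous value's bits
        prev = b
    return out

def leading_zero_bits(x):
    """Count leading zeros in a 64-bit word."""
    return 64 if x == 0 else 64 - x.bit_length()
```

Identical consecutive values XOR to zero, and nearby values share their sign, exponent, and high-order mantissa bits, so only the low-order tail of each delta needs to be stored.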
As High-End Computing machines continue to grow in size, issues such as fault tolerance and reliability limit application scalability. Current techniques to ensure progress across faults, like checkpoint-restart, are unsuitable at these... more
    • Topics: Computer Network, High End Computing, Message Passing, Fault Tolerant
Linux clusters have become very popular for scientific computing at research institutions world-wide, because they can be easily deployed at a fairly low cost. However, the most pressing issues of today's cluster solutions are... more
    • Topics: Scientific Computing, Linux Cluster, High End Computing, Beowulf Cluster
Comparisons of high-performance computers based on their peak floating point performance are common but seldom useful when comparing performance on real workloads. Factors that influence sustained performance extend beyond a system's... more
    • Topics: Computer Architecture, Distributed Computing, Performance Evaluation, Computer Software
With the advent of Cloud computing, large-scale virtualized compute and data centers are becoming common in the computing industry. These distributed systems leverage commodity server hardware in mass quantity, similar in theory to many... more
    • Topics: Distributed System, Cloud Computing, Cluster Computing, Global Warming
The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing... more
    • Topics: Materials Science, Plasma Physics, Atmospheric Modeling, Earth
Achieving good performance on high-end computing systems is growing ever more challenging due to enormous scale, increasing architectural complexity, and increasing application complexity. To address these challenges in DOE's SciDAC-2... more
    • Topics: Performance Model, User preferences, High End Computing, Parallel Computer
The increasing capability of high-end computers allows numerical simulations with horizontal resolutions high enough to resolve cloud systems in a global model. In this paper, initial results from the global Nonhydrostatic ICosahedral... more
    • Topics: Atmospheric Modeling, Multidisciplinary, Indian Ocean, Numerical Simulation
We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework,... more
    • Topics: Distributed Computing, Grid Computing, Parallel Computing, High Performance Computing Applications development for Atmosphere modeling
Performance and power are critical design constraints in today's high-end computing systems. Reducing power consumption without impacting system performance is a challenge for the HPC community. We present a runtime system (CPU MISER) and... more
    • Topics: Power Management, Parallel Processing, Performance Model, Power Consumption
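The kind of runtime frequency scaling this entry describes can be illustrated with a deliberately crude DVFS heuristic: pick the lowest frequency whose relative speed still covers the measured compute demand. This is an assumption-laden sketch of the general idea, not CPU MISER's actual performance model:

```python
def pick_frequency(freqs, busy_fraction):
    """Choose the lowest available CPU frequency whose speed,
    relative to the maximum, still covers the measured compute
    demand (busy_fraction in [0, 1]). Illustrative heuristic only;
    CPU MISER's real model is more sophisticated."""
    fmax = max(freqs)
    for f in sorted(freqs):
        if f / fmax >= busy_fraction:
            return f          # slowest frequency that should not slow the workload
    return fmax
```

For example, with available steps of 1.0, 1.5, 2.0, and 2.5 GHz, a workload that keeps the CPU 55% busy at full speed can run at 1.5 GHz, since 1.5/2.5 = 0.6 of peak throughput still exceeds the demand.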
In this paper, we propose and present the design and initial development of the Fault awareness Enabled Computing Environment (FENCE) system for high end computing. FENCE is a comprehensive fault management system in the sense that it... more
    • Topics: High End Computing, Runtime Analysis, Fault management
Mobile devices, which increasingly pervade our everyday lives, have created a new paradigm in which they interconnect, interact and collaborate with each other. This network can be used for flexible and secure coordinated sharing.... more
    • Topics: Grid Computing, Everyday Life, Mobile Ad Hoc Network, Mobile Device
We report on some of the interactions between two SciDAC projects: The National Computational Infrastructure for Lattice Gauge Theory (USQCD), and the Performance Engineering Research Institute (PERI). Many modern scientific programs... more
    • Topics: Engineering, Software Engineering, Lattice gauge theory, Physical sciences
Linux currently plays an important role in high-end computing systems, but recent work has shown that Linux-related processing costs and variability in network processing times can limit the scalability of HPC applications. Measuring and... more
    • Topics: Operating System, Data Collection, Performance Monitoring, High End Computing
A grid consists of high-end computational, storage, and network resources that, while known a priori, are dynamic with respect to activity and availability. Efficient scheduling of requests to use grid resources must adapt to this dynamic... more
    • Topics: Virtual Organization, High End Computing, Parallel, Dynamic Environment
AQUAGRID is the subsurface hydrology computational service of the Sardinian GRIDA3 infrastructure, designed to deliver complex environmental applications via a user-friendly Web portal. The service aims to provide to water professionals... more
    • Topics: Decision Making, Middleware, Contaminant Transport, Collaborative Problem Solving
The growing gap between sustained and peak performance for scientific applications is a well-known problem in high end computing. The recent development of parallel vector systems offers the potential to bridge this gap for many... more
    • Topics: Scientific Computing, High End Computing, High performance
Failure management is crucial for high performance computing systems, especially when the complexity of applications and underlying infrastructure has grown sharply in recent years. In this paper, we present the design, implementation and... more
    • Topics: Computer Science, Distributed Computing, High Performance Computing, Management Information Systems
As high-end computing machines continue to grow in size, issues such as fault tolerance and reliability limit application scalability. Current techniques to ensure progress across faults, like checkpoint-restart, are increasingly... more
    • Topics: High Performance Computing, Computing, Benchmarking, Distributed Processing
Grid workflows can be seen as special scientific workflows involving high performance and/or high throughput computational tasks. Much work in grid workflows has focused on improving application performance through schedulers that... more
    • Topics: Bioinformatics, Distributed Computing, Scheduling, Workflow
    • Topics: Partial Differential Equations, Linear Algebra, Geometry, GPU programming
In 2003, the High End Computing Revitalization Task Force designated file systems and I/O as an area in need of national focus. The purpose of the High End Computing Interagency Working Group (HECIWG) is to coordinate government spending... more
    • Topics: Operating Systems, File Systems, High End Computing, Storage
High-end computing is increasingly I/O bound as computations become more data-intensive, and data transport technologies struggle to keep pace with the demands of large-scale, distributed computations. One approach to avoiding unnecessary... more
    • Topics: Distributed Computing, High End Computing, Storage system, Data Intensive Computing
    • Topics: Computer Architecture, Life Sciences, Scientific Computing, National Security
The Information Sciences Institute and Caltech are enabling USJFCOM and the Institute for Defense Analyses to conduct entity-level simulation experiments using hundreds of distributed computer nodes on Linux Clusters as a vehicle for... more
    • Topics: Distributed Computing, Linux Cluster, Database Query, High End Computing
In order to take full advantage of high-end computing platforms, scientific applications often require modifications to their source code and to the build systems that generate executable files. The ever-increasing emphasis on productivity... more
    • Topics: Life Cycle, High End Computing, Source Code, Software Tool