High End Computing
Recent papers in High End Computing
Over the last decade, storage systems have experienced a 10-fold increase in the gap between their capacity and bandwidth. This gap is predicted to grow even faster with exponentially growing concurrency levels, with future exascale systems delivering millions... more
We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework,... more
Parallel I/O is fast becoming a bottleneck to the research agendas of many users of extreme scale parallel computers. The principal cause of this is the concurrency explosion of high-end computation, coupled with the complexity of... more
Comparisons of high-performance computers based on their peak floating point performance are common but seldom useful when comparing performance on real workloads. Factors that influence sustained performance extend beyond a system's... more
The Information Sciences Institute and Caltech are enabling USJFCOM and the Institute for Defense Analyses to conduct entity-level simulation experiments using hundreds of distributed computer nodes on Linux Clusters as a vehicle for... more
This paper addresses the underlying sources of performance degradation (e.g. latency, overhead, and starvation) and the difficulties of programmer productivity (e.g. explicit locality management and scheduling, performance tuning,... more
For parallel applications running on high-end computing systems, which processes of an application get launched on which processing cores is typically determined at application launch time without any information about the application... more
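The paper above argues that launch-time placement ignores how an application's processes actually communicate. As a purely illustrative sketch (the greedy heuristic, the traffic matrix, and every name here are assumptions, not the paper's method), the following Python shows how a rank-to-node mapping could use such information when it is available:

# Hypothetical illustration: greedy, communication-aware mapping of ranks to
# nodes, given a rank-to-rank traffic matrix that launch-time placement
# normally does not have.

def greedy_placement(traffic, num_nodes, cores_per_node):
    """Assign ranks to nodes so that heavily-communicating pairs share a node.

    traffic[i][j] is the (symmetric) communication volume between ranks i and j.
    Returns a list mapping each rank to a node id.
    """
    num_ranks = len(traffic)
    placement = [None] * num_ranks
    load = [0] * num_nodes  # ranks already placed on each node

    # Visit rank pairs in decreasing order of traffic volume.
    pairs = sorted(
        ((traffic[i][j], i, j)
         for i in range(num_ranks) for j in range(i + 1, num_ranks)),
        reverse=True,
    )
    for _, i, j in pairs:
        for rank in (i, j):
            if placement[rank] is None:
                # Prefer the node already hosting the partner, if it has room.
                partner = j if rank == i else i
                node = placement[partner]
                if node is None or load[node] >= cores_per_node:
                    node = min(range(num_nodes), key=lambda n: load[n])
                placement[rank] = node
                load[node] += 1
    return placement


if __name__ == "__main__":
    # Four ranks, two nodes with two cores each; ranks 0-1 and 2-3 talk heavily.
    t = [[0, 9, 1, 1], [9, 0, 1, 1], [1, 1, 0, 9], [1, 1, 9, 0]]
    print(greedy_placement(t, num_nodes=2, cores_per_node=2))  # e.g. [0, 0, 1, 1]

With the sample traffic matrix, ranks 0-1 and 2-3 each end up sharing a node, which is the kind of locality a launch-time mapper cannot achieve without communication information.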
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is... more
Today’s cryptanalysis on symmetric key cryptography is encouraging the use of larger key sizes and complex algorithms to achieve an unbreakable state. However, this leads to an increase in computational complexity. This has promoted many... more
In the next section, we describe the minimal set of requirements we defined for ourselves. We then describe the Maya renderer architecture in Section 3. Section 4 describes our implementation in detail with a view on how we achieved the... more
In this paper, we propose and present the design and initial development of the Fault awareness Enabled Computing Environment (FENCE) system for high end computing. FENCE is a comprehensive fault management system in the sense that it... more
With the advent of Cloud computing, large-scale virtualized compute and data centers are becoming common in the computing industry. These distributed systems leverage commodity server hardware in mass quantity, similar in theory to many... more
Future high-end computers will offer great performance improvements over today's machines, enabling applications of far greater complexity. However, designers must solve the challenge of exploiting massive parallelism efficiently in the... more
The use of High Performance Computing (HPC) in commercial and consumer IT applications is becoming popular. HPC users need the ability to gain rapid and scalable access to high-end computing capabilities. Cloud computing promises to... more
Left unchecked, the fundamental drive to increase peak performance using tens of thousands of power hungry components will lead to intolerable operating costs and failure rates. High-performance, power-aware distributed computing reduces... more
Adaptive mesh refinement (AMR) is a powerful technique that reduces the resources necessary to solve otherwise in-tractable problems in computational science. The AMR strategy solves the problem on a relatively coarse grid, and... more
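Since the snippet above describes the core AMR strategy (solve coarsely everywhere, refine only where needed), a minimal one-dimensional sketch may help; the gradient-based error indicator and midpoint refinement below are generic illustrations, not the paper's scheme:

# Illustrative only: solve on a coarse grid, then refine only where an error
# indicator, here the local gradient of the solution, exceeds a threshold.

def flag_for_refinement(values, dx, threshold):
    """Return indices of coarse cells whose gradient magnitude exceeds threshold."""
    flagged = []
    for i in range(1, len(values) - 1):
        grad = abs(values[i + 1] - values[i - 1]) / (2 * dx)
        if grad > threshold:
            flagged.append(i)
    return flagged


def refine(xs, values, flagged):
    """Insert a midpoint sample in every flagged cell (linear interpolation
    stands in for re-solving on the finer patch)."""
    new_xs, new_vals = [], []
    for i, (x, v) in enumerate(zip(xs, values)):
        new_xs.append(x)
        new_vals.append(v)
        if i in flagged and i + 1 < len(xs):
            new_xs.append(0.5 * (x + xs[i + 1]))
            new_vals.append(0.5 * (v + values[i + 1]))
    return new_xs, new_vals


if __name__ == "__main__":
    xs = [0.1 * i for i in range(11)]             # coarse grid on [0, 1]
    vals = [0.0 if x < 0.5 else 1.0 for x in xs]  # a sharp front near x = 0.5
    flags = flag_for_refinement(vals, dx=0.1, threshold=2.0)
    fine_xs, fine_vals = refine(xs, vals, flags)
    print(flags)               # only the cells around the front are refined
    print(len(xs), len(fine_xs))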
One critical component of future file systems for high-end computing is meta-data management. This work presents ZHT, a zero-hop distributed hash table, which has been tuned for the requirements of HEC systems. ZHT aims to be a building... more
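To make the "zero-hop" property concrete: each client can hold the largely static membership list of an HEC machine and hash a key straight to its owning server, with no routing through intermediate nodes. The consistent-hashing sketch below is a generic illustration of that idea using a made-up ZeroHopDHT class; it is not ZHT's actual code or API:

# Illustrative sketch of a zero-hop DHT lookup: membership is known locally,
# so resolving a key's owner costs one local hash plus a binary search.

import bisect
import hashlib


class ZeroHopDHT:
    def __init__(self, servers, virtual_nodes=64):
        # Consistent-hashing ring: each server gets several points on the ring
        # so keys spread evenly and membership changes move few keys.
        self._ring = sorted(
            (self._hash(f"{s}#{v}"), s) for s in servers for v in range(virtual_nodes)
        )
        self._points = [h for h, _ in self._ring]

    @staticmethod
    def _hash(text):
        return int(hashlib.sha1(text.encode()).hexdigest(), 16)

    def owner(self, key):
        """Resolve a key to its server with purely local work, zero network hops."""
        idx = bisect.bisect(self._points, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]


if __name__ == "__main__":
    dht = ZeroHopDHT([f"node{i:03d}" for i in range(8)])
    for k in ("/scratch/run42/meta", "/home/user/job.cfg"):
        print(k, "->", dht.owner(k))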
Achieving good performance on high-end computing systems is growing ever more challenging due to enormous scale, increasing architectural complexity, and increasing application complexity. To address these challenges in DOE's SciDAC-2... more
Failure management is crucial for high performance computing systems, especially when the complexity of applications and underlying infrastructure has grown sharply in recent years. In this paper, we present the design, implementation and... more
This white paper addresses three separate questions in the HECRTF call for white papers: (2d) performance metrics that quantify benefits; (5) practical performance measures for system procurement that correlate well with realized... more
This paper proposes the study of a new computation model that attempts to address the underlying sources of performance degradation (e.g. latency, overhead, and starvation) and the difficulties of programmer productivity (e.g. explicit... more
As the power consumption of a server system becomes a mainstream concern in enterprise environments, understanding the system's power behavior at varying utilization levels provides a key to selecting appropriate energy-efficiency... more
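As an illustration of why power behavior versus utilization matters for energy-efficiency decisions, the sketch below uses a simple linear power model with invented parameters (the idle and peak wattages and the throughput figure are assumptions, not measurements from the paper):

# Hedged illustration: a linear server power model and the work-per-joule
# efficiency it implies at each utilization level.

def power_watts(utilization, p_idle=120.0, p_peak=250.0):
    """Linear power model: idle floor plus a utilization-proportional dynamic part."""
    return p_idle + (p_peak - p_idle) * utilization


def efficiency(utilization, peak_throughput=1000.0):
    """Requests served per joule, assuming throughput scales linearly with utilization."""
    if utilization == 0:
        return 0.0
    return (peak_throughput * utilization) / power_watts(utilization)


if __name__ == "__main__":
    for u in (0.1, 0.3, 0.5, 0.8, 1.0):
        print(f"util={u:.1f}  power={power_watts(u):6.1f} W  "
              f"efficiency={efficiency(u):.2f} req/J")
    # The idle floor means efficiency improves with utilization: consolidating
    # load onto fewer, busier servers is the lever such studies point to.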
Through the Coordinated Observation and Prediction of the Earth System (COPES), the WCRP is embarking on an ambitious, decade-long observing and modeling activity that is intended to improve understanding of the mechanisms that determine the mean... more
It is known that General Circulation Models (GCMs) have insufficient resolution to accurately simulate hurricane near-eye structure and intensity. The increasing capabilities of high-end computers have changed this. The... more
In this work we present a scientific application that has been given a Hadoop MapReduce implementation. We also discuss other scientific fields of supercomputing that could benefit from a MapReduce implementation. We recognize in this... more
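The snippet does not say which scientific application was ported, so as a hedged illustration of the map/reduce phrasing it refers to, here is a tiny, Hadoop-free sketch that bins simulation samples into a histogram through a map phase, a shuffle, and a reduce:

# Illustrative only: the map/shuffle/reduce structure executed locally.

from collections import defaultdict


def map_phase(record):
    """Emit (bin, 1) for each sample; in Hadoop this would be the Mapper."""
    x = float(record)
    yield (int(x * 10), 1)          # bin width of 0.1


def reduce_phase(key, counts):
    """Sum the counts for one bin; in Hadoop this would be the Reducer."""
    return key, sum(counts)


def run_mapreduce(records):
    shuffled = defaultdict(list)
    for record in records:
        for key, value in map_phase(record):
            shuffled[key].append(value)      # the shuffle groups values by key
    return dict(reduce_phase(k, v) for k, v in sorted(shuffled.items()))


if __name__ == "__main__":
    samples = ["0.12", "0.17", "0.53", "0.58", "0.59"]
    print(run_mapreduce(samples))   # {1: 2, 5: 3}

In Hadoop the same structure would be expressed as Mapper and Reducer classes, with the framework performing the shuffle and distributing the work across the cluster.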
The rapid growth of InfiniBand, 10 Gigabit Ethernet/iWARP and IB WAN extensions is increasingly gaining momentum for designing high end computing clusters and data-centers. For typical applications such as data staging, content... more
The increasing capability of high-end computers allows numerical simulations with horizontal resolutions high enough to resolve cloud systems in a global model. In this paper, initial results from the global Nonhydrostatic ICosahedral... more
The growing gap between sustained and peak performance for scientific applications is a well-known problem in high end computing. The recent development of parallel vector systems offers the potential to bridge this gap for many... more
The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing... more
AQUAGRID is the subsurface hydrology computational service of the Sardinian GRIDA3 infrastructure, designed to deliver complex environmental applications via a user-friendly Web portal. The service aims to provide water professionals... more
This document proposes a multi-site strategy for creating a new class of computing capability for the U.S. by undertaking the research and development necessary to build supercomputers optimized for science in partnership with the... more
In order to take full advantage of high-end computing platforms, scientific applications often require modifications to source codes, and to their build systems that generate executable files. The ever-increasing emphasis on productivity... more
Linux clusters have become very popular for scientific computing at research institutions world-wide, because they can be easily deployed at a fairly low cost. However, the most pressing issues of today's cluster solutions are... more
Maya is the new 3D software package recently released by Alias|Wavefront for creating state-of-the-art character animation and visual effects. Built on a next-generation advanced architecture, Maya delivers high speed interaction and high... more
The OptIPuter is a radical distributed visualization, teleimmersion, data mining and computing architecture. Observing that the exponential growth rates in bandwidth and storage are now much higher than Moore's Law, this... more
In 2003, the High End Computing Revitalization Task Force designated file systems and I/O as an area in need of national focus. The purpose of the High End Computing Interagency Working Group (HECIWG) is to coordinate government spending... more