Concurrency and Computation: Practice and Experience, 2016
Recent developments in research and technological studies have shown that High Performance Computing (HPC) will indeed lead to advances in several areas of science, engineering, and technology, permitting the successful completion of more computationally intensive and data-intensive problems such as those in healthcare, biomedical and biosciences, climate and environmental change, multimedia processing, design and manufacturing of advanced materials, geology, astronomy, chemistry, physics, and even financial systems. However, further research is required to develop computing infrastructures, models to support newly evolving architectures, programming paradigms, tools to simulate and evaluate new approaches and solutions, and programming languages appropriate for new and emerging domains and applications. The development of the HPC infrastructure has been accelerated by advances in silicon technology, which permit the design of complex systems that incorporate many hardware and software blocks and cores. More precisely, recent rapid advances in technology and design tools have enabled engineers to design systems with hundreds of cores, called multi-processor systems-on-chip. These systems are composed of several processing elements, that is, dedicated hardware and software components that are interconnected by an on-chip interconnect. According to Moore's law, the number of cores on chip will double every 18 months; therefore, thousands of cores on chip will be integrated in the next 20 years to meet the power and performance requirements of applications. Moreover, current trends on the road to exascale are moving toward the integration of more and more cores into a single chip [1, 2]. For example, accelerators and heterogeneous processing offer opportunities to greatly increase computational performance and to match increasing application requirements [3]. Engineering these computing systems is one of the most dynamic fields in modern science and technology. That said, there will continue to be a growing demand for more powerful HPC in the coming years, not just to tackle mounting basic computing needs but also to lay the foundations for an HPC market that is potentially becoming larger than the desktop/laptop computer market. Furthermore, HPC is turning out to be a major source of hope for future application development that requires greater amounts of computing resources in modern science domains such as bioengineering, nanotechnology, and energy, where HPC capabilities are mandatory in order to run simulations and perform visualization tasks. At the time of writing this editorial, petaflop computing is well established [4]. Several architectures are making major breakthroughs: commodity, accelerators on commodity, and special-purpose cores. All of the TOP500 systems are based on multicore technologies [5]. HPC usage is growing considerably, especially in industry, and significant efforts toward establishing exascale are under way. At the same time, several challenges have recently been identified in creating large-scale computing systems that meet current and projected application requirements. Most of them relate to system architectures, algorithms, big data processing, and programming models [6]. However, energy cost, resilience, Central Processing Unit (CPU) access latency, and memory transfers are the key challenges to address in the era of exascale.
To address these challenges, completely new approaches and technologies are required, together with a shift from the current approaches to application development and execution to adaptive ones. Consequently, further research is needed to develop advanced exascale computing infrastructures, models, and paradigms that support newly emerging architectures, programming models, and tools to simulate and evaluate more …
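As a back-of-the-envelope illustration of the doubling claim in the editorial above, the sketch below projects on-chip core counts under strict 18-month doubling. The 100-core starting point is an assumption chosen for illustration, not a figure from the editorial.

# Hypothetical projection of on-chip core counts under
# Moore's-law-style doubling every 18 months.
def projected_cores(start_cores: int, years: float, doubling_months: float = 18.0) -> int:
    doublings = years * 12.0 / doubling_months
    return int(start_cores * 2 ** doublings)

if __name__ == "__main__":
    for years in (5, 10, 20):
        print(f"{years:2d} years: {projected_cores(100, years):,} cores")
    # 20 years of 18-month doublings is a factor of 2**(240/18),
    # roughly 10,000x, which is how "hundreds of cores" becomes
    # thousands and beyond.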
High-performance computing (HPC) has become an essential tool in both scientific and industrial communities because it provides an excellent environment for solving a wide range of problems in modern science and engineering. Nowadays, HPC clusters (HPCC) are in fact the main architecture for supercomputers, owing to the high performance of commodity microprocessors and networks and to their lower price/performance ratio. Moreover, current standalone computers offer tremendous computing power and can be regarded as desktop HPC platforms if their resources are appropriately exploited.
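As a minimal sketch of what "appropriately exploiting" a standalone machine can mean, the following uses only Python's standard library to spread an embarrassingly parallel workload across all local cores; the simulate function is a hypothetical stand-in for a real compute kernel.

# Minimal sketch: use every core of a desktop machine for an
# embarrassingly parallel job via the standard library.
from multiprocessing import Pool, cpu_count

def simulate(seed: int) -> float:
    # Hypothetical stand-in for a compute-heavy kernel.
    acc = 0.0
    for i in range(1, 100_000):
        acc += (seed % 7 + 1) / i
    return acc

if __name__ == "__main__":
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(simulate, range(32))
    print(f"{cpu_count()} cores, {len(results)} tasks completed")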
2012
High performance computing (HPC) has seen a long history of progress over the last six decades, and the prospects for an increase in performance over the coming decade are still very good. Systems with performance in the range of petaflops have become widely available, and discussions have started about how to achieve an exaflop. This paper discusses the further development of high performance computing and proposes a strategy that shifts the focus of attention from hardware to software. With such a change of paradigm, further progress in the field of simulation is more likely than with many of the concepts presented for exascale computing.
1996
We review possible and probable industrial applications of HPCC, focusing on software and hardware issues. Thirty-three separate categories are illustrated by detailed descriptions of five areas: computational chemistry; Monte Carlo methods from physics to economics; manufacturing and computational fluid dynamics; command and control, or crisis management; and multimedia services to client computers and set-top boxes.
Cybernetics and Information Technologies, 2017
High Performance Computing (HPC) is required for many important applications in chemistry, computational fluid dynamics, and other fields; see, e.g., the overview in [1]. In this paper we briefly describe an application, a multiscale material design problem, that requires HPC for several reasons. The problem of interest is the analysis of fiber-reinforced concrete, and we focus on modelling stiffness through numerical homogenization and on computing local material properties by inverse analysis. Both problems require the repeated solution of large-scale finite element problems with up to 200 million degrees of freedom, so the importance of HPC is evident.
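Large sparse finite element systems of the kind mentioned above are typically attacked with iterative Krylov solvers rather than direct factorization. Below is a minimal sketch, assuming SciPy and substituting a toy 1-D Laplacian for a real stiffness matrix; an actual problem at the scale cited would have up to ~2e8 unknowns and would need a preconditioner and distributed memory.

# Minimal sketch of an iterative solve of K u = f, the core kernel
# in large finite element analyses; the matrix is a toy 1-D Laplacian.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 10_000  # toy size; the paper's problems reach ~200 million DOFs
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
f = np.ones(n)

u, info = cg(K, f)  # conjugate gradient: valid because K is SPD
print("converged" if info == 0 else f"cg returned info={info}")
print("residual norm:", np.linalg.norm(K @ u - f))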
International Journal of Modern Physics A, 2013
High Performance Computing (HPC) has become an essential tool in every researcher's arsenal. Most research problems nowadays can be simulated, clarified, or experimentally tested using computational simulations. Yet researchers struggle with computational problems when they should be focusing on their research questions. Since most researchers have little to no knowledge of low-level computer science, they tend to view computer programs as extensions of their minds and bodies rather than as completely autonomous systems. Because computers do not work the same way humans do, the result is usually low performance computing where HPC would be expected.
2017
Computational science is well on its way into the exascale era. High-performance computing emerged to meet increasing demands for processing speed, and high-performance computers have evolved from the MFLOPS to the GFLOPS to the PFLOPS scale over the past two decades (FLOPS = floating-point operations per second). High-performance computing is fast computing using high-performance computers such as supercomputers. HPC has become a determinant of industrial competitiveness and of advanced research in several areas. This paper presents a brief introduction to high-performance computing.
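Since the MFLOPS-to-PFLOPS progression above is just successive factors of a thousand, a peak figure can be sanity-checked with simple arithmetic. The sketch below uses the usual estimate peak = nodes x sockets x cores x clock x FLOPs per cycle; every machine parameter is a made-up example, not a real system.

# Worked example: theoretical peak of a hypothetical cluster.
nodes, sockets, cores = 1_000, 2, 32
clock_hz = 2.5e9          # 2.5 GHz (assumed)
flops_per_cycle = 16      # e.g. wide SIMD with FMA (assumed)

peak = nodes * sockets * cores * clock_hz * flops_per_cycle
print(f"peak = {peak:.2e} FLOPS = {peak / 1e15:.2f} PFLOPS")
# 1,000 * 2 * 32 * 2.5e9 * 16 = 2.56e15, i.e. about 2.56 PFLOPS.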