The structure of a multiprocessor with a modified deterministic data flow architecture is proposed. Variants of the multiprocessor, depending on the processor memory structure, are considered. Simulation results are presented in the paper.
Microprocessors and Microsystems, 2003
This paper presents new architectural concepts for uniprocessor system designs. They result in a uniprocessor design that conforms to the data-driven (i.e. dataflow) computation paradigm. It is shown that this processor, namely the D²-CPU (Data-Driven CPU), follows the natural flow of programs, minimizes redundant (micro)operations, lowers hardware cost, and reduces power consumption. We assume that programs are developed naturally using a graphical or equivalent language that can explicitly show all data dependencies. Instead of giving the CPU the privileged right of deciding what instructions to fetch in each cycle (as is the case for CPUs with a program counter), instructions enter the CPU when they are ready to execute or when all their operands will be available within a few clock cycles. This way, the application-knowledgeable algorithm, rather than the application-ignorant CPU, is in control. The CPU is used just as a resource, the way it should normally be. This approach results in outstanding performance and the elimination of large numbers of redundant operations that plague current processor designs. Conventional CPUs are characterized by numerous redundant operations, such as the first memory cycle in instruction fetching that is part of every instruction cycle, and instruction and data prefetches for instructions that are not always needed. A comparative analysis of our design with conventional designs shows that it is capable of better performance and simpler programming. Finally, a VHDL implementation is used to prove the viability of this approach.
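The data-driven firing rule described in this abstract can be sketched as follows. This is a minimal illustrative model, not the paper's D²-CPU design: instruction names, the operand-count check, and the ready queue are all assumptions made for the example.

```python
# Sketch of the data-driven firing rule: an instruction becomes eligible to
# execute as soon as all of its operands have arrived, rather than when a
# program counter reaches it. All names here are illustrative.

class Instruction:
    def __init__(self, name, op, num_operands, consumers):
        self.name = name
        self.op = op                       # callable applied to the operands
        self.operands = [None] * num_operands
        self.arrived = 0
        self.consumers = consumers         # (instruction, operand slot) pairs

    def receive(self, slot, value, ready_queue):
        self.operands[slot] = value
        self.arrived += 1
        if self.arrived == len(self.operands):  # firing rule: all operands present
            ready_queue.append(self)

def run(ready_queue):
    results = {}
    while ready_queue:
        inst = ready_queue.pop(0)
        value = inst.op(*inst.operands)
        results[inst.name] = value
        for consumer, slot in inst.consumers:
            consumer.receive(slot, value, ready_queue)
    return results

# Dataflow graph for (a + b) * (a - b) with a = 5, b = 3:
mul = Instruction("mul", lambda x, y: x * y, 2, [])
add = Instruction("add", lambda x, y: x + y, 2, [(mul, 0)])
sub = Instruction("sub", lambda x, y: x - y, 2, [(mul, 1)])

ready = []
add.receive(0, 5, ready); add.receive(1, 3, ready)
sub.receive(0, 5, ready); sub.receive(1, 3, ready)
results = run(ready)
print(results["mul"])   # (5+3) * (5-3) = 16
```

Note that no program counter appears anywhere: the order of execution is determined entirely by operand arrival, which is the point the abstract makes about the algorithm, not the CPU, being in control.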
The 16th Annual International Symposium on Computer Architecture
A highly parallel (more than a thousand processing elements) dataflow machine, EM-4, is now under development. The EM-4 design principle is to construct a high-performance computer with a compact architecture by overcoming several defects of dataflow machines. In constructing the EM-4, it is essential to fabricate a processing element (PE) on a single chip to reduce operation time, system size, design complexity and cost. In the EM-4, the PE, called EMC-R, has been specially designed using a 50,000-gate gate array chip. This paper focuses on the architecture of the EMC-R. Its distinctive features are: a strongly connected arc dataflow model; a direct matching scheme; a RISC-based design; a deadlock-free on-chip packet switch; and the integration of a packet-based circular pipeline with a register-based advanced control pipeline. These features are examined in detail, and the instruction set architecture and the configuration architecture that exploit them are described.
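The direct matching scheme mentioned among the EMC-R's features can be illustrated with a small sketch. The packet fields and the store keyed by (frame, destination) are assumptions for this example, not the EM-4's actual formats: the idea is that each two-operand instruction instance has a fixed matching location, so the first token to arrive waits there and the second finds its partner directly, with no associative search.

```python
# Sketch of a direct matching store: the first operand token for a
# two-operand instruction is parked at a fixed slot; the second token
# finds its partner at that slot and fires the instruction.
# Field names and addresses are illustrative.

matching_store = {}   # key: (frame, instruction address) -> waiting operand

def arrive(token, fire):
    key = (token["frame"], token["dest"])
    if key not in matching_store:
        matching_store[key] = token["value"]        # first operand: wait
    else:
        partner = matching_store.pop(key)           # second operand: match
        fire(token["dest"], partner, token["value"])

fired = []
arrive({"frame": 0, "dest": 0x10, "value": 7},
       lambda d, a, b: fired.append((d, a, b)))
arrive({"frame": 0, "dest": 0x10, "value": 5},
       lambda d, a, b: fired.append((d, a, b)))
print(fired)   # [(16, 7, 5)] -- the pair matched at address 0x10
```

A dictionary lookup stands in for what the hardware does with a plain indexed memory access, which is exactly why direct matching is cheaper than the associative matching used in earlier tagged-token machines.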
2000
In this paper we describe a new approach to designing multithreaded architectures that can be used as basic building blocks in high-end computing architectures. Our architecture uses a non-blocking multithreaded model based on the dataflow paradigm. In addition, all memory accesses are decoupled from thread execution. Data is pre-loaded into the thread context (registers), and all results are post-stored after the completion of the thread's execution. The decoupling of memory accesses from thread execution requires a separate unit to perform the necessary pre-loads and post-stores, and to control the allocation of hardware thread contexts to enabled threads. The non-blocking nature of threads reduces the number of context switches, thus reducing thread-scheduling overhead. Our functional execution paradigm eliminates the complex hardware required for the dynamic instruction scheduling used in modern superscalar architectures. We present preliminary results obtained from an instruction set simulator running several benchmark programs, and compare the execution of our architecture with that of the MIPS architecture as modeled by the DLX simulator.
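The pre-load/execute/post-store decoupling described above can be sketched in a few lines. This is an illustrative model under assumed names (`preload`, `execute`, `poststore`), not the paper's hardware: the point is that every memory access happens either before the thread starts or after it finishes, so the compute phase itself never blocks.

```python
# Sketch of decoupled thread execution: a separate memory unit pre-loads all
# inputs into a register context, the compute pipeline runs the thread to
# completion without touching memory, and results are post-stored afterwards.
# All names are illustrative.

memory = {"a": 5, "b": 3, "out": None}

def preload(inputs):
    # memory unit: gather all inputs into a fresh register context
    return {name: memory[name] for name in inputs}

def execute(body, regs):
    # compute pipeline: runs to completion, never blocking on memory
    return body(regs)

def poststore(results):
    # memory unit: write results back after the thread finishes
    memory.update(results)

# A non-blocking "thread": all loads before, all stores after.
regs = preload(["a", "b"])
results = execute(lambda r: {"out": r["a"] * r["b"]}, regs)
poststore(results)
print(memory["out"])   # 15
```

Because the thread body touches only `regs`, a context switch can never be forced by a cache miss mid-thread, which is the source of the reduced scheduling overhead the abstract claims.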
arXiv (Cornell University), 2010
This paper presents a reconfigurable parallel data flow architecture. The architecture applies concepts from the multi-agent paradigm to reconfigurable hardware systems. This new paradigm has the potential to greatly increase the flexibility, efficiency and expandability of data flow systems, and to provide an attractive alternative to the set of disjoint approaches currently applied in this problem domain. The ability of the methodology to implement data flow type processing with different models is presented in this paper.
2000
In this paper we present an evaluation of the execution performance and cache behavior of a new multithreaded architecture being investigated by the authors. Our architecture uses a non-blocking multithreaded model based on the dataflow paradigm. In addition, all memory accesses are decoupled from thread execution. Data is pre-loaded into the thread context (registers), and all results are post-stored after the completion of the thread's execution. The decoupling of memory accesses from thread execution requires a separate unit to perform the necessary pre-loads and post-stores, and to control the allocation of hardware thread contexts to enabled threads. The non-blocking nature of threads reduces the number of context switches, thus reducing thread-scheduling overhead. Our functional execution paradigm eliminates the complex hardware required for the dynamic instruction scheduling used in modern superscalar architectures. We present preliminary results obtained from an instruction set simulator running several benchmark programs, and compare the execution and cache performance of our architecture with that of the MIPS architecture as modeled by the DLX simulator.
Acta Polytechnica Hungarica
This paper deals with the data flow computing paradigm, characterized by program execution controlled by a flow of data instead of a flow of instructions. Data flow computing represents an important alternative to the control flow computing paradigm, which is currently the mainstream, represented by architectures mostly based on the von Neumann principles, in which the flow of program execution is controlled by a flow of instructions. This paper also deals with the tile computing paradigm, a modern approach to designing multi-core microprocessors with components laid out in two-dimensional grids, with various geometries of cores, memory elements and interconnection networks, and with architectures using both data flow and control flow to control program execution.
ACM SIGARCH Computer Architecture News, 1983
This paper presents the architecture of a highly parallel processor array system which executes programs by means of a data driven control mechanism.
Computers & Electrical Engineering, 1995
Computer architects have constantly been looking for new approaches to designing high-performance machines. Data flow and VLSI offer two mutually supportive approaches towards a promising design for future supercomputers. When very high-speed computation is needed, data flow machines may be relied upon as an adequate solution in which extremely parallel processing is achieved. This paper presents a formal analysis of data flow machines, considering the following three machines: (1) the MIT static data flow machine; (2) TI's DDP static data flow machine; (3) the LAU data flow machine. These machines are investigated using a reference model. The contributions of this paper include: (1) developing a Data Flow Random Access Machine model (DFRAM), for the first time, to serve as a formal modeling tool; using this model, one can calculate the time cost of various static data flow machines, as well as the performance of these machines; (2) constructing a practical Data Flow Simulator (DFS) on the basis of the DFRAM model. The DFS is modular and portable and can be implemented with little sophistication. It is used not only to study the performance of the underlying data flow machines but also to verify the DFRAM model.
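The kind of time-cost calculation a formal model like DFRAM enables can be sketched very simply. The following is an illustrative model only, not the paper's DFRAM: it assumes unbounded processing elements and unit-time operations, and computes the number of parallel steps of a static dataflow graph as the longest dependency chain.

```python
# Sketch of a time-cost calculation over a static dataflow graph:
# with unlimited PEs and unit-time operations, the time cost equals the
# depth of the longest dependency chain. The model is illustrative only.

def time_cost(graph):
    # graph: node -> list of predecessor nodes
    depth = {}
    def level(node):
        if node not in depth:
            preds = graph[node]
            depth[node] = 1 + (max(level(p) for p in preds) if preds else 0)
        return depth[node]
    return max(level(n) for n in graph)

# (a+b) * (a-b): add and sub fire in parallel in step 1, mul in step 2.
g = {"add": [], "sub": [], "mul": ["add", "sub"]}
print(time_cost(g))   # 2
```

A real machine model would additionally charge for token transport and for contention when ready nodes outnumber PEs; this sketch shows only the dependency-depth lower bound.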
2015 Euromicro Conference on Digital System Design, 2015
The path towards future high-performance computers requires architectures able to efficiently run multi-threaded applications. In this context, dataflow-based execution models can improve performance by limiting synchronization overhead, thanks to a simple producer-consumer approach. This paper advocates an instruction set extension (ISE) of standard cores with a small hardware unit for efficiently scheduling the execution of threads on the basis of dataflow principles. A set of dedicated instructions allows the code to interact with the scheduler. Experimental results demonstrate that the combination of dedicated scheduling units and a dataflow execution model improves performance when compared with other techniques for code parallelization (e.g., OpenMP, Cilk).
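The producer-consumer scheduling this abstract describes can be sketched as a synchronization-count scheme. The instruction names (`tschedule`, `twrite`) and fields below are assumptions for the example, not the paper's actual ISE: each thread is registered with a count of inputs it still awaits, producers decrement that count, and a thread is dispatched only when the count reaches zero.

```python
# Sketch of dataflow-style thread scheduling: a thread carries a
# synchronization count; dedicated "instructions" register threads and
# deliver inputs, and a thread becomes ready when its count hits zero.
# All names are illustrative.

from collections import deque

class Scheduler:
    def __init__(self):
        self.pending = {}        # thread id -> remaining input count
        self.bodies = {}         # thread id -> callable
        self.ready = deque()

    def tschedule(self, tid, sync_count, body):
        # register a thread together with the number of inputs it awaits
        self.pending[tid] = sync_count
        self.bodies[tid] = body
        if sync_count == 0:
            self.ready.append(tid)

    def twrite(self, tid):
        # a producer delivered one input to thread `tid`
        self.pending[tid] -= 1
        if self.pending[tid] == 0:
            self.ready.append(tid)

    def run(self):
        log = []
        while self.ready:
            tid = self.ready.popleft()
            log.append(tid)
            self.bodies[tid](self)
        return log

s = Scheduler()
s.tschedule("consumer", 2, lambda sch: None)
s.tschedule("producer", 0, lambda sch: (sch.twrite("consumer"),
                                        sch.twrite("consumer")))
log = s.run()
print(log)   # ['producer', 'consumer']
```

The consumer never polls or locks: it simply does not exist on the ready queue until both of its inputs have been written, which is the synchronization-overhead saving the abstract attributes to the dataflow model.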