Papers by Nikola Grcevski
This paper describes the application of recently developed software, based on an electromagnetic field theory approach, to the analysis of the high-frequency and transient behavior of extensive and complex grounding systems. Examples of analysis are presented that illustrate the possibilities of using such software in the grounding system design process to optimize the grounding electrode arrangement for better high-frequency and transient behavior.
Proceedings of the 2011 Conference of the Center for Advanced Studies on Collaborative Research, Nov 7, 2011
In 2011 the Java™ Development Kit 7 (or JDK 7) became generally available. JDK 7 is the latest step in the evolution of the Java SE platform. It offers Java developers functionality and performance improvements in several areas, including new I/O APIs, concurrency utilities such as the Fork/Join framework, new support for dynamically typed languages in the JVM, changes to modularity, and several changes to the language to improve application development. Earlier this year IBM released its own Java Virtual Machine with JDK 7 support for the x86, System p, and System z platforms. IBM's JDK 7 also brought performance throughput improvements of up to 10% on transactional workloads, as well as significant improvements of up to 15% in startup performance and up to 15% in memory footprint for workloads running with IBM's WebSphere Application Server.

The workshop introduced participants to the new features on offer in JDK 7 and demonstrated their value to application developers. These features included:
• JSR 334, or "Project Coin": a collection of small changes to the Java language to improve developer productivity, including Strings in switch statements, better type inference for generic instance creation, and multi-catch for improved exception handling
• JSR 166y, which introduced a framework for Fork/Join parallelism
• JSR 292, which introduced JVM support for calling dynamic languages
• JSR 203, which introduced new I/O APIs for filesystems, socket I/O, and asynchronous I/O
• enhancements to the class loading implementation
• Unicode 6.0 support and Locale enhancements

The Java programming language was designed from the start with concurrency in mind. It offers a rich set of features for creating and managing threads of execution and primitives to allow synchronization among objects. While powerful and correct, these synchronization mechanisms do not always have good performance characteristics in the presence of many threads of execution. The workshop touched on the Java thread model and the concurrency features available in Java prior to JDK 7, and explained the shortcomings of the object-level synchronization mechanisms as they pertain to scalability.

The java.util.concurrent package was introduced in Java 5 under JSR 166 and supplemented the thread safety features already built into the Java language and runtime. It provided an alternative that promised to avoid many of the scaling problems inherent in Java's built-in synchronization. In particular, it offered many lightweight mechanisms for finer-grained synchronization between objects, useful concurrent data structures, a task management and execution framework, and interfaces for locking and for creating intelligent synchronizers. When used appropriately, these features allow Java applications to perform efficiently even in the presence of many interacting threads of execution.

In JDK 7 the newest enhancement to aid with developing scalable Java programs is the Fork/Join framework. Fork/Join offers a set of utilities designed to make divide-and-conquer algorithms easy to parallelize. The framework uses a pool of threads that are assigned tasks created on the basis of the work that needs to be done; each task is broken down recursively into smaller tasks, which can be executed either by the thread that created them or be stolen by another thread that has no tasks to execute.
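As a brief illustration of the Project Coin (JSR 334) changes listed above, the following minimal sketch combines Strings in switch, the diamond form of generic instance creation, and multi-catch, together with try-with-resources, which is also part of JSR 334. The file name and values are hypothetical and exist only for the example.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CoinDemo {
    // Strings in switch statements (new in JDK 7).
    static int defaultPortFor(String scheme) {
        switch (scheme) {
            case "http":  return 80;
            case "https": return 443;
            default:      return -1;
        }
    }

    public static void main(String[] args) {
        // Diamond operator: the type arguments on the right-hand side are inferred.
        Map<String, List<Integer>> portsByHost = new HashMap<>();

        // try-with-resources combined with multi-catch of unrelated exception types.
        // "ports.txt" is a hypothetical input file used only for illustration.
        try (BufferedReader reader = new BufferedReader(new FileReader("ports.txt"))) {
            int port = Integer.parseInt(reader.readLine());
            portsByHost.put("localhost", new ArrayList<Integer>());
            portsByHost.get("localhost").add(port);
        } catch (IOException | NumberFormatException e) {
            System.err.println("Could not read port list: " + e.getMessage());
        }

        System.out.println(defaultPortFor("https")); // prints 443
    }
}
```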
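To make the java.util.concurrent facilities discussed above more concrete, here is a small sketch (not taken from the workshop itself) that uses the Executors task-execution framework together with a ConcurrentHashMap and its atomic putIfAbsent/replace primitives to count words from several threads without any explicit locking. The word list and pool size are arbitrary.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ConcurrentWordCount {
    public static void main(String[] args) throws Exception {
        final ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<String, Integer>();
        ExecutorService pool = Executors.newFixedThreadPool(4);   // task execution framework

        String[] words = {"fork", "join", "fork", "pool", "join", "fork"};   // arbitrary input
        List<Future<?>> futures = new ArrayList<Future<?>>();
        for (final String word : words) {
            futures.add(pool.submit(new Runnable() {
                @Override public void run() {
                    // Lock-free update loop built on the map's atomic putIfAbsent/replace.
                    for (;;) {
                        Integer old = counts.get(word);
                        if (old == null) {
                            if (counts.putIfAbsent(word, 1) == null) return;
                        } else if (counts.replace(word, old, old + 1)) {
                            return;
                        }
                    }
                }
            }));
        }
        for (Future<?> f : futures) f.get();   // wait for every task to complete
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println(counts);            // e.g. {pool=1, join=2, fork=3}
    }
}
```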
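A minimal sketch of the Fork/Join usage pattern described above, assuming a simple array-summing task; the threshold and array contents are arbitrary and chosen only for illustration. Each task splits its range in half, forks one half so that an idle worker can steal it, and computes the other half directly.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Hypothetical example: sum an array by recursively splitting the work into subtasks.
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000;   // arbitrary cut-off for direct computation
    private final long[] data;
    private final int lo, hi;

    public SumTask(long[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {              // small enough: compute sequentially
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                             // submit the left half; an idle worker may steal it
        long rightSum = right.compute();         // compute the right half in the current thread
        return left.join() + rightSum;           // wait for the left half and combine the results
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long total = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(total);               // 499999500000
    }
}
```

Forking one half and computing the other in the current thread keeps every worker busy and limits the number of joins, which is the usage pattern the framework is designed around.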
The work-stealing approach is underpinned by a deque data structure, which allows it to be relatively lightweight and consequently to scale better. Professor Doug Lea, the main author and architect of the java.util.concurrent package, delivered a presentation that described the challenges involved in developing an efficient and scalable Java concurrency package, explained the powerful features available in the java.util.concurrent package, and explored the new Fork/Join framework in JDK 7. The workshop stressed the importance of mastering and leveraging these features within concurrent applications in order to develop efficient, scalable applications that perform well on modern multi-core hardware. The purpose and applicability of each feature was discussed, along with case studies and code examples where appropriate.

JDK 7 introduced several new I/O APIs via JSR 203 (NIO.2):
• a filesystem I/O API that abstracts the notions of a path and a filesystem into Java objects of the corresponding classes (Path and FileSystem, respectively). Operations such as file copying, file change notification, symbolic links, directory traversal, and querying file attributes are handled easily by the API
• a socket channel API with support for multicast operations and improvements to socket management
• an asynchronous I/O API for sockets and files

Each of these APIs was discussed with relevant examples in this workshop. Another powerful new feature introduced in JDK 7 was the InvokeDynamic support in the JVM. This allows dynamically typed languages like Ruby and Python to be…
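As a brief, hypothetical illustration of the NIO.2 filesystem API mentioned above (JSR 203), the sketch below copies a file, walks a directory tree, and queries a file attribute using only the standard java.nio.file classes; the paths are placeholders.

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.BasicFileAttributes;

public class Nio2Demo {
    public static void main(String[] args) throws IOException {
        Path source = Paths.get("data", "input.txt");   // placeholder paths
        Path target = Paths.get("data", "copy.txt");

        // Copy a file, replacing the target if it already exists.
        Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);

        // Walk the directory tree and print every regular file found.
        Files.walkFileTree(Paths.get("data"), new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                System.out.println(file + " (" + attrs.size() + " bytes)");
                return FileVisitResult.CONTINUE;
            }
        });

        // Query a file attribute through the new API.
        System.out.println("Last modified: " + Files.getLastModifiedTime(target));
    }
}
```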
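The JSR 292 support surfaces to Java code through the java.lang.invoke API. The following small sketch, which is not tied to any particular dynamic language, resolves and invokes a method handle at runtime; method handles are the building block that invokedynamic call sites and dynamic-language runtimes use.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class IndyDemo {
    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        // Resolve String.toUpperCase() at runtime as a method handle.
        MethodHandle toUpper = lookup.findVirtual(
                String.class, "toUpperCase", MethodType.methodType(String.class));
        // Invoke it; the receiver "jdk 7" is passed as the first argument.
        String result = (String) toUpper.invokeExact("jdk 7");
        System.out.println(result); // JDK 7
    }
}
```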
Proceedings of the 2013 IEEE/ACM International Symposium on Code Generation and Optimization, Feb 23, 2013
Lecture Notes in Computer Science, 2010
The productivity of a compiler development team depends on its ability not only to design effective solutions to known code generation problems, but also to uncover potential code improvement opportunities. This paper describes a data mining tool that can be used to identify such opportunities based on a combination of hardware-profiling data and compiler-generated counters. This data is combined into an Execution Flow Graph (EFG), and then FlowGSP, a new data mining algorithm, finds sequences of attributes associated with subpaths of the EFG. Many examples of important opportunities for code improvement in the IBM® Testarossa compiler are described to illustrate the usefulness of this data mining technique. The mining tool is especially useful for programs whose execution is not dominated by a small set of frequently executed loops. Information about the amount of space and time required to run the mining tool is also provided. In comparison with a manual search through the data, the mining tool saved a significant amount of compiler development time and effort.
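The paper itself defines the EFG and FlowGSP formally; purely as a toy illustration (none of these class or attribute names come from the paper), the sketch below models EFG nodes annotated with profiling attributes and checks whether a given attribute sequence occurs along one path. A real EFG is a graph and FlowGSP mines all subpaths; this sketch follows a single chain only to make the idea concrete.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy illustration only: nodes carry attributes drawn from hardware-profiling data
// and compiler-generated counters.
public class EfgSketch {
    static class Node {
        final Set<String> attributes = new HashSet<String>();
        final List<Node> successors = new ArrayList<Node>();
        Node(String... attrs) { attributes.addAll(Arrays.asList(attrs)); }
    }

    // True if each attribute in 'sequence' is found on consecutive nodes starting at 'start'.
    static boolean occursAlongPath(Node start, List<String> sequence) {
        Node current = start;
        for (String attr : sequence) {
            if (current == null || !current.attributes.contains(attr)) return false;
            current = current.successors.isEmpty() ? null : current.successors.get(0);
        }
        return true;
    }

    public static void main(String[] args) {
        Node a = new Node("i-cache-miss", "loop-header");
        Node b = new Node("branch-mispredict");
        Node c = new Node("store-stall");
        a.successors.add(b);
        b.successors.add(c);
        System.out.println(occursAlongPath(a, Arrays.asList("i-cache-miss", "branch-mispredict"))); // true
    }
}
```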
Probably the most efficient way to test somebody's knowledge of a specific area, or to gather somebody's opinion, is to create a test specially designed for that purpose. Test processing is a resource-intensive and time-consuming operation, which is why automated test processing procedures have existed for a long time. The test forms used by today's common test processing machines are…
Proceedings of the 2009 Conference of the Center for Advanced Studies on Collaborative Research - CASCON '09, 2009
Proceedings of the 2010 Conference of the Center for Advanced Studies on Collaborative Research - CASCON '10, 2010
The past ten years have seen a significant shift in focus in the design of microprocessors. Previously, great improvements in performance could usually be realized between processor generations by simply increasing the clock frequency and shrinking the die size. However, due to physical and engineering limitations, a stage was reached where increasing the processor frequency was no longer a practical…