Papers by Robert Hiromoto
2022 13th International Conference on Computing Communication and Networking Technologies (ICCCNT)
2022 IEEE 13th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON)
2022 International Symposium on Networks, Computers and Communications (ISNCC)
As introduced by Shannon in “Communication Theory of Secrecy Systems”, entropy and unicity distance are defined at a global level, under the assumption that the symbols behave like independent random variables. However, when entropy and unicity are applied to a language, e.g., in encryption and decryption, the symbols (letters) of the language are not independent. We therefore introduce a new measure, HL(s), called the “local entropy” of a string s, which incorporates a priori information about the language and text at the time of application. Since the unicity distance is computed from the entropy, local entropy leads in turn to a local unicity distance for a string. Our local entropy measure explains why some texts can be decrypted from fewer symbols than Shannon’s unicity predicts while other texts require more. We demonstrate local entropy using a substitution cipher, along with results for an algorithm based on the principle, and show that Shannon’s unicity is an average measure rather than a lower bound; this motivates a discussion of the implications of local entropy and unicity distance.
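The abstract above does not give the exact formula for HL(s), so the following is only an illustrative sketch of the idea: score a specific string by the average surprisal of its symbols under a symbol-frequency model estimated from a reference corpus (the a priori language information). Strings built from common symbols score low; strings of rare symbols score high, which is the intuition behind some texts breaking before or after Shannon's average unicity point.

```python
from collections import Counter
from math import log2

def local_entropy(s, corpus):
    """Illustrative per-string 'local entropy': the average -log2 P(c)
    over the characters of s, with P estimated from a reference corpus
    (a priori language information). This is a stand-in for HL(s);
    the paper's exact definition may differ."""
    counts = Counter(corpus)
    total = sum(counts.values())
    # Laplace smoothing so symbols unseen in the corpus get nonzero mass
    alphabet = set(counts) | set(s)
    probs = {c: (counts[c] + 1) / (total + len(alphabet)) for c in alphabet}
    return sum(-log2(probs[c]) for c in s) / len(s)

corpus = "the quick brown fox jumps over the lazy dog " * 100
common = local_entropy("the the the", corpus)  # frequent symbols: low surprisal
rare = local_entropy("jazz quiz", corpus)      # rarer symbols: higher surprisal
```

A text whose symbols are locally more predictable than the language average would, on this view, need fewer ciphertext symbols for unique decryption.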
How does your system execute? Is everything normal? Is something out of the ordinary? Let us consider dynamic monitoring of an executing system, with the formalism of the discussion based on:
International Journal of Computing, 2014
The security of block and product ciphers is considered using a set theoretic estimation (STE) approach to decryption. Known-ciphertext attacks are studied using permutation (P) and substitution (S) keys. The blocks are formed from two (2) alphabetic characters (meta-characters), where the applications of P and S upon the ASCII byte representation of each of the two characters are allowed to cross byte boundaries. The application of STE forgoes the need to employ chosen-plaintext or known-plaintext attacks.
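The key structural point in this entry is that P and S operate on the fused byte representation of a block, so bits cross byte boundaries. A toy sketch of that block construction (the paper's actual P/S keys and the STE attack itself are not reproduced; the rotation permutation and XOR substitution key below are invented for illustration):

```python
def encrypt_block(pair, perm, sub):
    """Toy P-then-S cipher on a 2-character block: the two ASCII bytes
    are fused into one 16-bit meta-character, the bits are permuted
    (allowing crossings of the byte boundary), then a substitution
    key is XORed in. Illustrative only."""
    v = (ord(pair[0]) << 8) | ord(pair[1])   # 16-bit meta-character
    p = 0
    for i, src in enumerate(perm):           # bit permutation P
        p |= ((v >> src) & 1) << i
    return p ^ sub                           # substitution S (XOR key)

def decrypt_block(c, perm, sub):
    """Invert S, then invert the bit permutation P."""
    p = c ^ sub
    v = 0
    for i, src in enumerate(perm):
        v |= ((p >> i) & 1) << src
    return chr(v >> 8) + chr(v & 0xFF)

perm = list(range(1, 16)) + [0]  # bit rotation: crosses the byte boundary
sub = 0xBEEF                     # hypothetical substitution key
ct = encrypt_block("he", perm, sub)
```

Because the permutation mixes bits across the two bytes, single-byte frequency analysis no longer applies directly, which is part of what makes the STE formulation interesting.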
The storytelling delivery mechanism of the future is certainly the Internet, due to its interactivity and interconnectivity. However, Internet bandwidth is the single most constrictive aspect of real-time, high-quality, interactive character animation. The goal of this research is to make the smallest amount of data represent the largest spectrum of character definition and movement. We suggest that surfaces must be exact (versus polygonal approximations), movement must be abstracted (rather than geometry-specific), and computational power should be balanced against bandwidth (i.e., computationally expensive compression and decompression should occur at the ends of the Internet pipeline to reduce traffic).
Set Theoretic Estimation (STE) has been known and applied to various problems since 1969. Traditionally, STE has been used to solve vector problems in a Hilbert space, using a distance metric to create a volume in that space. Given this type of space structure, Optimal Bounding Ellipsoid (OBE) algorithms are typically used to simplify STE estimate processing; however, OBEs are not bounded to the original estimate volume defined by the STE problem. At times the OBE algorithm includes spurious estimates in the new volume that may incorrectly expand the solution set. In contrast to prior implementations of STE, the algorithms used in this dissertation are based in a topological space. Errors may still be present in the selected property sets, but these sets are used only as a priori information that is static and does not expand (change). Therefore, by choosing a topological space the problems associated with OBEs are avoided. For the first time, STE has been applied to decryption and ST...
Traditional Probabilistic Risk Assessment (PRA) methods have been developed and are quite effective in evaluating risk associated with complex systems. However, traditional PRA methods lack the capability to evaluate complex dynamic systems where end states may vary as a function of transition time from physical state to physical state. A station blackout scenario associated with loss of heat sink, for example, may result in different outcomes depending on the timing of events, the amount of energy stored within the reactor, and the time the blackout occurs following reactor scram. These time and energy scales associated with the transient may vary as a function of transition time to a different physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems. Methods for solving DPRA include the theory of Continuous Event Trees (CET), Discrete Dynamic Event Trees (DDET), and Cell-to-Cell Mapping Techniques (CCMT). Each method has its advantages and disadvantages for solving complex dynamic systems. A limitation of DPRA is its potential for state or combinatorial explosion, which grows as a function of the number of components as well as the sampling of state-to-state transition times of the entire system. Specifically, the theory of CET involves the solution of a partial differential equation that results in a probability density function (PDF) over state variables of interest such as thermal-hydraulic properties (e.g., temperature, energy, pressure). The PDE is similar to the Boltzmann Transport Equation, in that the PDF of state properties is being transported. The PDF contains the information of the state. As the number of state variables increases, the complexity of the problem also grows significantly.
In order to address the combinatorial complexity arising from the number of possible state configurations and the discretization of transition times, a characteristic scaling metric, LENDIT (length, energy, number, distribution, information, and time), is proposed as a means to describe systems uniformly and thus to express the relational constraints expected in the dynamics of a complex (coupled) system. When LENDIT is used to characterize four sets, 'state, system, resource and response' (S2R2), describing reactor operations (normal and off-normal), LENDIT and S2R2 in combination have the potential to 'branch and bound' the state space investigated by DPRA. LENDIT and S2R2 provide a consistent methodology across both deterministic and heuristic domains. This paper explores the reduction of a PWR's state space using a RELAP5 model of a plant subject to an SBO. We introduce LENDIT metrics and S2R2 sets to show the branch-and-bound concept.
Methods for developing Phenomenological Identification and Ranking Tables (PIRT) for nuclear power plants have been a useful tool in providing insight into modelling aspects that are important to safety. These methods combine expert knowledge of reactor plant transients and thermal-hydraulic codes to identify the areas of highest importance. Quantified PIRT provides a rigorous method for quantifying the phenomena that can have the greatest impact. The transients that are evaluated, and the timing of those events, are typically developed in collaboration with the Probabilistic Risk Analysis. Though quite effective in evaluating risk, traditional PRA methods lack the capability to evaluate complex dynamic systems where end states may vary as a function of transition time from physical state to physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems. A limitation of DPRA is its potential for state or combinatorial explosion t...
2017 9th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), 2017
We propose the development of a cyber-secure, Internet of Things (IoT), supply chain risk management architecture. The proposed architecture is designed to reduce vulnerabilities to malicious supply chain risks by applying machine learning (ML), cryptographic hardware monitoring (CHM), and distributed system coordination (DSC) techniques to mitigate the consequences of unforeseen threats (including general component failure). In combination, these crosscutting technologies will be integrated into an Instrumentation-and-Control/Operator-in-the-Loop (ICOL) architecture to learn normal and abnormal system behaviors. The detection of absolute or perceived abnormal system-component behaviors will trigger an ICOL alert that requires an operator's manual verification-response action (i.e., confirming that the detected alert is, or is not, a viable control system threat). The operator's verification-response will be fed back into the ML systems to recalibrate the perceived normal or abnormal state of the system's behavior.
Nuclear Engineering and Design, 2015
Evaluation of the impacts of uncertainty and sensitivity in modeling presents a significant set of challenges, particularly for high-fidelity modeling. Computational costs and the validation of models create a need for cost-effective decision making with regard to experiment design. Experiments designed to validate computational models can be used to reduce uncertainty in the physical model. In some cases, large uncertainty in a particular aspect of the model may or may not have a large impact on the final results. For example, modeling of a relief valve may result in large uncertainty; however, the actual effect on the final peak clad temperature in a reactor transient may be small, and the large uncertainty with respect to valve modeling may be considered acceptable. Additionally, the ability to determine the adequacy of a model and the validation supporting it should be considered within a risk-informed framework. Low-fidelity modeling with large uncertainty may be considered adequate if the uncertainty is acceptable with respect to risk. In other words, models that are used to evaluate the probability of failure should be evaluated more rigorously, with the intent of increasing safety margin. Probabilistic risk assessment (PRA) techniques have traditionally been used to identify accident conditions and transients. Traditional classical event tree methods utilize analysts' knowledge and experience to identify the important timing of events in coordination with thermal-hydraulic modeling. These methods lack the capability to evaluate complex dynamic systems, in which the time and energy scales associated with transient events may vary as a function of the transition times and energies required to arrive at a different physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems.
Unfortunately, DPRA methods introduce issues associated with the combinatorial explosion of states. This paper presents a methodology to address combinatorial explosion using a Branch-and-Bound algorithm applied to Dynamic Event Trees (DET), which utilizes LENDIT (L-Length, E-Energy, N-Number, D-Distribution, I-Information, and T-Time) together with set theory describing the system, state, resource, and response (S2R2) sets to create bounding functions for the DET. The optimization of the DET in identifying high-probability failure branches is extended to create a Phenomenological Identification and Ranking Table (PIRT) methodology to evaluate the modeling parameters important to the safety of those branches that have a high probability of failure. The PIRT can then be used as a tool to identify and evaluate the need for experimental validation of models that have the potential to reduce risk. To demonstrate this methodology, a Boiling Water Reactor (BWR) Station Blackout (SBO) case study is presented.
Scientific Programming, 1996
In this article we describe the systematic development of two implementations of the Jacobi eigensolver and give their initial performance results for the MIT/Motorola Monsoon dataflow machine. Our study is carried out using MINT, the MIT Monsoon simulator. The functional semantics with respect to array updates, which cause excessive array copying, have led us to an implementation of a parallel "group rotations" algorithm first described by Sameh. Our version of this algorithm requires O(n^3) operations, whereas Sameh's original version requires O(n^4) operations. The convergence of the group algorithm is briefly treated. We discuss the issues involved in rewriting the algorithm in Sisal, a strict, purely functional language without explicit I-structures.
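For readers unfamiliar with the method, a minimal serial sketch of the classical Jacobi eigensolver follows: each rotation annihilates one off-diagonal element of a symmetric matrix, and sweeps repeat until the off-diagonal mass vanishes. This is only the textbook baseline; the paper's parallel "group rotations" variant applies sets of disjoint rotations simultaneously, and the Sisal/dataflow issues are not represented here.

```python
import numpy as np

def jacobi_eigen(A, tol=1e-10, max_sweeps=50):
    """Cyclic Jacobi eigensolver for a symmetric matrix.
    Returns (eigenvalues, eigenvector matrix). Serial sketch of the
    basic rotation step only."""
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = np.sqrt(np.sum(A**2) - np.sum(np.diag(A)**2))
        if off < tol:          # off-diagonal mass is gone: converged
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-15:
                    continue
                # rotation angle chosen to annihilate A[p, q]
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J
                V = V @ J
    return np.diag(A), V

A = np.array([[4.0, 1.0], [1.0, 3.0]])
vals, vecs = jacobi_eigen(A)
```

In a functional language without in-place array updates, each rotation as written above would copy the matrix, which is exactly the copying overhead that motivates the group-rotation restructuring.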
Proceedings of the 6th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems, 2011
The security of block and product ciphers is considered using a set theoretic estimation (STE) approach to decryption. Known-ciphertext attacks are studied using permutation (P) and substitution (S) keys. The blocks are formed from three (3) alphabetic characters (meta-characters), where the applications of P and S upon the ASCII byte representation of each of the three characters are allowed to cross byte boundaries. The application of STE forgoes the need to employ chosen-plaintext or known-plaintext attacks.
The particle-in-cell (PIC) method, used in the simulation of fluid dynamics and plasma interactions, is composed of two distinct computational phases that are each highly parallel, yet problematic in their integrated mapping to parallel processing systems. On a parallel distributed-memory system, the different data structures prescribed by the push (the acceleration of particles) and solver (the solution of Poisson's equation) computational phases may require large amounts of data communication during the transition between these phases. The ratio of global to nearest-neighbor data communication is a function of the parallel decomposition, the computational algorithms selected (e.g., multigrid versus the Fast Fourier Transform (FFT)), and the characteristics of the physical simulation under investigation. The PIC codes implemented and described here are plasma simulation programs. Unfortunately, these codes differ in their computational details, making direct comparisons of the...
Computers & Mathematics with Applications, 1996
Abstract. A modified Monte Carlo technique, first developed for estimating a solution to Poisson's equation, is described and estimates of its computational complexity are derived. The method yields better estimates than the standard Monte Carlo approach by incorporating boundary information more efficiently and by implicitly reusing random walk information gathered throughout the course of the computation. The new approach reduces the computational complexity of the length of a random walk by one order of magnitude compared to a standard method described in many textbooks. The number of walks necessary to achieve a desired accuracy is also reduced.
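The textbook baseline the abstract refers to can be sketched briefly: for Laplace's equation (Poisson with zero source), the solution at an interior grid point equals the expected boundary value reached by an unbiased random walk started there. The sketch below is that standard method only; the paper's modification (reusing walk information and exploiting boundary data) is not reproduced, and the grid size and boundary condition are chosen arbitrarily for illustration.

```python
import random

def walk_estimate(grid_n, boundary, start, n_walks, seed=0):
    """Standard Monte Carlo estimate of the solution of Laplace's
    equation at an interior point of an (grid_n x grid_n) grid:
    average the boundary value at the exit point of unbiased
    nearest-neighbor random walks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x, y = start
        # step until the walk leaves the interior
        while 0 < x < grid_n and 0 < y < grid_n:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
        total += boundary(x, y)
    return total / n_walks

# toy boundary condition: u = 1 on the top edge, 0 elsewhere
n = 8
top_edge = lambda x, y: 1.0 if y == n else 0.0
est = walk_estimate(n, top_edge, (n // 2, n // 2), 2000)
```

At the center of the square, symmetry puts the exact answer at 1/4; the Monte Carlo estimate hovers around that value with an error that shrinks only as the square root of the number of walks, which is why reducing walk length and walk count, as the paper does, matters.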
International Journal of Computing, Aug 1, 2014
This paper examines the computational programming issues that arise from the introduction of GPUs and multi-core computer systems. The discussion and analysis examine the implications of two principles, spatial and temporal locality, that provide useful metrics to guide programmers in designing and implementing efficient sequential and parallel application programs. Spatial and temporal locality represent a science of information flow and are relevant to the development of highly efficient computational programs. The art of high-performance programming is to take combinations of these principles, unravel the bottlenecks and latencies associated with each manufacturer's architecture, and develop appropriate coding and/or task-scheduling schemes to mitigate or eliminate these latencies.
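The spatial-locality principle can be shown with a small traversal-order example (this sketch is illustrative and not drawn from the paper). A row-major array stores each row contiguously, so iterating rows-then-columns touches adjacent memory, while the opposite order jumps a full row between accesses; both orders compute the same result, but on large arrays in a compiled language the cache-friendly order is markedly faster. (At this toy size in Python, interpreter overhead masks the timing difference.)

```python
import numpy as np

def sum_row_major(a):
    """Traverse in storage order (row-major): consecutive accesses
    touch adjacent memory -- good spatial locality."""
    total = 0.0
    rows, cols = a.shape
    for i in range(rows):
        for j in range(cols):
            total += a[i, j]
    return total

def sum_col_major(a):
    """Traverse against storage order: each access jumps a full row
    ahead in memory -- poor spatial locality, so each cache line is
    fetched many times on large arrays."""
    total = 0.0
    rows, cols = a.shape
    for j in range(cols):
        for i in range(rows):
            total += a[i, j]
    return total

a = np.arange(12.0).reshape(3, 4)  # NumPy defaults to row-major (C order)
```

Temporal locality is the complementary idea: restructuring loops (e.g., blocking/tiling) so that a value fetched into cache is reused before being evicted.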