Papers by Alexander Kamkin
2021 Ivannikov Memorial Workshop (IVMEM), 2021
This paper considers open-source tools for the logical-synthesis and place-and-route hardware design stages. Several flows (CADs), including qFlow, OpenLANE, Coriolis, VTR, and SymbiFlow, are described. For experimental evaluation of these flows, two RISC-V implementations have been used: schoolRISCV and PicoRV32. The results show that open-source flows are capable of producing physical layouts for realistic examples. At the same time, commercial CADs produce more efficient designs in terms of clock frequency.
2020 Ivannikov Memorial Workshop (IVMEM)
Cryptographic protocols are utilized for establishing a secure session between “honest” agents that communicate strictly according to the protocol rules, as well as for ensuring the authenticated and confidential transmission of messages. The specification of a cryptographic protocol is usually presented as a set of requirements for the sequences of transmitted messages, including the format of such messages. Note that a protocol can describe several execution scenarios. All these requirements lead to a huge formal specification for a real cryptographic protocol, and therefore it is difficult to verify the security of the whole cryptographic protocol at once. In this paper, to overcome this problem, we suggest verifying the protocol security for its fragments. Namely, we verify the security properties for a special set of so-called traces of the cryptographic protocol. Intuitively, a trace of the cryptographic protocol is a sequence of computations, value checks, and transmissions on the sides of “honest” agents permitted by the protocol. In order to choose such a set of traces, we introduce an adversary model and the notion of a similarity relation for traces. We then verify the security properties of the selected traces with the Tamarin Prover. Experimental results for the EAP and Noise protocols clearly show that this approach can be promising for automatic verification of large protocols.
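As a loose illustration of the trace-selection idea (not code from the paper; the trace encoding and the similarity criterion below are hypothetical simplifications), a similarity relation can be used to keep one representative per class of traces before handing them to a prover:

```python
# A trace is modeled as a sequence of (action, role, payload) steps performed
# by "honest" agents; concrete payloads are ignored by the similarity relation.

def canonical_form(trace):
    """Similarity key: two traces are considered similar if they perform the
    same kinds of actions in the same order on the same roles, regardless of
    the concrete values computed or checked."""
    return tuple((action, role) for action, role, *_ in trace)

def select_representatives(traces):
    """Keep one trace per similarity class; only these representatives would
    be passed to a prover such as Tamarin."""
    classes = {}
    for trace in traces:
        classes.setdefault(canonical_form(trace), trace)
    return list(classes.values())

if __name__ == "__main__":
    traces = [
        (("send", "initiator", "nonce1"), ("check", "responder", "mac1")),
        (("send", "initiator", "nonce2"), ("check", "responder", "mac2")),
        (("send", "initiator", "nonce1"), ("recv", "responder", "nonce1")),
    ]
    for t in select_representatives(traces):
        print(t)
```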
2016 IEEE East-West Design & Test Symposium (EWDTS), 2016
This paper overviews a technique for verifying cache coherence protocols described in the Promela language. The approach comprises the following steps. First, a model written for a certain configuration of the memory system is generalized into a model parameterized with the number of processors. Second, the parameterized model is abstracted from the exact number of processors. Finally, the abstract model is verified with the Spin model checker in the usual way. The suggested technique has been successfully applied to verification of the MOSI protocol implemented in the Elbrus computer systems.
Proceedings of the Institute for System Programming of RAS, 2018
In this work, some issues of automated construction of test programs intended for functional verification of microprocessor branch units are considered. Problems that arise when creating such programs are defined, and techniques for their automated solution are suggested. The article focuses on the general issues of branch processing mechanisms and does not touch upon problems specific to particular microprocessor architectures. The suggested techniques can be used in industrial test program generators.
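A toy sketch of the kind of enumeration such a generator might perform (the mnemonics are illustrative RISC-style assembly and are not tied to any particular architecture or to the paper's actual generator):

```python
from itertools import product

def emit_branch_test(pattern):
    """Emit a toy assembly snippet in which each branch in the chain is forced
    to be taken (1) or not taken (0) according to `pattern`."""
    lines = []
    for i, taken in enumerate(pattern):
        # Set up the comparison so that the branch outcome matches `taken`.
        lines.append(f"    li   t0, {1 if taken else 0}")
        lines.append(f"    bnez t0, target_{i}")
        lines.append(f"    nop                # fall-through path of branch {i}")
        lines.append(f"target_{i}:")
    lines.append("    ret")
    return "\n".join(lines)

if __name__ == "__main__":
    # Systematically cover all taken/not-taken combinations for a chain of
    # three consecutive branches.
    for pattern in product([0, 1], repeat=3):
        print(f"# scenario {pattern}")
        print(emit_branch_test(pattern))
        print()
```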
2016 IEEE East-West Design & Test Symposium (EWDTS), 2016
In this paper we propose to think out of the box and discuss an approach for universal mitigation of aging induced by Negative Bias Temperature Instability (NBTI), untied from the limitations of its modelling. The cost-effective approach exploits a simple property of a randomized design, i.e., equalized signal probability and switching activity at gate inputs. The techniques considered for structural design randomization involve both the hardware architecture and embedded software layers. Ultimately, the proposed approach aims at extending the reliable lifetime of nanoelectronic systems.
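A minimal Monte Carlo sketch (purely illustrative; the bias value and the XOR masking below are assumptions, not the paper's technique in detail) showing how randomizing a heavily biased signal equalizes its signal probability toward 0.5:

```python
import random

def signal_probability(bits):
    """Fraction of cycles in which the signal is logic 1."""
    return sum(bits) / len(bits)

def randomize(bits, seed=0):
    """XOR the original signal with a pseudo-random mask, as a crude stand-in
    for design-level randomization; the mask would be removed downstream."""
    rng = random.Random(seed)
    return [b ^ rng.randint(0, 1) for b in bits]

if __name__ == "__main__":
    rng = random.Random(42)
    # A biased control signal: high only 5% of the time, which stresses the
    # gates it drives in one polarity.
    biased = [1 if rng.random() < 0.05 else 0 for _ in range(100_000)]
    print("original   p(1) =", round(signal_probability(biased), 3))
    print("randomized p(1) =", round(signal_probability(randomize(biased)), 3))
```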
Proceedings of the Institute for System Programming of the RAS, 2019
Data access conflicts may arise in hardware designs. One of the ways of detecting such conflicts is static analysis of hardware descriptions in an HDL. We propose a static-analysis-based approach to extracting data conflicts from HDL descriptions. The approach has been implemented in the Retrascope tool. The following types of conflicts are considered: simultaneous reads and writes, simultaneous writes, reading of uninitialized data, and no reads between two writes. Conflict assertions are formulated as conditions on variables. HDL descriptions are automatically translated into formal models suitable for the nuXmv model checker. The translation process consists of the following steps: 1) preliminary processing; 2) Control Flow Graph (CFG) construction; 3) CFG transformation into a Guarded Actions Decision Diagram (GADD); 4) GADD translation into the nuXmv format. Conflict assertions are automatically built by static analysis of the GADD model and passed to the nuXmv model checker. Bounded model checking is used to check whether these assertions are satisfiable. If they are, counterexamples are generated and then translated into HDL testbenches by the Retrascope tool. The proposed approach has been applied to several open-source HDL benchmarks, including Texas-97, Verilog2SMV, VCEGAR, and mips16 modules. Potential conflicts have been detected in all of these benchmarks. Future work includes propagation of conflict assertions to the interface level (thus obtaining assertions on modules' communication protocols) and generation of built-in HDL checkers.
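A small sketch of how "no simultaneous writes" assertions could be emitted for a model checker (the signal names are hypothetical and this is not the Retrascope implementation; nuXmv accepts invariant properties via INVARSPEC):

```python
from itertools import combinations

def simultaneous_write_assertions(write_enables):
    """For every pair of write-enable signals driving the same variable,
    produce a nuXmv invariant stating that they are never active together."""
    return [f"INVARSPEC !({a} & {b})" for a, b in combinations(write_enables, 2)]

if __name__ == "__main__":
    # Hypothetical write-enable signals extracted from a GADD model for one
    # register of the design.
    enables = ["proc_we", "dma_we", "dbg_we"]
    for assertion in simultaneous_write_assertions(enables):
        print(assertion)
```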
Proceedings of the Institute for System Programming of the RAS, 2016
In this paper, a tool for automatically generating test programs for MIPS64 memory management units is described. The solution is based on the MicroTESK framework being developed at the Institute for System Programming of the Russian Academy of Sciences. The tool consists of two parts: an architecture-independent test program generation core and MIPS64 memory subsystem specifications. Such a separation is not a new principle in the area: it is applied in a number of industrial test program generators, including IBM's Genesys-Pro. The main distinction is in how specifications are represented, what sort of information is extracted from them, and how that information is exploited. In the suggested approach, specifications comprise descriptions of the memory access instructions (loads and stores) and definitions of the memory management mechanisms such as translation lookaside buffers, page tables, table lookup units, and caches. A dedicated problem-oriented language, called MMUSL, is used for the task. The tool analyzes the MMUSL specifications and extracts all possible instruction execution paths as well as all possible inter-path dependencies. The extracted information is used to systematically enumerate test programs for a given user-defined test template and allows exhaustively exercising co-execution of the template instructions, including corner cases. Test data for a particular program are generated by using symbolic execution and constraint solving techniques.
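A toy sketch of the enumeration step (the path and dependency names are invented for illustration; the real MMUSL specifications describe far more devices and branch points):

```python
from itertools import product

# Toy execution paths of a memory access and toy inter-instruction
# dependencies.
PATHS = [(t, c) for t in ("tlbHit", "tlbMiss") for c in ("cacheHit", "cacheMiss")]
DEPENDENCIES = ["noDependency", "sameVirtualPage", "sameCacheLine"]

def template_cases(template=("load", "store")):
    """Enumerate execution-path combinations and dependencies for a
    two-instruction test template."""
    for paths in product(PATHS, repeat=len(template)):
        for dep in DEPENDENCIES:
            yield {"instructions": list(zip(template, paths)), "dependency": dep}

if __name__ == "__main__":
    cases = list(template_cases())
    print(len(cases), "test cases")   # 4 paths x 4 paths x 3 dependencies = 48
    print(cases[0])
```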
Proceedings of the Institute for System Programming of the RAS, 2016
This paper introduces a method for scalable verification of cache coherence protocols described in the PROMELA language. Scalability means that the resources spent on verification (first of all, machine time and memory) do not depend on the number of processors in the system under verification. The method comprises three main steps. First, a PROMELA model written for a certain configuration of the system is generalized into a model parameterized with the number of processors; to do this, some assumptions on the protocol are used as well as simple induction rules. Second, the parameterized model is abstracted from the number of processors by syntactical transformations of the model's assignments, expressions, and communication actions. Finally, the abstract model is verified with the SPIN model checker in the usual way. The method description is accompanied by a proof of its correctness. It is stated that the suggested abstraction is conservative in the sense that every invariant (a property that is true in all reachable states) of the abstract model is an invariant of the original model (invariant properties are the properties of interest during verification of cache coherence protocols). The method has been automated by a tool prototype that, given a PROMELA model, parses the code, builds the abstract syntax tree, transforms it according to the rules, and maps it back to PROMELA. The tool (and the method in general) has been successfully applied to verification of the MOSI protocols implemented in the Elbrus computer systems.
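A minimal sketch of the underlying abstraction idea (an existential abstraction written for illustration; it is not the paper's syntactic transformation of PROMELA code):

```python
def abstract_state(concrete_states):
    """Existential abstraction of a cache-coherence configuration: keep only
    which cache states occur among the processors, not how many of each.
    The size of the result does not depend on the number of processors."""
    return frozenset(concrete_states)

if __name__ == "__main__":
    # Two concrete configurations with different processor counts...
    small = ["M", "I", "I"]
    large = ["M"] + ["I"] * 63
    # ...collapse to the same abstract state, so the abstract model's size is
    # independent of the processor count.
    print(abstract_state(small), abstract_state(small) == abstract_state(large))
```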
In this paper we describe a method of automated test program generation intended for systematic functional verification of microprocessors. The method supplements such widespread practical approaches as software-based verification and random generation. In our method, construction of test programs is based on a microprocessor model, which includes a structural model and an instruction set model. The goal of generation is defined by means of instruction-level test coverage. Test programs are constructed by combining test situations for different sequences of instructions.
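A small sketch of combining test situations for a fixed instruction sequence (the instruction names and situations are illustrative, not taken from the paper):

```python
from itertools import product

# Test situations associated with each instruction kind.
SITUATIONS = {
    "add":  ["normal", "overflow"],
    "div":  ["normal", "divide_by_zero"],
    "load": ["cache_hit", "cache_miss"],
}

def test_programs(sequence):
    """Combine test situations for a fixed instruction sequence: every
    combination becomes a separate test program skeleton, which together
    cover the instruction-level test coverage goal."""
    options = [SITUATIONS[insn] for insn in sequence]
    for combo in product(*options):
        yield list(zip(sequence, combo))

if __name__ == "__main__":
    for program in test_programs(["add", "div", "load"]):
        print(program)
```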
Programming and Computer Software, 2015
The paper introduces a method for overcoming the state explosion arising when verifying concurrent and distributed computer systems. The method is based on projecting a system state space onto a number of subspaces associated with quite small and, generally speaking, overlapping groups of processes. Analysis of the system, i.e., checking whether a given property holds on the system states, is carried out by collaborative exploration of the projections' state graphs; the process is completed as soon as all transitions of all projections have been traversed (usually, this requires significantly less time than exploring the state graph of the entire system). To increase the controllability of the traversal, it is suggested to use techniques for cooperative search for paths in the projections (the latter may appear to be highly nondeterministic due to the loss of information upon projecting). In this work, certain issues of the introduced verification scheme are investigated, and results of some experiments are given. The method described can be applied to model checking as well as to model-based testing, namely for automatic test sequence generation.
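A toy sketch of the projection step (the state encoding and the transition set are invented for illustration; the real method also explores the projections cooperatively):

```python
def project(transitions, group):
    """Project global transitions onto a group of processes: keep only the
    components of the source/target states that belong to the group and drop
    transitions that do not change them."""
    projected = set()
    for src, dst in transitions:
        p_src = tuple(src[i] for i in group)
        p_dst = tuple(dst[i] for i in group)
        if p_src != p_dst:
            projected.add((p_src, p_dst))
    return projected

if __name__ == "__main__":
    # Global states of a 3-process system (one local state per process).
    transitions = [
        ((0, 0, 0), (1, 0, 0)),
        ((1, 0, 0), (1, 1, 0)),
        ((1, 1, 0), (1, 1, 1)),
        ((1, 1, 1), (0, 1, 1)),
    ]
    # Two small, overlapping groups of processes instead of the full system.
    for group in [(0, 1), (1, 2)]:
        print(group, sorted(project(transitions, group)))
```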
Proceedings of the Institute for System Programming of the RAS, 2015
The paper describes a method for constructing test oracles for memory subsystems of multicore microprocessors. The method is based on using nondeterministic reference models of the systems under test. The key idea of the approach is on-the-fly determinization of the model behavior by using reactions from the system. Every time a nondeterministic choice appears in the reference model, additional model instances are created and launched (each simulating a possible variant of the system behavior). When the testbench receives a reaction from the system under test, it terminates all model instances whose behavior is inconsistent with that reaction. An error is detected if there is no active instance of the reference model. The suggested method has been used in verification of the L3 cache of the Elbrus-8C microprocessor and made it possible to find three bugs.
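A skeleton of the determinization idea (reaction names are hypothetical; the real testbench runs full model instances rather than the precomputed reaction sequences used here):

```python
class Oracle:
    """Test-oracle skeleton: each instance is one deterministic resolution of
    the nondeterministic reference model, represented here simply by the
    sequence of reactions it predicts."""

    def __init__(self, predicted_reaction_sequences):
        self.instances = [list(seq) for seq in predicted_reaction_sequences]

    def observe(self, reaction):
        # Keep only instances whose next predicted reaction matches the one
        # actually produced by the design under test.
        self.instances = [
            seq[1:] for seq in self.instances if seq and seq[0] == reaction
        ]
        if not self.instances:
            raise AssertionError(f"unexpected reaction: {reaction!r}")

if __name__ == "__main__":
    # The model cannot predict in which order two independent read responses
    # leave the cache, so both orders are launched as instances.
    oracle = Oracle([["rdA", "rdB"], ["rdB", "rdA"]])
    oracle.observe("rdB")       # one instance survives
    oracle.observe("rdA")       # consistent: no error
    try:
        oracle.observe("rdC")   # no instance can explain this reaction
    except AssertionError as err:
        print("error detected:", err)
```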
East-West Design & Test Symposium (EWDTS 2013), 2013
The increasing complexity of hardware designs makes functional verification a challenge. The key issue of state-of-the-art verification approaches is obtaining a "good" model for automated test generation or formal property checking. In this paper, we describe techniques for deriving EFSM-based models from HDL descriptions and briefly discuss applications of such models for verification. The distinctive feature of the suggested approach is that it automatically determines which registers of a design encode its state and uses this information for model reconstruction.
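A deliberately crude sketch of the "which registers encode control state" question (a regex heuristic invented purely for illustration; the actual technique analyzes a parsed HDL model, not raw text):

```python
import re

def state_registers(verilog_source):
    """Toy heuristic: a register is treated as encoding control state if it is
    compared with a constant in some condition AND assigned a constant
    somewhere in the code."""
    compared = set(re.findall(r"(\w+)\s*==\s*\d", verilog_source))
    assigned = set(re.findall(r"(\w+)\s*<=\s*\d", verilog_source))
    return compared & assigned

if __name__ == "__main__":
    src = """
      always @(posedge clk) begin
        if (state == 0 && start) state <= 1;
        else if (state == 1)     state <= 0;
        data <= in + 1;
      end
    """
    print(state_registers(src))   # {'state'}
```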
Proceedings of the Institute for System Programming of the RAS, 2020
Abstract. In recent years, ISP RAS has been developing a system for deductive verification of machine (binary) code. The motivation is clear: modern compilers such as GCC and Clang/LLVM are not immune to bugs, so checking the correctness of the generated code (at least for components with heightened reliability and security requirements) is worthwhile. The key feature of the proposed approach is the ability to reuse source-level formal specifications (pre- and postconditions, loop invariants, lemmas, etc.) for machine code verification. The tool is based on a formal specification of the instruction set and provides a high degree of automation: it disassembles the machine code, extracting its semantics, adapts the high-level specifications to the machine code, and generates verification conditions. The system uses a number of third-party components, including a source code analyzer (Frama-C), a machine code analyzer (MicroTESK), and an SMT solver (CVC4). The modular architecture makes it possible to replace one component with another if the input format or the verification technique changes. The paper discusses the architecture of the tool, describes our implementation, and demonstrates an example of verifying the memset library function. Keywords: formal methods; deductive verification; binary code analysis; equivalence checking; instruction set architecture; machine code; compiler testing.
Abstract. In this work, an approach to generating test programs for functional verification of memory management units of microprocessors is proposed. The approach is based on formal specification of memory access instructions, namely load and store instructions, and of memory devices such as cache units and address translation buffers. The use of formal specifications helps automate the development of test program generators and makes verification systematic due to a clear definition of testing goals. In the suggested approach, test programs are constructed by using combinatorial techniques, which means that stimuli (sequences of loads and stores) are created by enumerating all feasible combinations of instructions, situations (instruction execution paths), and dependencies (sets of conflicts between instructions). Importantly, test situations and dependencies are automatically extracted from the specifications. The approach has been used in a number of industrial projects.
In this paper we describe a methodology and experience of simulation-based verification of microprocessor units based on cycle-accurate contract specifications. Such specifications describe the behavior of a unit in the form of preconditions and postconditions of microoperations. We have successfully applied the methodology to several units of an industrial microprocessor. The experience shows that cycle-accurate contract specifications are well suited to simulation-based verification, since, first, they represent the functional requirements on a unit in a comprehensible declarative form and, second, they make it possible to automatically construct test oracles that check unit correctness.
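A minimal sketch of a contract-driven oracle (the Contract class, the latency field, and the sample unit are assumptions made for illustration, not the paper's specification language):

```python
class Contract:
    """Cycle-accurate contract of a microoperation: a precondition on the
    unit's inputs/state when the operation starts and a postcondition on the
    outputs/state `latency` cycles later."""

    def __init__(self, name, latency, pre, post):
        self.name, self.latency, self.pre, self.post = name, latency, pre, post

def check(trace, contract, start):
    """Oracle derived from the contract: if the precondition holds at cycle
    `start`, the postcondition must hold `latency` cycles later."""
    if contract.pre(trace[start]):
        assert contract.post(trace[start], trace[start + contract.latency]), (
            f"{contract.name}: postcondition violated at cycle {start}"
        )

if __name__ == "__main__":
    # Toy unit: a 2-cycle incrementer with valid_in/valid_out signals.
    incr = Contract(
        "increment", latency=2,
        pre=lambda s: s["valid_in"],
        post=lambda s0, s1: s1["valid_out"] and s1["data_out"] == s0["data_in"] + 1,
    )
    trace = [
        {"valid_in": True,  "data_in": 41, "valid_out": False, "data_out": 0},
        {"valid_in": False, "data_in": 0,  "valid_out": False, "data_out": 0},
        {"valid_in": False, "data_in": 0,  "valid_out": True,  "data_out": 42},
    ]
    check(trace, incr, start=0)
    print("contract holds on the sample trace")
```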
2010 12th Biennial Baltic Electronics Conference, 2010
The paper describes a methodology for formal cycle-accurate specification of synchronous parallel-pipeline hardware. The main application of the methodology is simulation-based verification of control-intensive digital designs. Its key features are as follows: (1) resources of a design under verification (buffers, arbiters, data transfer channels, etc.) are specified by means of reusable cycle-accurate models; (2) operations of a design (pipeline control flows) are described by defining contracts (i.e., pre- and post-conditions) for all operation stages (functional units of a pipeline). Formal specifications of this kind can be easily applied to automate simulation-based verification. The suggested solution is aimed at achieving technological effectiveness of specification development.
Programming and Computer Software, 2014
Development of test programs and analysis of the results of their execution is the basic approach to verification of microprocessors at the system level. There is a variety of methods for the automation of test generation, ranging from the generation of random code to directed model-based test generation. However, there is no cure-all method; in practice, combinations of various complementary techniques are used. Unfortunately, no solution for the integration of various test generation methods into a unified environment is currently available. To test a microprocessor, verification engineers are forced to use many different test generators, which results in a number of difficulties, such as (1) the necessity to ensure the compatibility of tool configurations (each tool uses a specific description of the target microprocessor, which leads to duplication of information) and (2) the necessity to develop utilities for integrating the tools (different tools have different interfaces and use different data formats). This paper describes a concept of an extensible environment for test program generation for microprocessors. The environment provides a unified approach to test generation; it supports widespread test generation techniques and can be extended with new testing tools. The proposed concept has been partially implemented in MicroTESK (Microprocessor TEsting and Specification Kit).
Proceedings of the Spring/Summer Young Researchers' Colloquium on Software Engineering, 2010