ISSN 0146-4116, Automatic Control and Computer Sciences, 2015, Vol. 49, No. 8, pp. 727–734. © Allerton Press, Inc., 2015.
Original Russian Text © D.S. Lavrova, A.I. Pechenkin, 2015, published in Problemy Informatsionnoi Bezopasnosti. Komp’yuternye Sistemy.

Adaptive Reflexivity Threat Protection

D. S. Lavrova and A. I. Pechenkin

Peter the Great St. Petersburg Polytechnic University, St. Petersburg, 195251 Russia
e-mail: [email protected], [email protected]
Received May 8, 2015

Abstract—A confrontation between a security administrator and an intruder is presented as a conflict of information security. The formalization of the conflict with the use of Lefebvre’s algebra of conflicts is proposed. The effective behavior policies of the parties to the conflict in terms of protection have been analyzed. An approach to implementing these policies using deceptive systems has been suggested.

Keywords: deceptive system, intruder, threat model, protection mechanisms, Lefebvre’s algebra of conflicts

DOI: 10.3103/S0146411615080106

INTRODUCTION

In the context of ubiquitous computerization, information has become a critical resource, which has given rise to a significant increase in attempts to violate the security of information systems [1]. To improve the effectiveness of the information security of automated systems, a comprehensive approach based on a combination of a priori and a posteriori means of information protection is often used. It is common practice to complement these means of protection with other means that implement indirect protection by putting the attacker into a state of a priori uncertainty. Deceptive systems are an example of such means of protection. The mechanism of deceptive systems controls the attacker; therefore, by studying the behavior of the attacker, this protective mechanism can be expanded and supplemented in order to increase the degree of information security. To study the behavior of the attacker, one should consider the options for his actions in a confrontation, i.e., a conflict whose subject is information security.

FORMALIZATION OF THE INFORMATION-SECURITY CONFLICT

In the context of providing the information security of an information system (IS), an intruder is considered to be a party who stands against the protective mechanisms of the IS. Since the safeguards are set up by the security administrator of the IS, there is a confrontation between the intruder and the security administrator, which can be presented as a conflict between two parties. A conflict between these two security specialists that cannot be resolved peacefully will be called a conflict of information security (IT security conflict). The impossibility of a peaceful settlement of the IT security conflict stems from the opposite purposes of the security administrator and the intruder. The former wants to ensure the security of the system and prohibit illegitimate access to its infrastructure, while the latter wants to obtain information about the infrastructure of the protected system, acting illegitimately and breaking or circumventing the IS protection. The object (target) of the conflict is the IS infrastructure. The parties to the conflict are the security administrator and the intruder. A formal description of the IT security conflict is necessary to improve the level of protection of information systems, since, once mathematically described, the possible intruder policies and their dynamics can be used to extend the functionality of the safeguards.
For the formal description of this process, we should carefully select a mathematical apparatus that makes it possible to describe the characteristics of the process as precisely and adequately as possible. A combination of Lefebvre’s algebra of conflicts and the game-theory apparatus is the best option for the formal description of a conflict in which the parties imitate each other’s reasoning, trying to impose a certain pattern of behavior on each other, and can change behavior policies. An advantage of the mathematical apparatus of Lefebvre’s algebra of conflicts with respect to the problem of a formal description of the conflict is the ability to imitate the reasoning of the contending parties and to describe reflexive control, i.e., the transfer of grounds that lead the opponent to make a wrong decision [2].

In accordance with the concept of Lefebvre’s algebra [3], which describes the conflict between the security administrator A and the intruder H, we introduce the following designations:

—S is an impartial situation represented as a foothold on which the confrontation is going on. With regard to the conflict of information security, the foothold is the protected information system infrastructure, including the physical and logical system structures, as well as the set of integrated system safeguards.

—A’s notion of the foothold S is a reflection of S onto A’s variety of understandings, denoted as S_A. The security administrator cannot have a complete set of information about the security of the system, since his variety of understandings is limited by the safeguards he uses; therefore, there is always a nonzero risk of having undiagnosed or previously unknown security issues.

—The objective I_A of the security administrator, which is to ensure the information security of the protected system while facing the intruder and to identify attempts at security breaches.

—Doctrine D_A, which is a set of measures and actions to ensure the security of the system and the information stored therein.

—A solution R_A to the problem of the information security of the system, obtained by applying doctrine D_A to S_A.

—The procedure by which the administrator makes a decision, which can be represented as follows:

    \frac{I_A}{S_A} D_A \to \frac{R_A}{S_A}.    (1)

Similarly, we introduce notations that describe the terms of the conflict for the intruder as follows:

—H’s notion of the foothold S is a reflection of S onto H’s variety of understandings, denoted as S_H. Initially, the intruder’s variety of understandings about the system can be an empty set or can contain information obtained both from public sources and from insiders. In the course of the conflict, this set can be expanded by additions of both true and false information.

—The objective of the intruder I_H, which is to violate the information security of the target system and to conceal the traces of the harmful effect on the system.

—Doctrine D_H, which is a set of actions and tools designed to violate the system security.

—A solution R_H to the problem of violating the information security of the system, obtained by applying doctrine D_H to S_H.

Based on his experience and skills, intruder H is able to simulate the decision made by the administrator, which is designated as A_H. In order to make a decision ensuring success, H should imitate the reasoning of A and follow the procedure described by formula (1). It is worth noting that intruder H is not the owner of S_A.
He has what might be called the reflection of S_A in terms of H, which is a secondary image reflecting situation S. Similarly, the intruder H does not have I_A and D_A; he has only I_A in terms of H and D_A in terms of H. Having introduced the appropriate designations S_AH, I_AH, D_AH, and R_AH, it becomes possible to formally notate the imitation of A’s reasoning by H as follows:

    \frac{I_{AH}}{S_{AH}} D_{AH} \to \frac{R_{AH}}{S_{AH}}.    (2)

Next, the intruder should project the solution \frac{R_{AH}}{S_{AH}}, which results from the imitation, onto the variety of his own knowledge about the system as follows:

    \frac{R_{AH}}{S_{AH}} \to \frac{R_{AH}}{S_H}.    (3)

Now, the intruder has to work out a solution via the application of his doctrine, which consists of the variety of actions that will be able to breach the target system security, as follows:

    \frac{R_{AH}}{S_H} \to \frac{R_{AH} I_H}{S_H} D_H \to \frac{R_H}{S_H}.    (4)

The decision-making process with the imitation of A_H looks as follows:

    \frac{I_{AH}}{S_{AH}} D_{AH} \to \frac{R_{AH}}{S_{AH}} \to \frac{R_{AH}}{S_H} \to \frac{R_{AH} I_H}{S_H} D_H \to \frac{R_H}{S_H}.    (5)

In this conflict, administrator A is defeated because intruder H has managed to imitate A’s reasoning. Obviously, this conflict may involve more complex chains of reasoning simulated by the opponent; in particular, the administrator can also imitate the process of the intruder’s simulation of the opponent’s reasoning. Thus, the winner of the conflict is the party who has most accurately predicted the opponent’s mindset.

One of the most effective behavior policies in a conflict situation is controlling the opponent’s solution by transferring to him the grounds from which the opponent can deduce his own solution, which is nevertheless predetermined by the other party’s decision [3]. The transfer of grounds in the conflict between the administrator and the intruder lies in connecting A to H’s reflection of the situation; thereby A starts to control the process of decision-making. It is noted in [3] that any deceptive moves, i.e., provocations and intrigues, disguises, hoaxes, the creation of false objects, and generally lies in any context, are realizations of reflexive control.

Let administrator A have a single (first) rank of reflection, and intruder H have zero rank. This means that H can be reflexively controlled by A. In general terms, this may be denoted as the transfer of pattern F to H, which has been specially prearranged by A in relation to H, as follows:

    F_{HA} \to F_H.    (6)

Pattern F is a set of the elements S_HA, I_HA, D_HA, and R_HA. In an IT security conflict, reflexive control is the transfer of one or more elements of this set to the opponent.
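To make the chains of formulas (1)–(5) and the pattern transfer (6) more tangible, here is a minimal symbolic sketch in Python; it is our illustration, not part of the original paper, and the class and function names (Image, decide, project, decide_with_imitation) and all string values are assumptions chosen only for readability.

```python
# A minimal symbolic sketch of the decision chains (1)-(5); all names and string
# values are illustrative assumptions, not part of the original paper.
from dataclasses import dataclass

@dataclass
class Image:
    """One party's picture of the conflict: foothold S, objective I, doctrine D."""
    S: str  # notion of the foothold
    I: str  # objective
    D: str  # doctrine (set of measures or actions)

def decide(img: Image) -> str:
    """Formulas (1), (2): applying doctrine D to objective I over foothold S yields a solution R."""
    return f"R[{img.D} applied to {img.I} over {img.S}]"

def project(solution: str, own_foothold: str) -> str:
    """Formula (3): the imitated solution is projected onto the intruder's own foothold S_H."""
    return f"({solution}) projected onto {own_foothold}"

def decide_with_imitation(projected: str, own: Image) -> str:
    """Formula (4): the projected R_AH together with I_H over S_H, under doctrine D_H, yields R_H."""
    return f"R_H[{own.D} applied to ({projected}; {own.I}) over {own.S}]"

# The administrator's own decision R_A, formula (1).
A = Image(S="S_A: protected IS infrastructure", I="I_A: ensure IS security", D="D_A: safeguards")
R_A = decide(A)

# The intruder's imitation of the administrator (S_AH, I_AH, D_AH), formula (2).
A_in_terms_of_H = Image(S="S_AH: guessed infrastructure", I="I_AH: guessed objective", D="D_AH: guessed safeguards")
R_AH = decide(A_in_terms_of_H)

# Formulas (3) and (4): projection onto S_H, then the intruder's own decision R_H.
H = Image(S="S_H: intruder's knowledge of the system", I="I_H: violate IS security", D="D_H: attack tools")
R_H = decide_with_imitation(project(R_AH, H.S), H)

print(R_A)
print(R_H)  # the full chain (5): imitation, projection, own decision
```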
Source [3] provides some illustrations of the opponent’s reflexive control by the following means:

—a transfer of false information about the foothold, S_HA → S_H;

—the formation of the opponent’s targets, I_HA → I_H;

—the formation of the opponent’s doctrine, D_HA → D_H, which characterizes the intentional training of the opponent to develop his association with the achievement of the goal;

—the transfer of solutions, R_HA → R_H, an example of which may be an incorrect tip;

—the formation of the target by transferring the foothold pattern, I_HA → S_HA → S_H → I_H;

—conversions S_HAH → S_HA, which allegedly transfer to the opponent his own view of the foothold;

—conversions I_HAH → I_HA, aimed at persuading the opponent to expect acts that will not actually be committed (e.g., a trick);

—conversions D_HAH → D_HA, aimed at persuading the opponent that a certain doctrine is in use and thereby affecting the intruder’s conclusions; the doctrine itself is not used, but the intruder’s logic, based on the false information, becomes available;

—the formation of the chain I_HAH → S_HAH → S_HA → I_HA, which is also the opponent’s alleged transfer of his own view of the foothold, used here for the formation of a false target;

—the neutralization of the opponent’s deduction, a technique used when it is impossible to avoid disclosing the true foothold pattern; in this case, several equally probable targets are formed to confuse the intruder.

In practice, these illustrations (referred to below as policies) of reflexive control over the intruder can be implemented using deceptive systems.

DECEPTIVE SYSTEMS AS IMPLEMENTS OF REFLEXIVE CONTROL POLICIES

Deceptive systems are a promising mechanism that supplements the existing mechanisms for protecting information in computer networks by deceiving information intruders [4]. The use of deceptive systems leads the intruder astray and maximizes the volume of data collected about his behavior, objectives, and skills using the reflexive control policy. Reflexive control is realized through any of its manifestations; hence, the mechanism of deceptive systems realizes a priori reflexive control of the intruder.

The mechanism of deceptive systems can only directly implement the policy of transferring false information on the foothold. Therefore, in order to improve the efficiency and effectiveness of protecting the information system, we should consider approaches to implementing the other reflexive control policies described previously through the mechanism of deceptive systems.
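As a simple illustration of the one policy that a deceptive system implements directly, the transfer of false information on the foothold S_HA → S_H, the following sketch (ours; the inventories, host names, and function name are invented for the example) answers any reconnaissance by the intruder from a decoy picture of the infrastructure instead of the real one.

```python
# A minimal sketch (illustrative, not the authors' implementation) of the transfer of
# false foothold information, S_HA -> S_H. Both inventories below are made-up examples.
from typing import Dict, List

# S: the real foothold known to the administrator (kept private).
REAL_FOOTHOLD: Dict[str, List[str]] = {
    "hosts": ["db-server", "file-server", "admin-pc"],
    "open_ports": ["443"],
}

# S_HA: the picture the administrator has prepared for the intruder to adopt.
DECOY_FOOTHOLD: Dict[str, List[str]] = {
    "hosts": ["db-server", "file-server", "admin-pc",
              "backup-server", "dev-server"],          # redundant decoy hosts
    "open_ports": ["22", "443", "3389"],               # decoy ssh/rdp ports leading to traps
}

def foothold_view(for_intruder: bool) -> Dict[str, List[str]]:
    """S_HA -> S_H: reconnaissance by the intruder is answered from the decoy picture."""
    return DECOY_FOOTHOLD if for_intruder else REAL_FOOTHOLD

if __name__ == "__main__":
    print("Intruder's view (S_H after the transfer):", foothold_view(for_intruder=True))
```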
However, before we consider and propose possible approaches to implementing these policies, it is necessary to establish precisely which policies should be implemented, based on the possible targets of the intruder. This is justified by the risk that designing and implementing the policies would take an impractical length of time and significant material resources while the level of IS protection remained unchanged. Thus, we should build an IS threat model.

In this case, an information system is understood not as a specific information system, but in the general sense of the definition given in [5], according to which an “information system is a combination of technical, software, and organizational support, as well as personnel, designed to provide the right people with adequate information in the proper time.”

SUMMARY MODEL OF SECURITY THREATS TO INFORMATION SYSTEMS

Security threats to ISs can be generally divided into four classes according to the layer at which they occur, and examples can be given of possible threats in each class:

(1) physical layer:
—a hardware bug is implanted,
—IS components are disabled,
—physical media are destroyed,
—line access is violated,
—photos and video are used;

(2) network layer:
—the availability of equipment is violated,
—network traffic is intercepted,
—network traffic is modified;

(3) OS layer:
—malicious software is installed,
—the stability of system processes and services is violated,
—information resources are impacted (information is copied, edited, or deleted);

(4) application layer:
—applications are disabled,
—the information resources of applications are impacted,
—the operation of applications is modified.

Based on the IS security threats, we should highlight the most important, i.e., critical, objects that an intruder would definitely want to access. Critical objects can be generally classified as follows:

(1) hardware, such as
—computers,
—network hardware,
—physical media;

(2) software, such as
—OS services and processes,
—software applications;

(3) information resources, such as
—network traffic,
—files,
—databases,
—email,
—usernames/passwords.

To complete the summary threat model, it only remains to classify the sources of threats as follows:

(1) Intruders performing a deliberate impact on the IS, including:
(a) external intruders;
(b) internal intruders, i.e., legitimate users of the IS operating outside the framework of their authority.

(2) Providers of software, hardware, consumables, services, etc., and contractors responsible for installing the equipment, starting up and adjusting operations, and maintenance.

(3) Legitimate users of the IS acting in good faith.

A summary model of the IS security threats is presented in Fig. 1.

Fig. 1. Summary model of the security threats to an information system. (The figure relates the threat sources, namely external and internal intruders, service providers, and legitimate users, to hardware, software, and information threats at the physical, network, OS, and application layers.)

A reflexive game with an external intruder is the most interesting, since for him the variety of possible policies to use is somewhat broader and requires a lower rank of reflection. Therefore, here and below, when describing the concept of an adaptive deceptive system and the policies of reflexive control, the intruder is assumed to be external.
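For reference, one possible encoding of this summary threat model as data structures is sketched below; it simply restates the classes listed above, and the enumeration names and example triples are our own choices rather than anything prescribed by the paper.

```python
# A sketch of the summary threat model as simple enumerations; names and the
# example triples are illustrative choices, not prescribed by the paper.
from enum import Enum, auto

class Layer(Enum):
    PHYSICAL = auto()
    NETWORK = auto()
    OS = auto()
    APPLICATION = auto()

class CriticalObject(Enum):
    # hardware
    COMPUTER = auto()
    NETWORK_HARDWARE = auto()
    PHYSICAL_MEDIA = auto()
    # software
    OS_SERVICE_OR_PROCESS = auto()
    SOFTWARE_APPLICATION = auto()
    # information resources
    NETWORK_TRAFFIC = auto()
    FILES = auto()
    DATABASE = auto()
    EMAIL = auto()
    CREDENTIALS = auto()

class ThreatSource(Enum):
    EXTERNAL_INTRUDER = auto()
    INTERNAL_INTRUDER = auto()   # a legitimate user acting outside his authority
    SERVICE_PROVIDER = auto()    # suppliers, contractors, maintenance personnel
    LEGITIMATE_USER = auto()     # acting in good faith

# A few threats from the classification, expressed as (source, layer, object) triples.
EXAMPLE_THREATS = [
    (ThreatSource.EXTERNAL_INTRUDER, Layer.NETWORK, CriticalObject.NETWORK_TRAFFIC),  # traffic interception
    (ThreatSource.EXTERNAL_INTRUDER, Layer.OS, CriticalObject.FILES),                 # impact on information resources
    (ThreatSource.INTERNAL_INTRUDER, Layer.APPLICATION, CriticalObject.DATABASE),     # impact on application resources
]
```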
SELECTING THE REFLEXIVE CONTROL POLICIES TO IMPLEMENT IN THE DECEPTIVE SYSTEM TO IMPROVE THE IS PROTECTION

From the presented spectrum of reflexive control policies, and based on the classified threats, the following policies were identified that are not difficult to implement through the mechanism of deceptive systems:

(1) the transfer of false information on the foothold, S_HA → S_H;
(2) reflexive control by creating the opponent’s targets, I_HA → I_H;
(3) creating a target by transferring a foothold pattern, I_HA → S_HA → S_H → I_H;
(4) the conversion S_HAH → S_HA;
(5) the formation of the chain I_HAH → S_HAH → S_HA → I_HA.

With regard to IS protection, the transfer of false information on the foothold is the transfer of the S_HA element to the opponent, which forms a deceptive understanding of the protected IS infrastructure in the intruder. This can be achieved by adding redundant elements both to the IS network infrastructure (hosts, ports, network security services) and to the internal infrastructure (a large amount of installed software, data folders, etc.).

The mechanism of deceptive systems can also implement the policy of reflexive control by forming the opponent’s targets (I_HA → I_H). In this case, the security administrator imposes a desired action on the intruder. This is done in such a way that the intruder is confident that he decided to take this step himself. An example of such reflexive control during the intruder’s study of the protected IS network infrastructure is an open port, the scanning of which will lead the intruder into a trap, provided that the intruder is in the habit of scanning ports to analyze the network infrastructure. Another example of such reflexive control is when a file or a folder with a meaningful name that attracts the intruder’s attention (e.g., a passwords.txt file), i.e., any information intended for use by a limited range of persons, is created in the IS.

The policy of creating a target by transferring a foothold pattern corresponds to the transfer I_HA → S_HA → S_H → I_H. This reflexive control is a more complicated process because it involves objectives of different value. In a conflict of information security, the administrator forms a global objective before the beginning of the conflict, i.e., IS protection. A specific task could be to make the intruder attack a specially created target (e.g., one of the servers). The formation of the intruder’s objective to attack the desired object occurs by displaying deceptive information on the foothold. Thus, the administrator initially selects an object that should become the target for the intruder and then displays it in such a way that attacking this object appears to be the obvious way to seize information resources. The administrator then acts so that it is not clear to the intruder that this target has been specially formed.
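As a concrete illustration of the open-port trap described above, here is a minimal sketch of such a trap; it is our assumption of how one might be built, not the authors' implementation, and the port number, banner, and log file name are arbitrary.

```python
# Minimal sketch of an "open port" trap (illustrative only): a fake SSH-like service
# that accepts connections, records the intruder's activity, and returns a decoy banner.
# The port number, banner, and log path are assumptions.
import socket
import datetime

TRAP_PORT = 2222                      # decoy port advertised in the false foothold
BANNER = b"SSH-2.0-OpenSSH_7.4\r\n"   # decoy banner shown to the scanner
LOG_FILE = "trap_connections.log"

def run_trap() -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", TRAP_PORT))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        # Every connection attempt is evidence of the intruder's behavior.
        with open(LOG_FILE, "a") as log:
            log.write(f"{datetime.datetime.now().isoformat()} connection from {addr[0]}:{addr[1]}\n")
        try:
            conn.sendall(BANNER)        # keep the intruder engaged with the decoy
            conn.settimeout(5.0)
            data = conn.recv(1024)      # capture whatever the intruder sends
            with open(LOG_FILE, "a") as log:
                log.write(f"  payload: {data!r}\n")
        except OSError:
            pass
        finally:
            conn.close()

if __name__ == "__main__":
    run_trap()
```

In a full deceptive system, records of this kind would presumably be correlated across many traps to build up the picture of the intruder's behavior, objectives, and skills discussed earlier.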
In practice, the implementation of reflexive control via the conversion S_HAH → S_HA can be performed by deliberately leaking the relevant technical documentation on the IS infrastructure.

The policy of reflexive control can also be realized by means of the chain I_HAH → S_HAH → S_HA → I_HA. As in one of the policies discussed previously, the target is transferred to the intruder by transferring a foothold pattern. In this case, after analyzing the foothold pattern, the intruder should come to some (deliberately false) conclusions. For example, the security administrator can concentrate a large number of protective measures around one of the network nodes, making the intruder think that this node contains extremely important confidential information, while that is not the case.

Because the deceptive system is a software tool, i.e., a certain emulation of the protected IS, the selected policies will be implemented at all layers except for the physical layer (although reflexive control of the intruder may also be feasible at the physical layer). To form a deceptive system structure that implements the reflexive control of the intruder, we should decide at which layers these policies will be implemented and which IS objects will be affected. The table below shows examples of the traps used for each of the selected policies.

Examples of the traps for each of the selected policies

Reflexive control policy | Examples of the traps being used
Transfer of deceptive information on the foothold, S_HA → S_H | open ports, host, network protocol
Formation of the opponent's target, I_HA → I_H | network traffic, open ports, OS services and software, vulnerabilities
Foothold pattern transfer, I_HA → S_HA → S_H → I_H | network traffic, open ports, vulnerabilities, files
Conversion S_HAH → S_HA | network traffic, files
Chain formation I_HAH → S_HAH → S_HA → I_HA | network traffic, files

Concept of an Adaptive Deceptive System for the Reflexive Control of the Intruder

The concept of an adaptive deceptive system can be represented as a series of closely related stages.

Fig. 2. Example of the deceptive system configuration with traps. (The figure shows nine numbered nodes, including the administrator's computer, a file server with documents, and a protected server, with traps such as an open ssh port with a vulnerability, an open rdp port with a vulnerability, and administrator's computer data.)

1. The construction of the configuration of the deceptive system. The peculiarity of this stage is the need for conformity between the infrastructures of the deceptive system and the real system. Since the intruder can take advantage of competitive intelligence and find out at least the number of employees in the company, the number of nodes of the deceptive system should match the number of employees; otherwise, there is a significant risk of discrediting the deceptive system.

2. The selection of the most probable objects of attack. For this purpose, a sample from the set of critical objects is needed to form the intruder's targets in the deceptive system; e.g., the target may be the administrator's computer or a protected server.

3. The construction of a graph of maximum length to each target. The nodes of the graph are objects of the deceptive system, and the edges are actions of the intruder. The purpose of constructing such a graph is to force the intruder to spend as much time in the deceptive system as possible in order to collect sufficient information about him (a sketch of such a construction is given below).

4. The classification of graph nodes by the degree of discrediting criticality. Obviously, for example, a trap that emulates the administrator's computer requires a more complicated implementation than an ordinary network node; therefore, the number of actions that neutralize threats of disclosure will be considerably larger.

5. The assignment of a set of policies to each class of nodes.
Each policy will use a different set of traps for its implementation, which depends on the degree of discrediting criticality of the deceptive system object.

6. The assignment of a set of traps to each policy for each class.

7. The selection and placement of the initial traps, since there is no sense in using all of the traps in the initial configuration of the deceptive system.

8. The definition of a set of traps for the case of discrediting. This set will be used in the event that the deceptive system is discredited, in order to divert the intruder's attention.

9. The maximization of the length of the graph of the intruder's actions. This is the emulation of the unused traps to impose on the intruder a path similar to the maximum-length graph in order to obtain as much information about him and his actions as possible.

For example, consider a configuration of the deceptive system in accordance with Fig. 2, with the set of traps marked with exclamation marks. Figure 3 represents three scenarios that describe the intruder's actions. The solid line shows the maximum-length graph, which was constructed initially, and the dashed lines indicate the maximization of the graph length under the conditions of discrediting traps of various degrees of criticality, including a host and the administrator's computer.

Fig. 3. Intrusion scenarios.

The most difficult case is maximizing the length of the graph under the conditions of discrediting the trap that simulates the administrator's computer. Neutralizing the discrediting threat includes reversing the logic of the traps: when the intruder gains access to the administrator's computer, we need to give the impression that this is not the administrator's computer but an ordinary network node, which makes the graph reverse its direction, and computer no. 1 will then simulate the administrator's computer, which will be the ultimate intruder target.
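To make stages 3 and 9 concrete, here is a minimal sketch of the graph-based reasoning; the node names and the depth-first longest-path search are our assumptions, chosen only to illustrate how a route of maximum length to a target could be selected for placing traps.

```python
# A sketch (assumed node names; not the authors' algorithm) of stages 3 and 9: the
# deceptive system is modeled as a directed graph whose nodes are decoy objects and
# whose edges are possible intruder actions; the longest simple path to the target
# is the route along which traps are placed to keep the intruder inside as long as possible.
from typing import Dict, List

# Decoy objects and possible transitions between them (a small acyclic example).
ACTIONS: Dict[str, List[str]] = {
    "entry_host":  ["node_2", "node_3"],
    "node_2":      ["file_server", "node_4"],
    "node_3":      ["node_4"],
    "node_4":      ["file_server"],
    "file_server": ["admin_pc"],
    "admin_pc":    [],
}

def longest_path(graph: Dict[str, List[str]], start: str, target: str) -> List[str]:
    """Depth-first search for the longest simple path from start to target."""
    best: List[str] = []

    def dfs(node: str, path: List[str]) -> None:
        nonlocal best
        if node == target:
            if len(path) > len(best):
                best = path[:]
            return
        for nxt in graph.get(node, []):
            if nxt not in path:          # keep the path simple (no revisits)
                dfs(nxt, path + [nxt])

    dfs(start, [start])
    return best

if __name__ == "__main__":
    route = longest_path(ACTIONS, "entry_host", "admin_pc")
    print("Place traps along:", " -> ".join(route))
    # entry_host -> node_2 -> node_4 -> file_server -> admin_pc
```

In the discrediting scenario described above, the same search could simply be rerun with the target role reassigned to another node (e.g., computer no. 1), mirroring the reversal of the trap logic.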
CONCLUSIONS

The article presents the concept of an adaptive deceptive system for the reflexive control of the intruder. Further formalization of the IT security conflict involves the development of a hybrid mathematical model that includes the policies of Lefebvre's algebra of conflicts. This will make it possible to give mathematical descriptions of the principles of the reflexive control of the intruder and to implement algorithms and methods for applying these policies in practice.

REFERENCES

1. Zegzhda, D.P. and Stepanova, T.V., The approach to solving the problem of security of industrial control systems from cyber threats, Probl. Inf. Bezop., Komp'yut. Sist., 2013, no. 4, pp. 32–39.

2. Eremeev, M.A., Gorbachev, I.E., Poterpeev, G.Yu., and Kravchuk, A.V., Concealment of information resources with the use of deceptive systems, Sbornik trudov konferentsii "Teoreticheskiye i prikladnyye problemy razvitiya i sovershenstvovaniya avtomatizirovannykh sistem upravleniya voyennogo naznacheniya" (Proc. Conf. Theoretical and Applied Problems of Development and Improvement of Automated Control Systems for Military Purposes), St. Petersburg, 2013, vol. 1, pp. 140–147.

3. Lefevr, V.A. and Smolyan, G.A., Algebra konfliktov (Algebra of Conflict), Moscow: URSS, 2011, 4th ed.

4. Kotenko, I.V. and Stepashkin, M.V., Deception systems for protection of information resources in computer networks, Tr. SPIIRAN, 2004, vol. 1, no. 2, pp. 211–230.

5. Davis, W.S. and Yen, D.C., The Information System Consultant's Handbook: Systems Analysis and Design, CRC Press, 1998.

Translated by A. Kolemesin