1993, From animals to animats 2: proceedings of the Second International Conference on Simulation of Adaptive Behavior
Abstract
Planning has generally been considered a problem-solving activity in which one searches through a state space to find an admissible, and often optimal, solution. Action operators map states to states, modifying accordingly some of the facts known to be true (or false); the decision of which operator to apply at any given step is based on the analysis of what the corresponding plan would look like a few steps ahead. Lately, some authors have been suggesting that planning should, and could, be viewed as the result of the interactions of a group of some sort of computational units (agents). These units are simple, are strongly interconnected with one another and with the external world, and each can be selected to decide on the action to perform at a given moment. What the units can be, and what kinds of interactions can take place, is what this paper is concerned with. We present here some results obtained by "wiring" the state and the goals into a network where the nodes are the entities in the world being modeled. They exchange limited information, have simple behaviors, and modify their available interaction links according to their current state. These approaches clearly suggest that decentralised, local forms of control can ensure coherent behavior in changing and unpredictable environments.
Introduction
This paper is concerned with planning viewed as the result of interactions between a group of simple units (i.e., units with restricted cognitive capabilities), each having only local information about the state of the world and some means of transmitting limited information to the others. Units can be the entities being acted upon (e.g., the blocks in a blocks world) or the actions which can be performed (e.g., instantiated operators). These units are highly reactive to the external environment and to each other, and as a result they increase or decrease some internal measure of their suitability to be chosen to perform an action, given the present state of the world. Even if the units are not cognitive by nature (there are neither internal inference mechanisms nor knowledge representation structures), they are not purely reactive either. A unit reacting to a changing fact in the environment generally triggers a chain of reactions involving other units, so that the result of the initial reaction is "weighted" over the entire group. As such, these approaches avoid the problems which traditional planners generally have to cope with:
• the Sussman anomaly (conjunctive goals).
• the detection and the resolution of conflicts in nonlinear plans.
• the exponential growth of the size of the state space to search.
Having no explicit global representation of the world, and since there is no partial plan being built, the group of units simply ignores these problems. In the remainder of this paper we show how a network of units linked by their local state and their goals "builds" a plan. First, in the next section we describe the algorithm for computing energy. In section 3 we discuss our approach, and section 4 analyses some related work. Finally, in section 5 we present our concluding remarks. Throughout the paper our examples are taken from the blocks world. The planning problems found in a world as simple as this one have been discussed for some time now, even if they were not always solved, or even completely understood. Some examples here and in the related bibliography show difficult cases which are solved in a quite natural way, compared with the effort classical planners expend; in Sacerdoti (1977), e.g., the latter is proportional to the number of conflicts.
How do interacting units plan
Our approach relies on the multi-agent planning paradigm, where a group of agents collectively builds a plan. Each agent has a goal and a state, and the task of the group is to satisfy all individual goals; moreover, we want the global solution to be optimal, i.e., the one that minimizes the total number of steps necessary to reach it. If we were to place our approach within the planning or action-sequencing literature, we would say that it is an "attempt to avoid planning" (following McDermott, 1992).
What units really are
We take as example the blocks world, where cubes must be moved from one place to another. However, we are only interested here in the mechanisms allowing an optimal selection procedure to take place inside a group of agents (whatever they may be), and so we do not care about external elements which could act as resources for these agents: for example, a robot manipulator that could only act as a "transporter", or sensory devices that would also act upon request. In a sense, the agents we consider are aware of these possibilities, although this view certainly confines our problem to a smaller area of the overall adaptive action selection behavior. Each agent manipulates a crude representation of its current state, and that is all that is needed in order to interact with the others; we assume that the means to build that representation are available, as they do not rely on complex processing of the data the agents manipulate.
In our case, the agents are the blocks and the table. We say that a block is satisfied if it is in its final position and the one supporting it is also satisfied; the table is always satisfied (for the moment we deal with an infinite capacity table: there is enough space for all blocks).
A block has two choices when moving: either its final position is satisfied, in which case it moves onto it, or it moves to the table. When a block is selected to move it always chooses one of these two options, and it must be free (i.e., not supporting another block). At each step, the block selected to move, besides being free, must also be the one that minimizes the total number of moves. In other words, with an infinite capacity table, a block makes a maximum of two moves: one to the table (if its final position is not satisfied) and another to its final position.
All the agents are linked by two binary relations: depends-on (Don) and blocked-by (Bby). The first says that if the goal of A is to be on top of B, then A depends-on B. The second just reflects the current state of the world: if A is on top of C, then C is blocked-by A. The inverse relations also exist (i.e., for the above we could say that B is-dependence-of A and that A blocks C); for simplicity we will omit them in what follows, their presence being implicit. The agents and the relations form a network where the nodes (the agents) are assigned numeric values (henceforth called energy) which come from other nodes via the links representing the above relations between the entities; see figure 1: the current and final states are shown at the top, and the initial network at the bottom.
Figure 1. Initial configuration and network.
After a move, the blocked-by links are updated according to the new state of the world; unless we allow for changing goals, an agent's depends-on link will never be modified. When all agents are satisfied the algorithm stops: the plan executed is then the temporal ordering of all the moves made so far.
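As a minimal sketch of this cycle, the following Python fragment makes the loop concrete. The representation (an on map giving each block's current support, a goal map giving its desired one) and all names are illustrative assumptions of ours; the energy computation, detailed in the next sections, is abstracted as a parameter.

    TABLE = "table"

    def sat(x, on, goal):
        # A block is satisfied if it is in its final position and its support
        # is also satisfied; the table is always satisfied (infinite capacity).
        return x == TABLE or (on[x] == goal[x] and sat(on[x], on, goal))

    def free(x, on):
        # A block is free if no other block currently stands on it.
        return all(s != x for s in on.values())

    def plan(on, goal, compute_energies):
        # Repeatedly pick the free, non-satisfied block with the highest
        # energy and move it; the executed moves, in order, are the plan.
        moves = []
        while not all(sat(b, on, goal) for b in on):
            energies = compute_energies(on, goal)   # sum E_b + E_d + E_db + E_bd
            candidates = [b for b in on if free(b, on) and not sat(b, on, goal)
                          # assumption: a block already on the table waits
                          # until its final position becomes satisfied
                          and not (on[b] == TABLE and not sat(goal[b], on, goal))]
            mover = max(candidates, key=lambda b: energies[b])
            # The two choices described above: the final position if it is
            # satisfied, otherwise the table.
            dest = goal[mover] if sat(goal[mover], on, goal) else TABLE
            on[mover] = dest                        # this updates the Bby links
            moves.append((mover, dest))
        return moves

Note that nothing global survives between iterations: a failed move only changes the on map, and the next propagation phase proceeds as usual, which is precisely the replanning-free behavior discussed below.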
Before going further we will highlight two points:
• the energy of a block (which expresses, in a sense, the agent's will to get satisfaction, as in the eco-problem-solving approach of Ferber, 1990) is computed based on local information about the node: that is, based on its goal and on its current state, as reflected by the blocked-by and depends-on links which transmit energy. All decisions are thus local, although, as we will see later, blocks also propagate energy: the energy of a block can thus have been modified by some other block.
• the plan is the collection of moves (decisions after each propagation phase). If a move does not produce the desired results (e.g., a block falls while being moved by a gripper), only the links between the entities involved (the block being moved and its initial and final positions) are updated. In the next propagation phase there will be no replanning whatsoever (replanning can be rather complex to deal with, e.g., Drummond and Currie, 1989); the network will just use the new links, and another agent will again be chosen. This is quite important when interleaving planning and execution, where there is often the need to modify previous planning sequences that were based on expected outcomes.
Some hypotheses are made concerning the agents' behavior. An agent is committed to retransmit whatever information is to be propagated (see section 2.2). This means that, even if the agents have some degree of autonomy, they respect the rules governing the action selection mechanism. As they are all engaged in some sort of cooperative problem-solving process, it is not apparent how they could, for example, hide information so as to benefit from it (e.g., as in Zlotkin and Rosenschein, 1990). The import of these two remarks will become apparent later. Next we describe how energy is computed for each agent.
Computing energy
Planning will be considered here as the collection of choices made by the group of agents at regular time intervals (the time needed to compute the energy terms introduced below). These choices are based on the energy level of each block after all exchanges of energy have been made. It is this mutual exchange, or rather propagation, mechanism that is described below.
At the beginning of each propagation phase all entities have zero energy; the table, besides having no energy, is always satisfied: it will never be moved. Energy is computed as the sum of four components:
• E_b is the energy exchanged via the blocked-by links;
• E_d is the energy coming from the depends-on links;
• E_db is the energy going through the depends-on links and propagated via the blocked-by links;
• E_bd is the energy going through the blocked-by links and propagated via the depends-on links.
Energy values range from -∞ to +∞. E(X) denotes one of the above terms for block X. Let us first introduce some auxiliary predicates: sat, don and bby. We will say sat(X) when block X is satisfied, don(X, Y) when X depends-on Y, and bby(X, Y) when X is blocked-by Y.
So, if we are computing the energy for block X, the following local increments in its energy are made:
• If bby(x, y), E_b(x) is decreased and propagate-b is applied to y. The function propagate-b increases the energy of the blocking block, unless the block below is satisfied.
Intuitively, the energy of a blocked block is decreased, while a blocking block sees its energy increased (unless the one below is satisfied) and will eventually be forced (i.e., chosen) to move. This energy by itself is unable to take the goal relations between the nodes into account; it is only based on the local, current state of the blocks. We therefore use the depends-on links, so that the blocks' energy is also influenced by the goal intentions of the agents. It is the introduction of this energy, and of the two subsequent propagation phases, that optimizes the blocks' moves.
• If don(x, y), E_d is updated analogously through the depends-on links, with a guard on the update. This guard takes into account the fact that, when we compute the energy for one agent, we may be using values that will be updated later. For the example in figure 1, this happens if we compute E_d(B) before E_d(C): we would find the value -1 for E_d(B) when it should be 0 (see the bottom of figure 2 below). Also, an agent only updates its energy if it has already computed it; otherwise, when computing it later, the value would be counted twice. As agents are independent and can compute their energy in parallel, regardless of the computations going on in the others, these restrictions are compulsory. After all blocks have had their E_b and E_d values computed (as shown in figure 2), a phase of forward propagation of energy starts, to compute E_db and E_bd. This additional propagation plays the role of an action at a distance, and influences a block's energy through blocks that are only indirectly related to it, that is, not linked to it by an explicit depends-on or blocked-by relation. This phase proceeds as described in the next section.
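As a sketch, the two local rules can be written down as follows (in Python, with the same assumed representation as before). The unit increments and the exact sign conventions are assumptions consistent with the intuitions above and the worked values of figure 2, not a transcription of the original formulas.

    TABLE = "table"

    def sat(x, on, goal):
        return x == TABLE or (on[x] == goal[x] and sat(on[x], on, goal))

    def local_energies(on, goal):
        # E_b is exchanged via the blocked-by links, E_d via the depends-on links.
        E_b = {b: 0 for b in on}
        E_d = {b: 0 for b in on}
        for y, x in on.items():            # y stands on x, i.e. bby(x, y)
            if x != TABLE:
                E_b[x] -= 1                # the blocked block is decreased
                if not sat(x, on, goal):
                    E_b[y] += 1            # the blocking block is increased
        for x, y in goal.items():          # don(x, y): x's goal is to be on y
            if y != TABLE and not sat(y, on, goal):
                E_d[x] -= 1                # assumption: x cannot settle yet
                E_d[y] += 1                # assumption: y should be dealt with first
        return E_b, E_d

Because this sketch computes all values in a single pass over a frozen state, the double-counting guard discussed above is not needed; it matters only when the agents compute and update their energies asynchronously.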
Propagating mixed energy
We saw before how the current state and the goal intentions of the agents contribute to their internal energy. We now exploit the fact that, in order to have a rational, and if possible optimal, selection criterion for the agents, there must be some way of weighting their global interaction. That is, given two agents with the same energy, why should we prefer one to the detriment of the other? We argue that this decision can be made solely by letting the agents exchange information about their local relations to other nodes, this time via both the blocked-by and the depends-on links. This information, which is propagated from agent to agent following some fixed rules, can be thought of as a mixed energy. For each agent, it adds the contributions of both those who depend on it and those who are blocked by it.
Propagation and local computation of the remaining two terms go as follows:
• Computing E_bd. Take the graph at the top of figure 2. Consider the set S of the nodes having a predecessor, i.e. an incoming Bby link (here C and A; B has no predecessor). Let P(s) be the length of the predecessor chain of s ∈ S (including s itself): thus P(A)=3 and P(C)=2.

Figure 2. The network after the computation of E_b and E_d (Bby graph at top, Don graph at bottom).

∀s ∈ S do:
    if ∃x : don(s, x) then propagate-bd(x, P(s))

where propagate-bd is defined as follows:

propagate-bd(Unit, Value)
    if ∃x : don(Unit, x) and not(sat(x)) then propagate-bd(x, Value)
    else New-Unit = blocked?(Unit)
         E_bd(New-Unit) = E_bd(New-Unit) + Value

where blocked?(Unit) returns the last unit in a Bby chain starting at Unit, and Unit itself otherwise.

The test on sat(x) tells us that if x is satisfied it makes no sense to continue propagating energy: its depends-on link can only point to another satisfied block, so there is no interest in propagating further. This becomes evident if we recall that an agent is satisfied if it is in its final position and the one supporting it is also satisfied. We thus stop the propagation just before the first satisfied agent found. Note also that these functions are simplified by the fact that we do not consider multiple supporting and supported blocks here; if we did, we would have to handle multiple incoming and outgoing blocked-by links for each agent.

• Computing E_db. Take the graph at the bottom of figure 2. Consider the set S of the nodes having a predecessor (here C and B). As before, we define P(s), so that now P(C)=3 and P(B)=2.

∀s ∈ S do:
    if ∃x : bby(s, x) then propagate-db(x, P(s))

where propagate-db is defined as follows:

propagate-db(Unit, Value)
    if ∃x : bby(Unit, x) then propagate-db(x, Value)
    else unless ∃y : don(Unit, y) and not(sat(y)) then E_db(Unit) = E_db(Unit) + Value

This last propagation step is not necessary if the initial agent s is satisfied: in that case the agents standing above it should not be "forced" to move, as it is as if they were standing on the table. From these functions we can see that the propagated values are only stored in the end nodes (unless these depend on non-satisfied agents); the nodes along the propagation path do not modify their energy, although doing so could be useful if certain optimizations of the algorithm were made. Specifically, after each propagation cycle energy values are reset; yet, since only one agent is selected to move, only a few links are modified, with a restricted impact on the others' energies, which would then not need to be recomputed from scratch.

If we look at the links in figure 2 it is easily seen that E_bd(A)=2 and E_db(A)=3, all the other mixed energies being 0. After all four values have been computed, the non-satisfied agent with the highest energy is chosen to try to achieve its goal: if its final destination is satisfied, the block is moved onto it; otherwise it moves to the table. It should be noted that it is the agent that is chosen, not the action to be performed; it is then up to the agent to choose which action to take, based on the local information it has.
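Under the representation assumed in the earlier sketches (a single support per block, hence at most one Don and one Bby successor per unit), the two procedures can be transcribed into Python as follows; the selection of the seed sets S and the computation of P(s) follow the text above and are left out.

    TABLE = "table"

    def sat(x, on, goal):
        return x == TABLE or (on[x] == goal[x] and sat(on[x], on, goal))

    def above(x, on):
        # The block y with bby(x, y), i.e. the block standing on x, or None.
        return next((y for y, s in on.items() if s == x), None)

    def blocked(unit, on):
        # blocked?: the last unit in the Bby chain starting at unit, else unit.
        while above(unit, on) is not None:
            unit = above(unit, on)
        return unit

    def propagate_bd(unit, value, on, goal, E_bd):
        nxt = goal.get(unit)
        if nxt is not None and nxt != TABLE and not sat(nxt, on, goal):
            propagate_bd(nxt, value, on, goal, E_bd)   # follow non-satisfied Don links
        else:
            E_bd[blocked(unit, on)] += value           # store at the end of the Bby chain

    def propagate_db(unit, value, on, goal, E_db):
        nxt = above(unit, on)
        if nxt is not None:
            propagate_db(nxt, value, on, goal, E_db)   # climb the Bby chain
        else:
            dep = goal.get(unit)
            if dep is None or dep == TABLE or sat(dep, on, goal):
                E_db[unit] += value                    # stored only in the end node

Resetting E_db and E_bd to zero and re-deriving the seeds from the updated links at each propagation phase gives the per-phase cycle described above.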
Analysis of the approach
We have presented a model of multi-agent interaction based on two primary relations: dependency and blocking. An agent depends on someone (or something) to achieve its goal, and it can be blocked by someone (or something) in its endeavour. These two relations define the communication channels between the agents. Via these channels, they communicate and propagate the received information (essentially numeric values) following a fixed and common set of rules. At the current stage of the research, it is not yet clear how agents could behave differently in order to "falsify" (to their own profit) the selection criteria: they just cooperate, and their collaborative effort allows an optimal solution to be found. The latter is indeed the most interesting result of this approach: the temporal sequencing of actions it produces is optimal. Recent approaches in other domains also make use of simple (and numeric) information exchange to solve problems which have traditionally been viewed as hard state-space search problems (e.g., Clearwater and Huberman, 1991).
Efficiency considerations aside, this result shows some sort of "social" agreement based on two primitive behaviors: know what you want to do and what prevents you from doing it. From that, increase the energy of those who can help you in getting satisfaction, and decrease that of those who prevent you from having it. One notable consequence of this behavior is that no distinction is made between planning and replanning phases. As we said before, only modifying links affects the propagation of energy, and this does not rely on the previous states of the group. By contrast, this is a question of great concern in traditional planners, tackled in a more or less natural way in several experiments (see, e.g., Hutchinson and Kak, 1990; Ramos and Oliveira, 1991).
There remain several restrictions on a more general application of the approach. When an agent is selected to act, it can only move after deciding where to. In more complex situations, with a set of actions to choose from, it could be useful to use the individual components of the energy, or else to hard-wire a combination of these values directly into each known action; this is what the behavioral approach advocates (e.g., Brooks, 1991, 1991a). It seems, however, that identifying the blocked-by and depends-on links is the important step when applying our approach. Even if they can be related to the classical precondition lists, it is not yet clear how this mapping could be done.
We discuss next to what extent our approach is inspired by other work in the same, or related, fields.
Relation to other work
Essentially, our inspiration comes from the spreading activation network of Maes (1989) and from the eco-problem-solving framework of Ferber (1990). The first uses a network where the nodes are all the possible actions in the world, and the links between these nodes relate all the facts in the precondition, add, and delete lists of these actions (in the blocks world, actions would be something like put-on-A-B, meaning that A is to be put on top of B, and facts could be clear-A, on-A-B, etc.). Nodes mutually exchange inhibition/activation energy, weighted by some tuning factors which allow the overall system to be more goal- or situation-oriented.
Besides this latter characteristic, which reveals some sort of adaptivity and allows capabilities such as learning to be introduced, the system presents two drawbacks. First, all possible actions must be explicitly present in the network, even those that are not very important; if the number of possible actions grows, the size (number of nodes and links) of the network grows accordingly. Second, the mutual influences must be tuned prior to using the system: the designer must have a pretty good idea of the relative weights of the links if the system is to behave "as expected". Nevertheless, it is expected that some sort of learning can also be introduced here. On the other hand, as the units are all the possible actions, this approach is really more concerned with the action selection problem inside a single agent than with viewing the network as a group of agents. Recent experiments (Maes and Brooks, 1990) demonstrated that the activation/inhibition network could be the basis of an adaptive behavior selection mechanism in a mobile robot.
The second work we consider important to our approach is also based on a simple set of interaction behaviors (Ferber and Jacopin, 1990). MASH (Multi Agent Satisfaction Handler) is a multi-agent system based on the eco-problem-solving paradigm (Ferber, 1990). Units in MASH are the blocks and the table; a unit can be seen as a finite deterministic automaton with a set of simple behaviors: the will to be satisfied, the obligation to move away (when attacked), and the will to be free (which can lead it to attack others).
However, there is no mutually agreed decision about which unit should be given priority to act. This example could also make use (as in other examples in eco-problem-solving, e.g., Drogoul et al., 1991) of some heuristic function which would allow the system to optimize by choosing the "best" unit to be satisfied first; from the standpoint of our approach, this would be equivalent to assigning more than two "energy" levels to the units and choosing the highest among them. Nevertheless, as it is, this approach, simple and conceptually attractive, is well suited to problems where actions are cheap but the time to get to a solution is expensive.
Another source of inspiration has been the "multi-agent planning" work. Although agents in this case are generally considered to be more representation- and deduction-oriented (at the expense of an increased decoupling from the external world, though there are exceptions, e.g., Hayes-Roth, 1992), they are also engaged in a global interaction process where global coherence and local consistency are difficult to obtain.
Recent work on new approaches to modeling cognitive behavior has also provided good and stimulating examples of what non-symbolic techniques can offer (e.g., Real, 1991; Roitblat et al., 1991; see also Meyer and Guillot, 1990, for a comprehensive review of the attempts to simulate adaptive behavior).
Final remarks
We have presented here a model of group interaction based on dependency and blocking relations, which has provided optimal results in the multi-agent planning paradigm. It is based on the assumption that agents are well-behaved, i.e., they respect a fixed set of interaction rules, such as retransmitting information when they are expected to do so, and taking actions when asked to. Moreover, they are committed to updating their direct relations to other agents, for instance when they are moved to other places. It also seems that more behaviors could be added so as to take into account other characteristics of real-world domains, such as resource constraints and individuality.
We have seen that the group of agents can adapt itself easily to changing environments: simply by modifying the links between the agents, the system can go on "planning", its internal structure being reconfigured to match the state of the environment.
Application to other domains with different characteristics will be necessary to test the validity of this approach. For the moment, only well-defined goal relations between agents can be taken into account. We are currently extending our work to domains where these relations are more complex in nature and involve different types of preconditions.