
International Journal of Computer Application
Issue 3, Volume 1 (February 2013)
Available online at http://www.rspublication.com/ijca/ijca_index.htm
ISSN: 2250-1797

MOVING GOAL SEARCH IN DYNAMIC ENVIRONMENTS

SATISH DINAKAR B
Faculty of Science & Technology
ICFAI University Tripura

Abstract
Shortest-path search algorithms play a major role in applications such as video games, operations management, systems analysis and design, logistics, transportation, project management, networks and production lines. Typical heuristic search algorithms make simplifying assumptions such as a single agent, one fixed goal and a static environment. In this paper, we deal with multiple agents that search for a moving goal in a dynamic environment. This is akin to a number of cops who cooperate to catch a fleeing thief. The agents always move on the shortest path towards the goal, which is a reasonable strategy for computer-controlled characters in video games. Some of the states are blocked, and it is necessary to find a path that contains no blocked states. Designing realistic movement of agents in computer games is a major challenge. This paper investigates several effective approaches for solving problem instances in this domain.

Keywords: moving goal, dynamic environment, searching, heuristic search algorithms, A* algorithm, artificial intelligence


1. Introduction
Classic algorithms such as A* are effective for solving search problems that satisfy the following properties: there is one search agent, the agent has perfect information about the environment, the environment and the goal state do not change, and enough time is given to make an optimal decision. Relaxing even one of these assumptions gives rise to new algorithms, for example real-time search [Korf, 1990] and moving target search [Ishida and Korf, 1995]. Many real-world problems do not satisfy all of these properties. In this paper we use an application domain that deals with multiple agents, a moving goal and a dynamic environment. A moving goal search problem is a planning problem where multiple agents have to catch a moving goal [5]. One application of this area of research is computer games, where computer characters (= cops) have to chase and catch another character (= thief). Here the agents keep changing their positions. While the thief changes its position, the shortest distance is traced with respect to the nearest cop. The moving goal search problem is solved once a cop and the thief are in the same state (= maze cell). We designed a test environment that is as realistic as possible. We assume a maze with obstacles where agents can only "see" what is directly visible to them. Agents are allowed to communicate with any agent that they can see. As the goal flees, it may become obscured from sight. Given whatever knowledge of the goal and other agents is available, an agent must decide how to pursue the goal. The goal's possible locations provide information about where to search; the other agents' possible locations provide information that can be used to coordinate the search effort. The challenge is to have the agents act autonomously to catch the goal as quickly as possible.

Most real-time search algorithms limit the lookahead of each search and thus find only a prefix of a path from the cop to the thief before the cop starts to move. Their advantage is that the cop starts to move in constant time, independent of the number of states. Their disadvantages are that the trajectory of the cop can be highly suboptimal and that it is difficult to determine whether there exists a path from the cop to the thief. Furthermore, they do not scale to large environments due to their memory requirements, which typically grow quadratically in the number of states. Incremental search algorithms, on the other hand, find a complete path from the cop to the thief before the cop starts to move. They run sufficiently fast, and they provide alternatives to real-time search algorithms that avoid these disadvantages. We therefore investigate incremental search algorithms in this paper. There are generally two classes of incremental search algorithms: (1) incremental search algorithms of the first kind use information from previous searches to update the h-values, making them more informed over time and the search more focused; (2) incremental search algorithms of the second kind transform the previous search tree into the current search tree, so that the current search starts with the current search tree instead of from scratch. Examples include Differential A*, Fringe-Retrieving A* (FRA*) and its generalization, Generalized FRA* (G-FRA*). Most incremental search algorithms [12] of the second kind were designed for search problems with stationary start states or in static environments (whose blocked states do not change over time). Thus, they are not efficient for moving goal search problems in dynamic environments (whose blocked states can change over time), since both the start and goal states can change over time. Here, we use Moving Goal A*, an incremental search algorithm for dynamic environments in which both the start and goal states change over time.

2. Related Work
There are generally two classes of approaches for solving moving goal search problems with search algorithms. The first is the offline approach, which takes into account all possible contingencies (namely, movements of the goal and changes in the environment) to find the best plan for the cop, for example using minimax or alpha-beta search. Unfortunately, offline approaches do not scale to large environments due to the large number of contingencies. The second is the online approach, which sacrifices optimality for smaller computation times by interleaving planning and movement, for example by finding the best plan for the cop with the information currently available and allowing the cop to gather additional information while moving. Online approaches can, for example, determine a cost-minimal path from the current state of the cop to the current state of the thief in the current environment. This allows them to find plans for the cop efficiently while being reactive to movements of the thief and changes in the environment. Examples include real-time search algorithms and incremental search algorithms.

3. Problem Definition
A moving goal environment has the following properties:

1. AGENT'S GOAL: Multiple agents have to catch a single moving goal.

2. MAZE: An m x n maze with randomly placed obstacles (blocked states).

3. MOVES: All moves are horizontal or vertical and are made simultaneously. The cop can move from its current unblocked state to any unblocked neighboring state with cost 1 and to a blocked neighboring state with cost infinity. We make no assumptions about how the goal moves.

4. COMMUNICATION: Between moves, agents communicate with any agent that is visible to them. The agents exchange information about where they believe the goal and the other agents are located.

5. OBJECTIVE: Catch the goal in the fewest number of moves by the nearest agent. An agent always follows a cost-minimal path from its current state to the current state of the goal until it reaches the current state of the goal, which is often a reasonable strategy and one that can be computed efficiently.

We use the following notation: S denotes the finite set of all (blocked and unblocked) states; sstart ∈ S denotes the current state of the cop and the start state of the search; and sgoal ∈ S denotes the current state of the thief and the goal state of the search. c(s, s′) denotes the cost of a cost-minimal path from state s ∈ S to state s′ ∈ S. Succ(s) ⊆ S denotes the set of successors of state s ∈ S, and Pred(s) ⊆ S denotes the set of predecessors of state s ∈ S. The cost of moving from state s to its successors, or from its predecessors to state s, can be infinity. In grid worlds, the predecessors and successors of a state are thus its (blocked and unblocked) neighbors.
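
To make this notation concrete, the following is a minimal self-contained sketch in C (the implementation language mentioned in the Observations) of a four-neighbor grid world. The names State, succ and cost, and the INF sentinel standing in for infinite cost, are our illustrative assumptions, not the paper's code.

    #include <stdio.h>
    #include <stdbool.h>

    #define ROWS 10
    #define COLS 25
    #define INF  1000000000            /* sentinel for infinite cost */

    typedef struct { int r, c; } State;  /* a maze cell */

    static bool blocked[ROWS][COLS];     /* obstacle map */

    /* c(s, s'): 1 if the neighboring state s' is unblocked, infinity otherwise. */
    static int cost(State s, State t) {
        (void)s;                         /* cost depends only on the target cell */
        return blocked[t.r][t.c] ? INF : 1;
    }

    /* Succ(s): the (up to four) horizontal/vertical neighbors of s; in grid
       worlds Pred(s) is the same set. Returns how many cells were written. */
    static int succ(State s, State out[4]) {
        static const int dr[4] = {-1, 1, 0, 0};
        static const int dc[4] = {0, 0, -1, 1};
        int n = 0;
        for (int i = 0; i < 4; i++) {
            int r = s.r + dr[i], c = s.c + dc[i];
            if (r >= 0 && r < ROWS && c >= 0 && c < COLS)
                out[n++] = (State){r, c};
        }
        return n;
    }

    int main(void) {
        blocked[2][3] = true;            /* one obstacle for the demo */
        State s = {2, 2}, nb[4];
        int k = succ(s, nb);
        for (int i = 0; i < k; i++)
            printf("neighbor (%d,%d) cost %d\n", nb[i].r, nb[i].c, cost(s, nb[i]));
        return 0;
    }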

Background

3.1 A*
A* [14] is the most popular search algorithm in artificial intelligence and the basis of all search algorithms described in this paper. A* maintains four values for every state s ∈ S: (1) The h-value h(s, sgoal) is a user-provided approximation of c(s, sgoal). The h-values have to be consistent. (2) The g-value g(s) is an approximation of c(sstart, s). Initially, it is infinity. (3) The f-value f(s) := g(s) + h(s, sgoal) is an approximation of the smallest cost of moving from the start state via state s to the goal state. (4) The parent pointer par(s) ∈ Pred(s) points to the parent of state s in the search tree. Initially, it is NULL. The parent pointers are used to extract a cost-minimal path from the start state to the goal state after the search terminates. A* also maintains two data structures: (1) The OPEN list contains all states to be considered for expansion. Initially, it contains only the start state, with g-value zero. (2) The CLOSED list contains all states that have been expanded. Initially, it is empty. (None of the search algorithms described in this paper actually require a CLOSED list but, for ease of description, we pretend they do.) A* repeatedly deletes a state s with the smallest f-value from the OPEN list, inserts it into the CLOSED list and expands it by performing the following operations for each state s′ ∈ Succ(s): if g(s) + c(s, s′) < g(s′), then A* generates s′ by setting g(s′) := g(s) + c(s, s′) and par(s′) := s and, if s′ is not in the OPEN list, inserting it into the OPEN list. A* terminates when the OPEN list is empty or when it expands the goal state. The former condition indicates that no path exists from the start state to the goal state, and the latter condition indicates that A* has found a cost-minimal path. One can therefore solve moving goal search problems in dynamic environments by finding a cost-minimal path with A* from the current state of the cop to the current state of the thief whenever the environment changes or the goal moves off the known path.
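
As an illustration of this description, the following is a minimal self-contained A* sketch in C on a four-neighbor grid with Manhattan h-values. It is a sketch under simplifying assumptions (the OPEN list is scanned linearly rather than kept in a heap, and names such as astar and blocked are ours), not the paper's implementation.

    #include <stdio.h>
    #include <stdbool.h>

    #define ROWS 5
    #define COLS 5
    #define INF  1000000000

    static bool blocked[ROWS][COLS];
    static int  g[ROWS][COLS];
    static int  par[ROWS][COLS];     /* parent as r*COLS+c, -1 = NULL; used for path extraction */
    static bool open_[ROWS][COLS], closed_[ROWS][COLS];  /* OPEN and CLOSED lists */

    static int h(int r, int c, int gr, int gc) {         /* Manhattan distance */
        return (r > gr ? r - gr : gr - r) + (c > gc ? c - gc : gc - c);
    }

    /* Returns the cost of a cost-minimal path from (sr,sc) to (gr,gc), or INF. */
    static int astar(int sr, int sc, int gr, int gc) {
        for (int r = 0; r < ROWS; r++)
            for (int c = 0; c < COLS; c++) {
                g[r][c] = INF; par[r][c] = -1;
                open_[r][c] = closed_[r][c] = false;
            }
        g[sr][sc] = 0; open_[sr][sc] = true;
        for (;;) {
            int br = -1, bc = -1, bf = INF;              /* state with smallest f-value */
            for (int r = 0; r < ROWS; r++)
                for (int c = 0; c < COLS; c++)
                    if (open_[r][c] && g[r][c] + h(r, c, gr, gc) < bf) {
                        bf = g[r][c] + h(r, c, gr, gc); br = r; bc = c;
                    }
            if (br < 0) return INF;                      /* OPEN empty: no path */
            if (br == gr && bc == gc) return g[br][bc];  /* goal expanded */
            open_[br][bc] = false; closed_[br][bc] = true;
            static const int dr[4] = {-1, 1, 0, 0}, dc[4] = {0, 0, -1, 1};
            for (int i = 0; i < 4; i++) {
                int r = br + dr[i], c = bc + dc[i];
                if (r < 0 || r >= ROWS || c < 0 || c >= COLS || blocked[r][c]) continue;
                if (g[br][bc] + 1 < g[r][c]) {           /* generate the successor */
                    g[r][c] = g[br][bc] + 1;
                    par[r][c] = br * COLS + bc;
                    open_[r][c] = true;
                }
            }
        }
    }

    int main(void) {
        for (int c = 1; c < COLS; c++) blocked[2][c] = true;  /* wall with a gap at column 0 */
        printf("path cost: %d\n", astar(0, 4, 4, 4));         /* detour around the wall: 12 */
        return 0;
    }

On this 5 x 5 instance, the straight-line (Manhattan) estimate is 4, but the wall forces the search to detour through the gap at column 0, so the program prints a path cost of 12.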

3.2 G-FRA*
Generalized Fringe-Retrieving A* (G-FRA*) [14] is an incremental search algorithm that generalizes Fringe-Retrieving A* (FRA*) from grid worlds to arbitrary directed graphs. We use the principle behind G-FRA* in one of the proposed search algorithms. G-FRA* cannot solve moving goal search problems in dynamic environments, but it solves moving goal search problems in static environments by finding a cost-minimal path with A* from the current state of the cop to the current state of the thief whenever the goal moves off the known path. Each A* run is called a search iteration. In each search iteration, G-FRA* first transforms the previous search tree into the initial search tree, which consists of the initial OPEN and CLOSED lists. It then starts A* with the initial OPEN and CLOSED lists instead of from scratch. Thus, G-FRA* inherits the properties of A*; for example, it finds cost-minimal paths from the current state of the cop to the current state of the thief. The initial search tree is the subtree of the previous search tree rooted at the current start state. Thus, the initial and previous search trees are different if the cop moved. To determine the initial OPEN and CLOSED lists, G-FRA* maintains a DELETED list, which contains all states that are in the previous search tree but not in the initial search tree. Then, the initial CLOSED list contains those states that are in the previous CLOSED list but not in the DELETED list, and the initial OPEN list contains both those states that are in the previous OPEN list but not in the DELETED list and those states that are in the DELETED list and are successors of states in the initial CLOSED list.
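
To make the DELETED list concrete, here is a minimal self-contained sketch in C of how it can be determined from parent pointers alone: a state belongs to the initial search tree iff following its parent chain reaches the new start state. The integer state encoding, the reaches() helper and the parent-chain walk are our illustrative assumptions; G-FRA* itself organizes this step more efficiently.

    #include <stdio.h>

    #define N 16                      /* states 0..N-1; -1 means NULL parent */

    static int par[N];                /* parent pointers of the previous search tree */
    static int in_tree[N];            /* 1 if the state was in the previous tree */

    static int reaches(int s, int root) {  /* does s's parent chain reach root? */
        while (s != -1) {
            if (s == root) return 1;
            s = par[s];
        }
        return 0;
    }

    int main(void) {
        /* previous search tree: 0 -> 1 -> 2 and 0 -> 3; the start moves to state 1 */
        for (int s = 0; s < N; s++) { par[s] = -1; in_tree[s] = 0; }
        par[1] = 0; par[2] = 1; par[3] = 0;
        in_tree[0] = in_tree[1] = in_tree[2] = in_tree[3] = 1;
        int new_start = 1;
        for (int s = 0; s < N; s++)
            if (in_tree[s] && !reaches(s, new_start))
                printf("DELETED: state %d\n", s);   /* prints 0 and 3 */
        return 0;
    }

On this tiny tree, states 0 and 3 end up on the DELETED list after the start moves from state 0 to state 1, while the subtree {1, 2} is retained as the initial search tree.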

4. Moving Goal A*
Moving Goal A* is an incremental search algorithm that forms the basis of the proposed search algorithms. Moving Goal A* solves sequences of search problems in dynamic environments where the start state changes over time by repeatedly transforming the previous search tree into the current search tree. It can be extended in a straightforward way to solve moving goal search problems in dynamic environments by finding a cost-minimal path from the current state of the cop to the current state of the thief whenever the environment changes or the goal moves off the known path. In each search iteration, Moving Goal A* first shifts the map to keep the start state stationary (for example, it shifts the map one unit south if the cop moved one unit north) and then updates the g-values and parent pointers of states as necessary until it has found a cost-minimal path from the current state of the cop to the current state of the thief.
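
The map shift can be realized without copying the maze by translating coordinates, as in the following minimal sketch in C; the accumulated offset representation and all names are our illustrative assumptions rather than the paper's code.

    #include <stdio.h>

    typedef struct { int r, c; } State;

    static State offset = {0, 0};        /* accumulated map shift */

    /* When the cop moves from old_start to new_start, shift the map so that
       the search still sees the start at its old coordinates. */
    static void shift_map(State old_start, State new_start) {
        offset.r += old_start.r - new_start.r;
        offset.c += old_start.c - new_start.c;
    }

    static State to_search(State s) {    /* agent coordinates -> search coordinates */
        return (State){ s.r + offset.r, s.c + offset.c };
    }

    int main(void) {
        State old_start = {3, 3}, new_start = {2, 3};  /* cop moved one unit north */
        shift_map(old_start, new_start);               /* map shifts one unit south */
        State seen = to_search(new_start);
        printf("start seen by the search: (%d,%d)\n", seen.r, seen.c);  /* (3,3) again */
        return 0;
    }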

4.1 Algorithm

1  function CalculateKey(s)
2    return [min(g(s), rhs(s)) + h(s, sgoal) + km; min(g(s), rhs(s))];
3  procedure Initialize()
4    OPEN := CLOSED := ∅;
5    km := 0;
6    for all s ∈ S
7      rhs(s) := g(s) := ∞;
8      par(s) := NULL;
9    sstart := the current state of the Cop;
10   sgoal := the current state of the Thief;
11   rhs(sstart) := 0;
12   OPEN.Insert(sstart, CalculateKey(sstart));
13 procedure UpdateState(u)
14   if (g(u) ≠ rhs(u) AND u ∈ OPEN)
15     OPEN.Update(u, CalculateKey(u));
16   else if (g(u) ≠ rhs(u) AND u ∉ OPEN)
17     OPEN.Insert(u, CalculateKey(u));
18   else if (g(u) = rhs(u) AND u ∈ OPEN)
19     OPEN.Delete(u);
20 procedure ComputeCostMinimalPath()
21   while (OPEN.TopKey() < CalculateKey(sgoal) OR rhs(sgoal) > g(sgoal))
22     u := OPEN.Top();
23     kold := OPEN.TopKey();
24     knew := CalculateKey(u);
25     if (kold < knew) OPEN.Update(u, knew);
26     else if (g(u) > rhs(u))
27       g(u) := rhs(u);
28       OPEN.Delete(u);
29       for all s ∈ Succ(u): if (s ≠ sstart AND rhs(s) > g(u) + c(u, s))
30         par(s) := u;
31         rhs(s) := g(u) + c(u, s);
32         UpdateState(s);
33     else
34       g(u) := ∞;
35       for all s ∈ Succ(u) ∪ {u}: if (s ≠ sstart AND par(s) = u)
36         rhs(s) := min_{s′ ∈ Pred(s)} (g(s′) + c(s′, s));
37         if (rhs(s) = ∞) par(s) := NULL;
38         else
39           par(s) := argmin_{s′ ∈ Pred(s)} (g(s′) + c(s′, s));
40         UpdateState(s);
41 function Main()
42   Initialize();
43   while (sstart ≠ sgoal)
44     soldstart := sstart;
45     soldgoal := sgoal;
46     ComputeCostMinimalPath();
47     if (rhs(sgoal) = ∞) /* no path exists */
48       return false;
49     identify a path from sstart to sgoal using the parent pointers;
50     while (Thief not caught AND Thief on path from sstart to sgoal AND no edge costs changed)
51       Cop follows the path from sstart to sgoal;
52     if (Cop caught the Thief)
53       return true;
54     sstart := the current state of the Cop;
55     sgoal := the current state of the Thief;
56     km := km + h(soldgoal, sgoal);
57     if (soldstart ≠ sstart)
58       shift the map appropriately (which changes sstart and sgoal);
     for all directed edges (u, v) with changed edge costs
59     cold := c(u, v);
60     update the edge cost c(u, v);
61     if (cold > c(u, v)): if (v ≠ sstart AND rhs(v) > g(u) + c(u, v))
         par(v) := u; rhs(v) := g(u) + c(u, v); UpdateState(v);
       else if (v ≠ sstart AND par(v) = u)
62       rhs(v) := min_{s′ ∈ Pred(v)} (g(s′) + c(s′, v));
63       if (rhs(v) = ∞)
64         par(v) := NULL;
65       else
66         par(v) := argmin_{s′ ∈ Pred(v)} (g(s′) + c(s′, v)); UpdateState(v);
67   return true;
68 procedure BasicDeletion()
69   par(sstart) := NULL;
70   rhs(soldstart) := min_{s′ ∈ Pred(soldstart)} (g(s′) + c(s′, soldstart));
71   if (rhs(soldstart) = ∞)
72     par(soldstart) := NULL;
73   else
74     par(soldstart) := argmin_{s′ ∈ Pred(soldstart)} (g(s′) + c(s′, soldstart));
75   UpdateState(soldstart);
76 procedure OptimizedDeletion()
77   DELETED := ∅;
78   par(sstart) := NULL;
79   for all s ∈ S in the previous search tree but not in the subtree rooted at sstart
80     par(s) := NULL;
81     rhs(s) := g(s) := ∞;
82     if (s ∈ OPEN)
83       OPEN.Delete(s);
84     DELETED := DELETED ∪ {s};
85   for all s ∈ DELETED
       for all s′ ∈ Pred(s)
86       if (rhs(s) > g(s′) + c(s′, s))
87         rhs(s) := g(s′) + c(s′, s);
88         par(s) := s′;
89     if (rhs(s) < ∞)
90       OPEN.Insert(s, CalculateKey(s));

Pseudo code of Moving Goal A*


The pseudo code uses the following functions to manage the OPEN list: OPEN.Top() returns a state with the smallest priority of all states in the OPEN list. OPEN.TopKey() returns the smallest priority of all states in the OPEN list. (If the OPEN list is empty, then OPEN.TopKey() returns [∞; ∞].) OPEN.Insert(s, k) inserts state s into the OPEN list with priority k. OPEN.Update(s, k) changes the priority of state s in the OPEN list to k. OPEN.Delete(s) deletes state s from the OPEN list. Priorities are compared lexicographically. The minimum of an empty set is infinity.
ComputeCostMinimalPath() determines a cost-minimal path from the start state to the goal state. Before it is called, states can have arbitrary g-values, but their rhs-values and parent pointers have to satisfy Eqs. 1-4 and the OPEN list has to contain all locally inconsistent states. The runtime of ComputeCostMinimalPath() is typically higher the more g-values it updates.
Moving Goal A* maintains an h-value, g-value, f-value and parent pointer for every state s, with meanings similar to those used by A*, but it also maintains an rhs-value. The rhs-value is defined to be

rhs(s) = 0 if s = sstart (Eq. 1), and
rhs(s) = min_{s′ ∈ Pred(s)} (g(s′) + c(s′, s)) otherwise (Eq. 2),

and the parent pointer is defined to be

par(s) = NULL if rhs(s) = ∞ (Eq. 3), and
par(s) = argmin_{s′ ∈ Pred(s)} (g(s′) + c(s′, s)) otherwise (Eq. 4).

Thus, the rhs-value is basically a one-step lookahead g-value. State s is called locally inconsistent iff g(s) ≠ rhs(s). The f-value of state s is f(s) = min(g(s), rhs(s)) + h(s, sgoal) + km, the first component of its priority in the OPEN list as computed by CalculateKey().
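
The one-step lookahead nature of the rhs-value (Eqs. 1-2) and the local consistency test can be illustrated with a small self-contained C sketch on an explicit graph; the tiny graph, the INF sentinel and all names are illustrative assumptions.

    #include <stdio.h>

    #define N   4
    #define INF 1000000000              /* prints as the sentinel value */

    static int g[N]   = {0, 1, INF, INF};   /* current g-values */
    static int c_[N][N];                    /* c_[u][v] = cost of edge u->v */
    static int s_start = 0;

    static int rhs(int s) {
        if (s == s_start) return 0;                     /* Eq. 1 */
        int best = INF;
        for (int p = 0; p < N; p++)                     /* Eq. 2: min over Pred(s) */
            if (c_[p][s] < INF && g[p] < INF && g[p] + c_[p][s] < best)
                best = g[p] + c_[p][s];
        return best;
    }

    int main(void) {
        for (int u = 0; u < N; u++)
            for (int v = 0; v < N; v++) c_[u][v] = INF;
        c_[0][1] = c_[1][2] = c_[2][3] = 1;             /* a chain 0 -> 1 -> 2 -> 3 */
        for (int s = 0; s < N; s++)
            printf("state %d: g=%d rhs=%d %s\n", s, g[s], rhs(s),
                   g[s] == rhs(s) ? "consistent" : "locally inconsistent");
        return 0;
    }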

Results

Moving Goal A* calls OptimizedDeletion() on Line 58 instead of shifting the map. The basic variant expands the states in the previous search tree that are not in the subtree rooted at the current start state one at a time, setting their g-values to infinity when needed. Whenever it sets the g-value of a state to infinity, it needs to update the rhs-values, parent pointers and OPEN-list memberships of all successors of that state. Optimized Moving Goal A*, on the other hand, eagerly uses a specialized procedure that operates in two phases. It maintains a DELETED list, which contains all states in the previous search tree that are not in the subtree rooted at the current start state, which is exactly the definition that G-FRA* uses. First, Moving Goal A* sets the parent pointer of the current start state to NULL on Line 78. Then, Phase 1 on Lines 79-84 eagerly sets the g-values of all states in the DELETED list to infinity in one pass.

We perform our experiments in four-neighbor maze worlds of size up to 200 x 200 with 20 percent randomly blocked states and randomly chosen start and goal states. The hunter always knows the blockage statuses of all states. The goal always follows a cost-minimal path from its current state to a randomly selected unblocked state and repeats the process whenever it reaches that state or cannot move because its path is blocked. The goal randomly skips moves to ensure that the hunter catches it. We perform the experiments in two different environments. In static environments, the blockage statuses of states do not change. In dynamic environments, we randomly block B unblocked states and unblock B blocked states after every move of the hunter, in a way that ensures that there always exists a path from the current state of the cop to the current state of the thief. We use Manhattan distances as consistent h-values.
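
A minimal sketch of this experimental setup in C, assuming rand()-based sampling and illustrative names, might look as follows.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ROWS 200
    #define COLS 200

    static int blocked[ROWS][COLS];

    int main(void) {
        srand((unsigned)time(NULL));
        for (int r = 0; r < ROWS; r++)
            for (int c = 0; c < COLS; c++)
                blocked[r][c] = (rand() % 100) < 20;   /* 20 percent blocked */
        int sr, sc, gr, gc;                            /* random unblocked start/goal */
        do { sr = rand() % ROWS; sc = rand() % COLS; } while (blocked[sr][sc]);
        do { gr = rand() % ROWS; gc = rand() % COLS; }
        while (blocked[gr][gc] || (gr == sr && gc == sc));
        printf("start (%d,%d) goal (%d,%d)\n", sr, sc, gr, gc);
        return 0;
    }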
Below, the sample output of the Moving Goal A* algorithm on a 10 x 25 maze is given.

Figure 1: Step-by-step iterations 1 to 9.


Figure 2: Step-by-step iterations 10 to 18.

Figure 3: Cop and Thief in the same cell H.


Finally, Phase 2 on Lines 85-90 updates the rhs-values, parent pointers and OPEN-list memberships of all potentially affected states in one pass; these are the states in the DELETED list. Their rhs-values, parent pointers and OPEN-list memberships can be updated in one pass since they depend only on the g-values of their predecessors, which do not change in Phase 2. If the g-values of all predecessors of such a state are infinity, which is likely the case for many of these states due to Phase 1, its rhs-value has to be updated to infinity, its parent pointer has to be updated to NULL and it has to be deleted from the OPEN list since it is not locally inconsistent. Phase 1 therefore sets the rhs-values of all states in the DELETED list to infinity, sets their parent pointers to NULL and removes them from the OPEN list. Phase 2 then only updates the rhs-values (according to Eqs. 1-2) and parent pointers (according to Eqs. 3-4) of the few exceptions and inserts them into the OPEN list if they are locally inconsistent. Overall, Phase 1 is thus similar to Step 2 (Deleting States) of G-FRA*, and Phase 2 is similar to Step 4 (Inserting States) of G-FRA*. The runtimes of both phases are small. Phases 1 and 2 iterate over all states in the DELETED list. Phase 2 also iterates over all predecessors of the states in the DELETED list. Thus, each state in the DELETED list can require n operations on n-neighbor gridworlds. Both phases manipulate the OPEN list which, if it is implemented as a binary heap, requires the execution of heap operations to keep it sorted only once, namely at the end of Phase 2, since it can remain unsorted until then.
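
The two phases can also be illustrated on a tiny explicit graph with the following self-contained C sketch; the membership flag standing in for the OPEN list, the tiny graph and all names are our illustrative assumptions, and the full procedure is given in the pseudo code above (Lines 77-90).

    #include <stdio.h>

    #define N   4
    #define INF 1000000000

    static int g[N], rhs_[N], par_[N], in_open[N];
    static int c_[N][N];
    static int deleted[N], ndel = 0;

    int main(void) {
        for (int u = 0; u < N; u++) {
            g[u] = rhs_[u] = INF; par_[u] = -1; in_open[u] = 0;
            for (int v = 0; v < N; v++) c_[u][v] = INF;
        }
        c_[0][1] = c_[1][2] = c_[0][3] = 1;
        g[0] = 0; g[1] = 1; g[2] = 2; g[3] = 1;   /* previous tree: 0->1->2 and 0->3 */
        par_[1] = 0; par_[2] = 1; par_[3] = 0;

        /* Suppose the start moved to state 1: states 0 and 3 leave the tree. */
        deleted[ndel++] = 0; deleted[ndel++] = 3;

        /* Phase 1: wipe g, rhs and parent of every DELETED state in one pass. */
        for (int i = 0; i < ndel; i++) {
            int s = deleted[i];
            g[s] = rhs_[s] = INF; par_[s] = -1; in_open[s] = 0;
        }
        /* Phase 2: recompute rhs and parent from the surviving predecessors
           and reinsert into OPEN only the still locally inconsistent states. */
        for (int i = 0; i < ndel; i++) {
            int s = deleted[i];
            for (int p = 0; p < N; p++)
                if (c_[p][s] < INF && g[p] < INF && g[p] + c_[p][s] < rhs_[s]) {
                    rhs_[s] = g[p] + c_[p][s]; par_[s] = p;
                }
            if (rhs_[s] < INF) in_open[s] = 1;
        }
        for (int s = 0; s < N; s++)
            printf("state %d: g=%d rhs=%d open=%d\n", s, g[s], rhs_[s], in_open[s]);
        return 0;
    }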
In the maze, S, T, U, V and W are the start states (cops) and G is the goal state (thief). They randomly change their positions. The thief frequently changes its position, and the shortest distance is traced with respect to the nearest cop. The search ends when the thief and a cop are in the same cell; that cell is labeled H.
Figure 4: Results Table


Figure 5: Runtime comparison

Observations

It took too long to find the shortest paths on grids of size 200 x 200, which is why these results are omitted from the table.

All path planning methods were implemented in C and executed on a 3.7 GHz Core 2 Duo with 4 GByte of RAM. Our implementations are not optimized for performance and can possibly be improved.

In dynamic environments, G-FRA* has runtimes that are independent of B (the number of states whose blockage status changes after every move of the hunter) because it constructs its search trees from scratch and thus expands about the same number of states regardless of B. Per search, G-FRA* re-expands some of the states in the subtree of the previous search tree rooted at the current start state. In both static and dynamic environments, Moving Goal A* has smaller runtimes than G-FRA* because G-FRA* constructs its current search tree from scratch, whereas Moving Goal A* reuses the previous search tree and hence expands fewer states per search. In both static and dynamic environments, basic Moving Goal A* expands the states in the previous search tree that are not in the subtree rooted at the current start state in order to set their g-values to infinity; optimized Moving Goal A*, on the other hand, uses OptimizedDeletion() instead, which runs faster and results in only a slightly larger number of deleted and expanded states. In static environments, Moving Goal A* expands all locally inconsistent states whose f-values are smaller than the f-value of the goal state. Moving Goal A* has the smallest runtime in dynamic environments, where it is four to five times faster than G-FRA*.

In dynamic environments, Moving Goal A* has runtimes that increase with B because it updates more g-values as B increases and hence expands more states per search. In both static and dynamic environments, all states from the previous search tree that are not reused are deleted.

Conclusion and Future Work

Existing incremental search algorithms are slow on moving goal search problems in dynamic environments. In this paper, we therefore introduced Moving Goal A*, which improves on the existing Generalized Fringe-Retrieving A* and uses the principle behind A* to solve moving goal search problems in dynamic environments fast. We demonstrated experimentally that Moving Goal A* is four to five times faster than Generalized Fringe-Retrieving A*, which so far was believed to be the fastest incremental search algorithm for solving moving goal search problems in dynamic environments. It is future work to investigate how to speed up the computation of more sophisticated strategies for the cop [13].

References
[1] E. A. Hansen and R. Zhou. Anytime heuristic search. Journal of Artificial Intelligence Research, 28:267-297, 2007.
[2] D. Davidov and S. Markovitch. Multiple-goal heuristic search. Journal of Artificial Intelligence Research, 26:417-451, 2006.
[3] V. Bulitko and M. Luštrek. Dynamic control in real-time heuristic search. Journal of Artificial Intelligence Research, 32:419-452, 2008.
[4] A. Bagchi and P. Srimani. Weighted heuristic search in networks. Journal of Algorithms, 6:550-576, 1985.
[5] B. Bonet and H. Geffner. Planning as heuristic search. Artificial Intelligence, 129(1):5-33, 2001.
[6] H. Carrillo and D. Lipman. The multiple sequence alignment problem in biology. SIAM Journal of Applied Mathematics, 48(5):1073-1082, 1988.
[7] P. Chakrabarti, S. Ghosh and S. DeSarkar. Admissibility of AO* when heuristics overestimate. Artificial Intelligence, 34(1):97-113, 1988.
[8] H. Davis, A. Bramanti-Gregor and J. Wang. The advantages of using depth and breadth components in heuristic search. In Z. Ras and L. Saitta (Eds.), Methodologies for Intelligent Systems 3, pages 19-28, 1988.
[9] R. Korf. Real-time heuristic search. Artificial Intelligence, 42(2-3):189-211, 1990.
[10] M. Shimbo and T. Ishida. Controlling the learning process of real-time heuristic search. Artificial Intelligence, 146(1):1-41, 2003.
[11] G. Pandurangan, P. Raghavan and E. Upfal. Using PageRank to characterize Web structure. In 8th Annual International Computing and Combinatorics Conference (COCOON), pages 330-339, 2002.
[12] T. Ishida and R. Korf. Moving target search. In Proceedings of IJCAI, pages 204-211, 1991.
[13] C. Undeger and F. Polat. Multi-agent real-time pursuit. Autonomous Agents and Multi-Agent Systems, pages 1-39, 2009.
[14] P. Hart, N. Nilsson and B. Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 2:100-107, 1968.
