Note: This is not yet ready, but shows the direction I'm leaning in for Fourth Edition Search.
State-Space Search
This notebook describes several state-space search algorithms, and how they can be used to solve a variety of problems. We start with a simple algorithm and a simple domain: finding a route from city to city. Later we will explore other algorithms and domains.
The Route-Finding Domain
Like all state-space search problems, in a route-finding problem you will be given:
- A start state (for example, 'A' for the city Arad).
- A goal state (for example, 'B' for the city Bucharest).
- Actions that can change state (for example, driving from 'A' to 'S').
You will be asked to find:
A path from the start state, through intermediate states, to the goal state.
We'll use this map:
A state-space search problem can be represented by a graph, where the vertices of the graph are the states of the problem (in this case, cities) and the edges of the graph are the actions (in this case, driving along a road).
We'll represent a city by its single initial letter. We'll represent the graph of connections as a dict
that maps each city to a list of the neighboring cities (connected by a road). For now we don't explicitly represent the actions, nor the distances between cities.
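For concreteness, here is a sketch of such a dict. It assumes the standard Romania road map from the AIMA textbook; the neighbor lists below are a reconstruction from that map, not necessarily the notebook's exact cell:

```python
# A sketch of the connections dict, assuming the standard AIMA Romania map,
# with each city abbreviated to its initial letter (A = Arad, B = Bucharest, ...).
neighbors = {
    'A': ['Z', 'T', 'S'],  'B': ['F', 'P', 'G', 'U'],  'C': ['D', 'R', 'P'],
    'D': ['M', 'C'],       'E': ['H'],                 'F': ['S', 'B'],
    'G': ['B'],            'H': ['U', 'E'],            'I': ['N', 'V'],
    'L': ['T', 'M'],       'M': ['L', 'D'],            'N': ['I'],
    'O': ['Z', 'S'],       'P': ['R', 'C', 'B'],       'R': ['S', 'C', 'P'],
    'S': ['A', 'O', 'F', 'R'], 'T': ['A', 'L'],        'U': ['B', 'V', 'H'],
    'V': ['U', 'I'],       'Z': ['O', 'A']}

# Sanity check: every road should appear in both directions.
assert all(a in neighbors[b] for a in neighbors for b in neighbors[a])
```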
Suppose we want to get from A to B. Where can we go from the start state, A?
We see that from A we can get to any of the three cities ['Z', 'T', 'S']. Which should we choose? We don't know. That's the whole point of search: we don't know which immediate action is best, so we'll have to explore until we find a path that leads to the goal.
How do we explore? We'll start with a simple algorithm that will get us from A to B. We'll keep a frontier (a collection of not-yet-explored states) and expand the frontier outward until it reaches the goal. To be more precise:
- Initially, the only state in the frontier is the start state, 'A'.
- Until we reach the goal, or run out of states in the frontier to explore, do the following:
  - Remove the first state from the frontier. Call it s.
  - If s is the goal, we're done. Return the path to s.
  - Otherwise, consider all the neighboring states of s. For each one:
    - If we have not previously explored the state, add it to the end of the frontier.
    - Also keep track of the previous state that led to this new neighboring state; we'll need this to reconstruct the path to the goal, and to keep us from re-visiting previously explored states.
A Simple Search Algorithm: breadth_first
The function breadth_first implements this strategy:
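A minimal sketch of such a function, taking the start state, the goal state, and the dict of neighbors (the notebook's actual cell may differ in details):

```python
from collections import deque

def breadth_first(start, goal, neighbors):
    """Find a shortest path of states from start to goal, where neighbors
    is a dict of {state: [neighboring_states]}. Returns None if no path exists."""
    frontier = deque([start])        # FIFO queue of states to explore
    previous = {start: None}         # {state: the state that led to it}
    while frontier:
        s = frontier.popleft()
        if s == goal:
            return path(previous, s)
        for s2 in neighbors[s]:
            if s2 not in previous:   # neither explored nor on the frontier
                frontier.append(s2)
                previous[s2] = s
    return None

def path(previous, s):
    """The list of states leading up to s, recovered from the previous pointers."""
    return [] if s is None else path(previous, previous[s]) + [s]
```

With the city map as the dict of neighbors, breadth_first('A', 'B', ...) returns a shortest sequence of cities from Arad to Bucharest.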
A couple of things to note:
- We always add new states to the end of the frontier queue. That means that all the states that are adjacent to the start state will come first in the queue, then all the states that are two steps away, then three steps, etc. That's what we mean by breadth-first search.
- We recover the path to an end state by following the trail of previous[end] pointers, all the way back to start. The dict previous is a map of {state: previous_state}.
- When we finally get an s that is the goal state, we know we have found a shortest path, because any other state in the queue must correspond to a path that is as long or longer.
- Note that previous contains all the states that are currently in frontier as well as all the states that were in frontier in the past.
- If no path to the goal is found, then breadth_first returns None. If a path is found, it returns the sequence of states on the path.
Some examples:
Now let's try a different kind of problem that can be solved with the same search function.
Word Ladders Problem
A word ladder problem is this: given a start word and a goal word, find the shortest way to transform the start word into the goal word by changing one letter at a time, such that each change results in a word. For example, starting with green we can reach grass in 7 steps:

green → greed → treed → trees → tress → cress → crass → grass
We will need a dictionary of words. We'll use 5-letter words from the Stanford GraphBase project for this purpose. Let's get that file from aimadata.
We can assign WORDS to be the set of all the words in this file:
And define neighboring_words to return the set of all words that are a one-letter change away from a given word:
For example:
Now we can create word_neighbors as a dict of {word: {neighboring_word, ...}}:
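A sketch of both definitions. To keep the sketch self-contained, neighboring_words takes WORDS as an explicit argument and a toy word set stands in for the Stanford GraphBase file; the notebook's cells may differ:

```python
import string

def neighboring_words(word, WORDS):
    """The set of words in WORDS that differ from word in exactly one letter."""
    return {w for w in (word[:i] + c + word[i + 1:]
                        for i in range(len(word))
                        for c in string.ascii_lowercase)
            if w != word and w in WORDS}

# A toy word set; the notebook uses the Stanford GraphBase 5-letter words.
WORDS = {'green', 'greed', 'treed', 'trees', 'tress', 'cress', 'crass', 'grass'}

# word_neighbors maps each word to its set of neighboring words.
word_neighbors = {word: neighboring_words(word, WORDS) for word in WORDS}
```

The resulting dict can then be passed as the neighbors argument to breadth_first to solve ladders such as green → grass.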
Now the breadth_first function can be used to solve a word ladder problem:
More General Search Algorithms
Now we'll embellish the breadth_first algorithm to make a family of search algorithms with more capabilities:
- We distinguish between an action and the result of an action.
- We allow different measures of the cost of a solution (not just the number of steps in the sequence).
- We search through the state space in an order that is more likely to lead to an optimal solution quickly.
Here's how we do these things:
- Instead of having a graph of neighboring states, we instead have an object of type Problem. A Problem has one method, Problem.actions(state), to return a collection of the actions that are allowed in a state, and another method, Problem.result(state, action), that says what happens when you take an action.
- We keep a set, explored, of states that have already been explored. We also have a class, Frontier, that makes it efficient to ask if a state is on the frontier.
- Each action has a cost associated with it (in fact, the cost can vary with both the state and the action).
- The Frontier class acts as a priority queue, allowing the "best" state to be explored next. We represent a sequence of actions and resulting states as a linked list of Node objects.
The algorithm breadth_first_search is basically the same as breadth_first, but using our new conventions:
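The conventions above can be sketched as follows. For simplicity this version records whole paths as lists rather than linked Node objects, and assumes the problem object has the initial attribute and the actions, result, and is_goal methods described in the Search Problems section below:

```python
from collections import deque

def breadth_first_search(problem):
    """Search shallowest states first; return the list of states on a
    shortest path to a goal, or None if there is no path."""
    if problem.is_goal(problem.initial):
        return [problem.initial]
    frontier = deque([[problem.initial]])    # FIFO queue of paths
    explored = {problem.initial}             # states already seen
    while frontier:
        path = frontier.popleft()
        for action in problem.actions(path[-1]):
            state = problem.result(path[-1], action)
            if state not in explored:
                if problem.is_goal(state):
                    return path + [state]
                explored.add(state)
                frontier.append(path + [state])
    return None
```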
Next is uniform_cost_search, in which each step can have a different cost, and we always explore next one of the states with minimum path cost so far.
Finally, astar_search, in which the cost includes an estimate of the distance to the goal as well as the distance travelled so far.
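Both can be seen as best-first search with different cost functions f. Here is a self-contained sketch, using (state, path_cost, path) tuples instead of the notebook's Node objects and a heapq-based frontier; it assumes the problem interface described in the Search Problems section below, including step_cost:

```python
import heapq
import itertools

def best_first_search(problem, f):
    """Expand states in increasing order of f(node), where a node is a
    (state, path_cost, path) tuple. Returns (path, cost) or None."""
    counter = itertools.count()          # tie-breaker so heapq never compares states
    start = (problem.initial, 0, [problem.initial])
    frontier = [(f(start), next(counter), start)]
    best_cost = {problem.initial: 0}     # cheapest known cost to each state
    while frontier:
        _, _, (state, cost, path) = heapq.heappop(frontier)
        if problem.is_goal(state):
            return path, cost
        if cost > best_cost.get(state, float('inf')):
            continue                     # stale entry; a cheaper path was found later
        for action in problem.actions(state):
            s2 = problem.result(state, action)
            c2 = cost + problem.step_cost(state, action)
            if c2 < best_cost.get(s2, float('inf')):
                best_cost[s2] = c2
                node = (s2, c2, path + [s2])
                heapq.heappush(frontier, (f(node), next(counter), node))
    return None

def uniform_cost_search(problem):
    "Explore in order of path cost so far."
    return best_first_search(problem, f=lambda node: node[1])

def astar_search(problem, h):
    "Explore in order of path cost so far plus estimated cost h(state) to the goal."
    return best_first_search(problem, f=lambda node: node[1] + h(node[0]))
```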
Search Tree Nodes
The solution to a search problem is now a linked list of Nodes, where each Node includes a state and the path_cost of getting to the state. In addition, for every Node except for the first (root) Node, there is a previous Node (indicating the state that led to this Node) and an action (indicating the action taken to get here).
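A minimal sketch of such a Node class, with attribute names following the description above (the notebook's actual class may differ):

```python
class Node:
    """A node in a search tree: a state plus the path that reached it."""
    def __init__(self, state, previous=None, action=None, path_cost=0):
        self.state = state          # the state this node has arrived at
        self.previous = previous    # the Node this one came from (None for the root)
        self.action = action        # the action taken to get here (None for the root)
        self.path_cost = path_cost  # total cost of the path from the root to here

    def __repr__(self):
        return 'Node({!r}, path_cost={})'.format(self.state, self.path_cost)

def path_states(node):
    """The list of states on the path from the root to this node."""
    return [] if node is None else path_states(node.previous) + [node.state]
```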
Frontiers
A frontier is a collection of Nodes that acts like both a Queue and a Set. A frontier, f, supports these operations:
- f.add(node): Add a node to the Frontier.
- f.pop(): Remove and return the "best" node from the frontier.
- f.replace(node): Add this node and remove a previous node with the same state.
- state in f: Test if some node in the frontier has arrived at state.
- f[state]: Return the node corresponding to this state in the frontier.
- len(f): The number of Nodes in the frontier. When the frontier is empty, f is false.
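A sketch of a priority-queue frontier supporting the operations above, assuming nodes are objects with .state and .path_cost attributes; replaced heap entries are left in place and skipped lazily on pop (the notebook's implementation may differ):

```python
import heapq

class FrontierPQ:
    """A frontier of nodes, ordered by a cost function over nodes."""
    def __init__(self, initial, costfn=lambda node: node.path_cost):
        self.costfn = costfn
        self.heap = []        # heap entries are (cost, count, node)
        self.states = {}      # {state: node} for nodes currently in the frontier
        self.count = 0        # tie-breaker so heapq never compares nodes directly
        self.add(initial)

    def add(self, node):
        heapq.heappush(self.heap, (self.costfn(node), self.count, node))
        self.count += 1
        self.states[node.state] = node

    def pop(self):
        while True:
            cost, count, node = heapq.heappop(self.heap)
            if self.states.get(node.state) is node:  # skip stale (replaced) entries
                del self.states[node.state]
                return node

    def replace(self, node):
        """Add node, superseding the existing node with the same state."""
        self.add(node)        # the old heap entry becomes stale and is skipped by pop

    def __contains__(self, state): return state in self.states
    def __getitem__(self, state):  return self.states[state]
    def __len__(self):             return len(self.states)
```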
We provide two kinds of frontiers: One for "regular" queues, either first-in-first-out (for breadth-first search) or last-in-first-out (for depth-first search), and one for priority queues, where you can specify what cost function on nodes you are trying to minimize.
Search Problems
Problem is the abstract class for all search problems. You can define your own class of problems as a subclass of Problem. You will need to override the actions and result methods to describe how your problem works. You will also have to either override is_goal or pass a collection of goal states to the initialization method. If actions have different costs, you should override the step_cost method.
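A sketch of the abstract class, matching the interface described above (the notebook's signatures may differ slightly):

```python
class Problem:
    """Abstract class for a search problem. Subclasses override actions
    and result, and either override is_goal or pass goal states to __init__."""
    def __init__(self, initial=None, goals=(), **additional):
        self.initial = initial
        self.goals = goals
        self.__dict__.update(additional)

    def actions(self, state):
        "The collection of actions allowed in this state."
        raise NotImplementedError

    def result(self, state, action):
        "The state that results from taking this action in this state."
        raise NotImplementedError

    def is_goal(self, state):
        "True if the state is a goal state."
        return state in self.goals

    def step_cost(self, state, action):
        "The cost of taking this action; override if actions have different costs."
        return 1
```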
Two Location Vacuum World
Water Pouring Problem
Here is another problem domain, to show you how to define one. The idea is that we have a number of water jugs and a water tap and the goal is to measure out a specific amount of water (in, say, ounces or liters). You can completely fill or empty a jug, but because the jugs don't have markings on them, you can't partially fill them with a specific amount. You can, however, pour one jug into another, stopping when the second is full or the first is empty.
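A sketch of how PourProblem might look. A state is a tuple of current water levels; the notebook's version subclasses Problem, while this standalone sketch just supplies the same method names:

```python
class PourProblem:
    """Measure out a goal amount of water using jugs of given capacities."""
    def __init__(self, initial, goal, capacities):
        self.initial, self.goal, self.capacities = initial, goal, capacities

    def actions(self, state):
        "Fill or empty any jug, or pour one jug into another."
        jugs = range(len(state))
        return ([('Fill', i) for i in jugs if state[i] < self.capacities[i]] +
                [('Empty', i) for i in jugs if state[i] > 0] +
                [('Pour', i, j) for i in jugs for j in jugs
                 if i != j and state[i] > 0 and state[j] < self.capacities[j]])

    def result(self, state, action):
        levels = list(state)
        if action[0] == 'Fill':
            levels[action[1]] = self.capacities[action[1]]
        elif action[0] == 'Empty':
            levels[action[1]] = 0
        else:  # Pour i into j, stopping when j is full or i is empty
            _, i, j = action
            amount = min(levels[i], self.capacities[j] - levels[j])
            levels[i] -= amount
            levels[j] += amount
        return tuple(levels)

    def is_goal(self, state):
        "The goal is reached when any jug contains exactly the goal amount."
        return self.goal in state
```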
Visualization Output
Random Grid
An environment where you can move in any of 4 directions, unless there is an obstacle there.
Finding a hard PourProblem
What solvable two-jug PourProblem requires the most steps? We can define the hardness as the number of steps, and then iterate over all PourProblems with capacities up to size M, keeping the hardest one.
Simulated Annealing Visualization using TSP
Applying simulated annealing to the traveling salesman problem to find the shortest tour through all the cities in Romania. The distance between two cities is taken as the Euclidean distance.
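A generic sketch of simulated annealing for TSP, not the notebook's exact code: a random tour is perturbed by swapping two cities, worse tours are accepted with probability exp(-delta/T), and the temperature schedule and step counts are arbitrary choices:

```python
import math
import random

def tour_length(tour, cities):
    """Total Euclidean length of a closed tour over {name: (x, y)} cities."""
    return sum(math.dist(cities[tour[i]], cities[tour[i - 1]])
               for i in range(len(tour)))

def simulated_annealing_tsp(cities, T=100.0, cooling=0.999, steps=20000, seed=0):
    """Anneal a random tour of the cities; return the best tour found."""
    rng = random.Random(seed)
    tour = list(cities)
    rng.shuffle(tour)
    best = list(tour)
    for _ in range(steps):
        # Propose a neighbor tour by swapping two cities.
        i, j = rng.sample(range(len(tour)), 2)
        candidate = list(tour)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        delta = tour_length(candidate, cities) - tour_length(tour, cities)
        # Always accept improvements; accept worse tours with probability exp(-delta/T).
        if delta < 0 or rng.random() < math.exp(-delta / T):
            tour = candidate
            if tour_length(tour, cities) < tour_length(best, cities):
                best = list(tour)
        T *= cooling  # geometric cooling schedule (an arbitrary choice)
    return best
```

Recomputing tour_length from scratch each step keeps the sketch short; a serious implementation would update the length incrementally.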
Iterative Simulated Annealing
Providing the output of the previous run as input to the next run to give better performance.