2.2  Search Basics and Uninformed Search Algorithms

Search turns a problem formulation into concrete action sequences. We will start with uninformed search—algorithms that do not use domain-specific heuristics—and build up to uniform-cost search (UCS), which handles non-uniform action costs.

Note on Continuous State Spaces

Search algorithms typically operate on discrete state spaces, where states can be enumerated. Continuous state spaces often require either discretization or different techniques altogether, which we will not cover here. This is important to note for robotics applications, where the robot's configuration space is often best modeled as continuous.

2.2.1 Introducing Search

We begin with a definition of search.

Definition 2.7  Search

Search is the process of exploring a state space by generating and expanding nodes that represent possible states, with the goal of finding a path from a start state to a goal state.

A search algorithm systematically explores the state space by generating nodes, expanding them to produce successor nodes, and checking for goal states. This is represented abstractly as a search tree or graph. Note that in many cases, a search tree or graph is a much smaller abstraction of the full state space, focusing only on relevant states and transitions. Furthermore, the search tree or graph is often generated lazily during the search process, rather than being precomputed in full.

Definition 2.8  Node

A node is an element of a search tree or graph representing a state in the state space.

Definition 2.9  Edge

An edge connects two nodes, representing a transition between states via an action.

In a search tree, a single state can appear in multiple places. In contrast, in a search graph each unique state appears only once, as illustrated below.

      Tree                          Graph
        A                         ----A----
       / \                        |  / \  |
      B   C                       | B   C |
     /     \                      |/     \|
    D       E                     D       E
    |     /  \                           / 
    A    F    A                         F

Suppose A is the initial state and F is the goal state. We see that a graph representation can be useful for avoiding redundant exploration of the same state.

Definition 2.10  Node Expansion

During a search, a node is expanded by generating its successor nodes based on the possible actions from that state.

A successor node is often called a child node of the node that was expanded, which in turn is its parent node.

Definition 2.11  Generated Node

A generated node is a node that has been added to the tree or graph as the result of the expansion of another node, but not yet expanded itself.

Two important sets of nodes that are useful for many search algorithms are:

  • Frontier: The set of nodes generated but not yet expanded
  • Explored: The set of nodes already expanded

Search algorithms differ along two main axes:

  • Frontier ordering: How nodes are chosen for expansion (queue, stack, priority queue)
  • Cost awareness: Whether edge costs influence expansion order

2.2.2 Memory in Search

Search algorithms differ in their memory usage. In large state spaces, memory can be a limiting factor. Two common strategies for managing memory are:

  • Tree Search: Maintains only the frontier, allowing nodes to be revisited. This is memory-efficient but can lead to cycles and non-termination in infinite state spaces. Here the tree is the fundamental structure.
  • Graph Search: Maintains both the frontier and the explored set, preventing revisits to already expanded states. This ensures termination on finite graphs but uses more memory. Here the graph is the fundamental structure.

2.2.3 Completeness and Optimality

  • Completeness: The algorithm will find a solution if one exists
  • Optimality: The algorithm will find the least-cost solution (with respect to some cost function)

These properties depend on the frontier ordering, cost assumptions, and whether we use tree or graph search.

2.2.4 Data Structures for Search

In search algorithms, nodes have a richer structure than that described in definition 2.8. A search node typically includes:

  • State
  • Parent node (to reconstruct the path)
  • Action taken to reach this state from the parent
  • Path cost from the initial state to this node, denoted \(g(n)\)
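As a minimal sketch of this structure (in Python, with illustrative field names), a search node and a path-reconstruction helper might look like:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                        # the state this node represents
    parent: Optional["Node"] = None   # parent node; None for the root
    action: Any = None                # action taken to reach this state
    path_cost: float = 0.0            # g(n): cost from the initial state

def solution(node: Node) -> list:
    """Reconstruct the path of states from the root to this node
    by following parent pointers, then reverse it."""
    path = []
    while node is not None:
        path.append(node.state)
        node = node.parent
    return list(reversed(path))
```

The parent pointer is what makes path reconstruction possible without storing full paths on the frontier.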

Operations on a frontier are:

  • IS_EMPTY(frontier): Check if the frontier is empty
  • POP(frontier): Remove and return a node from the frontier based on the ordering
  • TOP(frontier): Peek at the next node to be expanded without removing it
  • ADD(node, frontier): Add a node to the frontier in the appropriate position

Algorithms use a variety of data structures for the frontier, with common choices including:

  • Priority Queue: Ordered by an evaluation function (e.g., path cost)
  • FIFO Queue: First-in, first-out order pops the oldest node
  • LIFO Queue (stack): Last-in, first-out order pops the most recently added node
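Python's standard library provides all three orderings directly; the following sketch (with illustrative values) shows the pop behavior of each:

```python
import heapq
from collections import deque

# FIFO queue (BFS frontier): pop the oldest entry
fifo = deque()
fifo.append("A")
fifo.append("B")
oldest = fifo.popleft()             # "A"

# LIFO queue / stack (DFS frontier): pop the most recent entry
lifo = []
lifo.append("A")
lifo.append("B")
newest = lifo.pop()                 # "B"

# Priority queue (UCS frontier): pop the lowest-cost entry
pq = []
heapq.heappush(pq, (5, "B"))        # (path cost, state)
heapq.heappush(pq, (2, "A"))
cost, cheapest = heapq.heappop(pq)  # (2, "A")
```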

2.2.5 Uninformed Search Strategies

Uninformed search strategies do not use any domain-specific knowledge beyond the problem definition. They rely solely on the structure of the state space and the search algorithm's mechanics.

The most common uninformed search algorithms are:

  • Breadth-First Search (BFS)
  • Depth-First Search (DFS)
  • Uniform-Cost Search (UCS), essentially Dijkstra's algorithm directed at a goal state
  • Iterative Deepening Search (IDS)
  • Bidirectional Search

We will cover BFS, DFS, and UCS in this section. For IDS and Bidirectional Search, see Russell and Norvig, 2020, § 3.4.

2.2.6 Breadth-First and Depth-First

The two prototypical uninformed search algorithms are breadth-first search (BFS) and depth-first search (DFS).

Definition 2.12  Breadth-First Search

Breadth-first search (BFS) expands the shallowest unexpanded node first, using a FIFO queue. With finite branching, BFS is complete; when all step costs are equal (unit-cost edges), it is also optimal.

BFS explores all nodes at depth \(d\) before any at depth \(d+1\). Therefore, the first goal found is guaranteed to be at the shallowest depth. Consider the following breadth-first-search pseudocode.

def breadth_first_search(problem):
    frontier = FIFO queue
    frontier.enqueue(Node(problem.initial_state))
    explored = set()  # Tracking expanded states makes this graph search
    while frontier is not empty:
        node = frontier.dequeue()  # FIFO: pop the oldest node
        if problem.goal_test(node.state):
            return solution(node)  # Reconstruct the path to the goal
        explored.add(node.state)
        for action, child_state in problem.successors(node.state):
            if child_state not in explored and child_state not in frontier:
                frontier.enqueue(Node(child_state, parent=node, action=action))
    return failure

Here we have used the function solution(node) to reconstruct the path from the initial state to the goal by following parent pointers, and the method problem.successors(state) to generate (action, successor-state) pairs.
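To make this concrete, here is a runnable version of the same idea, simplified to assume the problem is given as an adjacency list of hashable states (a narrower interface than problem.successors above):

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """BFS over an adjacency-list graph {state: [neighbor, ...]}.
    Returns a shortest path (fewest edges) as a list of states, or None."""
    if start == goal:
        return [start]
    frontier = deque([start])   # FIFO queue of states
    parents = {start: None}     # doubles as the explored/frontier membership test
    while frontier:
        state = frontier.popleft()          # pop the oldest (shallowest) node
        for child in graph.get(state, []):
            if child not in parents:        # neither explored nor on the frontier
                parents[child] = state
                if child == goal:           # goal test at generation time
                    path = [child]
                    while parents[path[-1]] is not None:
                        path.append(parents[path[-1]])
                    return list(reversed(path))
                frontier.append(child)
    return None  # failure
```

This variant goal-tests nodes as they are generated rather than when they are dequeued; for BFS both are correct, and the early test avoids expanding one extra layer.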

Definition 2.13  Depth-First Search

Depth-first search (DFS) expands the deepest unexpanded node first, using a LIFO queue (stack). DFS is memory-efficient but is not optimal and is incomplete on infinite or cyclic state spaces unless depth-limited.

At any time, DFS only needs to store a single path from the root to a leaf, plus unexpanded siblings. This leads to linear memory usage in the depth of the tree.

However, DFS can get stuck exploring deep paths that do not lead to a solution, especially in infinite or cyclic state spaces. Here is the depth-first-search pseudocode.

def depth_first_search(problem):
    frontier = LIFO queue (stack)
    frontier.push(Node(problem.initial_state))
    explored = set()  # Tracking expanded states makes this graph search
    while frontier is not empty:
        node = frontier.pop()  # LIFO: pop the most recently added node
        if problem.goal_test(node.state):
            return solution(node)  # Reconstruct the path to the goal
        explored.add(node.state)
        for action, child_state in problem.successors(node.state):
            if child_state not in explored and child_state not in frontier:
                frontier.push(Node(child_state, parent=node, action=action))
    return failure
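Under the same simplifying assumptions as the BFS sketch (an adjacency list of hashable states), an iterative DFS can be written as:

```python
def depth_first_search(graph, start, goal):
    """Iterative DFS over an adjacency-list graph {state: [neighbor, ...]}.
    Returns the first path found, which need not be the shortest."""
    frontier = [(start, [start])]   # stack of (state, path-so-far)
    explored = set()
    while frontier:
        state, path = frontier.pop()        # LIFO: most recently added
        if state == goal:
            return path
        if state in explored:
            continue                        # already expanded via another path
        explored.add(state)
        for child in graph.get(state, []):
            if child not in explored:
                frontier.append((child, path + [child]))
    return None  # failure
```

Carrying the path on the stack keeps the sketch short; storing parent pointers, as in the BFS version, is more memory-efficient.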

Why DFS is fragile for safety:

  • It can dive indefinitely down a bad branch (e.g., loops), delaying discovery of safe exits.
  • It returns the first solution found, which can be arbitrarily bad.

Graph search mitigates cycles, but DFS still lacks cost awareness and prefers depth over safety.

2.2.7 Uniform-Cost Search

When action costs are non-uniform, BFS is no longer optimal. Uniform-cost search (UCS) generalizes BFS by expanding the node with the lowest path cost \(g(n)\) first, using a priority queue keyed by \(g\).

Definition 2.14  Uniform-Cost Search

Uniform-cost search (UCS) expands frontier nodes in order of increasing path cost \(g(n)\), guaranteeing optimality for non-negative edge costs when using graph search with proper duplicate handling.

Key properties (graph search, non-negative costs):

  • Complete if the cost of each action is bounded below by a positive \(\epsilon\)
  • Optimal: returns the least-cost solution
  • Frontier: priority queue ordered by \(g\)
  • Explored set: records the lowest cost at which each state was expanded, so costlier duplicates are skipped

2.2.7.1 UCS Pseudocode (Graph Search)

def uniform_cost_search(problem):
    frontier = priority queue ordered by path cost g
    frontier.insert(Node(problem.initial_state, path_cost=0))
    explored = dict()  # Maps state -> lowest path cost at which it was expanded
    while frontier is not empty:
        node = frontier.pop_lowest_cost()
        if problem.goal_test(node.state):
            return solution(node)  # Reconstruct the path to the goal
        if node.state in explored and explored[node.state] <= node.path_cost:
            continue  # A cheaper (or equal-cost) path was already expanded
        explored[node.state] = node.path_cost  # Record the lowest cost to this state
        for action, child_state, step_cost in problem.successors(node.state):
            child_cost = node.path_cost + step_cost
            frontier.insert_or_update(Node(child_state, path_cost=child_cost,
                                           parent=node, action=action))
    return failure

Here insert_or_update adds the child state to the frontier or updates its cost if a cheaper path is found.
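Standard binary heaps (such as Python's heapq) do not support an efficient in-place update. A common workaround, used in this runnable sketch, is "lazy" insertion: always push the new entry and skip stale, more expensive duplicates when they are popped, which is exactly what the explored check in the pseudocode does.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """UCS over a weighted adjacency list {state: [(step_cost, next_state), ...]}.
    Returns (total cost, path) for a least-cost path, or None.

    Uses lazy duplicate insertion instead of insert_or_update: stale,
    more expensive heap entries are skipped when popped."""
    frontier = [(0, start, [start])]    # (g, state, path); heapq orders by g
    best_g = {}                         # lowest cost at which a state was expanded
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if state in best_g and best_g[state] <= g:
            continue                    # a cheaper path here was already expanded
        best_g[state] = g
        for step_cost, child in graph.get(state, []):
            heapq.heappush(frontier, (g + step_cost, child, path + [child]))
    return None  # failure
```

Lazy insertion can leave duplicate entries on the heap, trading a little memory for a much simpler implementation than a decrease-key operation.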

2.2.7.2 Warehouse Example: Time vs Energy

  • Time-aware UCS: step cost = action duration (move = 1, pick/drop = 2). Finds the fastest route.
  • Energy-aware UCS: step cost = energy use (move with load = 3, without load = 1). Finds the lowest-energy route, possibly longer in distance.

Run both on the same grid to see path differences.
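The two cost models might be sketched as follows. The action names, the State class, and the pick/drop energy cost of 1 are assumptions for illustration, not taken from warehouse_env.py:

```python
class State:
    """Minimal warehouse state (hypothetical): a position plus a load flag."""
    def __init__(self, pos, loaded=False):
        self.pos = pos
        self.loaded = loaded

def cost_time(action, state):
    """Step cost as duration: moves take 1 time unit, pick/drop take 2."""
    return 2 if action in ("pick", "drop") else 1

def cost_energy(action, state):
    """Step cost as energy: moving while loaded costs 3, unloaded costs 1.
    The energy cost of 1 for pick/drop is an assumed value."""
    if action in ("pick", "drop"):
        return 1
    return 3 if state.loaded else 1
```

Passing one function or the other into UCS as the step-cost source is the only change needed to switch objectives.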

2.2.8 Mini-Lab: UCS on Warehouse Maps

Hands-On: Compare UCS Cost Models

Goal: Compare UCS paths under time vs energy costs.

  1. Implement UCS using the pseudocode above with a priority queue (e.g., heapq).
  2. Use the warehouse grid from Chapter 1 (warehouse_env.py), and define two cost functions: cost_time(action, state) and cost_energy(action, state).
  3. Run UCS with each cost function from charging station to a pickup/dropoff pair.
  4. Plot or print the resulting paths and total costs; note differences in chosen routes.
  5. Extension: add a safety penalty near humans, and rerun UCS with weighted costs.

Reflection: How does changing the cost model affect path length, load-carrying distance, and turns? When is the energy-optimal path preferable to the time-optimal one?

2.2.9 Summary

  • Tree search explores paths; graph search reuses state knowledge to avoid revisits and ensure termination on finite graphs.
  • BFS is optimal only for uniform step costs; DFS is memory-light but unsafe for cost or safety-critical tasks.
  • UCS uses a priority queue on path cost to deliver optimal solutions with non-uniform, non-negative costs, making it a practical default for routing with time/energy tradeoffs.

Bibliography

  1. [AI] Russell, Stuart J. and Peter Norvig. Artificial Intelligence: A Modern Approach. 4th ed., Pearson, 2020. http://aima.cs.berkeley.edu/