
3.1  Knowledge-Based Agents and the Limits of Search

In Chapter 2, we developed powerful techniques for finding solutions: search algorithms explore state spaces, heuristics guide them efficiently, and constraint satisfaction handles combinatorial problems. These methods work well when the agent has complete information about the environment and can enumerate possible states.

But what happens when the agent must operate with incomplete information? When it can only perceive local features and must reason about what lies beyond? When the rules of the world are complex and the agent needs to derive new facts from what it already knows?

This chapter introduces knowledge-based agents—agents that maintain an explicit representation of what they know about the world and use logical inference to decide what to do (Russell and Norvig, 2020, ch. 7). This approach complements search: rather than exploring a state space directly, the agent reasons about its knowledge to determine which states are possible, which are dangerous, and which actions are safe.

3.1.1 A Hazardous Warehouse

Consider our warehouse robot navigating a grid. In Chapter 2, we assumed the robot knew the complete layout: obstacle locations, pickup and dropoff points, everything. The search problem was fully specified, and algorithms like A* could find optimal paths.

Now suppose the warehouse has hazards the robot cannot see directly and an unknown pickup location. For example:

  • Damaged floor sections that will cause the robot to fall and be destroyed
  • A malfunctioning forklift that moves unpredictably and will collide with any robot in its path
  • A high-value package the robot must retrieve but whose location is unknown

The robot has sensors, but they only detect local evidence:

  • Adjacent to damaged floor: the robot hears creaking sounds
  • Adjacent to the malfunctioning forklift: the robot detects rumbling
  • At the package pickup location: the robot's scanner detects the package beacon

The robot cannot see into adjacent squares. It must infer what is there based on its percepts and its knowledge of how the world works.

This is fundamentally different from search. The robot doesn't know the true state of the world—it only knows what it has perceived. It must maintain beliefs about possible world states and update them as new evidence arrives.
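
To make "maintain beliefs about possible world states" concrete, the sketch below treats a belief state as the set of worlds consistent with every percept received so far. This is a minimal illustration, not the chapter's formal machinery: the 2x2 grid, the helper names, and the encoding of a percept as a (location, creaking) pair are all our own choices.

from itertools import product

# A "world" assigns damaged/intact to every square; a belief state is the
# set of worlds consistent with every percept received so far.
SQUARES = [(r, c) for r in range(2) for c in range(2)]  # tiny 2x2 grid

def all_worlds():
    # Every assignment of damaged (True) / intact (False) to the squares.
    for flags in product([False, True], repeat=len(SQUARES)):
        yield dict(zip(SQUARES, flags))

def consistent(world, percept):
    # percept = (location, creaking); creaking holds exactly when some
    # orthogonally adjacent square has a damaged floor.
    (r, c), creaking = percept
    neighbors = [(r + dr, c + dc) for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]]
    return creaking == any(world.get(n, False) for n in neighbors)

beliefs = [w for w in all_worlds() if consistent(w, ((0, 0), False))]
# A square damaged in no remaining world is provably safe to enter.
safe = [s for s in SQUARES if not any(w[s] for w in beliefs)]
print(safe)  # [(0, 1), (1, 0)]: no creaking at (0,0) clears both neighbors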

3.1.2 Knowledge-Based Agents

A knowledge-based agent uses a knowledge base (KB) to store what it knows and inference to derive new knowledge.

Definition 3.1  Knowledge Base

A knowledge base (KB) is a set of sentences in a formal language that represent facts the agent believes to be true about the world.

The agent interacts with its knowledge base through two operations:

Definition 3.2  TELL and ASK

  • TELL: Add a new sentence to the knowledge base (the agent learns something)
  • ASK: Query the knowledge base to determine what can be inferred (the agent reasons)
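
As a deliberately minimal illustration of this interface, the sketch below stores sentences as opaque strings and answers ASK by literal lookup. A real knowledge base would parse its sentences and run a genuine inference procedure (Section 3.1.4), so that ASK can return conclusions the agent was never explicitly told.

class KnowledgeBase:
    # Minimal TELL/ASK interface (Definition 3.2). Sentences are opaque
    # strings here; real inference derives sentences never told directly.
    def __init__(self):
        self.sentences = set()

    def tell(self, sentence):
        # TELL: the agent learns something new.
        self.sentences.add(sentence)

    def ask(self, query):
        # ASK: placeholder that checks literal membership only.
        return query in self.sentences

kb = KnowledgeBase()
kb.tell("creaking at (2,2)")
print(kb.ask("creaking at (2,2)"))  # True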

The generic knowledge-based agent operates as follows:

  1. Perceive: Receive percepts from sensors
  2. TELL: Add percept information to the KB
  3. ASK: Query the KB for the best action
  4. Act: Execute the chosen action
  5. TELL: Record the action taken
  6. Repeat

This architecture separates what the agent knows (the KB) from how it reasons (the inference mechanism). We can change the domain by changing the KB without modifying the inference engine.
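
One cycle of this loop might look as follows. The sketch assumes an ASK that returns an action rather than the bare True/False of the minimal KnowledgeBase above, and the three sentence-building helpers are passed in as parameters because they are domain-specific and not defined in this chapter.

def kb_agent_step(kb, percept, t, make_percept_sentence, make_action_query,
                  make_action_sentence):
    # One cycle of the generic knowledge-based agent (steps 1-5 above).
    kb.tell(make_percept_sentence(percept, t))  # step 2: record the percept
    action = kb.ask(make_action_query(t))       # step 3: infer the best action
    kb.tell(make_action_sentence(action, t))    # step 5: record the action taken
    return action                               # step 4: the caller executes it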

3.1.3 Declarative vs. Procedural Knowledge

The knowledge-based approach is fundamentally declarative: we tell the agent facts about the world, and it figures out what to do. This contrasts with procedural approaches where we program specific behaviors directly.

Comment 3.1  Declarative Advantage

Declarative knowledge is easier to modify and extend. Adding a new rule (e.g., "radioactive materials also emit a faint glow") requires adding a sentence to the KB, not rewriting the agent's decision logic.
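
With a TELL/ASK interface like the one sketched in Section 3.1.2, such an extension is a single call rather than a change to the agent's code (the rule text here is illustrative):

kb.tell("radioactive materials at L imply a faint glow at L")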

Consider encoding warehouse safety rules:

Procedural approach:

def update_safety_beliefs(current_location):
    # Hand-coded control flow; hear_creaking, adjacent_squares, and the
    # mark_*, visited, and known_safe helpers are assumed domain routines.
    if hear_creaking(current_location):
        for neighbor in adjacent_squares(current_location):
            mark_as_potentially_dangerous(neighbor)
        if all(visited(n) and known_safe(n)
               for n in adjacent_squares(current_location)):
            mark_as_safe(current_location)
        else:
            ...  # and a separate branch for every other percept combination

Declarative approach:

Creaking at location L implies damaged floor adjacent to L.
Damaged floor at location L implies robot destroyed if it enters L.
Robot should not enter locations where it might be destroyed.

The declarative version states what is true about the world. The inference engine derives what to do.
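
Anticipating the formal language developed later in this chapter, the first rule could be written propositionally for a specific square, say (2,2). The symbols \(C_{x,y}\) ("creaking heard at (x,y)") and \(D_{x,y}\) ("damaged floor at (x,y)") are our own shorthand; note that a complete encoding uses a biconditional, which also captures the converse (no creaking means no adjacent damage):

  \(C_{2,2} \Leftrightarrow (D_{1,2} \lor D_{3,2} \lor D_{2,1} \lor D_{2,3})\)

One such sentence per square, together with a rule that a damaged square destroys any robot entering it, replaces the tangle of special cases in the procedural version.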

3.1.4 What Logic Provides

To make the TELL/ASK interface precise, we need a formal language with:

  1. Syntax: Rules for constructing well-formed sentences
  2. Semantics: Rules for determining what sentences mean (when they are true or false)
  3. Inference: Procedures for deriving new sentences from existing ones

Logic provides all three. A logic defines which sentences are grammatical (syntax), which possible worlds make sentences true (semantics), and which conclusions follow from premises (inference).
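
For a quick taste of all three, write \(D_{x,y}\) for "damaged floor at (x,y)". Then \(D_{1,2} \lor D_{2,1}\) is a well-formed sentence (syntax); it is true in exactly those worlds where at least one of the two squares is damaged (semantics); and from \(D_{1,2} \lor D_{2,1}\) together with \(\lnot D_{1,2}\) we can derive \(D_{2,1}\) (inference).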

Definition 3.3  Entailment

A knowledge base KB entails a sentence \(\alpha\), written \(KB \models \alpha\), if \(\alpha\) is true in every possible world where all sentences in KB are true.

Entailment is the semantic notion of "follows from." If KB entails \(\alpha\), then \(\alpha\) must be true whenever the KB is true—it's a logical consequence.
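
Entailment can be checked directly, if expensively, by enumerating worlds. The sketch below does so for two propositional symbols; representing sentences as Python predicates over a world is purely an illustrative convenience, not how logical sentences are normally stored.

from itertools import product

SYMBOLS = ["D12", "D21"]  # "damaged floor at (1,2)" and "at (2,1)"

# The KB: creaking told us some neighbor is damaged, and square (1,2)
# was visited and found intact.
kb = [
    lambda w: w["D12"] or w["D21"],
    lambda w: not w["D12"],
]

def entails(kb, alpha):
    # KB |= alpha iff alpha is true in EVERY world where all of KB is true.
    for values in product([False, True], repeat=len(SYMBOLS)):
        world = dict(zip(SYMBOLS, values))
        if all(sentence(world) for sentence in kb) and not alpha(world):
            return False  # found a KB-world where alpha fails: no entailment
    return True

print(entails(kb, lambda w: w["D21"]))  # True: the KB entails D21
print(entails(kb, lambda w: w["D12"]))  # False: D12 fails in a KB-world

Enumeration like this is exponential in the number of symbols, which is one reason efficient inference procedures matter in practice.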

Definition 3.4  Inference

An inference procedure derives sentences from a knowledge base. A procedure is sound if it only derives sentences that are entailed. A procedure is complete if it can derive every sentence that is entailed.

Soundness ensures we never conclude falsehoods from truths. Completeness ensures we can find every logical consequence. We want both.
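
A quick illustration using the shorthand above: a rule that concluded \(D_{1,2}\) from \(D_{1,2} \lor D_{2,1}\) alone would be unsound, because the premise is also true in worlds where only \(D_{2,1}\) is damaged. Conversely, a procedure that derives nothing at all is trivially sound but maximally incomplete.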

3.1.5 Combining Logic and Search

Knowledge-based reasoning doesn't replace search—it complements it. Consider how the warehouse robot might operate:

  1. Knowledge representation: Encode what the robot knows about hazards, percepts, and world dynamics
  2. Inference: Determine which squares are definitely safe, definitely dangerous, or unknown
  3. Search: Plan a path through the safe squares to reach the goal

The inference step constrains the search problem. Instead of searching the full grid, the robot searches only the squares it has deduced are safe. As it explores and gathers more percepts, it updates its knowledge and may unlock new safe paths.

This integration is powerful: logic handles what is possible given partial information, while search handles how to achieve goals given the resulting constraints.
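
As a sketch of step 3 under these assumptions, the code below runs an ordinary breadth-first search from Chapter 2 but expands only squares that inference has already proved safe; the grid coordinates and the safe set are illustrative.

from collections import deque

def plan_through_safe(start, goal, safe_squares):
    # BFS over grid squares, expanding only squares proved safe by inference.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for nbr in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            if nbr in safe_squares and nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None  # no safe path known yet: explore, perceive, and re-infer

safe = {(0, 0), (0, 1), (0, 2), (1, 2)}  # deduced-safe corridor
print(plan_through_safe((0, 0), (1, 2), safe))  # [(0, 0), (0, 1), (0, 2), (1, 2)]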

3.1.6 Chapter Overview

The remainder of this chapter develops these ideas in detail.

By the end, you will be able to design agents that reason about uncertain, partially observable environments—a crucial capability for real-world AI systems.

Bibliography

  1. [AI] Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. 4th ed. Pearson, 2020. http://aima.cs.berkeley.edu/