In Chapter 2, we developed powerful techniques for finding solutions: search algorithms explore state spaces, heuristics guide them efficiently, and constraint satisfaction handles combinatorial problems. These methods work well when the agent has complete information about the environment and can enumerate possible states.
But what happens when the agent must operate with incomplete information? When it can only perceive local features and must reason about what lies beyond? When the rules of the world are complex and the agent needs to derive new facts from what it already knows?
This chapter introduces knowledge-based agents—agents that maintain an explicit representation of what they know about the world and use logical inference to decide what to do (Russell and Norvig, 2020, ch. 7). This approach complements search: rather than exploring a state space directly, the agent reasons about its knowledge to determine which states are possible, which are dangerous, and which actions are safe.
Consider our warehouse robot navigating a grid. In Chapter 2, we assumed the robot knew the complete layout: obstacle locations, pickup and dropoff points, everything. The search problem was fully specified, and algorithms like A* could find optimal paths.
Now suppose the warehouse has hazards the robot cannot see directly and an unknown pickup location. For example, some squares may have damaged floors that will destroy the robot if it enters them.
The robot has sensors, but they only detect local evidence: a damaged floor, for instance, produces a creaking sound that can be heard from adjacent squares.
The robot cannot see into adjacent squares. It must infer what is there based on its percepts and its knowledge of how the world works.
This is fundamentally different from search. The robot doesn't know the true state of the world—it only knows what it has perceived. It must maintain beliefs about possible world states and update them as new evidence arrives.
A knowledge-based agent uses a knowledge base (KB) to store what it knows and inference to derive new knowledge.
A knowledge base (KB) is a set of sentences in a formal language that represent facts the agent believes to be true about the world.
The agent interacts with its knowledge base through two operations: TELL, which adds a new sentence to the KB, and ASK, which queries what follows from the KB (for instance, what action to take).
The generic knowledge-based agent operates in a simple cycle: it TELLs the KB what it perceived, ASKs the KB what action to take, TELLs the KB that it chose that action, and then executes the action.
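The sketch below shows this loop in Python. The KnowledgeBase class is a placeholder, and the make_percept_sentence, make_action_query, and make_action_sentence helpers are illustrative stand-ins that encode percepts, queries, and actions as sentences; the inference behind ask is developed later in the chapter.

    class KnowledgeBase:
        def __init__(self):
            self.sentences = []

        def tell(self, sentence):
            # Add a sentence the agent believes to be true.
            self.sentences.append(sentence)

        def ask(self, query):
            # Derive an answer from the stored sentences.
            raise NotImplementedError  # inference comes later in the chapter

    # Illustrative helpers: encode percepts and actions as sentences.
    def make_percept_sentence(percept, t):
        return f"Percept({percept}, {t})"

    def make_action_query(t):
        return f"ActionAt({t})"

    def make_action_sentence(action, t):
        return f"Action({action}, {t})"

    def kb_agent(kb, percept, t):
        kb.tell(make_percept_sentence(percept, t))  # TELL: what I perceive now
        action = kb.ask(make_action_query(t))       # ASK: what should I do?
        kb.tell(make_action_sentence(action, t))    # TELL: what I decided to do
        return action

The time step t lets the KB distinguish percepts and actions that occur at different moments.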
This architecture separates what the agent knows (the KB) from how it reasons (the inference mechanism). We can change the domain by changing the KB without modifying the inference engine.
The knowledge-based approach is fundamentally declarative: we tell the agent facts about the world, and it figures out what to do. This contrasts with procedural approaches where we program specific behaviors directly.
Consider encoding warehouse safety rules:
Procedural approach:
    if hear_creaking(current_location):
        for neighbor in adjacent_squares(current_location):
            mark_as_potentially_dangerous(neighbor)
    if all(visited(n) and is_safe(n) for n in adjacent_squares(current_location)):
        mark_as_safe(current_location)
    else:
        ...  # another hand-coded branch for every new rule about the world
Declarative approach:
Creaking at location L implies damaged floor adjacent to L.
Damaged floor at location L implies robot destroyed if it enters L.
Robot should not enter locations where it might be destroyed.
The declarative version states what is true about the world. The inference engine derives what to do.
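As a rough sketch, reusing the placeholder KnowledgeBase from above with an illustrative string syntax for rules (the chapter's actual sentence language comes later), the declarative version is just three TELLs:

    kb = KnowledgeBase()
    # What is true about the world, stated as sentences, not control flow:
    kb.tell("Creaking(L) implies DamagedFloorAdjacentTo(L)")
    kb.tell("DamagedFloor(L) implies DestroyedIfEnter(L)")
    kb.tell("MightBeDestroyedIn(L) implies DoNotEnter(L)")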
To make the TELL/ASK interface precise, we need a formal language with three components: a syntax that determines which sentences are well formed, a semantics that determines when a sentence is true in a possible world, and an inference mechanism for deriving conclusions from premises.
Logic provides all three. A logic defines which sentences are grammatical (syntax), which possible worlds make sentences true (semantics), and which conclusions follow from premises (inference).
A knowledge base KB entails a sentence \(\alpha\), written \(KB \models \alpha\), if \(\alpha\) is true in every possible world where all sentences in KB are true.
Entailment is the semantic notion of "follows from." If KB entails \(\alpha\), then \(\alpha\) must be true whenever the KB is true—it's a logical consequence.
An inference procedure derives sentences from a knowledge base. A procedure is sound if it only derives sentences that are entailed. A procedure is complete if it can derive every sentence that is entailed.
Soundness ensures we never conclude falsehoods from truths. Completeness ensures we can find every logical consequence. We want both.
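For propositional sentences, entailment can be checked directly from the definition by enumerating possible worlds, a procedure known as model checking; truth-table enumeration of this kind is both sound and complete for propositional logic. In the sketch below, a sentence is represented as a Python function from a world (a dict of truth values) to a Boolean; this representation is illustrative, not the chapter's official one.

    from itertools import product

    def entails(kb_sentences, alpha, symbols):
        # KB entails alpha iff alpha is true in every world where KB is true.
        for values in product([True, False], repeat=len(symbols)):
            world = dict(zip(symbols, values))
            kb_true = all(s(world) for s in kb_sentences)
            if kb_true and not alpha(world):
                return False  # a world satisfies KB but falsifies alpha
        return True

    # Example: from P and (P implies Q), Q follows.
    kb = [lambda w: w["P"], lambda w: (not w["P"]) or w["Q"]]
    alpha = lambda w: w["Q"]
    print(entails(kb, alpha, ["P", "Q"]))  # True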
Knowledge-based reasoning doesn't replace search; it complements it. Consider how the warehouse robot might operate: perceive, infer which squares are provably safe, search for a path through those squares, act, and repeat.
The inference step constrains the search problem. Instead of searching the full grid, the robot searches only the squares it has deduced are safe. As it explores and gathers more percepts, it updates its knowledge and may unlock new safe paths.
This integration is powerful: logic handles what is possible given partial information, while search handles how to achieve goals given the resulting constraints.
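A sketch of this integration follows, assuming the inference step has already produced a set safe_squares of grid cells proved safe; any search algorithm from Chapter 2 would work here, and breadth-first search keeps the example short.

    from collections import deque

    def plan_safe_path(start, goal, safe_squares):
        # Ordinary search, restricted to squares inference has proven safe.
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            x, y = path[-1]
            for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
                if nxt in safe_squares and nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None  # no path through provably safe squares yet

    # As more percepts arrive, inference may enlarge safe_squares,
    # unlocking paths that were previously blocked.
    safe = {(0, 0), (0, 1), (1, 1), (2, 1)}
    print(plan_safe_path((0, 0), (2, 1), safe))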
The remainder of this chapter develops these ideas.
By the end, you will be able to design agents that reason about uncertain, partially observable environments—a crucial capability for real-world AI systems.
Comment 3.1 Declarative Advantage
Declarative knowledge is easier to modify and extend. Adding a new rule (e.g., "radioactive materials also emit a faint glow") requires adding a sentence to the KB, not rewriting the agent's decision logic.
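In the illustrative kb sketch from earlier, that extension is a single extra TELL, with no changes to the agent's code:

    kb.tell("Radioactive(L) implies FaintGlow(L)")  # new rule: just one more sentence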