Propositional logic is the simplest logic sufficient for reasoning in the Hazardous Warehouse and similar environments (Russell and Norvig, 2020, ch. 7). It provides a precise language for stating facts, clear rules for determining truth, and sound inference procedures for deriving conclusions.
Propositional logic builds complex sentences from simple building blocks.
An atomic sentence (or proposition) is an indivisible statement that is either true or false. We denote atoms by uppercase letters or descriptive symbols: \(P\), \(Q\), \(R\), \(\mathit{Damaged}_{1,2}\), \(\mathit{Safe}_{2,1}\).
In the Hazardous Warehouse, useful atoms include:

- \(D_{x,y}\): square \((x,y)\) has a damaged floor.
- \(F_{x,y}\): the forklift is at square \((x,y)\).
- \(C_{x,y}\): the robot perceives creaking at \((x,y)\).
- \(R_{x,y}\): the robot perceives rumbling at \((x,y)\).
- \(S_{x,y}\): square \((x,y)\) is safe to enter.
Logical connectives combine atomic sentences into compound sentences:
| Symbol | Name | English | Example |
|---|---|---|---|
| \(\neg\) | Negation | "not" | \(\neg D_{1,1}\) ("no damaged floor at \((1,1)\)") |
| \(\land\) | Conjunction | "and" | \(D_{2,1} \land D_{3,1}\) ("damaged at both") |
| \(\lor\) | Disjunction | "or" | \(D_{2,1} \lor D_{3,1}\) ("damaged at one or both") |
| \(\Rightarrow\) | Implication | "if...then" | \(C_{1,1} \Rightarrow D_{2,1} \lor D_{1,2}\) |
| \(\Leftrightarrow\) | Biconditional | "if and only if" | \(S_{x,y} \Leftrightarrow \neg D_{x,y} \land \neg F_{x,y}\) |
A well-formed formula (WFF) is defined recursively:

- Every atomic sentence is a WFF.
- If \(\alpha\) is a WFF, then \(\neg \alpha\) is a WFF.
- If \(\alpha\) and \(\beta\) are WFFs, then \(\alpha \land \beta\), \(\alpha \lor \beta\), \(\alpha \Rightarrow \beta\), and \(\alpha \Leftrightarrow \beta\) are WFFs.
- Nothing else is a WFF.
Parentheses clarify grouping. We adopt standard precedence: \(\neg\) binds tightest, then \(\land\), then \(\lor\), then \(\Rightarrow\), then \(\Leftrightarrow\).
Example: The sentence "If creaking at \((2,1)\), then damaged floor at \((1,1)\), \((3,1)\), or \((2,2)\)": \[ C_{2,1} \Rightarrow (D_{1,1} \lor D_{3,1} \lor D_{2,2}) \]
Syntax tells us how to write sentences. Semantics tells us what they mean.
An interpretation (or model) assigns a truth value (true or false) to every atomic sentence. Given \(n\) atoms, there are \(2^n\) possible interpretations.
For the Hazardous Warehouse with 16 squares, we might have atoms \(D_{1,1}, D_{1,2}, \ldots, D_{4,4}\) (damaged floor at each square). An interpretation specifies which squares actually have damaged floor. The true interpretation matches the real world; other interpretations represent alternative possibilities.
The semantics of connectives are defined by truth tables:
| \(\alpha\) | \(\beta\) | \(\neg \alpha\) | \(\alpha \land \beta\) | \(\alpha \lor \beta\) | \(\alpha \Rightarrow \beta\) | \(\alpha \Leftrightarrow \beta\) |
|---|---|---|---|---|---|---|
| T | T | F | T | T | T | T |
| T | F | F | F | T | F | F |
| F | T | T | F | T | T | F |
| F | F | T | F | F | T | T |
Given an interpretation, we evaluate compound sentences bottom-up: first look up the truth value of each atom in the interpretation, then apply the connectives' truth tables from the innermost subsentences outward.
Example: Let interpretation \(I\) assign: \(D_{2,1} = T\), \(D_{3,1} = F\), \(D_{2,2} = F\).
Evaluate \(D_{2,1} \lor D_{3,1}\): \[D_{2,1} \lor D_{3,1} = T \lor F = T\]
Evaluate \(\neg D_{2,2} \land (D_{2,1} \lor D_{3,1})\): \[\neg D_{2,2} \land (D_{2,1} \lor D_{3,1}) = \neg F \land T = T \land T = T\]
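The bottom-up evaluation above can be sketched as a short recursive procedure. This is an illustrative sketch, not a fixed API: sentences are represented as nested tuples, and the names `D21`, `D31`, `D22` are our shorthand for \(D_{2,1}\), \(D_{3,1}\), \(D_{2,2}\).

```python
# Sentences as nested tuples: ("atom", name), ("not", s), ("and", s1, s2),
# ("or", s1, s2), ("implies", s1, s2), ("iff", s1, s2).

def evaluate(sentence, interpretation):
    """Evaluate a sentence bottom-up under an interpretation (dict: atom -> bool)."""
    op = sentence[0]
    if op == "atom":
        return interpretation[sentence[1]]
    if op == "not":
        return not evaluate(sentence[1], interpretation)
    if op == "and":
        return evaluate(sentence[1], interpretation) and evaluate(sentence[2], interpretation)
    if op == "or":
        return evaluate(sentence[1], interpretation) or evaluate(sentence[2], interpretation)
    if op == "implies":
        return (not evaluate(sentence[1], interpretation)) or evaluate(sentence[2], interpretation)
    if op == "iff":
        return evaluate(sentence[1], interpretation) == evaluate(sentence[2], interpretation)
    raise ValueError(f"unknown connective: {op}")

# The interpretation I from the example: D21 = T, D31 = F, D22 = F.
I = {"D21": True, "D31": False, "D22": False}

disj = ("or", ("atom", "D21"), ("atom", "D31"))
print(evaluate(disj, I))                         # True
compound = ("and", ("not", ("atom", "D22")), disj)
print(evaluate(compound, I))                     # True
```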
An interpretation \(I\) satisfies a sentence \(\alpha\) if \(\alpha\) evaluates to true under \(I\). We write \(I \models \alpha\). A sentence is satisfiable if some interpretation satisfies it.
A knowledge base \(\mathit{KB}\) entails sentence \(\alpha\), written \(\mathit{KB} \models \alpha\), if every interpretation that satisfies all sentences in \(\mathit{KB}\) also satisfies \(\alpha\).
Entailment is the key semantic concept: \(\mathit{KB} \models \alpha\) means \(\alpha\) is a logical consequence of \(\mathit{KB}\). If our knowledge base correctly describes the world, then any entailed sentence is guaranteed to be true.
A sentence is valid (a tautology) if it is true in every interpretation. A sentence is unsatisfiable (a contradiction) if it is false in every interpretation.
Examples:

- \(P \lor \neg P\) is valid: it is true whether \(P\) is true or false.
- \(P \land \neg P\) is unsatisfiable: no interpretation makes both conjuncts true.
- \(D_{2,1} \lor D_{1,2}\) is satisfiable but not valid: some interpretations make it true, others false.
Two sentences are logically equivalent if they have the same truth value in every interpretation. We write \(\alpha \equiv \beta\). Equivalences let us transform sentences into more useful forms without changing their meaning.
De Morgan's Laws: \[\neg (\alpha \land \beta) \equiv \neg \alpha \lor \neg \beta\] \[\neg (\alpha \lor \beta) \equiv \neg \alpha \land \neg \beta\]
Double Negation: \[\neg \neg \alpha \equiv \alpha\]
Commutativity: \[\alpha \land \beta \equiv \beta \land \alpha\] \[\alpha \lor \beta \equiv \beta \lor \alpha\]
Associativity: \[(\alpha \land \beta) \land \gamma \equiv \alpha \land (\beta \land \gamma)\] \[(\alpha \lor \beta) \lor \gamma \equiv \alpha \lor (\beta \lor \gamma)\]
Distributivity: \[\alpha \land (\beta \lor \gamma) \equiv (\alpha \land \beta) \lor (\alpha \land \gamma)\] \[\alpha \lor (\beta \land \gamma) \equiv (\alpha \lor \beta) \land (\alpha \lor \gamma)\]
Implication Elimination: \[\alpha \Rightarrow \beta \equiv \neg \alpha \lor \beta\]
Contrapositive: \[\alpha \Rightarrow \beta \equiv \neg \beta \Rightarrow \neg \alpha\]
Biconditional Elimination: \[\alpha \Leftrightarrow \beta \equiv (\alpha \Rightarrow \beta) \land (\beta \Rightarrow \alpha)\]
These equivalences can be verified by constructing truth tables. They are essential tools for manipulating logical expressions during inference.
Example: Using De Morgan's law to simplify negation of a disjunction: \[\neg (D_{2,1} \lor D_{1,2}) \equiv \neg D_{2,1} \land \neg D_{1,2}\]
This transforms "not (damaged at \((2,1)\) or damaged at \((1,2)\))" into "not damaged at \((2,1)\) and not damaged at \((1,2)\)": the same meaning, but in a form that directly tells us both squares are safe.
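Since each equivalence can be verified by a truth table, a brute-force checker is easy to sketch. In this sketch, sentences are modeled as Python functions over booleans; the helper names are illustrative, not a standard API.

```python
from itertools import product

def equivalent(f, g, n):
    """True iff sentences f and g agree on every assignment of n boolean variables."""
    return all(f(*vals) == g(*vals) for vals in product([False, True], repeat=n))

# Material implication as a boolean function.
implies = lambda a, b: (not a) or b

# De Morgan: not (a and b)  ==  (not a) or (not b)
print(equivalent(lambda a, b: not (a and b),
                 lambda a, b: (not a) or (not b), 2))        # True

# Contrapositive: (a -> b)  ==  (not b -> not a)
print(equivalent(lambda a, b: implies(a, b),
                 lambda a, b: implies(not b, not a), 2))     # True
```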
One way to check if \(\mathit{KB} \models \alpha\): enumerate all interpretations, check if \(\mathit{KB}\) is true in each, and verify that \(\alpha\) is true whenever \(\mathit{KB}\) is true.
Model checking (or model enumeration) decides entailment by exhaustively testing all interpretations.
For \(n\) propositional symbols, there are \(2^n\) interpretations. Model checking is exponential in the number of symbols.
Example: Does \(\{P \Rightarrow Q, P\} \models Q\)?
Enumerate all interpretations:
| \(P\) | \(Q\) | \(P \Rightarrow Q\) | \(P\) | \(\mathit{KB}\) true? | \(Q\) |
|---|---|---|---|---|---|
| T | T | T | T | Yes | T |
| T | F | F | T | No | – |
| F | T | T | F | No | – |
| F | F | T | F | No | – |
In the only interpretation where \(\mathit{KB}\) is true (row 1), \(Q\) is also true. Therefore \(\{P \Rightarrow Q, P\} \models Q\).
This inference pattern is called modus ponens.
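The enumeration in the table can be carried out mechanically. The following is a naive model-checking sketch (representing KB sentences and the query as functions over an interpretation dict is an assumption of this sketch):

```python
from itertools import product

def entails(kb, alpha, atoms):
    """KB |= alpha iff alpha is true in every interpretation satisfying all of KB.
    kb: list of functions; alpha: a function; each maps an interpretation dict to bool."""
    for vals in product([False, True], repeat=len(atoms)):
        I = dict(zip(atoms, vals))
        if all(sentence(I) for sentence in kb) and not alpha(I):
            return False   # counterexample: KB true but alpha false
    return True

# Does {P -> Q, P} |= Q ?  (modus ponens)
kb = [lambda I: (not I["P"]) or I["Q"],   # P -> Q
      lambda I: I["P"]]                   # P
print(entails(kb, lambda I: I["Q"], ["P", "Q"]))   # True
```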
Let's build a knowledge base for a simplified \(3 \times 3\) warehouse. We use symbols:

- \(D_{x,y}\): damaged floor at \((x,y)\);
- \(F_{x,y}\): forklift at \((x,y)\);
- \(C_{x,y}\): creaking perceived at \((x,y)\);
- \(R_{x,y}\): rumbling perceived at \((x,y)\);
- \(S_{x,y}\): square \((x,y)\) is safe, i.e., \(S_{x,y} \Leftrightarrow \neg D_{x,y} \land \neg F_{x,y}\).
First, we encode how percepts relate to hazards. For each square, creaking occurs if and only if at least one adjacent square has damaged floor:
\[ C_{1,1} \Leftrightarrow D_{2,1} \lor D_{1,2} \] (3.1)
\[ C_{2,1} \Leftrightarrow D_{1,1} \lor D_{3,1} \lor D_{2,2} \] (3.2)
\[ C_{2,2} \Leftrightarrow D_{1,2} \lor D_{3,2} \lor D_{2,1} \lor D_{2,3} \] (3.3)
(Similar sentences for all squares and for rumbling/forklift.)
Why in the statements above do we use the biconditional connective \(\Leftrightarrow\) instead of logical equivalence \(\equiv\)? Hint: the former is a sentence in the language itself, a contingent claim about the world (the physics of how creaking relates to damaged floor), while the latter is a metalanguage assertion that two sentences have the same truth value in every interpretation, which would wrongly claim the relationship holds regardless of how the world is.
The starting square is safe: \[ \neg D_{1,1} \land \neg F_{1,1} \] (3.4)
Suppose the robot perceives no creaking and no rumbling at \((1,1)\): \[ \neg C_{1,1} \] \[ \neg R_{1,1} \]
From \(\neg C_{1,1}\) and equation (3.1): \[\neg C_{1,1} \Leftrightarrow \neg(D_{2,1} \lor D_{1,2})\]
By De Morgan's law: \[\neg(D_{2,1} \lor D_{1,2}) \equiv \neg D_{2,1} \land \neg D_{1,2}\]
Therefore: \[\neg D_{2,1} \land \neg D_{1,2}\]
Similarly, from no rumbling, we derive the forklift is not in \((2,1)\) or \((1,2)\). Combined: \[S_{2,1} \land S_{1,2}\]
Both adjacent squares are safe. The robot can move to either.
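The same conclusion can be checked mechanically by enumerating the models of this small KB (a sketch; the atom names `C11`, `D21`, `D12` are shorthand for the subscripted symbols):

```python
from itertools import product

# KB: equation (3.1), C11 <-> (D21 v D12), plus the percept ~C11.
models = []
for vals in product([False, True], repeat=3):
    I = dict(zip(["C11", "D21", "D12"], vals))
    if I["C11"] == (I["D21"] or I["D12"]) and not I["C11"]:
        models.append(I)

# In every model of the KB, both adjacent squares are undamaged.
print(all(not I["D21"] and not I["D12"] for I in models))   # True
```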
Rather than checking all models, we can derive conclusions by applying inference rules: patterns that preserve truth.
Modus Ponens: From \(\alpha \Rightarrow \beta\) and \(\alpha\), infer \(\beta\).
Modus Tollens: From \(\alpha \Rightarrow \beta\) and \(\neg \beta\), infer \(\neg \alpha\).
And-Elimination: From \(\alpha \land \beta\), infer \(\alpha\) (or \(\beta\)).
And-Introduction: From \(\alpha\) and \(\beta\), infer \(\alpha \land \beta\).
Or-Introduction: From \(\alpha\), infer \(\alpha \lor \beta\).
Disjunctive Syllogism: From \(\alpha \lor \beta\) and \(\neg \beta\), infer \(\alpha\).
Resolution: From \(\alpha \lor \beta\) and \(\neg \beta \lor \gamma\), infer \(\alpha \lor \gamma\).
These rules are sound: they only derive entailed sentences.
Given: \(C_{2,1} \Rightarrow (D_{1,1} \lor D_{3,1} \lor D_{2,2})\), \(C_{2,1}\), \(\neg D_{1,1}\), \(\neg D_{2,2}\).
Prove: \(D_{3,1}\).
From \(C_{2,1} \Rightarrow (D_{1,1} \lor D_{3,1} \lor D_{2,2})\) and \(C_{2,1}\), by modus ponens: \(D_{1,1} \lor D_{3,1} \lor D_{2,2}\)
From \(D_{1,1} \lor D_{3,1} \lor D_{2,2}\) and \(\neg D_{1,1}\), by disjunctive syllogism: \(D_{3,1} \lor D_{2,2}\)
From \(D_{3,1} \lor D_{2,2}\) and \(\neg D_{2,2}\), by disjunctive syllogism: \(D_{3,1}\)
We have proven that \((3,1)\) has damaged floor without stepping on it or checking it directly, purely by logical inference from the knowledge base and indirect percepts.
Disjunctive syllogism is a special case of resolution if we let \(\gamma = \bot\) (false). From \(\alpha \lor \beta\) and \(\neg \beta \lor \bot\), we infer \(\alpha \lor \bot\), which is logically equivalent to \(\alpha\).
The inference rules above are useful for hand proofs, but we want a systematic procedure that can mechanically determine whether \(\mathit{KB} \models \alpha\). The resolution rule, when applied systematically, provides exactly this.
Resolution works on sentences in a standardized format.
A literal is an atomic sentence or its negation: \(P\) (positive literal) or \(\neg P\) (negative literal).
A clause is a disjunction of literals: \(L_1 \lor L_2 \lor \cdots \lor L_n\). A clause with a single literal is called a unit clause.
A sentence is in Conjunctive Normal Form if it is a conjunction of clauses: \[(\ell_{1,1} \lor \cdots \lor \ell_{1,k_1}) \land (\ell_{2,1} \lor \cdots \lor \ell_{2,k_2}) \land \cdots \land (\ell_{n,1} \lor \cdots \lor \ell_{n,k_n})\]
Example: \((A \lor \neg B) \land (\neg A \lor C \lor D) \land (B)\)
This sentence has three clauses: \((A \lor \neg B)\), \((\neg A \lor C \lor D)\), and \((B)\). The third is a unit clause.
Any propositional sentence can be converted to an equivalent CNF sentence using these steps:

1. Eliminate biconditionals: replace \(\alpha \Leftrightarrow \beta\) with \((\alpha \Rightarrow \beta) \land (\beta \Rightarrow \alpha)\).
2. Eliminate implications: replace \(\alpha \Rightarrow \beta\) with \(\neg \alpha \lor \beta\).
3. Move \(\neg\) inward using De Morgan's laws and double negation, until it applies only to atoms.
4. Distribute \(\lor\) over \(\land\).
Example: Convert \((P \Rightarrow Q) \Rightarrow R\) to CNF.
\[(P \Rightarrow Q) \Rightarrow R \equiv \neg(\neg P \lor Q) \lor R \equiv (P \land \neg Q) \lor R \equiv (P \lor R) \land (\neg Q \lor R)\]
Result: two clauses, \((P \lor R)\) and \((\neg Q \lor R)\).
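The conversion can be double-checked by a truth table over the three symbols (a sketch, with sentences modeled as boolean functions):

```python
from itertools import product

imp = lambda a, b: (not a) or b

original = lambda p, q, r: imp(imp(p, q), r)          # (P -> Q) -> R
cnf      = lambda p, q, r: (p or r) and ((not q) or r)  # (P v R) ^ (~Q v R)

# Equivalent iff they agree on all 2^3 interpretations.
print(all(original(*v) == cnf(*v)
          for v in product([False, True], repeat=3)))   # True
```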
Resolution combines two clauses that contain complementary literals (one positive, one negative form of the same atom).
Given clauses \(C_1 = (\alpha_1 \lor \cdots \lor \alpha_m \lor L)\) and \(C_2 = (\beta_1 \lor \cdots \lor \beta_n \lor \neg L)\), the resolvent is: \[(\alpha_1 \lor \cdots \lor \alpha_m \lor \beta_1 \lor \cdots \lor \beta_n)\]
The complementary literals \(L\) and \(\neg L\) are eliminated; the remaining literals are combined.
Examples:

- Resolving \(A \lor B\) with \(\neg B \lor C\) on \(B\) yields \(A \lor C\).
- Resolving \(D_{3,1} \lor D_{2,2}\) with the unit clause \(\neg D_{2,2}\) yields the unit clause \(D_{3,1}\).
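Computing a resolvent is a simple set operation if clauses are represented as sets of literals. This is a sketch; the `"~"` prefix marking negation is our own convention, not a standard encoding.

```python
def negate(lit):
    """Complement of a literal: "D22" <-> "~D22"."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2, lit):
    """Resolve clause c1 (containing lit) with c2 (containing its complement)."""
    assert lit in c1 and negate(lit) in c2
    return (c1 - {lit}) | (c2 - {negate(lit)})

# From (D31 v D22) and (~D22), infer the unit clause (D31):
print(resolve(frozenset({"D31", "D22"}), frozenset({"~D22"}), "D22"))
# frozenset({'D31'})
```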
The empty clause, written \(()\) or \(\square\), contains no literals. It represents a contradiction (false), since a disjunction with no disjuncts cannot be satisfied.
To prove \(\mathit{KB} \models \alpha\) (the knowledge base entails \(\alpha\)), we use proof by contradiction:

1. Convert \(\mathit{KB} \land \neg \alpha\) to CNF.
2. Repeatedly apply the resolution rule to pairs of clauses, adding each resolvent to the clause set.
3. If the empty clause is derived, \(\mathit{KB} \land \neg \alpha\) is unsatisfiable, so \(\mathit{KB} \models \alpha\). If no new clauses can be derived, \(\mathit{KB} \not\models \alpha\).
Resolution is refutation-complete: if \(\mathit{KB} \models \alpha\), then resolution applied to \(\mathit{KB} \land \neg \alpha\) will eventually derive the empty clause.
The term "refutation-complete" means resolution can prove any entailment by refuting its negation. It will not miss valid conclusions.
Prove: \(\{P \Rightarrow Q, \; Q \Rightarrow R\} \models P \Rightarrow R\)
Step 1: Add the negation of what we want to prove.
We want to show \(P \Rightarrow R\). Its negation is \(\neg(P \Rightarrow R) \equiv \neg(\neg P \lor R) \equiv P \land \neg R\).
Our clauses come from:

- \(P \Rightarrow Q\), which becomes \(\neg P \lor Q\);
- \(Q \Rightarrow R\), which becomes \(\neg Q \lor R\);
- the negated goal \(P \land \neg R\), which contributes the unit clauses \(P\) and \(\neg R\).

Step 2: List all clauses.

1. \(\neg P \lor Q\)
2. \(\neg Q \lor R\)
3. \(P\)
4. \(\neg R\)

Step 3: Apply resolution.

- Resolve 1 and 3 on \(P\): derive \(Q\) (clause 5).
- Resolve 2 and 5 on \(Q\): derive \(R\) (clause 6).
- Resolve 4 and 6 on \(R\): derive the empty clause \(\square\).
Conclusion: We derived a contradiction, so \(\{P \Rightarrow Q, Q \Rightarrow R\} \models P \Rightarrow R\). This proves the transitivity of implication.
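Applied systematically, this procedure yields a tiny refutation prover. The following is an illustrative sketch (clauses as frozensets of string literals, `"~"` marking negation), not an efficient implementation:

```python
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """All clauses obtainable by resolving c1 with c2 on one complementary pair."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def refutes(clauses):
    """True iff resolution derives the empty clause from the clause set."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:
                    return True        # empty clause: contradiction found
                new.add(frozenset(r))
        if new <= clauses:
            return False               # nothing new derivable: no refutation
        clauses |= new

# Prove {P -> Q, Q -> R} |= P -> R by refuting KB plus the negated goal:
kb_and_neg_goal = [frozenset({"~P", "Q"}),   # P -> Q
                   frozenset({"~Q", "R"}),   # Q -> R
                   frozenset({"P"}),         # from ~(P -> R)
                   frozenset({"~R"})]        # from ~(P -> R)
print(refutes(kb_and_neg_goal))              # True
```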
Consider proving that \((3,1)\) has damaged floor from our earlier example. Our KB contains:

- \(C_{2,1} \Leftrightarrow (D_{1,1} \lor D_{3,1} \lor D_{2,2})\)
- \(C_{2,1}\)
- \(\neg D_{1,1}\)
- \(\neg D_{2,2}\)
To prove \(D_{3,1}\), we add \(\neg D_{3,1}\) and seek a contradiction. Converting the biconditional to CNF and applying resolution would derive the empty clause, confirming \(D_{3,1}\).
A SAT solver (satisfiability solver) determines whether a given propositional formula can be satisfied by some assignment of truth values to its variables (i.e., by some interpretation or model). Practical implementations use the DPLL (Davis-Putnam-Logemann-Loveland) or CDCL (Conflict-Driven Clause Learning) algorithms rather than naive resolution.
DPLL Algorithm: a backtracking search that systematically explores variable assignments. It uses unit propagation and pure literal elimination to simplify the problem before making decisions, which helps reduce the search space.
CDCL Algorithm: An enhancement of DPLL, CDCL incorporates conflict-driven learning. When a conflict is detected, it analyzes the conflict to learn a new clause that prevents the same conflict from occurring again. This allows the solver to backtrack more intelligently and often leads to faster solutions.
These algorithms add sophisticated heuristics for choosing which clauses to resolve, backtracking when stuck, and learning from conflicts. Modern SAT solvers can handle problems with millions of variables, making propositional reasoning practical for hardware verification, planning, and constraint satisfaction.
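A bare-bones DPLL can be sketched in a few dozen lines. This version implements only unit propagation and chronological backtracking (it omits pure literal elimination, branching heuristics, and clause learning); the integer-literal clause encoding is an assumption of the sketch.

```python
# Clauses are sets of integer literals: 3 means variable 3 is true, -3 false.

def simplify(clauses, lit):
    """Assume lit is true: drop satisfied clauses, shrink the rest.
    Returns None on a conflict (some clause becomes empty)."""
    out = []
    for c in clauses:
        if lit in c:
            continue                  # clause satisfied
        reduced = c - {-lit}
        if not reduced:
            return None               # empty clause: contradiction
        out.append(reduced)
    return out

def dpll(clauses, assignment=None):
    """Return a satisfying assignment (dict var -> bool) or None if unsatisfiable."""
    assignment = dict(assignment or {})
    # Unit propagation: a unit clause forces its literal.
    unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
    while unit is not None:
        assignment[abs(unit)] = unit > 0
        clauses = simplify(clauses, unit)
        if clauses is None:
            return None
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
    if not clauses:
        return assignment             # every clause satisfied
    # Branch on a literal of the first clause, trying both polarities.
    lit = next(iter(clauses[0]))
    for choice in (lit, -lit):
        reduced = simplify(clauses, choice)
        if reduced is not None:
            result = dpll(reduced, {**assignment, abs(choice): choice > 0})
            if result is not None:
                return result
    return None

print(dpll([{1, 2}, {-1, 3}]) is not None)   # satisfiable: True
print(dpll([{1}, {-1}]))                     # unsatisfiable: None
```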
Propositional logic is simple and has efficient solvers, but it lacks expressiveness.
To state "creaking implies adjacent damaged floor," we must write a separate sentence for every square:
\[C_{1,1} \Leftrightarrow D_{2,1} \lor D_{1,2}\] \[C_{1,2} \Leftrightarrow D_{2,2} \lor D_{1,1} \lor D_{1,3}\] \[C_{2,1} \Leftrightarrow D_{1,1} \lor D_{3,1} \lor D_{2,2}\] \[\vdots\]
For a \(4 \times 4\) warehouse, this is manageable. For a \(100 \times 100\) facility, it becomes impractical. We cannot express the general pattern: "For all locations \(L\), creaking at \(L\) if and only if damaged floor adjacent to \(L\)."
Propositional logic has no notion of objects (squares, robots, packages) or relations (adjacent, at, carrying). Every fact requires its own symbol.
We cannot express concisely:

- general rules quantified over locations ("for every square, creaking there implies damaged floor adjacent to it");
- existence claims ("some square contains the forklift") without a disjunction over every square;
- relations such as adjacency between arbitrary pairs of objects.
These limitations motivate first-order logic, developed in Section 3.4: First-Order Logic.
Propositional logic provides:

- a precise syntax for writing sentences about the world;
- truth-table semantics that determine when a sentence is true;
- a rigorous definition of entailment;
- sound inference procedures (model checking, inference rules, resolution) for deriving entailed conclusions.
For the Hazardous Warehouse:

- biconditionals such as (3.1)–(3.3) encode how percepts relate to hazards;
- observed percepts enter the knowledge base as unit facts such as \(\neg C_{1,1}\);
- inference then derives which squares are provably safe or provably damaged.
The limitation is scalability: we need one sentence per square. First-order logic overcomes this with variables and quantifiers.
Comment 3.3 Material Implication
The implication \(\alpha \Rightarrow \beta\) is false only when \(\alpha\) is true and \(\beta\) is false. When \(\alpha\) is false, the implication is vacuously true. This matches "if damaged floor, then robot destroyed": the rule isn't violated when there's no damaged floor. That is, the rule only makes a claim about what happens when the condition is met, so whatever happens when the condition is not met does not affect the truth of the implication.