3.3  Propositional Logic

Propositional logic is the simplest logic sufficient for reasoning in the Hazardous Warehouse and similar environments (Russell and Norvig, 2020, ch. 7). It provides a precise language for stating facts, clear rules for determining truth, and sound inference procedures for deriving conclusions.

3.3.1 Syntax: Building Sentences

Propositional logic builds complex sentences from simple building blocks.

Definition 3.9  Atomic Sentence

An atomic sentence (or proposition) is an indivisible statement that is either true or false. We denote atoms by uppercase letters or descriptive symbols: \(P\), \(Q\), \(R\), \(\mathit{Damaged}_{1,2}\), \(\mathit{Safe}_{2,1}\).

In the Hazardous Warehouse, useful atoms include:

  • \(D_{x,y}\): There is damaged floor at \((x,y)\)
  • \(F_{x,y}\): The forklift is at \((x,y)\)
  • \(P_{x,y}\): The package is at \((x,y)\)
  • \(C_{x,y}\): Creaking is perceived at \((x,y)\)
  • \(R_{x,y}\): Rumbling is perceived at \((x,y)\)
  • \(S_{x,y}\): Square \((x,y)\) is safe (no damaged floor, no forklift)

Definition 3.10  Logical Connectives

Logical connectives combine atomic sentences into compound sentences:

Symbol | Name | English | Example
\(\neg\) | Negation | "not" | \(\neg D_{1,1}\) ("no damaged floor at \((1,1)\)")
\(\land\) | Conjunction | "and" | \(D_{2,1} \land D_{3,1}\) ("damaged at both")
\(\lor\) | Disjunction | "or" | \(D_{2,1} \lor D_{3,1}\) ("damaged at one or both")
\(\Rightarrow\) | Implication | "if...then" | \(C_{1,1} \Rightarrow D_{2,1} \lor D_{1,2}\)
\(\Leftrightarrow\) | Biconditional | "if and only if" | \(S_{x,y} \Leftrightarrow \neg D_{x,y} \land \neg F_{x,y}\)

3.3.1.1 Well-Formed Formulas

Definition 3.11  Well-Formed Formula

A well-formed formula (WFF) is defined recursively:

  1. Every atomic sentence is a WFF
  2. If \(\alpha\) is a WFF, then \(\neg \alpha\) is a WFF
  3. If \(\alpha\) and \(\beta\) are WFFs, then \((\alpha \land \beta)\), \((\alpha \lor \beta)\), \((\alpha \Rightarrow \beta)\), and \((\alpha \Leftrightarrow \beta)\) are WFFs
  4. Nothing else is a WFF

Parentheses clarify grouping. We adopt standard precedence: \(\neg\) binds tightest, then \(\land\), then \(\lor\), then \(\Rightarrow\), then \(\Leftrightarrow\).

Example: The sentence "If creaking at \((2,1)\), then damaged floor at \((1,1)\), \((3,1)\), or \((2,2)\)" is written: \[ C_{2,1} \Rightarrow (D_{1,1} \lor D_{3,1} \lor D_{2,2}) \]

3.3.2 Semantics: Meaning and Truth

Syntax tells us how to write sentences. Semantics tells us what they mean.

Definition 3.12  Interpretation (Model)

An interpretation (or model) assigns a truth value (true or false) to every atomic sentence. Given \(n\) atoms, there are \(2^n\) possible interpretations.

For the Hazardous Warehouse with 16 squares, we might have atoms \(D_{1,1}, D_{1,2}, \ldots, D_{4,4}\) (damaged floor at each square). An interpretation specifies which squares actually have damaged floor. The true interpretation matches the real world; other interpretations represent alternative possibilities.

3.3.2.1 Truth Tables for Connectives

The semantics of connectives are defined by truth tables:

Truth tables for logical connectives

\(\alpha\) | \(\beta\) | \(\neg \alpha\) | \(\alpha \land \beta\) | \(\alpha \lor \beta\) | \(\alpha \Rightarrow \beta\) | \(\alpha \Leftrightarrow \beta\)
T | T | F | T | T | T | T
T | F | F | F | T | F | F
F | T | T | F | T | T | F
F | F | T | F | F | T | T
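
To make the table concrete, here is a minimal Python sketch (purely illustrative; the function names `neg`, `conj`, `disj`, `implies`, and `iff` are our own choices) that encodes each connective as a Boolean function and regenerates the rows above:

```python
# Each connective as a function on Python booleans.
def neg(a):        return not a
def conj(a, b):    return a and b
def disj(a, b):    return a or b
def implies(a, b): return (not a) or b   # material implication
def iff(a, b):     return a == b         # biconditional

def fmt(v):
    return "T" if v else "F"

print("a  b  ~a  a^b  avb  a=>b  a<=>b")
for a in (True, False):
    for b in (True, False):
        row = (a, b, neg(a), conj(a, b), disj(a, b), implies(a, b), iff(a, b))
        print("  ".join(fmt(v) for v in row))
```

Running it prints the same four rows, in the same order, as the table.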

Comment 3.3  Material Implication

The implication \(\alpha \Rightarrow \beta\) is false only when \(\alpha\) is true and \(\beta\) is false. When \(\alpha\) is false, the implication is vacuously true. This matches "if damaged floor, then robot destroyed": the rule isn't violated when there's no damaged floor. The rule only makes a claim about what happens when the condition is met, so whatever happens when the condition is not met does not affect the truth of the implication.

3.3.2.2 Evaluating Compound Sentences

Given an interpretation, we evaluate compound sentences bottom-up:

Example: Let interpretation \(I\) assign: \(D_{2,1} = T\), \(D_{3,1} = F\), \(D_{2,2} = F\).

Evaluate \(D_{2,1} \lor D_{3,1}\): \[D_{2,1} \lor D_{3,1} = T \lor F = T\]

Evaluate \(\neg D_{2,2} \land (D_{2,1} \lor D_{3,1})\): \[\neg D_{2,2} \land (D_{2,1} \lor D_{3,1}) = \neg F \land T = T \land T = T\]
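
The bottom-up evaluation procedure is easy to mechanize. The sketch below is one possible encoding (nested tuples for formulas, a dictionary for the interpretation; the names are our own, not a standard library):

```python
# Formulas as nested tuples: ('atom', name), ('not', f), ('and', f, g),
# ('or', f, g), ('implies', f, g), ('iff', f, g).

def evaluate(formula, interpretation):
    """Evaluate a formula bottom-up under an interpretation (dict atom -> bool)."""
    op = formula[0]
    if op == 'atom':
        return interpretation[formula[1]]
    if op == 'not':
        return not evaluate(formula[1], interpretation)
    a = evaluate(formula[1], interpretation)
    b = evaluate(formula[2], interpretation)
    return {'and': a and b,
            'or': a or b,
            'implies': (not a) or b,
            'iff': a == b}[op]

# The interpretation I from the example: D_{2,1} = T, D_{3,1} = F, D_{2,2} = F.
I = {'D21': True, 'D31': False, 'D22': False}
f = ('and', ('not', ('atom', 'D22')),
            ('or', ('atom', 'D21'), ('atom', 'D31')))
print(evaluate(f, I))   # True, matching the hand evaluation above
```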

3.3.3 Entailment and Validity

Definition 3.13  Satisfaction

An interpretation \(I\) satisfies a sentence \(\alpha\) if \(\alpha\) evaluates to true under \(I\). We write \(I \models \alpha\). A sentence is satisfiable if some interpretation satisfies it.

Definition 3.14  Entailment

A knowledge base \(\mathit{KB}\) entails sentence \(\alpha\), written \(\mathit{KB} \models \alpha\), if every interpretation that satisfies all sentences in \(\mathit{KB}\) also satisfies \(\alpha\).

Entailment is the key semantic concept: \(\mathit{KB} \models \alpha\) means \(\alpha\) is a logical consequence of \(\mathit{KB}\). If our knowledge base correctly describes the world, then any entailed sentence is guaranteed to be true.

Definition 3.15  Validity

A sentence is valid (a tautology) if it is true in every interpretation. A sentence is unsatisfiable (a contradiction) if it is false in every interpretation.

Examples:

  • \(P \lor \neg P\) is valid (law of excluded middle)
  • \(P \land \neg P\) is unsatisfiable (contradiction)
  • \(P \lor Q\) is satisfiable but not valid

3.3.4 Logical Equivalences

Two sentences are logically equivalent if they have the same truth value in every interpretation. We write \(\alpha \equiv \beta\). Equivalences let us transform sentences into more useful forms without changing their meaning.

Definition 3.16  Standard Logical Equivalences

De Morgan's Laws: \[\neg (\alpha \land \beta) \equiv \neg \alpha \lor \neg \beta\] \[\neg (\alpha \lor \beta) \equiv \neg \alpha \land \neg \beta\]

Double Negation: \[\neg \neg \alpha \equiv \alpha\]

Commutativity: \[\alpha \land \beta \equiv \beta \land \alpha\] \[\alpha \lor \beta \equiv \beta \lor \alpha\]

Associativity: \[(\alpha \land \beta) \land \gamma \equiv \alpha \land (\beta \land \gamma)\] \[(\alpha \lor \beta) \lor \gamma \equiv \alpha \lor (\beta \lor \gamma)\]

Distributivity: \[\alpha \land (\beta \lor \gamma) \equiv (\alpha \land \beta) \lor (\alpha \land \gamma)\] \[\alpha \lor (\beta \land \gamma) \equiv (\alpha \lor \beta) \land (\alpha \lor \gamma)\]

Implication Elimination: \[\alpha \Rightarrow \beta \equiv \neg \alpha \lor \beta\]

Contrapositive: \[\alpha \Rightarrow \beta \equiv \neg \beta \Rightarrow \neg \alpha\]

Biconditional Elimination: \[\alpha \Leftrightarrow \beta \equiv (\alpha \Rightarrow \beta) \land (\beta \Rightarrow \alpha)\]

These equivalences can be verified by constructing truth tables. They are essential tools for manipulating logical expressions during inference.

Example: Using De Morgan's law to simplify negation of a disjunction: \[\neg (D_{2,1} \lor D_{1,2}) \equiv \neg D_{2,1} \land \neg D_{1,2}\]

This transforms "not (damaged at \((2,1)\) or damaged at \((1,2)\))" into "not damaged at \((2,1)\) and not damaged at \((1,2)\)": the same meaning, but in a form that directly tells us neither square has damaged floor.
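
Any of these equivalences can be checked mechanically by enumerating all assignments to the atoms involved. A small sketch (with illustrative names) for the De Morgan instance above:

```python
from itertools import product

def equivalent(f, g, n_atoms):
    """True if f and g have the same truth value under every assignment."""
    return all(f(*vals) == g(*vals)
               for vals in product([True, False], repeat=n_atoms))

lhs = lambda d21, d12: not (d21 or d12)          # not (D21 or D12)
rhs = lambda d21, d12: (not d21) and (not d12)   # (not D21) and (not D12)
print(equivalent(lhs, rhs, 2))                   # True: the equivalence holds
```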

3.3.5 Inference by Model Enumeration

One way to check if \(\mathit{KB} \models \alpha\): enumerate all interpretations, check if \(\mathit{KB}\) is true in each, and verify that \(\alpha\) is true whenever \(\mathit{KB}\) is true.

Definition 3.17  Model Checking

Model checking (or model enumeration) decides entailment by exhaustively testing all interpretations.

For \(n\) propositional symbols, there are \(2^n\) interpretations. Model checking is exponential in the number of symbols.

Example: Does \(\{P \Rightarrow Q, P\} \models Q\)?

Enumerate all interpretations:

\(P\) | \(Q\) | \(P \Rightarrow Q\) | \(P\) | \(\mathit{KB}\) true? | \(Q\)
T | T | T | T | Yes | T
T | F | F | T | No | –
F | T | T | F | No | –
F | F | T | F | No | –

In the only interpretation where \(\mathit{KB}\) is true (row 1), \(Q\) is also true. Therefore \(\{P \Rightarrow Q, P\} \models Q\).

This inference pattern is called modus ponens.
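
Model enumeration translates almost directly into code. The sketch below (a minimal formulation of our own; sentences are predicates over an interpretation dictionary) checks the modus ponens example:

```python
from itertools import product

def entails(kb, alpha, symbols):
    """True if every interpretation satisfying all KB sentences also satisfies alpha."""
    for values in product([True, False], repeat=len(symbols)):
        interp = dict(zip(symbols, values))
        if all(sentence(interp) for sentence in kb):   # a model of the KB
            if not alpha(interp):
                return False                           # counterexample found
    return True

kb = [lambda i: (not i['P']) or i['Q'],   # P => Q
      lambda i: i['P']]                   # P
alpha = lambda i: i['Q']                  # Q
print(entails(kb, alpha, ['P', 'Q']))     # True: {P => Q, P} |= Q
```

The loop visits all \(2^n\) interpretations, which is exactly the exponential cost noted above.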

3.3.6 A Propositional KB for the Hazardous Warehouse

Let's build a knowledge base for a simplified \(3 \times 3\) warehouse. We use symbols:

  • \(D_{x,y}\): Damaged floor at \((x,y)\)
  • \(F_{x,y}\): Forklift at \((x,y)\)
  • \(C_{x,y}\): Creaking perceived at \((x,y)\)
  • \(R_{x,y}\): Rumbling perceived at \((x,y)\)

3.3.6.1 Physics of the Warehouse

First, we encode how percepts relate to hazards. For each square, creaking occurs if and only if at least one adjacent square has damaged floor:

\[ C_{1,1} \Leftrightarrow D_{2,1} \lor D_{1,2} \] (3.1)

\[ C_{2,1} \Leftrightarrow D_{1,1} \lor D_{3,1} \lor D_{2,2} \] (3.2)

\[ C_{2,2} \Leftrightarrow D_{1,2} \lor D_{3,2} \lor D_{2,1} \lor D_{2,3} \] (3.3)

(Similar sentences for all squares and for rumbling/forklift.)
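
Because every square needs its own biconditional, such sentences are usually generated rather than written by hand. A hedged sketch (string output only, with made-up naming conventions) for the creaking rules of an \(n \times n\) grid:

```python
def neighbours(x, y, n):
    """Orthogonally adjacent squares of (x, y) inside an n x n grid."""
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 1 <= x + dx <= n and 1 <= y + dy <= n]

def creaking_rules(n):
    """One biconditional per square: C_{x,y} <=> disjunction of adjacent D's."""
    rules = []
    for x in range(1, n + 1):
        for y in range(1, n + 1):
            adj = " v ".join(f"D{ax},{ay}" for ax, ay in neighbours(x, y, n))
            rules.append(f"C{x},{y} <=> {adj}")
    return rules

for rule in creaking_rules(3):
    print(rule)      # e.g. "C1,1 <=> D2,1 v D1,2" -- one sentence per square
```

The output makes the one-sentence-per-square blow-up visible; we return to this point in the discussion of limitations later in this section.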

Material Biconditional vs Logical Equivalence

Why do the statements above use the biconditional \(\Leftrightarrow\) rather than logical equivalence \(\equiv\)? Hint: the former is a sentence in our language, making a claim about the world (the physics of how creaking relates to damaged floor), while the latter would assert that the two sides have the same truth value in every interpretation, regardless of how the world actually is.

3.3.6.2 Initial Knowledge

The starting square is safe: \[ \neg D_{1,1} \land \neg F_{1,1} \] (3.4)

3.3.6.3 Percepts at \((1,1)\)

Suppose the robot perceives no creaking and no rumbling at \((1,1)\): \[ \neg C_{1,1} \] \[ \neg R_{1,1} \]

3.3.6.4 Deducing Safety

From \(\neg C_{1,1}\) and equation (3.1): the biconditional says \(C_{1,1}\) is true exactly when \(D_{2,1} \lor D_{1,2}\) is, so since \(C_{1,1}\) is false, the right-hand side must be false as well: \[\neg(D_{2,1} \lor D_{1,2})\]

By De Morgan's law: \[\neg(D_{2,1} \lor D_{1,2}) \equiv \neg D_{2,1} \land \neg D_{1,2}\]

Therefore: \[\neg D_{2,1} \land \neg D_{1,2}\]

Similarly, from \(\neg R_{1,1}\) and the corresponding rumbling rule, we derive that the forklift is at neither \((2,1)\) nor \((1,2)\). Combining these with the definition \(S_{x,y} \Leftrightarrow \neg D_{x,y} \land \neg F_{x,y}\): \[S_{2,1} \land S_{1,2}\]

Both adjacent squares are safe. The robot can move to either.
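
The same conclusion can be confirmed by model enumeration over the three atoms involved; the sketch below (illustrative names again) checks that every model of equation (3.1) plus the percept \(\neg C_{1,1}\) makes both \(D_{2,1}\) and \(D_{1,2}\) false:

```python
from itertools import product

def models(kb, symbols):
    """Yield every interpretation that satisfies all sentences in kb."""
    for values in product([True, False], repeat=len(symbols)):
        interp = dict(zip(symbols, values))
        if all(sentence(interp) for sentence in kb):
            yield interp

kb = [lambda i: i['C11'] == (i['D21'] or i['D12']),   # C_{1,1} <=> D_{2,1} v D_{1,2}
      lambda i: not i['C11']]                          # percept: no creaking at (1,1)

print(all(not m['D21'] and not m['D12']
          for m in models(kb, ['C11', 'D21', 'D12'])))   # True
```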

3.3.7 Inference Rules and Proofs

Rather than checking all models, we can derive conclusions by applying inference rulesβ€”patterns that preserve truth.

3.3.7.1 Standard Inference Rules

Definition 3.18  Key Inference Rules

Modus Ponens: From \(\alpha \Rightarrow \beta\) and \(\alpha\), infer \(\beta\).

Modus Tollens: From \(\alpha \Rightarrow \beta\) and \(\neg \beta\), infer \(\neg \alpha\).

And-Elimination: From \(\alpha \land \beta\), infer \(\alpha\) (or \(\beta\)).

And-Introduction: From \(\alpha\) and \(\beta\), infer \(\alpha \land \beta\).

Or-Introduction: From \(\alpha\), infer \(\alpha \lor \beta\).

Disjunctive Syllogism: From \(\alpha \lor \beta\) and \(\neg \beta\), infer \(\alpha\).

Resolution: From \(\alpha \lor \beta\) and \(\neg \beta \lor \gamma\), infer \(\alpha \lor \gamma\).

These rules are sound: they only derive entailed sentences.

3.3.7.2 Example Proof

Given: \(C_{2,1} \Rightarrow (D_{1,1} \lor D_{3,1} \lor D_{2,2})\), \(C_{2,1}\), \(\neg D_{1,1}\), \(\neg D_{2,2}\).

Prove: \(D_{3,1}\).

  1. From \(C_{2,1} \Rightarrow (D_{1,1} \lor D_{3,1} \lor D_{2,2})\) and \(C_{2,1}\), by modus ponens: \(D_{1,1} \lor D_{3,1} \lor D_{2,2}\)

  2. From \(D_{1,1} \lor D_{3,1} \lor D_{2,2}\) and \(\neg D_{1,1}\), by disjunctive syllogism: \(D_{3,1} \lor D_{2,2}\)

  3. From \(D_{3,1} \lor D_{2,2}\) and \(\neg D_{2,2}\), by disjunctive syllogism: \(D_{3,1}\)

We have proven that \((3,1)\) has damaged floor without stepping on it or checking it directly, purely by logical inference from the knowledge base and indirect percepts.

Disjunctive Syllogism is a Special Case of Resolution

Disjunctive syllogism is a special case of resolution if we let \(\gamma = \bot\) (false). From \(\alpha \lor \beta\) and \(\neg \beta \lor \bot\), we infer \(\alpha \lor \bot\), which is logically equivalent to \(\alpha\).

3.3.8 Resolution and Completeness

The inference rules above are useful for hand proofs, but we want a systematic procedure that can mechanically determine whether \(\mathit{KB} \models \alpha\). The resolution rule, when applied systematically, provides exactly this.

3.3.8.1 Conjunctive Normal Form

Resolution works on sentences in a standardized format.

Definition 3.19  Literal

A literal is an atomic sentence or its negation: \(P\) (positive literal) or \(\neg P\) (negative literal).

Definition 3.20  Clause

A clause is a disjunction of literals: \(L_1 \lor L_2 \lor \cdots \lor L_n\). A clause with a single literal is called a unit clause.

Definition 3.21  Conjunctive Normal Form (CNF)

A sentence is in Conjunctive Normal Form if it is a conjunction of clauses: \[(\ell_{1,1} \lor \cdots \lor \ell_{1,k_1}) \land (\ell_{2,1} \lor \cdots \lor \ell_{2,k_2}) \land \cdots \land (\ell_{n,1} \lor \cdots \lor \ell_{n,k_n})\]

Example: \((A \lor \neg B) \land (\neg A \lor C \lor D) \land (B)\)

This sentence has three clauses: \((A \lor \neg B)\), \((\neg A \lor C \lor D)\), and \((B)\). The third is a unit clause.

3.3.8.2 Converting to CNF

Any propositional sentence can be converted to an equivalent CNF sentence using these steps:

  1. Eliminate biconditionals: Replace \(\alpha \Leftrightarrow \beta\) with \((\alpha \Rightarrow \beta) \land (\beta \Rightarrow \alpha)\)
  2. Eliminate implications: Replace \(\alpha \Rightarrow \beta\) with \(\neg \alpha \lor \beta\)
  3. Push negations inward using De Morgan's laws and double negation:
    • \(\neg(\alpha \land \beta) \equiv \neg \alpha \lor \neg \beta\)
    • \(\neg(\alpha \lor \beta) \equiv \neg \alpha \land \neg \beta\)
    • \(\neg \neg \alpha \equiv \alpha\)
  4. Distribute \(\lor\) over \(\land\): \(\alpha \lor (\beta \land \gamma) \equiv (\alpha \lor \beta) \land (\alpha \lor \gamma)\)

Example: Convert \((P \Rightarrow Q) \Rightarrow R\) to CNF.

  1. Eliminate outer implication: \(\neg(P \Rightarrow Q) \lor R\)
  2. Eliminate inner implication: \(\neg(\neg P \lor Q) \lor R\)
  3. Push negation inward (De Morgan): \((\neg\neg P \land \neg Q) \lor R\)
  4. Double negation: \((P \land \neg Q) \lor R\)
  5. Distribute \(\lor\) over \(\land\): \((P \lor R) \land (\neg Q \lor R)\)

Result: two clauses, \((P \lor R)\) and \((\neg Q \lor R)\).
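
The four conversion steps can be implemented as three small recursive passes over formula trees. The following sketch uses the same nested-tuple encoding as the earlier examples (our own choice, not a standard API) and reproduces the result above:

```python
# Formulas: ('atom', name), ('not', f), ('and', f, g), ('or', f, g),
# ('implies', f, g), ('iff', f, g).

def eliminate(f):
    """Steps 1-2: rewrite <=> and => in terms of not/and/or."""
    op = f[0]
    if op == 'atom':
        return f
    if op == 'not':
        return ('not', eliminate(f[1]))
    a, b = eliminate(f[1]), eliminate(f[2])
    if op == 'iff':
        return ('and', ('or', ('not', a), b), ('or', ('not', b), a))
    if op == 'implies':
        return ('or', ('not', a), b)
    return (op, a, b)                                  # and / or

def push_not(f, negated=False):
    """Step 3: push negations down to atoms (De Morgan, double negation)."""
    op = f[0]
    if op == 'atom':
        return ('not', f) if negated else f
    if op == 'not':
        return push_not(f[1], not negated)
    new_op = {'and': 'or', 'or': 'and'}[op] if negated else op
    return (new_op, push_not(f[1], negated), push_not(f[2], negated))

def distribute(f):
    """Step 4: distribute 'or' over 'and'; return a list of clauses (lists of literals)."""
    op = f[0]
    if op in ('atom', 'not'):
        return [[f]]
    if op == 'and':
        return distribute(f[1]) + distribute(f[2])
    left, right = distribute(f[1]), distribute(f[2])   # op == 'or'
    return [lc + rc for lc in left for rc in right]

def to_cnf(f):
    return distribute(push_not(eliminate(f)))

P, Q, R = ('atom', 'P'), ('atom', 'Q'), ('atom', 'R')
print(to_cnf(('implies', ('implies', P, Q), R)))
# [[('atom', 'P'), ('atom', 'R')], [('not', ('atom', 'Q')), ('atom', 'R')]]
# i.e. (P v R) ^ (~Q v R), matching the hand derivation.
```

Note that naive distribution can blow up exponentially; practical systems often use an equisatisfiable (Tseitin-style) translation instead.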

3.3.8.3 The Resolution Rule

Resolution combines two clauses that contain complementary literals (one positive, one negative form of the same atom).

Definition 3.22  Resolution

Given clauses \(C_1 = (\alpha_1 \lor \cdots \lor \alpha_m \lor L)\) and \(C_2 = (\beta_1 \lor \cdots \lor \beta_n \lor \neg L)\), the resolvent is: \[(\alpha_1 \lor \cdots \lor \alpha_m \lor \beta_1 \lor \cdots \lor \beta_n)\]

The complementary literals \(L\) and \(\neg L\) are eliminated; the remaining literals are combined.

Examples:

  • From \((A \lor B)\) and \((\neg B \lor C)\), resolve on \(B\): \((A \lor C)\)
  • From \((P \lor Q \lor R)\) and \((\neg Q)\), resolve on \(Q\): \((P \lor R)\)
  • From \((A)\) and \((\neg A)\), resolve on \(A\): \(()\), the empty clause

The empty clause, written \(()\) or \(\square\), contains no literals. It represents a contradiction (false), since a disjunction with no disjuncts cannot be satisfied.
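
In code, a clause can be represented as a frozenset of literals, each literal a (symbol, sign) pair; the rule then becomes a short function (an illustrative sketch, not a fixed API):

```python
def resolve(c1, c2):
    """All resolvents of two clauses; literals are (symbol, positive?) pairs."""
    resolvents = set()
    for (sym, pos) in c1:
        if (sym, not pos) in c2:                       # complementary literals
            rest = (c1 - {(sym, pos)}) | (c2 - {(sym, not pos)})
            resolvents.add(frozenset(rest))
    return resolvents

A, B, C = ('A', True), ('B', True), ('C', True)
print(resolve(frozenset({A, B}), frozenset({('B', False), C})))
# {frozenset({('A', True), ('C', True)})}   i.e. (A v C)
print(resolve(frozenset({A}), frozenset({('A', False)})))
# {frozenset()}                             the empty clause
```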

3.3.8.4 Resolution Refutation

To prove \(\mathit{KB} \models \alpha\) (the knowledge base entails \(\alpha\)), we use proof by contradiction:

  1. Assume the opposite: \(\mathit{KB} \land \neg \alpha\)
  2. Convert to CNF
  3. Repeatedly apply resolution to generate new clauses
  4. If we derive the empty clause, we have a contradiction; therefore \(\mathit{KB} \models \alpha\)

Theorem 3.1  Resolution Completeness

Resolution is refutation-complete: if \(\mathit{KB} \models \alpha\), then resolution applied to \(\mathit{KB} \land \neg \alpha\) will eventually derive the empty clause.

The term "refutation-complete" means resolution can prove any entailment by refuting its negation. It will not miss valid conclusions.

3.3.8.5 Worked Example

Prove: \(\{P \Rightarrow Q, \; Q \Rightarrow R\} \models P \Rightarrow R\)

Step 1: Add the negation of what we want to prove.

We want to show \(P \Rightarrow R\). Its negation is \(\neg(P \Rightarrow R) \equiv \neg(\neg P \lor R) \equiv P \land \neg R\).

Our clauses come from:

  • \(P \Rightarrow Q \equiv \neg P \lor Q\)
  • \(Q \Rightarrow R \equiv \neg Q \lor R\)
  • \(P \land \neg R\) gives \(P\) and \(\neg R\) (two unit clauses)

Step 2: List all clauses.

  1. \(\neg P \lor Q\)
  2. \(\neg Q \lor R\)
  3. \(P\)
  4. \(\neg R\)

Step 3: Apply resolution.

  5. From (1) and (3), resolve on \(P\): \(Q\)
  6. From (2) and (5), resolve on \(Q\): \(R\)
  7. From (4) and (6), resolve on \(R\): \(()\), the empty clause!

Conclusion: We derived a contradiction, so \(\{P \Rightarrow Q, Q \Rightarrow R\} \models P \Rightarrow R\). This proves the transitivity of implication.
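
A naive refutation procedure simply saturates the clause set until the empty clause appears or nothing new can be derived. The sketch below (self-contained, using the same illustrative literal encoding as before, with no concern for efficiency) mechanizes the proof just given:

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses; literals are (symbol, positive?) pairs."""
    out = set()
    for (sym, pos) in c1:
        if (sym, not pos) in c2:
            out.add(frozenset((c1 - {(sym, pos)}) | (c2 - {(sym, not pos)})))
    return out

def refutes(clauses):
    """True if resolution derives the empty clause (i.e. the set is unsatisfiable)."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            resolvents = resolve(c1, c2)
            if frozenset() in resolvents:
                return True
            new |= resolvents
        if new <= clauses:            # saturated without a contradiction
            return False
        clauses |= new

# KB plus the negated goal, as the four clauses listed in Step 2.
clauses = [frozenset({('P', False), ('Q', True)}),   # ~P v Q
           frozenset({('Q', False), ('R', True)}),   # ~Q v R
           frozenset({('P', True)}),                 # P
           frozenset({('R', False)})]                # ~R
print(refutes(clauses))   # True, so {P => Q, Q => R} |= P => R
```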

3.3.8.6 Resolution for the Warehouse

Consider proving that \((3,1)\) has damaged floor from our earlier example. Our KB contains:

  • \(C_{2,1} \Leftrightarrow (D_{1,1} \lor D_{3,1} \lor D_{2,2})\) – the creaking rule
  • \(C_{2,1}\) – observed creaking
  • \(\neg D_{1,1}\) – the starting square is safe
  • \(\neg D_{2,2}\) – deduced from no creaking at \((1,2)\)

To prove \(D_{3,1}\), we add \(\neg D_{3,1}\) and seek a contradiction. Converting the biconditional to CNF and applying resolution would derive the empty clause, confirming \(D_{3,1}\).

Comment 3.4  Modern SAT Solvers

A SAT solver (short for "satisfiability solver") is a tool that determines whether a given propositional formula can be satisfied by some assignment of truth values to its variables, i.e., whether it has a model. Practical implementations use the DPLL (Davis-Putnam-Logemann-Loveland) or CDCL (Conflict-Driven Clause Learning) algorithms rather than naive resolution.

DPLL Algorithm: A backtracking search that systematically explores assignments of variables. It uses unit propagation and pure literal elimination to simplify the problem before making decisions, which reduces the search space.

CDCL Algorithm: An enhancement of DPLL, CDCL incorporates conflict-driven learning. When a conflict is detected, it analyzes the conflict to learn a new clause that prevents the same conflict from occurring again. This allows the solver to backtrack more intelligently and often leads to faster solutions.

These algorithms add sophisticated heuristics for choosing which variable to branch on, backtracking when stuck, and learning from conflicts. Modern SAT solvers can handle problems with millions of variables, making propositional reasoning practical for hardware verification, planning, and constraint satisfaction.
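
To give a feel for how DPLL works, here is a deliberately minimal sketch with unit propagation only (pure-literal elimination and all CDCL machinery are omitted); it uses DIMACS-style integer literals, where variable \(v\) is the literal \(v\) and its negation is \(-v\):

```python
def dpll(clauses):
    """Return a satisfying assignment {var: bool}, or None if unsatisfiable.
    Clauses are sets of nonzero ints; -v means 'not v' (DIMACS style)."""

    def simplify(clauses, lit):
        """Assume lit is true: drop satisfied clauses, shrink the others."""
        out = []
        for c in clauses:
            if lit in c:
                continue                    # clause already satisfied
            if -lit in c:
                c = c - {-lit}              # this literal is now false
                if not c:
                    return None             # empty clause: conflict
            out.append(c)
        return out

    def solve(clauses, assignment):
        # Unit propagation: a one-literal clause forces that literal.
        while True:
            unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
            if unit is None:
                break
            assignment = {**assignment, abs(unit): unit > 0}
            clauses = simplify(clauses, unit)
            if clauses is None:
                return None
        if not clauses:
            return assignment               # every clause satisfied
        # Decision: branch on a literal of the first remaining clause.
        lit = next(iter(clauses[0]))
        for choice in (lit, -lit):
            reduced = simplify(clauses, choice)
            if reduced is not None:
                result = solve(reduced, {**assignment, abs(choice): choice > 0})
                if result is not None:
                    return result
        return None

    return solve([set(c) for c in clauses], {})

# With P = 1, Q = 2:  {P => Q, P, ~Q} is unsatisfiable; {P => Q, P} is not.
print(dpll([{-1, 2}, {1}, {-2}]))    # None (UNSAT)
print(dpll([{-1, 2}, {1}]))          # {1: True, 2: True}
```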

3.3.9 Limitations of Propositional Logic

Propositional logic is simple and has efficient solvers, but it lacks expressiveness.

3.3.9.1 The Grounding Problem

To state "creaking implies adjacent damaged floor," we must write a separate sentence for every square:

\[C_{1,1} \Leftrightarrow D_{2,1} \lor D_{1,2}\] \[C_{1,2} \Leftrightarrow D_{2,2} \lor D_{1,1} \lor D_{1,3}\] \[C_{2,1} \Leftrightarrow D_{1,1} \lor D_{3,1} \lor D_{2,2}\] \[\vdots\]

For a \(4 \times 4\) warehouse, this is manageable. For a \(100 \times 100\) facility, it becomes impractical. We cannot express the general pattern: "For all locations \(L\), creaking at \(L\) if and only if damaged floor adjacent to \(L\)."

3.3.9.2 No Objects or Relations

Propositional logic has no notion of objects (squares, robots, packages) or relations (adjacent, at, carrying). Every fact requires its own symbol.

We cannot express:

  • "Every robot should avoid damaged floors" (quantification over robots)
  • "If robot R is carrying package P, and R is at location L, then P is at L" (relations and variables)

These limitations motivate first-order logic, developed in Section 3.4: First-Order Logic.

3.3.10 Summary

Propositional logic provides:

  • Syntax: Atoms and connectives build sentences
  • Semantics: Truth tables define meaning; interpretations assign truth values
  • Entailment: \(\mathit{KB} \models \alpha\) when all models of \(\mathit{KB}\) satisfy \(\alpha\)
  • Inference: Model checking is complete but exponential; resolution provides proof-based inference

For the Hazardous Warehouse:

  • We can encode hazard physics as biconditionals
  • Percepts update our \(\mathit{KB}\)
  • Inference reveals which squares are safe

The limitation is scalability: we need one sentence per square. First-order logic overcomes this with variables and quantifiers.

Bibliography

  1. [AI] Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. 4th ed. Pearson, 2020. http://aima.cs.berkeley.edu/