Goal: Encode the Hazardous Warehouse environment in propositional logic and use logical inference to determine safe squares.
Setup: Consider a simplified \(3 \times 3\) warehouse grid. The robot starts at \((1,1)\). The environment contains:
Use the following propositional symbols:
Tasks:
Deliverables:
Goal: Practice translating natural language statements about the Hazardous Warehouse into first-order logic.
Predicates available:

- \(\text{Damaged}(l)\): Location \(l\) has damaged floor
- \(\text{Forklift}(l)\): The forklift is at location \(l\)
- \(\text{Package}(l)\): The package is at location \(l\)
- \(\text{Safe}(l)\): Location \(l\) is safe to enter
- \(\text{Adjacent}(l_1, l_2)\): Locations \(l_1\) and \(l_2\) are adjacent
- \(\text{Creaking}(l)\): Creaking is perceived at \(l\)
- \(\text{Rumbling}(l)\): Rumbling is perceived at \(l\)
- \(\text{Visited}(l)\): The robot has visited \(l\)
- \(\text{At}(r, l)\): Robot \(r\) is at location \(l\)
- \(\text{Carrying}(r, p)\): Robot \(r\) is carrying package \(p\)
Translate each statement to FOL:
"Every location adjacent to a damaged floor has creaking."
"The robot should never enter an unsafe location."
"If a location has no creaking and no rumbling, all adjacent locations are safe."
"There is exactly one forklift in the warehouse."
"If the robot is carrying the package and the robot is at the exit, the mission is complete."
"A location is dangerous if it has damaged floor or if the forklift is there."
"Every visited location is safe." (This reflects the fact that the robot survived visiting it.)
"If rumbling is heard at every location adjacent to \(L\), then the forklift is at \(L\)."
Bonus: For each translation, identify whether the statement uses universal quantification, existential quantification, or both.
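For reference, here is one worked translation of a statement that is *not* in the list above (an illustrative pattern, not one of the required answers). "Every damaged location is unsafe" becomes:

\[
\forall l.\; \text{Damaged}(l) \Rightarrow \neg\text{Safe}(l)
\]

Note the standard pairing: universal quantifiers typically go with implication (\(\Rightarrow\)), while existential quantifiers go with conjunction (\(\land\)). Writing \(\exists l.\, \text{Damaged}(l) \Rightarrow \ldots\) is a classic translation error, since a conditional under \(\exists\) is trivially satisfied by any undamaged location.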
Goal: Trace through forward chaining and backward chaining algorithms on a small knowledge base.
Knowledge Base:
Facts:

- \(\text{Adjacent}(L_1, L_2)\)
- \(\text{Adjacent}(L_1, L_4)\)
- \(\text{Adjacent}(L_2, L_1)\)
- \(\text{Adjacent}(L_2, L_3)\)
- \(\text{Adjacent}(L_2, L_5)\)
- \(\text{Adjacent}(L_4, L_1)\)
- \(\text{Adjacent}(L_4, L_5)\)
- \(\neg\text{Creaking}(L_1)\)
- \(\text{Creaking}(L_2)\)
- \(\neg\text{Rumbling}(L_1)\)
- \(\neg\text{Rumbling}(L_2)\)
- \(\text{Safe}(L_1)\) (starting square)
Rules:

- R1: \(\forall l.\, \neg\text{Creaking}(l) \Rightarrow \forall l'.\, \text{Adjacent}(l, l') \Rightarrow \neg\text{Damaged}(l')\)
- R2: \(\forall l.\, \neg\text{Rumbling}(l) \Rightarrow \forall l'.\, \text{Adjacent}(l, l') \Rightarrow \neg\text{Forklift}(l')\)
- R3: \(\forall l.\, \neg\text{Damaged}(l) \land \neg\text{Forklift}(l) \Rightarrow \text{Safe}(l)\)
Grid layout (for reference):
L4 -- L5
|     |
L1 -- L2 -- L3
Tasks:
Forward chaining: Starting from the initial facts, trace the forward chaining process.
Backward chaining: Query: Is \(L_5\) safe?
Unification: For each rule application in your traces, explicitly show the substitution (unifier) used.
Comparison: Which approach (forward or backward) examined fewer rules/facts for this query? When would the other approach be preferable?
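Before tracing by hand, it may help to see forward chaining in executable form. The following is a minimal sketch (not the book's implementation): ground facts are tuples, lowercase terms in rules are variables, and negated literals are encoded as positive atoms such as `NotCreaking`. The fact \(\text{Creaking}(L_2)\) is omitted because no rule in this KB uses a positive creaking premise.

```python
# Minimal forward chaining over ground facts (tuples) and Horn-like rules.
# Negated literals are modeled as positive atoms: NotCreaking, NotDamaged, ...
facts = {
    ('Adjacent', 'L1', 'L2'), ('Adjacent', 'L1', 'L4'),
    ('Adjacent', 'L2', 'L1'), ('Adjacent', 'L2', 'L3'), ('Adjacent', 'L2', 'L5'),
    ('Adjacent', 'L4', 'L1'), ('Adjacent', 'L4', 'L5'),
    ('NotCreaking', 'L1'), ('NotRumbling', 'L1'), ('NotRumbling', 'L2'),
    ('Safe', 'L1'),
}

# Rules as (premises, conclusion); lowercase terms are variables.
rules = [
    ([('NotCreaking', 'l'), ('Adjacent', 'l', 'm')], ('NotDamaged', 'm')),   # R1
    ([('NotRumbling', 'l'), ('Adjacent', 'l', 'm')], ('NotForklift', 'm')),  # R2
    ([('NotDamaged', 'l'), ('NotForklift', 'l')], ('Safe', 'l')),            # R3
]

def match(pattern, fact, theta):
    """Extend substitution theta so pattern matches the ground fact, else None."""
    if len(pattern) != len(fact):
        return None
    theta = dict(theta)
    for p, f in zip(pattern, fact):
        if p.islower():                      # lowercase term = variable
            if theta.setdefault(p, f) != f:  # conflicting binding
                return None
        elif p != f:                         # constant mismatch
            return None
    return theta

def all_matches(premises, facts, theta):
    """Yield every substitution satisfying all premises against the facts."""
    if not premises:
        yield theta
        return
    for fact in facts:
        t = match(premises[0], fact, theta)
        if t is not None:
            yield from all_matches(premises[1:], facts, t)

def forward_chain(facts, rules):
    """Repeatedly apply all rules until no new facts are produced."""
    facts = set(facts)
    while True:
        new = set()
        for premises, conclusion in rules:
            for theta in all_matches(premises, facts, {}):
                c = tuple(theta.get(term, term) for term in conclusion)
                if c not in facts:
                    new.add(c)
        if not new:
            return facts
        facts |= new

derived = forward_chain(facts, rules)
# Safe(L2) and Safe(L4) become derivable; Safe(L5) does not, because no
# neighbor of L5 is known creak-free, so NotDamaged(L5) never fires.
```

Each `theta` yielded by `all_matches` is exactly the unifier the tasks ask you to record, e.g. \(\{l/L_1, l'/L_2\}\) for the first application of R1.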
Goal: Follow the walkthrough in Section 3.6: Building a Knowledge-Based Agent to build your own knowledge-based agent for the Hazardous Warehouse using Z3.
Setup: You will need z3-solver (install via pip install z3-solver) and hazardous_warehouse_env.py in your working directory.
Tasks:
Setup and exploration: Create a new Python file. Import Bool, Bools, Or, And, Not, Solver, unsat from z3, and HazardousWarehouseEnv from hazardous_warehouse_env.py. Verify you can:
- Create Boolean variables: `P, Q = Bools('P Q')`
- Create a solver, then add a rule `s.add(P == Q)` and a fact: `s.add(P)`
- Check satisfiability: `s.check()` should return `sat`
- Inspect the model: `s.model()` should show `P = True, Q = True`
- Implement the `z3_entails` function from Section 3.6: Building a Knowledge-Based Agent using push/pop. Verify that after adding `P == Q` and `P`, the solver entails `Q`.

Symbols and physics: Write the Bool variable helper functions (`damaged`, `forklift_at`, `creaking_at`, `rumbling_at`, `safe`) and `build_warehouse_kb()` as described in Section 3.6: Building a Knowledge-Based Agent. After building the solver, verify it is satisfiable (`solver.check()` returns `sat`).
Manual reasoning: Before building the full agent, use your solver to replicate the reasoning from Section 3.2: The Hazardous Warehouse Environment:
- Build the KB with `build_warehouse_kb()`.
- Check that the entailment queries from that walkthrough return `True`.

Agent loop: Implement the full `WarehouseKBAgent` class with the perceive \(\to\) tell \(\to\) ask \(\to\) act cycle described in Section 3.6: Building a Knowledge-Based Agent. Include path planning (BFS through safe squares) and the action conversion logic.
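The path-planning piece can be sketched as a plain breadth-first search over squares the agent has proven safe. `bfs_path` and its argument convention are hypothetical names for illustration, not the template's API:

```python
from collections import deque

def bfs_path(start, goal, safe_squares):
    """Shortest path from start to goal moving only through known-safe squares.

    start, goal: (x, y) tuples; safe_squares: set of (x, y) tuples.
    Returns the path as a list of squares, or None if no safe route exists.
    """
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == goal:
            return path
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if nxt in safe_squares and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # no safe route is known yet
```

The agent would then convert consecutive squares in the returned path into movement actions; when `bfs_path` returns `None`, the agent has no provably safe route and must wait for new percepts or stop.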
Testing: Run your agent on the example layout (using configure_rn_example_layout from hazardous_warehouse_viz.py). Does it retrieve the package and exit successfully? Record the number of steps and total reward.
Reflection: In 2–3 sentences, describe a situation where the agent either gets stuck (cannot find the package) or behaves conservatively (avoids squares that a human would risk). What additional reasoning capability would help?
Deliverables: Your Python file with the agent implementation, plus a brief summary (replace the template's README.md file) of your results and observations from tasks 3, 5, and 6.
Bonus: Extend the agent to use the emergency shutdown device. The agent should reason about the forklift's location and, when it can identify the forklift along a line from its current position, fire the shutdown device before proceeding.
Goal: Follow the walkthrough in Section 3.7: Building a FOL Agent with Z3 to extend the propositional agent from Section 3.6: Building a Knowledge-Based Agent to use quantified first-order logic, and compare the two encodings.
Setup: You will need z3-solver (install via pip install z3-solver), hazardous_warehouse_env.py, and hazardous_warehouse_viz.py in your working directory. You should have a working propositional agent from problem 3.4.
Tasks:
FOL domain setup: Create a new Python file. Import DeclareSort, Function, BoolSort, Const, ForAll, Exists, Distinct from z3 (in addition to the Or, And, Not, Solver, unsat you already know from Section 3.6: Building a Knowledge-Based Agent). Verify you can:
- Declare a sort: `Location = DeclareSort('Location')`
- Declare a predicate: `P = Function('P', Location, BoolSort())`
- Declare a constant: `L1 = Const('L1', Location)`
- Build a quantified formula: `ForAll(L, P(L))` where `L = Const('L', Location)`

Quantified physics rules: Implement `build_warehouse_kb_fol()` as described in Section 3.7: Building a FOL Agent with Z3. This includes:
- A domain closure axiom and a distinctness constraint (`ForAll(L, Or([L == loc[...] for ...]))` and `Distinct`)

Manual reasoning: Replicate the reasoning walkthrough from Section 3.6.14: Manual Walkthrough using the FOL KB:
- Build the FOL KB with `build_warehouse_kb_fol()`.
- Form queries with `self.preds['Creaking'](loc[(1,1)])` instead of `creaking_at(1, 1)`.
- Verify the entailment checks from the walkthrough (each should return `True`).

Full agent: Implement the `WarehouseZ3Agent` class with the same decision loop as Section 3.6: Building a Knowledge-Based Agent. The key difference is using FOL predicates applied to location constants instead of grounded `Bool` variables. Run it on the example layout using `configure_rn_example_layout`. Record the number of steps and total reward. Confirm the results are identical to the propositional agent.
Domain closure investigation (challenge): Remove the domain closure axiom from build_warehouse_kb_fol() and re-run the manual reasoning from task 3. Which entailment queries now fail? Explain why Z3 can construct a satisfying model where no real grid square is damaged, even when creaking is perceived. Restore the axiom when done.
Reflection: In 3–4 sentences, compare the propositional and FOL encodings. Address: (a) How does rule count scale with grid size for each approach? (b) Which encoding is more readable? (c) What is the role of domain closure, and why doesn't the propositional encoding need it?
Deliverables: Your Python file with the FOL agent, the output trace from task 4, and the written comparison from task 6.
Goal: Understand the expressiveness gap between propositional and first-order logic.
Part A: Propositional Encoding Cost
Consider an \(n \times n\) warehouse grid. Count the number of propositional sentences needed to encode:
Express your answers as functions of \(n\). What happens as \(n\) grows to 100?
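As a sanity check on your counting, here is one illustrative grounded instance (assuming the creaking rule from the warehouse physics): for the single square \((2,2)\) of a \(3 \times 3\) grid, the "no creaking implies neighbors undamaged" rule becomes

\[
\neg C_{2,2} \Rightarrow \neg D_{1,2} \land \neg D_{3,2} \land \neg D_{2,1} \land \neg D_{2,3}
\]

One such sentence is needed for every square, so this rule alone contributes on the order of \(n^2\) sentences.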
Part B: FOL Comparison
Write the same rules in first-order logic. How many sentences are needed regardless of \(n\)?
Part C: Translation Challenge
Can the following statement be expressed in propositional logic? If so, how? If not, why not?
"There is exactly one damaged floor section in the entire warehouse."
Write the statement in FOL. Then attempt a propositional encoding for a \(3 \times 3\) grid. How does the complexity scale?
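If helpful, the standard "exactly one" idiom in FOL (a general logic pattern, not specific to this book's notation) combines an existential with a uniqueness condition:

\[
\exists l.\; \text{Damaged}(l) \land \forall m.\; \big(\text{Damaged}(m) \Rightarrow m = l\big)
\]

A propositional version must state "at least one" as a disjunction over all \(N\) squares and "at most one" as \(\neg(D_i \land D_j)\) for every pair of distinct squares, roughly \(\binom{N}{2}\) clauses for \(N\) squares.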
Part D: Practical Implications
A real warehouse might be \(100 \times 100\) with 10,000 locations. Discuss:
Goal: Practice model checking by enumerating interpretations.
Setup: Consider a tiny \(2 \times 2\) warehouse with squares \(A\), \(B\), \(C\), \(D\) arranged as:
C -- D
|    |
A -- B
Use propositional symbols:

- \(D_X\): Damaged floor at square \(X\) (for \(X \in \{A, B, C, D\}\))
- \(C_X\): Creaking at square \(X\)
Knowledge Base:

- \(C_A \Leftrightarrow D_B \lor D_C\) (creaking at A iff damage at adjacent B or C)
- \(C_B \Leftrightarrow D_A \lor D_D\) (creaking at B iff damage at adjacent A or D)
- \(\neg D_A\) (starting square A is safe)
- \(\neg C_A\) (no creaking at A)
- \(C_B\) (creaking at B)
Tasks:
List all propositional symbols in this KB.
Enumerate all interpretations (there are \(2^n\) interpretations for \(n\) symbols; enumerate only the 4 damage symbols, since the creaking symbols are determined by the rules).
Filter models: Which interpretations satisfy all sentences in the KB?
Check entailment: Does the KB entail \(D_D\)? Does it entail \(\neg D_B\)? Does it entail \(D_B \lor D_C\)?
Practical reflection: With 4 damage symbols, we have 16 interpretations. How many interpretations would a \(10 \times 10\) warehouse have? Is model enumeration practical at that scale?
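The enumeration-and-filter procedure above can be mechanized in a few lines. This is a sketch of the method only (run it after attempting the tasks by hand); it derives the creaking symbols from the damage symbols, exactly as task 2 suggests, and checks entailment as "the query holds in every satisfying model."

```python
from itertools import product

squares = ['A', 'B', 'C', 'D']

def satisfies(damage):
    """Check the KB for a damage assignment (dict square -> bool).

    Creaking is derived from the biconditional rules, so the two
    biconditionals hold by construction; we check the remaining facts.
    """
    creak_a = damage['B'] or damage['C']   # C_A <=> D_B v D_C
    creak_b = damage['A'] or damage['D']   # C_B <=> D_A v D_D
    return (not damage['A']) and (not creak_a) and creak_b

# Enumerate all 2^4 interpretations of the damage symbols, keep the models.
models = [dict(zip(squares, bits))
          for bits in product([False, True], repeat=len(squares))
          if satisfies(dict(zip(squares, bits)))]

def entails(query):
    """KB |= query iff query is true in every model of the KB."""
    return all(query(m) for m in models)
```

With `models` in hand, the entailment questions become one-liners, e.g. `entails(lambda m: m['D'])` for \(\text{KB} \models D_D\).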