Many engineering problems are naturally framed as assignment or scheduling tasks: assigning robots to jobs, scheduling manufacturing operations, booking shared lab equipment, or allocating resources to projects. While you could model these as state-space search problems, that approach often obscures the problem structure and makes algorithms inefficient. Constraint Satisfaction Problems (CSPs) provide a more natural and powerful framework.
Consider scheduling three robots to deliver five packages, each with a time window for pickup and dropoff. A search-based approach would explore paths through enormous state spaces—every possible sequence of assignments, pickups, and deliveries. But the core challenge is not how to reach a goal state; it's which assignments satisfy all constraints simultaneously.
CSPs shift focus from paths to assignments: finding values for variables that satisfy all constraints. This declarative approach separates what a solution must satisfy from how it is found, which lets general-purpose solvers exploit problem structure.
Manufacturing engineers use CSP techniques for shop-floor scheduling, maintenance planning, and resource allocation. Understanding CSPs gives you both a modeling tool and access to powerful solvers.
A constraint satisfaction problem (CSP) consists of three components: a set of variables \(X = \{X_1, \dots, X_n\}\); a domain \(D_i\) of possible values for each variable \(X_i\); and a set of constraints \(C\), each specifying an allowable combination of values for some subset of the variables.
A solution is an assignment of values to all variables that satisfies every constraint. A CSP is satisfiable if at least one solution exists.
Suppose you manage a mechanical engineering lab with shared equipment: a 3D printer, a laser cutter, and a materials testing machine. Three project teams need to book equipment for specific time slots today:
Formally: the variables are \(X_1, X_2, X_3\) (one booking per team); the domains are \(D_1 = D_2 = D_3 = \{\text{printer}, \text{laser}, \text{tester}\}\); and the constraints are \(X_i \neq X_j\) for all \(i \neq j\) (no two teams may book the same machine in the same slot).
A valid solution: \(X_1 = \text{printer}\), \(X_2 = \text{laser}\), \(X_3 = \text{tester}\).
This toy example illustrates the CSP components. Real scheduling problems add temporal constraints, resource capacities, and preferences.
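To make the components concrete, here is a minimal sketch of the lab-booking example, enumerating assignments by brute force. The encoding assumes the constraint implied by the valid solution above: no two teams book the same machine.

```python
from itertools import product

# One booking variable per team; all draw from the same equipment domain.
variables = ["X1", "X2", "X3"]
domain = ["printer", "laser", "tester"]

def satisfies(assignment):
    # Assumed constraint: every team gets a distinct machine.
    return len(set(assignment.values())) == len(assignment)

solutions = [
    dict(zip(variables, values))
    for values in product(domain, repeat=len(variables))
    if satisfies(dict(zip(variables, values)))
]
print(len(solutions))  # 6: the 3! permutations of three machines
```

Brute force works here because the space has only \(3^3 = 27\) candidates; the rest of this section is about avoiding such enumeration.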
Constraints come in several flavors: unary constraints restrict a single variable, binary constraints relate pairs of variables, and global (\(n\)-ary) constraints involve an arbitrary number of variables at once.
Any \(n\)-ary constraint can be converted to binary constraints by introducing auxiliary variables. Most CSP solvers focus on binary constraints, but global constraints are crucial for engineering problems because they exploit structure. For instance, an all-different constraint for robot assignments is more efficient than \(O(n^2)\) pairwise inequality checks.
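The efficiency gap is easy to see in code. This sketch contrasts a single global all-different check with its equivalent pairwise binary decomposition:

```python
from itertools import combinations

def all_different(values):
    # Global check: one O(n) pass with a set.
    return len(set(values)) == len(values)

def pairwise_not_equal(values):
    # Equivalent binary decomposition: O(n^2) pairwise checks.
    return all(a != b for a, b in combinations(values, 2))

print(all_different(["r1", "r2", "r3"]))  # True
print(all_different(["r1", "r2", "r1"]))  # False: r1 assigned twice
```

Beyond the cheaper check, a global constraint also enables stronger propagation than the pairwise version, since the solver sees all the variables together.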
CSPs are NP-complete in general, so we often use search with constraint propagation to find solutions efficiently.
The standard CSP algorithm is backtracking search: a depth-first search that assigns variables one at a time and backtracks when a constraint is violated.
Basic backtracking pseudocode:
function BACKTRACK(assignment, csp):
    if assignment is complete:
        return assignment
    var = SELECT-UNASSIGNED-VARIABLE(csp)
    for each value in ORDER-DOMAIN-VALUES(var, csp):
        if value is consistent with assignment:
            add {var = value} to assignment
            result = BACKTRACK(assignment, csp)
            if result ≠ failure:
                return result
            remove {var = value} from assignment
    return failure
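The pseudocode translates directly to Python. This sketch uses a static variable order and a caller-supplied consistency check; heuristics for the two choice points come next.

```python
def backtrack(assignment, variables, domains, consistent):
    """Depth-first backtracking search over a CSP.

    `consistent(var, value, assignment)` must return True when giving
    `var` the value `value` violates no constraint under the partial
    `assignment`. Variables are taken in static order here.
    """
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, consistent)
            if result is not None:
                return result
            del assignment[var]  # undo and try the next value
    return None  # failure

# Equipment-booking example: all machines distinct.
variables = ["X1", "X2", "X3"]
domains = {v: ["printer", "laser", "tester"] for v in variables}
solution = backtrack({}, variables, domains,
                     lambda var, val, a: val not in a.values())
print(solution)  # {'X1': 'printer', 'X2': 'laser', 'X3': 'tester'}
```

Note that the consistency check here only works for all-different constraints; a general solver would look up the constraints that mention `var`.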
Without heuristics, backtracking degenerates to brute-force enumeration. The key is choosing which variable to assign next and which value to try first.
The minimum remaining values (MRV) heuristic selects the variable with the fewest legal values remaining in its domain; it is also called the "most constrained variable" or "fail-first" heuristic.
Intuition: MRV prunes the search tree early by detecting failures fast. If a variable has no legal values, backtrack immediately rather than exploring doomed branches.
The degree heuristic selects the variable involved in the most constraints with unassigned variables. Often used as a tie-breaker when multiple variables have the same MRV.
Intuition: Assign variables that constrain others most, reducing future branching.
The least constraining value (LCV) heuristic first tries the values that rule out the fewest choices for neighboring variables.
Intuition: LCV leaves maximum flexibility for future assignments, increasing the chance of finding a solution without backtracking.
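Both heuristics are small functions once the bookkeeping is in place. This sketch specializes them to the running all-different example; a general solver would consult the actual constraints instead of the hard-coded legality test.

```python
def select_mrv(variables, domains, assignment):
    # MRV under an all-different constraint: a value is legal if no
    # assigned variable already uses it.
    def n_legal(v):
        return sum(1 for val in domains[v] if val not in assignment.values())
    unassigned = [v for v in variables if v not in assignment]
    return min(unassigned, key=n_legal)

def order_lcv(var, domains, assignment, neighbors):
    # LCV: prefer values that appear in the fewest unassigned
    # neighbours' domains (each shared value is a choice ruled out).
    def ruled_out(value):
        return sum(value in domains[n]
                   for n in neighbors[var] if n not in assignment)
    return sorted(domains[var], key=ruled_out)
```

Note the asymmetry: MRV is pessimistic (pick the variable most likely to fail), while LCV is optimistic (pick the value least likely to cause failure).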
Rather than waiting for backtracking to detect conflicts, constraint propagation (also called inference) prunes domains in advance by enforcing local consistency.
Forward checking propagates constraints after each assignment: whenever \(X_i = v\), remove inconsistent values from the domains of unassigned variables constrained by \(X_i\).
Forward checking detects some dead ends early, reducing backtracking. For example, if you assign \(X_1 = \text{printer}\) and \(X_1 \neq X_2\), forward checking removes "printer" from \(D_2\).
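A forward-checking step can be sketched as follows, again assuming inequality constraints as in the equipment-booking example. The pruned values are returned so the solver can restore them on backtrack.

```python
def forward_check(var, value, domains, assignment, neighbors):
    """Prune neighbour domains after assigning var = value.

    Assumes inequality constraints (var != neighbour). Returns the list
    of (variable, value) prunings for undoing on backtrack, or None if
    some domain was emptied -- a dead end detected before further search.
    """
    pruned = []
    for n in neighbors[var]:
        if n in assignment:
            continue
        if value in domains[n]:
            domains[n].remove(value)
            pruned.append((n, value))
            if not domains[n]:
                return None  # wipe-out: backtrack immediately
    return pruned
```

Returning the prunings rather than copying whole domains keeps the undo step cheap, which matters because forward checking runs at every node of the search tree.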
Arc consistency is a stronger form of propagation. In this context, an arc is a directed pair \((X_i, X_j)\) indicating “check \(X_i\) against \(X_j\)” for a binary constraint. A variable \(X_i\) is arc-consistent with respect to another variable \(X_j\) if for every value in \(D_i\), there exists some value in \(D_j\) that satisfies the binary constraint between them.
The AC-3 algorithm enforces arc consistency by iteratively removing values from domains until all arcs are consistent. It maintains a queue of arcs to check and propagates changes when domains shrink.
Maintaining arc consistency (MAC) interleaves AC-3 with backtracking: after each assignment, run AC-3 to propagate constraints. MAC is more expensive per node than forward checking but often reduces the search tree more dramatically, especially on hard problems.
When problems are large and finding any solution is hard, local search methods can be effective. These algorithms start with a complete assignment (possibly violating constraints) and iteratively improve it.
The min-conflicts heuristic selects a variable involved in a constraint violation and reassigns it to the value that minimizes the number of violated constraints.
Algorithm outline:
function MIN-CONFLICTS(csp, max_steps):
    current = random complete assignment
    for i = 1 to max_steps:
        if current satisfies all constraints:
            return current
        var = random variable from the set of conflicted variables
        value = value for var that minimizes conflicts
        set var = value in current
    return failure
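The outline renders into a few lines of Python. The conflict-counting function is supplied by the caller; here it counts all-different violations, matching the running example.

```python
import random

def min_conflicts(variables, domains, conflicts, max_steps=10_000, seed=0):
    """Min-conflicts local search over complete assignments.

    `conflicts(var, value, assignment)` counts the constraints violated
    by setting var = value, given the rest of the assignment.
    """
    rng = random.Random(seed)
    current = {v: rng.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if conflicts(v, current[v], current) > 0]
        if not conflicted:
            return current
        var = rng.choice(conflicted)
        current[var] = min(domains[var],
                           key=lambda val: conflicts(var, val, current))
    return None  # failure: max_steps exhausted

# All-different conflicts: how many *other* variables share this value.
def n_conflicts(var, value, assignment):
    return sum(1 for v, val in assignment.items()
               if v != var and val == value)

solution = min_conflicts(
    ["X1", "X2", "X3"],
    {v: ["printer", "laser", "tester"] for v in ["X1", "X2", "X3"]},
    n_conflicts)
```

Ties in the `min` are broken by domain order; a production implementation would break them randomly to avoid cycling on plateaus.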
Min-conflicts is surprisingly effective for large scheduling problems and can find solutions in linear time on average for many problem classes (Russell and Norvig, 2020, §6.4). It is commonly used in real-world resource allocation systems.
Tradeoffs: Local search can get stuck in local minima and offers no completeness guarantees. For critical systems requiring provable solutions, use systematic search with propagation. For large, over-constrained problems, local search often finds good-enough solutions faster.
The structure of the constraint graph (nodes = variables, edges = constraints) affects solvability. Some special structures enable efficient algorithms:
Tree-structured CSPs: If the constraint graph is a tree (no cycles), the CSP can be solved in \(O(nd^2)\) time with the tree-CSP algorithm: choose a root, order the variables so each parent precedes its children, make every parent-to-child arc consistent in reverse order (leaves first), then assign values in forward order from the root; each assignment is then guaranteed to succeed without backtracking.
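A minimal sketch of the tree-CSP algorithm, assuming the caller supplies the topological order, the parent map, and one binary constraint predicate per parent-child edge (simplified here to a single shared predicate):

```python
def solve_tree_csp(order, parent, domains, constraint):
    """Tree-CSP solver in O(n d^2).

    `order` lists variables root-first (each parent before its children),
    `parent[v]` gives each non-root variable's parent, and
    `constraint(parent_value, child_value)` tests the edge constraint.
    """
    domains = {v: list(vals) for v, vals in domains.items()}
    # Backward pass (leaves first): make each parent->child arc consistent
    # by deleting parent values with no supporting child value.
    for child in reversed(order[1:]):
        p = parent[child]
        domains[p] = [pv for pv in domains[p]
                      if any(constraint(pv, cv) for cv in domains[child])]
        if not domains[p]:
            return None  # unsatisfiable
    # Forward pass: every surviving parent value has a consistent child
    # value by construction, so no backtracking is needed.
    assignment = {order[0]: domains[order[0]][0]}
    for child in order[1:]:
        pv = assignment[parent[child]]
        assignment[child] = next(cv for cv in domains[child]
                                 if constraint(pv, cv))
    return assignment
```

The two passes mirror the structure of the proof: backward propagation establishes directional arc consistency, and the forward pass exploits it.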
Nearly tree-structured CSPs: Use cutset conditioning: find a small set of variables (a cycle cutset) whose removal leaves a tree, try every assignment to the cutset, and solve the resulting tree CSP for each.
Warehouse example: If robot assignments have no inter-dependencies (each robot is independent), the constraint graph is a forest (multiple trees), solvable efficiently. Adding capacity constraints or precedence creates cycles, requiring backtracking.
Let's return to the warehouse scenario from Chapter 1 and model a simplified multi-robot scheduling problem as a CSP.
You have three robots and five delivery jobs, each with:
Goal: Assign each job to a robot such that:
This formulation abstracts away path planning (handled by A* from the previous section) and focuses on high-level assignment decisions. The capacity and battery constraints are global constraints that could be decomposed into binary constraints but are more efficient to handle directly.
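Handling a global constraint directly can be as simple as one aggregate check. This sketch shows a capacity constraint over a job-to-robot assignment; the robot names and limits are illustrative, not values from the chapter.

```python
from collections import Counter

def within_capacity(assignment, capacity):
    # Global capacity constraint: no robot exceeds its job limit.
    # `assignment` maps job -> robot; `capacity` maps robot -> max jobs.
    counts = Counter(assignment.values())
    return all(counts[robot] <= capacity.get(robot, 0) for robot in counts)
```

The binary decomposition of this constraint would need auxiliary counting variables; the direct form is both simpler and cheaper to evaluate.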
We'll build a minimal CSP solver with backtracking, MRV, and forward checking. The code will:
This is a didactic implementation; production systems use libraries like Google OR-Tools, Gurobi, or constraint programming languages. But building it from scratch clarifies how CSP solvers exploit structure.
Goal: Implement a backtracking CSP solver and apply it to robot job assignment.
A CSP class with variables, domains, and constraints.

Reflection: How does forward checking reduce backtracking compared to naive backtracking? What happens when the problem becomes over-constrained?
CSPs distinguish between satisfaction, finding any assignment that meets all constraints, and optimization, finding the best such assignment under an objective function.
Standard CSP solvers focus on feasibility. To optimize, you can:
For the robot scheduling problem, a feasible schedule might waste time. An optimal schedule minimizes total delivery time or energy use. We will revisit optimization in later chapters when we cover reinforcement learning and trajectory optimization.
Use CSPs when:
Use state-space search (A*, UCS) when:
Use learning/optimization (later chapters) when:
Many engineering systems combine all three: use CSPs for high-level scheduling, A* for routing, and RL for low-level control. Understanding each tool's strengths helps you architect robust intelligent systems.
Real-world CSP applications require attention to:
CSP solvers are workhorses in manufacturing, logistics, and resource management. They are less glamorous than machine learning but indispensable for engineered systems.
Constraint satisfaction problems provide a declarative framework for assignment and scheduling tasks common in engineering. By separating variables, domains, and constraints, CSPs enable:
You now have the foundations to formulate CSPs, choose appropriate algorithms, and apply them to engineering problems. The next section extends these ideas to multi-agent scenarios, where robot interactions require coordination beyond single-agent assignment.
Levels of Consistency
Beyond arc consistency, stronger notions exist: path consistency (3-consistency), k-consistency, and global consistency. In practice, arc consistency offers the best tradeoff between propagation power and computational cost for most CSPs. Stronger consistency levels are rarely used due to exponential cost.