January 3rd - Back to ML at Stanford

Stanford CS221: AI (Autumn 2021) Notes

Constraint Satisfaction Problems (CSPs) 4 - Dynamic Ordering

Video: Dynamic Ordering

CSP - Dynamic Ordering

Backtrack(x, w, Domains)

  • If x is a complete assignment:
    • Update best and return.
  • Otherwise:
    • Choose unassigned variable (X_i) (using Most Constrained Variable).
    • Order values (Domain_i) of chosen (X_i) (using Least Constrained Value).
    • For each value (v) in that order:
      • Multiply the weight (w) by the factors involving (X_i = v). If the result is zero, stop recursion for that branch.
      • Perform a lookahead, computing pruned domains (Domain') via forward checking.
      • If any pruned domain is empty, stop recursion for that branch; otherwise recurse with the extended assignment.

Strategies:

  • Most Constrained Variable:
    • A variable with fewer consistent values left is more likely to fail fast, pruning the search tree early.
    • Useful when some factors are constraints, allowing early pruning of weight-zero assignments.
  • Least Constrained Value:
    • A value that leaves more options open in neighboring domains is more likely to extend to a full assignment.
    • Useful when all factors are constraints (quickly find weight = 1 or prune at weight = 0).
  • Both Strategies:
    • Require forward checking.
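The procedure above can be sketched for a tiny CSP. The representation (domains as a dict of value lists, binary constraints as predicates keyed by variable pairs) and all names are illustrative, not from the lecture:

```python
def forward_check(domains, var, val, constraints, assignment):
    """Prune neighbors' domains against the tentative assignment var=val."""
    pruned = {v: list(d) for v, d in domains.items()}
    pruned[var] = [val]
    for (a, b), ok in constraints.items():
        if a == var and b not in assignment:
            pruned[b] = [w for w in pruned[b] if ok(val, w)]
        elif b == var and a not in assignment:
            pruned[a] = [w for w in pruned[a] if ok(w, val)]
    return pruned

def backtrack(assignment, domains, constraints):
    if len(assignment) == len(domains):
        return dict(assignment)
    # Most Constrained Variable: pick the unassigned variable with the
    # fewest remaining values.
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(domains[v]))
    # Least Constrained Value: try values that leave the neighbors the
    # most remaining options first.
    def freedom(val):
        pruned = forward_check(domains, var, val, constraints, assignment)
        return sum(len(d) for d in pruned.values())
    for val in sorted(domains[var], key=freedom, reverse=True):
        pruned = forward_check(domains, var, val, constraints, assignment)
        if any(len(d) == 0 for d in pruned.values()):
            continue  # lookahead emptied a domain: prune this branch
        assignment[var] = val
        result = backtrack(assignment, pruned, constraints)
        if result is not None:
            return result
        del assignment[var]
    return None

# Usage: 3-color a triangle so that adjacent nodes get different colors.
neq = lambda x, y: x != y
constraints = {("A", "B"): neq, ("B", "C"): neq, ("A", "C"): neq}
domains = {v: ["red", "green", "blue"] for v in "ABC"}
solution = backtrack({}, domains, constraints)
```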

Constraint Satisfaction Problems (CSPs) 5 - Arc Consistency

Video: Arc Consistency

Ways to Trim Domains More Aggressively in Backtracking

  • Forward Checking: Lookahead for Domains'.
  • Arc Consistency (AC-3):
    • Enforce arc consistency by removing values from (Domain_i) that have no consistent partner in (Domain_j), making (X_i) arc-consistent with respect to (X_j) (e.g., under a constraint that (X_i) and (X_j) must equal 4).
    • Repeatedly enforce arc consistency on all arcs; whenever a domain shrinks, re-check the arcs from its neighbors.
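A minimal AC-3 sketch, assuming arcs are stored explicitly in both directions as predicates keyed by ordered variable pairs; this representation and the sum-to-4 constraint in the usage example are illustrative stand-ins, not the lecture's exact setup:

```python
from collections import deque

def ac3(domains, arcs):
    """arcs: {(xi, xj): ok} with each constraint listed in both directions."""
    queue = deque(arcs)
    while queue:
        xi, xj = queue.popleft()
        ok = arcs[(xi, xj)]
        # Keep only values of X_i with some consistent partner in Domain_j.
        supported = [v for v in domains[xi]
                     if any(ok(v, w) for w in domains[xj])]
        if len(supported) < len(domains[xi]):
            domains[xi] = supported
            if not supported:
                return False  # a domain emptied: the CSP is inconsistent
            # Domain_i shrank, so re-check every arc pointing into X_i.
            queue.extend((a, b) for (a, b) in arcs if b == xi and a != xj)
    return True

# Usage: require X1 + X2 = 4; AC-3 trims values with no compatible partner.
domains = {"X1": [0, 1, 2, 3, 4, 5], "X2": [0, 1, 2]}
arcs = {("X1", "X2"): lambda a, b: a + b == 4,
        ("X2", "X1"): lambda b, a: a + b == 4}
ac3(domains, arcs)
```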

Video: Beam Search

Overview:

  • Backtracking is exhaustive and expensive. Beam search sacrifices correctness (it may miss the best assignment) for efficiency.
  • At each level, extend the current partial assignments and keep only the (K) highest-weight candidates ((K) is the beam size).

Time Complexity:

  • (O(\text{depth} \times K \times b \times \log(K))), where:
    • Depth: Number of variables.
    • K: Beam/candidate size.
    • b: Branching factor.
  • Comparison:
    • (K = 1): Greedy search ((O(nb))).
    • (K = \infty): Breadth-first search ((O(b^n))).

Strategy:

  • Backtracking: Depth-first search.
  • Beam Search: Pruned breadth-first search (best for local factors).
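The pruned breadth-first idea can be sketched as follows; the factor-graph interface (a `weight_fn` scoring each newly assigned variable) is an illustrative simplification:

```python
def beam_search(variables, domains, weight_fn, K):
    """Keep the K highest-weight partial assignments at each level."""
    beam = [({}, 1.0)]  # (partial assignment, weight)
    for var in variables:  # one level per variable, in a fixed order
        candidates = []
        for partial, w in beam:
            for val in domains[var]:
                extended = {**partial, var: val}
                candidates.append((extended, w * weight_fn(extended, var, val)))
        # Prune back down to the K best candidates (the log(K) sort cost).
        beam = sorted(candidates, key=lambda t: t[1], reverse=True)[:K]
    return beam

# Usage: binary chain whose factors prefer adjacent variables to agree.
variables = ["X1", "X2", "X3"]
domains = {v: [0, 1] for v in variables}
def weight_fn(assignment, var, val):
    prev = {"X2": "X1", "X3": "X2"}.get(var)  # chain structure
    if prev is None:
        return 1.0
    return 2.0 if assignment[prev] == val else 1.0
best, w = beam_search(variables, domains, weight_fn, K=2)[0]
```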

Video: Local Search

Overview:

  • Allows exploration by modifying local assignments.
  • ICM (Iterated Conditional Modes):
    • Initialize with a random complete assignment.
    • Iterate over variables, updating to maximize weight.
    • The weight never decreases between iterations (monotone improvement until a fixed point).

Challenges:

  • Finds local optima, not global.
  • Mitigation:
    • Vary multiple nodes or introduce randomness.
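A minimal ICM sketch under an illustrative interface: `weight` scores a complete assignment, and each sweep re-optimizes one variable at a time with the rest held fixed (all names assumed, not from the lecture):

```python
import random

def icm(variables, domains, weight, sweeps=10, seed=0):
    rng = random.Random(seed)
    # Initialize with a random complete assignment.
    assignment = {v: rng.choice(domains[v]) for v in variables}
    for _ in range(sweeps):
        for var in variables:
            # Condition on all other variables; take the best value.
            assignment[var] = max(
                domains[var],
                key=lambda val: weight({**assignment, var: val}))
    return assignment

# Usage: chain factors that prefer adjacent variables to agree.
variables = ["X1", "X2", "X3"]
domains = {v: [0, 1] for v in variables}
def weight(a):
    w = 1.0
    for u, v in [("X1", "X2"), ("X2", "X3")]:
        w *= 2.0 if a[u] == a[v] else 1.0
    return w
result = icm(variables, domains, weight)
```

Restarting from several random initializations (or randomizing the sweep order) is one common way to mitigate the local-optima problem noted above.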

Constraint Search Summary

| Algorithm           | Strategy                    | Optimality  | Time Complexity |
|---------------------|-----------------------------|-------------|-----------------|
| Backtracking search | Extend partial assignments  | Exact       | Exponential     |
| Beam search         | Extend partial assignments  | Approximate | Linear          |
| Local search (ICM)  | Modify complete assignments | Approximate | Linear          |


Markov Networks 1 - Overview

Video: Overview

  • Based on Factor Graphs + Probabilities.
  • Normalize weights to get probability distributions.
  • Applications:
    • Compute marginal probabilities for specific variables.
| CSPs                      | Markov Networks                                 |
|---------------------------|-------------------------------------------------|
| Variables (known/unknown) | Random variables (probabilistic interpretation) |
| Weights                   | Probabilities                                   |
| Maximum weight assignment | Marginal probabilities                          |
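Normalizing weights into a distribution can be shown by brute force (exponential in the number of variables, so only viable for tiny examples; the interface and all names are illustrative):

```python
import itertools

def marginal(variables, domains, weight, query_var):
    """P(query_var = v) by summing normalized weights over all assignments."""
    Z = 0.0
    totals = {val: 0.0 for val in domains[query_var]}
    for values in itertools.product(*(domains[v] for v in variables)):
        a = dict(zip(variables, values))
        w = weight(a)
        Z += w                      # normalization constant
        totals[a[query_var]] += w   # unnormalized marginal
    return {val: t / Z for val, t in totals.items()}

# Usage: two binary variables whose single factor prefers agreement.
variables = ["X1", "X2"]
domains = {v: [0, 1] for v in variables}
weight = lambda a: 3.0 if a["X1"] == a["X2"] else 1.0
p = marginal(variables, domains, weight, "X1")
```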

Markov Networks 2 - Gibbs Sampling

Video: Gibbs Sampling

  • Avoids exponential exact computation by random sampling plus normalization of sample counts.
  • Comparison:
| Iterated Conditional Modes | Gibbs Sampling                        |
|----------------------------|---------------------------------------|
| Maximum weight assignment  | Marginal probabilities                |
| Choose best value          | Sample a value                        |
| Converges to local optimum | Marginals converge to correct answer* |
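The sample-instead-of-argmax idea can be sketched under an illustrative interface where `weight` scores a complete assignment: sample each variable in proportion to the weights, then estimate marginals from counts (all names assumed, not from the lecture):

```python
import random

def gibbs(variables, domains, weight, query_var, iters=5000, seed=0):
    rng = random.Random(seed)
    a = {v: rng.choice(domains[v]) for v in variables}
    counts = {val: 0 for val in domains[query_var]}
    for _ in range(iters):
        for var in variables:
            # Sample var in proportion to the weights (vs. ICM's argmax).
            ws = [weight({**a, var: val}) for val in domains[var]]
            a[var] = rng.choices(domains[var], weights=ws)[0]
        counts[a[query_var]] += 1
    return {val: c / iters for val, c in counts.items()}

# Usage: agreement-preferring pair; the exact marginal is P(X1=0) = 0.5.
variables = ["X1", "X2"]
domains = {v: [0, 1] for v in variables}
weight = lambda a: 3.0 if a["X1"] == a["X2"] else 1.0
est = gibbs(variables, domains, weight, "X1")
```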

Bayesian Networks 1 - Overview

Video: Overview

  • Use factor graphs to define probability distributions.
  • Applications:
    • Topic modeling.
    • Vision as inverse graphics.
    • Error correction (e.g., LoRaWAN).
    • DNA matching.
    • Incorporate prior knowledge (e.g., physics, Mendelian inheritance).

Logic 1 - Overview: Logic-Based Models

Video: Overview

Modeling Paradigms:

  1. State-Based Models:
    • Examples: Route finding, game playing.
    • Think in terms of states, actions, and costs.
  2. Variable-Based Models:
    • Examples: Scheduling, medical diagnosis.
    • Think in terms of variables and factors.
  3. Logic-Based Models:
    • Examples: Theorem proving, HW/SW verification.
    • Think in terms of logical formulas and inference rules.

Resources