Artificial Intelligence Review

Agents (Ch. 1)

see pace

Search and Games (Ch. 3-4-5)

Problem types:

- deterministic (fully observable): by doing action A, I know exactly which state comes next.
- non-observable (sensorless): no idea which state I am in; the agent reasons over sets of possible states.
- non-deterministic: an action can lead to several possible next states.
- unknown state space: the agent must explore "online".

Problem formulation:

- initial state
- actions / successor function (which states each action leads to)
- goal test
- path cost

Search strategies: a strategy is the order in which nodes are picked for expansion. To evaluate one, check:

- completeness: does it always find a solution if one exists?
- time complexity: how many nodes are generated?
- space complexity: how many nodes are held in memory at once?
- optimality: does it always find a least-cost solution?

Measurement terms: (b = max branching factor, d = depth of the least-cost solution, m = max depth of the state space)

Search (uninformed) algorithms:

- Depth First:          expand the deepest unexpanded node (LIFO frontier); backtrack on NULL/VISITED, stop on GOAL.
- Breadth First:        expand the shallowest unexpanded node (FIFO frontier); skip NULL/VISITED, stop on GOAL.
- Iterative Deepening:  depth-limited DFS, re-run with the limit increased (+1) each iteration (lots of repeated work, but memory efficient like DFS).
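Iterative deepening can be sketched as depth-limited DFS re-run with a growing limit. A minimal sketch; the toy graph and goal below are made up for illustration:

```python
# Minimal sketch of iterative deepening on a toy graph (dict of child lists).
# The graph, start, and goal are hypothetical, not from the course material.

def depth_limited_dfs(graph, node, goal, limit):
    """DFS that gives up below the depth limit; returns a path or None."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        result = depth_limited_dfs(graph, child, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening(graph, start, goal, max_depth=50):
    """Run depth-limited DFS with limit 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        path = depth_limited_dfs(graph, start, goal, limit)
        if path is not None:
            return path
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': ['F'], 'F': []}
print(iterative_deepening(graph, 'A', 'F'))  # ['A', 'C', 'E', 'F']
```

Each iteration repeats all the shallower work, but only the current DFS path is in memory at any time.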

- Uniform cost search "cheapest first":
    - keep expanding paths (one at a time) onto the frontier; as you expand:
        - follow the path with the cheapest total cost among those on the frontier.

- Greedy search algorithm "closer to target":
    - keep expanding paths (one at a time) onto the frontier; as you expand:
        - follow the path that looks closest to the goal (lowest heuristic estimate).

- A* search "cheapest & closest first":
    - combines the greedy approach (closer to the goal) AND uniform cost (cheapest path so far)
    - function: f = g + h
    - where:
        - g(path) = path cost so far
        - h(path) = estimated distance from the end of the path to the goal (heuristic function)
        - SO:
            - minimizing g keeps the path cheap.
            - minimizing h keeps the search moving toward the goal.
    - A* is optimal when h is admissible (never overestimates the true remaining cost).
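The f = g + h rule can be sketched with a priority queue keyed on f. The graph, costs, and heuristic values below are a hypothetical toy example:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the frontier node with minimum f = g + h.
    graph: dict mapping node -> list of (neighbor, step_cost).
    h: dict mapping node -> heuristic estimate of distance to the goal."""
    frontier = [(h[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float('inf')):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float('inf')

# Hypothetical toy map; h is admissible (never overestimates).
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 5)], 'B': [('G', 1)]}
h = {'S': 4, 'A': 3, 'B': 1, 'G': 0}
print(a_star(graph, h, 'S', 'G'))  # (['S', 'A', 'B', 'G'], 4)
```

Setting h to 0 everywhere turns this into uniform cost search; dropping g from f turns it into greedy best-first.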



Search algorithms classified:

Uninformed Search:

Informed Search: 'Heuristic'

Optimization search: 'local search'


see A* berkeley

see Uniform Cost - udacity

search optimality, Greedy best-first, A*, DFS, and BFS:


Heuristic search, hill-climbing, evaluation function:

pace , youtube: optimization algorithms

Search practice:

berkeley search quiz


Minimax algorithms

see pace , udacity minimax , berkeley minimax quiz

alpha-beta algorithm:

norvig pruning in 60sec , evaluation function and alpha-beta , berkeley step-by-step alpha-beta
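Minimax with alpha-beta pruning in a minimal sketch; the two-level tree and the `children`/`value` helpers below are assumptions for illustration:

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    """Minimax with alpha-beta pruning over a game tree.
    children(node) -> list of successors; value(node) -> leaf score."""
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = float('-inf')
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False, children, value))
            alpha = max(alpha, best)
            if alpha >= beta:      # beta cutoff: MIN above would never allow this branch
                break
        return best
    else:
        best = float('inf')
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True, children, value))
            beta = min(beta, best)
            if alpha >= beta:      # alpha cutoff: MAX above already has a better option
                break
        return best

# Hypothetical tiny tree: internal nodes are tuples of children, leaves are scores.
tree = ((3, 5), (2, 9))
children = lambda n: list(n) if isinstance(n, tuple) else []
value = lambda n: n
print(alphabeta(tree, 2, float('-inf'), float('inf'), True, children, value))  # 3
```

In this tree the leaf 9 is never evaluated: once the second MIN node sees 2 (below alpha = 3), the rest of its children are pruned.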

Constraint satisfaction problems (Ch. 6)


Backtracking: when the current choice fails, go back to the previous variable and try its next (not-yet-tried) value.

i.e. assign VAL1 to VAR1, then VAL2 to VAR2, etc.; on failure, go back and assign VAL2 to VAR1, then proceed.

Forward checking:

Keep track of remaining legal values for unassigned variables.
Terminate search when any variable has no legal values.
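The backtracking + forward-checking combination above can be sketched as follows; the map-coloring instance and the helper names are hypothetical:

```python
def backtrack(assignment, domains, variables, constraints):
    """Backtracking search with forward checking.
    domains: dict var -> set of remaining legal values.
    constraints: function (var1, val1, var2, val2) -> bool (consistent?)."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for val in sorted(domains[var]):
        # Forward checking: compute which neighbor values become illegal,
        # and terminate early if some variable would have no legal value left.
        pruned = {}
        ok = True
        for other in variables:
            if other in assignment or other == var:
                continue
            bad = {w for w in domains[other] if not constraints(var, val, other, w)}
            pruned[other] = bad
            if bad == domains[other]:   # domain wipeout
                ok = False
                break
        if ok:
            for other, bad in pruned.items():
                domains[other] -= bad
            result = backtrack({**assignment, var: val}, domains, variables, constraints)
            if result is not None:
                return result
        for other, bad in pruned.items():  # undo pruning before the next value
            domains[other] |= bad
    return None

# Hypothetical 3-region map coloring: adjacent regions need different colors.
adj = {('A', 'B'), ('B', 'A'), ('B', 'C'), ('C', 'B')}
cons = lambda v1, x1, v2, x2: (v1, v2) not in adj or x1 != x2
doms = {v: {'red', 'green'} for v in 'ABC'}
print(backtrack({}, doms, 'ABC', cons))  # {'A': 'green', 'B': 'red', 'C': 'green'}
```

Without forward checking the same search would only notice a dead end when it actually tries to assign the wiped-out variable.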

see: pace , berkeley lec4-5

Logic (Ch. 7-8)

a sentence that is ..




Propositional Logic

see pace

Inference in PL

see pace


Handling uncertainty (Ch. 13-14)

Expected Return:

Joint probability and uncertainty:


see pace

Bayesian network:

see udacity


reference and tutorial on Bayesian network in python


see youtube: normalization trick
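The normalization trick in one line: compute the unnormalized products P(evidence | H) * P(H) for each hypothesis, then divide by their sum. A minimal sketch; the disease/test numbers below are hypothetical, not from the lectures:

```python
# Bayes' rule via the normalization trick (hypothetical numbers).
p_disease = 0.01                       # prior P(D)
p_pos_given_disease = 0.9              # sensitivity P(+|D)
p_pos_given_healthy = 0.2              # false positive rate P(+|~D)

unnorm = {
    'disease': p_pos_given_disease * p_disease,          # 0.009
    'healthy': p_pos_given_healthy * (1 - p_disease),    # 0.198
}
z = sum(unnorm.values())               # the normalizer, equal to P(+)
posterior = {h: p / z for h, p in unnorm.items()}
print(posterior['disease'])  # ~0.0435: a positive test is still mostly a false alarm
```

The trick avoids computing P(+) separately: it falls out as the sum of the unnormalized terms.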


A MUST READ: Unifying Logic and Probability: A New Dawn for AI? (Russell) ppt and paper

Learning (Ch. 18)

Decision Trees

Entropy:
    H(D) = - SUM [ pi * log2(pi) ]
Information Gain:
    G(D,A) = H(D) - SUM [ |Di| / |D| * H(Di) ]
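Both formulas can be checked with a tiny sketch (the toy labels below are made up):

```python
from math import log2

def entropy(labels):
    """H(D) = - sum_i p_i * log2(p_i) over class frequencies in labels."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * log2(c / n) for c in counts.values())

def information_gain(labels, splits):
    """G(D, A) = H(D) - sum_i |Di|/|D| * H(Di),
    where splits are the subsets Di induced by attribute A."""
    n = len(labels)
    return entropy(labels) - sum(len(s) / n * entropy(s) for s in splits)

# Hypothetical toy data: a perfectly separating split recovers all of H(D).
labels = ['yes', 'yes', 'no', 'no']
print(entropy(labels))                                           # 1.0
print(information_gain(labels, [['yes', 'yes'], ['no', 'no']]))  # 1.0
```

A split that leaves each subset pure has zero remaining entropy, so its gain equals the original H(D); a useless split has gain 0.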

see: assignment 6

decision trees at udacity

Machine Learning explained simply:

see: udacity

SVM, Kernel Trick, and more

see: udacity

Practice Q/A:

UC Berkeley