**The final will be held on Tuesday, May 19th from 8am to 11am in 390 Hearst Mining.**

The final will be closed notes, books, laptops, and people. However, you may use up to two note sheets (each two-sided) of your own design (group design is OK but not recommended).

You may also use a **basic**, non-programmable calculator; one is not required, but may be helpful. Your TI-86 is not allowed. Neither is your iPhone.

- Spring 09 Midterm (solutions)
- Fall 08 Final (solutions)
- Fall 08 Midterm (solutions)
- Fall 07 Final
- Fall 07 Midterm
- Fall 06 Final
- Fall 06 Midterm 1 (solutions)
- Spring 06 Midterm (solutions)
- Spring 06 Practice Midterm (solutions)
- Spring 06 Final
- Spring 06 Practice Final (solutions)

You can also look at much older exams from other versions of the class.

Thursday 5/14, 11am-1pm, in 390 Hearst Mining: Nimar will review questions from the Fall 2008 final exam.

Friday 5/15, 11am-1pm, in Soda 310: John will review some important concepts and techniques (MDPs, Bayes nets, and classification).

Office hours:

**Search**

- BFS, DFS, UCS, A*, greedy search
- Search algorithms' strengths and weaknesses
- Properties: completeness, optimality
- Admissibility and consistency for A* heuristics
- Local search
- Be able to phrase search problems and create heuristics
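
For review, here is a minimal A* graph search sketch. The toy graph, the callback names (`neighbors`, `heuristic`), and the zero heuristic are illustrative only, not course code:

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """A* graph search: returns a cheapest path from start to goal, or None.
    With an admissible, consistent heuristic this is optimal; h = 0 reduces to UCS."""
    frontier = [(heuristic(start), 0, start, [start])]   # (f = g + h, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for succ, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):      # keep only the cheapest known route
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + heuristic(succ), g2, succ, path + [succ]))
    return None

# Toy graph: the direct edge A->C costs more than the detour through B.
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1)], 'C': []}
path = a_star('A', 'C', lambda s: graph[s], lambda s: 0)   # h = 0, i.e. plain UCS
```

Swapping the priority `f = g + h` for `g` alone gives UCS, for depth gives BFS on unit costs, and for `h` alone gives greedy search, which is a useful way to keep the four algorithms straight.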

**Constraint satisfaction problems (CSPs)**

- Basic definitions and solution with DFS
- Forward checking, arc consistency
- Conditions under which CSPs are efficiently solvable
- Local search for CSPs
- Be able to phrase CSPs
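
Arc consistency can be reviewed with this compact AC-3 sketch; the X < Y toy constraint and the dict-of-predicates encoding are made up for illustration:

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3: prune domains until every arc is consistent.
    domains: var -> set of values (pruned in place).
    constraints: (X, Y) -> predicate ok(x, y)."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        ok = constraints[(x, y)]
        # Remove values of x that have no supporting value in y's domain.
        pruned = {vx for vx in domains[x]
                  if not any(ok(vx, vy) for vy in domains[y])}
        if pruned:
            domains[x] -= pruned
            if not domains[x]:
                return False                     # some domain emptied: inconsistent
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True

# Toy CSP: X < Y with both domains {1, 2, 3}.
domains = {'X': {1, 2, 3}, 'Y': {1, 2, 3}}
constraints = {('X', 'Y'): lambda x, y: x < y,
               ('Y', 'X'): lambda y, x: x < y}
consistent = ac3(domains, constraints)           # prunes X to {1, 2} and Y to {2, 3}
```

Note that arc consistency alone does not solve the CSP; it only shrinks domains before or during backtracking search.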

**Adversarial search (games)**

- Minimax search
- Alpha-beta pruning
- Expectimax search
- Evaluation function design
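
Minimax with alpha-beta pruning, sketched over a generic game tree; the tree shape and leaf values are illustrative:

```python
def alphabeta(state, alpha, beta, maximizing, children, evaluate):
    """Minimax value of state, skipping branches that cannot change the answer."""
    succ = children(state)
    if not succ:
        return evaluate(state)      # leaf: static evaluation
    if maximizing:
        value = float('-inf')
        for s in succ:
            value = max(value, alphabeta(s, alpha, beta, False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break               # MIN will never let play reach this branch
        return value
    value = float('inf')
    for s in succ:
        value = min(value, alphabeta(s, alpha, beta, True, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break                   # MAX will never let play reach this branch
    return value

# A MAX root over three MIN nodes, three leaves each (values illustrative).
tree = {'root': ['m1', 'm2', 'm3'],
        'm1': ['a', 'b', 'c'], 'm2': ['d', 'e', 'f'], 'm3': ['g', 'h', 'i']}
leaf = {'a': 3, 'b': 12, 'c': 8, 'd': 2, 'e': 4, 'f': 6, 'g': 14, 'h': 5, 'i': 2}
root_value = alphabeta('root', float('-inf'), float('inf'), True,
                       lambda s: tree.get(s, []), lambda s: leaf[s])
```

With these values the second MIN node is cut off after its first leaf (2 is already worse for MAX than the 3 in hand), which is exactly the pruning argument to rehearse for the exam.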

**Markov decision processes (MDPs)**

- The maximum expected utility (MEU) principle
- Reflex agents and policies
- Markov decision process definition
- Reward functions, values, and Q-values
- Bellman Equations
- Value and policy iteration
- Be able to phrase a problem as an MDP
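
The Bellman backup behind value iteration, as a short sketch; the three-state chain MDP and its rewards are invented for illustration:

```python
def value_iteration(states, actions, transitions, reward, gamma=0.9, tol=1e-6):
    """Repeat V(s) <- max_a sum_s' T(s,a,s') [R(s,a,s') + gamma V(s')]
    until the values stop changing."""
    V = {s: 0.0 for s in states}
    while True:
        V2 = {}
        for s in states:
            V2[s] = max((sum(p * (reward(s, a, s2) + gamma * V[s2])
                             for s2, p in transitions(s, a))
                         for a in actions(s)),
                        default=0.0)             # terminal state: no actions, value 0
        if max(abs(V2[s] - V[s]) for s in states) < tol:
            return V2
        V = V2

# Deterministic chain A -> B -> C (terminal); reward 10 for entering C, else 0.
T = {('A', 'go'): [('B', 1.0)], ('B', 'go'): [('C', 1.0)]}
V = value_iteration(['A', 'B', 'C'],
                    lambda s: ['go'] if s != 'C' else [],
                    lambda s, a: T[(s, a)],
                    lambda s, a, s2: 10.0 if s2 == 'C' else 0.0)
```

With gamma = 0.9 the reward is discounted once on the way back to A, so V(B) = 10 and V(A) = 9, a good sanity check to redo by hand.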

**Reinforcement learning**

- Exploration vs. exploitation
- Model-based learning
- TD value learning / Q-learning
- Linear value function approximation
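
The tabular Q-learning update applied to observed transitions; the two-state world, its actions, and rewards are made up for illustration:

```python
def q_update(Q, s, a, r, s2, actions, alpha=0.5, gamma=0.9):
    """One Q-learning update from an observed transition (s, a, r, s2):
    Q(s,a) <- (1 - alpha) Q(s,a) + alpha [r + gamma max_a' Q(s', a')]."""
    sample = r + gamma * max((Q.get((s2, a2), 0.0) for a2 in actions(s2)),
                             default=0.0)        # s2 terminal: future value 0
    Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * sample
    return Q[(s, a)]

# Two observed transitions: B exits with reward 10, then A leads to B with reward 0.
Q = {}
acts = lambda s: {'A': ['go'], 'B': ['exit']}.get(s, [])
q_update(Q, 'B', 'exit', 10.0, 'done', acts)   # Q(B, exit) = 0.5 * 10 = 5.0
q_update(Q, 'A', 'go', 0.0, 'B', acts)         # sample = 0.9 * 5.0 = 4.5, so Q(A, go) = 2.25
```

The key exam point: the update uses only the observed sample, never the transition model, which is what makes Q-learning model-free.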

**Probability**

- Joint, conditional, and marginal distributions
- Independence and conditional independence
- Inference from joint distributions
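
Inference from a joint distribution is just summing table entries; here is a sketch with made-up numbers for a Weather/Traffic joint:

```python
# Joint distribution P(Weather, Traffic); outcome index 0 = weather, 1 = traffic.
joint = {('sun', 'light'): 0.4, ('sun', 'heavy'): 0.1,
         ('rain', 'light'): 0.1, ('rain', 'heavy'): 0.4}

def marginal(joint, index, value):
    """P(X_index = value): sum out all other variables."""
    return sum(p for outcome, p in joint.items() if outcome[index] == value)

def conditional(joint, index, value, given_index, given_value):
    """P(X_index = value | X_given = given_value) = joint sum / marginal."""
    num = sum(p for o, p in joint.items()
              if o[index] == value and o[given_index] == given_value)
    return num / marginal(joint, given_index, given_value)

p_rain = marginal(joint, 0, 'rain')                             # 0.1 + 0.4 = 0.5
p_heavy_given_rain = conditional(joint, 1, 'heavy', 0, 'rain')  # 0.4 / 0.5 = 0.8
```

Marginalize by summing, condition by selecting consistent rows and renormalizing: those two operations are all that joint-table inference questions require.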

**Bayes nets**

- Representation and semantics
- Inferring joint distributions from conditional probability tables
- Inference from joint distributions
- Conditional independence and d-separation
- Variable elimination
- Prior sampling, rejection sampling and likelihood weighting
- Decision diagrams
- Value of perfect information
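
Prior and rejection sampling on a minimal two-node net (Rain -> Wet); the CPT numbers are invented for illustration:

```python
import random

P_RAIN = 0.3
P_WET_GIVEN = {True: 0.9, False: 0.2}   # P(Wet | Rain), P(Wet | not Rain)

def prior_sample(rng):
    """Sample (rain, wet) top-down from the CPTs, parents before children."""
    rain = rng.random() < P_RAIN
    wet = rng.random() < P_WET_GIVEN[rain]
    return rain, wet

def rejection_estimate(n=100_000, seed=0):
    """Estimate P(Rain | Wet = true): discard samples inconsistent with the evidence."""
    rng = random.Random(seed)
    rain_and_wet = wet_count = 0
    for _ in range(n):
        rain, wet = prior_sample(rng)
        if wet:                          # reject samples where the evidence fails
            wet_count += 1
            rain_and_wet += rain
    return rain_and_wet / wet_count

estimate = rejection_estimate()          # exact answer: 0.27 / 0.41, about 0.659
```

Likelihood weighting avoids the rejected samples by fixing the evidence variables and weighting each sample by the probability of that evidence, which matters when the evidence is unlikely.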

**Hidden Markov models (HMMs)**

- Representation and semantics
- Exact inference (forward algorithm, belief updates)
- Stationary distributions
- Particle filtering
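
One belief update of the forward algorithm (elapse time, weight by the observation, normalize), with illustrative umbrella-world numbers:

```python
def forward_step(belief, transition, emission, obs):
    """belief[s] = P(s | evidence so far); transition[s][s2] = P(s2 | s);
    emission[s][obs] = P(obs | s). Returns the updated belief."""
    # 1. Time elapse: push the belief through the transition model.
    predicted = {s2: sum(belief[s] * transition[s][s2] for s in belief)
                 for s2 in belief}
    # 2. Observe: weight each state by how well it explains the observation.
    weighted = {s: emission[s][obs] * predicted[s] for s in predicted}
    # 3. Normalize back to a probability distribution.
    z = sum(weighted.values())
    return {s: weighted[s] / z for s in weighted}

T = {'rain': {'rain': 0.7, 'sun': 0.3}, 'sun': {'rain': 0.3, 'sun': 0.7}}
E = {'rain': {'umbrella': 0.9, 'none': 0.1}, 'sun': {'umbrella': 0.2, 'none': 0.8}}
belief = forward_step({'rain': 0.5, 'sun': 0.5}, T, E, 'umbrella')
# P(rain | umbrella) = 0.45 / 0.55, about 0.818
```

Particle filtering approximates the same two steps with samples: move each particle with the transition model, then resample in proportion to the emission weights.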

**Machine learning**

- Basic concepts: learning, generalization, overfitting, experimental methodology
- The naive Bayes classifier
- The perceptron classifier
- The MIRA classifier
- The nearest neighbor classifier
- Estimation and smoothing
- Purposes of held-out (validation) data
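
A multiclass perceptron sketch; the toy features and labels are invented for illustration:

```python
def perceptron_train(data, n_classes, epochs=10):
    """Multiclass perceptron: data is a list of (feature_vector, label) pairs.
    One weight vector per class; on a mistake, boost the true class's weights
    by the features and penalize the predicted class's weights by the same amount."""
    dim = len(data[0][0])
    w = [[0.0] * dim for _ in range(n_classes)]
    for _ in range(epochs):
        for f, y in data:
            guess = max(range(n_classes),
                        key=lambda c: sum(w[c][i] * f[i] for i in range(dim)))
            if guess != y:
                for i in range(dim):
                    w[y][i] += f[i]
                    w[guess][i] -= f[i]
    return w

def perceptron_predict(w, f):
    """Predict the class whose weight vector scores the features highest."""
    return max(range(len(w)), key=lambda c: sum(w[c][i] * f[i] for i in range(len(f))))

# Toy separable data: class decided by the sign of the first feature (second is a bias).
data = [((1.0, 1.0), 1), ((2.0, 1.0), 1), ((-1.0, 1.0), 0), ((-2.0, 1.0), 0)]
w = perceptron_train(data, 2)
```

MIRA differs only in the update: instead of adding the raw feature vector, it scales the correction by a step size chosen to just fix the mistake (capped by a constant).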

**Advanced topics**

- Natural language processing
- Robotics
- Unsupervised learning (clustering)
- Semi-supervised learning