**The final exam will be held on Friday, December 16th, from 8am to 11am, in 230 Hearst Gym if the last digit of your student ID number is odd, and in 237 Hearst Gym if it is even.**

The final is comprehensive and covers material from both before and after the midterm exam.

The final will be closed notes, books, laptops, and people. However, you may use two two-sided cheat sheets (i.e. four sides of paper total) of your own design (group design okay but not recommended). You may wish to reuse your sheet from the midterm exam and prepare your second sheet with material from the second half of the course.

You may also use a **basic**, non-programmable calculator. A calculator is not required, but it may be helpful and is recommended. (No TI-86s, iPhones, etc.)

For practice, here are exams from previous semesters:

- Spring 11 Final
- Spring 11 Midterm (solutions)
- Spring 11 Practice Midterm
- Fall 10 Final
- Fall 10 Midterm (solutions)
- Spring 10 Final
- Spring 10 Midterm (solutions)
- Fall 09 Final
- Fall 09 Midterm (solutions)
- Spring 09 Final (solutions)
- Spring 09 Midterm (solutions)
- Fall 08 Final (solutions)
- Fall 08 Midterm (solutions)
- Fall 07 Final
- Fall 07 Midterm
- Fall 06 Final
- Fall 06 Midterm 1 (solutions)
- Spring 06 Final
- Spring 06 Practice Final (solutions)
- Spring 06 Midterm (solutions)
- Spring 06 Practice Midterm (solutions)

You can also look at much older exams from other versions of the class, but be aware that the syllabus has changed over time.

Topical review: Office hours from 12/5 through the exam will follow a special schedule. Most office hours during this period will be themed review sessions, like those held before the midterm, but there will also be regular office hours where you may ask general questions.

RRR and finals week office hours (in progress):

Topic | Time | Location | GSI |
---|---|---|---|
Search | Tuesday 12/13, 2pm-3pm | 611 Soda | Georgia |
Search | Wednesday 12/14, 2pm-3pm | 611 Soda | Woody |
CSPs | Tuesday 12/13, 5pm-6pm | 651 Soda | Bharath |
Games | Wednesday 12/14, 11am-noon | 611 Soda | Jon Long |
MDPs and RL | Monday 12/12, 10:30am-noon | 611 Soda | Mohit |
MDPs and RL | Wednesday 12/14, 3pm-4pm | 611 Soda | Woody |
Bayes' Nets | Monday 12/12, noon-1:30pm | 411 Soda | Georgia |
Bayes' Nets | Tuesday 12/13, noon-1pm | 611 Soda | Jon Long |
Utility, VPI | Tuesday 12/13, 3pm-4:30pm | 651 Soda | Bharath |
Utility, VPI | | | |
HMMs and Particle Filtering | Friday 12/9, 4pm-5:30pm | 611 Soda | Mohit |
HMMs and Particle Filtering | Monday 12/12, 9am-10am | 611 Soda | Woody |
HMMs and Particle Filtering | Thursday 12/15, 2pm-3pm | 411 Soda | Greg |
Machine Learning | Thursday 12/8, 2pm-3:30pm | 651 Soda | Georgia |
Machine Learning | Friday 12/9, 2pm-3:30pm | 411 Soda | Bharath |
Machine Learning | Thursday 12/15, 3pm-4:30pm | 411 Soda | Greg |
General | Wednesday 12/7, 4pm-6pm | 611 Soda | Bharath |
General | Thursday 12/8, 3:30pm-5pm | 411 Soda | Greg |
General | Monday 12/12, 4:30pm-6pm | 611 Soda | Mohit |
General | Tuesday 12/13, 3pm-4pm | 611 Soda | Georgia |
General | Wednesday 12/14, noon-1pm | 611 Soda | Jon Long |
General | Wednesday 12/14, 1pm-2pm | 611 Soda | Woody |
General | Thursday 12/15, 4:30pm-5:30pm | 611 Soda | Greg |

Topics covered on the final include:

**Search**

- BFS, DFS, UCS, A*, Greedy search (tree and graph)
- Search algorithms' strengths and weaknesses
- Properties: completeness, optimality
- Admissibility and consistency for A*
- Be able to formulate search problems and create heuristics
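
These algorithms share one frontier-driven skeleton; as a review aid, here is a minimal sketch of A* graph search on a made-up toy graph (the graph, heuristic values, and function names are illustrative, not from any course project):

```python
import heapq

def astar(start, goal, neighbors, heuristic):
    """A* graph search; returns a path or None.

    neighbors(s) yields (successor, step_cost). The heuristic should be
    admissible (never overestimate) and, for graph search, consistent."""
    frontier = [(heuristic(start), 0, start, [start])]  # (f, g, state, path)
    best_g = {}
    while frontier:
        f, g, s, path = heapq.heappop(frontier)
        if s == goal:
            return path  # goal test at expansion time preserves optimality
        if s in best_g and best_g[s] <= g:
            continue  # already expanded this state at equal or lower cost
        best_g[s] = g
        for s2, cost in neighbors(s):
            heapq.heappush(frontier,
                           (g + cost + heuristic(s2), g + cost, s2, path + [s2]))
    return None

# Toy weighted graph with admissible heuristic estimates.
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 1)], 'D': []}
h = {'A': 2, 'B': 1, 'C': 1, 'D': 0}
path = astar('A', 'D', lambda s: graph[s], lambda s: h[s])
```

UCS is the special case where the heuristic is zero everywhere, and greedy search orders the frontier by the heuristic alone.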

**CSPs**

- Basic definitions and solution with DFS
- Forward checking, arc consistency
- Conditions under which CSPs are efficiently solvable
- Local search for CSPs
- Be able to formulate CSPs
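
For review, a minimal sketch of backtracking search with forward checking on a toy map-coloring CSP (the variables, constraints, and helper names are made up for illustration):

```python
def backtrack(assignment, domains, constraints, variables):
    """Backtracking DFS with forward checking for a binary CSP.
    constraints[(X, Y)] is a predicate valid(x, y); domains maps var -> set."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in sorted(domains[var]):
        pruned = {}  # values removed by forward checking, for undo
        ok = True
        for other in variables:
            if other in assignment or (var, other) not in constraints:
                continue
            bad = {w for w in domains[other]
                   if not constraints[(var, other)](value, w)}
            if bad:
                pruned[other] = bad
                domains[other] -= bad
            if not domains[other]:
                ok = False  # a neighbor's domain was wiped out: prune early
                break
        if ok:
            result = backtrack({**assignment, var: value},
                               domains, constraints, variables)
            if result is not None:
                return result
        for other, bad in pruned.items():  # undo pruning before backtracking
            domains[other] |= bad
    return None

# Toy map coloring: three mutually adjacent regions, three colors.
variables = ['WA', 'NT', 'SA']
neq = lambda a, b: a != b
constraints = {(x, y): neq for x in variables for y in variables if x != y}
domains = {v: {'r', 'g', 'b'} for v in variables}
solution = backtrack({}, domains, constraints, variables)
```

Arc consistency (AC-3) prunes more aggressively by propagating deletions across all arcs, not just the arcs out of the variable just assigned.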

**Games**

- Minimax search
- Alpha-beta pruning
- Expectimax search
- Evaluation functions
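
A compact sketch of depth-limited minimax with alpha-beta pruning on a classic two-ply example tree (the tree shape, values, and function names are illustrative):

```python
def alphabeta(state, depth, alpha, beta, maximizing, successors, evaluate):
    """Depth-limited minimax with alpha-beta pruning.
    successors(state) -> child states; evaluate(state) -> leaf value."""
    children = successors(state)
    if depth == 0 or not children:
        return evaluate(state)
    if maximizing:
        value = float('-inf')
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, successors, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: MIN will never let play reach here
        return value
    else:
        value = float('inf')
        for child in children:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, successors, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value

# Two-ply example: MAX root, MIN layer, leaf utilities 3, 12, 2, 4.
tree = {'root': ['L', 'R'], 'L': ['L1', 'L2'], 'R': ['R1', 'R2']}
leaves = {'L1': 3, 'L2': 12, 'R1': 2, 'R2': 4}
value = alphabeta('root', 2, float('-inf'), float('inf'), True,
                  lambda s: tree.get(s, []), lambda s: leaves.get(s, 0))
```

In this run the right subtree is cut off after seeing 2, since MIN can already force a value below MAX's alpha of 3. Expectimax replaces the MIN step with a probability-weighted average, and this kind of pruning no longer applies without bounds on leaf values.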

**MDPs**

- The maximum expected utility (MEU) principle
- Reflex agents and policies
- Markov decision process definition
- Reward functions, values and q-values
- Bellman Equations
- Value and policy iteration
- Be able to formulate a problem as an MDP
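
The Bellman update behind value iteration can be reviewed with a minimal sketch on a made-up two-state MDP (the MDP and all signatures are illustrative):

```python
def value_iteration(states, actions, transition, reward, gamma, iterations=100):
    """Value iteration: V(s) <- max_a sum_{s'} T(s,a,s') [R(s,a,s') + gamma V(s')].
    transition(s, a) -> list of (s', prob); states with no actions keep V = 0."""
    V = {s: 0.0 for s in states}
    for _ in range(iterations):
        # One synchronous Bellman backup over all states.
        V = {s: max((sum(p * (reward(s, a, s2) + gamma * V[s2])
                         for s2, p in transition(s, a))
                     for a in actions(s)), default=0.0)
             for s in states}
    return V

# Made-up 2-state MDP: in 's', 'stay' earns 1 forever, 'quit' earns 10 once.
states = ['s', 'done']
actions = lambda s: ['stay', 'quit'] if s == 's' else []
transition = lambda s, a: [('s', 1.0)] if a == 'stay' else [('done', 1.0)]
reward = lambda s, a, s2: 1.0 if a == 'stay' else 10.0
V = value_iteration(states, actions, transition, reward, gamma=0.5)
```

With discount 0.5, staying is worth 1/(1 - 0.5) = 2, so the optimal policy quits and V('s') converges to 10.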

**Reinforcement Learning**

- Exploration vs. exploitation
- Model-based and model-free learning
- TD value learning / Q-learning
- Linear value function approximation
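
Model-free TD updates can be reviewed with a tiny tabular Q-learning sketch (the replayed transitions and names are made up for illustration):

```python
from collections import defaultdict

def q_learning(transitions, alpha=0.5, gamma=1.0):
    """Tabular Q-learning from observed (s, a, r, s') transitions:
    Q(s,a) <- (1 - alpha) Q(s,a) + alpha [r + gamma max_a' Q(s', a')]."""
    Q = defaultdict(float)
    known_actions = defaultdict(set)  # actions seen so far from each state
    for s, a, r, s2 in transitions:
        known_actions[s].add(a)
        sample = r + gamma * max((Q[(s2, a2)] for a2 in known_actions[s2]),
                                 default=0.0)
        Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample
    return dict(Q)

# Replaying one transition four times: Q walks geometrically toward the sample.
Q = q_learning([('A', 'east', 1.0, 'exit')] * 4)
```

Each update moves Q(s, a) a fraction alpha toward the sample r + gamma·max Q(s', a'), so repeated identical samples converge geometrically toward the sample value (here 0.5, 0.75, 0.875, 0.9375, ...).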

**Probability**

- Joint, conditional, and marginal distributions
- Independence and conditional independence
- Inference by enumeration from joint distributions
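
Inference by enumeration is mechanical once you have the full joint; a small sketch over a made-up joint on two boolean variables (all names and numbers are illustrative):

```python
def conditional(joint, variables, query, evidence):
    """P(query | evidence) by enumerating a full joint distribution.
    joint maps value tuples (ordered like `variables`) to probabilities."""
    idx = {v: i for i, v in enumerate(variables)}
    dist = {}
    for row, p in joint.items():
        # Keep only rows consistent with the evidence; sum out everything else.
        if all(row[idx[v]] == val for v, val in evidence.items()):
            q = row[idx[query]]
            dist[q] = dist.get(q, 0.0) + p
    z = sum(dist.values())  # probability of the evidence; normalize it out
    return {val: p / z for val, p in dist.items()}

# Made-up joint over two boolean variables.
variables = ['toothache', 'cavity']
joint = {(True, True): 0.12, (True, False): 0.08,
         (False, True): 0.08, (False, False): 0.72}
posterior = conditional(joint, variables, 'cavity', {'toothache': True})
```

Here P(cavity | toothache) = 0.12 / (0.12 + 0.08) = 0.6.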

**Bayes' Nets**

- Representation and semantics
- Building joint distributions from conditional probability tables
- Inference from joint distributions
- Variable elimination
- Sampling / approximate inference
- Conditional independence and d-separation
- Formulating Bayes' nets for problems
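
For approximate inference, here is a compact rejection-sampling sketch on a made-up two-node net (the structure, CPT numbers, and names are all illustrative):

```python
import random

def rejection_sample(net, order, query, evidence, n, seed=0):
    """Rejection sampling for P(query | evidence) in a Bayes' net of boolean
    variables. net[X] = (parents, cpt), where cpt maps a tuple of parent
    values to P(X = True); `order` must be a topological ordering."""
    rng = random.Random(seed)
    counts = {True: 0, False: 0}
    for _ in range(n):
        sample = {}
        for x in order:  # prior sampling: sample each node given its parents
            parents, cpt = net[x]
            p_true = cpt[tuple(sample[p] for p in parents)]
            sample[x] = rng.random() < p_true
        # Reject samples that contradict the evidence.
        if all(sample[e] == v for e, v in evidence.items()):
            counts[sample[query]] += 1
    total = counts[True] + counts[False]
    return counts[True] / total if total else None

# Made-up net Rain -> WetGrass: estimate P(Rain | WetGrass = True).
net = {'Rain': ([], {(): 0.3}),
       'WetGrass': (['Rain'], {(True,): 0.9, (False,): 0.1})}
estimate = rejection_sample(net, ['Rain', 'WetGrass'], 'Rain',
                            {'WetGrass': True}, 20000)
```

The exact answer by enumeration is 0.27 / 0.34 ≈ 0.794. Rejection sampling wastes every sample that contradicts the evidence; likelihood weighting avoids this by fixing evidence variables and weighting each sample instead.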

**Utility and VPI**

- Drawing and reasoning about decision networks
- Finding actions that maximize expected utilities
- Manipulating Bayes' nets to compute conditional probabilities
- Computing VPI of a random variable
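
VPI is just a difference of MEUs; a worked numeric sketch (the drilling scenario and payoffs are made up for illustration):

```python
def meu(actions, outcome_dist, utility):
    """Maximum expected utility of acting now under a belief over outcomes."""
    return max(sum(p * utility(a, o) for o, p in outcome_dist.items())
               for a in actions)

# Made-up drilling decision: P(oil) = 0.5 before any test is run.
actions = ['drill', 'pass']
utility = lambda a, oil: 0.0 if a == 'pass' else (100.0 if oil else -70.0)
prior = {True: 0.5, False: 0.5}

meu_now = meu(actions, prior, utility)  # drill: 15, pass: 0 -> MEU = 15
# Perfect information: observe the outcome first, then act optimally in each case.
meu_informed = sum(p * meu(actions, {o: 1.0}, utility)
                   for o, p in prior.items())
vpi = meu_informed - meu_now
```

Here perfect information is worth 35, because observing "no oil" lets the agent avoid the -70 outcome; VPI is always nonnegative for a rational agent.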

**HMMs and Particle Filtering**

- HMM structure and Bayes' net properties
- Forward algorithm, computing belief distributions
- Particle filtering: running dynamics model, reweighting particles, resampling
- Using the forward algorithm and particle filtering for DBNs
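
The forward algorithm alternates a time-elapse step and an observation step; a small sketch on a made-up umbrella-style HMM (all distributions and names are illustrative):

```python
def forward(prior, transition, emission, observations):
    """Forward algorithm: belief P(X_t | e_1..t) after each observation.
    prior: state -> P(X_0); transition[s][s2] = P(s2 | s);
    emission[s][e] = P(e | s)."""
    belief = dict(prior)
    beliefs = []
    for e in observations:
        # Time elapse: push the belief through the dynamics model.
        predicted = {s2: sum(belief[s] * transition[s][s2] for s in belief)
                     for s2 in belief}
        # Observe: weight by the emission probability, then normalize.
        weighted = {s: predicted[s] * emission[s][e] for s in predicted}
        z = sum(weighted.values())
        belief = {s: w / z for s, w in weighted.items()}
        beliefs.append(belief)
    return beliefs

# Made-up umbrella world: hidden Rain, observed Umbrella.
prior = {'rain': 0.5, 'sun': 0.5}
transition = {'rain': {'rain': 0.7, 'sun': 0.3},
              'sun': {'rain': 0.3, 'sun': 0.7}}
emission = {'rain': {'umbrella': 0.9, 'none': 0.1},
            'sun': {'umbrella': 0.2, 'none': 0.8}}
beliefs = forward(prior, transition, emission, ['umbrella'])
```

Particle filtering approximates exactly these two steps with samples: pass each particle through the dynamics model, then reweight by the emission probability and resample.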

**Machine Learning**

- Naive Bayes: model, inference
- Maximum likelihood estimation, Laplace smoothing
- Perceptron: training procedure, decision rule
- Linear separability of data
- Qualitative convergence properties of the perceptron (i.e. no rates)
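
A minimal binary perceptron sketch on made-up linearly separable data (feature values and names are illustrative):

```python
def perceptron(data, epochs=10):
    """Binary perceptron. data: list of (features, label) with label +/-1;
    include a constant bias feature if a nonzero intercept is needed."""
    w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        mistakes = 0
        for x, y in data:
            score = sum(wi * xi for wi, xi in zip(w, x))
            pred = 1 if score >= 0 else -1
            if pred != y:  # misclassified: move w toward (or away from) x
                w = [wi + y * xi for wi, xi in zip(w, x)]
                mistakes += 1
        if mistakes == 0:
            break  # a full pass with no mistakes: training data separated
    return w

# Separable toy data: the label is the sign of the first feature.
data = [((1.0, 1.0), 1), ((2.0, 1.0), 1),
        ((-1.0, 1.0), -1), ((-2.0, 1.0), -1)]
w = perceptron(data)
```

On linearly separable data the total number of mistakes is bounded, so training terminates; on non-separable data the weights oscillate forever, which is why only qualitative convergence (no rates) is claimed.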