About the Exam
The final will be held on Tuesday, May 19th from 8am to 11am in 390 Hearst Mining.
The final will be closed notes, books, laptops, and people. However, you may use up to two (two-sided) note sheets of your own design (group design ok but not recommended).
You may also use a basic, non-programmable calculator; it is not required, but it may be helpful. Your TI-86 is not allowed. Neither is your iPhone.
Practice Exams
You can also look at much older exams from other versions of the class.
Review sessions:
Thursday 5/14, 11am-1pm, 390 Hearst Mining: Nimar will review questions from the Fall 2008 final exam.
Friday 5/15, 11am-1pm, Soda 310: John will review some important concepts and techniques (MDPs, Bayes nets, and classification).
Office hours:
Friday: John 3pm-4pm in Soda 711
Monday: Nick 2pm-5pm in Soda 711
Possible Exam Topics
Note: exam questions will in many cases ask you to extend or combine basic ideas and algorithms from class. Make sure you understand the fundamentals in addition to being able to execute the algorithms procedurally. The exam will not test your knowledge of Python; however, questions may assume familiarity with the projects (see past exams for examples).
Search:
- BFS, DFS, UCS, A*, Greedy search
- Search algorithms' strengths and weaknesses
- Properties: completeness, optimality
- Admissibility and consistency for A* heuristics (see the A* sketch after this list)
- Local search
- Be able to phrase search problems and create heuristics
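As a quick reference for the A* and heuristic items above, here is a minimal graph-search sketch in Python (the projects' language). The neighbors and h callables are placeholders invented for this example, not project code.

    import heapq
    from itertools import count

    def a_star(start, goal, neighbors, h):
        # A* graph search: expand states in increasing order of f(s) = g(s) + h(s).
        # h is admissible if it never overestimates the true cost to the goal,
        # and consistent if h(s) <= cost(s, s') + h(s') for every edge (s, s').
        tie = count()  # breaks ties between entries with equal f
        frontier = [(h(start), next(tie), 0, start, [start])]
        best_g = {start: 0}
        while frontier:
            f, _, g, state, path = heapq.heappop(frontier)
            if state == goal:
                return path, g
            if g > best_g.get(state, float("inf")):
                continue  # stale queue entry; a cheaper path to state was found
            for nxt, step_cost in neighbors(state):
                g2 = g + step_cost
                if g2 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g2
                    heapq.heappush(frontier,
                                   (g2 + h(nxt), next(tie), g2, nxt, path + [nxt]))
        return None, float("inf")

With h = 0 everywhere this is exactly UCS; greedy search orders the frontier by h alone, which is why it is fast but not optimal.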
Constraint Satisfaction Problems:
- Basic definitions and solution with DFS
- Forward checking, arc consistency (AC-3 sketch after this list)
- Conditions under which CSPs are efficiently solvable
- Local search for CSPs
- Be able to phrase CSPs
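For the arc consistency item, here is a minimal AC-3 sketch, assuming binary constraints given as predicates; the data layout is invented for the example.

    from collections import deque

    def ac3(domains, constraints):
        # Enforce arc consistency. domains maps var -> set of values;
        # constraints maps each directed arc (x, y) to a predicate ok(vx, vy).
        queue = deque(constraints)  # start with every arc
        while queue:
            x, y = queue.popleft()
            ok = constraints[(x, y)]
            pruned = {vx for vx in domains[x]
                      if not any(ok(vx, vy) for vy in domains[y])}
            if pruned:
                domains[x] -= pruned  # these values have no support in y's domain
                if not domains[x]:
                    return False  # a domain emptied: the CSP has no solution
                # every arc into x must be rechecked after x's domain shrinks
                queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
        return True

Forward checking is the one-step special case: after assigning a variable, prune only the domains of its unassigned neighbors, with no further propagation.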
Games:
- Minimax search
- Alpha-beta pruning (sketch after this list)
- Expectimax search
- Evaluation function design
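For the pruning item, here is a minimal depth-limited minimax sketch with alpha-beta; moves and evaluate are placeholder callables invented for this example.

    def alpha_beta(state, depth, alpha, beta, maximizing, moves, evaluate):
        # moves(state) yields (action, successor) pairs; evaluate scores leaves.
        successors = list(moves(state))
        if depth == 0 or not successors:
            return evaluate(state)
        if maximizing:
            value = float("-inf")
            for _, child in successors:
                value = max(value, alpha_beta(child, depth - 1,
                                              alpha, beta, False, moves, evaluate))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break  # beta cutoff: MIN already has a better option elsewhere
            return value
        else:
            value = float("inf")
            for _, child in successors:
                value = min(value, alpha_beta(child, depth - 1,
                                              alpha, beta, True, moves, evaluate))
                beta = min(beta, value)
                if alpha >= beta:
                    break  # alpha cutoff: MAX already has a better option elsewhere
            return value

Remember that pruning never changes the minimax value at the root, only the amount of work; move ordering determines how much gets pruned.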
Markov Decision Processes:
- The maximum expected utility (MEU) principle
- Reflex agents and policies
- Markov decision process definition
- Reward functions, values and q-values
- Bellman Equations
- Value and policy iteration (value iteration sketch after this list)
- Be able to phrase a problem as an MDP
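For the Bellman equation and value iteration items, recall V*(s) = max_a sum_s' T(s,a,s') * [R(s,a,s') + gamma * V*(s')]. A minimal sketch, assuming T(s, a) returns (next state, probability) pairs; the interfaces are invented for this example.

    def value_iteration(states, actions, T, R, gamma=0.9, eps=1e-6):
        # Repeat the batch Bellman update until values stop changing.
        V = {s: 0.0 for s in states}
        while True:
            new_V = {}
            for s in states:
                q_values = [sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in T(s, a))
                            for a in actions(s)]
                new_V[s] = max(q_values) if q_values else 0.0  # terminal states stay 0
            if max(abs(new_V[s] - V[s]) for s in states) < eps:
                return new_V
            V = new_V

Policy iteration alternates the same kind of evaluation (with the max replaced by the fixed policy's action) with greedy policy improvement.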
Reinforcement Learning:
- Exploration vs exploitation
- Model-based learning
- TD value learning / Q-learning (update sketch after this list)
- Linear value function approximation
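For the Q-learning item, the temporal-difference update is the one-liner to know: Q(s,a) <- (1 - alpha) * Q(s,a) + alpha * [r + gamma * max_a' Q(s',a')]. A sketch with invented interfaces:

    def q_update(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.9):
        # One Q-learning step after observing the transition (s, a, r, s2).
        # Q is a dict keyed by (state, action); actions(s2) lists legal actions.
        sample = r + gamma * max((Q.get((s2, a2), 0.0) for a2 in actions(s2)),
                                 default=0.0)  # default covers terminal s2
        Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * sample

Linear function approximation replaces the table with Q(s,a) = w . f(s,a) and shifts each weight by alpha * (sample - Q(s,a)) * f_i(s,a).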
Probability:
- Joint, conditional and marginal distributions
- Independence and conditional independence
- Inference from joint distributions (sketch after this list)
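For inference from a joint distribution, the whole procedure is select-then-normalize. In this sketch a joint table is a dict from full assignments (value tuples, one per variable, in order) to probabilities; the layout is invented for the example.

    def condition(joint, variables, evidence):
        # P(everything else | evidence): keep consistent rows, renormalize.
        idx = {v: variables.index(v) for v in evidence}
        rows = {a: p for a, p in joint.items()
                if all(a[idx[v]] == val for v, val in evidence.items())}
        z = sum(rows.values())
        return {a: p / z for a, p in rows.items()}

    # Example: P(Traffic | Rain = True) from a joint over (Rain, Traffic).
    joint = {(True, True): 0.3, (True, False): 0.1,
             (False, True): 0.2, (False, False): 0.4}
    print(condition(joint, ("Rain", "Traffic"), {"Rain": True}))
    # {(True, True): 0.75, (True, False): 0.25}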
Bayes' Nets:
- Representation and semantics
- Inferring joint distributions from conditional probability tables
- Inference from joint distributions
- Conditional independence and d-separation
- Variable elimination
- Prior sampling, rejection sampling and likelihood weighting (weighting sketch after this list)
- Decision diagrams
- Value of perfect information
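Among the sampling items, likelihood weighting is the one that usually needs a second look: evidence variables are fixed rather than sampled, and each sample is weighted by how likely that evidence was. A sketch over boolean variables, with an invented cpt interface:

    import random

    def likelihood_weighting(variables, cpt, query, evidence, n=10000):
        # variables must be listed in topological order; cpt(X, value, sample)
        # returns P(X = value | parents of X as already set in sample).
        num = den = 0.0
        for _ in range(n):
            weight, sample = 1.0, {}
            for X in variables:
                if X in evidence:
                    sample[X] = evidence[X]
                    weight *= cpt(X, evidence[X], sample)  # weight, don't sample
                else:
                    sample[X] = random.random() < cpt(X, True, sample)
            den += weight
            if sample[query]:
                num += weight
        return num / den  # estimate of P(query = True | evidence)

Prior sampling is the same loop with no evidence and weight 1; rejection sampling additionally throws away any sample that contradicts the evidence.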
Hidden Markov Models:
- Representation and semantics
- Exact inference (forward algorithm, belief updates; sketch after this list)
- Stationary distributions
- Particle filtering
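For the belief update item, one step of exact HMM inference is elapse time, then observe, then normalize. The dict-based representation here is invented for the example.

    def forward_step(belief, transition, emission, observation):
        # belief maps state -> P(state | evidence so far);
        # transition[s][s2] = P(s2 | s); emission(s, obs) = P(obs | s).
        # Elapse time: B'(s') = sum_s P(s' | s) * B(s)
        predicted = {s2: sum(transition[s].get(s2, 0.0) * belief[s] for s in belief)
                     for s2 in belief}
        # Observe: reweight by the evidence likelihood, then normalize.
        weighted = {s: emission(s, observation) * p for s, p in predicted.items()}
        z = sum(weighted.values())
        return {s: p / z for s, p in weighted.items()}

Particle filtering approximates the same two steps: move each particle through the transition model, then resample particles in proportion to these emission weights.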
Classification:
- Basic concepts: learning, generalization, overfitting, experimental methodology
- The naive Bayes classifier
- The perceptron classifier (sketch after this list)
- The MIRA classifier
- The nearest neighbor classifier
- Estimation and smoothing
- Purposes of held-out (validation) data
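For the perceptron item, here is a minimal multiclass sketch using feature dictionaries in the style of the projects; the training-data layout is invented for the example.

    def perceptron_train(data, labels, epochs=10):
        # data is a list of feature dicts (feature -> value); labels matches it.
        weights = {y: {} for y in set(labels)}

        def score(w, f):
            return sum(w.get(k, 0.0) * v for k, v in f.items())

        for _ in range(epochs):
            for f, y in zip(data, labels):
                guess = max(weights, key=lambda c: score(weights[c], f))
                if guess != y:  # mistake-driven: update only on errors
                    for k, v in f.items():
                        weights[y][k] = weights[y].get(k, 0.0) + v
                        weights[guess][k] = weights[guess].get(k, 0.0) - v
        return weights

MIRA makes the same kind of update but scales it by a step size chosen to just fix the mistake, capped by a constant.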
Applications and Advanced Topics:
Note: larger questions will not assume knowledge of these topics, but they may appear in true/false and short answer questions.
- Natural language processing
- Robotics
- Unsupervised learning (clustering)
- Semi-supervised learning