{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Autocorrect (Viterbi)\n", "#### Authors:\n", "v1.0 (2017 Spring) Tavor Baharav, Kabir Chandrasekhar, Sinho Chewi, Andrew Liu, Kamil Nar, David Wang, and Kannan Ramchandran\n", "\n", "v1.1 (2017 Fall) Sinho Chewi, Avishek Ghosh, Chen Meng, Abhay Parekh, and Jean Walrand\n", "\n", "For this option, you will be implementing autocorrect using the Viterbi algorithm.\n", "\n", "## Hidden Markov Model\n", "\n", "We will model the English language as a HMM. The state space is the set of all English words. The *hidden states* are the intended words of the writer, while the *emissions* are the actual words written down, which include typos. As an example, suppose an author intends to write the sentence \"the dog barks\". The sequence of hidden states is\n", "\n", "$$X(0) = \\text{the},$$\n", "$$X(1) = \\text{dog},$$\n", "$$X(2) = \\text{barks}.$$\n", "\n", "However, the author is prone to making errors while typing, so the emissions (sometimes called observations) are\n", "\n", "$$Y(0) = \\text{the},$$\n", "$$Y(1) = \\text{dog},$$\n", "$$Y(2) = \\text{barjs}.$$\n", "\n", "The Viterbi algorithm gives the most likely sequence of hidden states, i.e. the most likely sequence of intended words which produced the typo-filled output text.\n", "\n", "In the field of natural language processing (NLP), the literature often mentions *$n$-grams* models. A *unigram* is a single word. A *bigram* is a pair of words which appear consecutively in a text.\n", "\n", "You will use the unigram counts to determine the relative frequencies of words. The word frequencies specify the initial distribution of the HMM.\n", "\n", "The bigram counts are used to infer word-to-word transition probabilities.\n", "\n", "## Emission Model: Edit Distance\n", "\n", "The emission probabilities are based on a measure of string dissimilarity known as *edit distance* or *Levenshtein distance*. The edit distance between two strings $s_1$ and $s_2$ (with lengths $m$ and $n$ respectively) is defined by the recursive equations\n", "\n", "$$d(s_1, s_2) =\n", "\\begin{cases}\n", " m & \\text{if } s_2 \\text{ is empty}, \\\\\n", " n & \\text{if } s_1 \\text{ is empty}, \\\\\n", " \\min\n", " \\begin{cases}\n", " d(s_1(2, \\dotsc, m), s_2) + 1 \\\\\n", " d(s_1, s_2(2, \\dotsc, n)) + 1 \\\\\n", " d(s_1(2, \\dotsc, m), s_2(2, \\dotsc, n)) + \\mathbf{1}\\{s_1(1) \\ne s_2(1)\\}\n", " \\end{cases}\n", " & \\text{otherwise}.\n", "\\end{cases}\n", "$$\n", "The notation is as follows: $s_1(1)$ represents the first character in the string; and $s_1(2, \\dotsc, n)$ represents the substring which starts at the second character and ends at the $n$th character of $s_1$. Similarly for $s_2$.\n", "\n", "Here is some intuition for edit distance: it represents the minimum number of operations to change $s_1$ into $s_2$, where an operation is either\n", "- adding a new character,\n", "- deleting an existing character,\n", "- or changing one character into another.\n", "\n", "You can simply think of edit distance as a way to measure the distance between two strings. An example is shown below, demonstrating that the edit distance between \"elephant\" and \"relevant\" is 3, which is achieved by taking \"relevant\", deleting the r, replacing the v with a p, and adding an h immediately after.\n", "\n", "\n", "\n", "The emission model is given by a Poisson distribution. 
"\n", "The emission model is given by a Poisson distribution. If the hidden state $x$ and the emission $y$ have edit distance $d(x, y) = k$, then the emission probability is\n", "\n", "$$Q(x, y) = \\frac{e^{-\\lambda} \\lambda^k}{k!}.$$\n", "\n", "The larger the edit distance, the smaller the emission probability, which aligns with our intuition.\n", "\n", "## Dynamic Programming\n", "\n", "Ensure that you implement the edit distance function using dynamic programming. Dynamic programming means solving the edit distance equations iteratively: start by computing the edit distance between $s_1(m)$ and $s_2(n)$, then between $s_1(m-1, m)$ and $s_2(n)$, then between $s_1(m)$ and $s_2(n-1, n)$, and so on. The key idea is to solve the equations backwards in order to avoid computationally prohibitive recursive calls that branch out into more recursive calls. As a benchmark, our implementation allows us to correct a sentence of three words in roughly 15 seconds.\n", "\n", "A concrete example of a dynamic programming problem is finding the longest path in a DAG (directed acyclic graph). Suppose we are given a DAG $G = (V, E)$ and we wish to find the longest path, measured by the number of vertices it visits. Let the vertices be $v_1,\\dotsc,v_n$, where $n := |V|$ and the order in which the vertices are listed is compatible with the DAG structure, in the sense that if $1 \\le i < j \\le n$, then the edge $(v_j, v_i)$ does not appear in the graph. (Sorting the vertices in this way is possible because we are working with a DAG; this is known as a topological sort.) For $i \\in \\{1,\\dotsc,n\\}$, let $L(v_i)$ denote the length of the longest path which starts from $v_i$ and uses only the nodes $\\{v_i,v_{i+1},\\dotsc,v_{n-1},v_n\\}$. Then, we can write the recurrence $L(v_i) = 1 + \\max\\{L(v_j) : (v_i, v_j) \\in E\\}$, with $L(v_i) := 1$ whenever $v_i$ has no outgoing edges (in particular, $L(v_n) := 1$). Now we can solve these equations backwards: solve for $L(v_n)$ first, and then once $L(v_n)$ is known, solve for $L(v_{n-1})$. Next, solve for $L(v_{n-2})$ now that $L(v_{n-1})$ and $L(v_n)$ are known, and continue. The point is that we store the computed values of $L$ so we do not have to recompute them in each iteration, which leads to huge reductions in the runtime of the algorithm.\n",
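"\n", "As a quick illustration of this backwards-solving pattern, here is a short sketch (the adjacency-list representation and the function name `longest_path_length` are purely illustrative):\n", "\n", "```python\n", "def longest_path_length(adj):\n", "    # adj[i] lists the vertices j with an edge i -> j; the vertices 0, 1, ..., n-1\n", "    # are assumed to already be in topological order, so every edge goes from i to j with i < j\n", "    n = len(adj)\n", "    L = [1] * n                     # L[i] = 1 for vertices with no outgoing edges\n", "    for i in range(n - 1, -1, -1):  # solve backwards: L[n-1] first, then L[n-2], ...\n", "        for j in adj[i]:\n", "            L[i] = max(L[i], 1 + L[j])  # reuse the stored value L[j] instead of recomputing it\n", "    return max(L)\n", "\n", "# Example: edges 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3; the longest path visits 3 vertices\n", "longest_path_length([[1, 2], [3], [3], []])  # 3\n", "```\n",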
"\n", "## Data Source\n", "\n", "We will be using data from http://norvig.com/ngrams/ for the unigram and bigram counts.\n", "\n", "We took the unigrams from the file \"count_1w100k.txt\" and the bigrams from the file \"count_2w.txt\".\n", "\n", "## Implementation Advice\n", "\n", "You do not have to follow these guidelines, but they may make your life a little easier. For starters, here is a short checklist of what to implement:\n", "- load in the data, compute the unigram and bigram frequencies, and create matrices containing log probabilities (we recommend using log probabilities rather than actual probability values to avoid numerical underflow)\n", "- a function which takes two strings as inputs and returns the edit distance between them\n", "- a function which takes two strings as inputs and returns an emission probability\n", "- the Viterbi algorithm itself\n", "\n", "Make sure to test your implementation starting with a small list of words, because running the algorithm on a large set of words can be very time-consuming.\n", "\n", "You will probably not want to use all of the words in the dataset, because the algorithm would be too slow. Instead, consider limiting the vocabulary to a small subset of the dataset consisting of the most frequently appearing words. (We recommend limiting the number of words to 10,000, but you can play around with this number too.)\n", "\n", "If your implementation is not optimized, the performance may be very slow. Try to use NumPy's functions whenever possible because they are *vectorized* (optimized for high performance).\n", "\n", "For the emission model, you may want to play around with different values of $\\lambda$ to see which one works best. We tried using $\\lambda = 0.01$.\n", "\n", "Good luck and have fun!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def viterbi(sentence):\n", "    # your code here\n", "    return -1" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import time\n", "start = time.time()\n", "print(viterbi(\"whyy is th ski bluui\"))\n", "end = time.time()\n", "print(\"Execution Time:\", end - start)\n", "# Staff solution returns \"why is the sky blue\",\n", "# takes under 4 seconds to run" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Reference: https://www.cs.sfu.ca/~mori/courses/cmpt310/a5.pdf" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.1" } }, "nbformat": 4, "nbformat_minor": 0 }