{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Lab 5 (Option 1) - PageRank\n", "\n", "#### Authors:\n", "\n", "v1.0 (2014 Fall) Rishi Sharma \\*, Sahaana Suri \\*, Kangwook Lee \\*\\*, Kannan Ramchandran \\*\\*
\n", "v1.1 (2015 Fall) Kabir Chandrasekher \\*, Max Kanwal \\*, Kangwook Lee \\*\\*, Kannan Ramchandran \\*\\*
\n", "v1.2 (2016 Spring) Kabir Chandrasekher, Tony Duan, David Marn, Ashvin Nair, Kangwook Lee, Kannan Ramchandran
\n", "\n", "## Introduction\n", "\n", "From Wikipedia: \n", "\n", "\n", "> PageRank is an algorithm used by Google Search to rank websites in their search engine results. PageRank was named after Larry Page, one of the founders of Google. PageRank is a way of measuring the importance of website pages. According to Google:\n", "\n", ">>PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.\n", "\n", "\n", "There are four common frameworks by which academics view Google's PageRank algorithm. The first looks at the social impact, both positive and negative, of immediate access to previously unimaginable knowledge through one centralized terminal. The second, and most mathematical, sees PageRank as a computation of the Singular Value Decomposition (SVD) of the adjacency matrix of the graph formed by the internet, with particular emphasis paid to the first few singular vectors. The third, and most far-reaching, practical technical implication of Google's work is the implementation of algorithms and computation at enormous scale. Much of the computing infrastructure which operates at a global scale deployed today can trace its origins to Google's need to perform SVD on an object as enormous as the Internet. Finally, a more intuitive way to look at the PageRank algorithm is through the lens of a web crawler (or many web crawler) acting as an agent (or agents) in a Markov Chain the size of the web. We will investigate this viewpoint.\n", "\n", "This crawler is searching for an approximate \"invariant\" distribution (why does a true invariant distribution almost certainly not exist?) and will rank pages based on their \"probability\" in this generated distribution. In order to do so, our crawler chooses to follow a link uniformly at random from the page it is on in order to arrive at a new page, keeping tally of how many times it has visited each page. If this crawler runs for a really, really long time, the fraction of time it has spent on each webpage will approximately be the probability of being on that page (assuming we account for pathologies in the Markov chain which we will discuss soon). We then rank pages in decreasing order of probability." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Alright, great! Let's do stuff. 
First, visit the following webpage, and see how many web pages can be reached by clicking the links on each page.\n", "http://www.eecs.berkeley.edu/~kw1jjang/ee126/1.html\n", "\n", "There are total of $8$ pages, and they are connected as follows.\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since we choose a link at uniform from each page, the probability of going between pages $x$ and $y$ is $\\Large \\frac{\\text{# of pages from x to y}}{\\text{# of pages leaving x}}$\n", "\n", "Thus the Markov chain generated by the web pages above is\n", "\n", "\n", "\n", "and the transition matrix of the Markov chain is\n", "\n", "$$\n", "\\left( \\begin{array}{cccccccc}\n", "0 & \\frac{1}{5} & \\frac{1}{5} & \\frac{1}{5} & \\frac{1}{5} & 0 & 0 & \\frac{1}{5} \\\\\n", "\\frac{1}{2} & 0 & 0 & 0 & \\frac{1}{2} & 0 & 0 & 0 \\\\\n", "0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n", "0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\\\\n", "0 & \\frac{1}{3} & \\frac{1}{3} & \\frac{1}{3} & 0 & 0 & 0 & 0 \\\\\n", "\\frac{1}{4} & 0 & \\frac{1}{4} & 0 & 0 & 0 & \\frac{1}{4} & \\frac{1}{4} \\\\\n", "\\frac{1}{4} & \\frac{1}{4} & 0 & 0 & \\frac{1}{4} & \\frac{1}{4} & 0 & 0 \\\\\n", "\\frac{1}{5} & \\frac{1}{5} & \\frac{1}{5} & \\frac{1}{5} & 0 & 0 & \\frac{1}{5} & 0\n", "\\end{array} \\right)\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## $\\mathcal{Q}$1. Find the steady-state (invariant/stationary) distribution $\\pi$ of the Markov chain above. How do you know that it exists? \n", "\n", " The Markov matrix is copied in code below. This might make your computation easier, but you can solve this in any way you wish. (Note: don't forget about the difference between right and left eigenvectors) " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n", "from __future__ import division\n", "\n", "P = np.matrix([[0, 1/5, 1/5, 1/5, 1/5, 0, 0, 1/5],\n", " [1/2, 0, 0, 0, 1/2, 0, 0, 0],\n", " [0, 1, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 1, 0, 0, 0],\n", " [0, 1/3, 1/3, 1/3, 0, 0, 0, 0],\n", " [1/4,0,1/4,0,0,0,1/4,1/4],\n", " [1/4, 1/4, 0, 0, 1/4, 1/4, 0, 0],\n", " [1/5, 1/5, 1/5, 1/5, 0, 0, 1/5, 0]])\n", "\n", "P_transpose = P.T\n", "\n", "print P" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Simulation time!\n", "\n", "We now want to *empirically* test what we solved above by modeling a random user hopping along those webpages. \n", "\n", "We will start the user at \"1.html\" and behave as per the Markov chain above. In the code below, we simulate this and keep track of the average amount of time a user spends in each state. We will expect that after enough iterations, the fraction of time spent in each state should approach the stationary distribution.\n", "\n", "We use the `parse_links()` method to parse all hyperlinks in a page. We use the library Beautiful Soup in order to complete this portion of the lab in order to easiliy parse pages. Once you download the latest release, you must build and install setup.py. Alternatively, use pip or easy_install (help).\n", "\n", "Note that this code relies on having **Python 2** installed -- it will not work with Python 3." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import re\n", "import sys\n", "import urllib\n", "import urlparse\n", "import random\n", "from bs4 import BeautifulSoup\n", "\n", "class MyOpener(urllib.FancyURLopener):\n", " version = 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.15) Gecko/20110303 Firefox/3.6.15'\n", "\n", "def domain(url):\n", " \"\"\"\n", " Parse a url to give you the domain.\n", " \"\"\"\n", " # urlparse breaks down the url passed it, and you split the hostname up \n", " # ex: hostname=\"www.google.com\" becomes ['www', 'google', 'com']\n", " hostname = urlparse.urlparse(url).hostname.split(\".\")\n", " hostname = \".\".join(len(hostname[-2]) < 4 and hostname[-3:] or hostname[-2:])\n", " return hostname\n", " \n", "def parse_links(url, url_start):\n", " \"\"\"\n", " Return all the URLs on a page and return the start URL if there is an error or no URLS.\n", " \"\"\"\n", " url_list = []\n", " myopener = MyOpener()\n", " try:\n", " # open, read, and parse the text using beautiful soup\n", " page = myopener.open(url)\n", " text = page.read()\n", " page.close()\n", " soup = BeautifulSoup(text, \"html\")\n", "\n", " # find all hyperlinks using beautiful soup\n", " for tag in soup.findAll('a', href=True):\n", " # concatenate the base url with the path from the hyperlink\n", " tmp = urlparse.urljoin(url, tag['href'])\n", " # we want to stay in the berkeley EECS domain (more relevant later)...\n", " if domain(tmp).endswith('berkeley.edu') and 'eecs' in tmp:\n", " url_list.append(tmp)\n", " if len(url_list) == 0:\n", " return [url_start]\n", " return url_list\n", " except:\n", " return [url_start]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## $\\mathcal{Q}$2. Simulating a Random Walk\n", "\n", "In the following code block, use the above functions to surf the web pages described by the Markov chain above. This code block may take a while to run. If it is taking more than a couple of minutes, maybe try reducing `num_of_visits` in order to at least get results. Also, running your code while connected to AirBears may help if you have a slow internet connection at home." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import random\n", "\n", "# the url we want to begin with \n", "url_start = \"http://www.eecs.berkeley.edu/~kw1jjang/ee126/1.html\"\n", "current_url = url_start\n", "\n", "# parameter to set the number of transitions you make/different pages you visit\n", "num_of_visits = 1000\n", "\n", "# dictionary of pages visited so far\n", "visit_history = {}\n", "\n", "# initialize dictionary since we know exactly where we'll end up\n", "for i in range(1, 9):\n", " page = \"http://www.eecs.berkeley.edu/~kw1jjang/ee126/\" + str(i) + \".html\"\n", " visit_history[page] = 0\n", "\n", "for i in range(num_of_visits):\n", "\n", " # Your code here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Print your results:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "for i in range(1, 9):\n", " page = \"http://www.eecs.berkeley.edu/~kw1jjang/ee126/\" + str(i) + \".html\"\n", " print 'Fraction of time staying on page %d is %f' % (i, float(visit_history[page])/num_of_visits)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Does this approximately match the invariant distribution you expected?" 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Generalizing to the Web\n", "\n", "The toy websites given above conveniently form an *irreducible* Markov chain (look up what this means if you do not remember from class), but most of the web will not look like this. There will be fringes of the internet containing only self-loops, or some web pages which do not link to others at all. In order to account for such pathologies in the web, we need to make a more intelligent surfer. The simplest idea would be to just jump back to the starting page if there are no links found on the page you are on, and to always return to a \"good\" starting point with probability $p$ on every page.\n", "\n", "This is a very naive scheme, and there are many more intelligent methods by which you can sample from the distribution of the Internet, accounting for its pathologies and all." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Ranking Berkeley Professors\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following code is a (weak) attempt to rank the Berkeley faculty based on a crawler which begins on the EECS research homepage." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "scrolled": false }, "outputs": [], "source": [ "url_start = \"http://www.eecs.berkeley.edu/Research/\"\n", "current_url = url_start\n", "num_of_visits = 200\n", "\n", "#List of professors obtained from the EECS page\n", "profs = ['Abbeel','Agrawala','Alon','Anantharam','Arcak','Arias','Asanović','Bachrach','Bajcsy','Bodik','Bokor','Boser','Brewer','Canny','Chang-Hasnain','Culler','Darrell','Demmel','Fearing','Fox','Franklin','Garcia','Goldberg','Hartmann','Harvey','Hellerstein','Javey','Joseph','Katz','Keutzer','Liu','Klein','Kubiatowicz','Lee','Lustig','Maharbiz','Malik','Nguyen','Niknejad','Nikolic',\"O'Brien\",'Parekh','Patterson','Paxson','Pisano','Rabaey','Ramchandran','Roychowdhury','Russell','Sahai','Salahuddin','Sanders','Sangiovanni-Vincentelli','Sastry','Sen','Seshia','Shenker','Song','Song','Spanos','Stoica','Stojanovic','Tomlin','Tygar','Walrand','Wawrzynek','Wu','Yablonovitch','Yelick','Zakhor']\n", "#Bad URLs help take care of some pathologies that ruin our surfing\n", "bad_urls = ['http://www.erso.berkeley.edu/','http://www.eecs.berkeley.edu/Rosters/roster.name.nostudentee.html','http://www.eecs.berkeley.edu/Resguide/admin.shtml#aliases','http://www.eecs.berkeley.edu/department/EECSbrochure/c1-s3.html']\n", "\n", "#Creating a dictionary to keep track of how often we come across a professor\n", "profdict = {}\n", "for i in profs:\n", " profdict[i] = 0\n", "\n", "for i in range(num_of_visits):\n", " print i , ' Visiting... 
"    if random.random() < 0.95:  # follow a link!\n", "        url_list = parse_links(current_url, url_start)\n", "        updated = False\n", "        while not updated:\n", "            current_url = random.choice(url_list)\n", "            updated = True\n", "            # dealing with more pathologies:\n", "            if current_url in bad_urls or \"iris\" in current_url or \"Deptonly\" in current_url or \"anchor\" in current_url or \"erso\" in current_url:\n", "                updated = False\n", "        myopener = MyOpener()\n", "        page = myopener.open(current_url)\n", "        text = page.read()\n", "        page.close()\n", "        # Figure out which professors are mentioned on the page.\n", "        for p in profs:\n", "            profdict[p] += 1 if \" \" + p + \" \" in text else 0  # could use re.findall(p, text), but it's overkill\n", "    else:  # click the \"home\" button!\n", "        current_url = url_start" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "scrolled": false }, "outputs": [], "source": [ "prof_ranks = [pair[0] for pair in sorted(profdict.items(), key=lambda item: item[1], reverse=True)]\n", "top_score = profdict[prof_ranks[0]]\n", "for i in range(len(prof_ranks)):\n", "    print \"%d %f: %s\" % (i + 1, float(profdict[prof_ranks[i]]) / top_score, prof_ranks[i])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print 'Top score: ', top_score" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## $\\mathcal{Q}$3. Try your hand at applying the above idea to a website you personally visit (somewhat) frequently! Do a simple crawl (similar to the above) and see if you can figure out something interesting. (Keep it simple.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Your code here" ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2.0 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.10" } }, "nbformat": 4, "nbformat_minor": 0 }