{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Echo Correction" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%reset -f\n", "%pylab inline\n", "import scipy.io.wavfile as wav\n", "from IPython.display import Audio" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Some helper functions.\n", "\n", "def playAudio(x, framerate = 44100):\n", " return Audio(x.real, rate = framerate)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Audio Equalization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We want to recover the audio file that has an echo encoded into it. We know that the room, or the \"echo system\", has the following impulse response function: $h[n] = \\delta[n] + \\alpha \\delta[n-t_d]$. Using this knowledge, we shall recover the original signal back! Let us assume we know that $\\alpha = 0.8$ and $t_d = 0.3s$.\n", "\n", "The frame rate (or sample rate) says how many samples were taken per second for the audio file. You will find the functions np.fft.fft and np.fft.ifft very useful. You will want to apply your transform to each chunk in the audio file to fix it." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let us load the audio file and listen to it." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "(framerate, y) = wav.read('echo.wav');\n", "\n", "delay = int(0.3 * framerate); # Time at which the second impulse occurs.\n", "alpha = 0.8;\n", "\n", "playAudio(y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Pretty echo-y, correct? Let us calculate the DFT basis of the echo system. First, we'll model the impulse response." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "h = np.zeros(y.shape)\n", "h[0] = 1\n", "h[delay] = alpha " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Great! Now we will calculate the DFT basis of the system response." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "H = np.fft.fft(h) # We use the fft call without the \"ortho\" because we want the eigenvalues" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If $X[k]$ is the spectrum of the true audio and $Y[k]$ is the spectrum of the signal that we hear (the echo), then,\n", "$$Y[k] = H[k] \\; X[k]$$\n", "To recover $X[k]$, we simply need to somehow undo this. What can we do in the frequency domain to undo the effect of multiplying by $H[k]$ ?\n", "We will use this fact to recover the clean audio! \n", "\n", "# Q. Fill in the arrays below as indicated by the comments." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "Y = np.fft.fft(y, norm='ortho') \n", "G = 0 #STUDENT, replace 0 with what you think will help recover the signal.\n", "X = G * Y # Recover X by using the inverse DFT you've stored in G.\n", "x = np.fft.ifft(X, norm='ortho') # Take the inverse transform of X to get back the clean signal." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Yay, you've fixed it! Let us listen to it." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "playAudio(x)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This audio clip is from the Create-Commons Non-Commercial With Attribution Licensed song \"Mandelbrot Set\" by Jonathan Coulton. You can find the whole song at http://www.jonathancoulton.com/" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Denoising Signals using the DFT" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%reset -f\n", "%pylab inline\n", "from IPython.display import Audio" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Some helper functions.\n", "\n", "def playAudio(x):\n", " return Audio(x.real,rate=44100)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let us listen to the message." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Make sure your volume isn't set too high.\n", "y = np.load('noisyaudio.npy');\n", "playAudio(y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us see if the the signal has any **structure**. Observe the below two plots." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "plt.figure(figsize=(15, 7))\n", "plt.subplot(1, 2, 1)\n", "plt.plot(y)\n", "plt.title('Entire audio.')\n", "plt.subplot(1, 2, 2)\n", "plt.plot(y[0:500])\n", "plt.title('First 500 samples.')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There isn't much structure discernible, is there? All hope is not lost! \n", "\n", "Let us take the Fourier transform of the above signal.\n", "\n", "# Q. Fill in the missing values as indicated by the comments." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "Y = 0 #STUDENT, Replace 0 so that Y has contains the coefficients of the audio signal viewed in the DFT basis." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will now plot the magntiude of the spectrum! Maybe some structure will be revealed...\n", "\n", "# Q. Fill in the missing values as indicated by the comments." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Plotting code.\n", "\n", "magY = 0 #STUDENT, replace 0 and store the magnitude of the spectrum in magY.\n", "\n", "plt.figure(figsize=(15, 3))\n", "plt.subplot(1, 3, 1)\n", "plt.plot(magY)\n", "plt.title('Entire spectrum (Magnitude).')\n", "plt.subplot(1, 3, 2)\n", "plt.plot(np.arange(1000, 1250, 1), magY[1000:1250])\n", "plt.title('1000 - 1250 DFT basis (Magnitude).')\n", "plt.subplot(1, 3, 3)\n", "plt.plot(np.arange(219250, 219500, 1), magY[219250:219500])\n", "plt.title('219250 - 219500 DFT basis (Magnitude).')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Interesting... It looks like there are two spikes on either side of the spectrum...\n", "\n", "We know the Professor Maharbiz generated pure tones. In the DFT basis, these pure tones are represented by 2 coefficients. Note the conjudate symmetry. (Note the conjugate part isn't seen because we have taken the magntiude.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### There is a simple method to denoise this signal: A simple threshold! \n", "Threshold the DFT spectrum by keeping the coefficients whose ***absolute*** values lies above a certain value. Then take the inverse DFT and listen to the audio. You will be given a range of possible values to test. Write the threshold value you think works best. Save the corrected audio in a variable call $x$.\n", "\n", "# Q. Fill in the missing values as indicated by the comments." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Possible threshold values to try: {30, 40, 50, 60, 70, 80, 90, 100, 110, 120}\n", "\n", "threshold = 0 #STUDENT, replace 0 with different values of thresholds to see what works\n", "\n", "Y[magY < threshold] = 0;\n", "y = np.fft.ifft(Y, norm='ortho').real\n", "\n", "playAudio(y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Hurray! Professor Sahai gets to listen to Professor Maharbiz's tones! Let us look at the signal in the time domain now." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "plt.figure(figsize=(15, 7))\n", "plt.subplot(1, 2, 1)\n", "plt.plot(y)\n", "plt.title('Entire audio.')\n", "plt.subplot(1, 2, 2)\n", "plt.plot(y[0:500])\n", "plt.title('First 500 samples.')" ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.11" }, "name": "_merged" }, "nbformat": 4, "nbformat_minor": 0 }