{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# EE 123 Lab1 - Time Domain Sonar Lab\n", "\n", "### Written by Miki Lustig and Frank Ong 2016" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this lab, we will interact with physical time-domain signals. The first part will involve generating and recording sounds on your computer. We will use the chirp signal to characterize the response of the speaker-microphone system and look at detecting signals using cross-correlation.\n", "In the second part, we will build on part one and use the computer sound system to develop a simple sonar." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Import functions and libraries\n", "from __future__ import division\n", "import numpy as np, matplotlib.pyplot as plt, bokeh.plotting as bk\n", "import threading,time, Queue, pyaudio\n", "from matplotlib.pyplot import *\n", "import matplotlib.cm as cm\n", "from scipy import signal\n", "from numpy import *\n", "from bokeh.models import GlyphRenderer\n", "from threading import Lock\n", "bk.output_notebook()\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 1: Chirping!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this assignment you will have to use iPython, and a laptop equipped with a speaker and a microphone. When playing a sound and recording on the computer speaker, the signal goes through several systems. In particular it goes through the response of the speaker, the room we are in and the response of the microphone recording system.\n", "\n", "A chirp is a a signal in which the frequency increases linearly with time. In this assignment we will generate a chirp signal and use it to measure the amplitude of the frequency response of our speaker-room-microphone system.\n", "\n", "A simultaneous frequency is defined as the derivative of the phase of a signal, $f = \\frac{1}{2\\pi} \\frac{d\\phi (t)}{ dt} $. For example, the simultaneous frequency of $\\cos(\\phi(t))=\\cos(2\\pi f_0 t)$ is \n", "\n", "$$f = \\frac{d\\phi (t)}{2\\pi dt} = f_0$$ \n", "\n", "\n", "For a linear chirp, the frequency changes linearly over time. The simultaneous frequency is therefore defined as \n", "\n", "
$$ f(t) = f_0 + kt. $$\n", "\n", "\n", "So, \n", "\n", "$$ x(t) = \sin(2\pi\int_0^t f(t')dt') = \sin(2\pi\int_0^t(f_0+kt')dt') = \sin(2\pi(f_0+\frac{k}{2}t)t) $$
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part I Task I: Generating the Chirp\n", "\n", "Generate a 15 seconds long chirp signal, sampled at 44,100[Hz] with a frequency range of 20[Hz] to 20000[Hz]. Set the magnitude of the chirp to 0.5. This will help prevent non-linearities when we play the sound later. \n", "\n", "\n", "* Set the sample-rate frequency `fs = 44100` Hz\n", "* Generate a time index from `t=0` to `t=15` with sampling rate of 44100 Hz\n", "* Generate a vector of frequency vs time: `f_of_t` ( $f(t)$ ) that changes linearly from 20Hz to 20Khz over 15 seconds\n", "* Generate a vector of phase vs time: `phi_of_t` ( $\\phi(t) = 2\\pi \\int_0^t f(t)dt$ ) by numerically integrating f_of_t. You will find the function `np.cumsum` useful.\n", "* Generate the chirp function `s_chirp` with amplitude of 0.5\n", " \n", " \n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fs = 44100\n", "# your code here\n", "# generate time index\n", "\n", "# generate f_of_t\n", "\n", "# generate phi_of_t\n", "\n", "# generate chirp signal\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Plot the first $\\frac{1}{2}$ a second of the chirp (`s_chirp`), you will notice that the carrier frequency increases and that the chirp has a constant envelope. To get a nice figure, make sure the aspect ratio of the figure is height/width = 0.2. Label the axis and figure appropriately. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Set the aspect ratio such that the image is wide\n", "width, height = figaspect(0.2)\n", "fig = figure(figsize=(width,height))\n", "\n", "#Your code below:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Plot the magnitude frequency response of the sequence from 0 to $\\pi$ using the function `signal.freqz`. Note, that the digital frequency range represents a physical frequency range of 0[hz] to 22050[Hz]. To get a nice figure, make sure the aspect ratio of the figure is height/width = 0.2. Label the axis and figure appropriately.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# generate frequency response of chirp\n", "\n", "# generate frequency index\n", "\n", "# plot\n", "width, height = plt.figaspect(0.2)\n", "fig = plt.figure(figsize=(width,height))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Explain why the chirp is an appropriate signal to measure the magnitude frequency response of a system. \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Your answer here:\n", "\n", "* * *" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part I Task II: Playing and Recording the Chirp\n", "Now, we will play the sound of the chirp on our computer speaker and simultaneously record using the microphone. \n", "\n", "* On Apple computers it is recommended that you turn off the ambient noise reduction by going to system-preferences, selecting sound, choose the input tab and make sure that the \"Use ambient noise reduction\" box is unchecked. In some windows system there's ambient noise reduction as well. Make sure it is also turned off. \n", "\n", "* Your laptop most likely has two speakers. It is best if we work only with one. Go to the operating system's sound settings and change the stereo settings such that the speaker that is closest to the microphone is active. 
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# generate frequency response of chirp\n", "\n", "# generate frequency index\n", "\n", "# plot\n", "width, height = plt.figaspect(0.2)\n", "fig = plt.figure(figsize=(width,height))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Explain why the chirp is an appropriate signal to measure the magnitude frequency response of a system. \n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Your answer here:\n", "\n", "* * *" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Part I, Task II: Playing and Recording the Chirp\n", "Now, we will play the chirp on the computer speaker and simultaneously record with the microphone. \n", "\n", "* On Apple computers it is recommended that you turn off ambient noise reduction by going to System Preferences, selecting Sound, choosing the Input tab, and making sure that the \"Use ambient noise reduction\" box is unchecked. Some Windows systems have ambient noise reduction as well; make sure it is also turned off. \n", "\n", "* Your laptop most likely has two speakers. It is best if we work with only one. Go to the operating system's sound settings and change the stereo settings such that only the speaker that is closest to the microphone is active. Your results will be much better that way. \n", "\n", "* Make sure your output volume is at 70-80% and that the laptop's microphone is on, again to avoid non-linear distortions. \n", "\n", "* We will record a couple of seconds longer than the chirp, just to make sure we capture the entire sequence. \n", "\n", "The code below defines some functions to use with pyaudio -- a multi-platform audio Python interface. " ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def play_audio( Q, p, fs , dev=None):\n", " # play_audio plays audio with sampling rate = fs\n", " # Q - A queue object from which to play\n", " # p - pyAudio object\n", " # fs - sampling rate\n", " # dev - device number\n", " \n", " # Example:\n", " # fs = 44100\n", " # p = pyaudio.PyAudio() #instantiate PyAudio\n", " # Q = Queue.Queue()\n", " # Q.put(data)\n", " # Q.put(\"EOT\") # when function gets EOT it will quit\n", " # play_audio( Q, p, fs,1 ) # play audio\n", " # p.terminate() # terminate pyAudio\n", " \n", " # open output stream\n", " ostream = p.open(format=pyaudio.paFloat32, channels=1, rate=int(fs),output=True,output_device_index=dev)\n", " # play audio\n", " while (1):\n", " data = Q.get()\n", " if isinstance(data, str) and data == \"EOT\":\n", " break\n", " try:\n", " ostream.write( data.astype(np.float32).tostring() )\n", " except:\n", " break\n", " \n", "def record_audio( queue, p, fs ,dev=None,chunk=1024,lock=None):\n", " # record_audio records audio with sampling rate = fs\n", " # queue - output data queue\n", " # p - pyAudio object\n", " # fs - sampling rate\n", " # dev - device number\n", " # chunk - chunks of samples at a time default 1024\n", " #\n", " # Example:\n", " # fs = 44100\n", " # Q = Queue.Queue()\n", " # p = pyaudio.PyAudio() #instantiate PyAudio\n", " # record_audio( Q, p, fs, 1) #\n", " # p.terminate() # terminate pyAudio\n", " \n", " istream = p.open(format=pyaudio.paFloat32, channels=1, rate=int(fs),input=True,input_device_index=dev,frames_per_buffer=chunk)\n", "\n", " # read audio in chunks and push them onto the queue\n", " while (1):\n", " try: # when the pyaudio object is destroyed, stop\n", " if lock is not None:\n", " with lock:\n", " data_str = istream.read(chunk) # read a chunk of data\n", " else:\n", " data_str = istream.read(chunk)\n", " except:\n", " break\n", " data_flt = np.fromstring( data_str, 'float32' ) # convert string to float\n", " queue.put( data_flt ) # push onto the queue\n", "\n",
"def xciever(sig, fs):\n", " # function takes a signal and a sampling frequency\n", " # it then plays and records at the same time\n", " # and returns the recorded sound\n", "\n", " rcv = []\n", "\n", " # create input and output FIFO queues\n", " Qin = Queue.Queue()\n", " Qout = Queue.Queue()\n", "\n", " # lock for controlling access to shared resources\n", " lock = Lock()\n", " \n", " # create a pyaudio object\n", " p = pyaudio.PyAudio()\n", "\n", " # initialize the recording and playing threads\n", " t_rec = threading.Thread(target = record_audio, args = (Qin, p, fs ), kwargs={'lock': lock})\n", " t_play_audio = threading.Thread(target = play_audio, args = (Qout, p, fs ))\n", "\n", " # start the recording and playing threads\n", " t_rec.start()\n", " t_play_audio.start()\n", "\n", " Qout.put( sig )\n", " Qout.put( \"EOT\" )\n", "\n", " # pause for RECORD_SECS seconds\n", " RECORD_SECS = len(sig)/fs + 2.0\n", " time.sleep( RECORD_SECS )\n", "\n", " # terminate pyAudio\n", " with lock:\n", " p.terminate()\n", " \n", " # collect the recorded chunks into the output\n", " while not Qin.empty():\n", " data = Qin.get()\n", " rcv = np.append( rcv, data )\n", "\n", " return rcv" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "__Playing and recording audio:__\n", "\n", "* Run the following code. It is an example of how to play and record sound at the same time; it uses threading for the play and record threads.\n", "\n", "The resulting received sequence will be stored in the variable `rcv_chirp`." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "## Play and record chirp at the same time\n", "\n", "fs = 44100 # sampling rate = 44100 Hz\n", "\n", "rcv_chirp = xciever( s_chirp, fs)\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "* Plot the frequency response of the received sequence. \n", "* Also, plot the absolute value of the received signal. Plotting the absolute value (sort of) displays the envelope of the chirp. \n", "\n", "Label the figures and use an aspect ratio of Height/Width = 0.2\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "## Plot chirp response\n", "\n", "# Your code below:\n", "\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "* Comment on the results you got. In addition, what is the implicit assumption we are making in order to claim that the result is a frequency response? \n", "(HINT: consider the case when the chirp was very short)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Answers here:\n", "\n", "* * *" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Part I, Task III: Envelope Detection with the Hilbert Transform \n", "The absolute value of the result \"sort of\" displays the envelope, but it is still modulated by the (now rectified) frequency-sweep carrier. If we write down the response, it can be expressed approximately as \n", "\n", "$$y[n] = |H[n]| \sin(2\pi (f_0 +k[n*T])nT + \angle H[n])$$\n", "\n", "where $|H[n]|$ is the frequency response for the instantaneous frequency at the nth sample and $\angle H[n]$ is its phase response. \n", "\n", "This is only an approximation, because it implicitly assumes that we do not look at transient effects, only the steady-state effect at each frequency. The approximation is good because our chirp sweeps very slowly compared to the propagation of sound in the room. \n", "\n", "One way to get the envelope $|H[n]|$ is to convert the signal to its analytic form. The analytic signal $x_a(t)$ of a signal $x(t)$ is:\n", "\n", "$$x_a = F^{-1}(F(x)\cdot 2U) = x + j y$$\n", "\n", "where $F$ is the Fourier transform, $U$ the unit step function,\n", "and $y$ the Hilbert transform of $x$. In other words, the negative half of the frequency spectrum is zeroed\n", "out, turning the real-valued signal into a complex signal. This is similar to the question in HW2!\n", "\n", "The analytic signal of the received chirp will then be: \n", "\n", "$$ y_a[n] = |H[n]|e^{j(2\pi (f_0 +k[n*T])nT + \angle H[n])} $$\n", "\n", "The envelope can be detected by taking the magnitude. \n" ] },
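{ "cell_type": "markdown", "metadata": {}, "source": [ "For example, a minimal sketch of envelope detection this way (assuming `rcv_chirp` holds the recording from above):\n", "\n", "    rcv_a = signal.hilbert(rcv_chirp)   # analytic signal (complex-valued)\n", "    env = abs(rcv_a)                    # envelope = magnitude of the analytic signal" ] },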
{ "cell_type": "markdown", "metadata": {}, "source": [ "* Compute the analytic signal by using the function `signal.hilbert` and plot its absolute value. Note that the discrete Hilbert transform is not perfect; this will show up as ripple in the envelope.\n", "\n", "* Label the figures and use an aspect ratio of Height/Width = 0.2" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "## Your lovely code here:\n", "\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Part I, Task IV: Auto-correlation Properties of the Chirp\n", "\n", "In part II of the lab, we will be sending and receiving chirp pulses and estimating the delays between the transmitted pulses and their echoes. This is done by cross-correlating / matched filtering the received signal with the known chirp pulse to detect the echoes. In this task, we will investigate the correlation properties of the chirp.\n", "\n", "A cross-correlation is defined as:\n", "\n", "$$ R_{xy}[n] = \sum_{m=-\infty}^\infty x[m]y^*[m-n] = (x[m]*y^*[-m])[n]$$\n", "\n", "which is like a convolution, without flipping one of the signals. As shown above, it can therefore be implemented using a convolution. In general, the more correlated the two signals are at position $n$, the higher the value will be. That's why it is useful in a sonar system.\n", "\n", "Because we will be doing cross-correlations between a chirp pulse and its echoes, it is useful to look at the auto-correlation, which is basically a cross-correlation of the signal with itself. The discrete autocorrelation of a signal is defined as: \n", "\n", "$$ R_{xx}[n] = \sum_{m=-\infty}^\infty x[m]x^*[m-n] = (x[m]*x^*[-m])[n]$$ \n", "\n", "The chirp has the very nice property that its auto-correlation is very narrow. Since the spread of the resulting correlation determines how precisely you can localize an echo, the width of the auto-correlation is important. This property is called pulse compression and is a major consideration in radar design. Random noise and some other pseudo-random sequences also possess this property. " ] },
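{ "cell_type": "markdown", "metadata": {}, "source": [ "As a sketch, for a placeholder (possibly complex) signal `x`, the autocorrelation can be computed by flipping and conjugating one copy, which turns the convolution back into a correlation:\n", "\n", "    Rxx = signal.fftconvolve(x, conj(x[::-1]), mode='full')   # (x[m] * x^*[-m])[n]" ] },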
{ "cell_type": "markdown", "metadata": {}, "source": [ "* Generate a 512-sample chirp pulse with a frequency sweep from 17kHz to 18kHz and a sampling rate of fs=44100. \n", "* Validate its frequency response by plotting it." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fs = 44100\n", "t = r_[0.0:512]/fs\n", "\n", "f0 = 17000.0\n", "f1 = 18000.0\n", "\n", "# your code here\n", "# generate chirp signal\n", "\n", "\n", "# generate frequency response of chirp\n", "\n", "\n", "# plot" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "* Compute the autocorrelation of the chirp using discrete convolution, either with `signal.convolve` or `signal.fftconvolve`. Remember that you have to flip one copy of the signal, since convolution flips it back -- flipping twice turns the convolution into a correlation. You can flip a signal `x` by doing `x[::-1]`. Use mode='full' for the convolution.\n", "* Plot the autocorrelation. Your plot should be spiky because we did not do envelope detection yet. Use milliseconds as the x-axis" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "## Your fantastic code here:\n", "\n", "\n", "\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "In a similar way as before, it is possible to recover the envelope of the autocorrelation by performing a cross-correlation with the analytic signal and then taking the absolute value. In this case, we know exactly what the analytic function is!" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "* Generate `s_chirp_a`, the analytic function of the chirp, by computing: `s_chirp_a = exp(1j* phi_of_t )`. Perform a cross-correlation between `s_chirp_a` and `s_chirp` and show the envelope. This is also called a matched filter. \n", "* Comment on the size of the main lobe (full-width at half max) of the matched filter with respect to the length of the pulse. That ratio is also called pulse compression. \n", "\n", "Use milliseconds as the x-axis\n", "\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# your nice script to produce beautiful chirps, cross-correlations and figures here:\n", "\n", "\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Your answer here:\n", "\n", "___" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now we will look at why the chirp pulse is better for cross-correlation detection than a pure tone.\n", "- Repeat Task III for:\n", " 1. A constant frequency of 17500Hz, 512 samples in length. \n", " 2. A chirp with a frequency sweep from 17000Hz - 18000Hz, 512 samples in length. \n", " 3. A chirp with a frequency sweep from 15500Hz - 19500Hz, 512 samples in length\n", "- Compare the size of the main lobes (full width at half max). How much \"pulse compression\" are you getting by using chirps for detection compared to a single-frequency pulse?\n", "- What is the approximate bandwidth of the pure frequency pulse and what is the bandwidth of the chirp pulses? Comment on the tradeoff between bandwidth and pulse compression.\n", "\n", "To easily zoom in on the main lobe, you can use bokeh to plot the autocorrelation:\n", "\n", " fig = bk.figure( title = \"autocorrelation\", \n", " y_axis_label = \"Amplitude\", x_axis_label = \"t[ms]\", \n", " plot_height = 300, plot_width = 800)\n", " fig.line( t, abs(acorr)/max(abs(acorr)), legend = 'autocorrelation', color = 'blue')\n", " bk.show(fig)\n", " \n", "where `acorr` is the auto-correlation result and `t` is the time index in milliseconds." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# your solution here\n", "\n", "\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Your answers and interpretations here:\n", "\n", "\n", "* * *" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "- Now, repeat task III for \n", " 1. a chirp with a frequency sweep from 16500Hz - 18500Hz, 256 samples in length\n", "\n", "- Compare the size of the main lobe (full width at half max) to the previous case of 15500Hz - 19500Hz, 512 samples in length.\n", "\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# your solution here\n", "\n", "\n", "\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Dealing with sidelobes\n", "As you can see, the chirp provides good pulse compression of the main lobe. However, there are very strong sidelobes. This is because the chirp is implicitly multiplied by a rectangular window, which has abrupt edges. Instead, we will window the chirp with a smooth window function to taper off the sidelobes. \n", " " ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "* Repeat Task 1.4.3 above for a chirp with a sweep from 17kHz to 18kHz, and from 15kHz to 20kHz. This time, multiply the chirp (and its analytic function) by a Hanning window; a short sketch follows this cell. You will find the function `np.hanning` useful. What happened to the side-lobes? What happened to the main lobe? What's the tradeoff?" ] },
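{ "cell_type": "markdown", "metadata": {}, "source": [ "A sketch of the windowing step (assuming `s_chirp` and its analytic version `s_chirp_a` from before):\n", "\n", "    win = np.hanning(len(s_chirp))   # smooth taper, zero at both ends\n", "    s_chirp_w = s_chirp * win\n", "    s_chirp_a_w = s_chirp_a * win" ] },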
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# your solution here\n", "\n", "\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### You are now ready to proceed to the Sonar Lab" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Part 2: Sonar" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "In this part of the lab we will write a simple application that implements a sonar using the laptop's internal speaker and microphone. \n", "\n", "The basic idea is very simple and is the basis of sonar and ultrasound imaging -- objects reflect sound waves. If we send a pulse of sound, we will get reflection echoes of that pulse. Detecting the echoes and their time-of-flight will reveal their distance from the source, based on the speed of sound in air. \n", "\n", "The way we are going to implement the sonar is to generate a series of rapid pulses and use matched filtering to detect the source and the returning echoes. There are many parameters in this lab that can be tweaked to get different results. We encourage you to experiment. We very much enjoyed making the lab and played with it quite a bit! We hope you enjoy it too. \n", "\n", "Unfortunately, the quality of the sonar system is going to be highly dependent on the quality of your laptop and on the position of the speakers and microphone. It is recommended that you adjust the sound settings on your system such that only the speaker that is closer to the microphone is active. For example, MacBook Airs have the microphone on the side of the computer -- so you should adjust the audio settings to use the left speaker only. Also, it is recommended that the speaker volume be set to half of its maximum to avoid non-linear distortions. \n", "\n", "If you are getting poor results, please consult with us. \n", "\n", "This lab was inspired by an iPhone app called active-radar. \n", "\n", "\n", "\n", "\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Part II, Task I: Generating Chirp Pulses\n", "\n", "First, we will look at short chirp pulses with a pure frequency carrier. These are very common in radar and sonar systems. There are several tradeoffs we need to consider. To be able to detect short distances, and to get good localization in time, our pulses need to be very short. At the same time, shorter pulses carry less energy and have a wider bandwidth (why?). Both reduce the signal-to-noise ratio and our ability to detect targets. In the end, we will have to compromise to get satisfactory results in terms of detectability and resolution. \n", "\n", "We are going to design a pulsed sonar system in which we repeatedly send pulses and then listen to the returning echoes. The arrival time of each echo corresponds to twice the distance to the target. " ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "* Write a function that generates a chirp pulse:\n", "`pulse = genChirpPulse(Npulse, f0, f1, fs)` \n", "\n", "The function will accept: `Npulse` = number of samples, `f0,f1` = starting and ending frequency and `fs` = sampling frequency. The function will return the analytic function of the chirp $\exp (j 2\pi \int_0^t f(\tau)d\tau )$ with amplitude 1." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def genChirpPulse(Npulse, f0, f1, fs):\n", " # Function generates an analytic function of a chirp pulse\n", " # Inputs:\n", " # Npulse - pulse length in samples\n", " # f0 - starting frequency of chirp\n", " # f1 - end frequency of chirp\n", " # fs - sampling frequency\n", " # Output:\n", " # pulse - chirp pulse\n", " " ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "* To validate that the function works, display the pulse generated with Npulse = 200, f0 = 1000, f1 = 8000, fs = 44100. Remember that the pulse is complex, so plot the real and imaginary parts separately." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "pulse = genChirpPulse(200, 1000, 8000, 44100)\n", "\n", "# your code here:\n", "# plot\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "__Generate Pulse Trains__\n", "\n", "Next, we will use the pulse generated by `genChirpPulse` to generate a pulse train.\n", "\n", "* Write a new function `ptrain = genPulseTrain(pulse, Nrep, Nseg)`.\n", "The function accepts `pulse` = a pulse generated by `genChirpPulse`, `Nrep` = number of pulse repetitions and `Nseg` = length of each pulse train segment (which is >= the length of `pulse`).\n", "\n", "The function returns `ptrain`, a vector of length `Nrep` x `Nseg` (Hint: use `np.tile`; a sketch follows the next code cell)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def genPulseTrain(pulse, Nrep, Nseg):\n", " # Function generates a pulse train from a pulse. \n", " # Inputs:\n", " # pulse = the pulse generated by genChirpPulse\n", " # Nrep = number of pulse repetitions\n", " # Nseg = length of pulse segment >= length(pulse)\n", " # Output:\n", " # ptrain = vector of length Nrep x Nseg\n", " " ] },
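{ "cell_type": "markdown", "metadata": {}, "source": [ "A sketch of one way to do this -- zero-pad the pulse to the segment length, then repeat it with `np.tile` (`pulse`, `Nrep` and `Nseg` as specified above):\n", "\n", "    seg = np.append(pulse, np.zeros(Nseg - len(pulse)))   # pad pulse to Nseg samples\n", "    ptrain = np.tile(seg, Nrep)                           # repeat Nrep times" ] },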
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Part II, Task II: Echoes with a Chirp Pulse Train\n", "\n", "We now have the components to generate pulses, build a pulse train, and play and record it. Let's see what we get!\n", "We will start with very short pulses with a single carrier frequency. Rectangular pulses are difficult for the speaker\n", "to produce, as they exhibit discontinuities at the beginning and the end of the pulse. Therefore we will multiply the pulses\n", "with a smooth window. Here, we will use a Hanning window." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "* Generate a pulse with f0=f1=15kHz, Npulse=60, and fs=44100. Window the pulse with a Hanning window. This will result in a pulse just under 1.5ms long. You should be able to hear this tone, though many adults above 40 cannot.\n", "* Plot the real and imaginary parts of the pulse" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fs = 44100\n", "f0 = 15000\n", "f1 = 15000\n", "Npulse = 60\n", "\n", "# your code here:\n", "\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "* Use the real part of the pulse to generate a pulse train of Nrep=15 pulses, Nseg=4096 samples" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# your code here:\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "* Play and record the pulse train. Scale the amplitude of the pulses to 1/2. Make sure your volume is set to a maximum of 70% and look at the plot of the input pulse train and the received pulse train. A bokeh plot cell is provided so you can zoom in to look at echoes in between pulses." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "rcv = xciever(ptrain / 2, fs) " ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig = bk.figure( title = \"Transceived pulse train\", \n", " y_axis_label = \"Amplitude\", x_axis_label = \"Index\", \n", " plot_height = 300, plot_width = 800)\n", "fig.line( r_[:len(ptrain):10], ptrain[::10], legend = 'transmitted', color = 'orange')\n", "fig.line( r_[:len(rcv):10], rcv[::10], legend = 'received', color = 'blue')\n", "bk.show(fig)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "* Extract a single pulse from the received pulse train. You can find the pulse index either automatically, using a threshold test followed by `np.nonzero`, or manually, using the `bokeh` plot cell above. Extract at least 2 Npulse samples before the pulse and 20 Npulse samples after, using `rcv_pulse = rcv[idx-2*Npulse:idx+Npulse*20]` \n", "\n", "* Plot the received pulse. Can you see any echoes?\n", "\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# your code here:\n", "# find index of start pulse\n", "\n", "# plot\n", "\n", "\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Matched Filtering\n", "\n", "The strong pulses we see are a result of direct feed-through from the transmitter to the receiver; they did not scatter off targets. The echoes we see are reflections from surfaces in the room. The problem in our setup is that we don't know the exact delay between the transmit and receive hardware (in PyAudio). Instead, we will assume that the travel time for sound between the speaker and the microphone is negligible and much smaller than the travel time to scattering targets. We can then detect when the pulses start based on the direct feedthrough signal. \n", "\n", "We will detect both the feedthrough and the echoes using matched filtering. \n", "\n", "* Write a function `Xrcv = crossCorr( rcv, pulse_a )` to calculate the cross-correlation (matched filter) of the received signal with the analytic function of the pulse. You can use `signal.fftconvolve`; a sketch follows.\n", "* Take the absolute value of `Xrcv` to recover its envelope. Call the result `Xrcv_a`."
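, "\n", "A sketch of the matched-filter step (flipping and conjugating the analytic pulse turns the convolution into a correlation):\n", "\n", "    Xrcv = signal.fftconvolve( rcv, conj(pulse_a[::-1]), mode='full' )\n", "    Xrcv_a = abs( Xrcv )"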
] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def crossCorr( rcv, pulse_a ):\n", " # Function generates the cross-correlation between rcv and pulse_a\n", " # Inputs:\n", " # rcv - received signal\n", " # pulse_a - analytic pulse\n", " # Output:\n", " # Xrcv - cross-correlation between rcv and pulse_a" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "Xrcv_a = abs( crossCorr(rcv, pulse_a) )\n", "\n", "fig = bk.figure( title = \"Matched filtered signal\", \n", " y_axis_label = \"Amplitude\", x_axis_label = \"Index\", \n", " plot_height = 300, plot_width = 800)\n", "fig.line( r_[:len(Xrcv_a):10], Xrcv_a[::10], legend = 'received', color = 'blue')\n", "bk.show(fig)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "* Again, extract a single pulse from the received pulse train using the same index. Extract at least 2 Npulse samples before the pulse and 20 Npulse samples after. Plot the received pulse. Can you see any echoes?" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# find index of start pulse\n", "\n", "# plot\n", "\n", "\n", "\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Sonar System\n", "\n", "In order to automate the system and visualize the results, we need a few more components. To extract the pulses, we need to know the position of the first feedthrough pulse. \n", "\n", "\n", "* Write a function `idx = findDelay(Xrcv_a, Nseg)` that takes the result of the matched filtering and finds the index of the first feedthrough pulse. Test it on the actual signal to check whether the function is correct. There are multiple ways of doing it; you do not have to use `Nseg`." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def findDelay(Xrcv, Nseg):\n", " # finds the first pulse\n", " # Inputs: \n", " # Xrcv - the received matched filtered signal\n", " # Nseg - length of a segment\n", " # Output:\n", " # idx - index of the beginning of the first pulse\n", " " ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "idx = findDelay(Xrcv_a,Nseg)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We can now correct for delays and detect echoes. The only thing left is to convert the time between echoes into actual distance.\n", "\n", "If we send a pulse of sound, we will get reflection echoes of that pulse. Detecting the echoes and their time-of-flight will reveal their distance from the source, based on the speed of sound in air. The speed of sound in air is given by the following equation:\n", "\n", "$$ v_s = 331.5\sqrt{1+T/273.15}~\mathrm{m/s}~,$$ \n", "\n", "where $T$ is the temperature in degrees Celsius. " ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "* Create a function `t = dist2time( dist, temperature )` that takes in the distance to the target in cm and converts it into the time in seconds between the transmitted pulse and its echo. Remember that the arrival time includes the travel to the target and back, and therefore the time should be doubled. \n", "For example, for temperature = 20 Celsius and dist = 400 cm, the time is about 0.023 secs."
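, "\n", "As a quick check of that number: $v_s = 331.5\sqrt{1+20/273.15} \approx 343.4$ m/s, and the round trip is $2 \times 4.0$ m, so $t = 8.0/343.4 \approx 0.0233$ s."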
] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def dist2time( dist, temperature):\n", " # Converts distance in cm to time in seconds\n", " # Inputs:\n", " # dist - distance to object in cm\n", " # temperature - in Celsius\n", " # Output:\n", " # t - time in seconds between transmitted pulse and echo\n", " \n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "You now have a working sonar! It would be much easier to play with different parameters if we automate things, though, so we created some wrappers for real-time plotting in a separate notebook (lab1-RealTime-Sonar). \n", "\n", "* Copy-and-paste the 5 functions you created -- genPulseTrain(), genChirpPulse(), crossCorr(), findDelay(), and dist2time() -- into the specified code cell in the real-time Sonar lab.\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### You are now ready to proceed to the Real-Time Sonar Lab" ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.11" } }, "nbformat": 4, "nbformat_minor": 0 }