{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# EE 123 Lab1 - Time Domain Sonar Lab\n", "\n", "### Written by Miki Lustig and Frank Ong 2016\n", "#### Edited, debugged and ported to Rapberry Pi Nick Antipa, Li-Hao Yeh and Miki Lustig 2018" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this lab, we will interact with physical time-domain signals. The first part will involve generating and recording sounds on the Raspberry-Pi. We will use the chirp signal to characterize the response of the speaker-microphone system and look at detecting signals using cross-correlation.\n", "In the second part, we will build on part one and use the speaker-microphone system to develop a simple sonar.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### When using the raspberry pi -- \n", "\n", "The RaspberryPi does not have a microphone input. In order to use both audio input and output, we will use a USB soundcard. This USB soundcard is in fact and Analog-to-Digital and Digital-to-Analog device. \n", "\n", "* please connect the USB audio to the pi\n", "* connect the microphone to the mic input of the USB audio\n", "* connect the speaker to the speaker output of th USB audio\n", "* make sure the speaker is powered -- e.g. connected to a USB either on your computer, or the pi\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Import functions and libraries\n", "import numpy as np, matplotlib.pyplot as plt\n", "import threading,time, queue, pyaudio \n", "from matplotlib.pyplot import *\n", "import matplotlib.cm as cm\n", "from scipy import signal\n", "from numpy import *\n", "from threading import Lock\n", "\n", "\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 1: Chirping!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this assignment you will use the the raspberry-pi equipped with a USB sound card, a USB powered speaker and a microphone. When playing a sound and recording, the signal goes through several systems. In particular it goes through the response of the USB soundcard output, the speaker, the room we are in and the response of the microphone and receive part of the USB soundcard.\n", "\n", "A chirp is a a signal in which the frequency increases linearly with time. In this assignment we will generate a chirp signal and use it to measure the amplitude of the frequency response of our speaker-room-microphone system. This lab will work best in a quiet environment -- We recommend that you execute the lab at home or in a quiet place before submitting it. \n", "\n", "A simultaneous frequency is defined as the derivative of the phase of a signal, $f = \\frac{1}{2\\pi} \\frac{d\\phi (t)}{ dt} $. For example, the simultaneous frequency of $\\cos(\\phi(t))=\\cos(2\\pi f_0 t)$ is \n", "\n", "$$f = \\frac{d\\phi (t)}{2\\pi dt} = f_0$$ \n", "\n", "\n", "For a linear chirp, the frequency changes linearly over time. The simultaneous frequency is therefore defined as \n", "\n", "
$$ f(t) = f_0 + kt. $$
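\n", "\n", "where $k = (f_1 - f_0)/T$ is the chirp rate -- the speed, in Hz per second, at which the frequency sweeps from the start frequency $f_0$ to the end frequency $f_1$ over a pulse of duration $T$.\n", "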
\n", "\n", "\n", "So, \n", "\n", "
$$ x(t) = \\sin(2\\pi\\int_0^t f(t')dt') = \\sin(2\\pi\\int_o^t(f_0+kt')dt') = \\sin(2\\pi(f_0+\\frac{k}{2}t)t) $$
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part I Task I: Generating the Chirp\n", "\n", "Generate a 10 seconds long chirp signal, sampled at 48,000[Hz] with a frequency range of 20[Hz] to 20,000[Hz]. Set the magnitude of the chirp to 0.5. This will help prevent non-linearities when we play the sound later. \n", "\n", "* Given $T$=total time length, $f_0$=start frequency, $f_1$ = end frequency, derive a formula $f(t)$ for the frequency sweep.\n", "* Find the formula for the phase by integrating $\\phi(t) = \\int_0^T f(t)dt$ to get the phase function.\n", "\n", "Now, \n", "* Set the sample-rate frequency `fs = 48000` Hz\n", "* Generate a time index from `t=0` to `t=10` with sampling rate of 48000 Hz\n", "* Generate a vector of frequency vs time: `phi_of_t` ( $\\phi(t)$ )\n", "* Generate the chirp function `s_chirp` with amplitude of 0.5 by plugging the phase into a sinusoid. \n", " \n", " \n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fs = 48000\n", "f0 = 20\n", "f1 = 20000\n", "T = 10\n", "# your code here\n", "\n", "# generate time index and phase\n", "\n", "\n", "\n", "# generate chirp signal\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Plot the first $\\frac{1}{2}$ second of the chirp (`s_chirp`), you will notice that the carrier frequency increases and that the chirp has a constant envelope. To get a nice figure, make sure the aspect ratio of the figure is height/width = 0.2 . Label the axis and figure appropriately. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Set the aspect ratio such that the image is wide\n", "width, height = figaspect(0.2)\n", "fig = figure(figsize=(width,height))\n", "\n", "#Your code below:\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Plot the magnitude frequency response of the sequence from 0 to $\\pi$ using the function `signal.freqz`. Note, that the digital frequency range represents a physical frequency range of 0[hz] to 24000[Hz]. To get a nice figure, make sure the aspect ratio of the figure is height/width = 0.2. Label the axis and figure appropriately. \n", "\n", "The `signal.freqz` function on the Pi is very slow-- be patient. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# # generate frequency response of chirp\n", "\n", "\n", "\n", "# generate frequency index\n", "\n", "\n", "\n", "\n", "\n", "width, height = plt.figaspect(0.2)\n", "fig = plt.figure(figsize=(width,height))\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Explain why the chirp is an appropriate signal to measure the magnitude frequency response of a system. \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Your answer here:\n", "\n", "___" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part I Task II: Playing and Recording the Chirp\n", "Now, we will play the sound of the chirp on our computer speaker and simultaneously record using the microphone. \n", "\n", "#### Instructions for the Raspberry pi (EE123 2018):\n", "\n", "* To setup the output and input volume, in a terminal, run `alsamixer`. Make sure the card showing is: USB PnP Sound Device. \n", "\n", "You can use arrows to move between Speaker, Mic and auto gain, and also change the gain. Use the `M` key to toggle between on (00) and Mute (MM). 
Use the keys Q and Z to change the gain of the left speaker, and E and C to change the gain of the right speaker. \n", "\n", "For the sonar to work well we will need to use either the right or left speaker, but not both. Make sure the gain for one of them is 0.\n", "Also, make sure that the Auto-Gain-Control is disabled. \n", "\n", "\n", "Get a setting similar to the one below: \n", "\n", "```Card: USB PnP Sound Device F1: Help │\n", "│ Chip: USB Mixer F2: System information │\n", "│ View: F3:[Playback] F4: Capture F5: All F6: Select sound card │\n", "│ Item: Mic [dB gain: 18.37] Esc: Exit │\n", "│ │\n", "│ │\n", "│ │\n", "│ │\n", "│ ┌──┐ ┌──┐ │\n", "│ │ │ │ │ │\n", "│ │ │ │ │ │\n", "│ │ │ │ │ │\n", "│ │ │ │ │ │\n", "│ │ ▒│ │▒▒│ │\n", "│ │ ▒│ │▒▒│ │\n", "│ │ ▒│ │▒▒│ │\n", "│ │ ▒│ │▒▒│ │\n", "│ │ ▒│ │▒▒│ │\n", "│ │ ▒│ │▒▒│ │\n", "│ │ ▒│ │▒▒│ │\n", "│ │ ▒│ │▒▒│ │\n", "│ │ ▒│ │▒▒│ │\n", "│ │ ▒│ │▒▒│ │\n", "│ │ ▒│ │▒▒│ │\n", "│ │ ▒│ │▒▒│ │\n", "│ │ ▒│ │▒▒│ │\n", "│ ├──┤ ├──┤ ┌──┐ │\n", "│ │OO│ │OO│ │MM│ │\n", "│ └──┘ └──┘ └──┘ │\n", "│ 0<>74 77 │\n", "│ Speaker < Mic >Auto Gain Control │\n", "│ │\n", "│ │\n", "│ │\n", "└───────────────────────────────────────────────────────────────────────────────────────────────────────────┘```\n", "\n", "\n", "\n", "\n", "Press Fn-F4 to switch to the Capture settings. These should be changed to:\n", "\n", "```┌────────────────────────────── AlsaMixer v1.1.3 ──────────────────────────────┐\n", "│ Card: USB PnP Sound Device F1: Help │\n", "│ Chip: USB Mixer F2: System information │\n", "│ View: F3: Playback F4:[Capture] F5: All F6: Select sound card │\n", "│ Item: Mic [dB gain: 16.36] Esc: Exit │\n", "│ │\n", "│ ┌──┐ │\n", "│ │ │ │\n", "│ │ │ │\n", "│ │ │ │\n", "│ │▒▒│ │\n", "│ │▒▒│ │\n", "│ │▒▒│ │\n", "│ │▒▒│ │\n", "│ │▒▒│ │\n", "│ │▒▒│ │\n", "│ │▒▒│ │\n", "│ │▒▒│ │\n", "│ L└──┘R │\n", "│ CAPTURE │\n", "│ 69 │\n", "│ < Mic > │\n", "│ │\n", "└──────────────────────────────────────────────────────────────────────────────┘```\n", "\n", "\n", "#### Instructions for laptops (for anyone who wants to try this on a laptop outside EE123)\n", "* On Apple computers it is recommended that you turn off the ambient noise reduction by going to System Preferences, selecting Sound, choosing the Input tab, and making sure that the \"Use ambient noise reduction\" box is unchecked. Some Windows systems have ambient noise reduction as well. Make sure it is also turned off. \n", "\n", "* Your laptop most likely has two speakers. It is best if we work only with one. Go to the operating system's sound settings and change the stereo settings such that the speaker that is closest to the microphone is active. Your result will be much better that way. \n", "\n", "* Make sure your output volume is at 70-80% and that the laptop's microphone is on, again to avoid non-linear distortions. \n", "\n", "* We will record 12 seconds just to make sure we capture the entire sequence. \n", "\n", "The code below defines some functions to use with pyaudio -- a multi-platform audio Python interface. 
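\n", "\n", "In outline: `play_audio` is a consumer that pops buffers off an output queue and writes them to the soundcard until it sees an \"EOT\" sentinel, `record_audio` is a producer that pushes recorded chunks onto an input queue, and `xciever` runs both in separate threads so playback and recording overlap in time.\n", "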
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def play_audio( Q, p, fs , dev=None):\n", " # play_audio plays audio with sampling rate = fs\n", " # Q - A queue object from which to play\n", " # p - pyAudio object\n", " # fs - sampling rate\n", " # dev - device number\n", " \n", " # Example:\n", " # fs = 44100\n", " # p = pyaudio.PyAudio() #instantiate PyAudio\n", " # Q = Queue.queue()\n", " # Q.put(data)\n", " # Q.put(\"EOT\") # when function gets EOT it will quit\n", " # play_audio( Q, p, fs,1 ) # play audio\n", " # p.terminate() # terminate pyAudio\n", " \n", " # open output stream\n", " ostream = p.open(format=pyaudio.paFloat32, channels=1, rate=int(fs),output=True,output_device_index=dev)\n", " # play audio\n", " while (1):\n", " data = Q.get()\n", " if data is \"EOT\" :\n", " break\n", " try:\n", " ostream.write( data.astype(np.float32).tostring() )\n", " except:\n", " break\n", " \n", "def record_audio( queue, p, fs ,dev=None,chunk=2048,lock=None):\n", " # record_audio records audio with sampling rate = fs\n", " # queue - output data queue\n", " # p - pyAudio object\n", " # fs - sampling rate\n", " # dev - device number\n", " # chunk - chunks of samples at a time default 1024\n", " #\n", " # Example:\n", " # fs = 44100\n", " # Q = Queue.queue()\n", " # p = pyaudio.PyAudio() #instantiate PyAudio\n", " # record_audio( Q, p, fs, 1) #\n", " # p.terminate() # terminate pyAudio\n", " \n", " istream = p.open(format=pyaudio.paFloat32, channels=1, rate=int(fs),input=True,input_device_index=dev,frames_per_buffer=chunk)\n", "\n", " # record audio in chunks and append to frames\n", " frames = [];\n", " while (1):\n", " try: # when the pyaudio object is destroyed, stops\n", " with lock if lock is not None else 1:\n", " data_str = istream.read(chunk, exception_on_overflow=False) # read a chunk of data\n", " except:\n", " break\n", " data_flt = np.fromstring( data_str, 'float32' ) # convert string to float\n", " queue.put( data_flt ) # append to list\n", "\n", "def xciever(sig, fs):\n", " # function takes a signal and a sampling frequency\n", " # it then plays and records at the same time. The function returns\n", " # the recorded sound.\n", "\n", " rcv = [];\n", "\n", " # create an input output FIFO queues\n", " Qin = queue.Queue()\n", " Qout = queue.Queue()\n", "\n", " #lock for controlling access to shared resources\n", " lock = Lock()\n", " \n", " # create a pyaudio object\n", " p = pyaudio.PyAudio()\n", "\n", " # initialize a recording thread.\n", " t_rec = threading.Thread(target = record_audio, args = (Qin, p, fs ), kwargs={'lock': lock})\n", " t_play_audio = threading.Thread(target = play_audio, args = (Qout, p, fs ))\n", "\n", " # start the recording and playing threads\n", " t_rec.start()\n", " t_play_audio.start()\n", "\n", " Qout.put( sig );\n", " Qout.put( \"EOT\" );\n", "\n", " # pause for RECORD_SECS seconds\n", " RECORD_SECS = len(sig)/fs + 2.0\n", " time.sleep( RECORD_SECS )\n", " # terminate pyAudio\n", " with lock:\n", " p.terminate()\n", " \n", " # append to output\n", " while ( not Qin.empty()) :\n", " data = Qin.get()\n", " rcv = np.append( rcv, data )\n", "\n", " return rcv" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Playing and recording audio:__\n", "\n", "* Run the following code. It is an example of how to play and record sound at the same time and uses threading for the play and record threads.\n", "\n", "The resulting received sequence will be stored in the variable `rcv_chirp`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "## Play and record chirp at the same time\n", "\n", "fs = 48000 # sampling rate = 48000 Hz\n", "\n", "rcv_chirp = xciever( s_chirp, fs) # Qin = queue.Queue() Qout = queue.Queue() / Note: queue instead of Queue for python3\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Plot the frequency response of the received sequence. \n", "* Also, plot the absolute value of the received signal. Plotting the absolute value (sort of) displays the envelope of the chirp. \n", "\n", "Label the figures and use an aspect ratio of Height/Width = 0.2\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "## Plot chirp response\n", "\n", "# generate frequency response of chirp\n", "\n", "# generate frequency index\n", "\n", "# generate a time index\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "# free code for your plot:\n", "width, height = figaspect(0.2)\n", "fig1 = figure(figsize=(width,height))\n", "plt.plot(f, abs( RCV_chirp ) )\n", "plt.title('Frequency response of the trancieved chirp (Hz)')\n", "plt.xlabel('f[Hz]')\n", "\n", "fig1 = figure(figsize=(width,height))\n", "plt.plot(t, abs(rcv_chirp))\n", "plt.title('Absolute value of trancieved chirp');\n", "plt.xlabel('time[s]')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Comment on the results you got. In addition, what is the implicit assumption we are making in order to claim that the result is a frequency response? \n", "(HINT: consider the case when the chirp was very short)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Answers here:\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part I, Task III: Envelope detection with Hilbert transform. \n", "The absolute value of the of the result \"sort of\" displays the envelope, however it is still modulated by the (now rectified) frequency sweep carrier. If we write down the response, it can be expressed approximately as \n", "\n", "$$y[n] = |H[n]| \\sin(2\\pi (f_0 +k[n*T])nT + \\angle H[n])$$\n", "\n", "where $|H[n]|$ is the frequency response for the instantaneous frequency at the nth sample and $\\angle H[n]$ is its phase response. \n", "\n", "The reason that it is only an approximation is that there is an inherent assumption that we do not look at transient effects, only steady state effect for each frequency. This is a good approximation because our chirp is very slow compared to the propagation of sound in the room. \n", "\n", "One way to get the envelope $|H[n]|$ is to convert it to its analytic signal. The analytic signal $x_a(t)$ of signal $x(t)$ is:\n", "\n", "$$x_a = F^{-1}(F(x)\\cdot 2U) = x + j y$$\n", "\n", "where $F$ is the Fourier transform, $U$ the unit step function,\n", "and $y$ \"is\" the Hilbert transform of $x$. In other words, the negative half of the frequency spectrum is zeroed\n", "out, turning the real-valued signal into a complex signal. This is similar to the question in HW2!\n", "\n", "The analytic signal of the received chirp will then be: \n", "\n", "$$ y_a[n] = |H[n]|e^{j2\\pi (f_0 +k[n*T])nT + \\angle H[n]} $$\n", "\n", "The envelope can be detected by taking the magnitude. 
\n", "(analytic function of y seems to have a one more phase shift $\\pi/2$ since it is a $\\sin$ function)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Compute the analytic signal by using the function `signal.hilbert` and plot its absolute value. Note that the discrete hilbert transform is not perfect, since it uses FIR filtering. This will show up as ripple in the envelope.\n", "\n", "* Label the figures and use an aspect ration of Height/Width = 0.2" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig1 = figure(figsize=(width,height))\n", "\n", "## Your lovely code here:\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part I, Task IV: Auto-correlation Properties of the Chirp:\n", "\n", "In part II of the lab, we will be sending and receiving chirp pulses to estimate delays between the tranceived pulses. This is done by cross correlating / matched filtering the received signal with the known chirp pulse to detect the echoes. In this task, we will investigate the correlation properties of the chirp.\n", "\n", "A cross correlation is defined as:\n", "\n", "$$ R_{xy}[n] = \\sum_{m=-\\infty}^\\infty x[m]y^*[m-n] = (x[m]*y^*[-m])[n]$$\n", "\n", ", where $y^*[-m]$ is the complex conjugat of $y[-m]$. This similar to a convolution, without flipping one of the signals. It can be implemented using a convolution as shown above. In general, the more correlated the two signals is at position $n$, the higher the value will be. That's why it is useful in a sonar system.\n", "\n", "#### Matched filter \n", "When we look for a very specific shape in a signal, we can comput a cross correlation between the signal and the shape we are interested in. In that case, the operation of the cross correlation is also called a matched filter -- i.e. correlating with a filter that is matched to the shape we look for. \n", "\n", "Because we will be doing cross-correlations between a chirp pulse and its echoes, it is useful to look at the auto-correlation, which is basically a cross correlation of the signal with itself. A discrete autocorrelation of a signal is defined as: \n", "\n", "$$ R_{xx}[n] = \\sum_{m=-\\infty}^\\infty x[m]x^*[m-n] = (x[m]*x^*[-m])[n]$$ \n", "\n", "The chirp has a very nice property that its auto-correlation is very narrow. Since the spread of the resulting correlation determines how fast you can detect, the width of the auto-correlation is important. This property is called pulse compression and is widely considered in radar design. Random noise and some other pseudo-random like sequences also possess this property. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Generate a 512 sample chirp pulse with a frequency sweep from 17KHz-18KHz and sampling rate fs=48000. \n", "* Validate its frequency response by plotting it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "## Your beautiful code here:\n", "\n", "fs = 48000\n", "T = 512/fs\n", "t = r_[0.0:512]/fs\n", "f0 = 17000.0\n", "f1 = 18000.0\n", "\n", "# generate chirp signal\n", "\n", "\n", "# generate frequency response of chirp\n", "\n", "# plot " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "* Compute the autocorrelation of the chirp \"using\" discrete convolution, either with `signal.convolve` or `signal.fftconvolve`. Remember that you have to flip the signal since convolution does that already. You can flip a signal `x` by doing `x[::-1]`. 
Use mode='full' for the convolution.\n", "* Plot the autocorrelation. Your plot should be spiky because we did not do envelope detection yet. Use milliseconds as the x-axis" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "## Your fantastic code here:\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In a similar way as we did before, it is possible to recover the envelope of the autocorrelation by performing a cross-correlation with the analytic signal and then taking the absolute value. In this case, we know exactly what the analytic signal is!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Generate `s_chirp_a`, the analytic function of the chirp, by computing: `s_chirp_a = exp(1j* phi_of_t )`. Perform cross correlation between `s_chirp_a` and `s_chirp` and show the envelope. As stated before, this could also be called a matched filter. \n", "* Measure the full-width at half max (FWHM) of the main lobe of the autocorrelation. \n", "* Comment on the FWHM of the main lobe of the matched-filter with respect to the length of the pulse. That ratio is also called pulse compression. For simplicity, normalize the plot such that the maximum is 1, but record the maximum value of the autocorrelation and display it in the title of the figure. \n", "\n", "Use the pragma ``%matplotlib notebook`` for making the figure interactive, and ``plt.grid('on')`` for displaying a grid. \n", "\n", "Use milliseconds as the x-axis\n", "\n" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "%matplotlib notebook\n", "%matplotlib notebook\n", "\n", "# your nice script to produce beautiful chirps, cross-correlations and figures here:\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Your answer here:\n", "\n", "_____" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will look at why the chirp pulse is better for cross-correlation detection than a pure tone.\n", "- Repeat Task IV for:\n", " 1. A constant frequency of 17000Hz, 512 samples in length. \n", " 2. A chirp with a frequency sweep from 16500Hz - 17500Hz (1KHz Bandwidth), 512 samples in length. \n", " 3. A chirp with a frequency sweep from 15000Hz - 19000Hz (4KHz Bandwidth), 512 samples in length\n", "- Compare the size of the main lobes (full width at half max). How much \"Pulse Compression\" are you getting by using chirps for detection compared to a single frequency pulse?\n", "- What is the approximate bandwidth of the pure frequency pulse and what is the bandwidth of the chirp pulses? Comment on the tradeoff between bandwidth and pulse compression.\n", "- What is the maximum autocorrelation for each pulse?\n", "\n", "\n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "# your solution here\n", "fs = 48000\n", "T = 512/fs\n", "t = r_[0.0:512]/fs\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Your answers and interpretations here:\n", "\n", "\n", "The main lobe of the constant-frequency pulse's autocorrelation is \~11.6ms wide; the main lobe for the 1KHz-wide chirp is \~1.24ms and for the 4KHz chirp \~0.3ms. 
Their pulse compression is 9.6 and 38.6 respectively.\n", "At the same time, the bandwidth of the constant frequency pulse is approximately 1/(its length) ≈ 94 Hz, and the bandwidths of the chirps are 1KHz and 4KHz -- these are inversely proportional to the main lobe width!\n", "So, you can't be compact in one domain without expanding in the other...." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- Now, repeat Task IV for \n", " 1. a chirp with a frequency sweep from 15000Hz - 19000Hz, 256 samples in length\n", "\n", "- Compare the size of the main lobe (full width at half max) to the previous case of 15000Hz - 19000Hz, 512 samples in length.\n", "- Compare the maximum autocorrelation as well. \n", "\n", "What's the effect of having more bandwidth? What's the effect of having longer/shorter pulses?\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# your solution here\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Your answer below:\n", "\n", "The pulse compression is proportional to the bandwidth. The peak correlation is proportional to the length of the pulse. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Dealing with sidelobes\n", "As you can see, the chirp provides good pulse compression of the main-lobe. However, there are very strong sidelobes. This is because the chirp is multiplied by a rect function, which has abrupt edges. Instead, we will window the chirp with one of the smooth window functions to taper off the sidelobes. \n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Repeat the above for a chirp with a sweep from 16.5KHz to 17.5KHz, and from 15KHz to 19KHz. This time, multiply the chirp (and its analytic function) by a hanning window. You will find the function `np.hanning` useful. \n", "\n", "* Plot the normalized autocorrelations (in the same figure). \n", "* Comment on the magnitude of the side-lobes. \n", "* Comment on the width of the main lobes. \n", "* What's the tradeoff?\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# your solution here\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Your answers here:\n", "\n", "The width of the main lobe is doubled and the maximum autocorrelation is reduced. But the sidelobes are significantly smaller!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### You are now ready to proceed to the Sonar Lab" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 2: Sonar" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this part of the lab we will write a simple application that implements a sonar using the laptop internal speaker and microphone. \n", "\n", "The basic idea is very simple and is the basis of sonar and ultrasound imaging -- objects reflect sound waves. If we send a pulse of sound, we will get reflection echoes of that pulse. Detecting the echoes and their time-of-flight will reveal their distance from the source, based on the speed of sound in air. \n", "\n", "The way we are going to implement the sonar is to generate a series of rapid pulses, and use matched filtering to detect the source and the returning echoes. There are many parameters in this lab that can be tweaked to get different results. We encourage you to experiment. We enjoyed making the lab very much and played quite a bit! We hope you enjoy it too. 
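\n", "\n", "To get a feel for the numbers: sound travels at roughly 343 m/s at room temperature, so an echo that arrives 10 ms after the pulse was sent corresponds to a target at about $343 \cdot 0.01/2 \approx 1.7$ m.\n", "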
\n", "\n", "#### Instructions for RaspberryPi:\n", "The microphone and speaker you have are somewhat directional. Make sure that the microphone and speaker point in the same direction. You will get the best quallity in a quiet room, without interference from other sources of noise -- especially from your fellow students playing chirp pulses at the same time as you are!\n", "\n", "#### Instructions for other laptops:\n", "\n", "Unfortunately, the quallity of the sonar system is going to be highly dependent on your laptop quallity, and the position of the speakers and microphone. It is recommended that you adjust the sound settings on your system such that only the speaker that is closer to the microphone is active. For example, MacBookAirs have the microphone on the side of the computer -- so you should set adjust the audio settings to left speaker only. Also, it is recommended that the speaker volume be set to half of its maximum to avoid non-linear distortions. \n", "\n", "If you are getting poor results, please consult with us. \n", "\n", "This lab was inspired from an iphone app called active-radar. \n", "\n", "\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part II, Task I: Generating Chirp Pulses\n", "\n", "Recall from Part I, that the with of the main lobe of the autocorrelation depends on the bandwidth of the pulse. \n", "For a constant frequency pulse, the bandwidth will be inversly proportional to its length. Short pulses are localized in time, and therefore we will be able to separate echoes from targets that are close. However, short pulses carry less energy (for the same amplitude) and this will reduce our signal to noise ratio (SNR) in the detection and reduce our ability to detect the targets at all. So, in summary: for constant frequency pulse, there's an inerent tradeoff between the resolution of the sonar (distinguish between close targets) and the signal to noise ratio. \n", "\n", "If we use a chirp pulse, we can increase the length of the pulse while also increasing the bandwidth. This will enable us to improve our signal to noise ratio as well as keeping the resolution of our sonar (by preserving the BW).\n", "\n", "In our implemetation we are going to design a pulsed sonar system in which we repeatedly send pulses and then listen to the returning echoes. The arrival time of the echos will correspond to double the time-of-flight of sound propagation from our system to the target. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Write a function that generates a chirp pulse:\n", "`pulse = genChirpPulse(Npulse, f0, f1, fs)` \n", "\n", "The function will accept: `Npulse` = number of samples, `f0,f1` = starting and ending frequency and `fs` = sampling frequency. The function will return the analytic function of the chirp $\\exp (j 2\\pi \\int_0^t f(t)dt )$ with amplitude 1." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def genChirpPulse(Npulse, f0, f1, fs):\n", " # Function generates an analytic function of a chirp pulse\n", " # Inputs:\n", " # Npulse - pulse length in samples\n", " # f0 - starting frequency of chirp\n", " # f1 - end frequency of chirp\n", " # fs - sampling frequency\n", " # Output:\n", " # pulse - chirp pulse\n", " \n", " \n", "\n", "\n", "\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "* To validate that the function works display the pulse generated with Npulse = 200, f0=1000, f1 = 8000, fs = 48000. 
Remember the pulse is complex, so plot the real and imaginary parts separately." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "%matplotlib inline\n", "pulse = genChirpPulse(200, 1000, 8000, 48000)\n", "\n", "# your code here:\n", "\n", "\n", "# plot" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Generate Pulse Trains__\n", "\n", "Next, we will use the pulse generated by `genChirpPulse` and generate a pulse train.\n", "\n", "* Write a new function `ptrain = genPulseTrain(pulse, Nrep, Nseg)`\n", "The function accepts `pulse` = a pulse generated by `genChirpPulse`, `Nrep` = number of pulse repetitions and `Nseg` = length of each pulse train segment (which is >= the length of `pulse`).\n", "\n", "The function returns `ptrain`, which is a vector of length `Nrep` x `Nseg` (Hint: use `np.tile`)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def genPulseTrain(pulse, Nrep, Nseg):\n", "    # Function generates a pulse train from a pulse. \n", "    # Inputs:\n", "    #     pulse = the pulse generated by genChirpPulse\n", "    #     Nrep = number of pulse repetitions\n", "    #     Nseg = length of each pulse segment >= length(pulse)\n", "    \n", "    " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part II, Task II: Echoes with a Chirp Pulse Train\n", "\n", "We now have components to generate pulses, generate a pulse train, play and record it. Let's see what we get!\n", "We will start with very short pulses with a single carrier frequency. Rectangular pulses are difficult for the speaker\n", "to produce as they exhibit discontinuities at the beginning and the end of the pulse. Therefore we will multiply the pulses\n", "by a smooth window. Here, we will use a hanning window." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Generate a f0=f1=8KHz, Npulse=96 pulse with fs=48000. Window the pulse with a hanning window. This will result in a pulse length of 2ms. You should be able to hear this tone.\n", "* Plot the real and imaginary parts of the pulse" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fs = 48000\n", "f0 = 8000\n", "f1 = 8000\n", "Npulse = 96\n", "\n", "# your code here:\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Use the real part of the pulse to generate a pulse train of Nrep=15 pulses, Nseg=4096 samples" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# your code here:\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Play and record the pulse train. Scale the amplitude of the pulses to 1/2. Make sure your volume is set to a maximum of 70% and look at the plot of the input pulse train and the received pulse train.\n", "\n", "Use the pragma ``%matplotlib notebook`` for interactive plots, so you can zoom into the result." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rcv = xciever(ptrain/2.0 , fs) " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib notebook\n", "\n", "\n", "width, height = figaspect(0.3)\n", "fig = figure(figsize=(width,height))\n", "\n", "# your code here\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Extract a single pulse from the received pulse train. You can find the pulse index from the interactive plot. 
Extract at least 2 Npulse samples before the pulse and 20 Npulse samples after, using `rcv_pulse = rcv[idx-2*Npulse:idx+Npulse*20]` \n", "\n", "* Plot the received pulse. Can you see any echoes?\n", "\n", "You can disable interactivity with the pragma ``%matplotlib inline``\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib notebook\n", "# your code here:\n", "# find index of start pulse\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Matched Filtering\n", "\n", "The strong pulses we see are a result of direct feed-through from the transmitter to the receiver, which does not scatter off targets. The weaker pulses we see are echoes from reflecting surfaces. The problem in our setup is that we don't know the exact delay between the transmit and receive hardware (in PyAudio). Instead, we will assume that the travel time for sound between the speaker and the microphone is negligible and much smaller than the travel time to the scattering targets. We can then detect when the pulses start based on the direct feedthrough signal. This assumption is very good as long as your speaker is close to the microphone!\n", "\n", "We will detect both the feedthrough and echoes using matched filtering. \n", "\n", "* Write a function `Xrcv = crossCorr( rcv, pulse_a )` to calculate the cross correlation (matched filter) of the received signal with the analytic function of the pulse. You can use `signal.fftconvolve`.\n", "* Take the absolute value of `Xrcv` to recover its envelope. Call the result `Xrcv_a`.\n", "\n", "Make sure the plot is interactive with ``%matplotlib notebook``" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def crossCorr( rcv, pulse_a ):\n", "    # Function generates the cross-correlation between rcv and pulse_a\n", "    # Inputs:\n", "    #     rcv - received signal\n", "    #     pulse_a - analytic pulse\n", "    # Output:\n", "    #     Xrcv - cross-correlation between rcv and pulse_a\n", "    \n", "    " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib notebook\n", "\n", "# your code here:\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Again, extract a single pulse from the received pulse train using the same index. Extract at least 2 Npulse samples before the pulse and 20 Npulse samples after. Plot the received pulse. Can you see any echoes?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib notebook\n", "# find index of start pulse\n", "\n", "\n", "# plot\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Sonar System\n", "\n", "In order to automate the system and visualize the results we need a few more components. To extract the pulses we need to know the position of the first feedthrough pulse. \n", "\n", "\n", "* Write a function `idx = findDelay(Xrcv_a, Nseg)` that takes the result of the matched filtering and finds the index of the first feedthrough pulse. Try it on the actual signal to check whether the function is correct. There are multiple ways of doing it; `Nseg` is not necessarily required."
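, "\n", "One possible approach (a sketch, not the only valid one) is to take the first sample whose matched-filter output exceeds a fixed fraction of the global maximum:\n", "\n", "```python\n", "import numpy as np\n", "\n", "def findDelay(Xrcv, Nseg=None):\n", "    # index of the first sample that rises above half of the strongest peak\n", "    return int(np.argmax(Xrcv > 0.5*Xrcv.max()))\n", "```"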
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def findDelay(Xrcv, Nseg):\n", " # finds the first pulse\n", " # Inputs: \n", " # Xrcv - the received matched filtered signal\n", " # Nseg - length of a segment\n", " # Output:\n", " # idx - index of the beginning of the first pulse\n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "idx = findDelay(Xrcv_a,Nseg)\n", "print(idx)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now can correct for delays and detect echoes. The only thing left now is to convert the time between echoes into actual distance.\n", "\n", "If we send a pulse of sound, we will get reflection echoes of that pulse. Detecting the echos and their time-of-flight will reveal their distance from the source, based on the speed of sound in air. The speed of sound in air is given by the following equation:\n", "\n", "$$ v_s = 331.5\\sqrt{1+T/273.15}~\\mathrm{m/s}~,$$ \n", "\n", "where T is the temperature in degree celcius. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Create a function `t = dist2time( dist, temperature )` that takes in the distance to the target in cm and converts it into the time in seconds between the transmitted pulse and its echo. Remember the arrival time include the time to the target and back and therefore the time should be doubled. \n", "For example, for temperature = 20 celcius and dist = 400 cm, the time it takes is 0.023 secs.\n", "\n", "* Create a function `dist = time2dist( t, temperature )` that takes in the time to the target in seconds and converts it into the distance in cm between the transmitted pulse and its echo. Remember the arrival time include the time to the target and back and therefore the time should be halfed. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "def dist2time( dist, temperature=21):\n", " # Converts distance in cm to time in secs\n", " # Inputs:\n", " # dist - distance to object in cm\n", " # temperature - in celcius\n", " # Output:\n", " # t - time in seconds between transmitted pulse and echo\n", " \n", " \n", "\n", "def time2dist(t,temperature=21):\n", " # Converts time in seconds to distance in cm\n", " # Inputs:\n", " # t - time of echo\n", " # temperature - in celcius\n", " # Output:\n", " # dist - distance in cm of the target\n", " \n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## A sonar (almost)\n", "\n", "* The following function will use your functions to generate pulses and display the matched filtering of each pulse as intensity of a horizontal line in an image. If nothing is moving, you will be able to see constant vertical lines representing echos. If something is moving, you will be able to track the object's distance. 
\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# some code for you\n", "\n", "def sortOfASonar(Npulse, f0, f1,fs, Nrep, Nseg):\n", " pulse_a = genChirpPulse(Npulse, f0, f1, fs)*np.hanning(Npulse)\n", " pulse = pulse_a.real\n", " ptrain = genPulseTrain(pulse, Nrep, Nseg)\n", " rcv = xciever(ptrain/2.0 , fs) \n", " Xrcv_a = abs( crossCorr(rcv, pulse_a) )\n", " \n", " idx = findDelay(Xrcv_a,Nseg) \n", " img = np.zeros((Nrep,Nseg))\n", " img[0,:]=Xrcv_a[idx:idx+Nseg]\n", " \n", " # Look for peak in each pulse in the pulse train to avoid drift between xmit and receive\n", " for n in range(1,Nrep):\n", " idxx = findDelay(Xrcv_a[idx+Nseg//2:idx+Nseg//2+Nseg],Nseg)\n", " idx = idx + idxx+Nseg//2\n", " img[n,:]=Xrcv_a[idx:idx+Nseg]\n", " \n", " \n", " return img\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, use the function above to: \n", "\n", "* Generate a pulse train of 100 pulses. Each (hamming windowed) pulse should be length of 72 samples (1.5ms) and constant frequency of 8KHz. The spacing between puslses should be 0.1 seconds (Nseg=4800). \n", "* Display the image" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Npulse = 72\n", "f0 = 8000\n", "f1 = 8000\n", "fs = 48000\n", "Nrep = 100\n", "Nseg = 4800\n", "img = sortOfASonar(Npulse, f0, f1,fs, Nrep, Nseg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Display the result. Pay attention to the width of the echos. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib notebook\n", "%matplotlib notebook\n", "\n", "# display up to 2.5m approximately 700 samples at 21 degrees C in 48000 sampling rate\n", "\n", "vmax = 0.3 # threshold -- lower will be able to see smaller echos\n", "\n", "plt.imshow(img[:,0:700]/max(img.ravel()),vmax=vmax, aspect=10,cmap='gray',interpolation='bilinear',extent=(0,time2dist(700/48000),Nrep*Nseg/fs,0))\n", "plt.xlabel('cm')\n", "plt.ylabel('sec')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, repeat the experiment with a chirp length of Nseg = 360 samples, and a frequency sweep from KHz to 12KHz.\n", "Pay attention to the resolution of the lines. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Npulse = 72*5\n", "f0 = 6000\n", "f1 = 12000\n", "fs = 48000\n", "Nrep = 100\n", "Nseg = 4800\n", "img = sortOfASonar(Npulse, f0, f1,fs, Nrep, Nseg)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib notebook\n", "\n", "# display up to 2.5m approximately 700 samples at 21 degrees C in 48000 sampling rate\n", "\n", "vmax = 0.3 # threshold -- lower will be able to see smaller echos\n", "\n", "plt.imshow(img[:,0:700]/max(img.ravel()),vmax=0.2, aspect=10,cmap='gray',interpolation='bilinear',extent=(0,time2dist(700/48000),Nrep*Nseg/fs,0))\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Feel free to repeat while moving a target-- can you see the echoes changing? Try playing with different parameters. \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "## A Real (time) Sonar\n", "You now have a working sonar! It would be much easier though to play with different parameters if we automate things, so we created some wrappers for real-time plotting in a separate notebook (lab1-RealTime-Sonar). 
\n", "\n", "* Copy-and-paste the 5 functions you created, including genPulseTrain(), genChirpPulse(), crossCorr(), findDelay(), and dist2time(), to the specified code cell in the real-time Sonar lab.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### You are now ready to proceed to the Real-Time Sonar Lab" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.3" } }, "nbformat": 4, "nbformat_minor": 1 }