{ "metadata": { "name": "" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Lab 1: Time Domain\n", "## Part 2: Sonar" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this part of the lab we will write a simple application that implements a sonar using the laptop internal speaker and microphone. \n", "\n", "The basic idea is very simple and is the basis of sonar and ultrasound imagers -- Objects reflect sound waves. If we send a pulse of sound, we will get reflection echoes of that pulse. Detecting the echos and their time-of-flight will reveal their distance from the source, based on the speed of sound in air. \n", "\n", "The way we are going to implement the sonar is to generate a series of rapid pulses, and use matched filtering to detect the source and the returning echos. There are many parameters in this lab that can be tweaked to get different results. We encourage you to experiment. We enjoyed very much making the lab and played quite a bit! we hope you enjoy it too. \n", "\n", "Unfortunately, the quallity of the sonar system is going to be highly dependent on your laptop quallity, and the position of the speakers and microphone. It is recommended that you adjust the sound settings on your system such that only the speaker that is closer to the microphone is active. For example, macBookAirs have the microphone on the side of the computer -- so you should set adjust the audio settings to left speaker only. Also, it is recommended that the speaker volume be set to half of its maximum to avoid non-linear distortions. \n", "\n", "If you are getting poor results, please consult with us. \n", "\n", "This lab was inspired from an iphone app called active-radar. \n", "\n", "\n", "\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Import functions and libraries\n", "import numpy as np\n", "from numpy import *\n", "import matplotlib.pyplot as plt\n", "import pyaudio\n", "from numpy import pi\n", "from numpy import sin\n", "from numpy import cos\n", "from numpy import sqrt\n", "from numpy import zeros\n", "from numpy import r_\n", "from scipy import signal\n", "\n", "# Task II\n", "import threading,time\n", "\n", "# Task IV\n", "from rtlsdr import RtlSdr\n", "from numpy import mean\n", "from numpy import power\n", "from numpy.fft import fft\n", "from numpy.fft import fftshift\n", "from numpy.fft import ifft\n", "from numpy.fft import ifftshift\n", "%matplotlib inline" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Task I: \n", "\n", "#### Generate Pulses\n", "At first, we will look at short pulses with a pure frequency carrier. These are very common in radar and sonar systems. There are several considerations that we need to make. To be able to detect short distances, and also to be able to get good localization in time our pulses need to be very short. At the same time, shorter pulses carry less energy and have a wider bandwidth (why?). These reduce the signal to noise ratio and reduce our ability to detect targets. At the end, we will have to compromise in order to get satisfactory results in terms of detectibility and resolution. \n", "\n", "Here we are going to design a pulsed sonar system in which we repeatedly send pulses and then listen to the returning echoes. The arrival time of the echos will correspond to double the distance from a target. 
\n", "\n", "\n", "* Write a function that generates a chirp pulse:\n", "`pulse = genChirpPulse(Npulse, f0, f1, fs)` \n", "\n", "The function will accept: `Npulse` = number of samples, `f0,f1` = starting and ending frequency and `fs` = sampling frequency. The function will return the analytic function of the chirp $e^{j 2\\pi \\int_0^t f(t)dt}$ with amplitude 1.\n", "\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "def genChirpPulse(Npulse, f0, f1, fs):\n", " # Function generates an analytic function of a chirp pulse\n", " # Inputs:\n", " # Npulse - pulse length in samples\n", " # f0 - starting frequency of chirp\n", " # f1 - end frequency of chirp\n", " # fs - sampling frequency\n", " \n", " # your code here:" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* To validate that the function works display the real-part of the pulse generated with Npulse = 200, f0=1000, f1 = 8000, fs = 44100" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# your code here:\n", "\n", "\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Generate Pulse Trains\n", "\n", "We will use the pulses generated by `genChirpPulse` in a pulse train.\n", "\n", "* Write a new function `ptrain = genPulseTrain(pulse, Nrep, Nseg)`\n", "The function accepts `pulse` = a pulse generated by `genChirpPulse`, `Nrep` = number of pulse repetitions and `Nseg` = length of each pulse train segment (which is >= to the length of `pulse`).\n", "\n", "The function returns `ptrain` which is a vector of length `Nrep` x `Nseg`" ] }, { "cell_type": "code", "collapsed": false, "input": [ "def genPulseTrain(pulse, Nrep, Nseg):\n", " # Funtion generates a pulse train from a pulse. 
\n", " #Inputs:\n", " # pulse = the pulse generated by genChirpPulse\n", " # Nrep = number of pulse repetitions\n", " # Nseg = Length of pulse segment >= length(pulse)\n", " \n", " # your code here:\n", " \n", " \n", " " ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Define audio recording functions:\n", "\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "\n", "import numpy as np\n", "import threading,time\n", "import pyaudio\n", "\n", "## Define functions that play and record audio\n", "\n", "def play_audio( data, p, fs):\n", " # play_audio plays audio with sampling rate = fs\n", " # data - audio data array\n", " # p - pyAudio object\n", " # fs - sampling rate\n", " # \n", " # Example:\n", " # fs = 44100\n", " # p = pyaudio.PyAudio() #instantiate PyAudio\n", " # play_audio( data, p, fs ) # play audio\n", " # p.terminate() # terminate pyAudio\n", " \n", " # open output stream\n", " ostream = p.open(format=pyaudio.paFloat32, channels=1, rate=int(fs),output=True)\n", " # play audio\n", " ostream.write( data.astype(np.float32).tostring() )\n", " \n", " \n", "def record_audio( odata, p, fs, record_seconds ):\n", " # record_audio records audio with sampling rate = fs\n", " # odata - output data\n", " # p - pyAudio object\n", " # fs - sampling rate\n", " # record_seconds - record seconds\n", " #\n", " # Example:\n", " # fs = 44100\n", " # record_seconds = 5\n", " # odata = zeros( fs * record_seconds ) # initialize odata\n", " # p = pyaudio.PyAudio() #instantiate PyAudio\n", " # record_audio( odata, p, fs, record_seconds ) # play audio\n", " # p.terminate() # terminate pyAudio\n", " \n", " # open input stream\n", " chunk = 1024\n", " istream = p.open(format=pyaudio.paFloat32, channels=1, rate=int(fs),input=True,frames_per_buffer=chunk)\n", "\n", " # record audio in chunks and append to frames\n", " frames = [];\n", " for i in range(0, int(fs / chunk * record_seconds)):\n", " data_str = istream.read(chunk) # read a chunk of data\n", " data_flt = np.fromstring( data_str, 'float32' ) # convert string to float\n", " frames.append( data_flt ) # append to list\n", " \n", " # flatten list to array\n", " data = np.concatenate( frames )\n", " # copy to output\n", " np.copyto(odata[0:len(data)], data)\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we will create a function called xciever that will serve as a tranciever to transmit and receive pulses. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "def xciever(ptrain, fs):\n", " # function takes a pulse train and a sampling frequency\n", " # it then plays and records at the same time. The function returns\n", " # the recorded sound. 
\n", " \n", " \n", " RECORD_SECS = float32(len(ptrain))/fs + 1.0\n", " # allocate receive signal for signal + 1 second\n", " rcv_sig = zeros( len(ptrain) + fs );\n", "\n", " # instantiate PyAudio\n", " p = pyaudio.PyAudio()\n", " \n", " # initialize threads\n", " t_play = threading.Thread( target = play_audio, args = (ptrain, p, fs, ))\n", " t_record = threading.Thread( target = record_audio, args = (rcv_sig, p, fs, RECORD_SECS , ))\n", "\n", " # start recording and playing threads\n", " \n", " t_record.start()\n", " t_play.start()\n", " \n", "\n", " # pause for Tplay+1 seconds\n", " time.sleep( RECORD_SECS+1)\n", "\n", " # terminate pyAudio\n", " p.terminate()\n", " return rcv_sig" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "heading", "level": 3, "metadata": {}, "source": [ "Task II:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now have components to generate pulses, generate a pulse train, play and record it. Lets see what we get!\n", "We will start with very short pulses with a single carrier frequency. Rectangular pulses are difficult for the speaker\n", "to produce as they exhibit discontinuities in the beginning and the end of the pulse. Therefore we will multiply the pulses\n", "with a smooth window. Here, we will use a square-rooted hanning window. Since we will be detecting with a matched filter, the result\n", "of the cross correlation will be the effect of a full hanning window on the spectrum of the cross correlation. \n", "\n", "* Generate a f0=f1=15KHz, 60 samples pulse with fs=44100. Window the pulse with a square-rooted hanning window. This will result in a pulse slightly longer than 1.5ms. You should be able to hear this tone, though most adults above 40 would not." ] }, { "cell_type": "code", "collapsed": false, "input": [ "# your code here:\n", "\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Use the pulse to generate a pulse train of Nrep=15 pulses, Nseg=4096 samples" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# your code here:\n", "\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Play and record the pulse train. Scale the amplitude of the pulses to 1/2. Make sure your volume is set to maximum of 70%\n", "* Plot the input pulse train and the received pulse train on the same plot. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "# your code here:\n", "\n", "\n", "\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Extract a single pulse from the received pulse train. Extrat at least 2 Npulse samples before the pulse and 20 Npulse samples after. Plot the received pulse. Can you see any echoes?\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# your code here:\n", "\n", "\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Matched Filtering\n", "\n", "The strong pulses we see are a result of direct feed-through from the transmitter to the receiver that does not scatter off targets. The echoes we see are a result of echoes from reflecting surfaces. The problem in our setup is that we don't know the exact delay between the transmitter and the receive hardware (in PyAudio). 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Again, extract a single pulse from the received pulse train -- this time from the matched-filter output. Extract at least 2 Npulse samples before the pulse and 20 Npulse samples after. Plot the result. Can you see any echoes?" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# your code here:\n", "\n", "\n", "\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Sonar System\n", "\n", "In order to automate the system and visualize the results we need a few more components. To extract the pulses we need to know the position of the first feedthrough pulse. \n", "\n", "* Write a function `idx = findDelay(rcv, Nseg)` that takes the result of the matched filtering and finds the index of the first feedthrough pulse. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "def findDelay(Xrcv, Nseg):\n", "    # finds the index of the first pulse\n", "    # Inputs:\n", "    #     Xrcv - the received matched-filtered signal\n", "    #     Nseg - length of a segment\n", "    \n", "    # your code here:\n", "    \n", "    \n", "    " ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Find the position of the first pulse and extract the portion of the recording containing the entire pulse train.\n", "* Reshape the vector of pulses into a matrix in which each row is one time-segment. \n", "* For the purpose of display we will normalize the feedthrough matched-filter result to 1. To do so, compute the average of the first column\n", "of the matrix and divide the matrix by the result. \n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# your code here:\n", "\n", "\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now have a matrix in which each row is one pulse repetition (slow time) and each column corresponds to echo arrival time, and hence to target distance. \n", "\n", "The speed of sound in air is given by the following equation:\n", "$$ v_s = 331.5\\sqrt{1+T/273.15}~\\mathrm{m/s}~,$$ \n", "where T is the temperature in degrees Celsius. \n", "\n", "* Create a vector mapping echo arrival time to the distance of the target from the source. Don't forget that\n", "the arrival time includes the travel to the target and back, and therefore the distance should be halved. \n", "* Create a vector corresponding to the start time of each pulse in the pulse train. \n", "* Display the result as image intensity, as sketched below. Use `aspect='auto'` and set `vmax=0.2` so that you can see the echoes. Set the axes to show only distances between 0 cm and 3 m. Label the axes appropriately.\n" ] },
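{ "cell_type": "markdown", "metadata": {}, "source": [ "A possible display sketch, assuming `img` is your (Nrep x Nseg) matrix of normalized matched-filter envelopes; `img`, `dist`, `t_pulse` and the temperature are all our assumptions, so adapt them to your own code:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Display sketch -- img, Nseg, Nrep, fs are assumed to exist / be chosen by you\n", "T = 20.0                                       # temperature in degrees Celsius\n", "v_s = 331.5 * np.sqrt(1 + T / 273.15)          # speed of sound [m/s]\n", "dist = r_[0.0:Nseg] / fs * v_s / 2.0 * 100.0   # distance for each sample [cm] (halved!)\n", "t_pulse = r_[0.0:Nrep] * Nseg / fs             # start time of each pulse [s]\n", "plt.imshow(abs(img).T, aspect='auto', vmax=0.2, origin='lower',\n", "           extent=[t_pulse[0], t_pulse[-1], dist[0], dist[-1]])\n", "plt.ylim(0, 300)                               # show 0 cm to 3 m\n", "plt.xlabel('time [s]')\n", "plt.ylabel('distance [cm]')" ], "language": "python", "metadata": {}, "outputs": [] },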
\n", "\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# your code here:\n", "\n", "\n", "\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You now have a working sonar! It would be much easier though to play with different parameters if we automate things. \n", "Therefore, write a function `sonar(Npulse, f0, f1, fs, Nseg, Nrep, temperature=20,maxDist=400,vmax=0.2)` The function will play sound, capture it and display the resulting scattering map. \n", "\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "def sonar(Npulse, f0, f1, fs, Nseg, Nrep, T=20,maxDist=400,vmax=0.2):\n", " # S sonar function\n", " # Inputs: \n", " # Npulse - number of samples in a pulse\n", " # f0 - starting frequency for chirp\n", " # f1 - ending chirp frequency\n", " # fs - sampling rate\n", " # Nseg - number of samples in a segment\n", " # Nrep - number of pulse repetitions\n", " # T = Temperature\n", " # maxDist = maximum distance to display in plot\n", " # vmax - windowing levvel of the image so echoes can be seen. \n", " #\n", " #\n", " " ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Task III: Play with different parameters. \n", "\n", "The detection resolution and sensitivity will depend on the parameters you will choose for the sonar. The longer the pulse is, the worse time resolution you will get, but much stronger matched filtering amplitude. You can buy the time resolution by using pulse compression with chirp pulses at the expense of increasing the bandwidth. \n", "\n", "* Run the sonar with different parameters. You can get interesting targets by moving your laptop, moving yourself next to the computer, using a book as a reflecting surface. It would be easier for you to distinguish the target if you move it back and forth. Play with the pulse lengths, the frequency sweeps, etc. \n", "* Submit 4 interesting set of parameters. Explain why you chose them. \n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "sonar(300,10000,19000,44100,4410,100,T=18.5,maxDist=300)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## I hope you enjoyed the lab\n", "\n", "### Task IV (Optional Bonus only!): A real-time sonar application in python.\n", "#### I challenge those who are interested to write a real-time application in which the computer continiously playes pulses and the processing is done and displayed in real-time. Please contact me if you have any questions. I can also give you some suggestions on how to go about it. This will count as an additional lab with full score. " ] } ], "metadata": {} } ] }