{ "metadata": { "name": "" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "#Lab2 - Time Frequency\n", "
\n", "originally designed by John Pauly, modified and extended by Michael Lustig, Translated to python by Frank Ong\n", "\n", "This week we will look at the processing and spectrum of time-varying signals. In the first part of the lab we will look at the short-time fourier transform and spectrograms. We will use this to analyze audio signals and broadcast FM. In the second we will look at frequency offset correction of the SDR using GSM cellphone basestation signals. In the last part we will look at generating digital packets using frequency modulation. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Import general functions and libraries\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import pyaudio\n", "from numpy import pi\n", "from numpy import sin\n", "from numpy import zeros\n", "from numpy import ones\n", "from numpy import r_\n", "from scipy import signal\n", "\n", "from numpy import hanning\n", "from numpy import outer\n", "\n", "from rtlsdr import RtlSdr\n", "from numpy import mean\n", "from numpy import power\n", "from numpy.fft import fft\n", "from numpy.fft import fftshift\n", "from numpy.fft import ifft\n", "from numpy.fft import ifftshift\n", "\n", "from numpy import isreal\n", "\n", "#Task VI\n", "\n", "%matplotlib inline" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part I: Spectrograms\n", "In class we used the DFT to estimate the frequency of a segment of a signal. When we do this, we are implicitly assuming that the frequency is constant over that interval. This is a reasonable assumption for some signals, especially for a very short time windows. \n", " \n", "There are many times when we'd like analyze signals whose frequency is changing over time. In fact, most signals aren't interesting unless they do change! There are many examples, including speech, music, and the sounds that surround you in daily life. 
In this lab we will learn how to process these signals to determine how their spectrum changes with time.\n", "\n", "The basic problem is that we have a long segment of a signal $x[n]$, where $n=0, ... ,N-1$. We want to know what its frequency is as a function of time. There are two basic approaches. One is to pass the signal through a bank of bandpass filters, and plot the outputs of the filters as a function of time. \n", "\n", "\n", "The second approach, based on the short-time Fourier transform, is to break the signal up into short segments and compute the spectrum of each of these separately. This is the approach we will use in this lab.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A sample signal is shown in Figure 1. You've seen this signal during class. Obviously you can tell that the amplitude is changing as a function of time. When you examine the signal closely, you can also tell that the frequency is changing.\n", "Since the frequency is changing, we want to break the signal into segments over which the frequency is relatively constant. Then we will analyze each segment using the DFT. The result of analyzing the first 256 samples, and another block of 256 samples that is 1024 samples later, is shown in Figure 2. As is the convention with the DFT, zero frequency is the first sample, and the Nyquist rate corresponds to the middle sample. The sampling rate in this example was 8192 Hz, so the maximum frequency is 4096 Hz. Note that the frequency has dropped about 300 Hz between the first and second block." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\"sampsig\"
\n", "
Figure 1: A segment of a sampled bird song.
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For a long signal, there would be a tremendous number of plots that would have to be examined, and this doesn't convey the information very effectively. Instead, the most common presentation is to display the information in an image format, with frequency on the vertical axis, time on the horizontal axis, and the value of a particular pixel being the magnitude of the spectra at that time and frequency. For the first 4 tasks, we are assuming that the signal is real and the magnitude of the spectrum is symmetric about the origin. We therefore need only display the positive frequencies. Later, in task 5 where we acquire data using the SDR the signal is going to be complex and we will need to display both positive and negative frequencies. This presentation is known as a spectrogram." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\"two_segmentsc\"
\n", "
Figure 2: The spectra from two 256-sample blocks of the signal from Fig. 1
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An example of this type of plot for the signal that Fig. 1 was taken from is shown in Fig. 3:\n", "
\"bird_spectroc\"
\n", "
Figure 3: The spectrogram of the bird song from Fig. 1.
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A function that takes a two-dimensional array y and makes an image of the log-magnitude is given below:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Plot an image of the spectrogram y, with the axis labeled with time tl,\n", "# and frequency fl\n", "#\n", "# t_range -- time axis label, nt samples\n", "# f_range -- frequency axis label, nf samples\n", "# y -- spectrogram, nf by nt array\n", "# dbf -- Dynamic range of the spect\n", "\n", "def sg_plot( t_range, f_range, y, dbf = 60) :\n", " eps = 1e-3\n", " \n", " # find maximum\n", " y_max = abs(y).max()\n", " \n", " # compute 20*log magnitude, scaled to the max\n", " y_log = 20.0 * np.log10( abs( y ) / y_max + eps )\n", " \n", " fig=figure(figsize=(16,6))\n", " \n", " plt.imshow( np.flipud( 64.0*(y_log + dbf)/dbf ), extent= t_range + f_range ,cmap=plt.cm.gray, aspect='auto')\n", " plt.xlabel('Time, s')\n", " plt.ylabel('Frequency, Hz')\n", " plt.tight_layout()\n", "\n", " " ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The only parameter of note is dbf which determines the dynamic range in dB that will be presented. The default is 60 dB, or a factor of 1000. This allows you to see small spectral components without seeing all of the background noise, and is reasonably suitable for our purposes here. For other applications with higher or lower noise levels, you may want to change this. For example in Task 5, the signal to noise ratio coming out of the SDR will require adjusting the dynamic range (somewhere between 20-40 was appropriate for my reception)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Several different sound files are available on the class web site to for you to work with. 
These are\n", "\n", "- `s1` A bird call.\n", "\n", "- `s2` A creaking door.\n", "\n", "- `s3` An orca whale call.\n", "\n", "- `s4` A sound effect.\n", "\n", "- `s5` A 24-week fetal Doppler ultrasound.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Read each .wav file with the function `data = read_wav( filename )` and play them with the following commands:\n", " \n", "    p = pyaudio.PyAudio() # instantiate PyAudio\n", "    play_audio( data, p, fs ) # play audio\n", "\n", "Here are the definitions of the functions that read wav files and play them." ] }, { "cell_type": "code", "collapsed": false, "input": [ "import pyaudio\n", "import wave\n", "\n", "# function that reads a wav file into a float array\n", "def read_wav( wavname ):\n", "    \n", "    wf = wave.open(wavname, 'rb')\n", "    \n", "    CHUNK = 1024\n", "    frames = []\n", "    data_str = wf.readframes(CHUNK) # read a chunk\n", "    \n", "    while len(data_str) > 0:\n", "        data_int = np.frombuffer( data_str, 'int16')        # convert from raw bytes to int (assumes .wav is int16)\n", "        data_flt = data_int.astype( np.float32 ) / 32767.0  # convert from int to float32\n", "        frames.append( data_flt )                           # append to list\n", "        data_str = wf.readframes(CHUNK)                     # read the next chunk\n", "\n", "    return np.concatenate( frames )\n", "\n", "\n", "def play_audio( data, p, fs):\n", "    # data - audio data array\n", "    # p    - pyAudio object\n", "    # fs   - sampling rate\n", "    \n", "    # open output stream\n", "    ostream = p.open(format=pyaudio.paFloat32, channels=1, rate=fs, output=True)\n", "    # play audio\n", "    ostream.write( data.astype(np.float32).tobytes() )\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Task: Play all the wav files" ] }, { "cell_type": "code", "collapsed": false, "input": [ "## Play sounds\n", "# instantiate PyAudio\n", "p = pyaudio.PyAudio()\n", "fs = 44100\n", "# Your code here:\n", "\n", "\n", "\n", "\n", "\n", "p.terminate()\n" ], "language": "python", "metadata": {}, 
"outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Try to visualize what you would expect the spectrum to look like as a function of time. In each case, the sampling rate is 44100 Hz.\n" ] }, { "cell_type": "heading", "level": 3, "metadata": {}, "source": [ "Task 1: Computing a simple spectrogram" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\"rect_win\"
\n", "
Figure 4: Extracting a segment of a long signal with a rectangular window
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For our first attempt at making a spectrogram, we will simply take the original signal of length N, and split it into blocks of length M. We then compute the DFT of each block. This corresponds to using a rectangular window to extract each block from the signal, as is illustrated in Fig. 4.\n", "\n", "Write a python function that computes the spectrogram for a signal. If the original signal is a vector `x`, It should\n", "\n", " * Break the signal up into `m`-sample blocks, stored in the columns of a 2D matrix `xm`. This may require padding the signal with zeros, so that the length is a multiple of the block size.\n", "\n", " * Apply the fft to the matrix using `xmf = fft(xm,len(xm),axis=0)`. This operation applies FFT along each column of the matrix. It does not do a 2D FFT.\n", "\n", " * Compute ranges for time `t_range` and frequency `freq_range`\n", "\n", " * Call the `sg_plot(t_range,f_range,xmf[1:m/2,:])`, where we are only plotting the positive frequencies. Your routine should be invoked with\n", "\n", " myspectrogram(x,m,fs)\n", " \n", " where `x` is the signal, `m` is the block size and `fs` is the sampling rate in Hz.\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One trick that can help you here is the reshape command. To create the `xm` matrix, first zero pad `x` to be a multiple of `m` in length:\n", " # pad x up to a multiple of m \n", " lx = len(x);\n", " nt = (lx + m - 1) // m # // enforces integer division\n", " xp = append(x,zeros(-lx+nt*m))\n", " \n", "Use reshape to make it an m by nt matrix in column major fashion. The order 'F' means a Fortran order, which is column first as opposed to the 'C' order, which is C language order and is row first. 
Matlab uses a default of column order, whereas numpy uses a default of row order:\n", " \n", "    xm = reshape( xp, (m,nt), order='F')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Each column of `xm` is one block of `x`.\n", "\n", "To compute the time and frequency range, recall that the DFT frequencies go from `0` to the sampling frequency `fs` Hz in steps of `fs/m` Hz. We are only going to plot the positive frequencies, so \n", "\n", "    f_range = [0.0, fs / 2.0]\n", " \n", "The time of a particular block is the period of one sample, `1/fs` seconds, multiplied by the number of samples in the block. If there are nt blocks, the time range is\n", "\n", "    t_range = [0.0, lx / fs]\n", " \n", "Try your spectrogram routine with the bird call sound file, `s1`, with a block size of `256` samples. Note that the sampling rate of the data is `44100` Hz (Compact Disc rate). The spectrogram should look vaguely reminiscent of Fig. 3. However, there is noticeable streaking in the spectral dimension. Include a copy of this spectrogram in your report." ] }, { "cell_type": "heading", "level": 4, "metadata": {}, "source": [ "Solution for Task 1: " ] }, { "cell_type": "code", "collapsed": false, "input": [ "def myspectrogram(x,m,fs):\n", "    # Function breaks the signal into non-overlapping windows, computes the DFT of each window\n", "    # The function then calls sg_plot to display the result\n", "    #\n", "    # Inputs: \n", "    #     x  - data\n", "    #     m  - window size\n", "    #     fs - sampling rate\n", "    \n", "    \n", "    # Your code here" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [ "# Plot The Spectrogram\n", "# Your Code Here:\n", "\n", "\n", "\n", "\n", "\n", "plt.title( \"Spectrogram with Rect window\" )" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "heading", "level": 3, "metadata": {}, "source": [ "Task 2: A better spectrogram."
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Similarly to what we have seen in class, the problem with the spectrogram from Task 1 is that we have used a rectangular window to cut out blocks of the signal. In the spectral domain this corresponds to convolving a periodic sinc with the spectrum. The sinc sidelobes are high, and fall off slowly. We need to do something a little more gentle with the data.\n", "\n", "To improve our spectrogram, we will extract each block of data with a Hann window. We can do this by multiplying our `xm` matrix with a matrix that has a Hann window along its columns,\n", "\n", "    xmw = xm * outer(hanning(m), ones(nt))\n", "\n", "The `outer(hanning(m), ones(nt))` term is a matrix that has the same Hann window in each of the nt columns. \n", "\n", "Another, more Pythonic, way is to use broadcasting:\n", "\n", "    xmw = xm * hanning(m)[:,None]\n", "\n", "Incorporate this into a new `myspectrogram_hann(x,m,fs)` function. \n", "\n", "Try your function on the bird song again. This time the spectrogram should look very similar to Fig. 3. The location and size of the windows are shown in Fig. 5. Note that parts of the signal, at the ends of each window, are effectively not used. This can result in discontinuities in the temporal direction of the spectrogram.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
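As a quick sanity check that the broadcasting form is equivalent to the outer-product form above, here is a minimal sketch on a small random block matrix (the sizes `m` and `nt` are arbitrary illustrative values):

```python
import numpy as np

# hypothetical block matrix: m samples per block, nt blocks
m, nt = 8, 4
rng = np.random.default_rng(0)
xm = rng.standard_normal((m, nt))

# outer-product form: the same Hann window in each of the nt columns
xmw_outer = xm * np.outer(np.hanning(m), np.ones(nt))

# broadcasting form: the length-m window is stretched across the columns
xmw = xm * np.hanning(m)[:, None]
```

Both forms zero out the first and last sample of every block, since the Hann window is zero at its endpoints.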
\"hann_win\"
\n", "
Figure 5: Extracting segments of a long signal with a Hann window
" ] }, { "cell_type": "heading", "level": 4, "metadata": {}, "source": [ "Solution for Task 2:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "def myspectrogram_hann(x,m,fs):\n", " # Function breaks the signal into non-overlapping windows, multiplies by hann window, computes the DFT of each window\n", " # The function then calls sg_plot to display the result\n", " #\n", " # Inputs: \n", " # x - data\n", " # m - window size\n", " # fs - sampling rate\n", " \n", " # Your code here:\n", " \n", " \n", " \n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [ "# Plot The Spectrogram\n", "# Your Code Here:\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "plt.title( \"Spectrogram with Hann window\" )" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "heading", "level": 3, "metadata": {}, "source": [ "Task 3: An even better spectrogram!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a final enhancement, we will overlap the Hann windows, as shown in Fig. 6. Each block of the signal, and column of the `xm` matrix overlaps the adjacent blocks by 50%. There are a number of ways to do this in python. One is to replicate each half block in `xp` before the reshape. Another direct approach would be to program it in a `for` loop. Note that you also have to modify the time vector, since you have twice as many time samples. Write a new Incorporate this into your a new `myspectrogram_hann_ovlp(x,m,fs)` function. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Try this on the bird song. it should look much smoother than the previous case. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\"ol_hann_win\"
\n", "
Figure 6: Extracting segments of a long signal with 50% overlapping Hann windows
" ] }, { "cell_type": "heading", "level": 4, "metadata": {}, "source": [ "Solution for Task 3:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "def myspectrogram_hann_ovlp(x,m,fs, dbf=60):\n", " # Function breaks the signal into 50%-overlapping windows, multiplies by hann window, computes the DFT of each window\n", " # The function then calls sg_plot to display the result\n", " #\n", " # Inputs: \n", " # x - data\n", " # m - window size\n", " # fs - sampling rate\n", " \n", " # Your Code Here:\n", " \n", " \n", " \n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [ "# Plot The Spectrogram\n", "# Your Code Here:\n", "\n", "\n", "\n", "\n", "plt.title( \"Spectrogram with overlapping Hann window\" )" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "heading", "level": 3, "metadata": {}, "source": [ "Task 4: Time and frequency resolution." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So far we've just set the block size to 256 samples, which is about 5.8 ms of data. The frequency resolution is then 44100/256 = 172.27 Hz. Ideally we'd like to have both time and frequency resolution. However, one comes at the expense of the other. For example if we wanted to improve our frequency resolution to 86.13 Hz, it would increase the time between samples to 11.6 ms.\n", "\n", "The optimum tradeoff between the two factors depends on the nature of the spectrum you are analyzing. A good example is the orca whale call, signal s3. This has segments that are very rapidly changing, and segments where the changes are slow. Try different block sizes, from 128, 256, 512, and 1024 samples. Note how the frequency resolution improves, while the temporal resolution degrades." 
] }, { "cell_type": "heading", "level": 4, "metadata": {}, "source": [ "Solution for Task 4:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "\n", "# Your Code Here:\n", "\n", "\n", "\n", "\n", "\n", "\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "raw", "metadata": {}, "source": [ "Your Answer Here:\n", "\n", "\n", "\n", "\n", "\n" ] }, { "cell_type": "heading", "level": 3, "metadata": {}, "source": [ "Task 5: Time-Frequency plots of the radio-frequency spectrum with the SDR." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The samples that are obtained by the SDR represent a bandwidth of the spectrum around a center frequency. Hence, when demodulated to baseband (i.e., zero frequency) the signal is in general complex, since its Fourier transform is not symmetric. \n", "\n", "In this case, we would like to display both sides of the spectrum.\n", "\n", "\n", "Modify the function `myspectrogram_hann_ovlp(x,m,fs,fc)` such that it detects whether the input signal `x` is complex. In that case, it will compute a double-sided spectrum with the frequency range centered around `fc` (the center frequency). For this, it would be useful to use the commands `isreal` and `fftshift`.\n" ] }, { "cell_type": "heading", "level": 3, "metadata": {}, "source": [ "Solution:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "\n", " \n", "def myspectrogram_hann_ovlp(x, m, fs, fc, dbf = 60):\n", "    # Function breaks the signal into 50%-overlapping windows, multiplies by a Hann window, computes the DFT of each window\n", "    # The function then calls sg_plot to display the result. For complex inputs the function displays a two-sided spectrum.\n", "    #\n", "    # Inputs: \n", "    #     x  - data\n", "    #     m  - window size\n", "    #     fs - sampling rate\n", "    \n", "    # your code here:\n", "    \n", "    \n", "    \n", "    " ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will first look at the broadcast FM radio spectrum. 
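A sketch of the real/complex dispatch that Task 5 asks for, reduced to a single block (the spectrogram version applies the same logic to every column; `block_spectrum` and the values of `fs` and `fc` here are illustrative stand-ins, not part of the lab code):

```python
import numpy as np
from numpy.fft import fft, fftshift

def block_spectrum(x, fs, fc=0.0):
    # one-block illustration of the real/complex logic:
    #   real input    -> positive frequencies only, 0 .. fs/2
    #   complex input -> two-sided spectrum, fc - fs/2 .. fc + fs/2
    m = len(x)
    X = fft(x)
    if np.isreal(x).all():
        f = np.arange(m // 2) * fs / m
        return f, X[: m // 2]
    f = fc + (np.arange(m) - m // 2) * fs / m
    return f, fftshift(X)

fs = 8.0
n = np.arange(8)
# real input: a 1 Hz cosine; complex input: a -1 Hz exponential at fc = 100 Hz
f_r, X_r = block_spectrum(np.cos(2 * np.pi * n / 8), fs)
f_c, X_c = block_spectrum(np.exp(-2j * np.pi * n / 8), fs, fc=100.0)
```

For the real input the peak lands at +1 Hz; for the complex input the same tone appears at fc - 1 = 99 Hz on the two-sided, `fftshift`-ed axis.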
In the US the broadcast FM radio band is 88.1-107.9 MHz, split into 200 kHz slots. This is a relatively large bandwidth, and therefore it is called wideband FM, as opposed to narrowband FM, which can be as low as 5 kHz. In FM radio the information is encoded by modulating the frequency of the carrier, \n", "$\qquad \qquad y_c(t) = A\cos \left (2\pi f_c t + 2\pi \Delta f \int_0^t x(\tau) d\tau \right ).$\n", "\n", "Here, $f_c$ is the carrier frequency, $\Delta f$ is the frequency deviation and $x(t)$ is a normalized baseband signal.\n", "\n", "The broadcast FM baseband signal, $x(t)$, consists of the mono (Left + Right) channel from 30 Hz to 15 kHz, a pilot signal at $f_p=19$ kHz, the amplitude-modulated stereo (Left - Right) channel around $2\cdot f_p = 38$ kHz, and digital RBDS, which encodes the station information, song name, etc., at $3\cdot f_p = 57$ kHz. (See `http://en.wikipedia.org/wiki/FM_broadcasting` for more information.) \n", "\n", "The baseband signal is:\n", "\n", "$ \qquad \qquad x(t) = \underbrace{(L+R)}_{\mathrm{mono}} + \underbrace{0.1 \cdot \cos(2\pi f_p t)}_\mathrm{pilot} + \underbrace{(L-R)\cos(2\pi (2f_p) t)}_\mathrm{stereo} + \underbrace{0.05\cdot \mathrm{RBDS}(t)\cos(2\pi (3f_p) t)}_\mathrm{digital~ RBDS} + \mathrm{subcarrier~signals}. $\n", "\n", "The signal $\mathrm{RBDS}(t)$ consists of $\pm1$ values at constant intervals, which encode the 0 and 1 bits. The subcarriers are often FM-modulated signals of foreign stations, or carry other information.\n", "This is the spectrum of $x(t)$:\n", "\n", "
\"FM\"
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will listen to our Berkeley own KPFA 94.1MHz station. KPFA transmitter is located close to Grizzly Peak Road, close to where it meets Claremont Ave:\n", "\n", "\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Like always, you will get the best results if you collect samples outside. \n", "\n", "\n", "Task: Set the SDR to sample at a center frequency of 94.1MHz (KPFA) and a rate of 240KHz. Collect 8*256000, which is about 8 seconds of data. \n", "Compute and display a spectrogram with a window of 512 samples. Include the spectrogram in your report. What is the spectral and temporal resolution? Explain what you see. Don\u2019t forget to play with different dynamic ranges of the spectrogram for best visualization. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Your acquisition code here:\n", "\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [ "# Display the spectrogram:\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The plot does not resemble the broadcase FM spectrum figure above, since it is not yet FM demodulated. We can easily see that the signal is frequency modulated -- because its frequency looks like the time-signal of speech or music.\n", "\n", "The next step we are going to perform is to demodulate the FM signal and look at its spectrogram. For this we need to find the instantanious frequency as a function of time. There are many ways to implement this -- using a limiter, followed by a discriminator, a phase-locked-loop and more. However, because we have the digital samples it is very easy to compute the instantanious frequency by taking the derivative of the phase of the signal. 
Recall that $ y_c(t) = A\cos \left ( 2\pi f_c t + 2\pi \Delta f \int_0^t x(\tau) d\tau \right )$.\n", "The SDR demodulates by $f_c$, so the phase of the demodulated signal (the one we sample) is $\phi(t)=2\pi \Delta f \int_0^t x(\tau) d\tau$.\n", "To recover $x(t)$ we compute $x(t) \propto \frac{d\phi(t)}{dt}$. (We do not care much about the scaling of $x(t)$; it corresponds to the volume of the signal.) \n", "\n", "We can approximate the derivative of the phase by computing finite differences of the phase of our signal, i.e., $\phi[n]-\phi[n-1]=\mathrm{angle}(y[n])-\mathrm{angle}(y[n-1])$. The problem with phase is that it wraps! This will cause large finite-difference values when the phase is near $\pm\pi$.\n", "An alternative that is robust to phase wraps is to compute the finite difference by forming the product $y[n]\cdot y^*[n-1] = A_n A_{n-1}e^{j(\phi[n]-\phi[n-1])}$ and then taking its phase. Convince yourself that it works! \n", "\n", "FM demodulate the samples you got from the SDR and display the spectrogram. Note that after FM demodulation the signal should be real, and hence only half the spectrum should be displayed. Identify the mono audio, the pilot, the stereo and the RBDS signals. Note that the RBDS signal may be too weak to detect, or may need better spectral resolution. Can you identify the subcarriers? KPFA has two subcarriers; one plays French Haitian radio and the other Punjabi. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Your code Here:\n", "\n", "\n", "\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Task 6: Demodulate and play the mono and subcarrier signals\n", "\n", "In this task we will play the mono signal of the demodulated FM station. We will also demodulate the subcarriers and play them as well. \n", "\n", "The mono signal covers the frequency range of 0-16 kHz. However, there are many other signals present. 
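As a sanity check of the phase-difference discriminator described earlier, here is a sketch on a synthetic FM signal (the sampling rate, deviation and message tone are all arbitrary stand-ins for the real SDR samples):

```python
import numpy as np

fs = 240000.0        # stand-in sampling rate
f_dev = 10000.0      # stand-in frequency deviation
t = np.arange(4096) / fs
msg = np.cos(2 * np.pi * 1000.0 * t)              # baseband message
phase = 2 * np.pi * f_dev * np.cumsum(msg) / fs   # integral of the message
y = np.exp(1j * phase)                            # complex baseband FM signal

# wrap-robust discriminator: the angle of y[n] * conj(y[n-1])
x_hat = np.angle(y[1:] * np.conj(y[:-1]))

# known proportionality constant between x_hat and the message
scale = 2 * np.pi * f_dev / fs
```

Since the per-sample phase increment stays well below pi here, `np.angle` recovers the message exactly, up to the scale factor.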
There's another problem: our sampled signal is at a rate of 240 kHz, and the soundcard on most modern computers can only deal with a sampling rate of 48 kHz. We will therefore need to filter our signal and downsample it before being able to play it. \n", "\n", "To filter the signal we will use the SciPy signal processing filter design tool to design the filter parameters. The function we will use is `firwin`, which uses the window method, which we will cover later in class. It is very simple to use. To design a 128-tap filter (the number of taps is the length of the filter) that passes 16 kHz of a signal sampled at 240 kHz, the command is:\n", "\n", "`h = signal.firwin(128,16000.0,nyq=240000.0/2)`\n", "\n", "Create the filter and use it to filter the demodulated signal using `signal.fftconvolve`. Plot the resulting spectrogram of the filtered signal. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Your code here:\n", "\n", "\n", "\n", "\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In order to play the audio we will need to change the sampling rate. If we downsample by 5, the rate will be 48 kHz, which the computer can play. We can achieve this by taking every 5th sample. Since our signal is already confined to a maximum frequency of 16 kHz, we should be fine and not get aliasing. \n", "\n", "Downsample the signal. Scale it so that its range is between -1 and 1, and play it. \n", "\n", "Plot the spectrogram after downsampling. Make sure it makes sense! Do you see aliasing?\n", "\n", "Do you hear radio?\n", "\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Your code here:\n", "\n", "\n", "\n", "\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next step is to demodulate the subcarriers. 
To do this, we first demodulate the subcarrier to baseband. We then low-pass filter to separate it from the other signals, downsample to 48 kHz, and FM demodulate by taking the derivative of the phase. \n", "\n", "To demodulate the subcarrier to baseband -- zero frequency -- we need to:\n", "\n", "1. create a time index `t` representing the samples of the signal\n", "2. multiply the signal with $e^{-2\pi j f_0 t}$ \n", "\n", "Task: Demodulate the signal by $f_0=67.65$ kHz and plot the spectrogram. Since the signal is now complex, you should plot both sides of the spectrum. The subcarrier should be placed at DC (zero frequency)." ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Your code here:\n", "\n", "\n", "\n", "\n", "\n", "\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Design a 128-tap low-pass filter with a bandwidth of $\pm 7.5$ kHz in the passband. Filter the signal and decimate by a factor of 5 to get a signal with a sampling rate of 48 kHz. Plot the spectrogram (make sure you adjust the spectrogram to represent the new sampling rate!). Do you see aliasing?" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Your code here:\n", "\n", "\n", "\n", "\n", "\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "FM demodulate the signal by taking the derivative of the phase. Filter again with a $\pm7.5$ kHz low-pass filter of the same design, to get rid of unwanted noise (you cannot reuse the same filter; pay attention to the new sampling rate!). Scale the result to be within $\pm1$ and play the audio. Plot the spectrogram of the resulting signal. Can you hear French Haitian? 
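Putting the subcarrier steps together, here is a sketch of the mix / low-pass / decimate chain, run on a synthetic stand-in for the demodulated broadcast signal (a pure tone 1 kHz above the subcarrier; `fs=` is simply the modern spelling of firwin's `nyq=` argument used earlier):

```python
import numpy as np
from scipy import signal

fs = 240000.0
f0 = 67650.0                      # subcarrier frequency
t = np.arange(24000) / fs

# stand-in for the demodulated broadcast signal: a tone 1 kHz above f0
x = np.cos(2 * np.pi * (f0 + 1000.0) * t)

# 1. mix the subcarrier down to baseband
x_bb = x * np.exp(-2j * np.pi * f0 * t)

# 2. low-pass filter, +-7.5 kHz passband
h = signal.firwin(128, 7500.0, fs=fs)
x_lp = signal.fftconvolve(x_bb, h, mode='same')

# 3. decimate by 5 -> 48 kHz sampling rate
x_48 = x_lp[::5]
fs_48 = fs / 5
```

After the chain, the tone sits at 1 kHz in a 48 kHz-rate complex signal, just where the real subcarrier content would land before the final FM demodulation step.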
" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Your code here:\n", "\n", "\n", "\n", "\n", "\n", "\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Repeat the procedure to play the subcarrier at $f_0=92$ KHZ" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Your code here:\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [], "language": "python", "metadata": {}, "outputs": [] } ], "metadata": {} } ] }