Originally designed by John Pauly, modified and extended by Michael Lustig, and translated to Python by Frank Ong.

This week we will look at the processing and spectra of time-varying signals. In the first part of the lab we will look at the short-time Fourier transform and spectrograms, and use them to analyze audio signals and broadcast FM. In the second part we will look at frequency offset correction of the SDR using GSM cellphone basestation signals. In the last part we will look at generating digital packets using frequency modulation.

In []:

```
# Import general functions and libraries
import numpy as np
import matplotlib.pyplot as plt
import pyaudio
from numpy import pi
from numpy import sin
from numpy import zeros
from numpy import ones
from numpy import r_
from scipy import signal
from numpy import hanning
from numpy import outer
from rtlsdr import RtlSdr
from numpy import mean
from numpy import power
from numpy.fft import fft
from numpy.fft import fftshift
from numpy.fft import ifft
from numpy.fft import ifftshift
from numpy import isreal
#Task VI
%matplotlib inline
```

In class we used the DFT to estimate the frequency of a segment of a signal. When we do this, we are implicitly assuming that the frequency is constant over that interval. This is a reasonable assumption for some signals, especially over very short time windows.

There are many times when we'd like to analyze signals whose frequency is changing over time. In fact, most signals aren't interesting unless they do change! There are many examples, including speech, music, and the sounds that surround you in daily life. In this lab we will learn how to process these signals to determine how their spectrum changes with time.

The basic problem is that we have a long segment of a signal \(x[n]\), where \(n=0, ... ,N-1\). We want to know what its frequency is as a function of time. There are two basic approaches. One is to pass the signal through a bank of bandpass filters, and plot the outputs of the filters as a function of time.

The second approach is based on the short-time Fourier transform and is to break the signal up into short segments, and compute the spectrum of each of these separately. This is the approach we will use in this lab.

A sample signal is shown in Figure 1. You've seen this signal during the class. Obviously you can tell that the amplitude is changing as a function of time. When you examine the signal closely, you can also tell that the frequency is changing. Since the frequency is changing, we want to break the signal into segments over which the frequency is relatively constant. Then we will analyze each segment using the DFT. The result of analyzing the first 256 samples, and another block of 256 samples that is 1024 samples later is shown in Figure 2. As is the convention with the DFT, zero frequency is the first sample, and the Nyquist rate corresponds to the middle sample. The sampling rate in this example was 8192Hz, so the maximum frequency is 4096Hz. Note that the frequency has dropped about 300 Hz between the first and second block.
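As a quick illustration of the idea (using a synthetic chirp rather than the actual signal from Figure 1, so the numbers here are made up), the peak of each block's DFT gives the dominant frequency of that block, and comparing two blocks reveals the drift:

```python
import numpy as np

fs = 8192                                   # sampling rate used in the example above
t = np.arange(4096) / fs
x = np.sin(2*np.pi*(2000*t - 300*t**2))     # synthetic chirp: frequency falls over time

def block_peak_freq(x, start, m=256, fs=8192):
    # dominant frequency of one m-sample block, from the peak of its DFT
    X = np.fft.fft(x[start:start+m])
    k = np.argmax(np.abs(X[:m//2]))         # search positive frequencies only
    return k * fs / m

f1 = block_peak_freq(x, 0)      # first block
f2 = block_peak_freq(x, 1024)   # a block 1024 samples later
print(f1, f2)                   # the later block peaks at a lower frequency
```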

For a long signal, there would be a tremendous number of plots that would have to be examined, and this doesn't convey the information very effectively. Instead, the most common presentation is to display the information in an image format, with frequency on the vertical axis, time on the horizontal axis, and the value of a particular pixel being the magnitude of the spectrum at that time and frequency. For the first 4 tasks, we assume that the signal is real and the magnitude of its spectrum is symmetric about the origin, so we need only display the positive frequencies. Later, in task 5, where we acquire data using the SDR, the signal is going to be complex and we will need to display both positive and negative frequencies. This presentation is known as a spectrogram.

An example of this type of plot for the signal that Fig. 1 was taken from is shown in Fig. 3:
Figure 3: A segment of a sampled bird song.

A function that takes a two-dimensional array y and makes an image of the log-magnitude is given below:

In []:

```
# Plot an image of the spectrogram y, with the axis labeled with time tl,
# and frequency fl
#
# t_range -- time axis label, nt samples
# f_range -- frequency axis label, nf samples
# y -- spectrogram, nf by nt array
# dbf -- Dynamic range of the spect
def sg_plot( t_range, f_range, y, dbf = 60 ):
    eps = 1e-3
    # find maximum
    y_max = abs(y).max()
    # compute 20*log magnitude, scaled to the max
    y_log = 20.0 * np.log10( abs( y ) / y_max + eps )
    fig = plt.figure(figsize=(16,6))
    plt.imshow( np.flipud( 64.0*(y_log + dbf)/dbf ), extent=t_range + f_range, cmap=plt.cm.gray, aspect='auto')
    plt.xlabel('Time, s')
    plt.ylabel('Frequency, Hz')
    plt.tight_layout()
```

The only parameter of note is dbf, which determines the dynamic range in dB that will be presented. The default is 60 dB, or a factor of 1000. This allows you to see small spectral components without seeing all of the background noise, and is reasonably suitable for our purposes here. For other applications with higher or lower noise levels, you may want to change this. For example, in Task 5 the signal-to-noise ratio coming out of the SDR will require adjusting the dynamic range (somewhere between 20 and 40 dB was appropriate for my reception).

Several different sound files are available on the class web site for you to work with. These are:

- `s1`: A bird call.
- `s2`: A creaking door.
- `s3`: An orca whale call.
- `s4`: A sound effect.
- `s5`: A 24-week fetal Doppler ultrasound.

Read each .wav file with the function `data = read_wav( filename )`

and play them with the following commands:

```
p = pyaudio.PyAudio() # instantiate PyAudio
play_audio( data, p, fs ) # play audio
```

Here are the definitions of these functions that read wav files and play them.

In []:

```
import pyaudio
import wave
# function that reads a wav file into a float32 array
def read_wav( wavname ):
    wf = wave.open(wavname, 'rb')
    CHUNK = 1024
    frames = []
    data_str = wf.readframes(CHUNK) # read a chunk
    while len(data_str) > 0:
        data_int = np.frombuffer( data_str, 'int16' ) # convert from bytes to int (assumes .wav is int16)
        data_flt = data_int.astype( np.float32 ) / 32767.0 # convert from int to float32
        frames.append( data_flt ) # append to list
        data_str = wf.readframes(CHUNK) # read the next chunk
    return np.concatenate( frames )

def play_audio( data, p, fs ):
    # data - audio data array
    # p - pyAudio object
    # fs - sampling rate
    # open output stream
    ostream = p.open(format=pyaudio.paFloat32, channels=1, rate=fs, output=True)
    # play audio
    ostream.write( data.astype(np.float32).tobytes() )
```

Task: Play all the wav files

In []:

```
## Play sounds
# instantiate PyAudio
p = pyaudio.PyAudio()
fs = 44100
# Your code here:
p.terminate()
```

Try to visualize what you would expect the spectrum to look like as a function of time. In each case, the sampling rate is 44100 Hz.

For our first attempt at making a spectrogram, we will simply take the original signal of length N, and split it into blocks of length M. We then compute the DFT of each block. This corresponds to using a rectangular window to extract each block from the signal, as is illustrated in Fig. 4.

Write a Python function that computes the spectrogram of a signal. If the original signal is a vector `x`, it should:

1. Break the signal up into `m`-sample blocks, stored in the columns of a 2D matrix `xm`. This may require padding the signal with zeros, so that its length is a multiple of the block size.
2. Apply the FFT to the matrix using `xmf = fft(xm, len(xm), axis=0)`. This operation applies the FFT along each column of the matrix. It does not do a 2D FFT.
3. Compute ranges for time, `t_range`, and frequency, `f_range`.
4. Call `sg_plot(t_range, f_range, xmf[1:m//2, :])`, where we are only plotting the positive frequencies.

Your routine should be invoked as `myspectrogram(x, m, fs)`, where `x` is the signal, `m` is the block size and `fs` is the sampling rate in Hz.

One trick that can help you here is the reshape command. To create the `xm` matrix, first zero-pad `x` to be a multiple of `m` in length:

```
# pad x up to a multiple of m
lx = len(x)
nt = (lx + m - 1) // m  # // enforces integer division
xp = append(x, zeros(nt*m - lx))
```

Then use reshape to make it an m by nt matrix in column-major fashion. The order 'F' means Fortran order, which is column first, as opposed to the 'C' order, which is C language order and is row first. Matlab uses column order by default, whereas numpy defaults to row order:

`xm = reshape( xp, (m,nt), order='F')`

Each column of `xm` is one block of `x`.

To compute the time and frequency ranges, recall that the DFT frequencies go from `0` to the sampling frequency `fs` Hz in steps of `fs/m` Hz. We are only going to plot the positive frequencies, so

` f_range = [0.0, fs / 2.0]`

The time of a particular block is the period of one sample, `1/fs` seconds, multiplied by the number of samples in the block. If there are nt blocks, the time range is

` t_range = [0.0, lx / fs]`
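If you want to check your indexing, the padding/reshape recipe assembles like this (a sketch that returns the arrays instead of plotting, so it can be tested; it is not necessarily the cleanest solution):

```python
import numpy as np

def myspectrogram_arrays(x, m, fs):
    # sketch of the assembly: non-overlapping m-sample blocks, rectangular window
    lx = len(x)
    nt = (lx + m - 1) // m                   # number of blocks
    xp = np.append(x, np.zeros(nt*m - lx))   # zero-pad to a multiple of m
    xm = np.reshape(xp, (m, nt), order='F')  # one block per column
    xmf = np.fft.fft(xm, len(xm), axis=0)    # FFT along each column
    t_range = [0.0, lx / fs]
    f_range = [0.0, fs / 2.0]
    return t_range, f_range, xmf[:m//2, :]   # positive frequencies only
```

Your `myspectrogram(x, m, fs)` would pass the three return values straight to `sg_plot`.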

Try your spectrogram routine with the bird call sound file, `s1`, with a block size of `256` samples. Note that the sampling rate of the data is `44100` Hz (Compact Disc rate). The spectrogram should look vaguely reminiscent of Fig. 3. However, there is noticeable streaking in the spectral dimension. Include a copy of this spectrogram in your report.

In []:

```
def myspectrogram(x, m, fs):
    # Breaks the signal into non-overlapping windows and computes the DFT of each window,
    # then calls sg_plot to display the result.
    #
    # Inputs:
    #   x  - data
    #   m  - window size
    #   fs - sampling rate
    # Your code here
```

In []:

```
# Plot The Spectrogram
# Your Code Here:
plt.title( "Spectrogram with Rect window" )
```

Similarly to what we have seen in class, the problem with the spectrogram from task 1 is that we have used a square window to cut out blocks of the signal. In the spectral domain this corresponds to convolving a periodic sinc with the spectra. The sinc sidelobes are high, and fall off slowly. We need to do something a little more gentle with the data.

To improve our spectrogram, we will extract each block of data with a Hann window. We can do this by multiplying our `xm` matrix with a matrix that has a Hann window along its columns:

`xmw = xm * outer(hanning(m), ones(nt) )`

The `outer(hanning(m), ones(nt) )` term is a matrix that has the same Hann window in each of the nt columns.

Another, more "pythonic" way is to use broadcasting:

` xmw = xm * hanning(m)[:,None]`

Incorporate this into a new `myspectrogram_hann(x,m,fs)` function.
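The two forms are equivalent; a quick sanity check:

```python
import numpy as np

m, nt = 8, 4
xm = np.arange(m * nt, dtype=float).reshape(m, nt, order='F')
windowed_outer = xm * np.outer(np.hanning(m), np.ones(nt))
windowed_bcast = xm * np.hanning(m)[:, None]   # broadcasting: same result
print(np.allclose(windowed_outer, windowed_bcast))   # True
```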

Try your function on the bird song again. This time the spectrogram should look very similar to Fig. 3. The location and size of the windows are shown in Fig. 5. Note that parts of the signal, at the ends of each window, are effectively not used. This can result in discontinuities in the temporal direction of the spectrogram.

In []:

```
def myspectrogram_hann(x, m, fs):
    # Breaks the signal into non-overlapping windows, multiplies each by a Hann window,
    # computes the DFT of each window, then calls sg_plot to display the result.
    #
    # Inputs:
    #   x  - data
    #   m  - window size
    #   fs - sampling rate
    # Your code here:
```

In []:

```
# Plot The Spectrogram
# Your Code Here:
plt.title( "Spectrogram with Hann window" )
```

As a final enhancement, we will overlap the Hann windows, as shown in Fig. 6. Each block of the signal, and hence each column of the `xm` matrix, overlaps the adjacent blocks by 50%. There are a number of ways to do this in Python. One is to replicate each half block in `xp` before the reshape. Another direct approach is to program it in a `for` loop. Note that you also have to modify the time vector, since you have twice as many time samples. Incorporate this into a new `myspectrogram_hann_ovlp(x,m,fs)` function.
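One way (of the several mentioned above) to build the 50%-overlapping block matrix is a short loop. This is just a sketch, with a hop of half a window:

```python
import numpy as np

def overlap_blocks(x, m):
    # m-sample blocks overlapping by 50% (hop of m//2), one block per column
    hop = m // 2
    nt = (len(x) + hop - 1) // hop                      # number of blocks
    xp = np.append(x, np.zeros(nt*hop + hop - len(x)))  # pad so the last block is full
    return np.stack([xp[i*hop : i*hop + m] for i in range(nt)], axis=1)
```

Remember that each block now advances by only `m//2` samples, so the time axis has twice as many points.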

Try this on the bird song. It should look much smoother than the previous case.

In []:

```
def myspectrogram_hann_ovlp(x, m, fs, dbf=60):
    # Breaks the signal into 50%-overlapping windows, multiplies each by a Hann window,
    # computes the DFT of each window, then calls sg_plot to display the result.
    #
    # Inputs:
    #   x  - data
    #   m  - window size
    #   fs - sampling rate
    # Your code here:
```

In []:

```
# Plot The Spectrogram
# Your Code Here:
plt.title( "Spectrogram with overlapping Hann window" )
```

So far we've just set the block size to 256 samples, which is about 5.8 ms of data. The frequency resolution is then 44100/256 = 172.27 Hz. Ideally we'd like to have both good time and good frequency resolution. However, one comes at the expense of the other. For example, if we wanted to improve our frequency resolution to 86.13 Hz, it would increase the time between samples to 11.6 ms.

The optimum tradeoff between the two factors depends on the nature of the spectrum you are analyzing. A good example is the orca whale call, signal `s3`. It has segments that change very rapidly, and segments where the changes are slow. Try different block sizes: 128, 256, 512, and 1024 samples. Note how the frequency resolution improves, while the temporal resolution degrades.
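The tradeoff is easy to tabulate: for each block size m, the frequency resolution is fs/m Hz and the block duration is m/fs seconds.

```python
fs = 44100
for m in [128, 256, 512, 1024]:
    # frequency resolution fs/m in Hz, block duration m/fs in ms
    print(f"m = {m:4d}: freq res = {fs/m:7.2f} Hz, block length = {1000*m/fs:5.1f} ms")
```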

In []:

```
# Your Code Here:
```

The samples obtained by the SDR represent a band of the spectrum around a center frequency. Hence, when demodulated to baseband (i.e., zero frequency) the signal must be complex-valued, since its Fourier transform is not symmetric.

In this case, we would like to display both sides of the spectrum.

Modify the function `myspectrogram_hann_ovlp(x,m,fs,fc)` so that it detects whether the input signal `x` is complex. In that case, it should compute a double-sided spectrum with the frequency range centered around fc (the center frequency). For this, the commands `isreal` and `fftshift` are useful.
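The real-vs-complex branch might look like this (a sketch; the helper name and return convention are mine, not part of the assignment):

```python
import numpy as np

def pick_spectrum(x, xmf, m, fs, fc=0.0):
    # choose a one- or two-sided display depending on whether x is real
    if np.isreal(x).all():
        # real signal: spectrum is symmetric, keep positive frequencies only
        return xmf[:m//2, :], [0.0, fs/2.0]
    # complex signal: fftshift each column, frequency axis centered on fc
    return np.fft.fftshift(xmf, axes=0), [fc - fs/2.0, fc + fs/2.0]
```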

In []:

```
def myspectrogram_hann_ovlp(x, m, fs, fc, dbf=60):
    # Breaks the signal into 50%-overlapping windows, multiplies each by a Hann window,
    # computes the DFT of each window, then calls sg_plot to display the result.
    # For complex inputs the function displays a two-sided spectrum.
    #
    # Inputs:
    #   x  - data
    #   m  - window size
    #   fs - sampling rate
    # Your code here:
```

We will first look at the broadcast FM radio spectrum. In the US the broadcast FM band is 88.1-107.9MHz, split into 200KHz slots. This is a relatively large bandwidth, and it is therefore also called wideband FM, as opposed to narrowband FM, which can be as low as 5 KHz. In FM radio the information is encoded by modulating the frequency of the carrier,

\(\qquad \qquad y_c(t) = A\cos \left (2\pi f_c t + 2\pi \Delta f \int_0^t x(\tau) d\tau \right ).\)

Here, \(f_c\) is the carrier frequency, \(\Delta f\) is the frequency deviation and \(x(t)\) is a normalized baseband signal.

The broadcast FM baseband signal, \(x(t)\), consists of mono (Left+Right) Channels from 30Hz to 15 KHz, a pilot signal at \(f_p=19\) KHz, amplitude modulated Stereo (Left - Right) channels around \(2\cdot f_p = 38\)KHz and digital RBDS, which encodes the station information, song name etc. at \(3\cdot f_p =57\)KHz. (See `http://en.wikipedia.org/wiki/FM_broadcasting`

for more information).

The baseband signal is:

\(\qquad \qquad x(t) = \underbrace{(L+R)}_{\mathrm{mono}} + \underbrace{0.1 \cdot \cos(2\pi f_p t)}_\mathrm{pilot} + \underbrace{(L-R)\cos(2\pi (2f_p) t)}_\mathrm{stereo} + \underbrace{0.05\cdot \mathrm{RBDS}(t)\cos(2\pi (3f_p) t)}_\mathrm{digital~ RBDS} + \mathrm{subcarriers~signals}.\)

The signal \(\mathrm{RBDS}(t)\) consists of \(\pm1\) values at constant intervals, which encode 0 and 1 bits. The subcarriers are often FM modulated signals of foreign stations or carry other information. This is the spectrum of \(x(t)\):

We will listen to Berkeley's own KPFA, 94.1MHz. The KPFA transmitter is located close to Grizzly Peak Road, near where it meets Claremont Ave:

Like always, you will get the best results if you collect samples outside.

Task: Set the SDR to a center frequency of 94.1MHz (KPFA) and a sampling rate of 240KHz. Collect 8*256000 samples, which is about 8 seconds of data. Compute and display a spectrogram with a window of 512 samples. Include the spectrogram in your report. What is the spectral and temporal resolution? Explain what you see. Don't forget to play with different dynamic ranges of the spectrogram for the best visualization.

In []:

```
# Your acquisition code here:
```

In []:

```
# Display the spectrogram:
```

The plot does not resemble the broadcast FM spectrum figure above, since the signal is not yet FM demodulated. We can easily see that the signal is frequency modulated -- because its frequency looks like the time signal of speech or music.

The next step is to demodulate the FM signal and look at its spectrogram. For this we need to find the instantaneous frequency as a function of time. There are many ways to implement this -- using a limiter followed by a discriminator, a phase-locked loop, and more. However, because we have the digital samples, it is very easy to compute the instantaneous frequency by taking the derivative of the phase of the signal. Recall that \(y_c(t) = A\cos \left ( 2\pi f_c t + 2\pi \Delta f \int_0^t x(\tau) d\tau \right )\). The SDR demodulates by \(f_c\), so the phase of the demodulated signal (the one we sample) is \(\phi(t)=2\pi \Delta f \int_0^t x(\tau) d\tau\). To recover \(x(t)\) we compute \(\frac{d\phi(t)}{dt}\). (We do not care much about the scaling of \(x(t)\); it will only affect the volume of the signal.)

We can approximate the derivative of the phase by computing finite differences of the phase of our signal, i.e., \(\phi[n]-\phi[n-1]=\mathrm{angle}(y[n])-\mathrm{angle}(y[n-1])\). The problem with phase is that it wraps! This causes large finite-difference values whenever the phase wraps around \(\pm\pi\). An alternative that is robust to phase wraps is to compute the product \(y[n]\cdot y^*[n-1] = A_n A_{n-1}e^{j(\phi[n]-\phi[n-1])}\) and then take its phase. Convince yourself that this works!
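In NumPy the product trick is a one-liner (a sketch; `fm_discriminator` is just an illustrative name):

```python
import numpy as np

def fm_discriminator(y):
    # phase of y[n] * conj(y[n-1]) is the phase difference, robust to wrapping
    return np.angle(y[1:] * np.conj(y[:-1]))

# sanity check: a pure tone at 0.05 cycles/sample gives a constant output
tone = np.exp(2j*np.pi*0.05*np.arange(200))
print(np.allclose(fm_discriminator(tone), 2*np.pi*0.05))   # True
```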

FM demodulate the samples you got from the SDR and display the spectrogram. Note that after FM demodulation the signal should be real, and hence only half the spectrum should be displayed. Identify the mono audio, the pilot, the stereo, and the RBDS signals. Note that the RBDS signal may be too weak to detect, or may need better spectral resolution. Can you identify the subcarriers? KPFA has two subcarriers: one plays French Haitian radio and the other Punjabi.

In []:

```
# Your code Here:
```

In this task we will play the mono signal of the demodulated FM station. We will also demodulate the subcarriers and play them as well.

The mono signal covers the frequency range 0-16KHz. However, there are many other signals present. There is another problem: our sampled signal is at a rate of 240KHz, but the soundcard on most modern computers can only deal with a sampling rate of up to 48KHz. We will therefore need to filter our signal and downsample it before we can play it.

To filter the signal we will use the SciPy signal processing filter design tools. The function we will use is `firwin`, which uses the window method, which we will cover later in class. It is very simple to use. To design a 128-tap (length of the filter) filter that passes 16KHz of a signal sampled at 240KHz, the command is:

`h = signal.firwin(128,16000.0,nyq=240000.0/2)`

Create the filter and use it to filter the demodulated signal using `fftconvolve`. Plot the spectrogram of the filtered signal.
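Putting the two together might look like this (a sketch with a random stand-in for the demodulated signal; note that recent SciPy versions replace the `nyq=` argument with `fs=`):

```python
import numpy as np
from scipy import signal

fs = 240000.0
h = signal.firwin(128, 16000.0, fs=fs)   # 128-tap low-pass, 16KHz cutoff
x_demod = np.random.randn(24000)         # stand-in for the FM-demodulated signal
x_filt = signal.fftconvolve(x_demod, h, mode='same')
```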

In []:

```
# Your code here:
```


In order to play the audio we will need to change the sampling rate. If we downsample by a factor of 5, the rate will be 48KHz, which the computer can play. We can achieve this by taking every 5th sample. Since our signal is already confined to a maximum frequency of 16KHz, we should not get aliasing.

Downsample the signal. Scale it so that its range is between -1 and 1 and play it.
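The downsample-and-scale step itself is short (a sketch, again with a stand-in signal):

```python
import numpy as np

x_filt = np.random.randn(240000)     # stand-in for the low-pass filtered signal
x_ds = x_filt[::5]                   # keep every 5th sample: 240KHz -> 48KHz
x_ds = x_ds / np.max(np.abs(x_ds))   # scale into [-1, 1] for playback
```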

Plot the spectrogram after downsampling. Make sure it makes sense! Do you see aliasing?

Do you hear radio?

In []:

```
# Your code here:
```

The next step is to demodulate the subcarriers. To do this, we will first demodulate the subcarrier to baseband, then low-pass filter to separate it from the other signals, then downsample it to 48KHz, and finally FM demodulate it by taking the derivative of the phase.

To demodulate the subcarrier to baseband -- or zero frequency -- we will need to:

- create a time index `t` representing the samples of the signal
- multiply the signal with \(e^{-2\pi j f_0 t}\)
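In code the two steps are (a sketch; the `f0` value is one of the subcarrier frequencies mentioned in this lab, and the signal is a stand-in):

```python
import numpy as np

fs = 240000.0
f0 = 67650.0                          # subcarrier frequency to demodulate
x = np.random.randn(240000)           # stand-in for the FM-demodulated signal
t = np.arange(len(x)) / fs            # time index of each sample
x_bb = x * np.exp(-2j*np.pi*f0*t)     # shift the subcarrier down to 0 Hz
```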

Task: Demodulate the signal by \(f_0=67.65\)KHz and plot the spectrogram. Since the signal is now complex, you should plot both sides of the spectrum. The subcarrier should be centered at DC (zero frequency).

In []:

```
# Your code here:
```

Design a 128-tap low-pass filter with a passband of \(\pm 7.5\)KHz. Filter the signal and decimate by a factor of 5 to get a signal with a sampling rate of 48KHz. Plot the spectrogram (make sure you adjust the spectrogram to reflect the new sampling rate!). Do you see aliasing?

In []:

```
# Your code here:
```

FM demodulate the signal by taking the derivative of the phase. Filter again with the same \(\pm7.5\)KHz low-pass specification to get rid of unwanted noise (you cannot reuse the same filter -- pay attention to the sampling rate!). Scale the result to be within \(\pm1\) and play the audio. Plot the spectrogram of the resulting signal. Can you hear French Haitian?

In []:

```
# Your code here:
```

Repeat the procedure to play the subcarrier at \(f_0=92\)KHz.

In []:

```
# Your code here:
```
