Lab 3, Radio Communication Via a Computer Interface, Part I

Now that you have your radio and radio interface, we are ready to experiment with them. In this part of the lab we will learn how to use the interface and the radio, and make sure that everything is working correctly so that you can make progress on the second part as well as on the project. It is important that you start early, since you may run into technical difficulties.

Figure 1: The Computer-Radio Audio Interface

The interface you got has a Kenwood-style audio connector with a 2.5mm and a 3.5mm audio jack that connects to your Baofeng radio, a ground-loop isolation box, and two color-coded 3.5mm audio jacks that connect to the USB audio card supplied with the interface package. In order to transmit using the interface we will use the VOX (voice activation) feature of the radio. When in VOX mode, the radio will transmit whenever the input amplitude is above a certain threshold.

There are several steps that you have to go through before and after each time you work on this lab. Please make sure you follow these simple guidelines:

Before starting, confirm the following settings on the radio (you will probably need to change a few settings):

When starting:

During operation:

When finishing:

In []:
# Import functions and libraries
import numpy as np
import matplotlib.pyplot as plt
import pyaudio
import Queue
import threading,time
import sys

from numpy import pi
from numpy import sin
from numpy import zeros
from numpy import r_
from scipy import signal
from scipy import integrate

import multiprocessing

from rtlsdr import RtlSdr
from numpy import mean
from numpy import power
from numpy.fft import fft
from numpy.fft import fftshift
from numpy.fft import ifft
from numpy.fft import ifftshift

%matplotlib inline

Let's first define the spectrogram function, which we will use later in the lab.

In []:
# Plot an image of the spectrogram y, with the axis labeled with time tl,
# and frequency fl
#
# t_range -- time axis label, nt samples
# f_range -- frequency axis label, nf samples
# y -- spectrogram, nf by nt array
# dbf -- dynamic range of the spectrogram in dB

def sg_plot( t_range, f_range, y, dbf = 60) :
    eps = 1e-3
    
    # find maximum
    y_max = abs(y).max()
    
    # compute 20*log magnitude, scaled to the max
    y_log = 20.0 * np.log10( abs( y ) / y_max + eps )
    
    fig = plt.figure(figsize=(15,6))
    
    plt.imshow( np.flipud( 64.0*(y_log + dbf)/dbf ), extent= t_range  + f_range ,cmap=plt.cm.gray, aspect='auto')
    plt.xlabel('Time, s')
    plt.ylabel('Frequency, Hz')
    plt.tight_layout()


def myspectrogram_hann_ovlp(x, m, fs, fc,dbf = 60):
    # Plot the spectrogram of x.
    # Split the signal x into 50%-overlapping blocks of length m
    # and apply a Hann window to each block
    
    
    isreal_bool = np.isreal(x).all()
    
    # pad x up to a multiple of m 
    lx = len(x)
    nt = (lx + m - 1) // m
    x = np.append(x, zeros(-lx + nt*m))
    x = x.reshape((m//2, nt*2), order='F')
    x = np.concatenate((x, x), axis=0)
    x = x.reshape((m*nt*2, 1), order='F')
    x = x[r_[m//2:len(x), np.ones(m//2)*(len(x)-1)].astype(int)].reshape((m, nt*2), order='F')
    
    
    xmw = x * np.hanning(m)[:,None]
    
    
    # time and frequency ranges for the plot
    t_range = [0.0, lx / fs]
    
    if isreal_bool:
        f_range = [ fc, fs / 2.0 + fc]
        xmf = np.fft.fft(xmw,len(xmw),axis=0)
        sg_plot(t_range, f_range, xmf[0:m//2,:], dbf=dbf)
    else:
        f_range = [-fs / 2.0 + fc, fs / 2.0 + fc]
        xmf = np.fft.fftshift( np.fft.fft( xmw ,len(xmw),axis=0), axes=0 )
        sg_plot(t_range, f_range, xmf,dbf = dbf)
    
    return t_range, f_range, xmf

Buffered Audio I/O

In order to enable convenient real-time audio processing, we modified the audio I/O functions to use threading and Python queues. The nice thing about a queue is that it implements a buffered FIFO, which we fill with captured samples or with samples we would like to transmit.

We are also going to use a nice feature of PyAudio that lets you access different audio interfaces. For example, you can record audio from the USB dongle and play it on the computer's built-in speaker at the same time.

In []:
def play_audio( Q, p, fs , dev):
    # play_audio plays audio with sampling rate = fs
    # Q - A queue object from which to play
    # p   - pyAudio object
    # fs  - sampling rate
    # dev - device number
    
    # Example:
    # fs = 44100
    # p = pyaudio.PyAudio() #instantiate PyAudio
    # Q = Queue.Queue()
    # Q.put(data)
    # Q.put("EOT") # when function gets EOT it will quit
    # play_audio( Q, p, fs,1 ) # play audio
    # p.terminate() # terminate pyAudio
    
    # open output stream
    ostream = p.open(format=pyaudio.paFloat32, channels=1, rate=int(fs),output=True,output_device_index=dev)
    # play audio
    while (1):
        data = Q.get()
        if data=="EOT" :
            break
        try:
            ostream.write( data.astype(np.float32).tostring() )
        except:
            break
            
def record_audio( queue, p, fs ,dev,chunk=1024):
    # record_audio records audio with sampling rate = fs
    # queue - output data queue
    # p     - pyAudio object
    # fs    - sampling rate
    # dev   - device number 
    # chunk - chunks of samples at a time default 1024
    #
    # Example:
    # fs = 44100
    # Q = Queue.Queue()
    # p = pyaudio.PyAudio() #instantiate PyAudio
    # record_audio( Q, p, fs, 1) # 
    # p.terminate() # terminate pyAudio
    
   
    istream = p.open(format=pyaudio.paFloat32, channels=1, rate=int(fs),input=True,input_device_index=dev,frames_per_buffer=chunk)

    # record audio in chunks and push them into the queue
    while (1):
        try:  # stops when the pyaudio object is destroyed
            data_str = istream.read(chunk) # read a chunk of data
        except:
            break
        data_flt = np.fromstring( data_str, 'float32' ) # convert string to float
        queue.put( data_flt ) # push the chunk into the queue

To find the device numbers of the built-in input/output and the USB device, we wrote the following function, which searches for them. We made sure this works on our systems, but if you are having trouble finding a device, you should start by debugging this function (a short device-listing sketch follows it below)!

In []:
def audioDevNumbers(p):
    # din, dout, dusb = audioDevNumbers(p)
    # The function takes a pyaudio object
    # The function searches for the device numbers for built-in mic and 
    # speaker and the USB audio interface
    # some devices will have the name "Generic USB Audio Device". In that case, replace it with the right name.
    
    dusb = 'None'
    din = 'None'
    dout = 'None'
    if sys.platform == 'darwin':
        N = p.get_device_count()
        for n in range(0,N):
            name = p.get_device_info_by_index(n).get('name')
            if name == u'USB PnP Sound Device':
                dusb = n
            if name == u'Built-in Microph':
                din = n
            if name == u'Built-in Output':
                dout = n
    # Windows       
    else:
        N = p.get_device_count()
        for n in range(0,N):
            name = p.get_device_info_by_index(n).get('name')
            if name == u'USB PnP Sound Device':
                dusb = n
            if name == u'Microsoft Sound Mapper - Input':
                din = n
            if name == u'Microsoft Sound Mapper - Output':
                dout = n
                
    if dusb == 'None':
        print('Could not find a usb audio device')
    return din, dout, dusb
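
If audioDevNumbers does not find one of your devices, the name string probably does not match what PyAudio reports on your machine. A quick way to check (a minimal debugging sketch) is to print every device name and adjust the strings in the function accordingly:

In []:
# print the name of every audio device PyAudio can see
p = pyaudio.PyAudio()
for n in range(0, p.get_device_count()):
    print(p.get_device_info_by_index(n).get('name'))
p.terminate()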

Testing the Buffered Audio:

The first test/example would be to see if we can capture audio from the radio and play it on the computer.

In []:
# create input and output FIFO queues
Qin = Queue.Queue()
Qout = Queue.Queue()


# create a pyaudio object
p = pyaudio.PyAudio()

# find the device numbers for builtin I/O and the USB
din, dout, dusb = audioDevNumbers(p)

# initialize a recording thread. The USB device only supports a 44.1 kHz sampling rate
t_rec = threading.Thread(target = record_audio,   args = (Qin,   p, 44100, dusb  ))

# initialize a playing thread. 
t_play = threading.Thread(target = play_audio,   args = (Qout,   p, 44100, dout  ))

# start the recording and playing threads
t_rec.start()
t_play.start()

# record and play about 10 seconds of audio 430*1024/44100 = 9.98 s
for n in range(0,430):
    
    samples = Qin.get()
    
    
    # You can add code here to process the samples in chunks of 1024.
    # You will have to implement overlap-and-add or overlap-and-save to get
    # continuity between chunks (see the sketch after this cell).
    
    Qout.put(samples)

p.terminate()
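
If you do want to process the audio chunk-by-chunk, one standard way to keep the output continuous across the 1024-sample chunk boundaries is overlap-and-save block convolution. The following is only a sketch under assumed parameters (the 65-tap low-pass filter h and its cutoff are arbitrary illustrations, and filter_chunk is a hypothetical helper, not part of the lab code):

In []:
# sketch: filtering the stream chunk-by-chunk with overlap-and-save
h = signal.firwin(65, 0.25)       # example FIR low-pass (arbitrary cutoff)
M = len(h)
tail = zeros(M-1)                 # last M-1 samples of the previous chunk

def filter_chunk(samples):
    # prepend the saved tail, convolve, and keep only the fully-valid outputs
    global tail
    xin = np.append(tail, samples)
    yout = signal.fftconvolve(xin, h)[M-1 : M-1+len(samples)]
    tail = xin[-(M-1):]           # save the tail for the next chunk
    return yout

# inside the loop above you would then use:  Qout.put(filter_chunk(samples))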

Testing VOX Radio Transmit

The next step is to calibrate a few parameters so that the voice activation triggers reliably and with the right timing.

We will use the SDR to listen to the transmitted signal.

The VOX circuit has a certain delay before the radio starts transmitting. The following code generates a pure 2000Hz tone and plays it through the USB output. It also captures samples from the SDR and plots a spectrogram of the result.

Before running the code, set the right frequencies on the radio and in software (for the SDR). Modify the code by playing with the length of the pulse and find (roughly) the minimum length that triggers the VOX and the maximum length for which no sound is transmitted. You can also listen to the transmitted signal using a friend's radio. You will be able to see in the spectrogram when the radio starts transmitting and whether it is transmitting silence or the tone.

For us, the minimum length to activate was about 5 ms, and it took up to 100 ms before the tone was actually transmitted. This means that in order to guarantee transmission, we need to play a short pulse to activate the VOX and not start the real transmission until at least 0.1 second has passed. Once the VOX is on, it will stay on for about 2-4 seconds. It is probably better not to start the transmission until at least 0.25 seconds have passed, so the squelch on a receiving radio has time to open.

In []:
# SDR parameters 
fs = 240000
fc0 = 443.610e6   # set your frequency!
#apply frequency correction if your SDR needs it
fc = fc0*(1.0-85e-6)
sdr = RtlSdr()
sdr.sample_rate = fs    # sampling rate
sdr.gain = 10           # if the gain is not enough, increase it
sdr.center_freq = fc

# pyaudio parameters
p = pyaudio.PyAudio()
din, dout, dusb = audioDevNumbers(p)
Q = Queue.Queue()
t_play = threading.Thread(target = play_audio,   args = (Q,   p, 44100, dusb  ))
t_play.start()
In []:
# Modify the length of the pulse. You can just execute this portion
# as many times as you like without terminating the pyaudio object

tlen = 0.15 # in seconds
t=r_[0.0:tlen*44100.0]/44100
sig = 0.5*sin(2*pi*t*2000)
Q.put(sig)
y = sdr.read_samples(256000*4)

tt,ff,xmf = myspectrogram_hann_ovlp(y, 256, fs, fc,dbf = 60)
In []:
sdr.close()
p.terminate()
In []:
def genPTT(plen,zlen,fs):
    # Function generates a short pulse to activate the VOX
    #
    # plen - 2000Hz pulse length in ms
    # zlen - total length of the signal in ms (zero-padded)
    # fs   - sampling frequency in Hz
    #
    # the function returns the ptt signal

    # one possible implementation (a sketch): a 2000Hz tone of plen ms,
    # zero-padded with silence up to a total length of zlen ms
    Npulse = int(plen*fs/1000.0)              # number of tone samples
    Ntotal = int(zlen*fs/1000.0)              # total number of samples
    t = r_[0.0:Npulse]/fs
    ptt = zeros(Ntotal)
    ptt[:Npulse] = 0.5*sin(2*pi*2000.0*t)
    return ptt

Measuring the Frequency Response of the Radio's Bandpass Audio Filter

The audio input to the radio is filtered by a bandpass filter. Because later we are going to use the audio interface to transmit data, we need to know how this data is going to be affected by the filter. Much like in Lab 1, we will use a chirp signal to estimate the magnitude frequency response. We will transmit with the radio and receive using the SDR.
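
Before you write your own version in the empty cell below, here is a rough sketch of one possible way to do it. The sweep range, duration, and amplitude are arbitrary choices, and it assumes the play thread, queue Q, and SDR object from the VOX test above are still set up (re-run that setup cell if you have already closed them):

In []:
# sketch: transmit an audio chirp through the radio and capture it with the SDR
fs_audio = 44100
T = 3.0                                               # chirp duration in seconds
t = r_[0.0:T*fs_audio]/fs_audio
fchirp = signal.chirp(t, f0=300.0, t1=T, f1=8000.0)   # 300Hz to 8kHz linear sweep
Q.put(genPTT(100, 400, fs_audio))                     # key up the VOX first
Q.put(0.5*fchirp)                                     # then transmit the chirp
y_chirp = sdr.read_samples(256000*4)                  # capture the FM signal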

In []:

In order to look at the frequency response, we will need to FM demodulate the signal.
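
For reference, one common discriminator (a sketch, not necessarily the method you are expected to use, with a hypothetical helper name) takes the phase difference between consecutive complex baseband samples, which is proportional to the instantaneous frequency. Here it is applied to the y_chirp capture from the sketch above:

In []:
# sketch of a simple FM discriminator applied to the captured SDR samples
def fm_discriminator(y):
    # phase difference between consecutive samples ~ instantaneous frequency
    return np.angle(y[1:] * np.conj(y[:-1]))

demod = fm_discriminator(y_chirp)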

In []:

Now, use the demodulated chirp to estimate the magnitude frequency response of the radio's audio bandpass filter.
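
For reference, one way to visualize it (just a sketch; the decimation factor and window length are arbitrary choices) is to decimate the demodulated signal down to an audio-like rate and look at its spectrogram: the brightness along the chirp trace reflects the magnitude response of the filter.

In []:
# sketch: spectrogram of the (decimated) demodulated chirp
demod_dec = signal.decimate(demod, 10)                       # 240kHz -> 24kHz
tt, ff, xmf = myspectrogram_hann_ovlp(demod_dec, 512, 240000.0/10, 0, dbf=60)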

In []:

Transmitting your callsign in Morse code

The next step is to see if you can transmit something more meaningful. If you are going to transmit for the first time using a computer, you might as well transmit your callsign in Morse code!

Morse code is composed of dots (. "dit") and dashes (- "dah"). Timing is relative to the dot duration, which is one unit long. A dah is three units long. The gap between dots and dashes within a character is one unit, the gap between letters is three units, and the gap between words is seven units.

Here's a dictionary of Morse code:

In []:
def text2Morse(text,fc,fs,dt):
    CODE = {'A': '.-',     'B': '-...',   'C': '-.-.', 
        'D': '-..',    'E': '.',      'F': '..-.',
        'G': '--.',    'H': '....',   'I': '..',
        'J': '.---',   'K': '-.-',    'L': '.-..',
        'M': '--',     'N': '-.',     'O': '---',
        'P': '.--.',   'Q': '--.-',   'R': '.-.',
        'S': '...',    'T': '-',      'U': '..-',
        'V': '...-',   'W': '.--',    'X': '-..-',
        'Y': '-.--',   'Z': '--..',
        
        '0': '-----',  '1': '.----',  '2': '..---',
        '3': '...--',  '4': '....-',  '5': '.....',
        '6': '-....',  '7': '--...',  '8': '---..',
        '9': '----.',

        ' ': ' ', "'": '.----.', '(': '-.--.',  ')': '-.--.-',
        ',': '--..--', '-': '-....-', '.': '.-.-.-',
        '/': '-..-.',   ':': '---...', ';': '-.-.-.',
        '?': '..--..', '_': '..--.-'
        }
    
    # your code here:
    
In []:
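
The conversion of the text itself is left for you, but the timing logic can be sketched as follows (a sketch only, with a hypothetical helper name; fc is the tone frequency in Hz, fs the sampling rate in Hz, and dt the dot duration in seconds). It converts a string of dots and dashes, e.g. CODE['K'] == '-.-', into an audio signal:

In []:
# sketch of the dit/dah timing: 1 unit of tone per dot, 3 units per dash,
# 1 unit of silence after every symbol (not the official solution)
def morse2sig(morse, fc, fs, dt):
    dit = 0.5*sin(2*pi*fc*r_[0.0:dt*fs]/fs)
    dah = 0.5*sin(2*pi*fc*r_[0.0:3*dt*fs]/fs)
    gap = zeros(int(dt*fs))
    sig = zeros(0)
    for c in morse:
        if c == '.':
            sig = np.append(sig, np.append(dit, gap))
        elif c == '-':
            sig = np.append(sig, np.append(dah, gap))
    return sig

# text2Morse would then look up CODE[c] for each character of text, concatenate
# morse2sig(...) for each, and insert extra silence so that letters are separated
# by 3 units and words by 7 units, before pushing the result into the play queue.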