{ "metadata": { "name": "", "signature": "sha256:7f1dbc29cfc3092b2ea978c684b5ed69a8032d788b77a3ca7f673a06f31827f0" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Lab 3, Radio Communication Via a Computer Interface, Part III: AFSK, AX.25 and APRS\n", "\n", "In this part of the lab we are going to experiment with digital modulation and communication. Network communication systems have a layered architecture. The bottom layer is the physical layer, which implements the modulation. Here we will use [AFSK](http://en.wikipedia.org/wiki/Frequency-shift_keying), which is a form of BFSK in the audio range (hence the 'A'). We will write a modulator/demodulator for AFSK. In addition, we will leverage [AX.25](http://www.tapr.org/pub_ax25.html), which is an amateur-radio data-link layer protocol. [AX.25](http://www.tapr.org/pub_ax25.html) is a packet-based protocol that will help us transmit data using packets. It implements basic synchronization, addressing, data encapsulation and some error detection. In the ham world, an implementation of AFSK and [AX.25](http://www.tapr.org/pub_ax25.html) together is also called a [TNC (Terminal Node Controller)](http://en.wikipedia.org/wiki/Terminal_node_controller). In the past, TNCs were separate boxes that hams attached to their radios for packet-based communication. Today, it is easy to implement a TNC in software using the computer's soundcard... as you will see here! 
\n", "\n", "\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Import functions and libraries\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import pyaudio\n", "import Queue\n", "import threading,time\n", "import sys\n", "\n", "from numpy import pi\n", "from numpy import sin\n", "from numpy import zeros\n", "from numpy import r_\n", "from scipy import signal\n", "from scipy import integrate\n", "\n", "import multiprocessing\n", "\n", "from rtlsdr import RtlSdr\n", "from numpy import mean\n", "from numpy import power\n", "from numpy.fft import fft\n", "from numpy.fft import fftshift\n", "from numpy.fft import ifft\n", "from numpy.fft import ifftshift\n", "import bitarray\n", "from scipy.io.wavfile import read as wavread\n", "\n", "\n", "%matplotlib inline" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [ "# Plot an image of the spectrogram y, with the axes labeled with time t_range\n", "# and frequency f_range\n", "#\n", "# t_range -- time axis label, nt samples\n", "# f_range -- frequency axis label, nf samples\n", "# y -- spectrogram, nf by nt array\n", "# dbf -- dynamic range of the spectrogram, in dB\n", "\n", "def sg_plot( t_range, f_range, y, dbf = 60) :\n", " eps = 1e-3\n", " \n", " # find maximum\n", " y_max = abs(y).max()\n", " \n", " # compute 20*log magnitude, scaled to the max\n", " y_log = 20.0 * np.log10( abs( y ) / y_max + eps )\n", " \n", " fig = plt.figure(figsize=(15,6))\n", " \n", " plt.imshow( np.flipud( 64.0*(y_log + dbf)/dbf ), extent= t_range + f_range ,cmap=plt.cm.gray, aspect='auto')\n", " plt.xlabel('Time, s')\n", " plt.ylabel('Frequency, Hz')\n", " plt.tight_layout()\n", "\n", "\n", "def myspectrogram_hann_ovlp(x, m, fs, fc,dbf = 60):\n", " # Plot the spectrogram of x.\n", " # Split x into 50%-overlapping blocks of length m\n", " # and apply a Hann window to each block\n", " \n", " \n", " isreal_bool = 
np.isreal(x).all()\n", " \n", " # pad x up to a multiple of m \n", " lx = len(x);\n", " nt = (lx + m - 1) // m\n", " x = np.append(x,zeros(-lx+nt*m))\n", " x = x.reshape((m//2,nt*2), order='F')\n", " x = np.concatenate((x,x),axis=0)\n", " x = x.reshape((m*nt*2,1),order='F')\n", " x = x[r_[m//2:len(x),np.ones(m//2)*(len(x)-1)].astype(int)].reshape((m,nt*2),order='F')\n", " \n", " \n", " xmw = x * np.hanning(m)[:,None];\n", " \n", " \n", " # time range\n", " t_range = [0.0, lx / float(fs)]\n", " \n", " if isreal_bool:\n", " f_range = [ fc, fs / 2.0 + fc]\n", " xmf = np.fft.fft(xmw,len(xmw),axis=0)\n", " sg_plot(t_range, f_range, xmf[0:m//2,:],dbf=dbf)\n", " else:\n", " f_range = [-fs / 2.0 + fc, fs / 2.0 + fc]\n", " xmf = np.fft.fftshift( np.fft.fft( xmw ,len(xmw),axis=0), axes=0 )\n", " sg_plot(t_range, f_range, xmf,dbf = dbf)\n", " \n", " return t_range, f_range, xmf" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the following tasks you will need the functions:\n", "\n", "`sg_plot`\n", "\n", "`myspectrogram_hann_ovlp`\n", "\n", "`play_audio`\n", "\n", "`record_audio`\n", "\n", "`audioDevNumbers`\n", "\n", "`text2Morse` (to identify yourself before transmission)\n", "\n", "\n", "\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "def play_audio( Q, p, fs , dev):\n", " # play_audio plays audio with sampling rate = fs\n", " # Q - A queue object from which to play\n", " # p - pyAudio object\n", " # fs - sampling rate\n", " # dev - device number\n", " \n", " # Example:\n", " # fs = 44100\n", " # p = pyaudio.PyAudio() #instantiate PyAudio\n", " # Q = Queue.Queue()\n", " # Q.put(data)\n", " # Q.put(\"EOT\") # when function gets EOT it will quit\n", " # play_audio( Q, p, fs,1 ) # play audio\n", " # p.terminate() # terminate pyAudio\n", " \n", " # open output stream\n", " ostream = p.open(format=pyaudio.paFloat32, channels=1, rate=int(fs),output=True,output_device_index=dev)\n", " # play 
audio\n", " while (1):\n", " data = Q.get()\n", " if data==\"EOT\" :\n", " break\n", " try:\n", " ostream.write( data.astype(np.float32).tostring() )\n", " except:\n", " break\n", " \n", "def record_audio( queue, p, fs ,dev,chunk=1024):\n", " # record_audio records audio with sampling rate = fs\n", " # queue - output data queue\n", " # p - pyAudio object\n", " # fs - sampling rate\n", " # dev - device number \n", " # chunk - chunks of samples at a time default 1024\n", " #\n", " # Example:\n", " # fs = 44100\n", " # Q = Queue.Queue()\n", " # p = pyaudio.PyAudio() #instantiate PyAudio\n", " # record_audio( Q, p, fs, 1) # record audio\n", " # p.terminate() # terminate pyAudio\n", " \n", " \n", " istream = p.open(format=pyaudio.paFloat32, channels=1, rate=int(fs),input=True,input_device_index=dev,frames_per_buffer=chunk)\n", "\n", " # record audio one chunk at a time and push it into the queue\n", " while (1):\n", " try: # stop when the pyaudio object is destroyed\n", " data_str = istream.read(chunk) # read a chunk of data\n", " except:\n", " break\n", " data_flt = np.fromstring( data_str, 'float32' ) # convert string to float\n", " queue.put( data_flt ) # put chunk into the output queue\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [ "def audioDevNumbers(p):\n", " # din, dout, dusb = audioDevNumbers(p)\n", " # The function takes a pyaudio object\n", " # The function searches for the device numbers for built-in mic and \n", " # speaker and the USB audio interface\n", " # some devices will have the name \u201cGeneric USB Audio Device\u201d. 
In that case, replace it with the right name.\n", " \n", " dusb = 'None'\n", " din = 'None'\n", " dout = 'None'\n", " if sys.platform == 'darwin':\n", " N = p.get_device_count()\n", " for n in range(0,N):\n", " name = p.get_device_info_by_index(n).get('name')\n", " if name == u'USB PnP Sound Device':\n", " dusb = n\n", " if name == u'Built-in Microph':\n", " din = n\n", " if name == u'Built-in Output':\n", " dout = n\n", " # Windows \n", " else:\n", " N = p.get_device_count()\n", " for n in range(0,N):\n", " name = p.get_device_info_by_index(n).get('name')\n", " if name == u'USB PnP Sound Device':\n", " dusb = n\n", " if name == u'Microsoft Sound Mapper - Input':\n", " din = n\n", " if name == u'Microsoft Sound Mapper - Output':\n", " dout = n\n", " \n", " if dusb == 'None':\n", " print('Could not find a USB audio device')\n", " return din, dout, dusb" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [ "def genPTT(plen,zlen,fs):\n", " # generate a 2000 Hz tone of plen seconds followed by\n", " # zlen seconds of silence, to key the radio's PTT\n", " Nz = int(zlen*fs)\n", " Nt = int(plen*fs)\n", " pttsig = zeros(Nt+Nz)\n", " t = r_[0.0:Nt]/fs\n", " pttsig[:Nt] = 0.5*sin(2*pi*t*2000)\n", " return pttsig" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [ "def text2Morse(text,fc,fs,dt):\n", " CODE = {'A': '.-', 'B': '-...', 'C': '-.-.', \n", " 'D': '-..', 'E': '.', 'F': '..-.',\n", " 'G': '--.', 'H': '....', 'I': '..',\n", " 'J': '.---', 'K': '-.-', 'L': '.-..',\n", " 'M': '--', 'N': '-.', 'O': '---',\n", " 'P': '.--.', 'Q': '--.-', 'R': '.-.',\n", " \t'S': '...', 'T': '-', 'U': '..-',\n", " 'V': '...-', 'W': '.--', 'X': '-..-',\n", " 'Y': '-.--', 'Z': '--..',\n", " \n", " '0': '-----', '1': '.----', '2': '..---',\n", " '3': '...--', '4': '....-', '5': '.....',\n", " '6': '-....', '7': '--...', '8': '---..',\n", " '9': '----.',\n", "\n", " ' ': ' ', \"'\": '.----.', '(': '-.--.', ')': '-.--.-',\n", " ',': '--..--', '-': '-....-', '.': '.-.-.-',\n", " '/': 
'-..-.', ':': '---...', ';': '-.-.-.',\n", " '?': '..--..', '_': '..--.-'\n", " }\n", " \n", " Ndot = int(1.0*fs*dt)\n", " Ndah = 3*Ndot\n", " \n", " sdot = sin(2*pi*fc*r_[0.0:Ndot]/fs)\n", " sdah = sin(2*pi*fc*r_[0.0:Ndah]/fs)\n", " \n", " # convert to dit dah\n", " mrs = \"\"\n", " for char in text:\n", " mrs = mrs + CODE[char.upper()] + \"*\"\n", " \n", " sig = zeros(1)\n", " for char in mrs:\n", " if char == \" \":\n", " sig = np.concatenate((sig,zeros(Ndot*7)))\n", " if char == \"*\":\n", " sig = np.concatenate((sig,zeros(Ndot*3)))\n", " if char == \".\":\n", " sig = np.concatenate((sig,sdot,zeros(Ndot)))\n", " if char == \"-\":\n", " sig = np.concatenate((sig,sdah,zeros(Ndot)))\n", " return sig\n", " \n", " " ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## AFSK1200, or Bell 202 modem\n", "\n", "AFSK1200 encodes binary data at a data-rate of 1200 b/s. It uses the frequencies 1200 Hz and 2200 Hz (a center frequency of $1700$ Hz $\\pm 500$ Hz) to encode the '0' (space) and '1' (mark) bits. Even though it has a relatively low bit-rate, it is still the dominant standard for amateur packet radio over VHF. It is a common physical layer for the AX.25 packet protocol and hence a physical layer for the Automatic Packet Reporting System (APRS), which we will describe later. \n", "\n", "The exact frequency spectrum of a general FSK signal is difficult to obtain. However, when the mark and space frequency difference $\\Delta f$ is much larger than the bit-rate, $B$, the bandwidth of FSK is approximately $2\\Delta f + B$. This is not exactly the case for AFSK1200, where the spacing between the frequencies is 1000 Hz and the bit-rate is 1200 baud.\n", "\n", "
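Plugging in the AFSK1200 numbers as a rough sanity check (keeping in mind that the $\\Delta f \\gg B$ assumption behind this approximation does not really hold here, so this is only a ballpark figure):\n", "\n", "$$ 2\\Delta f + B = 2\\cdot 1000\\,\\mathrm{Hz} + 1200\\,\\mathrm{Hz} = 3200\\,\\mathrm{Hz}, $$\n", "\n", "so we should expect the AFSK1200 spectrum to occupy roughly 3 kHz of audio bandwidth, which conveniently fits within a voice channel.\n", "\n", "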