CS150 Final Project
Video Semaphore Decoder

Project Description

Objective

The final project for this semester is to design a video semaphore receiver. The receiver will capture data from a video camera and decode digital messages from flashing lights (semaphores)  in the image stream. In the first mode of operation, the video data will be stored in a 64K byte RAM, then transmitted at 115.2 KBaud over a serial line to a PC for display.  In the second mode, the data encoded in the video image stream will be transmitted over the serial line to be displayed on a PC.

Application

One application of your project would be to decode signals transmitted by small autonomous sensor nodes.  These sensor nodes communicate using a flashing LED or laser.  Those of you who have working projects are welcome to try out your projects with our mini-weather stations.  These weather stations are being developed as a part of our Smart Dust research project.
First and foremost, your project is intended to give you practical experience implementing the concepts presented in CS150.  Hopefully some of you will find additional motivation in knowing that this is not a toy application - some of the projects in this class may influence the designs of systems that ultimately map the Amazon, fly to Mars, etc.

Specification

Your project will have two modes of operation.  In mode 1, it will acquire an image from video and transmit it over the serial line to the PC.  In mode 2, it will look for a 15 Hz flashing light semaphore signal in the video image, grab the digital information transmitted over many successive frames, and transmit that information to the PC.
When your project is reset it should start in mode 1.  After an image is acquired, it should be transmitted to the computer using the following protocol.  The first byte, or header byte, should be 0x40.  The next two bytes should be the x and y dimensions of the image to be transmitted.  The minimum allowable value for x and y is 64 pixels.  The maximum is around 230.  The following x*y bytes should be the image data, and the last byte should be 0x60. The least significant bit of the image data is typically noise, so you will only transmit the 7 most significant bits of each pixel's intensity.

Header byte          0 1 0 0 0 0 0 0   (0x40)
x dimension
y dimension
image[0][0]          1 x x x x x x x
image[1][0]          1 x x x x x x x
image[2][0]          1 x x x x x x x
...x*y bytes total...
image[xmax][ymax]    1 x x x x x x x
Footer byte          0 1 1 0 0 0 0 0   (0x60)
Mode 1 frame transmission protocol.  The 7 most significant bits of each pixel are transmitted.  The x and y dimensions are up to you, but should be between 64x64 and 230x230.  In mode 1 your project acts like a video frame grabber.
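
To make the byte ordering concrete, here is a minimal C sketch of the mode 1 transmission.  The function and the serial_send_byte() helper are illustrative placeholders (the helper stands in for your checkpoint 1 transmitter); in your project this sequencing is done by counters and an FSM, not software.

    #include <stdint.h>
    #include <stdio.h>

    /* Placeholder for the checkpoint 1 serial transmitter; it writes to
     * stdout here only so the sketch is self-contained. */
    static void serial_send_byte(uint8_t b) { putchar(b); }

    /* Send one acquired frame to the PC using the mode 1 protocol.
     * xdim and ydim must be between 64 and 230. */
    void send_frame(uint8_t image[256][256], uint8_t xdim, uint8_t ydim)
    {
        serial_send_byte(0x40);                 /* header byte  */
        serial_send_byte(xdim);                 /* x dimension  */
        serial_send_byte(ydim);                 /* y dimension  */

        for (int y = 0; y < ydim; y++)
            for (int x = 0; x < xdim; x++)
                /* drop the noisy LSB; the MSB of every pixel byte is 1 */
                serial_send_byte(0x80 | (image[x][y] >> 1));

        serial_send_byte(0x60);                 /* footer byte  */
    }

Note that the header and footer bytes have their most significant bit cleared, while every pixel byte has it set; this is what lets the PC distinguish image data from the packet markers.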


In mode 2, your project should look for a 15 Hz serial transmission in the image stream.  For full credit, your project should be able to ignore bright areas if they are not flashing.  After a serial transmission is done (e.g. the flashing light in the image has sent a start bit, 8 data bits, and a stop bit spread out over 40 frames of video), the location of the transmission and the serial data from that location should be sent to the computer.  The format for this packet is shown below.  If you find more than one transmitter (extra credit), then you should send a separate packet to the PC for each transmission.
Header byte                      0 0 0 0 0 0 0 0   (0x00)
x location
y location
serial data from location x,y
Footer byte                      0 0 1 0 0 0 0 0   (0x20)
Mode 2: Semaphore information packet.  This is the format of the packet that should be transmitted to the PC after a video semaphore transmission has been received successfully by your receiver.  The x and y location are the pixel location in the image, and the serial data is the information that was received between the start bit and the stop bit.
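
The corresponding mode 2 packet is only five bytes.  As above, this is a hedged C sketch with an illustrative serial_send_byte() placeholder, not the required implementation:

    #include <stdint.h>
    #include <stdio.h>

    /* Placeholder for the checkpoint 1 serial transmitter. */
    static void serial_send_byte(uint8_t b) { putchar(b); }

    /* Report one decoded semaphore: where it was seen and what it sent.
     * Call this after a complete start bit + 8 data bits + stop bit
     * sequence has been received from pixel (xloc, yloc). */
    void send_semaphore_packet(uint8_t xloc, uint8_t yloc, uint8_t data)
    {
        serial_send_byte(0x00);   /* header byte                    */
        serial_send_byte(xloc);   /* x location of the transmitter  */
        serial_send_byte(yloc);   /* y location of the transmitter  */
        serial_send_byte(data);   /* the 8 decoded data bits        */
        serial_send_byte(0x20);   /* footer byte                    */
    }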

Block diagram


One possible configuration of your final system is shown above.  It is composed of 5 different components:
  1. Memory and address block. A 64K byte memory will be used to store video data: an image of size up to 256 by 256.  You can think of the memory as holding an array (unsigned char image[256][256];).  The counters are used to sequentially store and retrieve data from the memory.  You need to acquire and store at least a 64 by 64 pixel image (see the address-mapping sketch after this list).
  2. The serial transmitter sends 8 data bits with a start and stop bit at 115.2 KBaud to the PC serial interface.  Each byte therefore costs 10 bit times, so the effective data rate is 115,200 / 10, or about 11 KBytes per second.  This block will be built and debugged for checkpoint 1.
  3. The video interface converts the analog data from a video camera into 8 bits of digital data (one byte per pixel) using an analog to digital converter (ADC).  The sync detector provides synchronization information, in particular when the top of the image occurs (vertical sync) and when each line starts (horizontal sync).  This block will be built and debugged in checkpoint 2.
  4. The semaphore decoder will implement your algorithm to find and decode optical semaphores in the image stream from the camera.  Part of the project is designing an algorithm which fits in the XC4005.
  5. The controller module is responsible for generating all clock, timing, user input/output, and control signals.  It is probably better to have a number of simple, modular FSMs controlling various functions than one huge FSM controlling everything.

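As a concrete picture of the address generation in item 1, here is a minimal behavioral sketch in C.  The function name and the byte packing are illustrative assumptions; which counter drives the high byte of the address is up to you.

    #include <stdint.h>

    /* Behavioral model of the address counters in block 1.  The 64K x 8
     * SRAM holds one pixel per byte, so a 16-bit address can be formed by
     * putting the line counter (y) in the high 8 bits and the pixel
     * counter (x) in the low 8 bits: incrementing x walks along a line,
     * incrementing y moves to the next line. */
    static inline uint16_t pixel_address(uint8_t x, uint8_t y)
    {
        return (uint16_t)(((uint16_t)y << 8) | x);
    }

With this mapping the carry-out of the x counter can simply increment the y counter, so one 16-bit counter chain can step through the whole frame during storage and readout.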

Serial Interface

See checkpoint 1

Video Interface

The single board camera puts out a composite video signal.  You will use the hardware described in checkpoint 2 to digitize some portion of the video signal.  As you digitize the signal, you will process the information digitally and store it in SRAM.
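
As a way to picture the dataflow (the real hardware is counters and an FSM, not software), here is a hedged behavioral sketch of frame acquisition.  The wait_for_vsync(), wait_for_hsync(), and adc_sample() helpers are hypothetical stand-ins for the checkpoint 2 sync detector and ADC.

    #include <stdint.h>

    /* Hypothetical stand-ins for the checkpoint 2 hardware. */
    static void    wait_for_vsync(void) { /* watch the sync detector */ }
    static void    wait_for_hsync(void) { /* watch the sync detector */ }
    static uint8_t adc_sample(void)     { return 0; /* read the ADC  */ }

    static uint8_t image[256][256];

    /* Grab one xdim x ydim frame into SRAM.  In hardware this loop becomes
     * the pixel/line counters plus a write enable generated by the FSM. */
    void grab_frame(int xdim, int ydim)
    {
        wait_for_vsync();                     /* align to the top of the image */
        for (int y = 0; y < ydim; y++) {
            wait_for_hsync();                 /* align to the start of a line  */
            for (int x = 0; x < xdim; x++)
                image[x][y] = adc_sample();   /* digitize and store one pixel  */
        }
    }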

Test software

Software will be provided which will let you test your design.  This software will provide a flashing light on the computer screen.  You will be able to customize the display to make testing easier: a large flashing spot or a small one, a completely black background or some static images.

Philosophy

This document describes the input/output specification for the semaphore decoder, and gives the schedule for completing requirements for this project.  As in the real world, the user/external interface is specified; it is up to you to specify most of what goes into the system.  You can consider the course staff as being the customers for your project, and you have contracted to deliver a working system.  We have checkpoints and demonstrations along the way to see that our contract will be satisfied.  Our contract also has a clause that you don't get paid (i.e. credit) if you don't deliver the working pieces on the specified dates.  Exceptions can only be made for serious medical problems.  You may want to deliver extra features, which is fine, but the basic system needs to be working first.  As customers, we aren't going to pay for simulations; we need to see the real thing.
 

Tips

Just because a design works does not make it a good design.
Use good design practices.

Budget plenty of time to get your project done.  With this level of complexity, it takes at least three times as long to debug a system as it does to design and enter it.  In fact, however long you think it will take to complete a task, it will take three times longer.  Plan accordingly.  Also, lab space is limited and workstations will need to be shared, so plan accordingly.
 

Project Report

The project report will be a PowerPoint presentation and a web page.  A template will be provided.

Checkpoints and dates

 
Checkpoint 1 - Serial line interface                                         week 9
Checkpoint 2 - Video/ADC interface                                           week 10+
Checkpoint 3 - Mode 1 working                                                week 12
Checkpoint 4 - Thresholding working                                          week 13
Project demo for extra credit (10%)                                          week 14
Last day to demo project for full credit (-10% per day after Monday)         Monday, May 3 (week 15)
Absolute last day to demo projects (50% credit) and turn in final reports    Friday, May 7 (week 15)
Deadline to turn in Xilinx boards                                            Friday, May 7

Grading

The required functionality for full credit is:
  1. Correct operation in mode 1.  Frame size must be at least 64x64 pixels.
  2. Ability to decode a single 15 Hz semaphore transmission in the presence of relatively constant background image intensity.  When one transmission is done, the next transmission may come from a different place.
  3. For full credit the background in which the transmitter is flashing in mode 2 should not need to be completely black.  The transmitter should not need to be the brightest spot in the image.
In order to properly implement the specs above you will almost certainly need to subtract successive frames and threshold the result.  This should remove any constant-intensity background.  The quality of your thresholding algorithm and the pixel size of your frame will influence your grade on the project.
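
One way to picture the frame-differencing idea is the C sketch below.  It is a software illustration only, and THRESHOLD is an assumed tuning value you would have to choose experimentally (and quite possibly adapt per pixel or per frame).

    #include <stdint.h>
    #include <stdlib.h>

    #define THRESHOLD 16   /* assumed value; tune it experimentally */

    /* Subtract the previous frame from the current one and threshold the
     * result.  Constant background cancels out; only pixels that changed
     * by more than THRESHOLD (e.g. a 15 Hz flashing LED) survive. */
    void difference_and_threshold(uint8_t cur[256][256],
                                  uint8_t prev[256][256],
                                  uint8_t out[256][256],
                                  int xdim, int ydim)
    {
        for (int y = 0; y < ydim; y++)
            for (int x = 0; x < xdim; x++) {
                int diff = abs((int)cur[x][y] - (int)prev[x][y]);
                out[x][y] = (diff > THRESHOLD) ? 1 : 0;
            }
    }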

The checkpoints give you an opportunity to lock in credit early in the project.  A working project at the end of the semester may receive substantial credit, but will not receive full credit if the checkpoints have not been achieved on time.
Checkpoint 1             10%
Checkpoint 2             10%
Checkpoint 3             10%
Checkpoint 4             10%
Final report             10%
Final demonstration      50%

Extra credit: if you can find and decode two or more transmitters simultaneously, you'll get at least 10% extra credit.