Project SIXT33N


Version A   |   Version B   |   Version C
Car Assembly



It’s time to show the world what you’ve learned in the EE16 series! For the final project, you and your partner will bring all of that knowledge and skill together to build one flavor of the SIXT33N robot (no, we’re not leet. Read: “sixteen”).


The SIXT33N is a mobile robot on three wheels (two of them driven) that moves around in response to some input. It uses the MSP430 Launchpad as its brain, with additional circuitry for driving the motors and for sensing through a microphone. It also runs on a 9V battery, so you don’t have to chase it around as it moves. There are three different flavors of the project, each described below:


Version A: Music Recognition

In this version, the SIXT33N will recognize two different genres of music. It moves forward only when it hears one of these genres, and its forward speed depends on the volume of that genre.


Version B: Speech Recognition

The SIXT33N will recognize three different voice commands. It will then move forward, turn left, or turn right based on those commands.


Version C: Gesture Recognition

Software on the PC will recognize five different mouse gestures, and the result is streamed to the SIXT33N through a wireless audio link. The SIXT33N will then speed up, slow down, turn left, turn right, or drive in a circle based on the gesture.


As you can tell, some versions are heavier on certain concepts than others. The chart below illustrates the anticipated depth of each EE16 concept; each red dot is budgeted at approximately two weeks of lab time.


[Chart: anticipated depth of each concept per version. Columns: Version A: Music, Version B: Speech, Version C: Gesture. Rows: Signal Processing, Wireless Communication, PCA/Classification, Circuit Design, Controls.]


The timeline for the project is roughly as follows:

Date    Version A     Version B     Version C
11/06   Circuits      Circuits      Circuits
11/13   PCA           Circuits      PCA
11/20   PCA           PCA           Control
11/27   Control       Control       Control
12/04   Integration   Integration   Integration
12/09   Demo          Demo          Demo


If you are still unsure which version to pick, or how each concept manifests in the different options, the table below lists the checkpoints for each version. The listed components are what you will focus on building in each phase; the rest will be provided. Of course, we encourage innovation, and any project that goes beyond the requirements will receive extra credit!


Phase 1: Mic Front End and Signal Processing

Version A: Music
  • Build provided circuit schematic
  • Read ADC output on the PC
  • Run provided FFT code on the Launchpad
  • Read FFT result on the PC
  • Record data for the next phase

Version B: Speech
  • Design microphone conditioning circuit
  • Build the circuit
  • Read ADC output on the PC
  • Launchpad DSP
  • Record data for the next phase

Version C: Gesture
  • Build provided circuit schematic
  • Read ADC output on the PC
  • Audio link transmitter (PC)
  • Audio link receiver (Launchpad)
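As a sanity check for the signal-processing half of Phase 1, the FFT step can be prototyped offline in Python before touching the Launchpad. This is a sketch only: NumPy stands in for the provided FFT code, and the sample rate and window length are illustrative assumptions, not lab-specified values.

```python
import numpy as np

def spectrum(samples, fs):
    """Return frequency bins and magnitude spectrum of real-valued mic samples."""
    windowed = samples * np.hanning(len(samples))   # taper to reduce leakage
    mags = np.abs(np.fft.rfft(windowed))            # magnitude of one-sided FFT
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    return freqs, mags

# Quick check: a 1 kHz tone sampled at 8 kHz should peak near the 1 kHz bin.
fs = 8000
t = np.arange(1024) / fs
tone = np.sin(2 * np.pi * 1000 * t)
freqs, mags = spectrum(tone, fs)
peak = freqs[np.argmax(mags)]
```

Feeding recorded ADC samples through the same function lets you verify that the spectrum you read back from the Launchpad matches what the PC computes.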
Phase 2: Data Processing

Version A: Music
  • Experiment with different genres
  • PCA + Classifier (2 genres)
  • Check accuracy
  • PCA projection on Launchpad

Version B: Speech
  • Generate envelope, threshold to get snippets
  • PCA + Classifier (3 commands)
  • Check accuracy
  • PCA projection on Launchpad

Version C: Gesture
  • Collect gesture data
  • PCA + Classifier (PC, 5 commands)
  • Check accuracy
  • Send resulting command through audio link
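The PCA + Classifier checkpoint can be prototyped with plain NumPy before the projection is ported to the Launchpad. A minimal sketch, assuming a nearest-centroid classifier on the projected data (your lab may prescribe a different classifier or number of components):

```python
import numpy as np

def fit_pca(X, k):
    """PCA via SVD: return the data mean and the top-k principal directions.
    Rows of X are samples (e.g. one FFT/envelope feature vector per recording)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k].T                      # columns are principal components

def project(X, mean, W):
    return (X - mean) @ W                      # this matrix multiply is what runs on the Launchpad

def centroids(Z, labels):
    return {c: Z[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(z, cents):
    return min(cents, key=lambda c: np.linalg.norm(z - cents[c]))

# Synthetic demo: two well-separated clusters stand in for two genres/commands.
rng = np.random.default_rng(0)
A = rng.normal(0.0, 0.1, (20, 6)) + np.array([1, 0, 0, 0, 0, 0])
B = rng.normal(0.0, 0.1, (20, 6)) + np.array([-1, 0, 0, 0, 0, 0])
X = np.vstack([A, B])
y = np.array([0] * 20 + [1] * 20)

mean, W = fit_pca(X, k=2)
Z = project(X, mean, W)
cents = centroids(Z, y)
preds = np.array([classify(z, cents) for z in Z])
accuracy = float(np.mean(preds == y))
```

The "check accuracy" step is exactly the last line, run on held-out recordings rather than the training data.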
Phase 3: Controls

Version A: Music
  • System modelling
  • Eigenvalue placement
  • 1D movement simulation
  • Motor driver circuit
  • Wheel encoder circuit
  • Move at constant speed

Version B: Speech
  • System modelling
  • Eigenvalue placement
  • 1D movement simulation
  • Motor driver circuit
  • Wheel encoder circuit
  • Move at constant speed

Version C: Gesture
  • System modelling
  • Eigenvalue placement
  • 2D movement simulation
  • Motor driver circuit
  • Wheel encoder circuit
  • Move at constant speed + direction
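The "eigenvalue placement + 1D movement simulation" steps can be sketched with a simple first-order wheel model. The model v[k+1] = v[k] + b*u[k] - c and all parameter values below are assumptions for illustration; the b and c you identify from your own encoder data will differ.

```python
def simulate_wheel(v0, v_ref, b, c, k_fb, steps):
    """Closed-loop simulation of the assumed wheel model v[k+1] = v[k] + b*u[k] - c.

    Feedback u[k] = k_fb*(v_ref - v[k]) + c/b places the closed-loop eigenvalue
    at (1 - b*k_fb); the c/b term cancels the constant friction-like offset.
    The speed error therefore shrinks by a factor of (1 - b*k_fb) each step.
    """
    v, history = v0, [v0]
    for _ in range(steps):
        u = k_fb * (v_ref - v) + c / b
        v = v + b * u - c
        history.append(v)
    return history

# Example: b = 0.5, k_fb = 1.0 gives eigenvalue 0.5, so the error halves every step.
trace = simulate_wheel(v0=0.0, v_ref=10.0, b=0.5, c=1.0, k_fb=1.0, steps=30)
```

Choosing k_fb so that |1 - b*k_fb| < 1 is the eigenvalue-placement step; plotting `trace` is the 1D movement simulation.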
Phase 4: Integration

Version A: Music
  • Compute volume of music
  • Control speed using loudness
  • Incorporate PCA projection - only move when listening to a genre

Version B: Speech
  • Control speed using PCA result

Version C: Gesture
  • Control speed + direction using PCA result (speed up, slow down, left, right, circle)
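For the Version A integration step, mapping loudness onto a drive speed might look like the sketch below. The dB range and 8-bit PWM scale are assumptions for illustration; pick your own thresholds from recordings made with your mic front end.

```python
import numpy as np

def loudness_db(samples):
    """RMS loudness in dB relative to full scale (samples normalized to [-1, 1])."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20.0 * np.log10(max(rms, 1e-12))   # floor avoids log(0) on silence

def speed_from_loudness(db, db_min=-40.0, db_max=0.0, pwm_max=255):
    """Linearly map loudness onto a PWM duty value, clipped to [0, pwm_max]."""
    frac = (db - db_min) / (db_max - db_min)
    return int(round(np.clip(frac, 0.0, 1.0) * pwm_max))

# Example: a full-scale sine has RMS 1/sqrt(2), i.e. about -3 dB -> near-max speed.
t = np.arange(1024) / 8000.0
sine = np.sin(2 * np.pi * 440.0 * t)
pwm = speed_from_loudness(loudness_db(sine))
```

The genre check from Phase 2 then gates this output: if the PCA classification says "not a chosen genre," the PWM command is simply forced to zero.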