Assignment 3: T-Learn (Non-Computational Credit)
Assigned Tuesday, February 7th
Due Wednesday, February 22nd, by 11:59pm - submit electronically.

Note: please submit this assignment as a3-t.

In this assignment, we will experiment with a standard backpropagation example, the auto-encoder.

See our Computing Resources page for information about downloading the Tlearn software for use at home, or the Instructional Computing Tlearn help page for how to use tlearn in the Soda clusters.
Refer to R7 in your reader (Plunkett and Elman: Ch. 1 and Appendix B) for general instructions on how to run Tlearn.


The Auto-Encoder

The basic idea of this assignment is very simple: get tlearn to produce output that is as close as possible to its input. As is conventional, we will restrict ourselves to binary strings. The catch is that the network will have a hidden layer that is significantly smaller than the input and output layers.

The basic case will have 4 binary input units, 2 hidden units and 4 output units. This is called the 4-2-4 encoder. Only one input unit will be "on" (set to 1) at a time. We want the network to learn weights that will cause the corresponding output unit to turn on.
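To see what the network is being asked to do, here is a minimal sketch of the 4-2-4 task trained with plain backpropagation in NumPy. This is not Tlearn itself, and the learning rate, epoch count, and weight range are made-up illustrative values; it just shows the architecture and the one-hot input/target patterns you will give Tlearn.

```python
import numpy as np

rng = np.random.default_rng(0)

# The four training patterns: each input is a one-hot vector,
# and the target output is identical to the input (auto-association).
X = np.eye(4)

# 4-2-4 architecture with small random initial weights
W1 = rng.uniform(-0.5, 0.5, (4, 2))   # input -> hidden
b1 = np.zeros(2)
W2 = rng.uniform(-0.5, 0.5, (2, 4))   # hidden -> output
b2 = np.zeros(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # hypothetical learning rate; the assignment asks you to vary this

for epoch in range(5000):
    # Forward pass: squeeze 4 units through the 2-unit hidden layer
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backward pass: squared-error loss, sigmoid derivative y*(1-y)
    delta_out = (y - X) * y * (1 - y)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

# After training, the strongest output unit should match the active input unit
print(np.argmax(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), axis=1))
```

Because the two hidden units must carry all the information about which of the four inputs is on, a successful run effectively discovers a 2-bit code for the four patterns.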

We can envision the network as doing a kind of compact encoding. For example, if some language had only 4 phonemes, the auditory system could get by with just 2 fibers for transmitting one phoneme at a time. More realistically, a phoneme from a language with 64 phonemes could be transmitted by just 6 nerve fibers from one brain region to another. One could imagine a complex neural structure that computed which phoneme was most likely at each moment and another complex structure that made use of phonemes to make up words. Since each phoneme has different uses, we would need a separate unit for each one at the receiving end, but the transmission could be done more compactly using the idea above.

The assignment is to experiment with how well backpropagation learning can do at finding weights that will produce a good encoding.


Problem 1
The first part of this assignment is to analyze how the system does on the 4-2-4 encoder problem. Start tlearn as in Assignment 2 and create a project file for the 4-2-4 encoder problem. Then experiment with different values for the learning rate and momentum. (We suggest using random initialization and sampling.)

Problem 2
Now expand the task to solve the 8-3-8 encoder problem.

Problem 3
Finally, try tlearn on the 9-3-9 encoder problem.

Problem 4
Tlearn uses the backpropagation algorithm that we studied in class. The goal of this problem is to produce a hand simulation of one step in the learning of the 4-2-4 encoder. The assigned reading has further discussion as well as numerical values that may be of use.
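As a check on your hand simulation, the arithmetic of a single backpropagation step can be laid out explicitly. The weights, learning rate, and momentum below are made-up illustrative numbers, not the values from the Plunkett and Elman reader; substitute your own when you do the hand computation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Input pattern: unit 0 is on; the target output is the same pattern.
x = [1.0, 0.0, 0.0, 0.0]
t = x[:]

# Hypothetical initial weights (rows: source units, columns: destination units)
W1 = [[0.1, -0.2], [0.3, 0.1], [-0.1, 0.2], [0.2, -0.3]]   # input -> hidden
W2 = [[0.2, -0.1, 0.1, 0.3], [-0.3, 0.2, 0.1, -0.2]]       # hidden -> output

# Forward pass
h = [sigmoid(sum(x[i] * W1[i][j] for i in range(4))) for j in range(2)]
y = [sigmoid(sum(h[j] * W2[j][k] for j in range(2))) for k in range(4)]

# Output deltas: (t - y) * y * (1 - y), the standard backprop error term
delta_out = [(t[k] - y[k]) * y[k] * (1 - y[k]) for k in range(4)]

# Hidden deltas: propagate the output deltas back through W2
delta_hid = [sum(delta_out[k] * W2[j][k] for k in range(4)) * h[j] * (1 - h[j])
             for j in range(2)]

# One weight update with learning rate and momentum
# (the previous weight change is zero on the very first step)
lr, momentum = 0.5, 0.9
prev_dw = 0.0
dw = lr * delta_out[0] * h[0] + momentum * prev_dw
W2[0][0] += dw
print("output deltas:", delta_out)
print("updated W2[0][0]:", W2[0][0])
```

Note the signs: the delta for output unit 0 (whose target is 1) is positive, so the weight from hidden unit 0 to output unit 0 grows, while the deltas for the other three outputs (targets 0) are negative. Your hand simulation should show the same pattern whatever numbers you use.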

Problem 5
Problem 6