Assignment 2: TLEARN software exercises (1)
Assigned Tuesday, January 31st
Due Mon., February 6th, by 11:59pm - submit electronically.

In this assignment, you will explore the capabilities of some very simple neural networks using the tlearn software package.

See our Computing Resources page for information about downloading the Tlearn software for use at home, or the Instructional Computing Tlearn help page for how to use Tlearn in the Soda clusters. The assignment description below assumes that you will use a version of Tlearn with a GUI (i.e., xtlearn or the Windows or Mac version), though everything can be done with "vanilla" Tlearn alone. Note, however, that we recommend against using the Windows version: Tlearn under Windows XP with Service Pack 2 has been known to produce mysterious bugs.

It's fine to run xtlearn remotely from a Windows machine, i.e. through a secure shell, but you will need an X server on the Windows side to display the graphics. (Here are some directions for configuring SSH to run with the X server Exceed.)

Refer to R7 in your reader (Plunkett and Elman: Ch. 1 and Appendix B) for general instructions on how to run Tlearn, or to the Tlearn manual.


Part 1: Designing the logical AND function

The first task is the logical AND function: AND(0,0)=AND(0,1)=AND(1,0)=0; AND(1,1)=1. This function is also shown in the table below:

Input 1 Input 2 Output
0 0 0
0 1 0
1 0 0
1 1 1

To hand in: 

  1. Show the network architecture (weights, output function) you designed and its output on the 4 AND input patterns. A screenshot of the network architecture with the weights indicated is recommended, though a .cf file and an accompanying written explanation of what the network looks like will be accepted if a screenshot isn't possible for you. 
  2. Show how the network works, by illustrating the calculations that produce the output for each pattern.
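The hand calculation in step 2 can be checked with a short script. Below is a sketch in Python with one illustrative weight setting for a single sigmoid output unit (w1 = w2 = 10, bias = -15); these numbers are an assumption for demonstration, and your own design may use different weights or a different output function.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative weights for a single sigmoid unit computing AND;
# an assumption for this sketch, not the only valid design.
w1, w2, bias = 10.0, 10.0, -15.0

for i1, i2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    net = w1 * i1 + w2 * i2 + bias   # weighted sum of inputs plus bias
    out = sigmoid(net)               # squashed through the logistic function
    print(f"AND({i1},{i2}) -> net = {net:+.1f}, output = {out:.4f}")
```

Only the (1,1) pattern pushes the net input above zero, so only that pattern produces an output near 1.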

Part 2: Learning the logical AND function

Now try to make tlearn learn the AND function. Instead of defining the network by hand, you will start the network with random initial weights, which are then adjusted during training. Although you probably don't yet know exactly how the program learns these weights, you will by the end of next week: the algorithm is called "back-propagation," and a short, non-mathematical description can be found here.
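As a preview of what the learner is doing, the sketch below trains a single sigmoid unit on AND by gradient descent — a simplified, one-layer relative of back-propagation, not Tlearn itself. The learning rate, epoch count, and initial-weight range are arbitrary illustrative choices.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
patterns = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Small random initial weights, as Tlearn uses before training.
w = [random.uniform(-0.5, 0.5) for _ in range(2)]
bias = random.uniform(-0.5, 0.5)
lr = 0.5  # learning rate: an arbitrary illustrative choice

for epoch in range(5000):
    for (i1, i2), target in patterns:
        out = sigmoid(w[0] * i1 + w[1] * i2 + bias)
        err = target - out
        # Gradient step for each weight: error times that weight's input
        w[0] += lr * err * i1
        w[1] += lr * err * i2
        bias += lr * err

for (i1, i2), target in patterns:
    out = sigmoid(w[0] * i1 + w[1] * i2 + bias)
    print(f"AND({i1},{i2}) = {out:.3f} (target {target})")
```

Because AND is linearly separable, a single unit with no hidden layer can learn it; after training, the outputs sit near the 0/1 targets.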

2a. Learning without hidden nodes


To hand in:

  1. Briefly describe the learning criterion you used.
  2. Turn in a record of the parameters that you tried as well as an account of what happened. Include an example solution.
  3. For what range of settings does the network reliably learn the AND function?
  4. For what range of settings does the network learn about 75% of the time? (That is, for about 75% of your training runs with new initial weights.) (Hint: you may want to look for correlations between the randomly initialized weights you get and the resulting learning behavior.)
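Questions 3 and 4 ask about behavior across many runs. You will gather those statistics in Tlearn, but the bookkeeping can be prototyped with a loop like the one below. The trainer here is the same simplified single-unit learner as above, not Tlearn, and the success criterion (all four outputs round to the correct target) and parameter values are illustrative assumptions.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

patterns = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def train_once(seed, lr=0.5, epochs=2000):
    """Train a single sigmoid unit on AND from random initial weights;
    return True if all four outputs round to the correct target."""
    rng = random.Random(seed)
    w1, w2, b = (rng.uniform(-0.5, 0.5) for _ in range(3))
    for _ in range(epochs):
        for (i1, i2), t in patterns:
            err = t - sigmoid(w1 * i1 + w2 * i2 + b)
            w1 += lr * err * i1
            w2 += lr * err * i2
            b += lr * err
    return all(round(sigmoid(w1 * i1 + w2 * i2 + b)) == t
               for (i1, i2), t in patterns)

runs = 20
successes = sum(train_once(seed) for seed in range(runs))
print(f"{successes}/{runs} runs learned AND")
```

Varying `lr` and `epochs` in such a tally is one way to see where reliable learning shades into occasional failure.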

2b. Learning with hidden nodes

To hand in:

  1. Once again, turn in a record of the parameters that you tried, a general account of what happened, and an example solution.
  2. How much does this new network help?
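For reference, the full back-propagation pass through one hidden layer can be sketched as follows. The network size (two hidden units), learning rate, and epoch count are illustrative assumptions, not a prescription for your Tlearn runs; this uses the squared-error deltas of textbook back-propagation, again on the AND task.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(1)
patterns = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

H = 2  # number of hidden units: an illustrative choice
# Hidden weights: H rows of (w1, w2, bias); output weights: H plus a bias.
wh = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(H)]
wo = [random.uniform(-0.5, 0.5) for _ in range(H + 1)]
lr = 0.5

def forward(i1, i2):
    hid = [sigmoid(w[0] * i1 + w[1] * i2 + w[2]) for w in wh]
    out = sigmoid(sum(wo[j] * hid[j] for j in range(H)) + wo[H])
    return hid, out

for epoch in range(5000):
    for (i1, i2), target in patterns:
        hid, out = forward(i1, i2)
        # Output-layer delta: error times the sigmoid's derivative
        d_out = (target - out) * out * (1 - out)
        # Hidden-layer deltas: error propagated back through wo
        d_hid = [d_out * wo[j] * hid[j] * (1 - hid[j]) for j in range(H)]
        for j in range(H):
            wo[j] += lr * d_out * hid[j]
            wh[j][0] += lr * d_hid[j] * i1
            wh[j][1] += lr * d_hid[j] * i2
            wh[j][2] += lr * d_hid[j]
        wo[H] += lr * d_out

for (i1, i2), target in patterns:
    print(f"AND({i1},{i2}) -> {forward(i1, i2)[1]:.3f} (target {target})")
```

The hidden-layer deltas show why the algorithm is called back-propagation: each hidden unit's error signal is the output error carried backward through that unit's outgoing weight.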

Part 3: The logical SAME function

The second task is the logical SAME function: SAME(0,0)=SAME(1,1)=1; SAME(0,1)=SAME(1,0)=0. This function is also shown in the table below:

Input 1 Input 2 Output
0 0 1
0 1 0
1 0 0
1 1 1

To hand in:

  1. Turn in a record of the parameters that you tried, a general account of what happened, and an example solution.
  2. How do the results differ in this case? Explain why.