## Samples of Data Loader
Some downsampled grayscale input images and their labels are illustrated below.
## Training and Validation Loss
## Hyperparameters
We trained a model with the following architecture using the Adam optimizer (learning rate 1e-3) and MSE loss for 20 epochs.
```python
nn.Sequential(
    nn.Conv2d(1, 8, 3),
    nn.BatchNorm2d(8),
    nn.ReLU(),
    nn.MaxPool2d(2, 2),
    nn.Conv2d(8, 16, 3),
    nn.BatchNorm2d(16),
    nn.ReLU(),
    nn.MaxPool2d(2, 2),
    nn.Conv2d(16, 32, 3),
    nn.BatchNorm2d(32),
    nn.ReLU(),
    nn.MaxPool2d(2, 2),
    nn.Conv2d(32, 32, 3),
    nn.BatchNorm2d(32),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(576, 256),
    nn.ReLU(),
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Linear(64, 2)
)
```
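The 576 features entering the first linear layer follow from the conv/pool arithmetic. As a minimal sketch (assuming 60×80 grayscale inputs, which the report does not state explicitly; any other input size would change the flattened dimension):

```python
# Trace spatial dimensions through the network to check the Flatten size.
# Assumption: inputs are 60x80 single-channel images (hypothetical here).

def conv3x3(h, w):
    # nn.Conv2d(..., kernel_size=3), stride 1, no padding: each dim shrinks by 2
    return h - 2, w - 2

def maxpool2(h, w):
    # nn.MaxPool2d(2, 2): floor-divide each dim by 2
    return h // 2, w // 2

h, w = 60, 80
for _ in range(3):            # first three conv -> pool stages
    h, w = conv3x3(h, w)
    h, w = maxpool2(h, w)
h, w = conv3x3(h, w)          # final conv has no pooling after it
flat = 32 * h * w             # 32 channels feed into nn.Flatten
print(h, w, flat)             # -> 3 6 576, matching nn.Linear(576, 256)
```

Under that assumption the final feature map is 32×3×6, which flattens to exactly the 576 units the head expects.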
We tried using a single fully connected layer instead of two, and the performance became noticeably worse. We also tried a larger learning rate (1e-2), which made the loss unstable in the first few epochs, although the results after 20 epochs were similar. The training losses under these different hyperparameters are shown below.
| Original | 1-layer FC | Larger LR |
|---|---|---|
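One plausible reason the single-layer head hurts performance is capacity: nearly all of the network's parameters sit in the fully connected head. A rough count, assuming the single-layer variant maps the 576 flattened features directly to the 2 outputs (the report does not spell out that configuration):

```python
# Parameter-count comparison for the fully connected head.
# Assumption: "1-layer FC" means nn.Linear(576, 2) replacing the whole head.

def linear_params(n_in, n_out):
    # weights (n_in * n_out) plus biases (n_out)
    return n_in * n_out + n_out

original_head = (linear_params(576, 256)
                 + linear_params(256, 64)
                 + linear_params(64, 2))
one_layer_head = linear_params(576, 2)
print(original_head, one_layer_head)  # -> 164290 1154
```

Under that assumption the original head has over a hundred times more parameters than the single-layer variant, which is consistent with the observed drop in performance.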
## Results
The following are a few good and bad outputs from our neural network. The network seems to struggle on images where the subject is looking to the side, which could be because most subjects in the training set face straight ahead.
| Good Results | Bad Results |
|---|---|