Images of the Russian Empire

Colorizing the Prokudin-Gorskii photo collection

Jieming Wei

Overview

The goal of this project is to recover color pictures taken by Sergei Mikhailovich Prokudin-Gorskii. Color cameras did not exist in his time, so he photographed each scene three times with a black & white camera, once each through a red, a green, and a blue filter. The project produces a single color photograph from these three filtered black & white photographs.

Approach

Since these pictures were taken through red, green, and blue filters, we can first roughly assume that the three photos correspond to the R, G, and B channels. The algorithm chooses one of the three photos as the base, shifts each of the other two along both the x and y axes to find the best match, and then stacks the aligned photos together. As the matching criterion I used normalized cross-correlation (NCC) and picked the displacement at which this value is maximized. I used a search range of 30 pixels, which handles all of the small images shown below.
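The single-scale search can be sketched as follows; this is a minimal sketch of the idea, and the function names and the `search` default are illustrative rather than the exact code used:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def align_exhaustive(channel, base, search=30):
    """Exhaustively try every (row, col) shift in [-search, search]^2 and
    return the one that maximizes NCC between `channel` and `base`."""
    best_shift, best_score = (0, 0), -np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            score = ncc(np.roll(channel, (dr, dc), axis=(0, 1)), base)
            if score > best_score:
                best_score, best_shift = score, (dr, dc)
    return best_shift
```

The returned shift is then applied to the channel with `np.roll` before stacking the three channels into one color image.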

The algorithm described above handles small photos efficiently. However, as the photo size grows, the displacement can become large, and an exhaustive search over two dimensions becomes slow. For large photos I used an image pyramid: the small-size algorithm is run each time the photo is scaled down to a smaller size. Once the displacement at a smaller scale is computed, it is applied at the next larger scale and refined further with a small local search. The search interval can be drastically decreased in this way.
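A sketch of this coarse-to-fine scheme, with an illustrative level cutoff and window sizes (not the exact parameters used):

```python
import numpy as np
from skimage.transform import rescale

def _best_shift(channel, base, search):
    """Exhaustive NCC search over shifts in [-search, search]^2."""
    def ncc(a, b):
        a, b = a - a.mean(), b - b.mean()
        return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    shifts = [(dr, dc) for dr in range(-search, search + 1)
              for dc in range(-search, search + 1)]
    return max(shifts, key=lambda s: ncc(np.roll(channel, s, axis=(0, 1)), base))

def align_pyramid(channel, base, min_size=64):
    """Coarse-to-fine alignment: estimate the shift on a half-size pair,
    double it, then refine with a small local search at full size."""
    if min(base.shape) <= min_size:
        return _best_shift(channel, base, search=15)
    coarse = align_pyramid(rescale(channel, 0.5, anti_aliasing=True),
                           rescale(base, 0.5, anti_aliasing=True), min_size)
    dr, dc = 2 * coarse[0], 2 * coarse[1]
    refine = _best_shift(np.roll(channel, (dr, dc), axis=(0, 1)), base, search=2)
    return (dr + refine[0], dc + refine[1])
```

Because each level only refines within a small window, the total work stays close to a single coarse search plus a few cheap refinements.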

Problem and solution

1. When the entire image is used to search for the best displacement, some photos do not align well. The reason is that shifting rolls the border pixels around to positions where they cannot be matched against the other photos. I therefore use only the center part of the image (from 1/4 to 3/4 along each side) for matching, and the results look good.
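The cropping window used for matching can be expressed as a one-line slice (the 1/4 to 3/4 window follows the description above; the function name is illustrative):

```python
def center_crop(im):
    """Keep only the middle half of the image along each axis, so wrapped
    border pixels do not contaminate the matching score."""
    h, w = im.shape
    return im[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
```

Only the matching score is computed on the crop; the shift found is still applied to the full image.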

2. Initially, the algorithm above could not get the photo 'self portrait' right. I resolved this by increasing the final output resolution. This works because this photo contains a lot of small detailed patterns in the bushes; if the image is scaled down too much, too many details are lost for a successful match. At a relatively high resolution the photo can be matched successfully.

3. Initially, the image 'emir' was not produced correctly. The reason is that I was using the blue-filtered photo as the base. Since the man in the picture wears highly saturated blue clothing, using blue as the base is a poor choice: it is hard to match the low red and green values over the robe against the high blue values. I ended up resolving this by using the green-filtered photo as the base. The problem can also be resolved by matching on edge images, which is discussed further later.

Result on example images

cathedral.jpg green: (row: 5, col: 2) red: (row: 12, col: 3)
emir.jpg blue: (row: -48, col: -24) red: (row: 58, col: 18)
harvesters.jpg green: (row: 58, col: 18) red: (row: 124, col: 14)
icon.jpg green: (row: 40, col: 18) red: (row: 90, col: 22)
lady.jpg green: (row: 54, col: 8) red: (row: 116, col: 12)
monastery.jpg green: (row: -3, col: 2) red: (row: 3, col: 2)
nativity.jpg green: (row: 3, col: 1) red: (row: 8, col: 0)
self_portrait.jpg green: (row: 78, col: 28) red: (row: 176, col: 38)
settlers.jpg green: (row: 7, col: 0) red: (row: 14, col: -1)
three_generations.jpg green: (row: 52, col: 14) red: (row: 110, col: 12)
train.jpg green: (row: 42, col: 6) red: (row: 86, col: 32)
turkmen.jpg green: (row: 56, col: 22) red: (row: 116, col: 28)
village.jpg green: (row: 64, col: 12) red: (row: 138, col: 22)

Result on online images

abrikos.jpg green: (row: 62, col: -22) red: (row: 142, col: -52)
aist.jpg green: (row: 44, col: 12) red: (row: 104, col: 22)
spaso_evfrosin.jpg green: (row: 34, col: 4) red: (row: 72, col: -6)
uzvarian_fortress.jpg green: (row: 34, col: 2) red: (row: 98, col: 4)

Bells & Whistles

Auto cropping

My auto-crop algorithm checks the brightness of the border iteratively from the outside in, looking for a sudden increase in average brightness. When such a brightness jump is detected, the image is cropped at that position. The picture is rotated four times so that the algorithm is applied to all four edges. The algorithm works on around 80% of the edges tested. When the brightness of a border is similar to that of the picture's interior, or when the change is too gradual, the edge becomes hard to detect; in these cases a constant-width margin is cropped out instead.
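The detection step for one edge (the top) can be sketched as below, assuming a normalized grayscale image; the `max_frac` and `jump` thresholds are illustrative, not the tuned values:

```python
import numpy as np

def find_top_border(gray, max_frac=0.1, jump=0.15):
    """Scan rows from the outside in and return the row index where the
    mean row brightness jumps sharply relative to the previous row.
    Returns 0 if no jump is found within the first `max_frac` of rows."""
    limit = int(gray.shape[0] * max_frac)
    means = gray[:limit].mean(axis=1)
    for i in range(1, limit):
        if means[i] - means[i - 1] > jump:
            return i
    return 0
```

Rotating the image with `np.rot90` between calls applies the same test to the remaining three edges, as described above.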

Harvesters
Three generations
Monastery

Auto contrast

I found that the smallest pixel value in these pictures is already almost 0 and the largest is almost 1, so simply stretching the values to span the [0, 1] range would not help much. Instead, I implemented an S-curve (gamma-style) function that spreads out the mid-range values more, so that the photos are more contrasty.
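The exact curve is not specified above; one plausible S-curve is a logistic sigmoid rescaled so that 0 and 1 stay fixed, with an illustrative `gain` controlling steepness:

```python
import numpy as np

def s_curve(im, gain=5.0):
    """Sigmoid-style S-curve on [0, 1] pixel values: stretches mid-tones
    and compresses shadows and highlights. `gain` is an illustrative
    steepness parameter."""
    out = 1.0 / (1.0 + np.exp(-gain * (im - 0.5)))
    # Rescale so that an input of 0 maps to 0 and an input of 1 maps to 1.
    lo = 1.0 / (1.0 + np.exp(gain * 0.5))
    hi = 1.0 / (1.0 + np.exp(-gain * 0.5))
    return (out - lo) / (hi - lo)
```

By symmetry the curve leaves 0.5 fixed while its slope in the mid-range exceeds 1, which is exactly the extra mid-tone separation described above.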

Village
Cathedral
Icon

Better features

For edge detection I used the Sobel filter from the skimage package; the three pictures above show the Sobel output on the inputs. The edge images of the blue- and red-filtered photos are then matched to that of the green-filtered photo using the same normalized cross-correlation algorithm described in the 'Approach' section.
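This step can be sketched as a thin wrapper over the shift search: aligning on Sobel gradient magnitudes makes the match depend on where edges fall rather than on raw brightness, which is why it also fixes the 'emir' case. The `align_fn` parameter is a stand-in for whatever shift-search routine is used:

```python
import numpy as np
from skimage.filters import sobel

def align_on_edges(channel, base, align_fn):
    """Run an NCC-based alignment routine on Sobel edge maps instead of
    raw intensities, so brightness differences between the color-filtered
    plates do not affect the match."""
    return align_fn(sobel(channel), sobel(base))
```

Even if one plate is much darker or inverted in tone relative to another, the edge maps remain comparable, so the cross-correlation peak stays at the true displacement.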