Final Projects

cs194-afe & cs194-acj

Vincent Zhu, Stephanie Kim

  1. Seam Carving + required bells and whistles
  2. Lightfield Camera
  3. Image Quilting, Texture

Seam Carving + required bells and whistles

For seam carving (as in the paper), we had to determine the 'importance' of each pixel using an energy function.
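
As a rough sketch, a gradient-magnitude energy (one common choice; this is an illustrative version, not our exact code) can be computed with Sobel filters:

```python
import numpy as np
from scipy.ndimage import sobel

def energy(img):
    # Gradient-magnitude energy: pixels on strong edges are "important".
    # img is an H x W x 3 float array.
    gray = img.mean(axis=2)
    dx = sobel(gray, axis=1)   # horizontal gradient
    dy = sobel(gray, axis=0)   # vertical gradient
    return np.abs(dx) + np.abs(dy)
```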

The algorithm summarized:

Until the image has shrunk to the desired dimensions: 

    Find the lowest-importance seam in the image

    Remove it.


A horizontal seam is a connected path from one side of the image to the other that chooses exactly one pixel from each column. (A vertical seam is the same, but from top to bottom and with rows.) We find the lowest-importance seam with dynamic programming.
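
A rough sketch of the dynamic program for a vertical seam is shown below (illustrative names, not our actual project code); the horizontal case is the same after transposing.

```python
import numpy as np

def find_vertical_seam(energy_map):
    # Dynamic programming over the energy map: cost[i, j] holds the cheapest
    # connected seam ending at pixel (i, j); back[i, j] is its predecessor column.
    h, w = energy_map.shape
    cost = energy_map.astype(float).copy()
    back = np.zeros((h, w), dtype=int)
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            k = lo + np.argmin(cost[i - 1, lo:hi])
            back[i, j] = k
            cost[i, j] += cost[i - 1, k]
    # Backtrack from the cheapest entry in the last row.
    seam = np.zeros(h, dtype=int)
    seam[-1] = np.argmin(cost[-1])
    for i in range(h - 2, -1, -1):
        seam[i] = back[i + 1, seam[i + 1]]
    return seam

def remove_vertical_seam(img, seam):
    # Delete one pixel per row, shrinking the width by 1.
    h, w, c = img.shape
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    return img[mask].reshape(h, w - 1, c)
```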

Example: Horizontal Carving


Example: Vertical Carving


Failed cases

For the river picture, seam carving horizontally did not work very well: when we shrank the image by 40% horizontally, some of the trees took on very unnatural shapes.


For these buildings, shrinking horizontally resulted in weird carvings of the yellow building.


For the tea, carving horizontally resulted in a weird drink shape.


Bells & Whistles: Seam Insertion

For seam insertion, we saved all of the indices that would have been deleted if we were seam carving, then used those saved indices to insert a pixel that is the average of its left and right neighbors in each R, G, B layer. Unsurprisingly, the more seams you insert, the blurrier and less natural the result looks.
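
A rough sketch of a single seam insertion (illustrative code, not our exact implementation; the bookkeeping that shifts the remaining saved indices as the image widens is omitted):

```python
import numpy as np

def insert_vertical_seam(img, seam):
    # `seam` holds, for each row, the column index that carving would have removed.
    # The inserted pixel is the average of its left and right neighbors per channel.
    h, w, c = img.shape
    out = np.zeros((h, w + 1, c), dtype=img.dtype)
    for i, j in enumerate(seam):
        left = img[i, max(j - 1, 0)].astype(float)
        right = img[i, j].astype(float)
        out[i, :j] = img[i, :j]
        out[i, j] = ((left + right) / 2.0).astype(img.dtype)
        out[i, j + 1:] = img[i, j:]
    return out
```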

Horizontal

As you can see in the last Yosemite picture, once you insert enough seams you start to notice the "stretches" of inserted seams and the result looks less natural.


Horizontal and Vertical


Lightfield Camera

Depth Refocusing with Light Field Data

Light field data consists of a series of images of the same subject(s) at different angles. When these images are averaged into a mean image, the objects further away from the camera will remain in focus, but objects closer to the camera will experience blurring. If we appropriately shift each image based on its corresponding camera position and a scalar α, before averaging, we can change the “depth” that is in focus.
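A rough sketch of the shift-and-average step (illustrative: we assume each sub-aperture image comes with (u, v) camera-grid coordinates, and use integer shifts via np.roll rather than sub-pixel interpolation):

```python
import numpy as np

def refocus(images, positions, alpha):
    # Shift each sub-aperture image toward the grid center by alpha times its
    # offset, then average; alpha controls which depth ends up in focus.
    center = np.mean(positions, axis=0)
    out = np.zeros_like(images[0], dtype=float)
    for img, (u, v) in zip(images, positions):
        du = int(round(alpha * (u - center[0])))
        dv = int(round(alpha * (v - center[1])))
        out += np.roll(img, shift=(dv, du), axis=(0, 1))
    return out / len(images)
```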


Aperture Adjustment with Light Field Data

We can simulate different aperture sizes with light field data as well. Each sub-aperture image is effectively a pinhole view; to simulate an image taken with aperture size r, we average the images within a radius of r from the center of the camera grid. Larger aperture sizes do not filter out light rays originating from the same point but arriving from multiple angles, causing scattering/blurring.
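
A rough sketch of the selection-and-average step (illustrative names; the radius is measured in camera-grid coordinates):

```python
import numpy as np

def aperture_average(images, positions, r):
    # Keep only the sub-aperture images within radius r of the grid center,
    # then average them. Assumes at least one image falls inside r.
    center = np.mean(positions, axis=0)
    selected = [img for img, p in zip(images, positions)
                if np.linalg.norm(np.asarray(p) - center) <= r]
    return np.mean(selected, axis=0)
```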


Image Quilting

Start with a pattern/texture


Randomly Sampled Texture

This is the simplest but least effective method: it randomly samples square patches of a given size and tiles them until the output reaches the specified size.
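
A rough sketch (illustrative; it assumes a square output whose side is a multiple of the patch size):

```python
import numpy as np

def quilt_random(texture, out_size, patch_size):
    # Tile random, non-overlapping patches from `texture` until the output
    # grid is full.
    h, w, c = texture.shape
    n = out_size // patch_size  # patches per side
    out = np.zeros((n * patch_size, n * patch_size, c), dtype=texture.dtype)
    for i in range(n):
        for j in range(n):
            y = np.random.randint(0, h - patch_size + 1)
            x = np.random.randint(0, w - patch_size + 1)
            out[i*patch_size:(i+1)*patch_size,
                j*patch_size:(j+1)*patch_size] = texture[y:y+patch_size,
                                                         x:x+patch_size]
    return out
```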


Overlapping Patches

This samples square patches of a certain patch size, randomly choosing among candidates whose overlap cost is below a certain threshold, and builds an output of the specified size. Each newly added patch is laid down so that it overlaps the patches already placed.
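
A rough sketch of the overlap cost and the thresholded patch choice (illustrative: "below a certain threshold" is phrased here as within a tolerance of the best candidate's cost, one common formulation; the candidate count and tolerance are arbitrary, and the exact rule we used may differ):

```python
import numpy as np

def ssd_overlap_cost(patch, out, y, x, overlap):
    # Sum of squared differences between the candidate patch and what is
    # already in the output, over the left and top overlap strips at (y, x).
    cost = 0.0
    if x > 0:  # left overlap
        cost += np.sum((patch[:, :overlap] -
                        out[y:y+patch.shape[0], x:x+overlap]) ** 2)
    if y > 0:  # top overlap
        cost += np.sum((patch[:overlap, :] -
                        out[y:y+overlap, x:x+patch.shape[1]]) ** 2)
    return cost

def pick_patch(texture, out, y, x, patch_size, overlap, tol=1.1):
    # Sample candidate patches and pick uniformly among those whose overlap
    # cost is within `tol` of the best candidate.
    h, w, _ = texture.shape
    out = out.astype(float)
    candidates, costs = [], []
    for _ in range(200):  # arbitrary number of random candidates
        py = np.random.randint(0, h - patch_size + 1)
        px = np.random.randint(0, w - patch_size + 1)
        patch = texture[py:py+patch_size, px:px+patch_size].astype(float)
        candidates.append(patch)
        costs.append(ssd_overlap_cost(patch, out, y, x, overlap))
    costs = np.asarray(costs)
    ok = np.flatnonzero(costs <= costs.min() * tol + 1e-8)
    return candidates[np.random.choice(ok)]
```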


Seam Finding

Instead of just taking straight edges as the overlap boundary, we find the min-cost seam to decide where the transition between patches should fall. If you look closely (it's a little hard to see because the image is on the smaller side), the simple picture has some artifacts that do not look like real letters (for example around rows 150 and 210) and visible line imprints. In the seam-finding result there are basically no lines, and all of the letters look like actual text.
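
A rough sketch of the seam-finding step for a vertical overlap strip (illustrative; `err` is the per-pixel squared difference between the new patch and the existing output over the overlap, and the DP is the same one used for seam carving):

```python
import numpy as np

def min_cost_seam(err):
    # err: H x overlap array of per-pixel costs. Returns, for each row, the
    # column of the minimum-cost connected top-to-bottom seam.
    h, w = err.shape
    cost = err.astype(float).copy()
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    seam = np.zeros(h, dtype=int)
    seam[-1] = np.argmin(cost[-1])
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + np.argmin(cost[i, lo:hi])
    return seam
```

Pixels to the left of the seam keep the existing output and pixels to the right come from the new patch; horizontal overlaps use the same DP on the transposed error.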


Another example with seam finding:


Texture Transfer

Texture transfer creates a texture sample that is guided by a pair of sample/target correspondence images. There is an additional cost term based on the difference between the sampled source patch and the target patch at the location to be filled.
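
A rough sketch of the combined patch cost (illustrative; `ssd_overlap_cost` is the helper from the overlapping-patches sketch above, `patch` and `target_patch` are float arrays of the same size, and `alpha` trades off texture coherence against matching the target):

```python
import numpy as np

def transfer_cost(patch, out, target_patch, y, x, overlap, alpha):
    # alpha weights the usual overlap SSD against the correspondence term,
    # the SSD between the candidate patch and the target patch at (y, x).
    overlap_term = ssd_overlap_cost(patch, out, y, x, overlap)
    correspondence_term = np.sum((patch - target_patch) ** 2)
    return alpha * overlap_term + (1 - alpha) * correspondence_term
```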

The result bears some resemblance to the target image rendered in the texture, but with more time we think we would need a smaller patch size and/or to adjust the cost function (the alpha value, etc.).
