CS194-26: Final Projects

Susan Lin | Fall 2020


Project 1: Light Field Camera

In this project I worked with real light field data (a grid of images captured over a plane orthogonal to the optical axis) to achieve depth refocusing and aperture adjustment. This project draws from this paper.


Part 1: Depth Refocusing

If we averaged all the images with no shift, the nearby objects would be blurry while the distant objects would be sharp, since nearby objects move more between sub-aperture views. To refocus at a different depth, we shift each image toward the grid center by an amount proportional to a parameter alpha before averaging; the sign and magnitude of alpha determine whether the focal plane moves forward or backward. In the images below, I varied alpha over the interval -2 to 3.5.
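The shift-and-average step can be sketched as below. This is a minimal illustration, not the actual implementation: the function name, the use of integer shifts via np.roll, and the (u, v) grid convention are my assumptions.

```python
import numpy as np

def refocus(images, positions, alpha):
    """Average sub-aperture images after shifting each toward the
    grid center, scaled by alpha. Different alphas move the plane
    of focus nearer or farther.

    images    -- list of HxWxC float arrays
    positions -- list of (u, v) grid coordinates, one per image
    alpha     -- refocusing parameter (e.g. in [-2, 3.5])
    """
    center = np.mean(positions, axis=0)
    acc = np.zeros_like(images[0], dtype=float)
    for img, (u, v) in zip(images, positions):
        # Round to integer pixel shifts for simplicity; a real
        # implementation might interpolate for sub-pixel shifts.
        du = int(round(alpha * (u - center[0])))
        dv = int(round(alpha * (v - center[1])))
        acc += np.roll(img, shift=(du, dv), axis=(0, 1))
    return acc / len(images)
```

With alpha = 0 this reduces to a plain average, which focuses on the distant objects as described above.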


Depth Refocusing of the Chess Images

Amethyst (Focus on Front)
Amethyst (Focus on Back)
Depth Refocusing of the Amethyst Images
Flowers (Focus on Front)
Flowers (Focus on Back)
Depth Refocusing of the Flower Images



Part 2: Aperture Adjustment

To achieve aperture adjustment, we vary the number of images from the grid that we average. In the following examples, a "radius" determines which images are included: only images whose grid position lies within that radius of the center are averaged. A smaller aperture corresponds to a larger depth of field, and vice versa. Hence, to simulate a smaller aperture we average fewer images (a smaller radius), and to simulate a larger aperture we average more (a larger radius).
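The radius-based selection can be sketched as follows (a hypothetical helper under the same (u, v) grid convention as before; the function name and distance metric are my assumptions):

```python
import numpy as np

def adjust_aperture(images, positions, radius):
    """Simulate a variable aperture by averaging only the
    sub-aperture images whose grid position lies within `radius`
    of the grid center. radius = 0 keeps just the center image
    (small aperture, large depth of field); larger radii include
    more images (large aperture, shallow depth of field)."""
    center = np.mean(positions, axis=0)
    selected = [img for img, (u, v) in zip(images, positions)
                if np.hypot(u - center[0], v - center[1]) <= radius]
    return np.mean(selected, axis=0)
```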


The focus of all these images is on the center.

Aperture Adjustment of the Chess Images

Radius of 0
Radius of 7
Aperture Adjustment of the Amethyst Images
Radius of 0
Radius of 7
Aperture Adjustment of the Flower Images

Conclusion

I really enjoyed the Light Field Camera project -- learning how to manipulate the focus and camera parameters after the photo has already been taken was really interesting.






Project 2: Image Quilting

In this project, I implemented image quilting for texture synthesis, following this paper. I had a chance to compare random patch sampling against more sophisticated techniques involving SSD-based patch selection and seam finding.


Part 1: Randomly Sampled Texture

To establish a baseline for comparison, we first synthesize a larger texture purely by randomly sampling patches from the original texture.
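This baseline can be sketched in a few lines (the function name, square output, and tiling order are my assumptions):

```python
import numpy as np

def random_quilt(texture, patch, out_size, seed=0):
    """Tile an output image with patches sampled at random
    locations from the source texture (the random baseline)."""
    rng = np.random.default_rng(seed)
    H, W = texture.shape[:2]
    out = np.zeros((out_size, out_size, texture.shape[2]))
    for i in range(0, out_size, patch):
        for j in range(0, out_size, patch):
            y = rng.integers(0, H - patch + 1)
            x = rng.integers(0, W - patch + 1)
            # Crop at the output border if the patch overhangs.
            h = min(patch, out_size - i)
            w = min(patch, out_size - j)
            out[i:i+h, j:j+w] = texture[y:y+h, x:x+w]
    return out
```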


Part 2: Overlapping Patches

In the second part, the patches we sample overlap their neighbors. For each candidate patch, we compute the SSD between its overlap region and the corresponding region of the existing output, then select a patch whose SSD is close to the minimum. This keeps adjacent patches consistent along their shared borders.
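The SSD-based selection can be sketched as below for a left-overlap strip. This is a simplified illustration: the helper names, the exhaustive scan, and the tolerance rule (sampling uniformly among patches within (1 + tol) of the best score) are my assumptions.

```python
import numpy as np

def pick_patch(texture, existing_overlap, patch, overlap, tol=0.1, seed=0):
    """Scan all patch locations, score each by the SSD of its left
    overlap strip against the existing output, and sample uniformly
    among candidates within (1 + tol) of the best score."""
    rng = np.random.default_rng(seed)
    H, W = texture.shape[:2]
    scores, coords = [], []
    for y in range(H - patch + 1):
        for x in range(W - patch + 1):
            strip = texture[y:y+patch, x:x+overlap]
            scores.append(np.sum((existing_overlap - strip) ** 2))
            coords.append((y, x))
    scores = np.asarray(scores)
    # Keep every candidate close to the minimum, then pick one at random.
    good = np.flatnonzero(scores <= (1 + tol) * scores.min() + 1e-12)
    y, x = coords[rng.choice(good)]
    return texture[y:y+patch, x:x+patch]
```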


Part 3: Seam Finding

In the third part, instead of just overlapping directly, we find the minimum-cost cut path through the overlap. Letting e(i,j) be the per-pixel squared error in the overlap region, we accumulate

E(i,j) = e(i,j) + min(E(i-1,j-1), E(i-1,j), E(i-1,j+1)),

which considers the nearby pixels' error metrics and, after backtracking along the minimum, gives the best path to cut along for our newly sampled patch.

Seam Finding Example


First Patch
Second Patch
Error/Cost Patch
Generated Mask
Mask on Error/Cost Patch
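The minimum-cut search over the overlap's error surface can be sketched with dynamic programming (a hypothetical helper; the actual implementation may differ):

```python
import numpy as np

def min_cut_path(err):
    """Find the minimum-cost vertical cut through an error surface
    err (rows = overlap height, cols = overlap width) by dynamic
    programming: each row's cost adds the cheapest of the three
    reachable cells in the row above, then we backtrack."""
    h, w = err.shape
    E = err.astype(float)
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            E[i, j] += E[i - 1, lo:hi].min()
    # Backtrack from the cheapest cell in the last row.
    path = [int(np.argmin(E[-1]))]
    for i in range(h - 2, -1, -1):
        j = path[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        path.append(lo + int(np.argmin(E[i, lo:hi])))
    return path[::-1]  # column index of the cut, one per row
```

The returned path yields the mask shown above: pixels left of the cut come from the existing output, pixels right of it from the new patch.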


Below, you can see the comparisons between Part 1, Part 2, and Part 3.

Original Texture
Randomly Sampled Texture
Overlapping Patches Texture
Seam Finding Texture
Original Texture
Randomly Sampled Texture
Overlapping Patches Texture
Seam Finding Texture
Original Texture
Randomly Sampled Texture
Overlapping Patches Texture
Seam Finding Texture
Original Texture
Randomly Sampled Texture
Overlapping Patches Texture
Seam Finding Texture
Original Texture
Randomly Sampled Texture
Overlapping Patches Texture
Seam Finding Texture
Original Texture
Randomly Sampled Texture
Overlapping Patches Texture
Seam Finding Texture

Part 4: Texture Transfer

In this fourth part, we transfer a texture onto a target image. We add an extra SSD term that measures how similar each sampled texture patch is to the corresponding region of the target image. This lets us produce a relatively smooth result while retaining the target image's general look.
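The combined cost can be sketched as a weighted sum (the function name, the specific weighting by alpha, and comparing patches directly in RGB are my assumptions; implementations often compare intensities instead):

```python
import numpy as np

def transfer_cost(patch, existing_overlap, target_patch, alpha=0.5):
    """Combined quilting error for texture transfer: a weighted sum
    of the usual overlap SSD and a correspondence SSD between the
    candidate texture patch and the target image patch. alpha trades
    texture coherence against fidelity to the target."""
    ov = patch[:, :existing_overlap.shape[1]]
    overlap_ssd = np.sum((ov - existing_overlap) ** 2)
    corr_ssd = np.sum((patch - target_patch) ** 2)
    return alpha * overlap_ssd + (1 - alpha) * corr_ssd
```

At alpha = 1 this reduces to plain quilting; at alpha = 0 it ignores seams and matches only the target.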


Texture to be Sampled
Target Image
Output Image
Texture to be Sampled
Target Image
Output Image
Texture to be Sampled
Target Image
Output Image


Conclusion

I learned a lot from both of these projects! This image quilting project was really interesting to think about conceptually. Getting to implement it was fun as well -- though it did take a good chunk of time to debug some issues. There was also a lot of fine-tuning with the parameters (such as input texture size, sampled patch size, and overlap width).