Lightfield camera

Depth Refocusing and Aperture Adjustment with Light Field Data

Roma Desai | CS 194 Final Project #1

 

OVERVIEW

 

We can capture light field data by photographing a scene from many camera positions on a plane orthogonal to the optical axis. With simple shifting and averaging operations, this data lets us create really cool effects such as changing an image’s focus and aperture after capture.

 

In this project, I will be using datasets from the Stanford Light Field Archive to change the focus depth and aperture of different images.

 

 

 

PART 1: DEPTH REFOCUSING

 

In this section, I shifted the images by varying amounts before averaging them, which moves the focus of the resulting image to different depths in the scene.

 

I first began by simply averaging all the images together with no shifting. Because nearby objects move more between camera positions than objects farther away, the resulting image is focused in the back and blurry up front. Here is an example:

 

 

To focus the image at a certain depth, we shift every image before averaging. I aligned all the images to the center image, shifting each one in proportion to its camera’s offset from the center camera, scaled by a depth parameter that selects which plane ends up in focus. A minimal sketch of this shift-and-average operation is below, followed by animations showing refocusing at a range of depths on two of the Light Field Archive’s datasets:
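Here is a minimal sketch of the refocusing step, assuming each sub-aperture image comes with its (u, v) grid coordinates (the Stanford datasets include these in the image filenames); the function names and sign conventions are illustrative:

    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def refocus(images, positions, alpha):
        # images:    list of HxWx3 float arrays, one per camera in the grid
        # positions: list of (u, v) camera coordinates on the capture plane
        # alpha:     depth parameter; alpha = 0 reproduces the plain average
        center = np.mean(positions, axis=0)           # the center camera
        out = np.zeros_like(images[0], dtype=float)
        for img, (u, v) in zip(images, positions):
            du, dv = center[0] - u, center[1] - v     # offset from the center view
            # shift toward the center view by the offset scaled by alpha, so
            # images of the chosen depth plane line up before averaging
            out += nd_shift(img, (alpha * dv, alpha * du, 0), order=1, mode='nearest')
        return out / len(images)

Sweeping alpha over a range of values and saving each frame produces the refocusing animations shown next.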

 

 

Chess Dataset

Lego Truck Dataset

 

 

PART 2: APERTURE ADJUSTMENT

 

 

In this section, I also used selective averaging, this time to change the apparent aperture of the image. Aperture measures the size of the opening that lets light into the camera. A larger aperture admits light from a wider range of directions, which narrows the depth of field: only objects near the focal plane stay sharp, so the image as a whole looks blurrier.

 

We can simulate a large aperture by averaging together more images, and a smaller aperture by averaging together fewer. To keep the focus consistent, I only averaged images whose grid positions are centered around the center image. A minimal sketch of this selective averaging is below, followed by an example of a high versus a low aperture. As you can see, the low-aperture image is much more uniformly in focus.
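Here is a minimal sketch of the selective averaging, reusing the image list and grid positions from Part 1 (the radius threshold is illustrative):

    import numpy as np

    def adjust_aperture(images, positions, radius):
        # Average only the views within `radius` of the center grid position;
        # a larger radius acts like a larger aperture and adds defocus blur.
        positions = np.asarray(positions, dtype=float)
        center = positions.mean(axis=0)
        dists = np.linalg.norm(positions - center, axis=1)
        chosen = [img for img, d in zip(images, dists) if d <= radius]
        return np.mean(chosen, axis=0)

With radius = 0 only the center image survives, mimicking a pinhole camera; growing the radius averages in more views and blurs everything away from the focal plane.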

 

High Aperture

Low Aperture


 

 

Applying a range of different apertures to the two image sets from above, we get the following GIF results:

 

Chess Dataset

Lego Truck Dataset

 

SUMMARY:

Overall, I learned a lot from this project and thought it was super cool that we can change crucial properties of an image, such as aperture and focus depth, after the image is captured. As a photographer myself, I often have to take multiple shots with different focus depths and camera settings and then pick out the best photographs on a laptop afterwards. Capturing a whole light field, on the other hand, opens the door to a huge range of possibilities, which I find super cool and exciting.

 

 

 

 

 

 Image quilting

Texture Synthesis and Transfer

Roma Desai | CS 194 Final Project #2

 

Overview:

 

For this project, I implemented the texture synthesis algorithm described in the SIGGRAPH 2001 paper “Image Quilting for Texture Synthesis and Transfer” by Efros and Freeman. Texture synthesis builds a larger texture out of an existing sample by stitching together overlapping patches and removing edge artifacts. Interestingly, this algorithm can also be used for texture transfer, which can produce some super cool effects.

 

 

 

 

Part 1: Randomly Sampled Texture

 

The first method of texture synthesis uses randomly sampled patches: I sampled patches at random from the texture and tiled them into the output image. To capture a good amount of the pattern, I used a patch size large enough to include identifiable features of the texture, as recommended by the paper. A minimal sketch is below, followed by results for a few textures. As you can see, the created textures are not accurate and have noticeable edges.
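A minimal sketch of the random-placement step (shapes assume an RGB texture; names are illustrative):

    import numpy as np

    def quilt_random(texture, out_size, patch_size):
        # Tile the output with patches drawn uniformly at random from the texture.
        H, W = texture.shape[:2]
        out = np.zeros((out_size, out_size, 3))
        for i in range(0, out_size, patch_size):
            for j in range(0, out_size, patch_size):
                y = np.random.randint(H - patch_size + 1)   # random top-left corner
                x = np.random.randint(W - patch_size + 1)
                patch = texture[y:y + patch_size, x:x + patch_size]
                h = min(patch_size, out_size - i)           # crop at the border
                w = min(patch_size, out_size - j)
                out[i:i + h, j:j + w] = patch[:h, :w]
        return out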

Original

Created


 

 

 

 

Part 2: Overlapping Patches

 

The next method improves upon the previous one by overlapping the patches, by approximately 1/6th of the patch size in my implementation. For each new patch, this involves extracting the overlap strip with the already-placed neighbor, finding the 10 candidate patches that best match the strip according to SSD (sum of squared differences) error, and randomly selecting one of those 10. A minimal sketch of the selection step is below, followed by some results. While the textures now look more like the originals, edge artifacts are still apparent.
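A minimal sketch of the selection step for a left overlap (the top overlap works the same way; the exhaustive search and names are illustrative):

    import numpy as np

    def pick_overlap_patch(texture, left_neighbor, patch_size, overlap, k=10):
        # Score every candidate patch by the SSD of its left strip against the
        # already-placed neighbor's right strip, then pick among the k best.
        H, W = texture.shape[:2]
        target = left_neighbor[:, -overlap:]
        errors, coords = [], []
        for y in range(H - patch_size + 1):
            for x in range(W - patch_size + 1):
                strip = texture[y:y + patch_size, x:x + overlap]
                errors.append(np.sum((strip - target) ** 2))
                coords.append((y, x))
        best = np.argsort(errors)[:k]          # indices of the k lowest errors
        y, x = coords[np.random.choice(best)]
        return texture[y:y + patch_size, x:x + patch_size]

Picking randomly among the k best, rather than always taking the single best match, keeps the output from repeating the same patch everywhere.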

 

Original

Created


 

 

 

Part 3: Seam Finding

 

Finally, the last method incorporates seam finding to remove the remaining edge artifacts. For each overlap, I found the min-cost path between the already-placed patch and the new patch and built a mask from it that decides which patch contributes each pixel. Because the cut runs through pixels where the two patches already agree, it avoids harsh lines and noticeable edges. Below is an example of a patch cut according to its min-cost path. As shown, I calculated the min-cost path for the side strip as well as the top strip, then combined these two paths into a single mask. I applied the mask to the first patch and the inverse of the mask to the overlapping patch. In this way, the two patches combine seamlessly along the minimum-error boundary. A minimal sketch of turning a seam into a blending mask follows (computing the seam itself is sketched in the Bells & Whistles section).
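A minimal sketch of the masking step for a vertical overlap, assuming seam[r] holds the seam’s column in row r (names are illustrative):

    import numpy as np

    def seam_mask_vertical(seam, height, overlap):
        # Binary mask over the overlap strip: True where the existing patch's
        # pixels are kept (left of the seam), False where the new patch wins.
        mask = np.zeros((height, overlap), dtype=bool)
        for r in range(height):
            mask[r, :seam[r]] = True
        return mask

    def blend_overlap(existing_strip, new_strip, mask):
        # The mask selects the existing patch and its inverse selects the new
        # patch, so the cut follows the minimum-error boundary.
        return np.where(mask[..., None], existing_strip, new_strip)

For a patch that overlaps on both its top and its left, the horizontal and vertical masks are merged into one mask over the whole patch before blending.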

 

 

Patch 1

Patch 2


 

Mask

Patch 1 Masked

Patch 2 Masked


 

 

Running the full seam-finding method on a few textures, here are four of my results:


 

 

Part 4: Texture Transfer 

 

Another application of this texture synthesis algorithm is texture transfer! I modified my code from above to look for patches that match not only the overlap in the synthesized image but also the corresponding region of the target image. I first calculated the SSD overlap error as in the section above, then ranked the candidate patches by how well they matched the target and chose the best one. A sketch of a combined error term is below, followed by some results. While it did not work super well on my images, the fact that this algorithm translates into something that seems very different is super cool.
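One way to express the combined score, following the paper’s weighted-error formulation (the weight alpha and the luminance-based correspondence term are assumptions of this sketch):

    import numpy as np

    def transfer_error(patch, patch_overlap, existing_overlap, target_region, alpha=0.5):
        # Weighted sum of the overlap SSD (texture coherence) and the SSD
        # against the target image (correspondence); alpha trades off the two.
        overlap_err = np.sum((patch_overlap - existing_overlap) ** 2)
        corr_err = np.sum((patch.mean(axis=2) - target_region.mean(axis=2)) ** 2)
        return alpha * overlap_err + (1 - alpha) * corr_err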

 


 

 

 

Bells & Whistles:

 

For bells and whistles, I wrote my own min-cost cut finding algorithm. For the path to be contiguous, it must select one pixel per row (one x-value per y-value), so my basic approach chooses the minimum-SSD-error x-value at each level. I split the overlap into a horizontal and a vertical strip, ran the algorithm on both individually, and then combined the paths at the end, giving the min-cost path between my two patches.
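For reference, here is a standard dynamic-programming formulation of the vertical-seam computation; unlike a purely per-row choice, it also constrains the seam to move at most one column between consecutive rows:

    import numpy as np

    def min_cost_seam(cost):
        # cost: HxW per-pixel error map for the overlap strip (e.g. the squared
        # difference between the two overlapping patches, summed over channels).
        H, W = cost.shape
        cum = cost.astype(float).copy()
        for r in range(1, H):
            for c in range(W):
                lo, hi = max(c - 1, 0), min(c + 2, W)
                cum[r, c] += cum[r - 1, lo:hi].min()   # cheapest of the 3 parents
        seam = np.zeros(H, dtype=int)
        seam[-1] = int(np.argmin(cum[-1]))             # cheapest exit in last row
        for r in range(H - 2, -1, -1):                 # backtrack upward
            c = seam[r + 1]
            lo, hi = max(c - 1, 0), min(c + 2, W)
            seam[r] = lo + int(np.argmin(cum[r, lo:hi]))
        return seam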

 

 

 

Conclusion:

 

Overall, this simple but innovative algorithm produced super cool results. Although the algorithm is simple in theory, I had a lot of difficulty making sure the correct patches were added in the correct places and that everything stayed aligned, which I believe is why some of my results are off. I really enjoyed reading the paper, understanding the algorithm, and doing my best to implement it. Overall, this was a super cool project and I can’t wait to go back and perfect the algorithm further after finals week!