Quinn Tran (abu)
In this assignment, we mimic the dolly zoom, or "Vertigo shot," from Hitchcock's Vertigo. To duplicate this effect, I used a camera with a real zoom lens. I took successive pictures of a focused object, moving farther away and zooming in at each iteration to maintain the apparent size of the object. The progressively narrower field of view is what induces the vertigo effect. I kept the camera angle, lens focus, and distance increments consistent so that essentially only the background changed.
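The zoom-while-retreating trick can be made precise with a pinhole camera model: the subject's image size is proportional to focal length divided by distance, so the focal length must grow in proportion to the distance. A minimal sketch (the function name is mine, not from the original write-up):

```python
def dolly_zoom_focal_length(f_start, d_start, d_new):
    """Under a pinhole model, a subject of height H images at size
    s = f * H / d. Keeping s constant while moving from d_start to
    d_new means scaling the focal length by d_new / d_start."""
    return f_start * d_new / d_start

# e.g. starting at 50 mm focal length from 2 m away, stepping back
# to 4 m requires zooming in to 100 mm to keep the subject the same size
```

Each step back therefore demands a longer focal length, which is exactly why the field of view narrows and the background appears to loom.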
In this assignment I made fake miniatures by simulating the effect of selective-focus cameras, also known as tilt-shift. A user selects a focus region by picking 2 points (a line) or any other number of points (a polygon). The program uses the points to create a masked region of interest, then applies a blurring filter to the rest of the image. Effectively, this narrows the perceived depth of field in the scene and creates the illusion that the lens was very close to the subject.
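For the two-point (line) case, the region of interest can be built by keeping every pixel within half the desired thickness of the line through the two points. A numpy-only sketch of that mask construction (the function name and the perpendicular-distance approach are my illustration, not necessarily the original code):

```python
import numpy as np

def line_mask(shape, p0, p1, thickness=10):
    """Binary focus mask: 1 inside a thin band of the given thickness
    centered on the line through p0 = (x0, y0) and p1 = (x1, y1)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    (x0, y0), (x1, y1) = p0, p1
    # perpendicular distance from each pixel to the infinite line p0-p1
    dist = (np.abs((y1 - y0) * xs - (x1 - x0) * ys + x1 * y0 - y1 * x0)
            / np.hypot(x1 - x0, y1 - y0))
    return (dist <= thickness / 2).astype(float)
```

The polygon case would fill the user's points as a closed shape instead (e.g. with a scanline fill or a library routine), but the rest of the pipeline treats both masks identically.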
For each picture, I first increased saturation by 30 (in HSV format, saturation values lie in [0, 255]) to enhance the miniature-looking effect. I picked 2 points for the line case and a separate set of points for the polygon/irregular-shape case. For the line case, the program determines the line's orientation by calculating its slope, then creates a very thin rectangle along that orientation. It builds a Gaussian stack of this mask, where at each level the mask is scaled up by a fixed percentage before it is blurred, and averages the stack to produce the soft mask used to alpha-blend differently focused versions of the image together. Thus the focused region has maximum weight 1, and the weight decreases the farther a pixel is from the region. The program also creates a separate Gaussian stack of the image. Starting with blended_image = original image, and working from the bottom of the stack (least blurry) to the top (most blurry), each level is blended in as blended_image = mask*blended_image + (1-mask)*blurred_image_level_i.
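The blending loop above can be sketched in numpy-only Python. The function names and the grayscale (single-channel) simplification are mine, and the write-up's scale-then-blur mask stack is approximated here by simply blurring the mask with a growing sigma; the averaged stack still gives weight near 1 deep inside the focused region, falling off with distance:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable 1-D Gaussian convolution of a 2-D float image."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, out, k, mode="same")

def fake_miniature(img, focus_mask, levels=3, sigma=1.0, mask_sigma=2.0):
    """Alpha-blend progressively blurred copies of img using a soft mask.

    Each level blurs the binary focus_mask with a larger sigma; the
    averaged mask stack weights the focused region ~1 and decays outward.
    """
    soft = np.mean([gaussian_blur(focus_mask, mask_sigma * (i + 1))
                    for i in range(levels)], axis=0)
    soft = np.clip(soft, 0.0, 1.0)
    blended = img.astype(float)
    for i in range(levels):  # least blurry level first, most blurry last
        blurred = gaussian_blur(img.astype(float), sigma * (i + 1))
        blended = soft * blended + (1.0 - soft) * blurred
    return blended
```

Pixels far from the focus band end up dominated by the most heavily blurred levels, while pixels inside it stay close to the original, which is what produces the shallow-depth-of-field illusion.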
Line Parameters: saturation change = 30, image sigma = 1, image Gaussian stack levels = 3, mask sigma = 2, mask Gaussian stack levels = 10, mask scale per level = 160%, initial line thickness = 10 pixels.
DC: http://paulasophia.com/wp-content/uploads/2016/05/WASHINGTON-DC-AERIAL.jpg
Irregular Shape Parameters: saturation change = 30, image sigma = 1, image Gaussian stack levels = 3, mask sigma = 1.5, mask Gaussian stack levels = 10, mask scale per level = 110%.
It was ridiculously hard to get consistently aligned shots for the vertigo effect, but it was amazing when the effect did work out correctly. The fake miniatures were even more amusing, because everything looked so cute, and they are a powerful example of how matrix multiplications can play with our perception. I learned a lot about perception from this project and this class. Thank you, teaching staff, for this semester!