
CS194 - Computer Vision and Computational Photography

Project 3 - Face Morphing

By Nalin Chopra

In this project, I learned about different techniques involving image warping to create interesting effects on pictures of human faces.


Part 1: Morphing Two Faces

Defining Correspondences

I started out with a photo of myself, and one of my friend Drew.

The first step in morphing two images is to define correspondence points between them. These points anchor the warp and form the skeleton of our morphed frames. Note that for each image, 4 correspondence points around the perimeter of the image are also chosen, so that the triangulation covers the whole image and not just the face. The correspondence points are chosen manually and are meant to capture the key feature points of the image. Below is an example of what a set of such points looks like.

Triangulation

The purpose of defining correspondence points is so that we can compute a triangulation. The goal is to map individual triangles from the initial image to the final image. If we can determine the affine transformation matrix (a linear transform plus a translation) that maps points from a given triangle in the first image to the corresponding triangle in the second image, we can play with the "weight" of that transformation and achieve interesting effects by mapping points of a given triangle into points of other shapes and images. Why triangles? They have nice properties when it comes to matrix transformations: three point pairs pin down an affine map exactly, so it's relatively easy to determine the transformation from one triangle to another. It's important to note that every triangle within a triangulation gets its own unique transformation. Additionally, I specifically used scipy.spatial.Delaunay to compute a Delaunay triangulation, a type of triangulation which maximizes the minimum angle over all triangles, preventing small, thin slivers from forming and ultimately making our transformations better behaved.

Pictured below is an example of the triangulation that I computed on my face. Note that when morphing two images, the triangulations have to be the same, to ensure that the correct triangles are being mapped to one another.
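As a minimal sketch (with illustrative names, not necessarily my exact project code), one way to get a shared triangulation is to triangulate the averaged correspondence points once and reuse the resulting triangle indices for both images:

import numpy as np
from scipy.spatial import Delaunay

def shared_triangulation(pts1, pts2):
    """pts1, pts2: (N, 2) arrays of matching (x, y) correspondence points."""
    mid_pts = (pts1 + pts2) / 2.0      # average shape, a reasonable compromise for both faces
    tri = Delaunay(mid_pts)            # Delaunay triangulation of the average shape
    return tri.simplices               # (T, 3) vertex indices, valid for both pts1 and pts2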

Computing the Mean Face

We can use the tools described above to define correspondence points and compute a triangulation of our images. To actually compute a morphed image or average of two faces, we need to first compute the average "shape" of our morphed image; this can be done by simply averaging the correspondence points chosen for our two images.

After that, we determine the matrix transformations that map every triangle from the first image into the averaged shape, and another set of transformations that map every triangle from the second image into the averaged shape.

Finally, for every triangle within the averaged shape, we use the inverse of the transformations determined above: for each pixel inside the triangle, we follow the inverse transform back into each original image and read off the pixel value there. This is known as inverse warping. Since each image contributes its own warped pixel values to the averaged image, we average the two contributions to fill in the triangle. Doing this for each triangle within our triangulation yields our mean face image!
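Below is a hedged sketch of this per-triangle inverse warp. The helper names (affine_matrix, warp_triangle, warp_to_shape) and the nearest-neighbor sampling are simplifications for illustration, not necessarily the exact code I used:

import numpy as np
from skimage.draw import polygon   # rasterizes a triangle into pixel coordinates

def affine_matrix(src_tri, dst_tri):
    """3x3 affine transform taking the (3, 2) triangle src_tri to dst_tri."""
    src = np.hstack([src_tri, np.ones((3, 1))])   # homogeneous coordinates
    dst = np.hstack([dst_tri, np.ones((3, 1))])
    return np.linalg.solve(src, dst).T            # solves A @ src.T = dst.T

def warp_triangle(src_img, out_img, src_tri, dst_tri):
    """Fill dst_tri in out_img with pixels pulled back from src_img (inverse warp)."""
    T_inv = np.linalg.inv(affine_matrix(src_tri, dst_tri))
    rr, cc = polygon(dst_tri[:, 1], dst_tri[:, 0], out_img.shape[:2])  # pixels inside dst_tri
    dst_pts = np.stack([cc, rr, np.ones_like(rr)])          # (3, P) homogeneous points (x, y, 1)
    src_pts = T_inv @ dst_pts                               # map back into the source image
    sr = np.clip(np.round(src_pts[1]).astype(int), 0, src_img.shape[0] - 1)
    sc = np.clip(np.round(src_pts[0]).astype(int), 0, src_img.shape[1] - 1)
    out_img[rr, cc] = src_img[sr, sc]                       # nearest-neighbor sampling

def warp_to_shape(src_img, src_pts, dst_pts, triangles):
    """Warp src_img so that src_pts land on dst_pts, one triangle at a time."""
    out = np.zeros_like(src_img, dtype=float)
    for tri in triangles:                                   # each tri is a (3,) index triple
        warp_triangle(src_img, out, src_pts[tri], dst_pts[tri])
    return out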

Let's take a look at how this worked with my face and my friend's:

Results

My Face

Averaged Face

My Friend's Face

Making a Morphing Video

Using the exact same principle from computing the average face of two images, we can create an entire video of the morph. Instead of plainly averaging both the "shape" (correspondence points) and the pixel values contributed by the image 1 and image 2 warps for each triangle within the triangulation, we can compute many intermediate images by applying a weighted average. For both shape and pixel values, instead of adding the contributions and dividing by 2, we apply the formula weighted_avg_face = (1 - weight) * img_1_contribution + weight * img_2_contribution. We can then create many weighted average images with different values of the weight, ranging from 0 to 1, and combine the frames into a video!
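As a rough sketch, the frame loop could look like the following, reusing the illustrative warp_to_shape helper from the inverse-warping sketch above:

import numpy as np

def morph_sequence(img1, img2, pts1, pts2, triangles, n_frames=45):
    frames = []
    for weight in np.linspace(0, 1, n_frames):
        # blend the shape first, then cross-dissolve the warped pixel values
        avg_pts = (1 - weight) * pts1 + weight * pts2
        warped1 = warp_to_shape(img1, pts1, avg_pts, triangles)
        warped2 = warp_to_shape(img2, pts2, avg_pts, triangles)
        frames.append((1 - weight) * warped1 + weight * warped2)
    return frames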

Here I computed 45 frames of such a transformation and put together a short video. Here's a link in case the video below doesn't work.

Part 2: Computing Mean Face of a Population

Average Shape of a Population

Using similar principles from morphing two faces, we can also look at what the average of many faces looks like. The only change is that instead of taking a weighted average of the transformation contributions of two images, we add the contributions from all of the images and divide by the number of images (a simple linear average).

Here I used an open source image dataset of the faces of 200 Brazilian people, which you can find here. First, let's compute the average face shape of the entire population.

Average Shape

Here we can see that the averaged correspondence points are quite symmetrical, which makes sense: as we average over more and more faces, we expect the mean shape to become increasingly symmetric.

Now let's take some individuals from our dataset and project their faces into the average shape, using the same technique as when we computed the mean face of two images, except now each output has contributions from only one image.

Original Images

Images Morphed Into Population Average Face Shape

Original Images

Images Morphed Into Population Average Face Shape

Averaged Face of Population

Now we will add the contributions from every image into the averaged population face shape to compute what the average face looks like. Once again, each individual triangle within each image will have its own transformation matrix and contribution, and we will take a linear average of all 200 contributions.
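A sketch of this computation, again assuming the illustrative warp_to_shape helper from before:

import numpy as np

def population_mean(images, points_list, triangles):
    mean_pts = np.mean(points_list, axis=0)                 # average face shape of the population
    warped = [warp_to_shape(im, pts, mean_pts, triangles)   # each face warped into that shape
              for im, pts in zip(images, points_list)]
    return np.mean(warped, axis=0), mean_pts                 # linear average of all warped faces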

Part 3: Caricatures

Morphing my Face Into The Population Mean

As we saw with the individual images from the dataset, an individual's face gets mushed into the population mean face shape, which can create some funny, accentuated features. My face gets stretched upwards, as shown below!

My Normal Face

Merged Into Population Face Shape

Extrapolation of Facial Features

We can also extrapolate beyond the mean by using a negative weight coefficient. In our original weighted average, (1 - weight) * im1 + weight * im2, a negative weight makes the contribution of my face negative, removing the features that make my face "my face" and further accentuating features from the mean face! This leads to a bit of warping in the image but makes my cheeks and eyes puff up!
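A sketch of the extrapolation, using the same illustrative warp_to_shape helper and assuming pixel values in [0, 1]:

import numpy as np

def caricature(my_img, my_pts, mean_img, mean_pts, triangles, alpha=-0.75):
    # same weighted blend as the morph, but with a weight outside [0, 1]
    ex_pts = (1 - alpha) * mean_pts + alpha * my_pts
    warped_mean = warp_to_shape(mean_img, mean_pts, ex_pts, triangles)
    warped_me = warp_to_shape(my_img, my_pts, ex_pts, triangles)
    out = (1 - alpha) * warped_mean + alpha * warped_me
    return np.clip(out, 0, 1)                               # keep pixel values in valid range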

My Normal Face

Extrapolation from Population Face Shape, α = -0.75

Part 4: Morphing Facial Expressions Music Video

Morphing Between My Facial Expressions

Using the same techniques from creating a video of one face morphing into another, I took 6 pictures of myself making different facial expressions and morphed each image into the next (computing 45 interpolated frames between each pair of images). In order to capture some of the more difficult expressions, I used more correspondence points for each image around the eyebrows, nose, mouth, and lips to make the morphs as smooth as possible.

Original Images

Here's a link in case the video below doesn't work.

Reflection

Overall, I really enjoyed applying the warping techniques discussed in class to many images and seeing them come to life in animations! The video creation was definitely very fun, and writing clean, organized code was key to helping the project go smoothly and ensuring that I could reuse code components from one part of the project to the next. I am excited to further explore and create animations and warps later on with other cool images!