Muhab Abdelgadir CS 194-26

The goal of this project is to take a video of a box marked with a 3D grid, label the grid points manually in the first frame, and let the computer track them through the remaining frames. This is indeed a Poor Man's Augmented Reality.

Here is the initial video.

Next, we have the computer capture our manually selected points; once it has them, it is ready to track them through the following frames.

Next, we need the world coordinates, which are the coordinates of the points on the box's grid:
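As a minimal sketch of what these world coordinates might look like (the grid spacing and the number of visible points are assumptions, since the write-up does not specify them), each tracked image point is paired with an (x, y, z) position in the box's own coordinate frame, with unit spacing between grid lines:

```python
import numpy as np

def make_world_coords():
    """Hypothetical world coordinates for grid corners on two visible
    faces of the box, assuming unit spacing between grid lines."""
    pts = []
    for x in range(3):
        for z in range(3):
            pts.append((x, 0, z))   # front face (y = 0)
    for y in range(1, 3):
        for z in range(3):
            pts.append((0, y, z))   # side face (x = 0)
    return np.array(pts, dtype=float)

world_pts = make_world_coords()
print(world_pts.shape)  # (15, 3)
```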

Next, we must track the points of the box across frames. My first attempts ran into difficulty because the tracker would miss a point or two in every frame. The tracker handles non-rectangular shapes, and eventually it followed all of the needed coordinates.
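One simple way to track a point from frame to frame is SSD template matching: compare the patch around the point against every candidate patch in a small search window of the next frame. This is only a sketch of that idea, not necessarily the tracker used in the project (a real implementation might instead use an off-the-shelf tracker such as OpenCV's MedianFlow); the patch and search-window sizes are assumptions:

```python
import numpy as np

def track_point(prev, curr, pt, patch=4, search=8):
    """Track one point from frame `prev` to frame `curr` by SSD template
    matching over a (2*search+1)^2 window of candidate displacements."""
    x, y = pt
    tmpl = prev[y-patch:y+patch+1, x-patch:x+patch+1].astype(float)
    best, best_xy = np.inf, pt
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y+dy-patch:y+dy+patch+1,
                        x+dx-patch:x+dx+patch+1].astype(float)
            if cand.shape != tmpl.shape:
                continue  # candidate window fell off the image edge
            ssd = np.sum((cand - tmpl) ** 2)
            if ssd < best:
                best, best_xy = ssd, (x + dx, y + dy)
    return best_xy
```

Running one tracker per labeled point, seeded from the previous frame's position, propagates the whole grid through the video.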

Next, we can calculate the projection matrix, which maps world coordinates to image coordinates. The 3×4 projection matrix is defined only up to scale, so it has 11 degrees of freedom, and we can solve for it with least squares: $Ax = b \rightarrow (Ax)^T = b^T \rightarrow x^T A^T = b^T$
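A minimal least-squares sketch of this fit, assuming the scale ambiguity is resolved by fixing the bottom-right entry of the projection matrix to 1 (one common normalization; the write-up leaves this choice open):

```python
import numpy as np

def fit_projection(world, img):
    """Least-squares fit of a 3x4 projection matrix P mapping 3D world
    points to 2D image points, with P[2,3] fixed to 1."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(world, img):
        # u * (p9*X + p10*Y + p11*Z + 1) = p1*X + p2*Y + p3*Z + p4, etc.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z])
        b += [u, v]
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(p, 1.0).reshape(3, 4)

def project(P, world):
    """Project 3D points through P and dehomogenize to pixel coordinates."""
    h = P @ np.c_[world, np.ones(len(world))].T
    return (h[:2] / h[2]).T
```

With the tracked image points and their known world coordinates in hand, this gives one projection matrix per frame.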

Next, using the per-frame projection matrix, we can map every vertex of a basic cube onto the box in every frame:
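A sketch of that step: project the 8 vertices of a unit cube with the current frame's projection matrix, then connect the 12 edges. The cube's placement and size are assumptions here; the actual drawing would use something like `cv2.line` on each edge:

```python
import numpy as np

# Unit-cube vertices, indexed so that vertex i has coordinates given by
# the bits of i; two vertices form an edge iff they differ in one axis.
CUBE = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
EDGES = [(i, j) for i in range(8) for j in range(i + 1, 8)
         if np.sum(np.abs(CUBE[i] - CUBE[j])) == 1]

def draw_cube_points(P):
    """Project the cube's vertices through a 3x4 projection matrix P,
    returning (8, 2) pixel positions and the edge index pairs to draw."""
    h = P @ np.c_[CUBE, np.ones(8)].T   # homogeneous image coordinates
    pts = (h[:2] / h[2]).T
    return pts, EDGES
```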

Overall, this activity was fun and informative. AR seems very doable and plausible with these basics. There are many challenges in mapping the real world to the AR world, but this seems very interesting regardless.

The goal of this project is to create a simple, planar 3D scene from a single photograph. I also went beyond the specification: I can condense images into a smaller image with seam carving and then manipulate the result.

Inputs:

Outputs:

The last image can be manipulated twice to get a better perspective.

I needed to compute the 3D coordinates of each vertex of each of the five planes. Then I defined 3D geometry corresponding to these planes. Finally, I used homography warping code from previous projects to rectify the textures for the planes and texture-map them onto the 3D model. I was then able to move and rotate the camera and look at the scene from different viewpoints.
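The core of the rectification step is computing a homography from point correspondences. Here is a minimal sketch of that solve, assuming the usual normalization that fixes the bottom-right entry of H to 1 (so the homography has 8 degrees of freedom); the exact warping code from the earlier project is not reproduced:

```python
import numpy as np

def compute_homography(src, dst):
    """Least-squares fit of a 3x3 homography H mapping src points to dst
    points from four or more correspondences, with H[2,2] fixed to 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u * (h7*x + h8*y + 1) = h1*x + h2*y + h3, and similarly for v
        A.append([x, y, 1, 0, 0, 0, -u*x, -u*y])
        A.append([0, 0, 0, x, y, 1, -v*x, -v*y])
        b += [u, v]
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)
```

Each plane's texture is rectified by mapping its four image corners to an axis-aligned rectangle with such a homography.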

Bells and Whistles:

We can further manipulate images by implementing seam carving, which is based on an energy function applied to the image. Once the energy is computed, we find the minimal seam and remove it:
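The write-up does not pin down the exact energy function; a gradient-magnitude energy is one common choice, sketched here:

```python
import numpy as np

def energy(img):
    """Gradient-magnitude energy: high where the image changes quickly,
    low in smooth regions (where seams should preferentially pass)."""
    gray = img.mean(axis=2) if img.ndim == 3 else img.astype(float)
    dy, dx = np.gradient(gray)
    return np.abs(dx) + np.abs(dy)
```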

The seams are calculated by sweeping through the image and summing up the minimal-energy paths. To remove a vertical seam (decrease width), we sweep row by row; to remove a horizontal seam (decrease height), we sweep column by column. For each pixel, we add its energy to the minimal cumulative value among the three neighbors in the previous row (or column). By the end of this process, we have a seam map as an image such as this:
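The row-by-row sweep and backtrack described above can be sketched with dynamic programming (the vertical-seam case; the horizontal case is the same on the transpose):

```python
import numpy as np

def min_vertical_seam(E):
    """Cumulative seam map M[i, j] = E[i, j] + min of the three entries
    above; backtrack from the cheapest bottom-row entry to get the seam."""
    h, w = E.shape
    M = E.astype(float).copy()
    for i in range(1, h):
        left = np.r_[np.inf, M[i-1, :-1]]   # neighbor up-left (inf at edge)
        right = np.r_[M[i-1, 1:], np.inf]   # neighbor up-right (inf at edge)
        M[i] += np.minimum(np.minimum(left, M[i-1]), right)
    seam = [int(np.argmin(M[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(0, j - 1), min(w, j + 2)
        seam.append(lo + int(np.argmin(M[i, lo:hi])))
    return seam[::-1]  # seam[i] = column to remove in row i
```

Removing the returned column from each row, and repeating, shrinks the image one seam at a time.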

In sum, I implemented the Poor Man's Augmented Reality and Tour Into the Image + Image Seam Condensing. For the latter, I added the image seam condensing to further show how we can manipulate images. I built off our previous projects, and it was fairly straightforward.