CS184 AS5: Raytracer with Scene Hierarchy

DUE DATE: Saturday February 26, 11:00pm
To be done individually!

Aim

For this project we will extend the raycaster to support shadows and to load a scene hierarchy. We will support arbitrary transforms on our spheres, so that we can render ellipsoids.

IMPORTANT: To complete this project, you need to read the textbook's chapters on raytracing. Without them this assignment will be extremely hard! Please save yourself and us the trouble and read these chapters.

Minimum Specifications

For this assignment, write a program called raytracer that accomplishes at least the following:
  1. Launch your raytracer using raytracer scene.scd output.bmp, taking the scene file to render and the output image filename as arguments. The scene file contains your eye and viewport location, so render the given scene for that camera location and orientation.
  2. Image Output: When the image is completely rendered, immediately write the image to disk according to the supplied output.bmp filename.
  3. Transformations:
    • Linear transforms on spheres: Support rotate, translate, and non-uniform scaling transformations on the sphere. The scene parser will load these transforms from the scene file into the Scene DAG for you as it did for as3.
    • Ray transforming: An intersection test with a transformed sphere is done by inverse-transforming the ray, then testing it for intersection with the untransformed sphere (see the sketches after this list). Supporting this will allow you to easily render ellipsoids.
    • Note on Spheres: Since we support arbitrary transformations, all spheres can now be considered the unit sphere at the origin, with some compound transformation applied to them!
    • Transforming Normals: The normal vector for the surface at the point of intersection is now calculated on the untransformed sphere, and needs to be transformed with an inverse transpose transformation to be properly oriented in world space. Please see Shirley section 6.2.2 for details on transforming normals.
  4. Scene Hierarchy:
    • Load in a SCD_09 file similar to the scene files in as3.
    • As for as3, the scene file parser will supply you with a DAG of all the objects in your scene. You need to traverse this to render the scene. While traversing, keep track of the current transform by using your own stack, since you don't have OpenGL's stack anymore. Fortunately, if you write this method as a recursive traversal, you can use the program's execution stack to keep track of transforms by passing transforms down as arguments to the traversal function.
    • We do NOT store color or LOD data on the DAG for this assignment, so you do not need to keep track of any data other than the transformations (and the colors of the leaf spheres) as you traverse the DAG.
    • For this assignment, you are only required to traverse the DAG once to build a flat representation of your scene, which you then use to render the scene. See the Implementation Tips section for details.
  5. Raytracing:
    • Shadows: Before adding the contribution from a light, always cast a "shadow ray" to check if the light is visible from that point. Only add the light if it is visible (see the shadow-ray sketch after this list).
    • Falloff for Lights: We want to model lights as dimming with distance, and be able to adjust the factor by which this extinction occurs. This factor is already included in the specification of each light, and is read into the light class. Apply this falloff according to the absolute distance from the light to the current location. Specifically, you should scale the light intensity by (1/(distToLight+deadDistance))^(falloff).
  6. Distribution Raytracing for Anti-aliasing:
    • Multiple rays per pixel: Shooting only a single ray per pixel can result in objectionable aliasing effects; these can be reduced by shooting multiple rays per pixel and averaging the returned (r,g,b) intensities. Change your code to shoot multiple rays per pixel. This should take very little work! All you need to do is modify the viewport to iterate over smaller steps than one pixel when getSample is called. (Note: If you use the setRaysPerPixel function, you may want to add a line to that function to also set _incPP to the sqrt() of rpp!) A supersampling sketch follows this list.
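
Below is a minimal sketch of the transformed-sphere test from item 3. It assumes the framework's vec4/mat4 and Ray types, that each sphere stores its composite world transform plus a precomputed inverse (_mInverse is an illustrative name), and that a Ray can be built from a start point and a direction; adapt the names to whatever your framework actually provides.

// Sketch: intersect a world-space ray with a transformed unit sphere by
// inverse-transforming the ray into the sphere's object space. The direction
// is deliberately NOT normalized, so the t returned by the object-space test
// is also valid along the original world-space ray.
double TransformedSphere::intersect(const Ray &worldRay) {
    vec4 localStart = _mInverse * worldRay.start();      // world -> object
    vec4 localDir   = _mInverse * worldRay.direction();  // world -> object
    Ray localRay(localStart, localDir);

    // test against the untransformed unit sphere at the origin
    // (the as4 quadratic at the bottom of this page, with center (0,0,0) and radius 1)
    return intersectUnitSphere(localRay);
}

The hit point and normal are then computed in object space and pushed back out to world space (normals via the inverse transpose, as described above).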
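
For item 5, here is one way the shadow test and falloff might slot into the point-light case of the lighting loop (the as4 reference code near the bottom of this page). The LightInfo field names (deadDistance, falloff), the Ray constructor, and the 0.001 offset are assumptions, not guaranteed framework API.

// Sketch: shadow ray and distance falloff for a point light, placed before
// the diffuse/specular contributions are accumulated.
vec3 lightColor = info.color;
if (lighttype == LIGHT_POINT) {
    vec3 toLight = pos - hit;
    double distToLight = sqrt(toLight * toLight);   // operator* is a dot product

    // shadow ray: start slightly off the surface so we don't re-hit it
    Ray shadowRay(vec4(hit + 0.001*n, 1), vec4(toLight, 0));
    double tS; vec3 nS; MaterialInfo mS;
    // with the unnormalized direction above the light itself sits at t = 1,
    // so only intersections with t < 1 actually block it
    if (world->intersect(shadowRay, tS, nS, mS) && tS < 1.0)
        continue;   // light is occluded: add no contribution from it

    // falloff: scale the intensity by (1/(distToLight + deadDistance))^falloff
    lightColor = pow(1.0/(distToLight + info.deadDistance), info.falloff) * lightColor;
}
// ...then use lightColor in place of info.color in the diffuse and specular terms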
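
For item 6, the per-pixel loop might average a small grid of sub-pixel samples as sketched below, with _incPP = sqrt(raysPerPixel) as suggested above; createRay and setPixel are placeholder names for however your viewport and film actually expose those operations.

// Sketch: average an _incPP x _incPP grid of sub-pixel samples per pixel
for (int py = 0; py < height; py++) {
    for (int px = 0; px < width; px++) {
        vec3 color(0,0,0);
        for (int sy = 0; sy < _incPP; sy++) {
            for (int sx = 0; sx < _incPP; sx++) {
                // place each sample at the center of its sub-pixel cell
                double x = px + (sx + 0.5) / _incPP;
                double y = py + (sy + 0.5) / _incPP;
                Ray ray = viewport.createRay(x, y);   // placeholder name
                color += raycast(ray);
            }
        }
        // store the average of all samples for this pixel
        film->setPixel(px, py, (1.0 / (_incPP * _incPP)) * color);
    }
}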

Ideas for Extra Credit:

Again, in approximate order of easiest to most difficult.

Additions to the Scene Description

The scene description now includes the following statements:

(sphere id
   (radius radius_float)
   (material material_id)
)

(material id
   (color  color_triple )
   (ka ka_float)   # diffuse reflection coefficient for ambient light (hack!)
   (kd kd_float)   # diffuse reflection coefficient
   (ks ks_float)   # specular reflection coefficient, aka "kr"
   (ksp ksp_float) # specular angle fall-off
   (ksm ksm_float) # metalness  
   (kt kt_float)   # transmission coefficient
   (ktn ktn_float) # refractive index
)

(camera id
   (perspective  0|1 )   # (perspective 0) means parallel projection
   (l l_float)           # left   boundary of window in the image/near-clipping plane
   (r r_float)           # right  boundary of window in the image/near-clipping plane
   (b b_float)           # bottom boundary of window in the image/near-clipping plane
   (t t_float)           # top    boundary of window in the image/near-clipping plane
   (n n_float)           # sets the -z coordinate of the image plane, and of the near-clipping plane
   (f f_float)           # sets the far clipping plane; this is typically not used for raytracing.
)



(light id
   (type  lighttype_flag)
   (color  color_triple )
   (falloff  falloff_float)               # falloff exponent for point- and spot-lights
   (deaddistance  deaddistance_float)     # dead_distance  for point- and spot-lights
   (angularfalloff  angularfalloff_float) # exponent on cosine for spot-lights
      # by default localized lights are positioned at (0,0,0)
)     # and directed lights are shining in the direction (0,0,-1).



Note that positions and orientations are not specified directly because they can be specified using transformations. If untransformed, all objects are at (0,0,0) and all directions are (0,0,-1).

Note that the frustum is specified using the OpenGL convention: l, r, b, t specify a rectangle on the near plane z = -n. Therefore, for example, the upper left corner of your screen would be placed as UL = vec4(l,t,-n,1);
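
As a concrete illustration of this convention, here is a minimal sketch mapping a normalized sample position (u,v) in [0,1]x[0,1] to a point on the image plane in camera space; l, r, b, t, n are the camera parameters above, and the result still has to be carried through the camera's own transform to reach world space.

// Sketch: (u,v) in [0,1]^2 -> point on the image plane (camera space),
// using the OpenGL-style window given by l, r, b, t on the plane z = -n
vec4 pointOnImagePlane(double u, double v) {
    double x = l + u*(r - l);   // u = 0 at the left edge, 1 at the right
    double y = b + v*(t - b);   // v = 0 at the bottom edge, 1 at the top
    return vec4(x, y, -n, 1);   // e.g. (u,v) = (0,1) gives UL = vec4(l,t,-n,1)
}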

Although the scene format supports multiple named cameras, we do not provide a way to specify which camera is used for rendering. Therefore, for now you may choose your camera arbitrarily in any scene with multiple cameras.

Example Scene (for submission)


For submission purposes, please render THIS scene file. The output, if rendered with 4 rays per pixel (2 by 2 in a grid), should look like the image below. (Caveat: Shadows may vary slightly depending on your choice of threshold value.)


For testing you may initially want a less complex scene. Here is an example that recreates the scene from as4: threespheres.scd. This should look like:

Submission

To submit this project, all of the following must be done by the deadline. If you want to create more pictures and the like, please go ahead!

Windows Users: The grader should ONLY have to open your .sln file and press F5 to build and run your solution.
*Nix Users: The grader should ONLY have to run make with the appropriate makefile to build your project. Thus, use make on Mac and Linux, and gmake on Solaris.

Note: The submit program retains the directory structure of what you send it. Thus, we recommend making a new directory for your assignment on the server, cd'ing into that directory, copying the whole framework with your code into this directory, and running yes | submit as5 to easily submit the whole project to us.

Framework

For this project you will continue using the same framework as you used in as4.

Implementation Tips

This project consists of four main ideas -- 3D transformations, scene hierarchies in 3D, shadow rays, and super-sampling. First build out the scene hierarchy details, then worry about implementing shadow rays and supersampling, since these two are independent of one another.

For as4 your code was probably set to re-render the scene on each call to display(). As the scene gets more complex and you add supersampling and shadow rays, this will become increasingly annoying, so you'll want to change your code to do the raycasting loop only once and just call film->show() on display().
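
A minimal sketch of that change, assuming your raytracing loop lives in something like renderScene() (an illustrative name):

void display() {
    static bool rendered = false;   // raycast only on the first display call
    if (!rendered) {
        renderScene();   // the full loop of samples / raycast() calls
        rendered = true;
    }
    film->show();        // afterwards, just redisplay the stored image
}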

Note that the inverse transpose transformation of normals does not account for the 'homogeneous'/translation parts of the matrix: when you multiply a normal through, the old translation bits will mess up the w component of the resulting normal vector. Therefore you may want to just manually set "normal[VW] = 0;" after multiplying by the inverse transpose.
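
For example, a short sketch of the normal transform, assuming the sphere's inverse world transform has already been computed (the matrix name and vec4 constructor are illustrative):

// Sketch: object-space normal -> world space via the inverse transpose,
// then zero out w so leftover translation cannot contaminate the normal
vec4 n4 = _mInverse.transpose() * vec4(objectNormal, 0);
n4[VW] = 0;                            // manually clear w, as suggested above
vec3 worldNormal(n4.normalize(), VW);  // renormalize and drop down to a vec3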

DAG Traversal and building a scene description

While the scd files map to a nice mental model of the scene, it's relatively expensive and cumbersome to traverse the scene graph for every ray. We suggest traversing the graph once to build your own rendering data structure representing your scene. For your initial implementation, try just building a vector of spheres, each also containing the transform (and optionally inverse transform) to place it in the world. Note that everything in the scene except the spheres can be converted to worldspace coordinates on loading the scene.
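
Here is a minimal sketch of that one-time flattening pass, written as the recursive traversal described in the minimum specifications; SceneNode, getTransform, getChildren, isSphere, and getMaterial are illustrative names standing in for whatever the as5 parser's DAG actually exposes.

// Sketch: one pass over the DAG to build a flat vector of spheres, each
// paired with its composite world transform (and its precomputed inverse).
struct FlatSphere {
    mat4 transform;        // object -> world
    mat4 inverse;          // world  -> object, used to inverse-transform rays
    MaterialInfo material; // color/coefficients of the leaf sphere
};

void flatten(SceneNode *node, const mat4 &parentXf, vector<FlatSphere> &out) {
    // the composite transform is carried down through the recursion's
    // arguments, which plays the role of the old OpenGL matrix stack
    mat4 xf = parentXf * node->getTransform();

    if (node->isSphere()) {
        FlatSphere s;
        s.transform = xf;
        s.inverse   = xf.inverse();
        s.material  = node->getMaterial();
        out.push_back(s);
    }

    for (unsigned int i = 0; i < node->getChildren().size(); i++)
        flatten(node->getChildren()[i], xf, out);
}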

AS4 reference code

To help you avoid spending too much time on AS4 issues while doing AS5, we provide snippets of AS4 reference code.

Raycast:
vec3 raycast(Ray & ray) {
    vec3 retColor(0,0,0);
    vec3 d = vec3(ray.direction().normalize(),VW);

    double t; vec3 n; MaterialInfo m;
    if (world->intersect(ray, t, n, m)) {
        // ambient term
        retColor += pairwiseMult(m.color, world->getAmbientLight()) * m.k[MAT_KA];
        vec3 hit = ray.getPos(t);
        // specular color: blend between the material color and white by metalness (ksm)
        vec3 S = m.k[MAT_KSM]*m.color+(1-m.k[MAT_KSM])*vec3(1,1,1);
        for (int lighttype = 0; lighttype < 3; lighttype++) {
            for (vector<Light>::iterator it = world->getLightsBeginIterator(lighttype); 
                    it != world->getLightsEndIterator(lighttype); ++it)
            {
                const LightInfo &info = it->getLightInfo();
                vec3 pos = it->getPosition();
                vec3 incident = -it->getDirection();
                if (lighttype == LIGHT_POINT)
                    incident = pos-hit;
                incident.normalize();
                // diffuse
                retColor += MAX(0,incident*n) * m.k[MAT_KD] * pairwiseMult(m.color, info.color); 
                // specular
                double rvdot = MAX(0,-d*(-incident+2*(incident*n)*n));
                retColor += m.k[MAT_KS] * pow(rvdot,m.k[MAT_KSP]) * pairwiseMult(S, info.color); 
            }
        }
    }

    return retColor;
}

Sphere intersection:
// Solve |e + t*d - c|^2 = r^2 for t, where c = _p is the sphere center and
// r = _r its radius: (d*d)t^2 + 2(d*ec)t + (ec*ec - r^2) = 0 with ec = e - c
vec3 d(ray.direction(), VW);
vec3 e(ray.start(), VW);
vec3 ec = vec3(vec4(e,1)-_p, VW);                 // vector from center to ray start
double dec = (d * ec);
double desc = dec*dec - (d*d)*(ec*ec - _r*_r);    // discriminant (up to a factor of 4)
if (desc < 0) {
    return numeric_limits<double>::infinity(); //no hit!
}
desc = sqrt(desc);
double t1 = (-1*dec - desc) / (d*d);              // nearer root
double t2 = (-1*dec + desc) / (d*d);              // farther root
// return the smallest positive root (or infinity if both lie behind the ray)