In this segment, we'll briefly review the lighting calculations that
you need to do in your ray tracer. Notice that in the code outline, we
moved from generating the eye rays, to finding the intersection
points, to now finding the color that should be assigned to a
ray-surface intersection point.
First, let's talk a little bit about shadows. You have this camera,
and you shoot a ray to the object. Then you shoot a ray from the hit
point to the light source. If that shadow ray to the light source is
unblocked, the point is lit. On the other hand, consider another
point, whose ray to the light source is blocked: that point is in
shadow.
There are, of course, numerical issues with computing shadows. What
you want is for the shadow ray to reach the light source unblocked,
but what may end up happening, for numerical reasons (and of course
I've exaggerated this), is that the ray starts slightly below the
surface, and the surface then incorrectly ends up shadowing itself.
There's a simple solution: move the ray origin a small epsilon toward
the light, maybe 1 / 1,000 or 1 / 1,000,000, before shooting the
shadow ray. That's as far as shadow rays are concerned. So the simple
lighting model is that, once you hit a surface, you first shoot a
shadow ray. Each light is treated separately. For a given light, if
the shadow ray says the light is blocked, there is no direct lighting
contribution.
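The epsilon offset and the per-light shadow test can be sketched as follows. This is only a sketch: `any_hit` is a hypothetical scene-intersection callback, not a function from the homework skeleton.

```python
def is_lit(point, light_pos, any_hit, eps=1e-3):
    """Return True if the light at light_pos is visible from `point`.

    `any_hit(origin, direction, max_dist)` is assumed to return True
    if any object blocks the ray within max_dist.
    """
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = sum(c * c for c in to_light) ** 0.5
    direction = [c / dist for c in to_light]
    # Nudge the origin a small epsilon toward the light, so the shadow
    # ray cannot numerically re-intersect the surface it starts on.
    origin = [p + eps * d for p, d in zip(point, direction)]
    # Only occluders strictly between the origin and the light count.
    return not any_hit(origin, direction, dist - eps)
```

Each light gets its own such test; if `is_lit` returns False for a light, that light contributes nothing directly.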
If the shadow ray says the light is visible, we can apply the lighting
model. For your homework, the lighting model used is really the same
as in OpenGL. You have global lighting model parameters, which are the
ambient (r g b) color and the attenuation, with constant, linear, and
quadratic terms; d is the distance to the light source.
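As a sketch, the OpenGL-style attenuation factor with constant, linear, and quadratic coefficients might look like this (the function name and default values are illustrative, not from the homework):

```python
def attenuation(d, const=1.0, linear=0.0, quadratic=0.0):
    """Scale factor applied to a point light's intensity at distance d."""
    return 1.0 / (const + linear * d + quadratic * d * d)
```

With the defaults (1, 0, 0) there is no falloff at all; with a nonzero quadratic coefficient, the light falls off with the square of the distance.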
Then you have model parameters that are per light. So for a
directional light, you have the light source direction and the (r g b)
parameters, the color. For a point light, you have the location.
The real difference is whether the homogeneous coordinate is zero for
a directional light or non-zero for a point light. There are some
differences from the homework 2 syntax, but conceptually this part of
the specification is almost exactly the same as in homework 2.
Then you have the model for the materials: the diffuse reflectance,
which is an (r g b) color; the specular reflectance, also (r g b); the
shininess of the material, s; and the emission (r g b). All of these
are essentially the same as in homework 2, and as in OpenGL.
Finally, I have written down explicitly the formula for the shading
model. Notice first the ambient term, then the emission. So really,
your intensity is initialized by the sum of ambient and emission. Then
we have the per light terms. For each light, what do you do?
Notice the visibility term. This is different from OpenGL, which did
not have you evaluate the visibility, but in the context of a ray
tracer it's easy to just shoot a shadow ray. If the shadow ray returns
that the light is blocked, then you don't consider that light at all.
L_i is the intensity of the light; of course, it's different for each
color channel. Then you have the diffuse term, k_d * max(l_i . n, 0),
where l_i is the direction to the light. Plus the specular term,
k_s * max(h_i . n, 0)^s, where h_i is the half-angle vector and s is
the shininess. So, that is the half-angle term.
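The full shading model above can be sketched as follows: intensity starts at ambient plus emission, and each visible light adds a diffuse and a specular (half-angle) term. This is only a sketch under simplifying assumptions: all vectors are unit 3-tuples, colors are per-channel (r, g, b), attenuation is assumed to be folded into each light's intensity, and the names are illustrative.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(ambient, emission, kd, ks, shininess, n, lights):
    """Per-pixel shading: I = ambient + emission
       + sum over visible lights of L_i * (k_d max(l.n,0) + k_s max(h.n,0)^s).

    `lights` is a list of (visible, intensity_rgb, l_dir, h_dir) tuples,
    where `visible` is the result of the shadow-ray test.
    """
    color = [a + e for a, e in zip(ambient, emission)]
    for visible, intensity, l, h in lights:
        if not visible:          # shadow ray said the light is blocked
            continue
        diff = max(dot(l, n), 0.0)
        spec = max(dot(h, n), 0.0) ** shininess
        for c in range(3):
            color[c] += intensity[c] * (kd[c] * diff + ks[c] * spec)
    return tuple(color)
```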
Notice the visibility or shadowing term for each light, which is not
in OpenGL. Also notice that this is, of course, evaluated per pixel,
per light. In OpenGL, as in homework 2, you wrote a fragment shader,
so that's not new. But in the old days, OpenGL evaluated lighting per
vertex, whereas in a ray tracer you always do it per pixel.