In this final segment on OpenGL, we are going to talk about texture mapping, which is responsible for the wooden floor in the mytest3 demo. So here are the new globals and the basic setup. Before that, I'll just show a very brief demo of the program. So this is the idea, and all that's happened is that we've replaced the white floor with the wood texture.

First, I define the wood texture. That can be any image; in this case it's 256 by 256, with RGB channels. Then I have to define a few other things in order to get texturing to work. texNames is the array of texture names; in this case I have only one texture, so it's an array of length 1. istex says whether to texture or not, and islight does the same for lighting. texturing is set to 1 and turns texturing on and off; lighting is set to 1 and turns lighting on and off. (There is a sketch of these globals and toggles at the end of this overview.) So, again, if I go to my demo I can turn off the texture by pressing the t key, and I can turn it on by pressing the t key again.

In display, I set the islight uniform to 0, so I turn off the lighting for everything except the teapot. I set istex to texturing, which again can be toggled with the t key. Then I draw the texture on the floor using texNames[0], the only element in that array. Finally, I set istex back to 0, so that the other objects aren't textured. So these are simple keyboard toggles: t turns texturing on and off, and s turns shading (lighting) on and off. We haven't shown that, but it's something you might want to try.

What is the idea of texturing? It's really one of the remarkable breakthroughs in computer graphics. Before it was developed in the 1970s, if you wanted to create an image, for example of this dinosaur here, and you wanted it to have the nice skin texture you imagine dinosaurs actually had, you would have to break the geometry into small regions and color each triangle separately. Then maybe you could use Gouraud shading to interpolate those colors. But that requires something like a million triangles to represent the geometry, which is really difficult and slows your 3D graphics down. So in the early 70s this was thought to be a fundamental limitation of computer graphics.

Texturing separates the rate, or frequency, at which shading varies from the representation of the geometry. The idea is that you supply an image, and you add detail in this image-based way. So if you have a plane, like a floor or a desk, you can represent it using only the 4 vertices of the rectangle that specifies the plane, but then you also specify an image that gets mapped onto the plane, and that's the texture. Of course, images can be mapped onto more complicated shapes as well. It's not entirely trivial how to do that, but you specify texture coordinates at each vertex, and in this way texturing is really a very basic primitive: throughout the graphics pipeline, you separate the geometry from the texture.

In the real world, most surfaces are textured. Here is a simple example of a scene with just the polygonal model on the left and with surface texture on the right, which of course makes it look more realistic. You can add things like wood grain, face textures, and bricks, which adds a lot of visual detail to scenes, and it can all be done in a fragment shader.

So, for the setup of the texture: you initialize the wood texture in the shader program, and here is very basic code to read a PPM file.
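To make that concrete, here is a minimal sketch of what those globals and the display/keyboard toggles might look like. The names woodtexture, texNames, istex, islight, texturing and lighting come from the walkthrough above; the exact types, the FLOOR constant, the drawtexture helper and the surrounding GLUT boilerplate are assumptions filled in for illustration, not the actual mytest3 code.

```cpp
#include <GL/glew.h>   // assumption: GLEW (or similar) supplies the GL 2.0 entry points
#include <GL/glut.h>

// Globals, following the names used above; exact types are assumptions.
GLubyte woodtexture[256][256][3];  // the 256 x 256 RGB wood image
GLuint  texNames[1];               // one texture, so an array of length 1
GLint   istex;                     // shader uniform location: texture this object or not
GLint   islight;                   // shader uniform location: apply lighting or not
int     texturing = 1;             // toggled by the 't' key
int     lighting  = 1;             // toggled by the 's' key

const int FLOOR = 0;                           // index of the floor object (assumption)
void drawtexture(int object, GLuint texture);  // drawing routine, sketched later

void display(void) {
  // Assumes the shader program is already in use (glUseProgram was called in init).
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  glUniform1i(islight, 0);          // lighting off, except for the teapot
  glUniform1i(istex, texturing);    // texture the floor only if texturing is on
  drawtexture(FLOOR, texNames[0]);  // texNames[0] is the only texture we have
  glUniform1i(istex, 0);            // make sure nothing else gets textured
  // ... draw the remaining objects, setting islight to 1 for the teapot ...
  glutSwapBuffers();
}

void keyboard(unsigned char key, int x, int y) {
  switch (key) {
  case 't': texturing = !texturing; glutPostRedisplay(); break;  // toggle texturing
  case 's': lighting  = !lighting;  glutPostRedisplay(); break;  // toggle lighting
  }
}
```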
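And here is a rough sketch of the kind of "very basic" P6 reader being described next. It assumes the file really is a 256 by 256, 8-bit RGB image with no comment lines in the header; the actual mytest3 code may differ in details such as error handling.

```cpp
#include <GL/gl.h>
#include <cstdio>

extern GLubyte woodtexture[256][256][3];   // the global defined with the other globals

// Very basic binary (P6) PPM reader: header, one whitespace character, raw pixels.
void readppmfile(const char *filename) {
  FILE *fp = fopen(filename, "rb");
  if (!fp) { perror(filename); return; }

  char magic[3] = {0};
  int width = 0, height = 0, maxval = 0;
  fscanf(fp, "%2s %d %d %d", magic, &width, &height, &maxval);  // "P6 256 256 255"
  fgetc(fp);                         // skip the single newline after the header

  // The binary RGB pixels follow immediately; read them straight into the array.
  fread(woodtexture, 3, width * height, fp);
  fclose(fp);
}
```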
I know that PPM files are not very popular nowadays, so you can replace this with an image library that reads a more common format, but I like it because the format is really very simple. You open the file, you read the first line, which is P6, then the dimensions of the image, 256 by 256, you skip the newline, and then you read in the wood texture.

Then you specify texture coordinates; each vertex must have texture coordinates. What you do is use the standard rasterization, or Gouraud shading, hardware to interpolate these texture coordinates, and then for each fragment or pixel you use the interpolated value to look up the texture, rather than simply interpolating colors.

So here is a lot of code to set this up (it is sketched after this walkthrough). glGenTextures(1, texNames) gives you a name to use for the first texture. You bind the buffer at index numobjects times numperobj, plus ncolors, so the last buffer is the texture-coordinate buffer, and you define the buffer data. You activate texture unit 0; you can have multiple textures in a scene. You enable 2D texturing. Then you come to the texture coordinates: you have to enable the client state for the texture coordinate array, and you have to bind the 2D texture to the texture name. Again, if you look up texturing in OpenGL, you can find a lot of resources on the web; I've given you the basic code you need.

Specifying the texture image is done with the glTexImage2D command. The target is always GL_TEXTURE_2D. level is 0; that is used for mipmapping, if you want the texture at many different scales. The number of components is 3 or 4, the width and height must be powers of 2, you specify the format, GL_RGB, and you specify the type of the texture data: unsigned byte, float, et cetera. So we specify the texture image using glTexImage2D with GL_TEXTURE_2D, GL_RGB, the size of the texture, GL_UNSIGNED_BYTE, and woodtexture.

Then you specify the minification and magnification behavior of the texture. This is a topic we don't have a lot of time to get into. Essentially a texture has a given size, but when it is mapped onto a surface it may be stretched or shrunk, and either way you need to know how to interpolate. Here we just use simple linear interpolation, but you can do more interesting things. You can also specify nearest, if you just want nearest-neighbor lookup; linear is satisfactory for us. Next, if a texture coordinate falls outside the [0, 1] range of the texture, what should you do? You can clamp to the edge of the texture, or you can just repeat the texture infinitely.

Finally, you need to define what's known as a sampler for the texture. It gets bound to the location tex in my program, and I set it to 0, corresponding to GL_TEXTURE0. istex is bound to the istex location in the program. (A short sketch of this follows the setup code below.)

Drawing with the texture: again, I define the offset for the particular object, and the base, which points to the end of the buffer array. I bind the vertices and enable the vertex array; I bind the colors and enable the color array. All of that is just what we did earlier, but now we also deal with the texture. You make texture 0 the active texture and enable 2D texturing. And here comes the real part: you bind the texture, and then you enable the client state for the texture coordinate array. The GLuint texture you bind here is the one passed in from the main program. And then you bind the buffer for the texture coordinates.
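Here is a sketch of that setup sequence, roughly in the order just described. The names buffers, texcoords, numobjects, numperobj and ncolors are assumed to be the VBO array and counts from the earlier geometry setup; their declared sizes here are illustrative.

```cpp
#include <GL/glew.h>   // for glGenTextures, glBindBuffer, glActiveTexture, etc.

// External pieces from the earlier vertex-buffer setup (names as used above).
extern GLuint  texNames[1];
extern GLubyte woodtexture[256][256][3];
extern GLuint  buffers[];          // VBO names
extern GLfloat texcoords[4][2];    // texture coordinates of the floor quad (assumption)
extern int     numobjects, numperobj, ncolors;

void inittexture(void) {
  glGenTextures(1, texNames);      // get a name for the first (and only) texture

  // The last buffer holds the texture coordinates.
  glBindBuffer(GL_ARRAY_BUFFER, buffers[numobjects * numperobj + ncolors]);
  glBufferData(GL_ARRAY_BUFFER, sizeof(texcoords), texcoords, GL_STATIC_DRAW);

  glActiveTexture(GL_TEXTURE0);    // texture unit 0 (you can have several)
  glEnable(GL_TEXTURE_2D);
  glEnableClientState(GL_TEXTURE_COORD_ARRAY);
  glBindTexture(GL_TEXTURE_2D, texNames[0]);

  // Specify the image: level 0 (no mipmaps here), RGB, 256 x 256
  // (sizes must be powers of 2 in classic OpenGL), unsigned bytes.
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0,
               GL_RGB, GL_UNSIGNED_BYTE, woodtexture);

  // Minification and magnification: simple linear interpolation
  // (GL_NEAREST would give nearest-neighbor lookup instead).
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

  // Coordinates outside [0, 1]: repeat the texture (GL_CLAMP clamps instead).
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
}
```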
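And a short sketch of wiring up the sampler and the toggles, assuming the linked GLSL program object is called shaderprogram; the function name and the texsampler variable are illustrative.

```cpp
#include <GL/glew.h>

extern GLuint shaderprogram;   // the linked GLSL program (assumed name)
extern GLint  istex, islight;  // uniform locations declared with the other globals

void initshaderuniforms(void) {
  glUseProgram(shaderprogram);

  // The sampler in the fragment shader is called "tex"; setting it to 0 makes
  // it read from texture unit GL_TEXTURE0, which is the one we activated.
  GLint texsampler = glGetUniformLocation(shaderprogram, "tex");
  glUniform1i(texsampler, 0);

  // Locations of the per-draw toggles used in display().
  istex   = glGetUniformLocation(shaderprogram, "istex");
  islight = glGetUniformLocation(shaderprogram, "islight");
}
```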
You define the texture coordinate pointer, and then, finally, you draw the object in the standard way. So this is the general idiom of how you draw with textures; the whole routine is sketched below, and you can of course also find many other resources and tutorials online.

The final step for drawing is the shaders. First you define the vertex shader, which just passes on the texture coordinates; it hands you gl_MultiTexCoord0, because your texture is on unit 0. The fragment shader can, of course, do more complicated things, but in this case we just do a texture2D lookup. Here you are looking up tex, which is the sampler, at texture coordinate 0; .st is the standard way to access the two texture coordinates. Of course, you don't have to set the fragment color directly to the texture value; you can modulate the lighting calculation, and you can do other interesting things. Both shaders are sketched below.

I just want to show you the demo again: you have this texture on the floor, and I can turn it on and I can turn it off.
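Putting the drawing steps together, a drawtexture-style routine might look like the sketch below. The Vertices/Colors/Elements buffer slots, PrimitiveSizes, the GL_POLYGON draw mode and the GL_UNSIGNED_BYTE index type are assumptions carried over from the untextured drawing code, not taken from the transcript.

```cpp
#include <GL/glew.h>

extern GLuint buffers[];
extern int    PrimitiveSizes[];   // number of indices per object (assumption)
extern int    numobjects, numperobj, ncolors;
enum { Vertices = 0, Colors = 1, Elements = 2 };  // per-object buffer slots (assumption)

void drawtexture(int object, GLuint texture) {
  int offset = object * numperobj;                 // where this object's buffers start
  int base   = numobjects * numperobj + ncolors;   // the texcoord buffer at the very end

  // Vertices and colors, exactly as in the untextured drawing code.
  glBindBuffer(GL_ARRAY_BUFFER, buffers[Vertices + offset]);
  glVertexPointer(3, GL_FLOAT, 0, (void *) 0);
  glEnableClientState(GL_VERTEX_ARRAY);

  glBindBuffer(GL_ARRAY_BUFFER, buffers[Colors + offset]);
  glColorPointer(3, GL_FLOAT, 0, (void *) 0);
  glEnableClientState(GL_COLOR_ARRAY);

  // The texture part: unit 0 active, 2D texturing enabled, bind the texture
  // passed in from the main program, and point at the texture coordinates.
  glActiveTexture(GL_TEXTURE0);
  glEnable(GL_TEXTURE_2D);
  glBindTexture(GL_TEXTURE_2D, texture);
  glEnableClientState(GL_TEXTURE_COORD_ARRAY);
  glBindBuffer(GL_ARRAY_BUFFER, buffers[base]);
  glTexCoordPointer(2, GL_FLOAT, 0, (void *) 0);

  // Finally, draw the object in the standard way.
  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[Elements + offset]);
  glDrawElements(GL_POLYGON, PrimitiveSizes[object], GL_UNSIGNED_BYTE, (void *) 0);
}
```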
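Finally, here is a sketch of the pass-through shaders being described, written as old-style GLSL source embedded in C++ strings; the course code keeps them in separate .vert and .frag files and its real shaders also handle lighting, so the istex branch here is an assumption about how the toggle is used.

```cpp
// Minimal pass-through vertex shader: forward texture coordinates and color.
const char *vertexshader =
    "void main() {\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;   // just pass the texture coords on\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "    gl_FrontColor = gl_Color;\n"
    "}\n";

// Fragment shader: look up the texture, or fall back to the interpolated color.
const char *fragmentshader =
    "uniform sampler2D tex;   // the sampler bound to texture unit 0\n"
    "uniform int istex;       // toggle set per draw call from the application\n"
    "void main() {\n"
    "    if (istex != 0)\n"
    "        gl_FragColor = texture2D(tex, gl_TexCoord[0].st);  // texture lookup\n"
    "    else\n"
    "        gl_FragColor = gl_Color;   // untextured objects keep their color\n"
    "}\n";
```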