OpenGL – Build high performance graphics

By: William Lo, David Wolff, Muhammad Mobeen Movania, Raymond Chun Hing Lo

Overview of this book

OpenGL is a fully functional, cross-platform API widely adopted across the industry for 2D and 3D graphics development. It is mainly used for games and graphical applications, but is equally popular in a wide variety of other sectors. This practical course will help you gain proficiency with OpenGL and build compelling graphics for your games and applications.

OpenGL Development Cookbook – This is your go-to guide for learning graphics programming techniques and implementing 3D animations with OpenGL. This straight-talking Cookbook is perfect for intermediate C++ programmers who want to exploit the full potential of OpenGL, and is full of practical techniques for implementing amazing computer graphics and visualizations.

OpenGL 4.0 Shading Language Cookbook, Second Edition – With version 4, the language has been further refined to provide programmers with greater power and flexibility, with new stages such as tessellation and compute. This Cookbook is a practical guide that takes you from the fundamentals of programming with modern GLSL and OpenGL through to advanced techniques.

OpenGL Data Visualization Cookbook – This easy-to-follow, comprehensive Cookbook shows readers how to create a variety of real-time, interactive data visualization tools. Each topic is explained in a step-by-step format. A range of hot topics is included, such as stereoscopic 3D rendering and data visualization on mobile/wearable platforms. By the end of this guide, you will be equipped with the essential skills to develop a wide range of impressive OpenGL-based applications for your unique data visualization needs.

This Learning Path combines some of the best that Packt has to offer in one complete, curated package. It includes content from the following Packt products: OpenGL Development Cookbook by Muhammad Mobeen Movania; OpenGL 4.0 Shading Language Cookbook, Second Edition by David Wolff; and OpenGL Data Visualization Cookbook by Raymond C. H. Lo and William C. Y. Lo.

In this chapter, we will cover:

  - Implementing per-vertex and per-fragment point lighting
  - Implementing per-fragment directional light
  - Implementing per-fragment point light with attenuation
  - Implementing per-fragment spot light
  - Implementing shadow mapping with FBO

To add realism to 3D graphics scenes, we use lighting. OpenGL's fixed function pipeline provided per-vertex lighting (which is deprecated in OpenGL v3.3 and above). Using shaders, we can not only replicate the per-vertex lighting of the fixed function pipeline, but also go a step further by implementing per-fragment lighting. Per-vertex lighting is also known as Gouraud shading, and per-fragment lighting is known as Phong shading. So, without further ado, let's get started.

Getting started

In this recipe, we will render a sphere surrounded by eight cubes, lit by a single point light source.

How to do it…

Let us start our recipe by following these simple steps:

  1. Set up the vertex shader that performs the lighting calculation in the view/eye space. This generates the color after the lighting calculation.
    #version 330 core
    layout(location=0) in vec3 vVertex;
    layout(location=1) in vec3 vNormal;
    uniform mat4 MVP;
    uniform mat4 MV;
    uniform mat3 N;
    uniform vec3 light_position;  //light position in object space
    uniform vec3 diffuse_color;
    uniform vec3 specular_color;
    uniform float shininess;
    smooth out vec4 color;
    const vec3 vEyeSpaceCameraPosition = vec3(0,0,0);
    void main()
    {
      vec4 vEyeSpaceLightPosition = MV*vec4(light_position,1);
      vec4 vEyeSpacePosition = MV*vec4(vVertex,1);
      vec3 vEyeSpaceNormal   = normalize(N*vNormal);
      vec3 L = normalize(vEyeSpaceLightPosition.xyz - vEyeSpacePosition.xyz);
      vec3 V = normalize(vEyeSpaceCameraPosition.xyz - vEyeSpacePosition.xyz);
      vec3 H = normalize(L+V);
      float diffuse = max(0.0, dot(vEyeSpaceNormal, L));
      //clamp the dot product before pow to avoid a negative base
      float specular = pow(max(0.0, dot(vEyeSpaceNormal, H)), shininess);
      color = diffuse*vec4(diffuse_color,1) + specular*vec4(specular_color,1);
      gl_Position = MVP*vec4(vVertex,1);
    }
  2. Set up a fragment shader which takes the shaded color from the vertex shader, interpolated by the rasterizer, and sets it as the current output color.
    #version 330 core
    layout(location=0) out vec4 vFragColor;
    smooth in vec4 color;
    void main() {
      vFragColor = color;
    }
  3. In the rendering code, set the shader and render the objects by passing their modelview/projection matrices to the shader as shader uniforms.
    shader.Use();
    glBindVertexArray(cubeVAOID);
    for(int i=0;i<8;i++) 
    {
      float theta = (float)(i/8.0f*2*M_PI);
      glm::mat4 T = glm::translate(glm::mat4(1), glm::vec3(radius*cos(theta), 0.5,radius*sin(theta)));
      glm::mat4 M = T;
      glm::mat4 MV = View*M;
      glm::mat4 MVP = Proj*MV; 
      glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP)); 
      glUniformMatrix4fv(shader("MV"), 1, GL_FALSE, glm::value_ptr(MV)); 
      glUniformMatrix3fv(shader("N"), 1, GL_FALSE, glm::value_ptr(glm::inverseTranspose(glm::mat3(MV))));
      glUniform3fv(shader("diffuse_color"),1, &(colors[i].x));
      glUniform3fv(shader("light_position"),1,&(lightPosOS.x));
      glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, 0);
    }
    glBindVertexArray(sphereVAOID);
    glm::mat4 T = glm::translate(glm::mat4(1), glm::vec3(0,1,0));
    glm::mat4 M = T;
    glm::mat4 MV = View*M;
    glm::mat4 MVP = Proj*MV;
    glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP));
    glUniformMatrix4fv(shader("MV"), 1, GL_FALSE, glm::value_ptr(MV));
    glUniformMatrix3fv(shader("N"), 1, GL_FALSE, glm::value_ptr(glm::inverseTranspose(glm::mat3(MV))));
    glUniform3f(shader("diffuse_color"), 0.9f, 0.9f, 1.0f);
    glUniform3fv(shader("light_position"),1, &(lightPosOS.x));
    glDrawElements(GL_TRIANGLES, totalSphereTriangles, GL_UNSIGNED_SHORT, 0);
    shader.UnUse();
    glBindVertexArray(0);
    grid->Render(glm::value_ptr(Proj*View));

How it works…

We can perform the lighting calculations in any coordinate space we wish, that is, object space, world space, or eye/view space. Similar to the lighting in the fixed function OpenGL pipeline, in this recipe we also do our calculations in eye space. The first step in the vertex shader is to obtain the vertex position and light position in eye space. This is done by multiplying the current vertex and light positions with the modelview (MV) matrix.

Similarly, we transform the per-vertex normals to eye space, but this time we transform them with the inverse transpose of the modelview matrix, which is stored in the normal matrix (N).

Next, we obtain the vector from the vertex position to the light position in eye space, and take the dot product of this vector with the eye space normal. This gives us the diffuse component.

We also calculate two additional vectors, the view vector (V) and the half-way vector (H) between the light vector and the view vector.

These are used for the specular component calculation in the Blinn-Phong lighting model. The specular component is then obtained using pow(dot(N,H), σ), where σ is the shininess value; the larger the shininess, the more focused the specular highlight.

The final color is then obtained by multiplying the diffuse value with the diffuse color and the specular value with the specular color, and summing the two contributions.

The fragment shader in the per-vertex lighting simply outputs the per-vertex color interpolated by the rasterizer as the current fragment color.

Alternatively, if we move the lighting calculations to the fragment shader, we get a more pleasing rendering result at the expense of increased processing overhead. Specifically, we transform the per-vertex position, light position, and normals to eye space in the vertex shader, shown as follows:
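The per-fragment vertex shader is not reproduced in this extract; the following is a minimal sketch, reusing the attribute and uniform names from the per-vertex listing above and mirroring the vertex shader shown later in the attenuation recipe (a sketch, not the book's verbatim listing):

    #version 330 core
    layout(location=0) in vec3 vVertex;
    layout(location=1) in vec3 vNormal;
    uniform mat4 MVP;
    uniform mat4 MV;
    uniform mat3 N;
    smooth out vec3 vEyeSpaceNormal;
    smooth out vec3 vEyeSpacePosition;
    void main()
    {
      //forward eye space position and normal; the lighting itself
      //now happens per fragment
      vEyeSpacePosition = (MV*vec4(vVertex,1)).xyz;
      vEyeSpaceNormal   = N*vNormal;
      gl_Position = MVP*vec4(vVertex,1);
    }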

In the fragment shader, the rest of the calculation, including the diffuse and specular component contributions, is carried out.

We will now dissect the per-fragment lighting fragment shader (a sketch of the complete shader follows these paragraphs). We first calculate the light position in eye space. Then we calculate the vector from the vertex to the light in eye space. We also calculate the view vector (V) and the half-way vector (H).

Next, the diffuse component is calculated using the dot product with the eye space normal.

The specular component is calculated as in the per-vertex case.

Finally, the combined color is obtained by summing the diffuse and specular contributions. The diffuse contribution is obtained by multiplying the diffuse color with the diffuse component and the specular contribution is obtained by multiplying the specular component with the specular color.
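Putting these steps together, a per-fragment Blinn-Phong fragment shader might look like the following sketch, assembled from the per-vertex listing above (a sketch under those assumptions, not the book's verbatim listing):

    #version 330 core
    layout(location=0) out vec4 vFragColor;
    uniform mat4 MV;
    uniform vec3 light_position;  //light position in object space
    uniform vec3 diffuse_color;
    uniform vec3 specular_color;
    uniform float shininess;
    smooth in vec3 vEyeSpaceNormal;
    smooth in vec3 vEyeSpacePosition;
    const vec3 vEyeSpaceCameraPosition = vec3(0,0,0);
    void main()
    {
      vec3 vEyeSpaceLightPosition = (MV*vec4(light_position,1)).xyz;
      //renormalize the normal after interpolation
      vec3 Nn = normalize(vEyeSpaceNormal);
      vec3 L = normalize(vEyeSpaceLightPosition - vEyeSpacePosition);
      vec3 V = normalize(vEyeSpaceCameraPosition - vEyeSpacePosition);
      vec3 H = normalize(L+V);
      float diffuse  = max(0.0, dot(Nn, L));
      float specular = pow(max(0.0, dot(Nn, H)), shininess);
      vFragColor = diffuse*vec4(diffuse_color,1) + specular*vec4(specular_color,1);
    }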

There's more…

The demo application for this recipe renders a sphere with eight cubes moving in and out, as shown in the following screenshot. The figure shows the result of per-vertex lighting. Note the ridge lines clearly visible on the middle sphere, which reveal the vertices where the lighting calculations are carried out. Also note that the specular highlight is predominantly visible at vertex positions only.

Now, let us see the result of the same demo application implementing per-fragment lighting:

Note how the per-fragment lighting gives a smoother result compared to the per-vertex lighting. In addition, the specular component is clearly visible.

See also

Learning Modern 3D Graphics Programming, Section III, Jason L. McKesson

Implementing per-fragment directional light

In this recipe, we will implement a directional light. The only difference between a point light and a directional light is that a directional light source has no position, only a direction, as shown in the following figure.

The figure compares directional and point light sources. For a point light source (left-hand side image), the light vector at each vertex is variable, depending on the position of the vertex relative to the point light source. For a directional light source (right-hand side image), all of the light vectors at the vertices are the same and they all point in the direction of the light.

Getting started

We will build on the previous recipe, Implementing per-vertex and per-fragment point lighting.
How to do it…

Let us start the recipe by following these simple steps:

  1. Calculate the light direction in eye space and pass it to the shader as a uniform (see the upload sketch after these steps). Note that the last component is 0 since we now have a light direction vector.
    lightDirectionES = glm::vec3(MV*glm::vec4(lightDirectionOS,0));
  2. In the vertex shader, output the eye space normal.
    #version 330 core
    layout(location=0) in vec3 vVertex;
    layout(location=1) in vec3 vNormal;
    uniform mat4 MVP;
    uniform mat3 N;
    smooth out vec3 vEyeSpaceNormal;
    void main()
    {
      vEyeSpaceNormal = N*vNormal;
      gl_Position = MVP*vec4(vVertex,1);
    }
  3. In the fragment shader, compute the diffuse component from the dot product between the eye space light direction and the eye space normal, and multiply it with the diffuse color to get the fragment color. Note that here, the light vector is independent of the eye space vertex position.
    #version 330 core
    layout(location=0) out vec4 vFragColor;
    uniform vec3 light_direction;
    uniform vec3 diffuse_color;
    smooth in vec3 vEyeSpaceNormal;
    void main() {
      vec3 L = normalize(light_direction);
      //renormalize the interpolated normal
      float diffuse = max(0.0, dot(normalize(vEyeSpaceNormal), L));
      vFragColor = diffuse*vec4(diffuse_color,1);
    }
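The eye space direction from step 1 still has to reach the light_direction uniform; the following is a minimal upload sketch, assuming the same shader wrapper used in the earlier recipes of this chapter:

    // recompute and upload the eye space light direction each frame
    lightDirectionES = glm::vec3(MV*glm::vec4(lightDirectionOS,0));
    glUniform3fv(shader("light_direction"), 1, glm::value_ptr(lightDirectionES));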
How it works…

The only difference from the point lighting recipe is that the light vector is now constant for all fragments; it does not depend on the eye space vertex position, so the diffuse component is simply the dot product of the eye space normal with the eye space light direction.

There's more…

The demo application implementing this recipe shows a sphere and a cube object. In this demo, the direction of the light is shown using a line segment at the origin. The direction of the light can be changed using the right mouse button. The output from this demo application is shown in the following screenshot:

See also

The Implementing per-vertex and per-fragment point lighting recipe
Learning Modern 3D Graphics Programming, Chapter 9, Lights On, Jason L. McKesson

The previous recipe handled a directional light source, but without attenuation. This recipe presents the changes required to enable per-fragment point lighting with attenuation. We start by implementing per-fragment point lighting, as in the Implementing per-vertex and per-fragment point lighting recipe.

Getting started

The code for

How to do it…

Implementing per-fragment point light is demonstrated by following these steps:

  1. From the vertex shader, output the eye space vertex position and normal.
    #version 330 core
    layout(location=0) in vec3 vVertex;
    layout(location=1) in vec3 vNormal;
    uniform mat4 MVP;
    uniform mat4 MV;
    uniform mat3 N;
    smooth out vec3 vEyeSpaceNormal;
    smooth out vec3 vEyeSpacePosition;
    
    void main() {
        vEyeSpacePosition = (MV*vec4(vVertex,1)).xyz;
        vEyeSpaceNormal   = N*vNormal;
        gl_Position = MVP*vec4(vVertex,1);
    }
  2. In the fragment shader, calculate the light position in eye space, and then calculate the vector from the eye space vertex position to the eye space light position. Store the light distance before normalizing the light vector.
    #version 330 core
    layout(location=0) out vec4 vFragColor;
    uniform vec3 light_position;  //light position in object space
    uniform vec3 diffuse_color;
    uniform mat4 MV;
    smooth in vec3 vEyeSpaceNormal;
    smooth in vec3 vEyeSpacePosition;
    const float k0 = 1.0;  //constant attenuation
    const float k1 = 0.0;  //linear attenuation
    const float k2 = 0.0;  //quadratic attenuation
    
    void main() {
      vec3 vEyeSpaceLightPosition = (MV*vec4(light_position,1)).xyz;
      vec3 L = (vEyeSpaceLightPosition-vEyeSpacePosition);
      float d = length(L);
      L = normalize(L);
      float diffuse = max(0.0, dot(normalize(vEyeSpaceNormal), L));
      float attenuationAmount = 1.0/(k0 + (k1*d) + (k2*d*d));
      diffuse *= attenuationAmount;
      vFragColor = diffuse*vec4(diffuse_color,1);
    }
  3. Apply attenuation, based on the distance from the light source, to the diffuse component.
    float attenuationAmount = 1.0/(k0 + (k1*d) + (k2*d*d));
    diffuse *= attenuationAmount;
  4. Multiply the diffuse component with the diffuse color and set the result as the fragment color.
    vFragColor = diffuse*vec4(diffuse_color,1);
How it works…

The recipe calculates the distance d between the fragment and the light in eye space and uses it to attenuate the diffuse component. The attenuation factor is 1/(k0 + k1*d + k2*d*d), where k0, k1, and k2 are the constant, linear, and quadratic attenuation coefficients. For example, with k0=1, k1=0.1, and k2=0.01, a fragment at distance d=5 receives 1/(1+0.5+0.25) ≈ 0.57 of the unattenuated diffuse light.
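The listing hardcodes k0, k1, and k2 as constants in the shader. A possible variation is to declare them as uniforms so the falloff can be tuned at runtime; in the following sketch, the uniform names are assumptions for illustration, not from the book:

    // hypothetical: declare 'uniform float k0, k1, k2;' in the fragment
    // shader instead of const, then tune the falloff from the application
    glUniform1f(shader("k0"), 1.0f);   // constant term
    glUniform1f(shader("k1"), 0.1f);   // linear falloff
    glUniform1f(shader("k2"), 0.01f);  // quadratic falloff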

There's more…

The output from the demo application implementing this recipe is given in the following screenshot. In this recipe, we render a cube and a sphere. The position of the light is shown using a crosshair on the screen. The camera position can be changed using the left mouse button, the light position by using the right mouse button, and the light distance by using the mouse wheel.

See also

Real-Time Rendering, Third Edition, Tomas Akenine-Möller, Eric Haines, Naty Hoffman, A K Peters/CRC Press
Learning Modern 3D Graphics Programming, Chapter 10, Plane Lights, Jason L. McKesson

Implementing per-fragment spot light

We will now implement a per-fragment spot light. A spot light is a special point light that emits light in a directional cone. The size of this cone is determined by the spot cutoff, which is given as an angle, as shown in the following figure. In addition, the sharpness of the spot is controlled by the spot exponent parameter: a higher exponent value gives a sharper falloff, and vice versa.

Getting started

The code for this recipe is
How to do it…

Let us start this recipe by following these simple steps:

  1. From the light's object space position and the spot light target's position, calculate the spot light direction vector in eye space.
    spotDirectionES = glm::normalize(glm::vec3(MV*glm::vec4(spotPositionOS-lightPosOS,0)));
  2. In the fragment shader, calculate the diffuse component as for the point light. In addition, calculate the spot effect by taking the dot product between the light direction and the spot direction vector.
    vec3 L = (light_position.xyz-vEyeSpacePosition);
    float d = length(L);
    L = normalize(L);
    vec3 D = normalize(spot_direction);
    vec3 V = -L;
    float diffuse = 1;
    float spotEffect = dot(V,D);
  3. If this value is greater than the spot cutoff (that is, the fragment lies inside the spot cone), apply the spot exponent and then shade the fragment with the attenuated diffuse color, as assembled in the sketch following these steps.
    if(spotEffect > spot_cutoff) {
      spotEffect = pow(spotEffect, spot_exponent);
      diffuse = max(0.0, dot(vEyeSpaceNormal, L));
      float attenuationAmount = spotEffect/(k0 + (k1*d) + (k2*d*d));
      diffuse *= attenuationAmount;
      vFragColor = diffuse*vec4(diffuse_color,1);
    }
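For reference, the snippets above assemble into a complete fragment shader along the following lines; the uniform declarations are a sketch inferred from the snippets, not the book's verbatim listing:

    #version 330 core
    layout(location=0) out vec4 vFragColor;
    uniform vec4 light_position;   //light position in eye space
    uniform vec3 spot_direction;   //spot direction in eye space
    uniform vec3 diffuse_color;
    uniform float spot_cutoff;     //cosine of the cutoff angle
    uniform float spot_exponent;
    smooth in vec3 vEyeSpaceNormal;
    smooth in vec3 vEyeSpacePosition;
    const float k0 = 1.0;  //constant attenuation
    const float k1 = 0.0;  //linear attenuation
    const float k2 = 0.0;  //quadratic attenuation
    void main() {
      vFragColor = vec4(0);  //fragments outside the cone stay dark
      vec3 L = (light_position.xyz-vEyeSpacePosition);
      float d = length(L);
      L = normalize(L);
      vec3 D = normalize(spot_direction);
      vec3 V = -L;
      float spotEffect = dot(V,D);
      if(spotEffect > spot_cutoff) {
        spotEffect = pow(spotEffect, spot_exponent);
        float diffuse = max(0.0, dot(normalize(vEyeSpaceNormal), L));
        float attenuationAmount = spotEffect/(k0 + (k1*d) + (k2*d*d));
        diffuse *= attenuationAmount;
        vFragColor = diffuse*vec4(diffuse_color,1);
      }
    }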
How it works…

The spot light is a point light whose illumination is restricted to a cone. The dot product between the normalized spot direction and the vector from the light towards the fragment gives the cosine of the angle between them. Comparing this value against the spot cutoff determines whether the fragment lies inside the cone, and raising it to the spot exponent controls how sharply the intensity falls off towards the edge of the cone.
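Note that the comparison is done in cosine space, so the application should upload the cosine of the desired cutoff angle rather than the angle itself. The following is a minimal sketch, assuming the uniform names from the shader above and an arbitrary 45-degree cone:

    // hypothetical setup values for the spot light uniforms
    glUniform1f(shader("spot_cutoff"), cosf(glm::radians(45.0f)));
    glUniform1f(shader("spot_exponent"), 2.0f);
    glUniform3fv(shader("spot_direction"), 1, glm::value_ptr(spotDirectionES));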

There's more…

The demo application implementing this recipe renders the same scene as in the point light demo. We can change the spot light direction using the right mouse button. The output result is shown in the following figure:

See also

Real-Time Rendering, Third Edition, Tomas Akenine-Möller, Eric Haines, Naty Hoffman, A K Peters/CRC Press
Spot Light in GLSL tutorial at Ozone3D

Shadows give important cues about the relative positioning of graphical objects. There are myriad shadow generation techniques, including shadow volumes, shadow maps, cascaded shadow maps, and so on. An excellent reference on several shadow generation techniques is given in the See also section. We will now see how to carry out basic shadow mapping using an FBO.

Getting started

For this recipe, we

How to do it…

Let us start with this recipe by following these simple steps:

  1. Create an OpenGL texture object which will be our shadow map texture. Make sure to set the wrap mode to GL_CLAMP_TO_BORDER, set the border color to {1,0,0,0}, set the texture comparison mode to GL_COMPARE_REF_TO_TEXTURE, and set the compare function to GL_LEQUAL. Set the texture's internal format to GL_DEPTH_COMPONENT24.
    glGenTextures(1, &shadowMapTexID);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, shadowMapTexID);
    GLfloat border[4]={1,0,0,0};
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_COMPARE_MODE,GL_COMPARE_REF_TO_TEXTURE);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_COMPARE_FUNC,GL_LEQUAL);
    glTexParameterfv(GL_TEXTURE_2D,GL_TEXTURE_BORDER_COLOR,border);
    glTexImage2D(GL_TEXTURE_2D,0,GL_DEPTH_COMPONENT24,SHADOWMAP_WIDTH,SHADOWMAP_HEIGHT,0,GL_DEPTH_COMPONENT,GL_UNSIGNED_BYTE,NULL);
  2. Set up an FBO and use the shadow map texture as its single depth attachment. This will store the scene's depth from the point of view of the light.
    glGenFramebuffers(1,&fboID);
    glBindFramebuffer(GL_FRAMEBUFFER,fboID);
    glFramebufferTexture2D(GL_FRAMEBUFFER,GL_DEPTH_ATTACHMENT,GL_TEXTURE_2D,shadowMapTexID,0);
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if(status == GL_FRAMEBUFFER_COMPLETE) {
      cout<<"FBO setup successful."<<endl;
    } else {
      cout<<"Problem in FBO setup."<<endl;
    }
    glBindFramebuffer(GL_FRAMEBUFFER,0);
  3. Using the position and the direction of the light, set up the shadow matrix (S) by combining the light modelview matrix (MV_L), projection matrix (P_L), and bias matrix (B). To reduce runtime calculation, we store the combined bias and projection matrix (BP) at initialization.
    MV_L = glm::lookAt(lightPosOS,glm::vec3(0,0,0), glm::vec3(0,1,0));
    P_L  = glm::perspective(50.0f,1.0f,1.0f, 25.0f);
    B    = glm::scale(glm::translate(glm::mat4(1), glm::vec3(0.5,0.5,0.5)),glm::vec3(0.5,0.5,0.5));
    BP   = B*P_L;
    S    = BP*MV_L;
  4. Bind the FBO and render the scene from the point of view of the light (a sketch of this pass follows these steps). Make sure to enable front-face culling (glEnable(GL_CULL_FACE) and glCullFace(GL_FRONT)) so that the back-face depth values are rendered; otherwise, our objects will suffer from shadow acne.
  5. Unbind the FBO, restore the default viewport, and render the scene normally from the point of view of the camera.
    glBindFramebuffer(GL_FRAMEBUFFER,0);
    glViewport(0,0,WIDTH, HEIGHT);
    DrawScene(MV, P, 0);
  6. In the vertex shader, multiply the world space vertex position (M*vec4(vVertex,1)) with the shadow matrix (S) to obtain the shadow coordinates. These will be used to look up depth values from the shadowmap texture in the fragment shader.
    #version 330 core
    layout(location=0) in vec3 vVertex;
    layout(location=1) in vec3 vNormal;
    
    uniform mat4 MVP;   //modelview projection matrix
    uniform mat4 MV;    //modelview matrix
    uniform mat4 M;     //model matrix
    uniform mat3 N;     //normal matrix
    uniform mat4 S;     //shadow matrix
    smooth out vec3 vEyeSpaceNormal;
    smooth out vec3 vEyeSpacePosition;
    smooth out vec4 vShadowCoords;
    void main()
    {
      vEyeSpacePosition = (MV*vec4(vVertex,1)).xyz;
      vEyeSpaceNormal   = N*vNormal;
      vShadowCoords     = S*(M*vec4(vVertex,1));
      gl_Position       = MVP*vec4(vVertex,1);
    }
  7. In the fragment shader, use the shadow coordinates to look up the depth value in the shadow map sampler, which is of the sampler2DShadow type. This sampler can be used with the textureProj function to return a comparison outcome. We then use the comparison result to darken the diffuse component, simulating shadows.
    #version 330 core
    layout(location=0) out vec4 vFragColor;
    uniform sampler2DShadow shadowMap;
    uniform vec3 light_position;  //light position in eye space
    uniform vec3 diffuse_color;
    smooth in vec3 vEyeSpaceNormal;
    smooth in vec3 vEyeSpacePosition;
    smooth in vec4 vShadowCoords;
    const float k0 = 1.0;  //constant attenuation
    const float k1 = 0.0;  //linear attenuation
    const float k2 = 0.0;  //quadratic attenuation
    uniform bool bIsLightPass; //no shadows in light pass
    void main() {
      if(bIsLightPass)
        return;
      vec3 L = (light_position.xyz-vEyeSpacePosition);
      float d = length(L);
      L = normalize(L);
      float attenuationAmount = 1.0/(k0 + (k1*d) + (k2*d*d));
      float diffuse = max(0.0, dot(vEyeSpaceNormal, L)) * attenuationAmount;
      if(vShadowCoords.w>1) {
        float shadow = textureProj(shadowMap, vShadowCoords);
        diffuse = mix(diffuse, diffuse*shadow, 0.5);
      }
      vFragColor = diffuse*vec4(diffuse_color, 1);
    }

How it works…

The shadow mapping algorithm works in two passes. In the first pass, the scene is rendered from the point of view of the light, and the depth buffer is stored into a texture called the shadowmap. We use a single FBO with a depth attachment for this purpose. Apart from the conventional minification/magnification texture filtering, we set the texture wrapping mode to GL_CLAMP_TO_BORDER, which ensures that the values are clamped to the specified border color. Had we set this as GL_CLAMP or GL_CLAMP_TO_EDGE, the border pixels forming the shadow map would produce visible artefacts.

The shadowmap texture has some additional parameters. The first is the GL_TEXTURE_COMPARE_MODE parameter, which is set as the GL_COMPARE_REF_TO_TEXTURE value. This enables the texture to be used for depth comparison in the shader. Next, we specify the GL_TEXTURE_COMPARE_FUNC parameter, which is set as GL_LEQUAL. This compares the currently interpolated texture coordinate value (r) with the depth texture's sample value (D). It returns 1 if r<=D, otherwise it returns 0. This means that if the depth of the current sample is less than or equal to the depth from the shadowmap texture, the sample is not in shadow; otherwise, it is in shadow. The textureProj GLSL shader function performs this comparison for us and returns 0 or 1 based on whether the point is in shadow or not. These are the texture parameters required for the shadowmap texture.
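For reference, the parameters just described amount to the following setup. This is a minimal sketch: the shadowMapTexID and border names, the GL_NEAREST filters, and the border value of 1.0 are assumptions based on the surrounding recipe, since the full listing appears earlier in the recipe.

    glBindTexture(GL_TEXTURE_2D, shadowMapTexID);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_BORDER);
    GLfloat border[4]={1.0f,0.0f,0.0f,0.0f}; //depth outside the map compares as lit
    glTexParameterfv(GL_TEXTURE_2D,GL_TEXTURE_BORDER_COLOR,border);
    //enable hardware depth comparison for sampler2DShadow lookups
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_COMPARE_MODE,GL_COMPARE_REF_TO_TEXTURE);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_COMPARE_FUNC,GL_LEQUAL);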

To ensure that we do not have any shadow acne, we enable front-face culling (glEnable(GL_CULL_FACE) and glCullFace(GL_FRONT)) so that the back-face depth values get written to the shadowmap texture. In the second pass, the scene is rendered normally from the point of view of the camera and the shadow map is projected on the scene geometry using shaders.
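As a concrete sketch of this first pass, the sequence below wraps the light-pass draw call in front-face culling; fboID and the third DrawScene argument (the light-pass flag) are assumptions inferred from the DrawScene(MV, P, 0) call and the bIsLightPass uniform shown earlier.

    glBindFramebuffer(GL_FRAMEBUFFER, fboID);
    glViewport(0,0,SHADOWMAP_WIDTH, SHADOWMAP_HEIGHT);
    glClear(GL_DEPTH_BUFFER_BIT);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);    //store back-face depths to avoid shadow acne
    DrawScene(MV_L, P_L, 1); //render from the light's point of view
    glCullFace(GL_BACK);     //restore culling for the normal pass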

To render the scene from the point of view of the light, the modelview matrix of the light (MV_L), the projection matrix (P_L), and the bias matrix (B) are calculated. After multiplying with the projection matrix, the coordinates are in clip space (that is, they range from [-1,-1,-1] to [1,1,1]). The bias matrix rescales this range to bring the coordinates into the [0,0,0] to [1,1,1] range so that the shadowmap lookup can be carried out.
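Concretely, the bias matrix produced by the glm::translate/glm::scale calls shown earlier is

$$B = \begin{pmatrix} 0.5 & 0 & 0 & 0.5 \\ 0 & 0.5 & 0 & 0.5 \\ 0 & 0 & 0.5 & 0.5 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

which maps each clip-space coordinate x in [-1,1] to 0.5x + 0.5 in [0,1].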

If we have the object's vertex position in the object space given as Vobj, the shadow coordinates (UVproj) for the lookup in the shadow map can be given by multiplying the shadow matrix (S) with the world space position of the object (M*Vobj). The whole series of transformations is given as follows:

$$UV_{proj} = B \cdot P_L \cdot MV_L \cdot M \cdot V_{obj} = S \cdot (M \cdot V_{obj})$$

Here, B is the bias matrix, P_L is the projection matrix of the light, and MV_L is the modelview matrix of the light. For efficiency, we precompute the product of the bias matrix and the light's projection matrix, since it is unchanged for the lifetime of the application. Based on the user input, the light's modelview matrix is modified, and then the shadow matrix is recalculated and passed to the shader.

In the vertex shader, the shadowmap texture coordinates are obtained by multiplying the world space vertex position (M*Vobj) with the shadow matrix (S). In the fragment shader, the shadow map is looked up using the projected texture coordinate to find if the current fragment is in shadow. Before the texture lookup, we check the value of the w coordinate of the projected texture coordinate. We only do our calculations if the w coordinate is greater than 1. This ensures that we only accept the forward projection and reject the back projection. Try removing this condition to see what we mean.

The shadow map lookup is carried out by the textureProj GLSL function, which returns 1 or 0. This result is multiplied with the shading computation. Since, in the real world, we never have coal black shadows, we soften the result by combining the shadow outcome with the shading computation using the mix GLSL function.

There's more…

The demo application for this recipe shows a plane, a cube, and a sphere, lit by a point light source that can be rotated using the right mouse button. The distance of the light source can be altered using the mouse wheel.

This recipe detailed the shadow mapping technique for a single light source. With each additional light source, the processing, as well as storage requirements, increase.

See also

Real-Time Shadows, Elmar Eisemann, Michael Schwarz, Ulf Assarsson, Michael Wimmer, A K Peters/CRC Press
OpenGL 4.0 Shading Language Cookbook, David Wolff, Packt Publishing

The shadow mapping algorithm, though simple to implement, suffers from aliasing artefacts due to the limited shadowmap resolution. In addition, the shadows produced by this approach are hard. Both problems can be minimized either by increasing the shadowmap resolution or by taking more samples. The latter approach is called percentage closer filtering (PCF): instead of a single lookup, we sample an n×n neighborhood of the shadowmap and average the results, so the percentage of samples that pass the depth comparison estimates how much the fragment is in shadow.

How to do it…

Let us see how to extend the basic shadow mapping with PCF.

  1. Change the shadowmap texture minification/magnification filtering modes to GL_LINEAR. Here, we exploit the texture filtering capabilities of the GPU to reduce aliasing artefacts during sampling of the shadow map. Even with the linear filtering support, we have to take additional samples to reduce the artefacts.
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
  2. In the fragment shader, instead of the single texture lookup of the shadow mapping recipe, we take a number of samples. GLSL provides a convenient function, textureProjOffset, which performs the projective lookup at a given integer texel offset. For this recipe, we look at a 3×3 neighborhood around the current shadowmap point, using an offset of 2 texels, which helps to reduce sampling artefacts.
    if(vShadowCoords.w>1) {
      float sum = 0;
      sum += textureProjOffset(shadowMap,vShadowCoords,ivec2(-2,-2));
      sum += textureProjOffset(shadowMap,vShadowCoords,ivec2(-2, 0));
      sum += textureProjOffset(shadowMap,vShadowCoords,ivec2(-2, 2));
      sum += textureProjOffset(shadowMap,vShadowCoords,ivec2( 0,-2));
      sum += textureProjOffset(shadowMap,vShadowCoords,ivec2( 0, 0));
      sum += textureProjOffset(shadowMap,vShadowCoords,ivec2( 0, 2));
      sum += textureProjOffset(shadowMap,vShadowCoords,ivec2( 2,-2));
      sum += textureProjOffset(shadowMap,vShadowCoords,ivec2( 2, 0));
      sum += textureProjOffset(shadowMap,vShadowCoords,ivec2( 2, 2));
      float shadow = sum/9.0;
      diffuse = mix(diffuse, diffuse*shadow, 0.5);
    }

How it works…

In order to implement PCF, the first change we need is to set the texture filtering mode to linear filtering. This enables the GPU to bilinearly interpolate the shadow comparison result, which gives smoother shadow edges since the hardware performs PCF filtering underneath. However, this alone is not enough for our purpose, so we take additional samples to improve the result.

Fortunately, we can use a convenient function, textureProjOffset, which accepts an offset that is added to the given shadow map texture coordinate. Note that the offset given to this function must be a constant literal. Thus, we cannot use a loop variable for dynamic sampling of the shadow map sampler. We, therefore, have to unroll the loop to sample the neighborhood.

We use an offset of 2 units because we want to sample at a distance of 1.5 texels; since the textureProjOffset function does not accept floating point offsets, we round to the nearest integer. The offset is then modified to move to the next sample point until the entire 3×3 neighborhood is sampled. We then average the sampling results over the neighborhood. The averaged result is multiplied with the lighting contribution, producing shadows where the current sample happens to be in an occluded region.

Even after taking these additional samples, some sampling artefacts remain. These can be reduced by shifting the sampling points randomly. To achieve this, we first implement a pseudo-random function in GLSL as follows:
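The book's exact listing is not reproduced in this extract; a commonly used GLSL hash that serves the same purpose is shown below (the function name and the seed constants are illustrative, not the book's):

    float random(vec4 seed) {
      float dot_product = dot(seed, vec4(12.9898, 78.233, 45.164, 94.673));
      return fract(sin(dot_product) * 43758.5453);
    }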

Then, the sampling for PCF uses the noise function to shift the shadow offset, as shown in the following shader code:
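Again, the original listing is not reproduced here. The following sketch shows only the random-sampling branch, assuming the random() hash above; the 16-sample count and the [-2,2] texel jitter range are illustrative choices:

    if(vShadowCoords.w>1) {
      float sum = 0;
      vec2 texelSize = 1.0/vec2(textureSize(shadowMap,0));
      for(int i=0;i<16;i++) {
        //pseudo-random jitter, different for every fragment and sample
        vec2 jitter = vec2(random(vec4(gl_FragCoord.xyy,i)), random(vec4(gl_FragCoord.yxy,i)))*4.0-2.0;
        //scale by w so the offset survives textureProj's perspective divide
        sum += textureProj(shadowMap, vShadowCoords + vec4(jitter*texelSize*vShadowCoords.w,0,0));
      }
      float shadow = sum/16.0;
      diffuse = mix(diffuse, diffuse*shadow, 0.5);
    }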

In the given code, three macros are defined, STRATIFIED_3x3 (for 3x3 stratified sampling), STRATIFIED_5x5 (for 5x5 stratified sampling), and RANDOM_SAMPLING (for 4x4 random sampling).

There's more…

Making these changes, we get a much smoother result. If we take a bigger neighborhood, we get an even better result; however, the computational requirements also increase.

See also

GPU Gems, Chapter 11, Shadow Map Antialiasing, Michael Bunnell, Fabio Pellacini (available online at NVIDIA's developer site)

In this recipe, we will cover a technique which gives a much better result, has better performance, and at the same time is easier to calculate. The technique is called variance shadow mapping. In conventional PCF-filtered shadow mapping, we compare the depth value of the current fragment to the mean depth value in the shadow map, and based on the outcome, we shadow the fragment.

In the case of variance shadow mapping, the mean depth value (also called the first moment) and the mean squared depth value (also called the second moment) are calculated and stored. Then, rather than using the mean depth directly, the variance is used; the variance calculation requires both the mean depth and the mean of the squared depth. From the variance, an upper bound on the probability that the given sample is occluded is estimated, and this bound is used to decide how much the current sample is shadowed.
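In equation form, this is the standard Chebyshev bound used by variance shadow mapping, where d is the occluder depth distribution stored in the shadowmap and t is the current fragment's depth from the light:

$$\mu = E[d], \qquad \sigma^2 = E[d^2] - E[d]^2, \qquad P(d \ge t) \le p_{max}(t) = \frac{\sigma^2}{\sigma^2 + (t - \mu)^2}$$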

How to do it…

Let us start our recipe by following these simple steps:

  1. Set up the shadowmap texture as in the shadow mapping recipe, but this time remove the depth compare mode (that is, the glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_COMPARE_MODE,GL_COMPARE_REF_TO_TEXTURE) and glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_COMPARE_FUNC,GL_LEQUAL) calls), set the internal format of the texture to GL_RGBA32F, and enable mipmap generation for this texture. The mipmaps provide filtered textures across different scales and produce better, alias-free shadows. We request five mipmap levels (by specifying the max level as 4).
    glGenTextures(1, &shadowMapTexID);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, shadowMapTexID);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_BORDER);
    glTexParameterfv(GL_TEXTURE_2D,GL_TEXTURE_BORDER_COLOR,border);
    glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA32F,SHADOWMAP_WIDTH,SHADOWMAP_HEIGHT,0,GL_RGBA,GL_FLOAT,NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 4);
    glGenerateMipmap(GL_TEXTURE_2D);
    
  2. Set up two FBOs: one for shadowmap generation and another for shadowmap filtering. The shadowmap FBO has a renderbuffer attached to it for depth testing. The filtering FBO does not have a renderbuffer attached to it but it has two texture attachments.
    glGenFramebuffers(1,&fboID);
    glGenRenderbuffers(1, &rboID);
    glBindFramebuffer(GL_FRAMEBUFFER,fboID);
    glBindRenderbuffer(GL_RENDERBUFFER, rboID);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32, SHADOWMAP_WIDTH, SHADOWMAP_HEIGHT);
    glFramebufferTexture2D(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,GL_TEXTURE_2D,shadowMapTexID,0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboID);
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if(status == GL_FRAMEBUFFER_COMPLETE) {
      cout<<"FBO setup successful."<<endl;
    } else {
      cout<<"Problem in FBO setup."<<endl;
    }
    glBindFramebuffer(GL_FRAMEBUFFER,0);
    
    glGenFramebuffers(1,&filterFBOID);
    glBindFramebuffer(GL_FRAMEBUFFER,filterFBOID);
    glGenTextures(2, blurTexID);
    for(int i=0;i<2;i++) {
      glActiveTexture(GL_TEXTURE1+i);
      glBindTexture(GL_TEXTURE_2D, blurTexID[i]);
      glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_BORDER);
      glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_BORDER);
      glTexParameterfv(GL_TEXTURE_2D,GL_TEXTURE_BORDER_COLOR,border);
      glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA32F,SHADOWMAP_WIDTH,SHADOWMAP_HEIGHT,0,GL_RGBA,GL_FLOAT,NULL);
      glFramebufferTexture2D(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0+i, GL_TEXTURE_2D,blurTexID[i],0);
    }
    status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if(status == GL_FRAMEBUFFER_COMPLETE) {
      cout<<"Filtering FBO setup successful."<<endl;
    } else {
      cout<<"Problem in Filtering FBO setup."<<endl;
    }
    glBindFramebuffer(GL_FRAMEBUFFER,0);
  3. Bind the shadowmap FBO, set the viewport to the size of the shadowmap texture, and render the scene from the point of view of the light, as in the Implementing shadow mapping with FBO recipe. In this pass, instead of storing the depth as in the shadow mapping recipe, we use a custom fragment shader (Chapter4/VarianceShadowmapping/shaders/firststep.frag) to output the depth and depth*depth values in the red and green channels of the fragment output color.
    glBindFramebuffer(GL_FRAMEBUFFER,fboID);   
    glViewport(0,0,SHADOWMAP_WIDTH, SHADOWMAP_HEIGHT);
    glDrawBuffer(GL_COLOR_ATTACHMENT0);
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
    DrawSceneFirstPass(MV_L, P_L);

    The shader code is as follows:

    #version 330 core
    layout(location=0) out vec4 vFragColor;
    smooth in vec4 clipSpacePos;
    void main()
    {
      vec3 pos = clipSpacePos.xyz/clipSpacePos.w; //-1 to 1
      pos.z += 0.001; //add some offset to remove the shadow acne
      float depth = (pos.z +1)*0.5; // 0 to 1
      float moment1 = depth;
      float moment2 = depth * depth; 
      vFragColor = vec4(moment1,moment2,0,0);
    }
  4. Bind the filtering FBO to filter the shadowmap texture generated in the first pass using a separable Gaussian smoothing filter, which is more efficient than a full 2D convolution of the same size. We first attach the vertical smoothing fragment shader (Chapter4/VarianceShadowmapping/shaders/GaussV.frag) to filter the shadowmap texture and then the horizontal smoothing fragment shader (Chapter4/VarianceShadowmapping/shaders/GaussH.frag) to smooth the output from the vertical Gaussian smoothing filter.
    glBindFramebuffer(GL_FRAMEBUFFER,filterFBOID);
    glDrawBuffer(GL_COLOR_ATTACHMENT0);
    glBindVertexArray(quadVAOID);
    gaussianV_shader.Use();
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
    glDrawBuffer(GL_COLOR_ATTACHMENT1);
    gaussianH_shader.Use();
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
    glBindFramebuffer(GL_FRAMEBUFFER,0);

    The horizontal Gaussian blur shader is as follows:

    #version 330 core
    layout(location=0) out vec4 vFragColor;
    smooth in vec2 vUV;
    uniform sampler2D textureMap;
    
    const float kernel[]=float[21] (0.000272337,  0.00089296, 0.002583865, 0.00659813,  0.014869116, 0.029570767, 0.051898313, 0.080381679, 0.109868729, 0.132526984, 0.14107424,  0.132526984, 0.109868729, 0.080381679, 0.051898313, 0.029570767, 0.014869116, 0.00659813, 0.002583865, 0.00089296, 0.000272337);
    
    void main()
    {
      vec2 delta = 1.0/textureSize(textureMap,0);
      vec4 color = vec4(0);
      int  index = 20;
    
      for(int i=-10;i<=10;i++) {
        color += kernel[index--]*texture(textureMap, vUV + (vec2(i*delta.x,0)));
      }
    
      vFragColor =  vec4(color.xy,0,0);
    }

    In the vertical Gaussian shader, the loop statement is modified, whereas the rest of the shader is the same.

    color += kernel[index--]*texture(textureMap, vUV + (vec2(0,i*delta.y)));
  5. Unbind the FBO, reset the default viewport, and then render the scene normally, as in the shadow mapping recipe.
    glDrawBuffer(GL_BACK_LEFT);
    glViewport(0,0,WIDTH, HEIGHT);
    DrawScene(MV, P);

How it works…

The variance shadowmap technique tries to represent the depth data such that it can be filtered linearly. Instead of storing only the depth, it stores the depth and the depth*depth values in a floating point texture, which is then filtered to reconstruct the first and second moments of the depth distribution. Using the moments, it estimates the variance in the filtering neighborhood. The variance makes it possible to bound, using Chebyshev's inequality, the probability that a fragment at a specific depth is occluded. For more mathematical details, we refer the reader to the See also section of this recipe.

From the implementation point of view, similar to the shadow mapping recipe, the method works in two passes. In the first pass, we render the scene from the point of view of light. Instead of storing the depth, we store the depth and the depth*depth values in a floating point texture using the custom fragment shader (see Chapter4/VarianceShadowmapping/shaders/firststep.frag).

The vertex shader outputs the clip-space position to the fragment shader, from which the fragment depth value is calculated. To reduce self-shadowing, a small bias is added to the z value.
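That vertex shader is not reproduced in this extract; a minimal sketch consistent with the clipSpacePos input used by firststep.frag might look as follows (treating MVP here as the light's combined modelview-projection matrix, which is an assumption):

    #version 330 core
    layout(location=0) in vec3 vVertex;
    uniform mat4 MVP;   //light's modelview-projection matrix (assumed)
    smooth out vec4 clipSpacePos;
    void main()
    {
      clipSpacePos = MVP*vec4(vVertex,1);
      gl_Position  = clipSpacePos;
    }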

After the first pass, the shadowmap texture is blurred using a separable Gaussian smoothing filter. First the vertical and then the horizontal filter is applied to the shadowmap texture by drawing it onto a full-screen quad and alternating the filter FBO's color attachment. Note that the shadowmap texture is bound to texture unit 0, whereas the textures used for filtering are bound to texture unit 1 (attached to GL_COLOR_ATTACHMENT0 on the filtering FBO) and texture unit 2 (attached to GL_COLOR_ATTACHMENT1 on the filtering FBO).

In the second pass, the scene is rendered from the point of view of the camera. The blurred shadowmap is used in the second pass as a texture to lookup the sample value (see Chapter4/VarianceShadowmapping/shaders/VarianceShadowMap.{vert, frag}). The variance shadow mapping vertex shader outputs the shadow texture coordinates, as in the shadow mapping recipe.

The variance shadow mapping fragment shader operates differently. We first make sure that the shadow coordinates are in front of the light (to prevent back projection), that is, vShadowCoords.w > 1. Next, the vShadowCoords.xyz values are divided by the homogeneous coordinate, vShadowCoords.w, to get the depth value.

The texture coordinates after homogeneous division are used to lookup the shadow map storing the two moments. The two moments are used to estimate the variance. The variance is clamped and then the occlusion probability is estimated. The diffuse component is then modulated based on the obtained occlusion probability.

To recap, here is the complete variance shadow mapping fragment shader:
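The full listing (Chapter4/VarianceShadowmapping/shaders/VarianceShadowMap.frag) is not reproduced in this extract; the sketch below implements the steps just described, assuming the same inputs as the shadow mapping fragment shader, with attenuation omitted and the minimum-variance clamp of 0.00002 chosen for illustration:

    #version 330 core
    layout(location=0) out vec4 vFragColor;
    uniform sampler2D shadowMap;  //blurred moments (depth, depth*depth)
    uniform vec3 light_position;  //light position in eye space
    uniform vec3 diffuse_color;
    smooth in vec3 vEyeSpaceNormal;
    smooth in vec3 vEyeSpacePosition;
    smooth in vec4 vShadowCoords;
    void main() {
      vec3 L = normalize(light_position-vEyeSpacePosition);
      float diffuse = max(0, dot(normalize(vEyeSpaceNormal), L));
      if(vShadowCoords.w>1) {
        vec3 uv = vShadowCoords.xyz/vShadowCoords.w; //homogeneous divide
        vec2 moments = texture(shadowMap, uv.xy).rg;
        float variance = max(moments.y - moments.x*moments.x, 0.00002);
        float d = uv.z - moments.x;
        //Chebyshev upper bound on the occlusion probability
        float p_max = variance/(variance + d*d);
        //fully lit when the fragment is nearer than the mean occluder depth
        float shadow = (uv.z <= moments.x) ? 1.0 : p_max;
        diffuse *= shadow;
      }
      vFragColor = diffuse*vec4(diffuse_color,1);
    }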

See also

  • Variance Shadow Maps, William Donnelly, Andrew Lauritzen, Proceedings of the 2006 Symposium on Interactive 3D Graphics and Games, pages 161-165
  • GPU Gems 3, Chapter 8, Summed-Area Variance Shadow Maps, Andrew Lauritzen: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch08.html
  • Layered Variance Shadow Maps, Andrew Lauritzen, Michael McCool, Proceedings of Graphics Interface 2008, pages 139-146
  • Sample Distribution Shadow Maps, Andrew Lauritzen, Marco Salvi, and Aaron Lefohn, ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D) 2011, February