OpenGL – Build high performance graphics

By: William Lo, David Wolff, Muhammad Mobeen Movania, Raymond Chun Hing Lo

Overview of this book

OpenGL is a fully functional, cross-platform API widely adopted across the industry for 2D and 3D graphics development. It is mainly used for games and applications, but is equally popular in a wide variety of other sectors. This practical course will help you gain proficiency with OpenGL and build compelling graphics for your games and applications.

OpenGL Development Cookbook – This is your go-to guide for learning graphics programming techniques and implementing 3D animations with OpenGL. This straight-talking Cookbook is perfect for intermediate C++ programmers who want to exploit the full potential of OpenGL. It is full of practical techniques for implementing amazing computer graphics and visualizations using OpenGL.

OpenGL 4.0 Shading Language Cookbook, Second Edition – With version 4, the language has been further refined to give programmers greater power and flexibility, with new stages such as tessellation and compute. OpenGL Shading Language 4 Cookbook is a practical guide that takes you from the fundamentals of programming with modern GLSL and OpenGL through to advanced techniques.

OpenGL Data Visualization Cookbook – This easy-to-follow, comprehensive Cookbook shows readers how to create a variety of real-time, interactive data visualization tools. Each topic is explained in a step-by-step format. A range of hot topics is included, such as stereoscopic 3D rendering and data visualization on mobile/wearable platforms. By the end of this guide, you will be equipped with the essential skills to develop a wide range of impressive OpenGL-based applications for your unique data visualization needs.

This Learning Path combines some of the best that Packt has to offer in one complete, curated package. It includes content from the following Packt products: OpenGL Development Cookbook by Muhammad Mobeen Movania; OpenGL 4.0 Shading Language Cookbook, Second Edition by David Wolff; and OpenGL Data Visualization Cookbook by Raymond C. H. Lo and William C. Y. Lo.


We will begin this chapter by designing a simple class to handle the camera. In a typical OpenGL application, viewing operations are carried out to place a virtual object on screen. We leave the details of the intermediate transformations to a typical graduate text on computer graphics, such as the one given in the See also section of this recipe. This recipe will focus on designing a simple and efficient camera class. We create a simple inheritance hierarchy from a base class called CAbstractCamera. We will inherit two classes from this parent class, CFreeCamera and CTargetCamera, as shown in the following figure:

Implementing a vector-based camera with FPS style input support


We first declare the constructor/destructor pair. Next, the function for setting the projection for the camera is specified. Then some functions for updating the camera matrices based on rotation values are declared. Following these, the accessors and mutators are defined.

The class declaration is concluded with the view frustum culling-specific functions. Finally, the member fields are declared. The inheriting class needs to provide the implementation of one pure virtual function—Update (to recalculate the matrices and orientation vectors). The movement of the camera is based on three orientation vectors, namely, look, up, and right.


Getting ready

The code for this recipe is in the Chapter2/src directory. The CAbstractCamera class is defined in the AbstractCamera.[h/cpp] files.

class CAbstractCamera
{
public:
  CAbstractCamera(void);
  ~CAbstractCamera(void);
  void SetupProjection(const float fovy, const float aspectRatio, const float near=0.1f, const float far=1000.0f);
  virtual void Update() = 0;
  virtual void Rotate(const float yaw, const float pitch, const float roll);
  const glm::mat4 GetViewMatrix() const;
  const glm::mat4 GetProjectionMatrix() const;
  void SetPosition(const glm::vec3& v);
  const glm::vec3 GetPosition() const;
  void SetFOV(const float fov);
  const float GetFOV() const;
  const float GetAspectRatio() const;
  void CalcFrustumPlanes();
  bool IsPointInFrustum(const glm::vec3& point);
  bool IsSphereInFrustum(const glm::vec3& center, const float radius);
  bool IsBoxInFrustum(const glm::vec3& min, const glm::vec3& max);
  void GetFrustumPlanes(glm::vec4 planes[6]);
  glm::vec3 farPts[4];
  glm::vec3 nearPts[4];

protected:
  float yaw, pitch, roll, fov, aspect_ratio, Znear, Zfar;
  static glm::vec3 UP;
  glm::vec3 look;
  glm::vec3 up;
  glm::vec3 right;
  glm::vec3 position;
  glm::mat4 V;       //view matrix
  glm::mat4 P;       //projection matrix
  CPlane planes[6];  //frustum planes
};


How to do it…

In a typical application, we will not use the CAbstractCamera class. Instead, we will use either the CFreeCamera class or the CTargetCamera class, as detailed in the following recipes. In this recipe, we will see how to handle input using the mouse and keyboard.

In order to handle the keyboard events, we perform the following processing in the idle callback function:

Check for the keyboard key press event.
If the W or S key is pressed, move the camera in the look vector direction:
if( GetAsyncKeyState(VK_W) & 0x8000)
  cam.Walk(dt);
if( GetAsyncKeyState(VK_S) & 0x8000)
  cam.Walk(-dt);
If the A or D key is pressed, move the camera in the right vector direction:
if( GetAsyncKeyState(VK_A) & 0x8000)
  cam.Strafe(-dt); 
if( GetAsyncKeyState(VK_D) & 0x8000)
  cam.Strafe(dt);
If the Q or Z key is pressed, move the camera in the up vector direction:
if( GetAsyncKeyState(VK_Q) & 0x8000)
  cam.Lift(dt); 
if( GetAsyncKeyState(VK_Z) & 0x8000)
  cam.Lift(-dt);

For handling mouse events, we attach two callbacks: one for mouse movement and the other for mouse click event handling.
There's more…

See also

Smooth mouse filtering

The free camera is the first camera type we will implement in this recipe. A free camera does not have a fixed target. However, it does have a position from which it can look in any direction.

Getting ready

The code for this recipe is in the Chapter2/src directory.
How to do it…

The steps needed to implement the free camera are as follows:

Define the CFreeCamera class and add a vector to store the current translation.
In the Update method, calculate the new orientation (rotation) matrix, using the current camera orientations (that is, yaw, pitch, and roll amount):
glm::mat4 R = glm::yawPitchRoll(yaw,pitch,roll);
There's more…

See also

DHPOWare OpenGL camera demo – Part 1

The target camera works the opposite way: rather than the position, the target remains fixed, while the camera moves or rotates around the target. Some operations, such as panning, move both the target and the camera position together.

Getting ready

The code for this recipe is in the Chapter2/src directory.
How to do it…

We implement the target camera as follows:

Define the CTargetCamera class with a target position (target), the rotation limits (minRy and maxRy), the distance between the target and the camera position (distance), and the distance limits (minDistance and maxDistance).
In the Update method, calculate the new orientation (rotation) matrix using the current camera orientations (that is, yaw, pitch, and roll amount):
glm::mat4 R = glm::yawPitchRoll(yaw,pitch,roll);
Use the distance to get a vector and then translate this vector by the current rotation matrix:
glm::vec3 T = glm::vec3(0,0,distance);
T = glm::vec3(R*glm::vec4(T,0.0f));
Get the new camera position by adding the translation vector to the target position:
position = target + T;
There's more…

See also

DHPOWare OpenGL camera demo – Part 1

When working with a lot of polygonal data, we need to reduce the amount of geometry pushed to the GPU for processing. There are several techniques for scene management, such as quadtrees, octrees, and BSP trees. These techniques sort the geometry in visibility order, so that objects can be drawn in order (and some of them culled from the display entirely). This helps reduce the workload on the GPU.

Even before such techniques are used, there is an additional step that most graphics applications perform: view frustum culling. This process removes geometry that is not in the current camera's view frustum. The idea is that if an object is not viewable, it should not be processed. A frustum is a chopped pyramid with its tip at the camera position and its base at the far clip plane. The near clip plane is where the pyramid is chopped, as shown in the following figure. Any geometry inside the viewing frustum is displayed.

Implementing view frustum culling

Getting ready

For this recipe, we will create a grid of points that are moved in a sine wave using a simple vertex shader. The geometry shader does the view frustum culling by only emitting vertices that are inside the viewing frustum. The calculation of the viewing frustum is carried out on the CPU, based on the camera projection parameters. We will follow the geometric approach in this tutorial. The code implementing this recipe is in the Chapter2/ViewFrustumCulling directory.

How to do it…

We will implement view frustum culling by taking the following steps:

  1. Define a vertex shader that displaces the object-space vertex position using a sine wave in the y axis:
    #version 330 core
    layout(location = 0) in vec3 vVertex;
    uniform float t;
    const float PI = 3.141592;
    void main()
    {
      gl_Position=vec4(vVertex,1)+vec4(0,sin(vVertex.x*2*PI+t),0,0);
    }
  2. Define a geometry shader that performs the view frustum culling calculation on each vertex passed in from the vertex shader:
    #version 330 core
    layout (points) in;
    layout (points, max_vertices=3) out;
    uniform mat4 MVP;
    uniform vec4 FrustumPlanes[6];
    bool PointInFrustum(in vec3 p) {
      for(int i=0; i < 6; i++) 
      {
        vec4 plane=FrustumPlanes[i];
        if ((dot(plane.xyz, p)+plane.w) < 0)
          return false;
      }
      return true;
    }
    void main()
    {
      //get the basic vertices
      for(int i=0;i<gl_in.length(); i++) { 
        vec4 vInPos = gl_in[i].gl_Position;
        vec2 tmp = (vInPos.xz*2-1.0)*5;
        vec3 V = vec3(tmp.x, vInPos.y, tmp.y);
        gl_Position = MVP*vec4(V,1);
        if(PointInFrustum(V)) { 
          EmitVertex();
        } 
      }
      EndPrimitive();
    }
  3. To render particles as rounded points, we do a simple calculation that discards all fragments falling outside the radius of the circle:
    #version 330 core
    layout(location = 0) out vec4 vFragColor;
    void main() {
      vec2 pos = (gl_PointCoord.xy-0.5);
      if(0.25<dot(pos,pos))	discard;
      vFragColor = vec4(0,0,1,1);
    }
  4. On the CPU side, call the CAbstractCamera::CalcFrustumPlanes() function to calculate the viewing frustum planes. Get the calculated frustum planes as a glm::vec4 array by calling CAbstractCamera::GetFrustumPlanes(), and then pass these to the shader. The xyz components store the plane's normal, and the w coordinate stores the distance of the plane. After these calls we draw the points:
    pCurrentCam->CalcFrustumPlanes();
    glm::vec4 p[6];
    pCurrentCam->GetFrustumPlanes(p);
    pointShader.Use();
      glUniform1f(pointShader("t"), current_time);
      glUniformMatrix4fv(pointShader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP)); 
      glUniform4fv(pointShader("FrustumPlanes"), 6, glm::value_ptr(p[0]));
      glBindVertexArray(pointVAOID);
      glDrawArrays(GL_POINTS,0,MAX_POINTS);
    pointShader.UnUse();

How it works…

There are two main parts of this recipe: calculation of the viewing frustum planes and checking if a given point is in the viewing frustum. The first calculation is carried out in the CAbstractCamera::CalcFrustumPlanes() function. Refer to the Chapter2/src/AbstractCamera.cpp files for details.

In this function, we follow the geometric approach, whereby we first calculate the eight points of the frustum at the near and far clip planes. Theoretical details about this method are well explained in the reference given in the See also section. Once we have the eight frustum points, we use three of these points successively to get the bounding planes of the frustum. Here, we call the CPlane::FromPoints function, which generates a CPlane object from the given three points. This is repeated to get all six planes.

Testing whether a point is in the viewing frustum is carried out in the geometry shader's PointInFrustum function, which is defined as follows:

This function iterates through all of the six frustum planes. In each iteration, it checks the signed distance of the given point p with respect to the ith frustum plane. This is a simple dot product of the plane normal with the given point and adding the plane distance. If the signed distance is negative for any of the planes, the point is outside the viewing frustum so we can safely reject the point. If the point has a positive signed distance for all of the six frustum planes, it is inside the viewing frustum. Note that the frustum planes are oriented in such a way that their normals point inside the viewing frustum.

The demonstration implementing this recipe shows two cameras, the local camera (camera 1) which shows the sine wave and a world camera (camera 2) which shows the whole world, including the first camera frustum. We can toggle the current camera by pressing 1 for camera 1 and 2 for camera 2. When in camera 1 view, dragging the left mouse button rotates the scene, and the information about the total number of points in the viewing frustum are displayed in the title bar. In the camera 2 view, left-clicking rotates camera 1, and the displayed viewing frustum is updated so we can see what the camera view should contain.


When in the camera 1 view (see the following figure), we see the close-up of the wave as it displaces the points in the Y direction. In this view, the points are rendered in blue color. Moreover, the total number of visible points is written in the title bar. The frame rate is also written to show the performance benefit from view frustum culling.

There's more…

When in the camera 2 view (see the following figure), we can click-and-drag the left mouse button to rotate camera 1. This allows us to see the updated viewing frustum and the visible points. In the camera 2 view, visible points in the camera 1 view frustum are rendered in magenta color, the viewing frustum planes are in red color, and the invisible points (in camera 1 viewing frustum) are in blue color.


In order to see the total number of visible vertices emitted from the geometry shader, we use a hardware query. The whole shader and the rendering code are bracketed in the begin/end query call as shown in the following code:

glBeginQuery(GL_PRIMITIVES_GENERATED, query); pointShader.Use(); glUniform1f(pointShader("t"), current_time); glUniformMatrix4fv(pointShader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP)); glUniform4fv(pointShader("FrustumPlanes"), 6, glm::value_ptr(p[0])); glBindVertexArray(pointVAOID); glDrawArrays(GL_POINTS,0,MAX_POINTS); pointShader.UnUse(); glEndQuery(GL_PRIMITIVES_GENERATED);

After these calls, the query result is retrieved by calling:

GLuint res; glGetQueryObjectuiv(query, GL_QUERY_RESULT, &res);

If successful, this call returns the total number of vertices emitted from the geometry shader, and that is the total number of vertices in the viewing frustum. See also

Lighthouse 3D view frustum culling

Often when working on projects, we need the ability to pick graphical objects on screen. In OpenGL versions prior to OpenGL 3.0, the selection buffer was used for this purpose, but that buffer has been removed from the modern OpenGL 3.3 core profile, leaving us to implement alternate methods. In this recipe, we will implement a simple picking technique using the depth buffer.

Getting ready

The code for this recipe is in the Chapter2/Picking_DepthBuffer folder. Relevant source files are in the Chapter2/src folder.

How to do it…

Picking using the depth buffer can be implemented as follows:

  1. Enable depth testing:
    glEnable(GL_DEPTH_TEST);
  2. In the mouse down event handler, read the depth value from the depth buffer using the glReadPixels function at the clicked point:
    GLfloat winZ;
    glReadPixels( x, HEIGHT-y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
  3. Unproject the 3D point, vec3(x,HEIGHT-y,winZ), to obtain the object-space point from the clicked screen-space point x,y and the depth value winZ. Make sure to invert the y value by subtracting the screen-space y value from HEIGHT:
    glm::vec3 objPt = glm::unProject(glm::vec3(x,HEIGHT-y,winZ), MV, P, glm::vec4(0,0,WIDTH, HEIGHT));
  4. Check the distance of each scene object from the object-space point objPt. If the distance is within the bounds of the object and is the smallest found so far, store the index of the object:
    size_t i=0;
    float minDist = 1000;
    selected_box=-1;
    for(i=0;i<3;i++) {
      float dist = glm::distance(box_positions[i], objPt);
      if( dist<1 && dist<minDist) {
        selected_box = i;
        minDist = dist;
      }
    }
  5. Based on the selected index, color the object as selected:
    glm::mat4 T = glm::translate(glm::mat4(1), box_positions[0]);
    cube->color = (selected_box==0)?glm::vec3(0,1,1):glm::vec3(1,0,0);
    cube->Render(glm::value_ptr(MVP*T));

    T = glm::translate(glm::mat4(1), box_positions[1]);
    cube->color = (selected_box==1)?glm::vec3(0,1,1):glm::vec3(0,1,0);
    cube->Render(glm::value_ptr(MVP*T));

    T = glm::translate(glm::mat4(1), box_positions[2]);
    cube->color = (selected_box==2)?glm::vec3(0,1,1):glm::vec3(0,0,1);
    cube->Render(glm::value_ptr(MVP*T));
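The nearest-object test in step 4 can be isolated into a small helper function. The following is a minimal sketch using plain structs in place of the GLM types; the names Vec3 and pickNearest are illustrative, not taken from the book's code:

```cpp
#include <cmath>
#include <cstddef>

// Minimal stand-in for glm::vec3 / glm::distance so the selection
// logic can be shown in isolation (hypothetical types).
struct Vec3 { float x, y, z; };

static float distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Returns the index of the object nearest to the unprojected point,
// or -1 if no object lies within the given picking radius.
int pickNearest(const Vec3* boxes, std::size_t count,
                const Vec3& objPt, float radius) {
    int selected = -1;
    float minDist = radius;  // anything farther than radius is ignored
    for (std::size_t i = 0; i < count; ++i) {
        float d = distance(boxes[i], objPt);
        if (d < minDist) {
            selected = static_cast<int>(i);
            minDist = d;
        }
    }
    return selected;
}
```

Keeping the picking radius explicit (instead of the hard-coded 1 and 1000 in the listing) makes the bound of each object easy to tune per scene.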
How it works…

This recipe reads the depth of the clicked pixel back from the depth buffer and then unprojects the screen-space point, using that depth as the z coordinate, to recover the corresponding object-space position. The scene object nearest to this position, within a small distance threshold, is taken as the picked object.

There's more…

In the demonstration application for this recipe, when the user clicks on any cube, the currently selected box changes color to cyan to signify selection, as shown in the following figure:
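One practical detail, not shown in the listing above but worth noting: if the click lands on empty background, glReadPixels returns the depth-clear value of 1.0, so the pick can be rejected early. A minimal sketch (the helper names are illustrative):

```cpp
// Window-space depth lies in [0,1]; the depth buffer is cleared to 1.0
// by default, so a depth of (almost) 1.0 means the click hit no geometry.
bool hitGeometry(float winZ) {
    return winZ < 1.0f - 1e-6f;
}

// Map window-space depth [0,1] to NDC depth [-1,1], the range used
// internally by the unprojection math (with the default depth range).
float windowZToNdc(float winZ) {
    return 2.0f * winZ - 1.0f;
}
```

Skipping the unprojection and distance tests entirely when hitGeometry returns false avoids spuriously selecting an object near the far plane.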

See also

Picking tutorial at OGLDEV

Another method which is used for picking objects in a 3D world is color-based picking. In this recipe, we will use the same scene as in the last recipe.

Getting ready

The code for this recipe is in the Chapter2/Picking_ColorBuffer folder. Relevant source files are in the Chapter2/src folder.

How to do it…

To enable picking with the color buffer, the following steps are needed:

  1. Disable dithering. This is done to prevent any color mismatch during the query:
    glDisable(GL_DITHER);
  2. In the mouse down event handler, read the color value at the clicked position from the color buffer using the glReadPixels function:
    GLubyte pixel[4];
    glReadPixels(x, HEIGHT-y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
  3. Compare the color value at the clicked point against the unique color of each object to find the picked object:
    selected_box=-1;
    if(pixel[0]==255 && pixel[1]==0 && pixel[2]==0) {
      cout<<"picked box 1"<<endl;
      selected_box = 0;
    } else if(pixel[0]==0 && pixel[1]==255 && pixel[2]==0) {
      cout<<"picked box 2"<<endl;
      selected_box = 1;
    } else if(pixel[0]==0 && pixel[1]==0 && pixel[2]==255) {
      cout<<"picked box 3"<<endl;
      selected_box = 2;
    }
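Comparing hard-coded colors works for three boxes, but with many objects it is more robust to encode each object's index directly into the color it is rendered with. A sketch of such an encoding (illustrative helpers, not the book's code):

```cpp
#include <cstdint>

// Encode a zero-based object index into an RGB triplet, reserving
// black (0,0,0) for "background". Supports up to 2^24 - 1 objects.
void indexToColor(std::uint32_t index, std::uint8_t rgb[3]) {
    std::uint32_t id = index + 1;  // 0 means "nothing picked"
    rgb[0] = static_cast<std::uint8_t>(id & 0xFF);
    rgb[1] = static_cast<std::uint8_t>((id >> 8) & 0xFF);
    rgb[2] = static_cast<std::uint8_t>((id >> 16) & 0xFF);
}

// Decode the pixel read back with glReadPixels; returns -1 for background.
std::int64_t colorToIndex(const std::uint8_t rgb[3]) {
    std::uint32_t id = rgb[0] | (rgb[1] << 8) | (rgb[2] << 16);
    return static_cast<std::int64_t>(id) - 1;
}
```

Each object would be rendered in a picking pass using the color produced by indexToColor for its index, so the comparison chain above collapses to a single colorToIndex call.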
How it works…

Each box is rendered with a unique color (red, green, or blue). When the user clicks, the color of the clicked pixel is read back from the color buffer and compared against each object's color; a match identifies the picked object. Dithering is disabled so that the color written to the framebuffer is exactly the color we assigned.

See also

Lighthouse3d color coded picking tutorial

The final method we will cover for picking involves casting rays into the scene to determine the object nearest to the viewer. We will use the same scene as in the last two recipes: three cubes (red, green, and blue) placed near the origin.


Getting ready

The code for this recipe is in the Chapter2/Picking_SceneIntersection folder. Relevant source files are in the Chapter2/src folder.


How to do it…

For picking with scene intersection queries, take the following steps:

  1. Get two object-space points by unprojecting the screen-space point (x, HEIGHT-y) with two different depth values, one at z=0 and the other at z=1:
    glm::vec3 start = glm::unProject(glm::vec3(x,HEIGHT-y,0), MV, P, glm::vec4(0,0,WIDTH,HEIGHT));
    glm::vec3   end = glm::unProject(glm::vec3(x,HEIGHT-y,1), MV, P, glm::vec4(0,0,WIDTH,HEIGHT));
  2. Store the current camera position as eyeRay.origin and get eyeRay.direction by normalizing the difference of the two object-space points, end and start:
    eyeRay.origin     =  cam.GetPosition();
    eyeRay.direction  =  glm::normalize(end-start);
  3. For each object in the scene, find the intersection of the eye ray with the object's Axially Aligned Bounding Box (AABB), and store the index of the nearest intersected object.
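The first two steps can be wrapped in a small helper. The sketch below uses plain structs in place of the GLM types; note that the book uses the camera position as the ray origin, while here the near-plane point serves that role (the names are illustrative):

```cpp
#include <cmath>

// Minimal stand-ins for glm::vec3 and the recipe's eyeRay (hypothetical types).
struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin; Vec3 direction; };

// Build the eye ray from the two unprojected points: the ray starts at
// the near-plane point and points toward the far-plane point.
Ray makeEyeRay(const Vec3& nearPt, const Vec3& farPt) {
    Vec3 d{farPt.x - nearPt.x, farPt.y - nearPt.y, farPt.z - nearPt.z};
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return Ray{nearPt, Vec3{d.x / len, d.y / len, d.z / len}};
}
```

Both choices of origin lie on the same line through the clicked pixel, so the set of objects the ray can hit is the same either way.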


How it works…

The method discussed in this recipe first casts a ray from the camera origin in the clicked direction, and then checks the bounding boxes of all scene objects for intersection. There are two sub-parts: estimating the ray direction from the clicked point, and the ray/AABB intersection test. We first focus on estimating the ray direction from the clicked point.

We know that after projection, the x and y values are in the -1 to 1 range. The z (depth) values are in the 0 to 1 range, with 0 at the near clip plane and 1 at the far clip plane. We first take the screen-space point and unproject it with the near clip plane z value of 0. This gives us the object-space point at the near clip plane. Next, we unproject the same screen-space point with the z value of 1, which gives us the object-space point at the far clip plane. Subtracting the two unprojected object-space points gives us the ray direction. We store the camera position as eyeRay.origin and the normalized ray direction as eyeRay.direction.

After calculating the eye ray, we check it for intersection against all of the scene geometry. If an object's bounding box intersects the eye ray and it is the nearest intersection so far, we store the index of the object. This ray/AABB test is implemented in the intersectBox function.
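The book's intersectBox listing is not reproduced in this extract. The following is a standard slab-method sketch of such a ray/AABB test, with plain structs standing in for the GLM types; it is an illustration of the technique, not the original implementation:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin; Vec3 direction; };

// Slab-method ray/AABB test: intersect the ray against the three pairs of
// axis-aligned planes and check that the parametric intervals overlap.
// On a hit, tNear receives the parametric distance to the entry point.
// (Known edge case: a zero direction component with the origin exactly
// on a slab face produces 0*inf = NaN.)
bool intersectBox(const Ray& ray, const Vec3& boxMin, const Vec3& boxMax,
                  float& tNear) {
    float t0 = -1e30f, t1 = 1e30f;
    const float o[3]  = {ray.origin.x, ray.origin.y, ray.origin.z};
    const float d[3]  = {ray.direction.x, ray.direction.y, ray.direction.z};
    const float mn[3] = {boxMin.x, boxMin.y, boxMin.z};
    const float mx[3] = {boxMax.x, boxMax.y, boxMax.z};
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / d[i];           // IEEE infinities cover d[i]==0
        float tA = (mn[i] - o[i]) * inv;
        float tB = (mx[i] - o[i]) * inv;
        if (tA > tB) std::swap(tA, tB);
        t0 = std::max(t0, tA);
        t1 = std::min(t1, tB);
        if (t0 > t1) return false;         // slab intervals do not overlap
    }
    if (t1 < 0.0f) return false;           // box lies entirely behind the ray
    tNear = t0;
    return true;
}
```

To pick the nearest object, each box's intersectBox result is compared and the index with the smallest positive tNear is kept.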

There's more…

See also