OpenGL - Build high performance graphics

By: William Lo, David Wolff, Muhammad Mobeen Movania, Raymond Chun Hing Lo

Overview of this book

OpenGL is a fully functional, cross-platform API widely adopted across the industry for 2D and 3D graphics development. It is mainly used for game development and applications, but is equally popular in a vast variety of additional sectors. This practical course will help you gain proficiency with OpenGL and build compelling graphics for your games and applications. OpenGL Development Cookbook – this is your go-to guide for learning graphics programming techniques and implementing 3D animations with OpenGL. This straight-talking cookbook is perfect for intermediate C++ programmers who want to exploit the full potential of OpenGL, and it is full of practical techniques for implementing amazing computer graphics and visualizations. OpenGL 4.0 Shading Language Cookbook, Second Edition – with version 4, the shading language has been further refined to provide programmers with greater power and flexibility, with new stages such as tessellation and compute. OpenGL Shading Language 4 Cookbook is a practical guide that takes you from the fundamentals of programming with modern GLSL and OpenGL through to advanced techniques. OpenGL Data Visualization Cookbook – this easy-to-follow, comprehensive cookbook shows readers how to create a variety of real-time, interactive data visualization tools. Each topic is explained in a step-by-step format, and a range of hot topics is covered, including stereoscopic 3D rendering and data visualization on mobile/wearable platforms. By the end of this guide, you will be equipped with the essential skills to develop a wide range of impressive OpenGL-based applications for your unique data visualization needs. This Learning Path combines some of the best that Packt has to offer in one complete, curated package. It includes content from the following Packt products: OpenGL Development Cookbook by Muhammad Mobeen Movania; OpenGL 4.0 Shading Language Cookbook, Second Edition by David Wolff; and OpenGL Data Visualization Cookbook by Raymond C. H. Lo and William C. Y. Lo.

In this chapter, we will focus on:

Volume rendering is a special class of rendering algorithms that allows us to portray fuzzy phenomena, such as smoke. There are numerous algorithms for volume rendering. To start our quest, we will focus on the simplest method, called 3D texture slicing. This method approximates the volume-density function by slicing the dataset in front-to-back or back-to-front order and then blending the proxy slices using hardware-supported blending. Since it relies on the rasterization hardware, this method is very fast on modern GPUs.

The pseudo code for view-aligned 3D texture slicing is as follows:
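The pseudocode listing itself is not reproduced in this excerpt; based on the steps and the How it works… discussion below, the algorithm can be outlined roughly as follows:

    get the view direction vector from the modelview matrix
    find the minimum and maximum distances of the unit cube vertices along the view direction
    for each slice, from the farthest plane to the nearest plane:
        for each of the 12 cube edges:
            compute the intersection of the slicing plane with the edge (if any)
        triangulate the (3 to 6) intersection points into a triangle fan
        append the resulting vertices to the proxy-slice vertex array
    upload the proxy-slice vertices to the vertex buffer object
    render the proxy slices back to front with over blending, sampling the 3D texture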

Getting ready

The code for this recipe is in the Chapter7/3DTextureSlicing directory.

How to do it…

Let us start our recipe by following these simple steps:

  1. Load the volume dataset by reading the external volume data file and passing the data into an OpenGL texture. Also enable hardware mipmap generation. Typically, volume data files store densities obtained using a cross-sectional imaging modality such as CT or MRI. Each CT/MRI scan is a 2D slice; we accumulate these slices in the Z direction to obtain a 3D texture, which is simply an array of 2D textures. The densities represent different material types; for example, values ranging from 0 to 20 are typically occupied by air. As we have an 8-bit unsigned dataset, we store it in a local array of GLubyte type. If we had an unsigned 16-bit dataset, we would store it in a local array of GLushort type. In the case of 3D textures, in addition to the S and T parameters, we have an additional parameter, R, that controls the slice we are at in the 3D texture.
    std::ifstream infile(volume_file.c_str(), std::ios_base::binary);
    if(infile.good()) {
       //read the raw volume densities into a heap-allocated array
       GLubyte* pData = new GLubyte[XDIM*YDIM*ZDIM];
       infile.read(reinterpret_cast<char*>(pData), XDIM*YDIM*ZDIM*sizeof(GLubyte));
       infile.close();

       //create the 3D texture and set its wrapping and filtering parameters
       glGenTextures(1, &textureID);
       glBindTexture(GL_TEXTURE_3D, textureID);
       //GL_CLAMP_TO_EDGE is used since GL_CLAMP is not available in the core profile
       glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
       glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
       glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
       glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
       glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
       glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_BASE_LEVEL, 0);
       glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAX_LEVEL, 4);

       //upload the volume data and generate the mipmap chain
       glTexImage3D(GL_TEXTURE_3D, 0, GL_RED, XDIM, YDIM, ZDIM, 0, GL_RED, GL_UNSIGNED_BYTE, pData);
       glGenerateMipmap(GL_TEXTURE_3D);
       //the data now resides on the GPU, so the heap copy can be released
       delete [] pData;
       return true;
    } else {
       return false;
    }

    The filtering parameters for 3D textures are similar to the 2D texture parameters that we have seen before. Mipmaps are collections of down-sampled versions of a texture that are used for level of detail (LOD) functionality; that is, they allow a down-sampled version of the texture to be used when the viewer is far from the object on which the texture is applied, which helps improve the performance of the application. We specify the highest mipmap level to generate (GL_TEXTURE_MAX_LEVEL) as well as the base level (GL_TEXTURE_BASE_LEVEL), which denotes the first mipmap level and is used when the object is closest.

    The glGenerateMipmap function works by generating derived mipmap levels through a repeated filtered reduction of the previous level. So let's say that we have three mipmap levels and our 3D texture has a resolution of 256×256×256 at level 0. For the level 1 mipmap, the level 0 data is reduced by filtered reduction to half the size, 128×128×128. For the level 2 mipmap, the level 1 data is filtered and reduced to 64×64×64. Finally, for the level 3 mipmap, the level 2 data is filtered and reduced to 32×32×32.

  2. Set up a vertex array object and a vertex buffer object to store the geometry of the proxy slices. Make sure that the buffer object usage is specified as GL_DYNAMIC_DRAW. The initial glBufferData call allocates GPU memory for the maximum number of slices. The vTextureSlices array is defined globally and stores the vertices produced by the texture slicing operation for triangulation. glBufferData is passed 0 (a null pointer) for the data because the buffer will be filled dynamically at runtime.
    const int MAX_SLICES = 512;
    glm::vec3 vTextureSlices[MAX_SLICES*12];
    
    glGenVertexArrays(1, &volumeVAO);
    glGenBuffers(1, &volumeVBO);  
    glBindVertexArray(volumeVAO);
    glBindBuffer (GL_ARRAY_BUFFER, volumeVBO);
    glBufferData (GL_ARRAY_BUFFER, sizeof(vTextureSlices), 0, GL_DYNAMIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,0,0); 
    glBindVertexArray(0);
  3. Implement the slicing of the volume by finding the intersections of a unit cube with proxy slices perpendicular to the viewing direction. This is carried out by the SliceVolume function. We use a unit cube since our dataset has equal size along all three axes, that is, 256×256×256. If we have a dataset with unequal dimensions, we can scale the unit cube appropriately.
        //determine max and min distances
        glm::vec3 vecStart[12];
        glm::vec3 vecDir[12];
        float lambda[12];
        float lambda_inc[12];
        float denom = 0;
        float plane_dist = min_dist;
        float plane_dist_inc = (max_dist-min_dist)/float(num_slices);

        //determine vecStart and vecDir values
        glm::vec3 intersection[6];
        float dL[12];
        int count = 0;  //number of proxy slice vertices generated so far

        for(int i=num_slices-1;i>=0;i--) {
            for(int e = 0; e < 12; e++) {
                dL[e] = lambda[e] + i*lambda_inc[e];
            }

            if((dL[0] >= 0.0) && (dL[0] < 1.0)) {
                intersection[0] = vecStart[0] + dL[0]*vecDir[0];
            }
            //likewise for all remaining intersection points

            //triangulate the (up to six) intersection points into a triangle fan
            int indices[]={0,1,2, 0,2,3, 0,3,4, 0,4,5};
            for(int t=0;t<12;t++)
                vTextureSlices[count++]=intersection[indices[t]];
        }
        //update buffer object
        glBindBuffer(GL_ARRAY_BUFFER, volumeVBO);
        glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vTextureSlices), &(vTextureSlices[0].x));
  4. In the render function, enable alpha blending with the over blending function, bind the volume vertex array object, bind the shader, and then call the glDrawArrays function.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); 
    glBindVertexArray(volumeVAO);	
    shader.Use(); 
    glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP));
    glDrawArrays(GL_TRIANGLES, 0, sizeof(vTextureSlices)/sizeof(vTextureSlices[0]));
    shader.UnUse(); 
    glDisable(GL_BLEND);

How it works…

Volume rendering using 3D texture slicing approximates the volume rendering integral by alpha-blending textured slices. The first step is loading the volume data and generating a 3D texture from it. After loading the volume dataset, the slicing of the volume is carried out using proxy slices. These are oriented perpendicular to the viewing direction, and we have to find the intersection of the proxy polygons with the unit cube boundaries; this is carried out by the SliceVolume function. Note that slicing is carried out only when the view is rotated.

We first obtain the view direction vector (viewDir), which is the third column of the modelview matrix; the first column stores the right vector and the second column stores the up vector. We will now detail how the SliceVolume function works internally. We find the nearest and farthest unit cube vertices in the current viewing direction by calculating the minimum and maximum distances of the eight unit cube vertices along the viewing direction. These distances are obtained using the dot product of each unit cube vertex with the view direction vector:
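The exact listing is not reproduced in this excerpt; as an illustration, the distance computation amounts to something like the following sketch (vertexList, holding the eight unit cube vertices, is an assumed name):

    float max_dist = -FLT_MAX;
    float min_dist =  FLT_MAX;
    int   max_index = 0;
    for (int i = 0; i < 8; i++) {
        //signed distance of each unit cube vertex along the view direction
        float dist = glm::dot(viewDir, vertexList[i]);
        if (dist > max_dist) { max_dist = dist; max_index = i; }
        if (dist < min_dist) { min_dist = dist; }
    }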

There are only three unique paths when going from the nearest vertex to the farthest vertex from the camera. We store all possible paths for each vertex into an edge table, which is defined as follows:

Next, plane intersection distances are estimated for the 12 edge indices of the unit cube:
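The corresponding listing is also not reproduced here. Reusing the variable names from step 3 (vecStart, vecDir, lambda, lambda_inc, plane_dist, plane_dist_inc, denom), the estimation amounts to roughly the following sketch; treat it as an approximation rather than the recipe's exact code:

    for (int i = 0; i < 12; i++) {
        //project the edge start point and edge direction onto the view direction
        denom = glm::dot(vecDir[i], viewDir);
        if (fabs(denom) > 0.0001f) {
            //parametric distance of the first slicing plane along this edge,
            //and its increment from one slice to the next
            lambda[i]     = (plane_dist - glm::dot(vecStart[i], viewDir)) / denom;
            lambda_inc[i] = plane_dist_inc / denom;
        } else {
            //edge is parallel to the slicing planes; mark it as never intersected
            lambda[i]     = -1.0f;
            lambda_inc[i] =  0.0f;
        }
    }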

Finally, the interpolated intersections with the unit cube edges are carried out by moving back-to-front in the viewing direction. After proxy slices have been generated, the vertex buffer object is updated with the new data.

In the rendering function, the appropriate shader is bound. The vertex shader calculates the clip space position by multiplying the object space vertex position (vPosition) with the combined model view projection (MVP) matrix. It also calculates the 3D texture coordinates (vUV) for the volume data. Since we render a unit cube, the minimum vertex position will be (-0.5,-0.5,-0.5) and the maximum vertex position will be (0.5,0.5,0.5). Since our 3D texture lookup requires coordinates from (0,0,0) to (1,1,1), we add (0.5,0.5,0.5) to the object space vertex position to obtain the correct 3D texture coordinates.

The fragment shader then uses the 3D texture coordinates to sample the volume data (which is now accessed through a new sampler type, sampler3D, for 3D textures) to display the density. At the time of creation of the 3D texture, we specified the internal format as GL_RED (the third parameter of the glTexImage3D function). Therefore, we can now access our densities through the red channel of the texture sampler. To get a shade of grey, we set the same value for the green, blue, and alpha channels as well.
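Put together, the two shaders amount to roughly the following minimal sketch. The attribute and uniform names follow the recipe text; everything else is an assumption of this excerpt rather than the recipe's exact shader source.

    // vertex shader
    #version 330 core
    layout(location = 0) in vec3 vPosition;   // object space proxy-slice vertex
    uniform mat4 MVP;                         // combined modelview projection matrix
    smooth out vec3 vUV;                      // 3D texture coordinates
    void main()
    {
        gl_Position = MVP * vec4(vPosition, 1.0);
        vUV = vPosition + vec3(0.5);          // [-0.5,0.5] -> [0,1]
    }

    // fragment shader
    #version 330 core
    layout(location = 0) out vec4 vFragColor;
    smooth in vec3 vUV;
    uniform sampler3D volume;                 // the 3D volume texture
    void main()
    {
        // replicate the red-channel density to get a shade of grey
        vFragColor = texture(volume, vUV).rrrr;
    }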

In previous OpenGL versions, we would store volume densities in the special internal format GL_INTENSITY. This is deprecated in the OpenGL 3.3 core profile, so we now store such single-channel data using the GL_RED internal format and read the density from the red channel in the shader.

There's more…

The output from the demo application for this recipe volume renders the engine dataset using 3D texture slicing. In the demo code, we can change the number of slices by pressing the + and - keys.

We now show how the result is obtained, using an image containing successive 3D texture slicing results in the same viewing direction, from 8 slices all the way up to 256 slices. The results are given in the following screenshot. The wireframe view is shown in the top row, whereas the alpha-blended result is shown in the bottom row.

As can be seen, the rendering quality improves as the number of slices is increased.

See also

  • Section 3.5.2, Viewport-Aligned Slices, in Chapter 3, GPU-based Volume Rendering, of Real-Time Volume Graphics, AK Peters/CRC Press, pages 73 to 79

In this recipe, we will implement volume rendering using single-pass GPU ray casting. There are two basic approaches for doing GPU ray casting: the multi-pass approach and the single-pass approach. Both of these approaches differ in how they estimate the ray marching direction vectors. The single-pass approach uses a single fragment shader. The steps described here can be understood easily from the following diagram:

Implementing volume rendering using single-pass GPU ray casting

First, the camera ray direction is calculated by subtracting the vertex positions from the camera position. This gives the ray marching direction. The initial ray position (that is, the ray entry position) is the vertex position. Then based on the ray step size, the initial ray position is advanced in the ray direction using a loop. The volume dataset is then sampled at this position to obtain the density value. This process is continued forward advancing the current ray position until either the ray exits the volume dataset or the alpha value of the color is completely saturated.

The samples obtained during the ray traversal are composited using the current ray function. If the average ray function is used, all of the sample densities are added and then divided by the total number of samples. In the case of front-to-back alpha compositing, the alpha of the current sample is multiplied by the accumulated alpha and the product is subtracted from the current sample's alpha; this gives the portion of the current sample that is not yet occluded by previous samples. This value is added to the accumulated alpha, and it is also multiplied by the current density and the resulting color is added to the accumulated color. The accumulated color is then returned as the final fragment color.
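In code form, the front-to-back compositing step described above looks like the following lines, which mirror the fragment shown in step 6 of the recipe below (sample is the density read from the volume at the current ray position, and vFragColor holds the accumulated color):

    float prev_alpha = sample - (sample * vFragColor.a);  // contribution not yet occluded
    vFragColor.rgb  += prev_alpha * vec3(sample);          // accumulate color
    vFragColor.a    += prev_alpha;                         // accumulate opacity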

Getting ready

The code for this recipe is contained in the Chapter7/GPURaycasting folder.

How to do it…

The steps required to implement single-pass GPU ray casting are as follows:

  1. Load the volume data from the file into a 3D OpenGL texture as in the previous recipe. Refer to the LoadVolume function in Chapter7/GPURaycasting/main.cpp for details.
  2. Set up a vertex array object and a vertex buffer object to render a unit cube as follows:
    glGenVertexArrays(1, &cubeVAOID);
    glGenBuffers(1, &cubeVBOID);
    glGenBuffers(1, &cubeIndicesID);

    //the eight corners of a unit cube centred at the origin
    glm::vec3 vertices[8] = {glm::vec3(-0.5f,-0.5f,-0.5f), glm::vec3( 0.5f,-0.5f,-0.5f),
                             glm::vec3( 0.5f, 0.5f,-0.5f), glm::vec3(-0.5f, 0.5f,-0.5f),
                             glm::vec3(-0.5f,-0.5f, 0.5f), glm::vec3( 0.5f,-0.5f, 0.5f),
                             glm::vec3( 0.5f, 0.5f, 0.5f), glm::vec3(-0.5f, 0.5f, 0.5f)};

    //indices of the 12 triangles forming the cube faces
    GLushort cubeIndices[36] = {0,5,4, 5,0,1, 3,7,6, 3,6,2, 7,4,6, 6,4,5,
                                2,1,3, 3,1,0, 3,0,7, 7,0,4, 6,5,2, 2,5,1};

    glBindVertexArray(cubeVAOID);
    glBindBuffer(GL_ARRAY_BUFFER, cubeVBOID);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), &(vertices[0].x), GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, cubeIndicesID);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(cubeIndices), &cubeIndices[0], GL_STATIC_DRAW);
    glBindVertexArray(0);
  3. In the render function, set the ray casting vertex and fragment shaders (Chapter7/GPURaycasting/shaders/raycaster.(vert,frag)) and then render the unit cube.
    glEnable(GL_BLEND);
    glBindVertexArray(cubeVAOID);
    shader.Use(); 
    glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP));
    glUniform3fv(shader("camPos"), 1, &(camPos.x));
    glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, 0);
    shader.UnUse(); 
    glDisable(GL_BLEND);
  4. From the vertex shader, in addition to the clip space position, output the 3D texture coordinates for lookup in the fragment shader. We simply offset the object space vertex positions.
    //assumed declarations: attribute 0 is the cube vertex and MVP is the
    //combined modelview projection matrix set from the application
    layout(location = 0) in vec3 vVertex;
    uniform mat4 MVP;
    smooth out vec3 vUV;
    void main()
    {
      gl_Position = MVP*vec4(vVertex.xyz,1);
      vUV = vVertex + vec3(0.5);
    }
  5. In the fragment shader, use the camera position and the 3D texture coordinates to run a loop in the current viewing direction. Terminate the loop if the current sample position is outside the volume or the alpha value of the accumulated color is saturated.
      vec3 dataPos = vUV;
      vec3 geomDir = normalize((vUV-vec3(0.5)) - camPos); 
      vec3 dirStep = geomDir * step_size; 
      bool stop = false; 
      for (int i = 0; i < MAX_SAMPLES; i++) {
        // advance ray by step
        dataPos = dataPos + dirStep;
        // stop condition
        stop=dot(sign(dataPos-texMin),sign(texMax-dataPos)) < 3.0;
        if (stop) 
            break;
  6. Composite the current sample value obtained from the volume using an appropriate operator and finally return the composited color. (A sketch of the complete fragment shader, assembled from steps 5 and 6, follows this list.)
    float sample = texture(volume, dataPos).r;

    //front-to-back 'over' compositing of the current sample
    float prev_alpha = sample - (sample * vFragColor.a);
    vFragColor.rgb = prev_alpha * vec3(sample) + vFragColor.rgb;
    vFragColor.a += prev_alpha;
    //early ray termination once the accumulated alpha is saturated
    if (vFragColor.a > 0.99)
        break;
    }  //end of the ray marching loop
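For reference (see the note in step 6), the fragments from steps 5 and 6 fit together into a complete fragment shader roughly as follows. This is a sketch, not the recipe's exact source: the constant values are assumptions, step_size is assumed to be a scalar, and sample is renamed to density purely for readability.

    #version 330 core
    layout(location = 0) out vec4 vFragColor;  // accumulated (composited) color
    smooth in vec3 vUV;                        // 3D texture coordinates from the vertex shader

    uniform sampler3D volume;                  // volume dataset
    uniform vec3 camPos;                       // camera position in object space

    // assumed constants; the actual values live in the recipe's shader source
    const int   MAX_SAMPLES = 300;
    const float step_size   = 1.0/512.0;
    const vec3  texMin      = vec3(0.0);
    const vec3  texMax      = vec3(1.0);

    void main()
    {
        vec3 dataPos = vUV;
        vec3 geomDir = normalize((vUV - vec3(0.5)) - camPos);
        vec3 dirStep = geomDir * step_size;
        vFragColor   = vec4(0.0);

        for (int i = 0; i < MAX_SAMPLES; i++) {
            dataPos = dataPos + dirStep;
            // stop when the sample position leaves the unit texture-coordinate cube
            if (dot(sign(dataPos - texMin), sign(texMax - dataPos)) < 3.0)
                break;

            float density = texture(volume, dataPos).r;
            // front-to-back (over) compositing
            float prev_alpha = density - (density * vFragColor.a);
            vFragColor.rgb += prev_alpha * vec3(density);
            vFragColor.a   += prev_alpha;

            // early ray termination once the accumulated alpha is (almost) saturated
            if (vFragColor.a > 0.99)
                break;
        }
    }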

How it works…

There are two parts to this recipe. The first is the generation and rendering of the cube geometry used to invoke the fragment shader. Note that we could also use a full-screen quad for this, as we did in the GPU ray tracing recipe, but for volumetric datasets it is more efficient to render just the unit cube. The second part is carried out in the shaders.

In the vertex shader (Chapter7/GPURaycasting/shaders/raycast.vert), the 3D texture coordinates are estimated from the per-vertex position of the unit cube. Since the unit cube is centred at the origin, we add vec3(0.5) to the position to bring the 3D texture coordinates into the 0 to 1 range.

Next, the fragment shader uses the 3D texture coordinates and the eye position to estimate the ray marching directions. A loop is then run in the fragment shader (as shown in step 5) that marches through the volume dataset and composites the obtained sample values using the current compositing scheme. This process is continued until the ray exits the volume or the alpha value of the accumulated color is fully saturated.

The texMin and texMax constants hold the minimum and maximum 3D texture coordinates, which for our unit volume are vec3(0) and vec3(1) respectively. To determine whether the current sample position is still inside the volume, we use the sign function. The sign function returns -1 if the value is less than 0, 0 if the value is equal to 0, and 1 if the value is greater than 0. While dataPos lies strictly inside the volume, both sign(dataPos-texMin) and sign(texMax-dataPos) evaluate to vec3(1,1,1), and the dot product of these two vectors is exactly 3. As soon as any component of dataPos leaves the 0 to 1 range, one of the sign components drops to 0 or -1, the dot product falls below 3, and the ray marching loop is terminated.

There's more…

The demo application for this recipe renders the engine dataset using single-pass GPU ray casting. The camera position can be changed using the left mouse button, and the view can be zoomed in and out.
See also

We will now implement pseudo-isosurface rendering in single-pass GPU ray casting. While much of the setup is the same as in the previous single-pass GPU ray casting recipe, the difference lies in the compositing step of the ray casting fragment shader. In this shader, we will try to find the given isosurface. If it is found, we estimate the normal at the sampling point and carry out the lighting calculation for the isosurface.

In pseudocode, pseudo-isosurface rendering in single-pass ray casting can be outlined as follows:
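The pseudocode listing itself is not included in this excerpt; based on the steps below, the fragment-shader logic can be outlined roughly as follows:

    for each step along the ray through the volume:
        advance the sample position; stop if it leaves the volume
        read the density at the current and the next sample position
        if the isovalue lies between the two densities (a zero crossing):
            refine the intersection point by bisection
            estimate the normal from the density gradient at that point
            shade the fragment using the Phong illumination model and stop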

Getting ready

The code for this

How to do it…

Let us start the recipe by following these simple steps:

  1. Load the volume data from file into a 3D OpenGL texture as in the previous recipe. Refer to the LoadVolume function in Chapter7/GPURaycasting/main.cpp for details.
  2. Set up a vertex array object and a vertex buffer object to render a unit cube as in the previous recipe.
  3. In the render function, set the ray casting vertex and fragment shaders (Chapter7/GPURaycasting/shaders/raycasting (vert,frag)) and then render the unit cube.
    glEnable(GL_BLEND);
    glBindVertexArray(cubeVAOID);
    shader.Use(); 
    glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP));
    glUniform3fv(shader("camPos"), 1, &(camPos.x));
    glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, 0);
    shader.UnUse(); 
    glDisable(GL_BLEND);
  4. From the vertex shader, in addition to the clip-space position, output the 3D texture coordinates for lookup in the fragment shader. We simply offset the object space vertex positions as follows:
    smooth out vec3 vUV;
    void main()
    {
      gl_Position = MVP*vec4(vVertex.xyz,1);
      vUV = vVertex + vec3(0.5);
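      // the unit cube spans [-0.5, 0.5] in object space, so offsetting by 0.5 yields 3D texture coordinates in [0, 1]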
    }
  5. In the fragment shader, use the camera position and the 3D texture coordinates to run a loop in the current viewing direction. The loop is terminated if the current sample position is outside the volume or the alpha value of the accumulated color is saturated.
      vec3 dataPos = vUV;
      vec3 geomDir = normalize((vUV-vec3(0.5)) - camPos); 
      vec3 dirStep = geomDir * step_size; 
      bool stop = false; 
      for (int i = 0; i < MAX_SAMPLES; i++) {
        // advance ray by step
        dataPos = dataPos + dirStep;
        // stop condition
        stop=dot(sign(dataPos-texMin),sign(texMax-dataPos)) < 3.0;
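        // sign() is +1 only while dataPos lies strictly inside [texMin, texMax] on that axis,
        // so the dot product equals 3 inside the volume and drops below 3 once the ray exits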
        if (stop)
            break;
    
  6. For isosurface estimation, we take two sample values to find the zero crossing of the isofunction inside the volume dataset. If there is a zero crossing, we find the exact intersection point using bisection based refinement. Finally, we use the Phong illumination model to shade the isosurface assuming that the light is located at the camera position.
        float sample  = texture(volume, dataPos).r;
        float sample2 = texture(volume, dataPos + dirStep).r;
        if((sample - isoValue) < 0.0 && (sample2 - isoValue) >= 0.0) {
          vec3 xN = dataPos;
          vec3 xF = dataPos + dirStep;
          vec3 tc = Bisection(xN, xF, isoValue);
          vec3 N = GetGradient(tc);
          vec3 V = -geomDir;
          vec3 L = V;
          vFragColor = PhongLighting(L, N, V, 250, vec3(0.5));
          break;
        }
      }

The Bisection function is defined as follows:

The Bisection function takes the two sample positions between which the given isovalue lies. It then runs a loop. In each step, it calculates the midpoint of the two sample points and compares the density value at the midpoint with the given isovalue. If it is less, the left sample point is moved to the mid position; otherwise, the right sample point is moved. This quickly reduces the search space. The process is repeated and, finally, the midpoint between the left and right sample points is returned. The GetGradient function estimates the gradient of the volume density using a central finite difference approximation.
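Based on this description, a minimal GLSL sketch of the two helpers could look like the following. This is not the book's exact listing; the number of refinement iterations and the texel-size computation are assumptions, while the volume sampler is the same one used in the ray casting fragment shader above:

    uniform sampler3D volume;   // same volume sampler used in the ray casting shader

    vec3 GetGradient(vec3 uvw) {
      // central finite differences, one texel apart along each axis
      vec3 t = 1.0 / vec3(textureSize(volume, 0));
      float dx = texture(volume, uvw + vec3(t.x,0,0)).r - texture(volume, uvw - vec3(t.x,0,0)).r;
      float dy = texture(volume, uvw + vec3(0,t.y,0)).r - texture(volume, uvw - vec3(0,t.y,0)).r;
      float dz = texture(volume, uvw + vec3(0,0,t.z)).r - texture(volume, uvw - vec3(0,0,t.z)).r;
      return normalize(vec3(dx, dy, dz));
    }

    vec3 Bisection(vec3 left, vec3 right, float iso) {
      for(int i = 0; i < 4; i++) {          // a handful of refinement steps
        vec3 mid = (left + right) * 0.5;
        if(texture(volume, mid).r < iso)
          left = mid;                       // crossing lies in the right half
        else
          right = mid;
      }
      return (left + right) * 0.5;
    }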


How it works…

While the bulk of the code is similar to the single-pass GPU ray casting recipe, there is a major difference in the ray marching loop. In the case of isosurface rendering, we do not use compositing. Instead, we find the zero crossing of the volume dataset's isofunction by sampling two consecutive positions along the ray and checking whether the isovalue lies between them; the intersection is then refined with the Bisection function and shaded using the Phong illumination model.

There's more…

The demo application implementing this recipe shows the engine dataset rendered using the pseudo-isosurface rendering mode. When run, the output is as shown in the following screenshot:

See also

Advanced Illumination Techniques for GPU-based Volume Rendering, SIGGRAPH 2008 course notes, Available online at

In this recipe, we will implement splatting on the GPU. The splatting algorithm converts the voxel representation into splats by convolving them with a Gaussian kernel. The Gaussian smoothing kernel reduces high frequencies and smooths out edges, giving a smoother rendered output.


Getting ready

The code for this recipe is in the Chapter7/Splatting directory.

How to do it…

Let us start this recipe by following these simple steps:

  1. Load the 3D volume data and store it into an array.
    std::ifstream infile(filename.c_str(), std::ios_base::binary); 
    if(infile.good()) {
    pVolume = new GLubyte[XDIM*YDIM*ZDIM];
    infile.read(reinterpret_cast<char*>(pVolume), XDIM*YDIM*ZDIM*sizeof(GLubyte));
    infile.close();
    return true;
    } else {
    return false;
    }
  2. Depending on the sampling box size, run three loops to iterate through the entire volume voxel by voxel.
    vertices.clear(); 
    int dx = XDIM/X_SAMPLING_DIST;
    int dy = YDIM/Y_SAMPLING_DIST;
    int dz = ZDIM/Z_SAMPLING_DIST;
    scale = glm::vec3(dx,dy,dz); 
    for(int z=0;z<ZDIM;z+=dz) {
    for(int y=0;y<YDIM;y+=dy) {
    for(int x=0;x<XDIM;x+=dx) {
    SampleVoxel(x,y,z);
    }
    }
    }

    The SampleVoxel function is defined in the VolumeSplatter class as follows:

  3. In each sampling step, estimate the volume density values at the current voxel. If the value is greater than the given isovalue, store the voxel position and normal into a vertex array.
       GLubyte data = SampleVolume(x, y, z);
       if(data > isoValue) {
         Vertex v;
         v.pos.x = x;
         v.pos.y = y;
         v.pos.z = z;
         v.normal = GetNormal(x, y, z);
         v.pos *= invDim;
         vertices.push_back(v);
       }

    The SampleVolume function takes the given sampling point and returns the nearest voxel density. It is defined in the VolumeSplatter class as follows:

    GLubyte VolumeSplatter::SampleVolume(const int x, const int y, const int z) {
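      // flatten (x, y, z) into a linear index and clamp it to the valid range so that
      // out-of-bounds sampling positions do not read outside the volume array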
        int index = (x+(y*XDIM)) + z*(XDIM*YDIM); 
      if(index<0)
         index = 0;
      if(index >= XDIM*YDIM*ZDIM)
         index = (XDIM*YDIM*ZDIM)-1;
      return pVolume[index];
    }
  4. After the sampling step, pass the generated vertices to a vertex array object (VAO) containing a vertex buffer object (VBO).
       glGenVertexArrays(1, &volumeSplatterVAO);
       glGenBuffers(1, &volumeSplatterVBO);   
       glBindVertexArray(volumeSplatterVAO);
       glBindBuffer (GL_ARRAY_BUFFER, volumeSplatterVBO);
       glBufferData (GL_ARRAY_BUFFER, splatter->GetTotalVertices()   
       *sizeof(Vertex), splatter->GetVertexPointer(), GL_STATIC_DRAW);
      glEnableVertexAttribArray(0);
       glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,sizeof(Vertex),
       0); 
       glEnableVertexAttribArray(1);
       glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE,sizeof(Vertex),  
       (const GLvoid*) offsetof(Vertex, normal));
  5. Set up two FBOs for offscreen rendering. The first FBO (filterFBOID) is used for Gaussian smoothing.
      glGenFramebuffers(1,&filterFBOID);
      glBindFramebuffer(GL_FRAMEBUFFER,filterFBOID);
      glGenTextures(2, blurTexID);
      for(int i=0;i<2;i++) {
        glActiveTexture(GL_TEXTURE1+i);
        glBindTexture(GL_TEXTURE_2D, blurTexID[i]);
         //set texture parameters  	       
          glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA32F,IMAGE_WIDTH,
          IMAGE_HEIGHT,0,GL_RGBA,GL_FLOAT,NULL);
          glFramebufferTexture2D(GL_FRAMEBUFFER,   
          GL_COLOR_ATTACHMENT0+i,GL_TEXTURE_2D,blurTexID[i],0); 
      }
      GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
      if(status == GL_FRAMEBUFFER_COMPLETE) {
        cout<<"Filtering FBO setup successful."<<endl;
      } else {
        cout<<"Problem in Filtering FBO setup."<<endl;
      }
  6. The second FBO (fboID) is used to render the scene so that the smoothing operation can be applied on the rendered output from the first pass. Add a render buffer object to this FBO to enable depth testing.
       glGenFramebuffers(1,&fboID);
       glGenRenderbuffers(1, &rboID);
       glGenTextures(1, &texID);
       glBindFramebuffer(GL_FRAMEBUFFER,fboID);
       glBindRenderbuffer(GL_RENDERBUFFER, rboID);
       glActiveTexture(GL_TEXTURE0);
       glBindTexture(GL_TEXTURE_2D, texID); 
       //set texture parameters
       glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA32F,IMAGE_WIDTH,
       IMAGE_HEIGHT,0,GL_RGBA,GL_FLOAT,NULL);
       glFramebufferTexture2D(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,
       GL_TEXTURE_2D, texID, 0);
       glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
       GL_RENDERBUFFER, rboID);
       glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32,  
       IMAGE_WIDTH, IMAGE_HEIGHT);
       status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
       if(status == GL_FRAMEBUFFER_COMPLETE) {
          cout<<"Offscreen rendering FBO setup successful."<<endl;
       } else {
          cout<<"Problem in offscreen rendering FBO setup."<<endl;
       }
  7. In the render function, first render the point splats to a texture using the first FBO (fboID).
       glBindFramebuffer(GL_FRAMEBUFFER,fboID); 	 
      glViewport(0,0, IMAGE_WIDTH, IMAGE_HEIGHT);  
      glDrawBuffer(GL_COLOR_ATTACHMENT0); 
        glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
        glm::mat4 T = glm::translate(glm::mat4(1), 
       glm::vec3(-0.5,-0.5,-0.5));
      glBindVertexArray(volumeSplatterVAO);
      shader.Use(); 
      glUniformMatrix4fv(shader("MV"), 1, GL_FALSE,   
       glm::value_ptr(MV*T));
      glUniformMatrix3fv(shader("N"), 1, GL_FALSE,   
       glm::value_ptr(glm::inverseTranspose(glm::mat3(MV*T))));
      glUniformMatrix4fv(shader("P"), 1, GL_FALSE, 
       glm::value_ptr(P));
      glDrawArrays(GL_POINTS, 0, splatter->GetTotalVertices());
      shader.UnUse();   

    The splatting vertex shader (Chapter7/Splatting/shaders/splatShader.vert) is defined as follows. It calculates the eye space normal. The splat size is calculated using the volume dimension and the sampling voxel size. This is then written to the gl_PointSize variable in the vertex shader.
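    The original listing is not reproduced in this excerpt; a minimal sketch along the lines of that description might look as follows. The MV, P, and N uniform names match the render code above, while the splatSize uniform and the exact depth-based scaling are assumptions:

    #version 330 core
    layout(location = 0) in vec3 vVertex;
    layout(location = 1) in vec3 vNormal;
    uniform mat4 MV, P;
    uniform mat3 N;
    uniform float splatSize;     // assumed: derived from the volume dimension and sampling voxel size
    smooth out vec3 outNormal;
    void main() {
      outNormal = normalize(N * vNormal);       // eye space normal
      vec4 eyePos = MV * vec4(vVertex, 1.0);
      gl_Position  = P * eyePos;
      gl_PointSize = splatSize / -eyePos.z;     // splat shrinks with eye space depth
    }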

    The splatting fragment shader (Chapter7/Splatting/shaders/splatShader.frag) is defined as follows:
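    Again only as a sketch consistent with the description in the How it works… section below (a circular cut-out using gl_PointCoord, then simple diffuse and specular shading with the eye space normal); the light direction and the specular exponent are assumptions:

    #version 330 core
    smooth in vec3 outNormal;
    layout(location = 0) out vec4 vFragColor;
    void main() {
      // gl_PointCoord runs from 0 to 1 across the point sprite;
      // discard fragments outside the circular splat radius
      vec2 p = gl_PointCoord * 2.0 - 1.0;
      if(dot(p, p) > 1.0)
        discard;
      vec3 N = normalize(outNormal);
      vec3 L = vec3(0.0, 0.0, 1.0);             // assumed headlight direction in eye space
      vec3 V = vec3(0.0, 0.0, 1.0);
      float diffuse  = max(dot(N, L), 0.0);
      float specular = pow(max(dot(reflect(-L, N), V), 0.0), 64.0);
      vFragColor = vec4(vec3(0.9) * diffuse + vec3(specular), 1.0);
    }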

  8. Next, set the filtering FBO and first apply the vertical and then the horizontal Gaussian smoothing pass by drawing a full-screen quad as was done in the Variance shadow mapping recipe in Chapter 4, Lights and Shadows.
      glBindVertexArray(quadVAOID); 
      glBindFramebuffer(GL_FRAMEBUFFER, filterFBOID);    
      glDrawBuffer(GL_COLOR_ATTACHMENT0);
      gaussianV_shader.Use();  
      glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
       glDrawBuffer(GL_COLOR_ATTACHMENT1); 
      gaussianH_shader.Use();  
      glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
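    The fragment shaders behind gaussianV_shader and gaussianH_shader are not listed in this recipe. As an illustration only, a vertical-pass blur fragment shader of this kind might look like the following sketch; the kernel width, the weights, the textureMap sampler name, and the vUV input from the full-screen quad are assumptions:

      #version 330 core
      smooth in vec2 vUV;
      uniform sampler2D textureMap;             // assumed name of the input color texture
      layout(location = 0) out vec4 vFragColor;
      const float weights[5] = float[](0.0545, 0.2442, 0.4026, 0.2442, 0.0545);
      void main() {
        vec2 delta = vec2(0.0, 1.0 / float(textureSize(textureMap, 0).y));
        vec4 color = vec4(0.0);
        for(int i = -2; i <= 2; i++)
          color += weights[i + 2] * texture(textureMap, vUV + float(i) * delta);
        vFragColor = color;
        // the horizontal pass is identical except that delta steps along x
      }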
  9. Unbind the filtering FBO, restore the default draw buffer and render the filtered output on the screen.
       glBindFramebuffer(GL_FRAMEBUFFER,0);  
      glDrawBuffer(GL_BACK_LEFT);
      glViewport(0,0,WIDTH, HEIGHT);   
      quadShader.Use();  
      glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0); 
      quadShader.UnUse();
      glBindVertexArray(0);


How it works…

The splatting algorithm works by rendering the voxels of the volume data as Gaussian blobs and projecting them on the screen. To achieve this, we first estimate the candidate voxels by traversing the entire volume dataset voxel by voxel for the given isovalue. If a voxel qualifies, we store its normal and position into a vertex array. For convenience, we wrap all of this functionality into the VolumeSplatter class.

We first create a new instance of the VolumeSplatter class. Next, we set the volume dimensions and then load the volume data. Next, we specify the target isovalue and the number of sampling voxels to use. Finally, we call the VolumeSplatter::SplatVolume function that traverses the whole volume voxel by voxel.

The splatter stores the vertices and normals into a vertex array. We then generate the vertex buffer object from this array. In the rendering function, we first draw the entire splat dataset in a single-pass into an offscreen render target. This is done so that we can filter it using separable Gaussian convolution filters. Finally, the filtered output is displayed on a full-screen quad.

The splatting vertex shader (Chapter7/Splatting/shaders/splatShader.vert) calculates the point size on screen based on the depth of the splat. In order to achieve this in the vertex shader, we have to enable the GL_VERTEX_PROGRAM_POINT_SIZE state, that is, call glEnable(GL_VERTEX_PROGRAM_POINT_SIZE). The vertex shader also outputs the splat normals in eye space.

Since the default point sprite renders as a screen-aligned quad, in the fragment shader (Chapter7/Splatting/shaders/splatShader.frag), we discard all fragments that are outside the radius of the splat at the current splat position.

Finally, we estimate the diffuse and specular components and output the current fragment color using the eye space normal of the splat.

There's more…

The demo application implementing this recipe renders the engine dataset as in the previous recipes, as shown in the following screenshot. Note the output appears blurred due to Gaussian smoothing of the splats.

This recipe gave us an overview of the splatting algorithm. Our brute force approach in this recipe was to iterate through all of the voxels. For large datasets, we would have to employ an acceleration structure so that we do not have to visit every voxel.

See also

The Qsplat project:

In this recipe, we will add classification to the 3D texture slicing presented earlier. We will generate a lookup table that assigns specific colors to specific densities. This is accomplished by generating a 1D texture that is looked up in the fragment shader with the current volume density. The returned color is then used as the color of the current fragment. Apart from the setup of the transfer function data, all the other content remains the same as in the 3D texture slicing recipe. Note that the classification method is not limited to 3D texture slicing; it can be applied to any volume rendering algorithm.

Getting ready

The code for this recipe is in the Chapter7/3DTextureSlicingClassification directory.

How to do it…

Let us start this recipe by following these simple steps:

  1. Load the volume data and set up the texture slicing as in the Implementing volume rendering using 3D texture slicing recipe.
  2. Create a 1D texture that will be our transfer function texture for color lookup. We create a set of color values and then interpolate them on the fly. Refer to LoadTransferFunction in Chapter7/3DTextureSlicingClassification/main.cpp.
    float pData[256][4];
    int indices[9];
    for(int i=0;i<9;i++) {
      int index = i*28;
      pData[index][0] = jet_values[i].x;
      pData[index][1] = jet_values[i].y;
      pData[index][2] = jet_values[i].z;
      pData[index][3] = jet_values[i].w;
      indices[i] = index;
    }
    for(int j=0;j<9-1;j++) {
      float dDataR = (pData[indices[j+1]][0] - pData[indices[j]][0]);
      float dDataG = (pData[indices[j+1]][1] - pData[indices[j]][1]);
      float dDataB = (pData[indices[j+1]][2] - pData[indices[j]][2]);
      float dDataA = (pData[indices[j+1]][3] - pData[indices[j]][3]);
      int dIndex = indices[j+1]-indices[j];
      float dDataIncR = dDataR/float(dIndex);
      float dDataIncG = dDataG/float(dIndex);
      float dDataIncB = dDataB/float(dIndex);
      float dDataIncA = dDataA/float(dIndex);
      for(int i=indices[j]+1;i<indices[j+1];i++) {
        pData[i][0] = (pData[i-1][0] + dDataIncR);
        pData[i][1] = (pData[i-1][1] + dDataIncG);
        pData[i][2] = (pData[i-1][2] + dDataIncB);
        pData[i][3] = (pData[i-1][3] + dDataIncA);
      }
    }
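    // note: with 9 control points at a stride of 28, only entries 0-224 of pData are written;
    // entries 225-255 are left untouched here and should be initialized (for example, to the
    // last control color) before the array is uploaded as a texture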
  3. Generate a 1D OpenGL texture from the interpolated lookup data from the previous step. We bind this texture to texture unit 1 (GL_TEXTURE1).
    glGenTextures(1, &tfTexID);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_1D, tfTexID);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_REPEAT);  
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage1D(GL_TEXTURE_1D,0,GL_RGBA,256,0,GL_RGBA,GL_FLOAT,pData);
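    // the lookup table has 256 entries, one RGBA color for every possible 8-bit density value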
  4. In the fragment shader, add a new sampler for the transfer function lookup table. Since we now have two textures, we bind the volume data to texture unit 0 (GL_TEXTURE0) and the transfer function texture to texture unit 1 (GL_TEXTURE1).
    shader.LoadFromFile(GL_VERTEX_SHADER, "shaders/textureSlicer.vert");
    shader.LoadFromFile(GL_FRAGMENT_SHADER, "shaders/textureSlicer.frag");
    shader.CreateAndLinkProgram();
    shader.Use();
      shader.AddAttribute("vVertex");
      shader.AddUniform("MVP");  
      shader.AddUniform("volume");
      shader.AddUniform("lut");
      glUniform1i(shader("volume"),0);
      glUniform1i(shader("lut"),1);
    shader.UnUse();
  5. Finally, in the fragment shader, instead of directly returning the current volume density value, we look up the density value in the transfer function and return the appropriate color value. Refer to Chapter7/3DTextureSlicingClassification/shaders/textureSlicer.frag for details.
    uniform sampler3D volume;
    uniform sampler1D lut;
    void main(void) {
      vFragColor = texture(lut, texture(volume, vUV).r);
    }
    
How it works…

There are two textures in use in this recipe: the 3D volume texture bound to texture unit 0 and the 1D transfer function texture bound to texture unit 1. Instead of writing the volume density directly, the fragment shader uses the sampled density value as the lookup coordinate into the transfer function texture and outputs the returned color, so different density ranges are classified with different colors.
There's more…

The demo application for this recipe renders the engine dataset as in the 3D texture slicing recipe but now the rendered output is colored using a transfer function. The output from the demo application is displayed in the following screenshot:

See also

  • Chapter 4, Transfer Functions, and Chapter 10, Transfer Functions Reloaded, in Real-time Volume Graphics, AK Peters/CRC Press.

In the Implementing pseudo-isosurface rendering in single-pass GPU ray casting recipe, we implemented pseudo-isosurface rendering in single-pass GPU ray casting. However, these isosurfaces are not composed of triangles, so it is not possible for us to easily address individual isosurface regions and mark different areas in the volume dataset. This can be achieved by carrying out an isosurface extraction process for a specific isovalue by traversing the entire volume dataset. This method is known as the Marching Tetrahedra (MT) algorithm. The algorithm traverses the whole volume dataset and tries to fit a specific polygon based on the intersection criterion. This process is repeated for the whole volume and, finally, we obtain the polygonal mesh from the volume dataset.


Getting ready

The code for this recipe is in the Chapter7/MarchingTetrahedra directory. For convenience, we will wrap the Marching Tetrahedra algorithm in a simple class called TetrahedraMarcher.


How to do it…

Let us start this recipe by following these simple steps:

  1. Load the 3D volume data and store it into an array:
    std::ifstream infile(filename.c_str(), std::ios_base::binary);
    if(infile.good()) {
      pVolume = new GLubyte[XDIM*YDIM*ZDIM];
      infile.read(reinterpret_cast<char*>(pVolume), XDIM*YDIM*ZDIM*sizeof(GLubyte));
      infile.close();
      return true;
    } else {
      return false;
    }
  2. Depending on the sampling box size, run three loops to iterate through the entire volume voxel by voxel:
    vertices.clear();
    int dx = XDIM/X_SAMPLING_DIST;
    int dy = YDIM/Y_SAMPLING_DIST;
    int dz = ZDIM/Z_SAMPLING_DIST;
    glm::vec3 scale = glm::vec3(dx, dy, dz);
    for(int z = 0; z < ZDIM; z += dz) {
      for(int y = 0; y < YDIM; y += dy) {
        for(int x = 0; x < XDIM; x += dx) {
          SampleVoxel(x, y, z, scale);
        }
      }
    }
  3. In each sampling step, estimate the volume density values at the eight corners of the sampling box:
    GLubyte cubeCornerValues[8];
    for( i = 0; i < 8; i++) {
      cubeCornerValues[i] = SampleVolume(
       x + (int)(a2fVertexOffset[i][0] *scale.x),
       y + (int)(a2fVertexOffset[i][1]*scale.y),	
       z + (int)(a2fVertexOffset[i][2]*scale.z));
    }
  4. Estimate an edge flag value to identify the matching tetrahedra case based on the given isovalue:
    int flagIndex = 0;
    for( i= 0; i<8; i++) {
      if(cubeCornerValues[i]<= isoValue) 
           flagIndex |= 1<<i;
    } 
       edgeFlags = aiCubeEdgeFlags[flagIndex];
  5. Use the lookup tables (a2iEdgeConnection) to find the correct edges for the case and then use the offset table (a2fVertexOffset) to find the edge vertices and normals. These tables are defined in the Tables.h header in the Chapter7/MarchingTetrahedra/ directory.
       for(i = 0; i < 12; i++) {
          if(edgeFlags & (1<<i)) {
             float offset = GetOffset(cubeCornerValues[a2iEdgeConnection[i][0]],
                                      cubeCornerValues[a2iEdgeConnection[i][1]]);
             edgeVertices[i].x = x + (a2fVertexOffset[a2iEdgeConnection[i][0]][0] + offset * a2fEdgeDirection[i][0])*scale.x;
             edgeVertices[i].y = y + (a2fVertexOffset[a2iEdgeConnection[i][0]][1] + offset * a2fEdgeDirection[i][1])*scale.y;
             edgeVertices[i].z = z + (a2fVertexOffset[a2iEdgeConnection[i][0]][2] + offset * a2fEdgeDirection[i][2])*scale.z;
             edgeNormals[i] = GetNormal((int)edgeVertices[i].x, (int)edgeVertices[i].y, (int)edgeVertices[i].z);
          }
       }
  6. Finally, loop through the triangle connectivity table to connect the correct vertices and normals for the given case.
       for(i = 0; i < 5; i++) {
          if(a2iTriangleConnectionTable[flagIndex][3*i] < 0)
             break;
          for(int j = 0; j < 3; j++) {
             int vertex = a2iTriangleConnectionTable[flagIndex][3*i+j];
             Vertex v;
             v.normal = edgeNormals[vertex];
             v.pos = edgeVertices[vertex]*invDim;
             vertices.push_back(v);
          }
       }
  7. After the marcher is finished (a sketch of how the marcher is driven appears right after these steps), we pass the generated vertices to a vertex array object containing a vertex buffer object:
    glGenVertexArrays(1, &volumeMarcherVAO);
    glGenBuffers(1, &volumeMarcherVBO);
    glBindVertexArray(volumeMarcherVAO);
    glBindBuffer(GL_ARRAY_BUFFER, volumeMarcherVBO);
    glBufferData(GL_ARRAY_BUFFER, marcher->GetTotalVertices()*sizeof(Vertex), marcher->GetVertexPointer(), GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*)offsetof(Vertex, normal));
  8. To render the generated geometry, we bind the Marching Tetrahedra VAO, bind our shader, and then render the triangles. For this recipe, we output the per-vertex normals as colors.
       glBindVertexArray(volumeMarcherVAO);
       shader.Use();
       glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP*T));
       glDrawArrays(GL_TRIANGLES, 0, marcher->GetTotalVertices());
       shader.UnUse();
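The steps above are implemented inside the TetrahedraMarcher class, but the recipe does not show how the marcher itself is driven before step 7 uploads its output. The following is a minimal sketch of such driver code; the method names used here (SetVolumeDimensions, SetNumSamplingVoxels, SetIsosurfaceValue, LoadVolume, and MarchVolume) are assumptions for illustration and may differ from the actual Chapter7/MarchingTetrahedra sources.

    // Hypothetical driver code -- the method names are assumptions, not the exact API of the recipe.
    TetrahedraMarcher* marcher = new TetrahedraMarcher();
    marcher->SetVolumeDimensions(XDIM, YDIM, ZDIM);          // size of the raw dataset
    marcher->SetNumSamplingVoxels(X_SAMPLING_DIST, Y_SAMPLING_DIST, Z_SAMPLING_DIST);
    marcher->SetIsosurfaceValue(isoValue);                    // density threshold to extract
    if(marcher->LoadVolume(volume_file)) {                    // step 1: read the raw densities
      marcher->MarchVolume();                                 // steps 2 to 6: fill the vertex list
    }
    // marcher->GetTotalVertices() and marcher->GetVertexPointer() are now valid for step 7.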


How it works…

For convenience, we wrap the entire recipe in a reusable class called TetrahedraMarcher. Marching Tetrahedra, as the name suggests, marches a sampling box throughout the whole volume dataset. To give a bird's eye view, there are several cases to consider based on the distribution of density values at the vertices of the sampling cube. Based on the sampling values at the eight corners and the given isovalue, a flag index is generated. This flag index gives us the edge flags by a lookup into a table that is predefined for all possible edge configurations of the marching tetrahedron. The edge connection table is then used to find the appropriate offset for the corner values of the sampling box. These offsets are then used to obtain the edge vertices and normals for the given tetrahedral case. Once the edge vertices and normals are estimated, the triangle connectivity is obtained based on the given flag index.

Now we will detail the steps of the Marching Tetrahedra algorithm. First, the flag index is obtained by iterating through all eight sampling cube vertices and comparing the density value at each vertex location with the given isovalue, as shown in step 4. The flag index is then used to retrieve the edge flags from the lookup table (aiCubeEdgeFlags).

The vertices and normals for the given index are stored in a local array by looking up the edge connection table (a2iEdgeConnection), as in step 5.

Finally, the triangle connectivity table (a2iTriangleConnectionTable) is used to obtain the proper vertex and normal ordering, and these attributes are then stored in a vector.

After the Marching Tetrahedra code is processed, we store the generated vertices and normals in a buffer object. In the rendering code, we bind the appropriate vertex array object, bind our shader, and then draw the triangles. The fragment shader for this recipe outputs the per-vertex normals as colors.
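The steps above also rely on three small helper functions that are not listed in this recipe: SampleVolume, GetOffset, and GetNormal. The following is a minimal sketch of what they might look like, assuming the densities are kept in the pVolume array of GLubyte loaded in step 1; the exact implementations in the TetrahedraMarcher class may differ.

    // Sketch only -- assumes pVolume, XDIM, YDIM, ZDIM and isoValue from the steps above.
    GLubyte SampleVolume(int x, int y, int z) {
      // clamp to the volume bounds and fetch the density at the given voxel
      x = std::min(std::max(x, 0), XDIM - 1);
      y = std::min(std::max(y, 0), YDIM - 1);
      z = std::min(std::max(z, 0), ZDIM - 1);
      return pVolume[(z * YDIM + y) * XDIM + x];
    }

    // fractional position (0..1) along an edge where the isosurface crosses it
    float GetOffset(GLubyte v1, GLubyte v2) {
      float delta = (float)v2 - (float)v1;
      return (delta == 0.0f) ? 0.5f : (isoValue - (float)v1) / delta;
    }

    // central-difference gradient of the density field, used as the surface normal
    glm::vec3 GetNormal(int x, int y, int z) {
      glm::vec3 N;
      N.x = (SampleVolume(x - 1, y, z) - SampleVolume(x + 1, y, z)) / 255.0f;
      N.y = (SampleVolume(x, y - 1, z) - SampleVolume(x, y + 1, z)) / 255.0f;
      N.z = (SampleVolume(x, y, z - 1) - SampleVolume(x, y, z + 1)) / 255.0f;
      return glm::length(N) > 0.0f ? glm::normalize(N) : N;
    }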

There's more…

The demo application for this recipe renders the engine dataset as shown in the following screenshot. The fragment shader renders the isosurface normals as color.

Pressing the W key

See also

Polygonising a scalar field, Paul Bourke:

In this recipe, we will implement volumetric lighting using the half-angle slicing technique. Instead of slicing the volume perpendicular to the viewing direction, the slicing direction is set between the light and the view direction vectors. This enables us to simulate light absorption slice by slice.


Getting ready

The code for this recipe is in the Chapter7/HalfAngleSlicing directory. As the name suggests, this recipe builds on the 3D texture slicing code.


How to do it…

Let us start this recipe by following these simple steps:

  1. Set up offscreen rendering using one FBO with two attachments: one for offscreen rendering of the light buffer and the other for offscreen rendering of the eye buffer.
    glGenFramebuffers(1, &lightFBOID);
    glGenTextures(1, &lightBufferID);
    glGenTextures(1, &eyeBufferID);
    glActiveTexture(GL_TEXTURE2);
    lightBufferID = CreateTexture(IMAGE_WIDTH, IMAGE_HEIGHT, GL_RGBA16F, GL_RGBA);
    eyeBufferID = CreateTexture(IMAGE_WIDTH, IMAGE_HEIGHT, GL_RGBA16F, GL_RGBA);
    glBindFramebuffer(GL_FRAMEBUFFER, lightFBOID);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, lightBufferID, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, eyeBufferID, 0);
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if(status == GL_FRAMEBUFFER_COMPLETE)
      printf("Light FBO setup successful !!! \n");
    else
      printf("Problem with Light FBO setup");

    The CreateTexture function wraps the texture creation and texture format specification into a single call for convenience; a sketch of what it might look like is given after these steps.

  2. Load the volume data, as in the 3D texture slicing recipe:
       std::ifstream infile(volume_file.c_str(), std::ios_base::binary);
       if(infile.good()) {
         GLubyte* pData = new GLubyte[XDIM*YDIM*ZDIM];
         infile.read(reinterpret_cast<char*>(pData), XDIM*YDIM*ZDIM*sizeof(GLubyte));
         infile.close();
         glGenTextures(1, &textureID);
         glActiveTexture(GL_TEXTURE0);
         glBindTexture(GL_TEXTURE_3D, textureID);
         // set the texture parameters
         glTexImage3D(GL_TEXTURE_3D, 0, GL_RED, XDIM, YDIM, ZDIM, 0, GL_RED, GL_UNSIGNED_BYTE, pData);
         GL_CHECK_ERRORS
         glGenerateMipmap(GL_TEXTURE_3D);
         return true;
       } else {
         return false;
       }
  3. Similar to the shadow mapping technique, calculate the shadow matrix by multiplying the model-view and projection matrices of the light with the bias matrix:
       MV_L = glm::lookAt(lightPosOS, glm::vec3(0,0,0), glm::vec3(0,1,0));
       P_L  = glm::perspective(45.0f, 1.0f, 1.0f, 200.0f);
       B    = glm::scale(glm::translate(glm::mat4(1), glm::vec3(0.5,0.5,0.5)), glm::vec3(0.5,0.5,0.5));
       BP   = B*P_L;
       S    = BP*MV_L;
  4. In the rendering code, calculate the half vector by using the view direction vector and the light direction vector:
       viewVec = -glm::vec3(MV[0][2], MV[1][2], MV[2][2]);
       lightVec = glm::normalize(lightPosOS);
       bIsViewInverted = glm::dot(viewVec, lightVec) < 0;
       halfVec = glm::normalize((bIsViewInverted ? -viewVec : viewVec) + lightVec);
  5. Slice the volume data as in the 3D texture slicing recipe. The only difference here is that instead of slicing the volume data in the direction perpendicular to the view, we slice it in the direction which is halfway between the view and the light vectors.
      float max_dist = glm::dot(halfVec, vertexList[0]);
      float min_dist = max_dist;
      int max_index = 0;
      int count = 0;
      for(int i=1;i<8;i++) {
        float dist = glm::dot(halfVec, vertexList[i]);
        if(dist > max_dist) {
          max_dist = dist;
          max_index = i;
        }
        if(dist<min_dist)
          min_dist = dist;
      }
       //rest of the SliceVolume function as in 3D texture slicing but   
       //viewVec is changed to halfVec
  6. In the rendering code, bind the FBO and then first clear the light buffer with the white color (1,1,1,1) and the eye buffer with the color (0,0,0,0):
       glBindFramebuffer(GL_FRAMEBUFFER, lightFBOID);	
       glDrawBuffer(attachIDs[0]);
       glClearColor(1,1,1,1);
       glClear(GL_COLOR_BUFFER_BIT );
       glDrawBuffer(attachIDs[1]);
       glClearColor(0,0,0,0);
       glClear(GL_COLOR_BUFFER_BIT  ); 
  7. Bind the volume VAO and then run a loop for the total number of slices. In each iteration, first render the slice in the eye buffer but bind the light buffer as the texture. Next, render the slice in the light buffer:
       glBindVertexArray(volumeVAO);
       for(int i = 0; i < num_slices; i++) {
          shaderShadow.Use();
          glUniformMatrix4fv(shaderShadow("MVP"), 1, GL_FALSE, glm::value_ptr(MVP));
          glUniformMatrix4fv(shaderShadow("S"), 1, GL_FALSE, glm::value_ptr(S));
          glBindTexture(GL_TEXTURE_2D, lightBufferID);
          DrawSliceFromEyePointOfView(i);

          shader.Use();
          glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(P_L*MV_L));
          DrawSliceFromLightPointOfView(i);
       }
  8. For the eye buffer rendering step, swap the blend function based on whether the viewer is viewing in the direction of the light or opposite to it:
       void DrawSliceFromEyePointOfView(const int i) { 
          glDrawBuffer(attachIDs[1]);
          glViewport(0, 0, IMAGE_WIDTH, IMAGE_HEIGHT); 
          if(bIsViewInverted) { 
             glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
          } else {
             glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
          }
          glDrawArrays(GL_TRIANGLES, 12*i, 12);  
       }
  9. For the light buffer, we simply blend the slices using the conventional "over" blending:
       void DrawSliceFromLightPointOfView(const int i) {  
          glDrawBuffer(attachIDs[0]);
          glViewport(0, 0, IMAGE_WIDTH, IMAGE_HEIGHT);  
          glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
          glDrawArrays(GL_TRIANGLES, 12*i, 12);   
       }
  10. Finally, unbind the FBO and restore the default draw buffer. Next, set the viewport to the entire screen and then render the eye buffer on screen using a shader:
       glBindVertexArray(0);
       glBindFramebuffer(GL_FRAMEBUFFER, 0);
       glDrawBuffer(GL_BACK_LEFT);
       glViewport(0,0,WIDTH, HEIGHT);
       glBindTexture(GL_TEXTURE_2D, eyeBufferID); 
       glBindVertexArray(quadVAOID);
       quadShader.Use();  
       glDrawArrays(GL_TRIANGLES, 0, 6); 
       quadShader.UnUse();
       glBindVertexArray(0);
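As mentioned in step 1, the CreateTexture function wraps the texture creation and format specification into a single call. The following is a minimal sketch of such a helper; the filtering and wrap modes chosen here are reasonable defaults for an offscreen render target and are assumptions rather than the exact code shipped with the recipe.

    // Sketch of a render-target texture helper -- the parameter choices are assumptions.
    GLuint CreateTexture(const int w, const int h, GLenum internalFormat, GLenum format) {
      GLuint texid;
      glGenTextures(1, &texid);
      glBindTexture(GL_TEXTURE_2D, texid);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
      // allocate storage only; the attachment is filled by rendering into the FBO
      glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, w, h, 0, format, GL_FLOAT, NULL);
      return texid;
    }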


How it works…

As the name suggests, this technique renders the volume by accumulating the intermediate results into two separate buffers while slicing the volume halfway between the light and the view vectors. When the scene is rendered from the point of view of the eye, the light buffer is used as a texture to find out whether the current fragment is in shadow or not. This is carried out in the fragment shader by looking up the light buffer using the shadow matrix, as in the shadow mapping algorithm. In this step, the blending equation is swapped based on the direction of view with respect to the light direction vector. If the view is inverted, the blending direction is swapped from front-to-back to back-to-front using glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE). On the other hand, if the view direction is not inverted, the blend function is set as glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA). Note that we have not used the conventional over compositing here, since we premultiply the color with its alpha in the fragment shader (see Chapter7/HalfAngleSlicing/shaders/slicerShadow.frag).

In the next step, the scene is rendered from the point of view of the light. This time, the normal over compositing is used. This ensures that the light contributions accumulate with each other, similar to how light behaves in normal circumstances. In this case, we use the same fragment shader as was used in the 3D texture slicing recipe (see Chapter7/HalfAngleSlicing/shaders/textureSlicer.frag).

There's more…

The demo application implementing this recipe renders the scene, as shown in the following screenshot, similar to the previous recipes. The light source position can be changed using the right mouse button. We can see the shadow changing dynamically for the scene. Attenuation of light is also controlled by setting a shader uniform. This is the reason why we can observe a bluish tinge in the output image.
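The uniform name and value in the snippet below are assumptions used only for illustration; the point is simply that the per-slice light colour/attenuation is an application-side parameter that can be tuned from the rendering code.

    // Hypothetical example -- "color" is an assumed uniform name, not taken from the recipe's shaders.
    shaderShadow.Use();
    glUniform3f(shaderShadow("color"), 0.5f, 0.5f, 1.0f);  // attenuate red/green more than blue -> bluish tint
    shaderShadow.UnUse();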

Note that we cannot see the black halo around the volume dataset that was evident in earlier recipes. The reason for this is the if condition used in the fragment shader: we only perform these calculations if the current density value is greater than 0.1, which essentially removes air and other low-density values.

See also

Chapter 39, Volume Rendering Techniques, in GPU Gems 1. Available online at