OpenGL – Build High Performance Graphics

By William Lo, David Wolff, Muhammad Mobeen Movania, Raymond Chun Hing Lo

Overview of this book

OpenGL is a fully functional, cross-platform API widely adopted across the industry for 2D and 3D graphics development. It is mainly used for games and graphical applications, but is equally popular in a wide variety of additional sectors. This practical course will help you gain proficiency with OpenGL and build compelling graphics for your games and applications.

OpenGL Development Cookbook – This is your go-to guide for learning graphical programming techniques and implementing 3D animations with OpenGL. This straight-talking Cookbook is perfect for intermediate C++ programmers who want to exploit the full potential of OpenGL, and it is full of practical techniques for implementing amazing computer graphics and visualizations.

OpenGL 4.0 Shading Language Cookbook, Second Edition – With version 4, the language has been further refined to provide programmers with greater power and flexibility, with new stages such as tessellation and compute. This practical guide takes you from the fundamentals of programming with modern GLSL and OpenGL through to advanced techniques.

OpenGL Data Visualization Cookbook – This easy-to-follow, comprehensive Cookbook shows readers how to create a variety of real-time, interactive data visualization tools. Each topic is explained in a step-by-step format. A range of hot topics is included, such as stereoscopic 3D rendering and data visualization on mobile/wearable platforms. By the end of this guide, you will be equipped with the essential skills to develop a wide range of impressive OpenGL-based applications for your unique data visualization needs.

This Learning Path combines some of the best that Packt has to offer in one complete, curated package. It includes content from the following Packt products:

  • OpenGL Development Cookbook by Muhammad Mobeen Movania
  • OpenGL 4.0 Shading Language Cookbook, Second Edition by David Wolff
  • OpenGL Data Visualization Cookbook by Raymond C. H. Lo and William C. Y. Lo

Terrain rendering

Several demos and applications require the rendering of terrains. This recipe shows how to implement terrain generation in modern OpenGL. A height map, which contains the displacement information, is loaded using the SOIL image loading library. A 2D grid is then generated at the required terrain resolution, and the displacement stored in the height map is applied to the grid in the vertex shader. The obtained displacement value is usually scaled to increase or decrease the displacement as desired.

Getting started

For the terrain, the 2D grid geometry is first generated depending on the required terrain resolution. The steps to generate such geometry were previously covered in the Doing a ripple mesh deformer using vertex shader recipe in Chapter 1, Introduction to Modern OpenGL. The code for this recipe is contained in the Chapter5/TerrainLoading directory.

How to do it…

Let us start our recipe by following these simple steps:

  1. Load the height map texture using the SOIL image loading library and generate an OpenGL texture from it. The texture filtering is set to GL_NEAREST as we want to obtain the exact values from the height map. If we had changed this to GL_LINEAR, we would get interpolated values. Since the terrain height map is not tiled, we set the texture wrap mode to GL_CLAMP.
      int texture_width = 0, texture_height = 0, channels=0;
      GLubyte* pData = SOIL_load_image(filename.c_str(),&texture_width, &texture_height, &channels, SOIL_LOAD_L);
      //vertically flip the image data
      int i, j;
      for( j = 0; j*2 < texture_height; ++j )
      {
        int index1 = j * texture_width ;
        int index2 = (texture_height - 1 - j) * texture_width ;
        for( i = texture_width ; i > 0; --i )
        {
          GLubyte temp = pData[index1];
          pData[index1] = pData[index2];
          pData[index2] = temp;
          ++index1;
          ++index2;
        }
      }
      glGenTextures(1, &heightMapTextureID);
      glActiveTexture(GL_TEXTURE0);
      glBindTexture(GL_TEXTURE_2D, heightMapTextureID);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, texture_width, texture_height, 0, GL_RED, GL_UNSIGNED_BYTE, pData);
      SOIL_free_image_data(pData);
  2. Set up the terrain geometry by generating a set of points in the XZ plane. The TERRAIN_WIDTH parameter controls the total number of vertices along the X axis, whereas the TERRAIN_DEPTH parameter controls the total number of vertices along the Z axis.
      for( j=0;j<TERRAIN_DEPTH;j++) {
        for( i=0;i<TERRAIN_WIDTH;i++) {
          vertices[count]=glm::vec3((float(i)/(TERRAIN_WIDTH-1)), 0, (float(j)/(TERRAIN_DEPTH-1)));
          count++;
        }
      }
  3. Set up the vertex shader that displaces the 2D terrain mesh. Refer to Chapter5/TerrainLoading/shaders/shader.vert for details. The height value is obtained from the height map. This value is then added to the current vertex position, and the result is multiplied by the combined modelview projection (MVP) matrix to get the clip-space position. The HALF_TERRAIN_SIZE uniform contains half of the total number of vertices along both the X and Z axes, that is, HALF_TERRAIN_SIZE = ivec2(TERRAIN_WIDTH/2, TERRAIN_DEPTH/2). Similarly, the scale uniform scales the height read from the height map. The half_scale and HALF_TERRAIN_SIZE uniforms are used to position the mesh at the origin.
    #version 330 core 
    layout (location=0) in vec3 vVertex;
    uniform mat4 MVP;
    uniform ivec2 HALF_TERRAIN_SIZE;
    uniform sampler2D heightMapTexture;
    uniform float scale;
    uniform float half_scale;
    void main()
    {
      float height = texture(heightMapTexture, vVertex.xz).r*scale - half_scale;
      vec2 pos  = (vVertex.xz*2.0-1)*HALF_TERRAIN_SIZE;
      gl_Position = MVP*vec4(pos.x, height, pos.y, 1);
    }
  4. Load the shaders and obtain the corresponding uniform and attribute locations. Also set, at initialization, the values of the uniforms that never change during the lifetime of the application.
      shader.LoadFromFile(GL_VERTEX_SHADER, "shaders/shader.vert");
      shader.LoadFromFile(GL_FRAGMENT_SHADER, "shaders/shader.frag");
      shader.CreateAndLinkProgram();
      shader.Use();
        shader.AddAttribute("vVertex");
        shader.AddUniform("heightMapTexture");
        shader.AddUniform("scale");
        shader.AddUniform("half_scale");
        shader.AddUniform("HALF_TERRAIN_SIZE");
        shader.AddUniform("MVP");
        glUniform1i(shader("heightMapTexture"), 0);
        glUniform2i(shader("HALF_TERRAIN_SIZE"), TERRAIN_WIDTH>>1, TERRAIN_DEPTH>>1);
        glUniform1f(shader("scale"), scale);
        glUniform1f(shader("half_scale"), half_scale);
      shader.UnUse();
  5. In the rendering code, bind the shader and render the terrain by passing the combined modelview projection (MVP) matrix to the shader as a uniform. The TOTAL_INDICES triangle indices come from an index buffer; a sketch of how such indices can be generated follows these steps.
    shader.Use();
        glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP));
        glDrawElements(GL_TRIANGLES,TOTAL_INDICES, GL_UNSIGNED_INT, 0);
    shader.UnUse();
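The generation of the triangle indices referenced in step 5 is not shown in the excerpt above. The following is a minimal sketch, assuming an indices array of TOTAL_INDICES = (TERRAIN_WIDTH-1)*(TERRAIN_DEPTH-1)*2*3 elements of type GLuint (matching the GL_UNSIGNED_INT type passed to glDrawElements), that emits two triangles per grid cell:

    GLuint* id = &indices[0];
    for(int j=0;j<TERRAIN_DEPTH-1;j++) {
      for(int i=0;i<TERRAIN_WIDTH-1;i++) {
        int i0 = j * TERRAIN_WIDTH + i; //top-left vertex of the cell
        int i1 = i0 + 1;                //top-right vertex
        int i2 = i0 + TERRAIN_WIDTH;    //bottom-left vertex
        int i3 = i2 + 1;                //bottom-right vertex
        *id++ = i0; *id++ = i2; *id++ = i1; //first triangle of the cell
        *id++ = i1; *id++ = i2; *id++ = i3; //second triangle of the cell
      }
    }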

There's more…

The method we have presented in this recipe uses vertex displacement to generate a terrain from a height map. There are several tools available that can help with terrain height map generation. One of them is Terragen (planetside.co.uk). Another useful tool is World Machine (http://world-machine.com/). A general source of information on terrains is the Virtual Terrain Project (http://vterrain.org/).

We can also use procedural methods to generate terrains, such as fractal terrain generation. Noise functions can also be helpful in generating terrains; a small sketch of this idea follows.
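As an illustration of the noise-based approach (an assumption for illustration, not code from this recipe), the height values could be generated procedurally with fractional Brownian motion built on glm::perlin, which ships with GLM in <glm/gtc/noise.hpp>:

    #include <glm/gtc/noise.hpp>
    //Sum several octaves of Perlin noise; each octave contributes half the
    //amplitude at twice the frequency of the previous one.
    float fbmHeight(const glm::vec2& p, const int octaves) {
      float height = 0.0f, amplitude = 0.5f, frequency = 1.0f;
      for(int i=0;i<octaves;i++) {
        height += amplitude * glm::perlin(p * frequency);
        amplitude *= 0.5f;
        frequency *= 2.0f;
      }
      return height; //roughly in [-1,1]; rescale as needed to fill a height map
    }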

See also

To know more about implementing terrains, you can check the following:

  • Focus on 3D Terrain Programming, by Trent Polack, Premier Press, 2002
  • Chapter 7, Terrain Level of Detail, in Level of Detail for 3D Graphics, by David Luebke, Morgan Kaufmann Publishers, 2003

We will now create a model loader and renderer for the Autodesk® 3ds model format, which is a simple yet efficient binary format for storing digital assets.

Getting started

The code for this recipe is contained in the Chapter5/3DsViewer folder. This recipe uses the Drawing a 2D image in a window using a fragment shader and the SOIL image loading library recipe from Chapter 1, Introduction to Modern OpenGL, to load the 3ds mesh file's textures.

How to do it…

The steps required to implement a 3ds file viewer are as follows:

  1. Create an instance of the C3dsLoader class. Then call the C3dsLoader::Load3DS function passing it the name of the mesh file and a set of vectors to store the submeshes, vertices, normals, uvs, indices, and materials.
    if(!loader.Load3DS(mesh_filename.c_str( ), meshes, vertices, normals, uvs, faces, indices, materials)) {
      cout<<"Cannot load the 3ds mesh"<<endl;
      exit(EXIT_FAILURE);
    }
  2. After the mesh is loaded, use the mesh's material list to load the material textures into OpenGL texture objects.
      for(size_t k=0;k<materials.size();k++) {
        for(size_t m=0;m< materials[k]->textureMaps.size();m++)
        {
          GLuint id = 0;
          glGenTextures(1, &id);
          glBindTexture(GL_TEXTURE_2D, id);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
          int texture_width = 0, texture_height = 0, channels=0;
          const string& filename = materials[k]->textureMaps[m]->filename;
          std::string full_filename = mesh_path;
          full_filename.append(filename);
          GLubyte* pData = SOIL_load_image(full_filename.c_str(), &texture_width, &texture_height, &channels, SOIL_LOAD_AUTO);
          if(pData == NULL) {
            cerr<<"Cannot load image: "<<full_filename.c_str()<<endl;
            exit(EXIT_FAILURE);
          }
          //Flip the image on Y axis
          int i,j;
          for( j = 0; j*2 < texture_height; ++j ) {
            int index1 = j * texture_width * channels;
            int index2 = (texture_height - 1 - j) * texture_width * channels;
            for( i = texture_width * channels; i > 0; --i ){
              GLubyte temp = pData[index1];
              pData[index1] = pData[index2];
              pData[index2] = temp;
              ++index1;
              ++index2;
            }
          }
          GLenum format = GL_RGBA;
          switch(channels) {
            case 1: format = GL_RED; break; //single-channel (grayscale) images
            case 2: format = GL_RG;  break; //GL_RG is the valid two-channel pixel format
            case 3: format = GL_RGB; break;
            case 4: format = GL_RGBA; break;
          }
          glTexImage2D(GL_TEXTURE_2D, 0, format, texture_width, texture_height, 0, format, GL_UNSIGNED_BYTE, pData);
          SOIL_free_image_data(pData);
          textureMaps[filename]=id;
        }
      }
  3. Pass the loaded per-vertex attributes, that is, positions (vertices), texture coordinates (uvs), per-vertex normals (normals), and triangle indices (indices), to GPU memory by allocating separate buffer objects for each attribute. Note that, for easier handling of buffer objects, we bind a single vertex array object (vaoID) first.
        glBindVertexArray(vaoID);
        glBindBuffer (GL_ARRAY_BUFFER, vboVerticesID);
        glBufferData (GL_ARRAY_BUFFER, sizeof(glm::vec3)* vertices.size(), &(vertices[0].x), GL_STATIC_DRAW);
        glEnableVertexAttribArray(shader["vVertex"]);
        glVertexAttribPointer(shader["vVertex"], 3, GL_FLOAT, GL_FALSE,0,0);
        glBindBuffer (GL_ARRAY_BUFFER, vboUVsID);
        glBufferData (GL_ARRAY_BUFFER, sizeof(glm::vec2)*uvs.size(), &(uvs[0].x), GL_STATIC_DRAW);
        glEnableVertexAttribArray(shader["vUV"]);
        glVertexAttribPointer(shader["vUV"],2,GL_FLOAT,GL_FALSE,0, 0);
        glBindBuffer (GL_ARRAY_BUFFER, vboNormalsID);
        glBufferData (GL_ARRAY_BUFFER, sizeof(glm::vec3)* normals.size(), &(normals[0].x),  GL_STATIC_DRAW);
        glEnableVertexAttribArray(shader["vNormal"]);
        glVertexAttribPointer(shader["vNormal"], 3, GL_FLOAT, GL_FALSE, 0, 0);
  4. If we have only a single material in the 3ds file, we store the face indices in GL_ELEMENT_ARRAY_BUFFER so that we can render the whole mesh in a single call. However, if we have more than one material, we bind the appropriate submeshes separately. The glBufferData call allocates the GPU memory but does not initialize it. To initialize the buffer object memory, we can use the glMapBuffer function to obtain a direct pointer to the GPU memory and write through it. An alternative to glMapBuffer is glBufferSubData, which modifies the GPU memory by copying contents from a CPU buffer (a sketch of this alternative follows these steps).
        if(materials.size()==1) {
          glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboIndicesID);
          glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLushort)*3*faces.size(), 0, GL_STATIC_DRAW);
          GLushort* pIndices = static_cast<GLushort*>(glMapBuffer(GL_ELEMENT_ARRAY_BUFFER, GL_WRITE_ONLY));
          for(size_t i=0;i<faces.size();i++) {
            *(pIndices++)=faces[i].a;
            *(pIndices++)=faces[i].b;
            *(pIndices++)=faces[i].c;
          }
          glUnmapBuffer(GL_ELEMENT_ARRAY_BUFFER);
        }
  5. Set up the vertex shader to output the clip-space position as well as the per-vertex texture coordinates. The texture coordinates are interpolated by the rasterizer and passed to the fragment shader through the output attribute vUVout.
    #version 330 core
    
    layout(location = 0) in vec3 vVertex;
    layout(location = 1) in vec3 vNormal;
    layout(location = 2) in vec2 vUV;
    
    smooth out vec2 vUVout;
    
    uniform mat4 P; 
    uniform mat4 MV;
    uniform mat3 N;
    
    smooth out vec3 vEyeSpaceNormal;
    smooth out vec3 vEyeSpacePosition;
    
    void main()
    {
      vUVout=vUV;
      vEyeSpacePosition = (MV*vec4(vVertex,1)).xyz;
      vEyeSpaceNormal = N*vNormal;
      gl_Position = P*vec4(vEyeSpacePosition,1);
    }
  6. Set up the fragment shader, which looks up the texture map sampler with the interpolated texture coordinates from the rasterizer. Depending on whether the submesh has a texture, we linearly interpolate between the texture map color and the diffuse color of the material using the GLSL mix function.
    #version 330 core  
    uniform sampler2D textureMap;
    uniform float hasTexture;
    uniform vec3 light_position;//light position in object space
    uniform mat4 MV;
    smooth in vec3 vEyeSpaceNormal;
    smooth in vec3 vEyeSpacePosition;
    smooth in vec2 vUVout;
    
    layout(location=0) out vec4 vFragColor;
    
    const float k0 = 1.0;//constant attenuation
    const float k1 = 0.0;//linear attenuation
    const float k2 = 0.0;//quadratic attenuation
    
    void main()
    {
      vec4 vEyeSpaceLightPosition = (MV*vec4(light_position,1));
      vec3 L = (vEyeSpaceLightPosition.xyz-vEyeSpacePosition);
      float d = length(L);
      L = normalize(L);
      float diffuse = max(0, dot(vEyeSpaceNormal, L));
      float attenuationAmount = 1.0/(k0 + (k1*d) + (k2*d*d));
      diffuse *= attenuationAmount;
    
      vFragColor = diffuse*mix(vec4(1),texture(textureMap, vUVout), hasTexture);
    }
  7. The rendering code binds the shader program, sets the shader uniforms, and then renders the mesh, depending on how many materials the 3ds mesh has. If the mesh has only a single material, it is drawn in a single call to glDrawElements using the indices attached to the GL_ELEMENT_ARRAY_BUFFER binding point.
      glBindVertexArray(vaoID); {
        shader.Use();
        glUniformMatrix4fv(shader("MV"), 1, GL_FALSE, glm::value_ptr(MV));
        glUniformMatrix3fv(shader("N"), 1, GL_FALSE, glm::value_ptr(glm::inverseTranspose(glm::mat3(MV))));
        glUniformMatrix4fv(shader("P"), 1, GL_FALSE, glm::value_ptr(P));
        glUniform3fv(shader("light_position"),1, &(lightPosOS.x));
        if(materials.size()==1) {
          GLint whichID[1];
          glGetIntegerv(GL_TEXTURE_BINDING_2D, whichID);
          if(textureMaps.size()>0) {
            //bind the texture only if it is not bound already
            if(whichID[0] != textureMaps[materials[0]->textureMaps[0]->filename]) {
              glBindTexture(GL_TEXTURE_2D, textureMaps[materials[0]->textureMaps[0]->filename]);
            }
            glUniform1f(shader("hasTexture"),1.0);
          } else {
            glUniform1f(shader("hasTexture"),0.0);
            glUniform3fv(shader("diffuse_color"),1, materials[0]->diffuse);
          }
          glDrawElements(GL_TRIANGLES, meshes[0]->faces.size()*3, GL_UNSIGNED_SHORT, 0);
        }
  8. If the mesh contains more than one material, we iterate through the material list and bind the texture map if the material has one; otherwise, we use the diffuse color stored in the material for the submesh. Finally, we pass the sub_indices array stored in the material to the glDrawElements function to draw only those indices.
    else {
      for(size_t i=0;i<materials.size();i++) {
        GLint whichID[1];
        glGetIntegerv(GL_TEXTURE_BINDING_2D, whichID);
        if(materials[i]->textureMaps.size()>0) {
          if(whichID[0] != textureMaps[materials[i]->textureMaps[0]->filename]) {
            glBindTexture(GL_TEXTURE_2D, textureMaps[materials[i]->textureMaps[0]->filename]);
          }
          glUniform1f(shader("hasTexture"),1.0);
        } else {
          glUniform1f(shader("hasTexture"),0.0);
        }
        glUniform3fv(shader("diffuse_color"),1, materials[i]->diffuse);
        glDrawElements(GL_TRIANGLES, materials[i]->sub_indices.size(), GL_UNSIGNED_SHORT, &(materials[i]->sub_indices[0]));
      }
    }
    shader.UnUse();
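As mentioned in step 4, glBufferSubData is an alternative to glMapBuffer. The following minimal sketch (assuming the same faces vector and vboIndicesID buffer as above) fills the index buffer by copying from a CPU-side array instead of writing through a mapped pointer:

    std::vector<GLushort> cpuIndices;
    cpuIndices.reserve(faces.size()*3);
    for(size_t i=0;i<faces.size();i++) {
      cpuIndices.push_back(faces[i].a);
      cpuIndices.push_back(faces[i].b);
      cpuIndices.push_back(faces[i].c);
    }
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboIndicesID);
    //copy the CPU buffer into the previously allocated GPU buffer
    glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, sizeof(GLushort)*cpuIndices.size(), &cpuIndices[0]);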

How it works…

The main component of this recipe is the C3dsLoader::Load3DS function. The 3ds file is a binary file organized into a collection of chunks. Typically, a reader first reads two bytes from the file, which store the chunk ID. The next four bytes store the chunk length in bytes. We continue reading chunks and their lengths, storing the data into our vectors/variables as appropriate, until there are no more chunks and we pass the end of the file. The 3ds specifications detail all of the chunks and their lengths, as well as their subchunks.

Note that if there is a subchunk we are interested in, we need to read the parent chunk as well, so that the file pointer moves to the appropriate offset for the required chunk. The loader first finds the total size of the 3ds mesh file in bytes. Then it runs a while loop that checks whether the current file pointer is within the file's size. If it is, it reads the next two bytes (the chunk's ID) and the following four bytes (the chunk's length).

Then we start a big switch case over all of the required chunk IDs and read the bytes from the respective chunks as needed; a minimal sketch of this loop follows.
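The following is a sketch of such a chunk-reading loop, not the verbatim C3dsLoader code. The chunk IDs shown (0x4D4D for the main chunk, 0x3D3D for the 3D editor chunk, 0x4000 for an object chunk) are standard 3ds chunk IDs, and fp is assumed to be an open FILE*:

    fseek(fp, 0, SEEK_END);
    long fileSize = ftell(fp);
    fseek(fp, 0, SEEK_SET);
    while(ftell(fp) < fileSize) {
      unsigned short chunkID = 0;
      unsigned int chunkLength = 0;
      fread(&chunkID, 2, 1, fp);     //first two bytes: the chunk ID
      fread(&chunkLength, 4, 1, fp); //next four bytes: the chunk length in bytes
      switch(chunkID) {
        case 0x4D4D: break; //main chunk: descend into its subchunks
        case 0x3D3D: break; //3D editor chunk: descend into its subchunks
        case 0x4000:        //object chunk: read the name byte-by-byte here
          break;
        default:            //skip the chunks we do not handle
          fseek(fp, chunkLength - 6, SEEK_CUR);
      }
    }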

All names (object name, material name, or texture map name) have to be read byte-by-byte until the null terminator character (\0) is found. For reading vertices, we first read two bytes that store the total number of vertices (N). Two bytes means that the maximum number of vertices one mesh can store is 65536. Then, we read the whole chunk of bytes, that is, sizeof(glm::vec3)*N, directly into our mesh's vertices.
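A minimal sketch of that vertex read (assuming a pMesh->vertices vector of glm::vec3; 0x4110 is the standard vertex-list chunk ID):

    unsigned short total_vertices = 0; //inside the 0x4110 chunk
    fread(&total_vertices, 2, 1, fp);  //two bytes: the vertex count N
    pMesh->vertices.resize(total_vertices);
    //read sizeof(glm::vec3)*N bytes straight into the vector's storage
    fread(&pMesh->vertices[0], sizeof(glm::vec3), total_vertices, fp);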

Similar to how the vertex information is stored, the face information stores the three unsigned short indices of the triangle and another unsigned short containing the face flags. Therefore, for a mesh with M triangles, we have to read 4*M unsigned shorts from the file. We store the four unsigned shorts in a Face struct for convenience and then read the contents.
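A sketch of the corresponding face read, under the same assumptions (0x4120 is the standard face-list chunk ID):

    struct Face {
      unsigned short a, b, c; //the three triangle vertex indices
      unsigned short flags;   //the face flags
    };
    unsigned short total_faces = 0; //inside the 0x4120 chunk
    fread(&total_faces, 2, 1, fp);  //two bytes: the face count M
    pMesh->faces.resize(total_faces);
    fread(&pMesh->faces[0], sizeof(Face), total_faces, fp); //4*M unsigned shorts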

The code for reading the material face IDs and texture coordinates follows the same pattern: the total number of entries is read first, and then the appropriate number of bytes is read from the file. Note that if a chunk has a color chunk (as for chunk IDs 0xa010 to 0xa030), the color information is contained in a subchunk (IDs 0x0010 to 0x0013), depending on the data type used to store the color information in the parent chunk.
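For illustration (an assumption, not the book's verbatim code), reading such a color subchunk might look as follows; 0x0010 (COLOR_F, three floats) and 0x0011 (COLOR_24, three bytes) are standard 3ds subchunk IDs:

    unsigned short colorChunkID = 0;
    unsigned int colorChunkLength = 0;
    fread(&colorChunkID, 2, 1, fp);
    fread(&colorChunkLength, 4, 1, fp);
    if(colorChunkID == 0x0010) {        //COLOR_F: three floats in [0,1]
      float rgb[3];
      fread(rgb, sizeof(float), 3, fp);
    } else if(colorChunkID == 0x0011) { //COLOR_24: three bytes in [0,255]
      unsigned char rgb[3];
      fread(rgb, 1, 3, fp);
    }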

After the mesh and material information is loaded, we generate global vertices, uvs, and indices vectors. This makes it easy for us to render the submeshes in the render function.

Note that the 3ds format does not store per-vertex normals explicitly. It only stores smoothing groups, which tell us which faces share normals. After we have the vertex positions and face information, we can generate per-vertex normals by averaging the per-face normals. This is carried out in the 3ds.cpp file: we first allocate space for the per-vertex normals, then estimate each face's normal using the cross product of two of its edges, and finally add the face normal to each of the face's vertices and normalize.
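The following sketch mirrors that description (it is not the verbatim 3ds.cpp code):

    normals.resize(vertices.size(), glm::vec3(0));
    for(size_t i=0;i<faces.size();i++) {
      const Face& f = faces[i];
      glm::vec3 e1 = vertices[f.b] - vertices[f.a];
      glm::vec3 e2 = vertices[f.c] - vertices[f.a];
      glm::vec3 faceNormal = glm::cross(e1, e2); //area-weighted face normal
      normals[f.a] += faceNormal; //accumulate on each vertex of the face
      normals[f.b] += faceNormal;
      normals[f.c] += faceNormal;
    }
    for(size_t i=0;i<normals.size();i++)
      normals[i] = glm::normalize(normals[i]); //average by normalizing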

Once we have all of the per-vertex attributes and face information, we use them to group the triangles by material: we loop through all of the materials and expand each material's face IDs into the three vertex indices that make up the face, as sketched below.
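A sketch of that grouping, where face_ids is a hypothetical per-material list of face indices and sub_indices is the flat index list that the render function passes to glDrawElements:

    for(size_t m=0;m<materials.size();m++) {
      Material* pMat = materials[m];
      for(size_t f=0;f<pMat->face_ids.size();f++) {
        const Face& face = faces[pMat->face_ids[f]];
        pMat->sub_indices.push_back(face.a);
        pMat->sub_indices.push_back(face.b);
        pMat->sub_indices.push_back(face.c);
      }
    }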

Getting started

The code for this recipe is contained in the Chapter5/3DsViewer folder. This recipe will be using the Drawing a 2D image in a window using a fragment shader and the SOIL image loading library recipe from

Chapter 1, Introduction to Modern OpenGL, for loading the 3ds mesh file's textures using the SOIL image loading library.

The steps required to implement a 3ds file viewer are as follows:

  1. Create an instance of the C3dsLoader class. Then call the C3dsLoader::Load3DS function passing it the name of the mesh file and a set of vectors to store the submeshes, vertices, normals, uvs, indices, and materials.
    if(!loader.Load3DS(mesh_filename.c_str( ), meshes, vertices, normals, uvs, faces, indices, materials)) {
      cout<<"Cannot load the 3ds mesh"<<endl;
      exit(EXIT_FAILURE);
    }
  2. After the mesh is loaded, use the mesh's material list to load the material textures into the OpenGL texture object.
      for(size_t k=0;k<materials.size();k++) {
        for(size_t m=0;m< materials[k]->textureMaps.size();m++)
        {
          GLuint id = 0;
          glGenTextures(1, &id);
          glBindTexture(GL_TEXTURE_2D, id);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
          int texture_width = 0, texture_height = 0, channels=0;
          const string& filename = materials[k]->textureMaps[m]->filename;
          std::string full_filename = mesh_path;
          full_filename.append(filename);
          GLubyte* pData = SOIL_load_image(full_filename.c_str(), &texture_width, &texture_height, &channels, SOIL_LOAD_AUTO);
          if(pData == NULL) {
            cerr<<"Cannot load image: "<<full_filename.c_str()<<endl;
              exit(EXIT_FAILURE);
          }
          //Flip the image on Y axis
          int i,j;
          for( j = 0; j*2 < texture_height; ++j ) {
            int index1 = j * texture_width * channels;
            int index2 = (texture_height - 1 - j) * texture_width * channels;
            for( i = texture_width * channels; i > 0; --i ){
              GLubyte temp = pData[index1];
              pData[index1] = pData[index2];
              pData[index2] = temp;
              ++index1;
              ++index2;
            }
          }
          GLenum format = GL_RGBA;
          switch(channels) {
            case 2: format = GL_RG32UI; break;
            case 3: format = GL_RGB; break;
            case 4: format = GL_RGBA; break;
          }
          glTexImage2D(GL_TEXTURE_2D, 0, format, texture_width, texture_height, 0, format, GL_UNSIGNED_BYTE, pData);
          SOIL_free_image_data(pData);
          textureMaps[filename]=id;
        }
      }
  3. Pass the loaded per-vertex attributes; that is, positions (vertices), texture coordinates (uvs), per-vertex normals (normals), and triangle indices (indices) to GPU memory by allocating separate buffer objects for each attribute. Note that for easier handling of buffer objects, we bind a single vertex array object (vaoID) first.
        glBindVertexArray(vaoID);
        glBindBuffer (GL_ARRAY_BUFFER, vboVerticesID);
        glBufferData (GL_ARRAY_BUFFER, sizeof(glm::vec3)* vertices.size(), &(vertices[0].x), GL_STATIC_DRAW);
        glEnableVertexAttribArray(shader["vVertex"]);
        glVertexAttribPointer(shader["vVertex"], 3, GL_FLOAT, GL_FALSE,0,0);
        glBindBuffer (GL_ARRAY_BUFFER, vboUVsID);
        glBufferData (GL_ARRAY_BUFFER, sizeof(glm::vec2)*uvs.size(), &(uvs[0].x), GL_STATIC_DRAW);
        glEnableVertexAttribArray(shader["vUV"]);
        glVertexAttribPointer(shader["vUV"],2,GL_FLOAT,GL_FALSE,0, 0);
        glBindBuffer (GL_ARRAY_BUFFER, vboNormalsID);
        glBufferData (GL_ARRAY_BUFFER, sizeof(glm::vec3)* normals.size(), &(normals[0].x),  GL_STATIC_DRAW);
        glEnableVertexAttribArray(shader["vNormal"]);
        glVertexAttribPointer(shader["vNormal"], 3, GL_FLOAT, GL_FALSE, 0, 0);
  4. If we have only a single material in the 3ds file, we store the face indices into GL_ELEMENT_ARRAY_BUFFER so that we can render the whole mesh in a single call. However, if we have more than one material, we bind the appropriate submeshes separately. The glBufferData call allocates the GPU memory, however, it is not initialized. In order to initialize the buffer object memory, we can use the glMapBuffer function to obtain a direct pointer to the GPU memory. Using this pointer, we can then write to the GPU memory. An alternative to using glMapBuffer is glBufferSubData which can modify the GPU memory by copying contents from a CPU buffer.
        if(materials.size()==1) {
          glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboIndicesID);
          glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLushort)* 
          3*faces.size(), 0, GL_STATIC_DRAW);
          GLushort* pIndices = static_cast<GLushort*>(glMapBuffer(GL_ELEMENT_ARRAY_BUFFER, GL_WRITE_ONLY));
          for(size_t i=0;i<faces.size();i++) {
            *(pIndices++)=faces[i].a;
            *(pIndices++)=faces[i].b;
            *(pIndices++)=faces[i].c;
          }
          glUnmapBuffer(GL_ELEMENT_ARRAY_BUFFER);
        }
  5. Set up the vertex shader to output the clip space position as well as the per-vertex texture coordinates. The texture coordinates are then interpolated by the rasterizer to the fragment shader using an output attribute vUVout.
    #version 330 core
    
    layout(location = 0) in vec3 vVertex;
    layout(location = 1) in vec3 vNormal;
    layout(location = 2) in vec2 vUV;
    
    smooth out vec2 vUVout;
    
    uniform mat4 P; 
    uniform mat4 MV;
    uniform mat3 N;
    
    smooth out vec3 vEyeSpaceNormal;
    smooth out vec3 vEyeSpacePosition;
    
    void main()
    {
      vUVout=vUV;
      vEyeSpacePosition = (MV*vec4(vVertex,1)).xyz;
      vEyeSpaceNormal = N*vNormal;
      gl_Position = P*vec4(vEyeSpacePosition,1);
    }
  6. Set up the fragment shader, which looks up the texture map sampler with the interpolated texture coordinates from the rasterizer. Depending on whether the submesh has a texture, we linearly interpolate between the texture map color and the diffused color of the material, using the GLSL mix function.
    #version 330 core  
    uniform sampler2D textureMap;
    uniform float hasTexture;
    uniform vec3 light_position;//light position in object space
    uniform mat4 MV;
    smooth in vec3 vEyeSpaceNormal;
    smooth in vec3 vEyeSpacePosition;
    smooth in vec2 vUVout;
    
    layout(location=0) out vec4 vFragColor;
    
    const float k0 = 1.0;//constant attenuation
    const float k1 = 0.0;//linear attenuation
    const float k2 = 0.0;//quadratic attenuation
    
    void main()
    {
      vec4 vEyeSpaceLightPosition = (MV*vec4(light_position,1));
      vec3 L = (vEyeSpaceLightPosition.xyz-vEyeSpacePosition);
      float d = length(L);
      L = normalize(L);
      float diffuse = max(0, dot(vEyeSpaceNormal, L));
      float attenuationAmount = 1.0/(k0 + (k1*d) + (k2*d*d));
      diffuse *= attenuationAmount;
    
      vFragColor = diffuse*mix(vec4(1),texture(textureMap, vUVout), hasTexture);
    }
  7. The rendering code binds the shader program, sets the shader uniforms, and then renders the mesh, depending on how many materials the 3ds mesh has. If the mesh has only a single material, it is drawn in a single call to glDrawElement by using the indices attached to the GL_ELEMENT_ARRAY_BUFFER binding point.
      glBindVertexArray(vaoID); {
        shader.Use();
        glUniformMatrix4fv(shader("MV"), 1, GL_FALSE, glm::value_ptr(MV));
        glUniformMatrix3fv(shader("N"), 1, GL_FALSE, glm::value_ptr(glm::inverseTranspose(glm::mat3(MV))));
        glUniformMatrix4fv(shader("P"), 1, GL_FALSE, glm::value_ptr(P));
        glUniform3fv(shader("light_position"),1, &(lightPosOS.x));
        if(materials.size()==1) {
          GLint whichID[1];
          glGetIntegerv(GL_TEXTURE_BINDING_2D, whichID);
          if(textureMaps.size()>0) {
            if(whichID[0] != textureMaps[materials[0]->textureMaps[0]->filename]) {
            glBindTexture(GL_TEXTURE_2D, textureMaps[materials[0]->textureMaps[0]->filename]);
            glUniform1f(shader("hasTexture"),1.0);
          }
        } else {
          glUniform1f(shader("hasTexture"),0.0);
          glUniform3fv(shader("diffuse_color"),1, materials[0]->diffuse);
        }
        glDrawElements(GL_TRIANGLES, meshes[0]->faces.size()*3, GL_UNSIGNED_SHORT, 0);
      }
  8. If the mesh contains more than one material, we iterate through the material list, and bind the texture map (if the material has one), otherwise we use the diffuse color stored in the material for the submesh. Finally, we pass the sub_indices array stored in the material to the glDrawElements function to load those indices only.
    else {
      for(size_t i=0;i<materials.size();i++) {
        GLint whichID[1];
        glGetIntegerv(GL_TEXTURE_BINDING_2D, whichID);
        if(materials[i]->textureMaps.size()>0) {
          if(whichID[0] != textureMaps[materials[i]->textureMaps[0]->filename]) {
            glBindTexture(GL_TEXTURE_2D, textureMaps[materials[i]->textureMaps[0]->filename]);
          }
          glUniform1f(shader("hasTexture"),1.0);
        } else {
          glUniform1f(shader("hasTexture"),0.0);
        }
        glUniform3fv(shader("diffuse_color"),1, materials[i]->diffuse);
        glDrawElements(GL_TRIANGLES, materials[i]->sub_indices.size(), GL_UNSIGNED_SHORT, &(materials[i]->sub_indices[0]));
      }
    }
    shader.UnUse();

The main component of this recipe is the C3dsLoader::Load3DS function. The 3ds file is a binary file which is organized into a collection of chunks. Typically, a reader reads the first two bytes from the file which are stored in the chunk ID. The next four bytes store the chunk length in bytes. We continue reading chunks, and their lengths, and then store data appropriately into our vectors/variables until there are no more chunks and we pass reading the end of file. The 3ds specifications detail all of the chunks and their lengths as well as subchunks, as shown in the following figure:

How it works…

Note that if there is a subchunk that we are interested in, we need to read the parent chunk as well, to move the file pointer to the appropriate offset in the file, for our required chunk. The loader first finds the total size of the 3ds mesh file in bytes. Then, it runs a while loop that checks to see if the current file pointer is within the file's size. If it is, it continues to read the first two bytes (the chunk's ID) and the next four bytes (the chunk's length).

Then we start a big switch case with all of the required chunk IDs and then read the bytes from the respective chunks as desired.

All names (object name, material name, or texture map name) have to be read byte-by-byte until the null terminator character (\0) is found. For reading vertices, we first read two bytes that store the total number of vertices (N). Two bytes means that the maximum number of vertices one mesh can store is 65536. Then, we read the whole chunk of bytes, that is, sizeof(glm::vec3)*N, directly into our mesh's vertices, shown as follows:

Similar to how the vertex information is stored, the face information stores the three unsigned short indices of the triangle and another unsigned short index containing the face flags. Therefore, for a mesh with M triangles, we have to read 4*M unsigned shorts from the file. We store the four unsigned shorts into a Face struct for convenience and then read the contents, as shown in the following code snippet:

The code for reading the material face IDs and texture coordinates follows in the same way as the total entries are first read and then the appropriate number of bytes are read from the file. Note that, if a chunk has a color chunk (as for chunk IDs: 0xa010 to 0xa030), the color information is contained in a subchunk (IDs: 0x0010 to 0x0013) depending on the data type used to store the color information in the parent chunk.

After the mesh and material information is loaded, we generate global vertices, uvs, and indices vectors. This makes it easy for us to render the submeshes in the render function.

Note that the 3ds format does not store the per-vertex normal explicitly. It only stores smoothing groups which tell us which faces have shared normals. After we have the vertex positions and face information, we can generate the per-vertex normals by averaging the per-face normals. This is carried out by using the following code snippet in the 3ds.cpp file. We first allocate space for the per-vertex normals. Then we estimate the face's normal by using the cross product of the two edges. Finally, we add the face normal to the appropriate vertex index and then normalize the normal.

Once we have all the per-vertex attributes and faces information, we use this to group the triangles by material. We loop through all of the materials and expand their face IDs to include the three vertex IDs and make the face.

How to do it…

The steps

required to implement a 3ds file viewer are as follows:

  1. Create an instance of the C3dsLoader class. Then call the C3dsLoader::Load3DS function passing it the name of the mesh file and a set of vectors to store the submeshes, vertices, normals, uvs, indices, and materials.
    if(!loader.Load3DS(mesh_filename.c_str( ), meshes, vertices, normals, uvs, faces, indices, materials)) {
      cout<<"Cannot load the 3ds mesh"<<endl;
      exit(EXIT_FAILURE);
    }
  2. After the mesh is loaded, use the mesh's material list to load the material textures into the OpenGL texture object.
      for(size_t k=0;k<materials.size();k++) {
        for(size_t m=0;m< materials[k]->textureMaps.size();m++)
        {
          GLuint id = 0;
          glGenTextures(1, &id);
          glBindTexture(GL_TEXTURE_2D, id);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
          int texture_width = 0, texture_height = 0, channels=0;
          const string& filename = materials[k]->textureMaps[m]->filename;
          std::string full_filename = mesh_path;
          full_filename.append(filename);
          GLubyte* pData = SOIL_load_image(full_filename.c_str(), &texture_width, &texture_height, &channels, SOIL_LOAD_AUTO);
          if(pData == NULL) {
            cerr<<"Cannot load image: "<<full_filename.c_str()<<endl;
              exit(EXIT_FAILURE);
          }
          //Flip the image on Y axis
          int i,j;
          for( j = 0; j*2 < texture_height; ++j ) {
            int index1 = j * texture_width * channels;
            int index2 = (texture_height - 1 - j) * texture_width * channels;
            for( i = texture_width * channels; i > 0; --i ){
              GLubyte temp = pData[index1];
              pData[index1] = pData[index2];
              pData[index2] = temp;
              ++index1;
              ++index2;
            }
          }
          GLenum format = GL_RGBA;
          switch(channels) {
            case 2: format = GL_RG32UI; break;
            case 3: format = GL_RGB; break;
            case 4: format = GL_RGBA; break;
          }
          glTexImage2D(GL_TEXTURE_2D, 0, format, texture_width, texture_height, 0, format, GL_UNSIGNED_BYTE, pData);
          SOIL_free_image_data(pData);
          textureMaps[filename]=id;
        }
      }
  3. Pass the loaded per-vertex attributes; that is, positions (vertices), texture coordinates (uvs), per-vertex normals (normals), and triangle indices (indices) to GPU memory by allocating separate buffer objects for each attribute. Note that for easier handling of buffer objects, we bind a single vertex array object (vaoID) first.
        glBindVertexArray(vaoID);
        glBindBuffer (GL_ARRAY_BUFFER, vboVerticesID);
        glBufferData (GL_ARRAY_BUFFER, sizeof(glm::vec3)* vertices.size(), &(vertices[0].x), GL_STATIC_DRAW);
        glEnableVertexAttribArray(shader["vVertex"]);
        glVertexAttribPointer(shader["vVertex"], 3, GL_FLOAT, GL_FALSE,0,0);
        glBindBuffer (GL_ARRAY_BUFFER, vboUVsID);
        glBufferData (GL_ARRAY_BUFFER, sizeof(glm::vec2)*uvs.size(), &(uvs[0].x), GL_STATIC_DRAW);
        glEnableVertexAttribArray(shader["vUV"]);
        glVertexAttribPointer(shader["vUV"],2,GL_FLOAT,GL_FALSE,0, 0);
        glBindBuffer (GL_ARRAY_BUFFER, vboNormalsID);
        glBufferData (GL_ARRAY_BUFFER, sizeof(glm::vec3)* normals.size(), &(normals[0].x),  GL_STATIC_DRAW);
        glEnableVertexAttribArray(shader["vNormal"]);
        glVertexAttribPointer(shader["vNormal"], 3, GL_FLOAT, GL_FALSE, 0, 0);
  4. If we have only a single material in the 3ds file, we store the face indices into GL_ELEMENT_ARRAY_BUFFER so that we can render the whole mesh in a single call. However, if we have more than one material, we bind the appropriate submeshes separately. The glBufferData call allocates the GPU memory, however, it is not initialized. In order to initialize the buffer object memory, we can use the glMapBuffer function to obtain a direct pointer to the GPU memory. Using this pointer, we can then write to the GPU memory. An alternative to using glMapBuffer is glBufferSubData which can modify the GPU memory by copying contents from a CPU buffer.
        if(materials.size()==1) {
          glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboIndicesID);
          glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLushort)* 
          3*faces.size(), 0, GL_STATIC_DRAW);
          GLushort* pIndices = static_cast<GLushort*>(glMapBuffer(GL_ELEMENT_ARRAY_BUFFER, GL_WRITE_ONLY));
          for(size_t i=0;i<faces.size();i++) {
            *(pIndices++)=faces[i].a;
            *(pIndices++)=faces[i].b;
            *(pIndices++)=faces[i].c;
          }
          glUnmapBuffer(GL_ELEMENT_ARRAY_BUFFER);
        }
  5. Set up the vertex shader to output the clip space position as well as the per-vertex texture coordinates. The texture coordinates are then interpolated by the rasterizer to the fragment shader using an output attribute vUVout.
    #version 330 core
    
    layout(location = 0) in vec3 vVertex;
    layout(location = 1) in vec3 vNormal;
    layout(location = 2) in vec2 vUV;
    
    smooth out vec2 vUVout;
    
    uniform mat4 P; 
    uniform mat4 MV;
    uniform mat3 N;
    
    smooth out vec3 vEyeSpaceNormal;
    smooth out vec3 vEyeSpacePosition;
    
    void main()
    {
      vUVout=vUV;
      vEyeSpacePosition = (MV*vec4(vVertex,1)).xyz;
      vEyeSpaceNormal = N*vNormal;
      gl_Position = P*vec4(vEyeSpacePosition,1);
    }
  6. Set up the fragment shader, which looks up the texture map sampler with the interpolated texture coordinates from the rasterizer. Depending on whether the submesh has a texture, we linearly interpolate between the texture map color and the diffused color of the material, using the GLSL mix function.
    #version 330 core  
    uniform sampler2D textureMap;
    uniform float hasTexture;
    uniform vec3 light_position;//light position in object space
    uniform mat4 MV;
    smooth in vec3 vEyeSpaceNormal;
    smooth in vec3 vEyeSpacePosition;
    smooth in vec2 vUVout;
    
    layout(location=0) out vec4 vFragColor;
    
    const float k0 = 1.0;//constant attenuation
    const float k1 = 0.0;//linear attenuation
    const float k2 = 0.0;//quadratic attenuation
    
    void main()
    {
      vec4 vEyeSpaceLightPosition = (MV*vec4(light_position,1));
      vec3 L = (vEyeSpaceLightPosition.xyz-vEyeSpacePosition);
      float d = length(L);
      L = normalize(L);
      float diffuse = max(0, dot(vEyeSpaceNormal, L));
      float attenuationAmount = 1.0/(k0 + (k1*d) + (k2*d*d));
      diffuse *= attenuationAmount;
    
      vFragColor = diffuse*mix(vec4(1),texture(textureMap, vUVout), hasTexture);
    }
  7. The rendering code binds the shader program, sets the shader uniforms, and then renders the mesh, depending on how many materials the 3ds mesh has. If the mesh has only a single material, it is drawn in a single call to glDrawElement by using the indices attached to the GL_ELEMENT_ARRAY_BUFFER binding point.
      glBindVertexArray(vaoID); {
        shader.Use();
        glUniformMatrix4fv(shader("MV"), 1, GL_FALSE, glm::value_ptr(MV));
        glUniformMatrix3fv(shader("N"), 1, GL_FALSE, glm::value_ptr(glm::inverseTranspose(glm::mat3(MV))));
        glUniformMatrix4fv(shader("P"), 1, GL_FALSE, glm::value_ptr(P));
        glUniform3fv(shader("light_position"),1, &(lightPosOS.x));
        if(materials.size()==1) {
          GLint whichID[1];
          glGetIntegerv(GL_TEXTURE_BINDING_2D, whichID);
          if(textureMaps.size()>0) {
            if(whichID[0] != textureMaps[materials[0]->textureMaps[0]->filename]) {
            glBindTexture(GL_TEXTURE_2D, textureMaps[materials[0]->textureMaps[0]->filename]);
            glUniform1f(shader("hasTexture"),1.0);
          }
        } else {
          glUniform1f(shader("hasTexture"),0.0);
          glUniform3fv(shader("diffuse_color"),1, materials[0]->diffuse);
        }
        glDrawElements(GL_TRIANGLES, meshes[0]->faces.size()*3, GL_UNSIGNED_SHORT, 0);
      }
  8. If the mesh contains more than one material, we iterate through the material list, and bind the texture map (if the material has one), otherwise we use the diffuse color stored in the material for the submesh. Finally, we pass the sub_indices array stored in the material to the glDrawElements function to load those indices only.
    else {
      for(size_t i=0;i<materials.size();i++) {
        GLint whichID[1];
        glGetIntegerv(GL_TEXTURE_BINDING_2D, whichID);
        if(materials[i]->textureMaps.size()>0) {
          if(whichID[0] != textureMaps[materials[i]->textureMaps[0]->filename]) {
            glBindTexture(GL_TEXTURE_2D, textureMaps[materials[i]->textureMaps[0]->filename]);
          }
          glUniform1f(shader("hasTexture"),1.0);
        } else {
          glUniform1f(shader("hasTexture"),0.0);
        }
        glUniform3fv(shader("diffuse_color"),1, materials[i]->diffuse);
        glDrawElements(GL_TRIANGLES, materials[i]->sub_indices.size(), GL_UNSIGNED_SHORT, &(materials[i]->sub_indices[0]));
      }
    }
    shader.UnUse();

The main component of this recipe is the C3dsLoader::Load3DS function. The 3ds file is a binary file which is organized into a collection of chunks. Typically, a reader reads the first two bytes from the file which are stored in the chunk ID. The next four bytes store the chunk length in bytes. We continue reading chunks, and their lengths, and then store data appropriately into our vectors/variables until there are no more chunks and we pass reading the end of file. The 3ds specifications detail all of the chunks and their lengths as well as subchunks, as shown in the following figure:

How it works…

Note that if there is a subchunk that we are interested in, we need to read the parent chunk as well, to move the file pointer to the appropriate offset in the file, for our required chunk. The loader first finds the total size of the 3ds mesh file in bytes. Then, it runs a while loop that checks to see if the current file pointer is within the file's size. If it is, it continues to read the first two bytes (the chunk's ID) and the next four bytes (the chunk's length).

Then we start a big switch case with all of the required chunk IDs and then read the bytes from the respective chunks as desired.
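The following is a minimal sketch of this loop and switch, assuming a C-style FILE* reader and a precomputed fileSize; the actual C3dsLoader::Load3DS implementation may differ in the details:

      //sketch only: each chunk starts with a 2-byte ID and a 4-byte length
      while(ftell(fp) < fileSize) {
        unsigned short chunk_id = 0;
        unsigned int chunk_length = 0;
        fread(&chunk_id, 2, 1, fp);
        fread(&chunk_length, 4, 1, fp);
        switch(chunk_id) {
          //...cases for all of the required chunk IDs
          default:
            //skip the payload of any chunk we are not interested in
            //(the 6-byte chunk header is already consumed)
            fseek(fp, chunk_length-6, SEEK_CUR);
            break;
        }
      }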

All names (object name, material name, or texture map name) have to be read byte by byte until the null terminator character (\0) is found. For reading vertices, we first read two bytes that store the total number of vertices (N). Since this count is an unsigned short, a single mesh can store at most 65,535 vertices. Then, we read the whole block of bytes, that is, sizeof(glm::vec3)*N, directly into our mesh's vertices, shown as follows:
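The following sketch fills in this branch; it assumes the same FILE* reader and a mesh object holding a std::vector<glm::vec3> of vertices (the names are illustrative):

      unsigned short total_vertices = 0;
      fread(&total_vertices, sizeof(unsigned short), 1, fp); //N
      pMesh->vertices.resize(total_vertices);
      //read all N positions in one go, straight into the vector's storage
      fread(&(pMesh->vertices[0].x), sizeof(glm::vec3)*total_vertices, 1, fp);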

Similar to how the vertex information is stored, the face information stores the three unsigned short indices of the triangle and another unsigned short containing the face flags. Therefore, for a mesh with M triangles, we have to read 4*M unsigned shorts from the file. We store the four unsigned shorts in a Face struct for convenience and then read the contents, as shown in the following code snippet:
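A sketch of this read follows; the Face struct simply mirrors the 3ds face record (field names are illustrative):

      struct Face {
        unsigned short a, b, c; //the three triangle vertex indices
        unsigned short flags;   //the face flags
      };
      unsigned short total_faces = 0;
      fread(&total_faces, sizeof(unsigned short), 1, fp); //M
      std::vector<Face> faces(total_faces);
      fread(&faces[0], sizeof(Face)*total_faces, 1, fp);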

The code for reading the material face IDs and texture coordinates follows in the same way as the total entries are first read and then the appropriate number of bytes are read from the file. Note that, if a chunk has a color chunk (as for chunk IDs: 0xa010 to 0xa030), the color information is contained in a subchunk (IDs: 0x0010 to 0x0013) depending on the data type used to store the color information in the parent chunk.

After the mesh and material information is loaded, we generate global vertices, uvs, and indices vectors. This makes it easy for us to render the submeshes in the render function.

Note that the 3ds format does not store per-vertex normals explicitly. It only stores smoothing groups, which tell us which faces share normals. Once we have the vertex positions and face information, we can generate per-vertex normals by averaging the per-face normals. This is carried out by a code snippet in the 3ds.cpp file. We first allocate space for the per-vertex normals. Then we estimate each face's normal using the cross product of two of its edges. Finally, we add the face normal to the normal of each of the face's three vertices and normalize the results.
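In outline, the computation looks like the following (variable names are illustrative; see 3ds.cpp for the actual snippet):

      std::vector<glm::vec3> normals(vertices.size(), glm::vec3(0));
      for(size_t i=0;i<faces.size();i++) {
        const Face& f = faces[i];
        glm::vec3 e1 = vertices[f.b] - vertices[f.a];
        glm::vec3 e2 = vertices[f.c] - vertices[f.a];
        glm::vec3 N  = glm::cross(e1, e2); //face normal
        normals[f.a] += N;
        normals[f.b] += N;
        normals[f.c] += N;
      }
      for(size_t i=0;i<normals.size();i++)
        normals[i] = glm::normalize(normals[i]);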

Once we have all of the per-vertex attributes and face information, we use them to group the triangles by material. We loop through all of the materials and expand their face IDs into the three vertex indices that make up each face.


There's more…

The output from the demo application implementing this recipe is given in the following figure. In this recipe, we render three blocks on a quad plane. The camera position can be changed using the left mouse button. The point light source position can be changed using the right mouse button. Each block has six textures attached to it, whereas the plane has no texture, hence it uses the diffuse color value.


Note that the 3ds loader shown in this recipe does not take smoothing groups into consideration. For a more robust loader, we recommend the lib3ds library, which provides a more elaborate 3ds file loader with support for smoothing groups, animation tracks, cameras, lights, keyframes, and so on.

See also

For more information on implementing 3ds model loading, you can refer to the following links:

Lib3ds

In this recipe we will implement a loader for the Wavefront® OBJ model format. Instead of using separate buffer objects for storing positions, normals, and texture coordinates as in the previous recipe, we will use a single buffer object with interleaved data. This improves our chances of a cache hit since related attributes are stored next to each other in the buffer object memory.

Getting started

The code for this recipe is contained in the Chapter5/ObjViewer folder.

How to do it…

Let us start the recipe by following these simple steps:

  1. Create a global reference to an ObjLoader object. Call the ObjLoader::Load function, passing it the name of the OBJ file. Pass vectors to store the meshes, vertices, indices, and materials contained in the OBJ file.
      ObjLoader obj;
      if(!obj.Load(mesh_filename.c_str(), meshes, vertices, indices, materials)) {
        cout<<"Cannot load the 3ds mesh"<<endl;
        exit(EXIT_FAILURE);
      }
  2. Generate OpenGL texture objects for each material using the SOIL library if the material has a texture map.
      for(size_t k=0;k<materials.size();k++) {
        if(materials[k]->map_Kd != "") {
          GLuint id = 0;
          glGenTextures(1, &id);
          glBindTexture(GL_TEXTURE_2D, id);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
          
          int texture_width = 0, texture_height = 0, channels=0;
          const string& filename =  materials[k]->map_Kd;
          std::string full_filename = mesh_path;
          full_filename.append(filename);
    
          GLubyte* pData = SOIL_load_image(full_filename.c_str(), &texture_width, &texture_height, &channels, SOIL_LOAD_AUTO);
          if(pData == NULL) {
            cerr<<"Cannot load image: "<<full_filename.c_str()<<endl;
            exit(EXIT_FAILURE);
          }
          //… image flipping code
          GLenum format = GL_RGBA;
          switch(channels) {
            case 2: format = GL_RG;   break; //GL_RG32UI is an integer format and would not match GL_UNSIGNED_BYTE data
            case 3: format = GL_RGB;  break;
            case 4: format = GL_RGBA;  break;
          }
          glTexImage2D(GL_TEXTURE_2D, 0, format, texture_width, texture_height, 0, format, GL_UNSIGNED_BYTE, pData);
          SOIL_free_image_data(pData);
          textures.push_back(id);
        }
      }
  3. Set up shaders and generate buffer objects to store the mesh file data in the GPU memory. The shader setup is similar to the previous recipes.
      glGenVertexArrays(1, &vaoID);
      glGenBuffers(1, &vboVerticesID);
      glGenBuffers(1, &vboIndicesID); 
      glBindVertexArray(vaoID);
      glBindBuffer (GL_ARRAY_BUFFER, vboVerticesID);
      glBufferData (GL_ARRAY_BUFFER, sizeof(Vertex)*vertices.size(), &(vertices[0].pos.x), GL_STATIC_DRAW);
      glEnableVertexAttribArray(shader["vVertex"]);
      glVertexAttribPointer(shader["vVertex"], 3, GL_FLOAT, GL_FALSE,sizeof(Vertex),0);
      
      glEnableVertexAttribArray(shader["vNormal"]);
      glVertexAttribPointer(shader["vNormal"], 3, GL_FLOAT, GL_FALSE,sizeof(Vertex),(const GLvoid*)(offsetof( Vertex, normal)) );
      
      glEnableVertexAttribArray(shader["vUV"]);
      glVertexAttribPointer(shader["vUV"], 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*)(offsetof(Vertex, uv)) );
      if(materials.size()==1) {
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboIndicesID);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLushort)*indices.size(), &(indices[0]), GL_STATIC_DRAW);
      }
  4. Bind the vertex array object associated with the mesh, use the shader, and pass the shader uniforms, that is, the modelview (MV), projection (P), and normal (N) matrices, the light position, and so on.
      glBindVertexArray(vaoID); {
        shader.Use();
        glUniformMatrix4fv(shader("MV"), 1, GL_FALSE, glm::value_ptr(MV));
        glUniformMatrix3fv(shader("N"), 1, GL_FALSE, glm::value_ptr(glm::inverseTranspose(glm::mat3(MV))));
        glUniformMatrix4fv(shader("P"), 1, GL_FALSE, glm::value_ptr(P));
        glUniform3fv(shader("light_position"),1, &(lightPosOS.x)); 
  5. To draw the mesh/submesh, loop through all of the materials in the mesh and then bind the texture to the GL_TEXTURE_2D target if the material contains a texture map. Otherwise, use a default color for the mesh. Finally, call the glDrawElements function to render the mesh/submesh.
    for(size_t i=0;i<materials.size();i++) {
      Material* pMat = materials[i];
      if(pMat->map_Kd !="") {
        glUniform1f(shader("useDefault"), 0.0);
        GLint whichID[1];
        glGetIntegerv(GL_TEXTURE_BINDING_2D, whichID);
        if(whichID[0] != textures[i])
          glBindTexture(GL_TEXTURE_2D, textures[i]);
      }
      else
      glUniform1f(shader("useDefault"), 1.0);
      
      if(materials.size()==1)
      glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_SHORT, 0);
      else
      glDrawElements(GL_TRIANGLES, pMat->count, GL_UNSIGNED_SHORT, (const GLvoid*)(& indices[pMat->offset])); 
    }
    shader.UnUse();

How it works…

The main component of this recipe is the ObjLoader::Load function defined in the Obj.cpp file. The Wavefront® OBJ file is a text file which has different text descriptors for different mesh components. Usually, the mesh starts with the geometry definition, that is, vertices that begin with the letter v followed by three floating point values. If there are normals, their definitions begin with vn followed by three floating point values. If there are texture coordinates, their definitions begin with vt, followed by two floating point values. Comments start with the # character, so whenever a line with this character is encountered, it is ignored.

Following the geometry definition, the topology is defined. In this case, the line is prefixed with f followed by the indices of the polygon's vertices. In the case of a triangle, three slash-separated index groups are given: the vertex position index comes first, followed by the texture coordinate index (if any), and finally the normal index (if any). Note that the indices start from 1, not 0.

So, for example, say that we have a quad geometry with four position indices (1,2,3,4), four texture coordinate indices (5,6,7,8), and four normal indices (1,1,1,1); the topology would then be stored as follows:
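      f 1/5/1 2/6/1 3/7/1 4/8/1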

If the mesh is a triangular mesh with position vertices (1,2,3), texture coordinates (7,8,9), and normals (4,5,6) then the topology would be stored as follows:
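      f 1/7/4 2/8/5 3/9/6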

Now, if the texture coordinates are omitted from the first example, then the topology would be stored as follows:
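      f 1//1 2//1 3//1 4//1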

The OBJ file stores material information in a separate material (.mtl) file. This file contains similar text descriptors that define different materials with their ambient, diffuse, and specular color values, texture maps, and so on. The details of the defined elements are given in the OBJ format specifications. The material file for the current OBJ file is declared using the mtllib keyword followed by the name of the .mtl file. Usually, the .mtl file is stored in the same folder as the OBJ file. A polygon definition is preceded with a usemtl keyword followed by the name of the material to use for the upcoming polygon definition. Several polygonal definitions can be grouped using the g or o prefix followed by the name of the group/object respectively.
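For example, a minimal material definition in a .mtl file might look like the following sketch (the material and file names are illustrative):

      newmtl blockMaterial
      Ka 0.2 0.2 0.2
      Kd 0.8 0.8 0.8
      Ks 1.0 1.0 1.0
      map_Kd block_diffuse.png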

The ObjLoader::Load function first finds the current prefix. Then, the code branches to the appropriate section depending on the prefix. The suffix strings are then parsed and the extracted data is stored in the corresponding vectors. For efficiency, rather than storing the indices directly, we store them by material so that we can then sort and render the mesh by material. The associated material library file (.mtl) is loaded using the ReadMaterialLibrary function. Refer to the Obj.cpp file for details.

The file parsing is the first piece of the puzzle. The second piece is the transfer of this data to the GPU memory. In this recipe, we use an interleaved buffer, that is, instead of storing each per-vertex attribute separately in its own vertex buffer object, we store them interleaved one after the other in a single buffer object. First positions are followed by normals and then texture coordinates. We achieve this by first defining our vertex format using a custom Vertex struct. Our vertices are a vector of this struct.
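Judging from the attribute offsets used in step 3, the Vertex struct lays out positions first, then normals, and then texture coordinates, along the following lines:

      struct Vertex {
        glm::vec3 pos;    //vertex position
        glm::vec3 normal; //vertex normal
        glm::vec2 uv;     //texture coordinates
      };
      std::vector<Vertex> vertices; //interleaved per-vertex data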

We generate the vertex array object and then the vertex buffer object. Next, we bind the buffer object, passing it our vertices. In this case, we specify the stride of each attribute in the data stream separately, as shown in the glVertexAttribPointer calls in step 3.

If the mesh has a single material, we store the mesh indices into a GL_ELEMENT_ARRAY_BUFFER target. Otherwise, we render the submeshes by material.

At the time of rendering, if we have a single material, we render the whole mesh, otherwise we render the subset stored with the material.


There's more…

The demo application implementing this recipe shows a scene with three blocks on a planar quad. The camera view can be rotated with the left mouse button. The light source's position is shown by a 3D crosshair that can be moved by dragging the right mouse button. The output from this demo application is shown in the following figure:

See also

For more information, you can refer to the OBJ file format specification.

In this recipe, we will learn how to load and render an EZMesh model. There are several skeletal animation formats, such as Quake's md2 (.md2), Autodesk® FBX (.fbx), and Collada (.dae), but conventional model formats such as Collada are overly complicated for doing simple skeletal animation. Therefore, in this recipe, we will learn how to load and render the simpler EZMesh (.ezm) skeletal format.

Getting started

The code for this recipe is contained in the Chapter5/EZMeshViewer directory. For this recipe, we will be using two external libraries to aid with the EZMesh (.ezm) mesh file parsing. The first library is called MeshImport and it can be downloaded from http://code.google.com/p/meshimport/. Make sure to get the latest svn trunk of the code. After downloading, change directory to the compiler subdirectory, which contains the Visual Studio solution files. Double-click to open the solution and build the project DLLs. After the library is built successfully, copy MeshImport_[x86/x64].dll and MeshImportEZM_[x86/x64].dll (subject to your machine configuration) into your current project directory. In addition, also copy the MeshImport.[h/cpp] files, which contain some useful library loading routines.

In addition, since EZMesh is an XML format, we parse the EZMesh XML manually with the help of the pugixml library to support the loading of textures. You can download it from http://pugixml.org/downloads/. As pugixml is tiny, we can directly include its source files in the project.

How to do it…

Let us start this recipe by following these simple steps:

  1. Create a global reference to an EzmLoader object. Call the EzmLoader::Load function passing it the name of the EZMesh (.ezm) file. Pass the vectors to store the submeshes, vertices, indices, and materials-to-image map. The Load function also accepts the min and max vectors to store the EZMesh bounding box.
       if(!ezm.Load(mesh_filename.c_str(), submeshes, vertices, indices, material2ImageMap, min, max)) {
         cout<<"Cannot load the EZMesh mesh"<<endl;
         exit(EXIT_FAILURE);
       }
  2. Using the material information, generate the OpenGL textures for the EZMesh geometry.
      for(size_t k=0;k<materialNames.size();k++) {
        GLuint id = 0;
        glGenTextures(1, &id);
        glBindTexture(GL_TEXTURE_2D, id);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        int texture_width = 0, texture_height = 0, channels=0;
        const string& filename =  materialNames[k];
    
        std::string full_filename = mesh_path;
        full_filename.append(filename);
    
        //Image loading using SOIL and vertical image flipping
        //…
        GLenum format = GL_RGBA;
        switch(channels) {
          case 2: format = GL_RG;   break; //GL_RG32UI is an integer format and would not match GL_UNSIGNED_BYTE data
          case 3: format = GL_RGB;  break;
          case 4: format = GL_RGBA;  break;
        }
        glTexImage2D(GL_TEXTURE_2D, 0, format, texture_width, texture_height, 0, format, GL_UNSIGNED_BYTE, pData);
        SOIL_free_image_data(pData);
        materialMap[filename] = id ;
      }
  3. Set up the interleaved buffer object as in the previous recipe, Implementing OBJ model loading using interleaved buffers.
      glBindVertexArray(vaoID);
      glBindBuffer (GL_ARRAY_BUFFER, vboVerticesID);
      glBufferData (GL_ARRAY_BUFFER, sizeof(Vertex)*vertices.size(), &(vertices[0].pos.x), GL_DYNAMIC_DRAW);
    
      glEnableVertexAttribArray(shader["vVertex"]);
      glVertexAttribPointer(shader["vVertex"], 3, GL_FLOAT, GL_FALSE,sizeof(Vertex),0);
    
      glEnableVertexAttribArray(shader["vNormal"]);
      glVertexAttribPointer(shader["vNormal"], 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*)(offsetof(Vertex, normal)) );
    
      glEnableVertexAttribArray(shader["vUV"]);
      glVertexAttribPointer(shader["vUV"], 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*)(offsetof(Vertex, uv)) );
  4. To render the EZMesh, bind the mesh's vertex array object, set up the shader, and pass the shader uniforms.
      glBindVertexArray(vaoID); {
        shader.Use();
        glUniformMatrix4fv(shader("MV"), 1, GL_FALSE, glm::value_ptr(MV));
        glUniformMatrix3fv(shader("N"), 1, GL_FALSE, glm::value_ptr(glm::inverseTranspose(glm::mat3(MV))));
        glUniformMatrix4fv(shader("P"), 1, GL_FALSE, glm::value_ptr(P));
        glUniform3fv(shader("light_position"),1, &(lightPosES.x));
  5. Loop through all submeshes, bind the submesh texture, and then issue the glDrawElements call, passing it the submesh indices. If a submesh has no material, a default solid color material is used instead.
      for(size_t i=0;i<submeshes.size();i++) {
        if(strlen(submeshes[i].materialName)>0) {
          GLuint id = materialMap[material2ImageMap[ submeshes[i].materialName]];

          GLint whichID[1];
          glGetIntegerv(GL_TEXTURE_BINDING_2D, whichID);

          //bind the texture only if it is not already bound
          if(whichID[0] != id)
            glBindTexture(GL_TEXTURE_2D, id);
          glUniform1f(shader("useDefault"), 0.0);
        } else {
          glUniform1f(shader("useDefault"), 1.0);
        }
        glDrawElements(GL_TRIANGLES, submeshes[i].indices.size(), GL_UNSIGNED_INT, &submeshes[i].indices[0]);
      }

How it works…

EZMesh is an XML-based skeletal animation format. There are two parts to this recipe: parsing of the EZMesh file using the MeshImport/pugixml libraries, and handling of the data using OpenGL buffer objects. The first part is handled by the EzmLoader::Load function. Along with the filename, this function accepts vectors to store the submeshes, vertices, indices, and the material names map contained in the mesh file.

If we open an EZMesh file, it contains a collection of XML elements. The root element is MeshSystem. This element contains four child elements: Skeletons, Animations, Materials, and Meshes. Each of these subelements has a count attribute that stores the total number of corresponding items in the EZMesh file. Note that elements that are not needed may be omitted. So the hierarchy is typically as follows:
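      <MeshSystem>
        <Skeletons count="…"> … </Skeletons>
        <Animations count="…"> … </Animations>
        <Materials count="…"> … </Materials>
        <Meshes count="…"> … </Meshes>
      </MeshSystem>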

For this recipe, we are interested in the last two subelements: Materials and Meshes. We will use the first two subelements in the skeletal animation recipe in a later chapter of this book. Each Materials element contains the given count of Material elements. Each Material element stores the material's name in its name attribute and the material's details, for example, the texture map file name, in its meta_data attribute. In the EzmLoader::Load function, we use pugixml to parse the Materials element and its subelements into a material map. This map stores the material's name and its texture file name. Note that the MeshImport library does provide functions for reading material information, but they are broken.

After the material information is loaded in, we initialize the MeshImport library by calling the NVSHARE::loadMeshImporters function and passing it the directory where MeshImport dlls (MeshImport_[x86,x64].dll and MeshImportEZM_[x86,x64].dll) are placed. Upon success, this function returns the NVSHARE::MeshImport library object. Using the MeshImport library object, we first create the mesh system container by calling the NVSHARE::MeshImport::createMeshSystemContainer function. This function accepts the object name and the EZMesh file contents. If successful, this function returns the MeshSystemContainer object which is then passed to the NVSHARE::MeshImport::getMeshSystem function which returns the NVSHARE::MeshSystem object. This represents the MeshSystem node in the EZMesh XML file.

Once we have the MeshSystem object, we can query all of the subelements, which reside in the MeshSystem object as member variables. So, let's say we want to traverse all of the meshes in the current EZMesh file and copy the per-vertex attributes to our own vector (vertices); we would simply do the following:
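The following is a rough sketch only; the member and type names (Mesh, MeshVertex, mMeshCount, mMeshes, mVertexCount, mVertices, mPos, mNormal, mTexel1) are assumptions based on typical MeshImport usage and should be verified against the MeshImport headers:

      //sketch only: member names are assumptions, verify against MeshImport.h
      for(unsigned int m=0; m<ms->mMeshCount; ++m) {
        NVSHARE::Mesh* pMesh = ms->mMeshes[m];
        for(unsigned int v=0; v<pMesh->mVertexCount; ++v) {
          const NVSHARE::MeshVertex& mv = pMesh->mVertices[v];
          Vertex vertex;
          vertex.pos    = glm::vec3(mv.mPos[0],    mv.mPos[1],    mv.mPos[2]);
          vertex.normal = glm::vec3(mv.mNormal[0], mv.mNormal[1], mv.mNormal[2]);
          vertex.uv     = glm::vec2(mv.mTexel1[0], mv.mTexel1[1]);
          vertices.push_back(vertex);
        }
      }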

In an EZMesh file, the indices are sorted by materials into submeshes. We iterate through all of the submeshes and then store their material name and indices into our container.

After the EZMesh file is parsed and we have the per-vertex data stored, we first generate the OpenGL textures from the EZMesh materials list. Then we store the texture IDs into a material map so that we can refer to the textures by material name.

After the materials, the shaders are loaded as in the previous recipes. The per-vertex data is then transferred to the GPU using vertex array and vertex buffer objects. In this case, we use the interleaved vertex buffer format.

For rendering of the mesh, we first bind the vertex array object of the mesh, attach our shader and pass the shader uniforms. Then we loop over all of the submeshes and bind the appropriate texture (if the submesh has texture). Otherwise, a default color is used. Finally, the indices of the submesh are used to draw the mesh using the glDrawElements function.


How to do it...

Let us start this

recipe by following these simple steps:

  1. Create a global reference to an EzmLoader object. Call the EzmLoader::Load function passing it the name of the EZMesh (.ezm) file. Pass the vectors to store the submeshes, vertices, indices, and materials-to-image map. The Load function also accepts the min and max vectors to store the EZMesh bounding box.
       if(!ezm.Load(mesh_filename.c_str(), submeshes, vertices, indices, material2ImageMap, min, max)) {
         cout<<"Cannot load the EZMesh mesh"<<endl;
         exit(EXIT_FAILURE);
       }
  2. Using the material information, generate the OpenGL textures for the EZMesh geometry.
      for(size_t k=0;k<materialNames.size();k++) {
        GLuint id = 0;
        glGenTextures(1, &id);
        glBindTexture(GL_TEXTURE_2D, id);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        int texture_width = 0, texture_height = 0, channels=0;
        const string& filename =  materialNames[k];
    
        std::string full_filename = mesh_path;
        full_filename.append(filename);
    
        //Image loading using SOIL and vertical image flipping
        //…
        GLenum format = GL_RGBA;
        switch(channels) {
          case 2:  format = GL_RG32UI; break;
          case 3: format = GL_RGB;  break;
          case 4: format = GL_RGBA;  break;
        }
        glTexImage2D(GL_TEXTURE_2D, 0, format, texture_width, texture_height, 0, format, GL_UNSIGNED_BYTE, pData);
        SOIL_free_image_data(pData);
        materialMap[filename] = id ;
      }
  3. Set up the interleaved buffer object as in the previous recipe, Implementing OBJ model loading using interleaved buffers.
      glBindVertexArray(vaoID);
      glBindBuffer (GL_ARRAY_BUFFER, vboVerticesID);
      glBufferData (GL_ARRAY_BUFFER, sizeof(Vertex)*vertices.size(), &(vertices[0].pos.x), GL_DYNAMIC_DRAW);
    
      glEnableVertexAttribArray(shader["vVertex"]);
      glVertexAttribPointer(shader["vVertex"], 3, GL_FLOAT, GL_FALSE,sizeof(Vertex),0);
    
      glEnableVertexAttribArray(shader["vNormal"]);
      glVertexAttribPointer(shader["vNormal"], 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*)(offsetof(Vertex, normal)) );
    
      glEnableVertexAttribArray(shader["vUV"]);
      glVertexAttribPointer(shader["vUV"], 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*)(offsetof(Vertex, uv)) );
  4. To render the EZMesh, bind the mesh's vertex array object, set up the shader, and pass the shader uniforms.
      glBindVertexArray(vaoID); {
        shader.Use();
        glUniformMatrix4fv(shader("MV"), 1, GL_FALSE, glm::value_ptr(MV));
        glUniformMatrix3fv(shader("N"), 1, GL_FALSE, glm::value_ptr(glm::inverseTranspose(glm::mat3(MV))));
        glUniformMatrix4fv(shader("P"), 1, GL_FALSE, glm::value_ptr(P));
        glUniform3fv(shader("light_position"),1, &(lightPosES.x));
  5. Loop through all submeshes, bind the submesh texture, and then issue the glDrawElements call, passing it the submesh indices. If a submesh has no material, a default solid color is used instead.
      for(size_t i=0;i<submeshes.size();i++) {
        if(strlen(submeshes[i].materialName)>0) {
          GLuint id = materialMap[material2ImageMap[submeshes[i].materialName]];

          //rebind the texture only if it is not already bound
          GLint whichID[1];
          glGetIntegerv(GL_TEXTURE_BINDING_2D, whichID);
          if(whichID[0] != (GLint)id)
            glBindTexture(GL_TEXTURE_2D, id);

          glUniform1f(shader("useDefault"), 0.0);
        } else {
          glUniform1f(shader("useDefault"), 1.0);
        }
        glDrawElements(GL_TRIANGLES, submeshes[i].indices.size(), GL_UNSIGNED_INT, &submeshes[i].indices[0]);
      }
        shader.UnUse();
      }

How it works…

EZMesh is an XML-based skeletal animation format. There are two parts to this recipe: parsing of the EZMesh file using the MeshImport/pugixml libraries, and handling of the data using OpenGL buffer objects. The first part is handled by the EzmLoader::Load function. Along with the filename, this function accepts vectors to store the submeshes, vertices, indices, and the material names map contained in the mesh file.

If we open an EZMesh file, it contains a collection of XML elements. The root element is MeshSystem. This element contains four child elements: Skeletons, Animations, Materials, and Meshes. Each of these subelements has a count attribute that stores the total number of corresponding items in the EZMesh file. Any element that is not needed can be omitted. So the hierarchy is typically as follows:
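
The outline below shows only the element nesting described above; the element contents and the per-file count values are elided:

    <MeshSystem>
      <Skeletons count="…"> … </Skeletons>
      <Animations count="…"> … </Animations>
      <Materials count="…"> … </Materials>
      <Meshes count="…"> … </Meshes>
    </MeshSystem>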

For this recipe, we are interested in the last two subelements: Materials and Meshes. We will use the first two subelements in the skeletal animation recipe in a later chapter of this book. Each Materials element has a counted number of Material elements. Each Material element stores the material's name in the name attribute and the material's details, for example, the texture map file name, in the meta_data attribute. In the EzmLoader::Load function, we use pugixml to parse the Materials element and its subelements into a material map. This map stores the material's name and its texture file name. Note that the MeshImport library does provide functions for reading material information, but they are broken.
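
As an illustration, a minimal pugixml traversal of the Materials element might look like the following sketch. The assumption that meta_data stores the texture file name directly is ours; real exporters may require extra string parsing at that point.

    #include <map>
    #include <string>
    #include "pugixml.hpp"

    //Sketch: fill material2ImageMap with (material name -> texture file) pairs
    void LoadMaterialNames(const std::string& filename, std::map<std::string, std::string>& material2ImageMap) {
      pugi::xml_document doc;
      if(!doc.load_file(filename.c_str()))
        return;
      pugi::xml_node materials = doc.child("MeshSystem").child("Materials");
      for(pugi::xml_node m = materials.child("Material"); m; m = m.next_sibling("Material")) {
        //assumed: meta_data holds the texture map file name
        material2ImageMap[m.attribute("name").value()] = m.attribute("meta_data").value();
      }
    }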

After the material information is loaded, we initialize the MeshImport library by calling the NVSHARE::loadMeshImporters function and passing it the directory where the MeshImport DLLs (MeshImport_[x86,x64].dll and MeshImportEZM_[x86,x64].dll) are placed. Upon success, this function returns the NVSHARE::MeshImport library object. Using this object, we first create the mesh system container by calling the NVSHARE::MeshImport::createMeshSystemContainer function. This function accepts an object name and the EZMesh file contents. If successful, it returns a MeshSystemContainer object, which is then passed to the NVSHARE::MeshImport::getMeshSystem function; this returns the NVSHARE::MeshSystem object, which represents the MeshSystem node in the EZMesh XML file.
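
The following is a sketch of that initialization; the exact parameter list of createMeshSystemContainer, and the buffer names ezmContents and ezmLength, are assumptions to be checked against the MeshImport headers:

    //Sketch: load the importer DLLs and obtain the MeshSystem node
    NVSHARE::MeshImport* mi = NVSHARE::loadMeshImporters(".");   //"." = folder holding the DLLs
    if(mi != NULL) {
      //ezmContents/ezmLength: raw bytes of the .ezm file read earlier
      NVSHARE::MeshSystemContainer* msc = mi->createMeshSystemContainer("ezm", ezmContents, ezmLength, NULL);
      NVSHARE::MeshSystem* ms = mi->getMeshSystem(msc);
    }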

Once we have the MeshSystem object, we can query all of the subelements; these reside in the MeshSystem object as member variables. So, if we want to traverse all of the meshes in the current EZMesh file and copy the per-vertex attributes to our own vector (vertices), we would simply do the following:
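
The member names used below (mMeshCount, mMeshes, mVertexCount, mVertices, and the mPos/mNormal/mTexel1 fields) follow the MeshImport headers but should be treated as a sketch and verified against your copy of the library:

    //Sketch: flatten all mesh vertices into our interleaved vector
    for(size_t i = 0; i < ms->mMeshCount; i++) {
      NVSHARE::Mesh* pMesh = ms->mMeshes[i];
      for(size_t j = 0; j < pMesh->mVertexCount; j++) {
        const NVSHARE::MeshVertex& mv = pMesh->mVertices[j];
        Vertex v;
        v.pos    = glm::vec3(mv.mPos[0],    mv.mPos[1],    mv.mPos[2]);
        v.normal = glm::vec3(mv.mNormal[0], mv.mNormal[1], mv.mNormal[2]);
        v.uv     = glm::vec2(mv.mTexel1[0], mv.mTexel1[1]);
        vertices.push_back(v);
      }
    }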

In an EZMesh file, the indices are sorted by materials into submeshes. We iterate through all of the submeshes and then store their material name and indices into our container.
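
A matching sketch for the submesh pass, again assuming the MeshImport member names (mSubMeshCount, mSubMeshes, mMaterialName, mTriCount, mIndices) and our own SubMesh container from the Load signature:

    //Sketch: record each submesh's material name and triangle indices
    for(size_t i = 0; i < ms->mMeshCount; i++) {
      NVSHARE::Mesh* pMesh = ms->mMeshes[i];
      for(size_t j = 0; j < pMesh->mSubMeshCount; j++) {
        NVSHARE::SubMesh* pSub = pMesh->mSubMeshes[j];
        SubMesh s;
        strncpy(s.materialName, pSub->mMaterialName, sizeof(s.materialName)-1);
        for(size_t k = 0; k < pSub->mTriCount * 3; k++)
          s.indices.push_back(pSub->mIndices[k]);
        submeshes.push_back(s);
      }
    }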

After the EZMesh file is parsed and we have the per-vertex data stored, we first generate the OpenGL textures from the EZMesh materials list. Then we store the texture IDs into a material map so that we can refer to the textures by material name.

After the materials, the shaders are loaded as in the previous recipes. The per-vertex data is then transferred to the GPU using vertex array and vertex buffer objects. In this case, we use the interleaved vertex buffer format.

To render the mesh, we first bind its vertex array object, attach our shader, and pass the shader uniforms. We then loop over all of the submeshes and bind the appropriate texture if the submesh has one; otherwise, a default color is used. Finally, the indices of the submesh are used to draw the mesh with the glDrawElements function.

There's more…

The demo application implementing this recipe renders a skeletal model with textures. The point light source can be moved by dragging the right mouse button. The output result is shown in the following figure:

See also

You can also refer to John Ratcliff's code repository, which contains a test application for the MeshImport library.

In this recipe, we will implement a simple particle system. Particle systems are a special category of objects that enable us to simulate fuzzy effects in computer graphics, for example, fire or smoke. Our particle system will emit particles at a specified rate from an oriented emitter, and we will assign the particles a basic fire color map, without textures, to give the effect of fire.

Getting started

The code for this recipe is in the Chapter5/SimpleParticles directory.

How to do it…

Let us start this recipe by following these simple steps:

  1. Create a vertex shader without any per-vertex attribute. The vertex shader generates the current particle position and outputs a smooth color to the fragment shader for use as the current fragment color.
    #version 330 core  
    smooth out vec4 vSmoothColor;
    uniform mat4 MVP;
    uniform float time;
    
    const vec3 a = vec3(0,2,0);    //acceleration of particles
    //vec3 g = vec3(0,-9.8,0);  // acceleration due to gravity
    
    const float rate = 1/500.0;    //rate of emission
    const float life = 2;        //life of particle
    
    //constants
    const float PI = 3.14159;
    const float TWO_PI = 2*PI;
    
    //colormap colours
    const vec3 RED = vec3(1,0,0);
    const vec3 GREEN = vec3(0,1,0);
    const vec3 YELLOW = vec3(1,1,0);
    
    //pseudorandom number generator
    float rand(vec2 co){
      return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
    }
    
    //pseudorandom direction within a cone around the +Y axis
    vec3 uniformRandomDir(vec2 v, out vec2 r) {
      r.x = rand(v.xy);
      r.y = rand(v.yx);
      float theta = mix(0.0, PI / 6.0, r.x);
      float phi = mix(0.0, TWO_PI, r.y);
      return vec3(sin(theta) * cos(phi), cos(theta), sin(theta) * sin(phi));
    }
    
    void main() {
      vec3 pos=vec3(0);
      float t = gl_VertexID*rate;
      float alpha = 1;
      if(time>t) {
        float dt = mod((time-t), life);
        vec2 xy = vec2(gl_VertexID,t);
        vec2 rdm=vec2(0);
        pos = ((uniformRandomDir(xy, rdm) + 0.5*a*dt)*dt);
        alpha = 1.0 - (dt/life);    
      }
      vSmoothColor = vec4(mix(RED,YELLOW,alpha),alpha);
      gl_Position = MVP*vec4(pos,1);
    }
  2. The fragment shader outputs the smooth color as the current fragment output color.
    #version 330 core
    smooth in vec4 vSmoothColor;
    
    layout(location=0) out vec4 vFragColor;
    
    void main() {
      vFragColor = vSmoothColor;
    }
  3. Set up a single vertex array object and bind it.
      glGenVertexArrays(1, &vaoID);
      glBindVertexArray(vaoID);
  4. In the rendering code, set up the shader and pass the shader uniforms. For example, pass the current time to the time shader uniform and the combined modelview projection matrix (MVP). Here we multiply an emitter transform matrix (emitterXForm) into the combined MVP matrix; this matrix controls the orientation of our particle emitter.
      shader.Use();
      glUniform1f(shader("time"), time);
      glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(P*MV*emitterXForm));
    
  5. Finally, we render the total number of particles (MAX_PARTICLES) with a call to the glDrawArrays function and unbind our shader.
      glDrawArrays(GL_POINTS, 0, MAX_PARTICLES);
      shader.UnUse();

How it works…

The entire code from generation of particle positions to assignment of colors and forces is carried out in the vertex shader. In this recipe, we do not store any per-vertex attribute as in the previous recipes. Instead, we simply invoke the glDrawArrays call with the number of particles (MAX_PARTICLES) we need to render. This calls our vertex shader for each particle in turn.

We have two uniforms in the vertex shader, the combined modelview projection matrix (MVP) and the current simulation time (time). The other variables required for particle simulation are stored as shader constants.

In the main function, we calculate the current particle time (t) by multiplying its vertex ID (gl_VertexID) with the emission rate (rate). The gl_VertexID attribute is a unique integer identifier associated with each vertex. We then check the current time (time) against the particle's time (t). If it is greater, we calculate the time step amount (dt) and then calculate the particle's position using a simple kinematics formula.
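
In symbols, with the particle's random direction treated as its initial velocity v0, the shader evaluates the standard constant-acceleration displacement p = (v0 + 0.5*a*dt)*dt = v0*dt + 0.5*a*dt^2, which is exactly the expression used for pos in step 1.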

To generate the particle, we need to have its initial velocity. This is generated on the fly by using a pseudorandom generator, with the vertex ID and time as the seeds, through the uniformRandomDir function defined in the vertex shader in step 1.

The particle's position is then calculated using the current time and the random initial velocity. To enable respawning, we use the modulus operator (mod) of the difference between the particle's time and the current time (time-t) with the life of particle (life). After calculation of the position, we calculate the particle's alpha to gently fade it when its life is consumed.

The alpha value is used to linearly interpolate between red and yellow colors by calling the GLSL mix function to give the fire effect. Finally, the generated position is multiplied with the combined modelview projection (MVP) matrix to get the clip space position of the particle.

The fragment shader simply uses the vSmoothColor output variable from the vertex shader as the current fragment color.

Extending to textured billboarded particles requires us to change only the fragment shader. The point sprites provide a varying gl_PointCoord that can be used to sample a texture in the fragment shader as shown in the textured particle fragment shader (Chapter5/SimpleParticles/shaders/textured.frag).
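
Without reproducing that file verbatim, a minimal sketch of such a fragment shader could look as follows; the sampler name textureMap is our assumption:

    #version 330 core
    smooth in vec4 vSmoothColor;
    layout(location=0) out vec4 vFragColor;
    uniform sampler2D textureMap;    //assumed sampler name, bound by the application

    void main() {
      //gl_PointCoord varies across the point sprite, so it can index the texture
      vFragColor = texture(textureMap, gl_PointCoord) * vSmoothColor;
    }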

The application loads a particle texture and generates an OpenGL texture object from it.

Next, the texture unit to which the texture is bound is passed to the shader.
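
A brief sketch of that step, reusing the assumed textureMap sampler name from above:

      glActiveTexture(GL_TEXTURE0);
      glBindTexture(GL_TEXTURE_2D, textureID);    //the particle texture created earlier
      glUniform1i(shader("textureMap"), 0);       //sampler reads from texture unit 0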

Finally, the particles are rendered using the glDrawArrays call as shown earlier.

The demo application for this recipe renders a particle system to simulate fire emitting from a point emitter as would typically come out from a rocket's exhaust. We can press the space bar key to toggle display of textured particles. The current view can be rotated and zoomed by dragging the left and middle mouse buttons respectively. The output result from the demo is displayed in the following figure:

There's more…

If the textured particles shader is used, we get the following output:

The orientation and position of the emitter is controlled using the emitter transformation matrix (emitterXForm). We can change this matrix to reorient/reposition the particle system in the 3D space.

The shader code given in the previous subsection generates a particle system from a point emitter source. If we want to change the source to a rectangular emitter, we can replace the position calculation with the following shader code snippet:
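
The snippet below is a minimal sketch of one such rectangular emitter rather than verbatim book code; it reuses the two random numbers returned through rdm and assumes the rectangle spans [-1,1] along X and Z on the emitter plane:

    //inside main(), replacing the point-emitter position calculation
    pos = ((uniformRandomDir(xy, rdm) + 0.5*a*dt)*dt);
    pos += vec3(rdm.x*2.0 - 1.0, 0.0, rdm.y*2.0 - 1.0);  //spawn offset across the rectangle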

This gives the following output:

Changing the emitter to a disc shape further filters the points spawned in the rectangle emitter by only accepting those which lie inside the circle of a given radius, as given in the following code snippet:
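
Again as a sketch under the same assumptions, with RADIUS as an assumed constant; offsets falling outside the circle are collapsed back to the emitter center:

    //inside main(): keep only rectangle offsets that lie inside the disc
    const float RADIUS = 1.0;    //assumed disc radius
    pos = ((uniformRandomDir(xy, rdm) + 0.5*a*dt)*dt);
    vec2 offset = vec2(rdm.x*2.0 - 1.0, rdm.y*2.0 - 1.0);
    if(length(offset) < RADIUS)
      pos += vec3(offset.x, 0.0, offset.y);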

Using this position calculation gives a disc emitter as shown in the following output:

We can also add additional forces such as air drag, wind, vortex, and so on, by simply adding to the acceleration or velocity component of the particle system. Another option could be to direct the emitter to a specific path such as a b-spline. We could also add deflectors to deflect the generated particles or create particles that spawn other particles as is typically used in a fireworks particle system. Particle systems are an extremely interesting area in computer graphics which help us obtain wonderful effects easily.

The recipe detailed here shows how to do a very simple particle system entirely on the GPU. While such a particle system might be useful for basic effects, more detailed effects would need more elaborate treatment as detailed in the references in the See also section.

See also

To learn more about detailed effects, you can refer to the following links:

Real-Time Particle Systems on the GPU in Dynamic Environments, SIGGRAPH 2007 Talk: