OpenGL 4 Shading Language Cookbook - Third Edition

By: David Wolff

Overview of this book

OpenGL 4 Shading Language Cookbook, Third Edition provides easy-to-follow recipes that first walk you through the theory and background behind each technique, and then showcase and explain the GLSL and OpenGL code needed to implement it. The book begins by familiarizing you with beginner-level topics such as compiling and linking shader programs, saving and loading shader binaries (including SPIR-V), and using an OpenGL function loader library. It then covers basic lighting and shading effects. After that, you'll learn to use textures, produce shadows, and use geometry and tessellation shaders. More advanced topics include particle systems, screen-space ambient occlusion, deferred rendering, depth-based tessellation, and physically based rendering, as well as shadow techniques (including two of the most common: shadow maps and shadow volumes). You will also learn how to use noise in shaders and how to write compute shaders. The book provides examples of modern shading techniques that programmers can use as a starting point and expand upon to produce modern, interactive, 3D computer-graphics applications.

Implementing an edge detection filter with the compute shader


In the Applying an edge detection filter recipe in Chapter 6, Image Processing and Screen Space Techniques, we saw an example of how to implement edge detection using the fragment shader. The fragment shader is well suited to many image-processing operations, because we can trigger its execution for each pixel by rendering a screen-filling quad. Since image-processing filters are often applied to the result of a render, we can render the scene to a texture and then invoke the fragment shader for each screen pixel by drawing that quad, with each fragment shader invocation responsible for processing a single pixel. Each invocation might need to read from several locations in the (rendered) image texture, and a given texel might be read multiple times by different invocations.
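
To make that setup concrete, the following is a minimal fragment-shader sketch of the approach, applying a Sobel operator to a previously rendered texture. The names used here (RenderTex, EdgeThreshold) and the exact structure are illustrative assumptions rather than the recipe's original listing from Chapter 6.

#version 430

// Illustrative sketch (names such as RenderTex and EdgeThreshold are
// assumptions, not the book's exact listing). The scene has already been
// rendered into RenderTex, and this shader runs once per screen pixel
// because we draw a screen-filling quad.
layout(location = 0) out vec4 FragColor;

layout(binding = 0) uniform sampler2D RenderTex;
uniform float EdgeThreshold = 0.05;

float luminance(vec3 color) {
    return dot(color, vec3(0.2126, 0.7152, 0.0722));
}

void main() {
    ivec2 pix = ivec2(gl_FragCoord.xy);

    // Each invocation reads a 3x3 neighborhood of the rendered image,
    // so a given texel is fetched by up to nine different invocations.
    float s00 = luminance(texelFetchOffset(RenderTex, pix, 0, ivec2(-1,  1)).rgb);
    float s10 = luminance(texelFetchOffset(RenderTex, pix, 0, ivec2(-1,  0)).rgb);
    float s20 = luminance(texelFetchOffset(RenderTex, pix, 0, ivec2(-1, -1)).rgb);
    float s01 = luminance(texelFetchOffset(RenderTex, pix, 0, ivec2( 0,  1)).rgb);
    float s21 = luminance(texelFetchOffset(RenderTex, pix, 0, ivec2( 0, -1)).rgb);
    float s02 = luminance(texelFetchOffset(RenderTex, pix, 0, ivec2( 1,  1)).rgb);
    float s12 = luminance(texelFetchOffset(RenderTex, pix, 0, ivec2( 1,  0)).rgb);
    float s22 = luminance(texelFetchOffset(RenderTex, pix, 0, ivec2( 1, -1)).rgb);

    // Sobel operator: horizontal and vertical gradient estimates.
    float sx = s00 + 2.0 * s10 + s20 - (s02 + 2.0 * s12 + s22);
    float sy = s00 + 2.0 * s01 + s02 - (s20 + 2.0 * s21 + s22);
    float g  = sx * sx + sy * sy;

    // Mark the pixel as an edge if the squared gradient magnitude is large.
    FragColor = (g > EdgeThreshold) ? vec4(1.0) : vec4(0.0, 0.0, 0.0, 1.0);
}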

This works well for many situations, but the fragment shader was not designed for image processing. With the compute shader, we can have more...
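
As a preview of where this recipe is headed, here is a minimal sketch of the same Sobel filter written as a compute shader. The work group size, image bindings, and uniform name are assumptions chosen for illustration; the recipe's actual listing may organize the work differently (for example, by caching texels in shared local memory).

#version 430

// Illustrative sketch, not the recipe's exact listing: one invocation per
// pixel, reading the rendered image through an image unit and writing the
// edge mask to a second image. The bindings and work group size are
// assumptions chosen for this example.
layout(local_size_x = 16, local_size_y = 16) in;

layout(binding = 0, rgba8) uniform readonly image2D InputImg;
layout(binding = 1, rgba8) uniform writeonly image2D OutputImg;

uniform float EdgeThreshold = 0.05;

float luminance(vec3 color) {
    return dot(color, vec3(0.2126, 0.7152, 0.0722));
}

void main() {
    ivec2 pix = ivec2(gl_GlobalInvocationID.xy);

    // Skip invocations that fall outside the image when its dimensions are
    // not a multiple of the work group size.
    ivec2 size = imageSize(InputImg);
    if (pix.x >= size.x || pix.y >= size.y) return;

    float s00 = luminance(imageLoad(InputImg, pix + ivec2(-1,  1)).rgb);
    float s10 = luminance(imageLoad(InputImg, pix + ivec2(-1,  0)).rgb);
    float s20 = luminance(imageLoad(InputImg, pix + ivec2(-1, -1)).rgb);
    float s01 = luminance(imageLoad(InputImg, pix + ivec2( 0,  1)).rgb);
    float s21 = luminance(imageLoad(InputImg, pix + ivec2( 0, -1)).rgb);
    float s02 = luminance(imageLoad(InputImg, pix + ivec2( 1,  1)).rgb);
    float s12 = luminance(imageLoad(InputImg, pix + ivec2( 1,  0)).rgb);
    float s22 = luminance(imageLoad(InputImg, pix + ivec2( 1, -1)).rgb);

    // Sobel operator, identical to the fragment-shader version.
    float sx = s00 + 2.0 * s10 + s20 - (s02 + 2.0 * s12 + s22);
    float sy = s00 + 2.0 * s01 + s02 - (s20 + 2.0 * s21 + s22);
    float g  = sx * sx + sy * sy;

    imageStore(OutputImg, pix, (g > EdgeThreshold) ? vec4(1.0) : vec4(0.0, 0.0, 0.0, 1.0));
}

On the application side, a shader like this would be dispatched with roughly one work group per 16 x 16 tile of the image, for example glDispatchCompute((width + 15) / 16, (height + 15) / 16, 1), followed by an appropriate glMemoryBarrier call before the output image is read.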