Hello there! Today we are going to talk about quite a few things.
First, we are going to show you how our engine works, and how to compute a ray-sphere or ray-plane intersection.
We are going to talk about optimization too.
So, to begin, we are going to explain how our engine is built.
We use a technique called Deferred Shading.
In our GBuffer (that is the name of the buffer used in deferred shading), we have 4 + 1 textures.
That is, four important textures and one less important one.
So now I’m going to explain what each one is for.
The color texture is the basic texture that contains the color of each pixel (RGBA8: one normalized unsigned byte per component).
The position texture contains the position of each object projected onto the screen (RGBA32F: one float per component).
The normal texture contains the normal of each object projected onto the screen (RGBA16_SNORM: one normalized signed short per component).
The distance-to-camera texture contains the distance from the camera to each position written in the position texture; it serves much the same purpose as a depth map (GL_R32F: one float component).
For example, here are three screenshots of the color, position, and normal textures.
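To make the layout concrete, here is a minimal sketch of a fragment shader filling such a G-buffer; the attachment locations, input names, and uniforms are assumptions for illustration, not our exact code:

```glsl
#version 330
// Inputs interpolated from the vertex shader (world space).
in vec3 worldPosition;
in vec3 worldNormal;
in vec2 texCoord;

uniform sampler2D diffuseTexture; // assumed material texture
uniform vec3 cameraPosition;      // assumed uniform

// One output per G-buffer attachment.
layout(location = 0) out vec4 outColor;     // RGBA8
layout(location = 1) out vec4 outPosition;  // RGBA32F
layout(location = 2) out vec4 outNormal;    // RGBA16_SNORM
layout(location = 3) out float outDistance; // GL_R32F

void main()
{
    outColor    = texture(diffuseTexture, texCoord);
    outPosition = vec4(worldPosition, 1.0);
    outNormal   = vec4(normalize(worldNormal), 0.0);
    outDistance = distance(worldPosition, cameraPosition);
}
```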
So now we are going to explain the different passes of our rendering algorithm.
The first pass is a rasterization pass; we use a vertex shader and a fragment shader for it.
As a reminder, the vertex shader processes vertices: it is invoked once per vertex (if you have 1 000 independent triangles, it is invoked 3 000 times), and the fragment shader is invoked once for each covered pixel.
Our engine works in world space, so positions and normals are stored in world coordinates.
To rasterize, we use a simple shader together with framebuffer objects (which stand in for the screen, or for several screens at once).
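As a sketch, the vertex shader of this pass could look like the following, feeding the fragment shader above with world-space data (the uniform names are assumptions):

```glsl
#version 330
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormal;
layout(location = 2) in vec2 inTexCoord;

uniform mat4 model;        // object -> world
uniform mat4 viewProj;     // world -> clip
uniform mat3 normalMatrix; // transpose(inverse(mat3(model)))

out vec3 worldPosition;
out vec3 worldNormal;
out vec2 texCoord;

void main()
{
    // Everything written to the G-buffer is expressed in world space.
    vec4 world    = model * vec4(inPosition, 1.0);
    worldPosition = world.xyz;
    worldNormal   = normalMatrix * inNormal;
    texCoord      = inTexCoord;
    gl_Position   = viewProj * world;
}
```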
Now we will explain the maths behind ray tracing.
As a reminder, here are the respective equations of a plane and of a sphere:

Plane: dot(n, p) + d = 0, where n is the plane normal and d its offset.
Sphere: dot(p − c, p − c) = r², where c is the center and r the radius.

Your ray begins at position ro (the position of the camera) and has a direction rd, so any point on it can be written p(t) = ro + t * rd.
Now you just substitute p(t) into the equations above and solve for t: the plane gives a linear equation, the sphere a quadratic one, and plugging the solution back into p(t) gives the position of the intersection.
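Here is a minimal GLSL sketch of both tests under these conventions (the function names are mine, not the engine’s, and rd is assumed normalized):

```glsl
// Ray: p(t) = ro + t * rd, with rd normalized.

// Plane dot(n, p) + d = 0: returns t, or -1.0 if there is no hit.
float intersectPlane(vec3 ro, vec3 rd, vec3 n, float d)
{
    float denom = dot(n, rd);
    if (abs(denom) < 1e-6)   // ray parallel to the plane
        return -1.0;
    float t = -(dot(n, ro) + d) / denom;
    return (t >= 0.0) ? t : -1.0;
}

// Sphere dot(p - c, p - c) = r^2: substituting p(t) gives a quadratic in t.
float intersectSphere(vec3 ro, vec3 rd, vec3 c, float r)
{
    vec3 oc = ro - c;
    float b = dot(oc, rd);                // half of the usual b term
    float h = b * b - (dot(oc, oc) - r * r);
    if (h < 0.0)                          // negative discriminant: no hit
        return -1.0;
    // Nearest root; it can be negative if the sphere is behind ro,
    // which you may also want to treat as a miss.
    return -b - sqrt(h);
}
```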
But how can I get the rd vector?
It’s really easy.
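Here is a sketch of the idea in GLSL (invViewProj and screenSize are assumed uniforms, not part of the original code):

```glsl
uniform mat4 invViewProj; // inverse(projection * view), assumed uniform
uniform vec2 screenSize;  // framebuffer size in pixels, assumed uniform

vec3 computeRd(vec3 ro)
{
    // Pixel coordinates -> normalized device coordinates in [-1, 1].
    vec2 ndc = (gl_FragCoord.xy / screenSize) * 2.0 - 1.0;

    // z = 1.0 picks a point on the far plane in NDC.
    vec4 world = invViewProj * vec4(ndc, 1.0, 1.0);
    world /= world.w;

    // Direction from the camera through this pixel.
    return normalize(world.xyz - ro);
}
```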
The meaning of the "1.0" z component here is: "far plane".
Now, to optimize the sphere computation, you only have to rasterize one cube that bounds the sphere and do the ray computation in its fragment shader: instead of computing for every pixel of the screen, you compute only over the small area the cube covers. That is one of the strengths of deferred shading.
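As a sketch, the vertex shader of this bounding-volume pass can simply scale and translate a unit cube around the sphere (the uniform names here are hypothetical):

```glsl
#version 330
layout(location = 0) in vec3 inPosition; // unit cube in [-1, 1]^3

uniform mat4 viewProj;  // world -> clip, assumed uniform
uniform vec3 center;    // sphere center, assumed per-sphere uniform
uniform float radius;   // sphere radius, assumed per-sphere uniform

void main()
{
    // A cube of half-extent "radius" centered on the sphere encloses it,
    // so the fragment shader only runs on the pixels the sphere may touch.
    gl_Position = viewProj * vec4(center + radius * inPosition, 1.0);
}
```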
Ohhhh, I have a problem: spheres are cut off…
It’s because the cube gets clipped by the far plane; you have to use an infinite projection (in glm, see infinitePerspective).
For the planes, my advice is to use compute shaders instead of fragment shaders. Indeed, if you put the planes in shared memory (comparable to an L1 cache), you get a small gain in performance.
Shared memory is about 100 times faster than global memory, so if you make a large number of accesses to your buffers, it’s better to use shared memory ^^.
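Here is a sketch of the pattern, not our exact shader: each workgroup copies the plane list from global memory into shared memory once, and every invocation then works from the fast shared copy (the buffer layout and names are assumptions).

```glsl
#version 430
layout(local_size_x = 16, local_size_y = 16) in;

struct Plane { vec4 normalAndD; }; // xyz = n, w = d

// Plane list in global memory (SSBO), assumed binding.
layout(std430, binding = 0) buffer Planes { Plane planes[]; };
uniform int planeCount; // assumed <= 64 for this sketch

shared Plane localPlanes[64]; // shared memory, one copy per workgroup

void main()
{
    // Cooperative copy: each invocation loads at most one plane.
    uint localIndex = gl_LocalInvocationIndex;
    if (localIndex < uint(planeCount))
        localPlanes[localIndex] = planes[localIndex];
    barrier(); // wait until the shared copy is complete

    // ... test this pixel's ray against localPlanes[i] here, which is
    // much cheaper than re-reading planes[i] from global memory ...
}
```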
Now, we can draw lights and shadows ^^.
But it will be for the next article ^^.
Bye 🙂 .