New Project: Architecture and Performance

Hello there :-).

As I promised last time, I have restarted my engine from scratch. I am going to explain why I needed to change my architecture, and walk through my different choices.

To begin, after thinking a lot about real-time illumination, I concluded that Global Illumination is not useful without « correct » direct illumination. That is why I will first implement gamma-correct Phong shading.
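As a side note, here is a minimal sketch of what « gamma correct » means in practice. The function names are mine, and the 2.2 exponent is the usual approximation of the exact (piecewise) sRGB curve: lighting is computed on linear values, and only the final color is re-encoded for the display.

```cpp
#include <cmath>

// Hypothetical helpers: convert between sRGB (gamma-encoded) and linear space.
// All lighting math must run on linear values; only the final color is
// re-encoded. The 2.2 power is an approximation of the true sRGB curve.
float srgbToLinear(float c) { return std::pow(c, 2.2f); }
float linearToSrgb(float c) { return std::pow(c, 1.0f / 2.2f); }
```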

Regarding performance, my old code was very, very slow (you can see the proof below), so I had to implement a powerful multi-stage culling. Moreover, I wanted to use the latest GPU features to increase performance. Therefore, I am using Compute Shaders, Bindless Textures, and Multi Draw Indirect.

To begin, I will explain how my engine works internally. I use several big buffers: for example, one that contains all the vertices, another that contains all the indices, another the draw commands, and so on. The Matrix, Command, and AABB3D buffers are refilled each frame. Then a view frustum culling pass is performed on the GPU; for that, the Command buffer is bound both as a SHADER_STORAGE_BUFFER and as the DRAW_INDIRECT buffer. After culling, a pre-depth pass is performed and, finally, the final model pass. We will see the details later.
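For reference, this is the layout OpenGL expects for each element of the indirect command buffer (the comment on baseInstance reflects my guess at how it can index the per-command matrix and AABB arrays; the other fields are fixed by the OpenGL specification):

```cpp
#include <cstdint>

// Layout of one indirect draw command, as consumed from the
// GL_DRAW_INDIRECT_BUFFER by glMultiDrawElementsIndirect.
struct DrawElementsIndirectCommand {
    std::uint32_t count;         // number of indices for this mesh
    std::uint32_t instanceCount; // GPU culling can set this to 0 to skip the draw
    std::uint32_t firstIndex;    // offset into the shared index buffer
    std::uint32_t baseVertex;    // offset into the shared vertex buffer
    std::uint32_t baseInstance;  // can index the matrix / AABB arrays (assumption)
};
```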

To keep the scene coherent (glasses on a cupboard, for example: if you move the cupboard, the glasses have to move as well), you need a scene graph. I already explained that I use a system of Nodes for this. I have now improved this model with one obvious feature: if a parent Node does not have to be rendered, its child Nodes and Models are not rendered either. My system is decomposed into 3 objects: one SceneManager, one Node, and one kind of part attached to a Node (there may be several kinds of parts; lights, for example, will be another one): the ModelNode.
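A minimal sketch of that visibility rule, with hypothetical names (the real classes carry much more state): if a parent is culled, its whole subtree is skipped without even being visited.

```cpp
#include <vector>

// Toy Node: if this node is not visible, none of its children are visited.
struct Node {
    bool visible = true;
    std::vector<Node*> children;
    int modelCount = 0; // stand-in for the list of ModelNodes

    int pushModelsInPipeline() {      // returns how many models were pushed
        if (!visible) return 0;       // parent culled => whole subtree skipped
        int pushed = modelCount;
        for (Node *child : children)
            pushed += child->pushModelsInPipeline();
        return pushed;
    }
};
```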

ModelNode

The ModelNode is a part of a Node that works like a « pair » binding one Model to one Matrix.
A ModelNode contains:
– One bounding box in world space (it depends on the global matrix of the parent Node).
– One matrix that expresses a frame relative to the global matrix of the parent Node. If a transformation is applied to this object, the bounding box of the ModelNode and the bounding boxes of « all » parent Nodes are recomputed.

The function « PushInPipeline » may look strange to you, so let me explain its role. As I said, I have several big buffers; this function fills a few of them: the Matrix, bounding box, and Command buffers. There are always exactly the same number of Commands, AABBs, and Matrices. That is not obvious for the Matrices: if you have one model with 300 meshes, you normally get 300 commands but only one matrix. In my system, the matrix is therefore duplicated 300 times. You will see why later.
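Here is a toy sketch of that invariant (types and names are simplified stand-ins, not the real engine code): the three arrays stay the same length, so command i can look up its matrix and bounding box with the same index i.

```cpp
#include <array>
#include <vector>

using Mat4 = std::array<float, 16>;
struct AABB    { float min[3], max[3]; };
struct Command { unsigned firstIndex, count; };

struct Pipeline {
    std::vector<Command> commands;
    std::vector<Mat4>    matrices;
    std::vector<AABB>    aabbs;

    // One model = many meshes = many commands, but a single model matrix:
    // the matrix is duplicated once per command to keep the arrays aligned.
    void pushModel(const std::vector<Command> &meshes,
                   const Mat4 &modelMatrix, const AABB &box) {
        for (const Command &c : meshes) {
            commands.push_back(c);
            matrices.push_back(modelMatrix); // same matrix, repeated
            aabbs.push_back(box);
        }
    }
};
```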

Node

The Node object is the root of a sub-tree in our scene graph. Each child of a Node is expressed in the frame of this root, and the bounding box of this root depends on the children's bounding boxes. Therefore, updating this « tree » may be a top-down traversal (if a root transformation is performed) or a bottom-up traversal (if a child or Model transformation is performed).
This Node contains:
-a parent
-a vector of children
-a vector of Models
-a globalFactor: a global scale factor (useful later for lights when scaling a Node).
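A small sketch of the bottom-up update, with hypothetical types. Note that it only grows the ancestor boxes to enclose the changed child; a full recompute would also be able to shrink them.

```cpp
#include <algorithm>

struct AABB3D { float min[3], max[3]; };

// Grow `parent` so it encloses `child` (used when propagating a change upward).
void merge(AABB3D &parent, const AABB3D &child) {
    for (int i = 0; i < 3; ++i) {
        parent.min[i] = std::min(parent.min[i], child.min[i]);
        parent.max[i] = std::max(parent.max[i], child.max[i]);
    }
}

struct NodeBox {
    AABB3D box;
    NodeBox *parent = nullptr;
};

// Bottom-up traversal: after a child (or Model) transform, walk to the root,
// re-growing each ancestor's bounding box on the way.
void propagateUp(NodeBox *node) {
    for (NodeBox *p = node->parent; p != nullptr; p = p->parent)
        merge(p->box, node->box);
}
```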

SceneManager

As you can see, the SceneManager contains two objects: a pointer to a Node and a Camera. The latter is not important here, and it may move into a Node later, for example. The objective of this class is to give the user one interface to render the whole scene; lights do not matter yet, but later we could add a renderLights function, for example. The render function calls, successively, initialize, renderModels, and renderFinal. The initialize function updates the camera position and frustum and « resets » our pipeline; renderModels renders all models and material information into a framebuffer; the last one draws the final texture on the screen with post-processing.
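The call sequence can be sketched like this (a toy class that only records the order of the three steps; the real functions, of course, do the actual work described above):

```cpp
#include <string>
#include <vector>

// Toy SceneManager recording the order of its rendering steps.
struct SceneManager {
    std::vector<std::string> trace;
    void initialize()   { trace.push_back("initialize");   } // camera, frustum, pipeline reset
    void renderModels() { trace.push_back("renderModels"); } // models + materials -> framebuffer
    void renderFinal()  { trace.push_back("renderFinal");  } // post-process, draw to screen
    void render() { initialize(); renderModels(); renderFinal(); }
};
```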

Wait, that looks a lot like the « normal » pipeline, right?
Yes, but now we can improve performance with a pre-depth pass and view frustum culling.

What is view frustum culling? The idea behind this term is to reduce the number of drawn objects in each frame: you test whether an object is inside the camera view, and if it is not, you don't render it.
How do you perform this culling efficiently?
For a coarse culling, you can run the test on the CPU. In my engine, this test is performed in the Node function pushModelsInPipeline.
For a really accurate and efficient culling, since you have a large number of bounding boxes, you should use the GPU. A good combination of indirect draws and shader storage buffers, driven by a compute shader, will be your friend.
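Here is a CPU-side sketch of the test such a culling shader performs (simplified: a box is kept unless some frustum plane has it entirely on its negative side; this conservative test can keep a few boxes that are actually outside, which is acceptable for culling):

```cpp
#include <array>

// Plane in the form ax + by + cz + d >= 0 for the inside half-space.
struct Plane  { float a, b, c, d; };
struct AABB3D { float min[3], max[3]; };

bool insideFrustum(const AABB3D &box, const std::array<Plane, 6> &planes) {
    for (const Plane &p : planes) {
        // Pick the box corner farthest along the plane normal (the "p-vertex").
        float x = p.a >= 0 ? box.max[0] : box.min[0];
        float y = p.b >= 0 ? box.max[1] : box.min[1];
        float z = p.c >= 0 ? box.max[2] : box.min[2];
        if (p.a * x + p.b * y + p.c * z + p.d < 0)
            return false; // whole box behind this plane => culled
    }
    return true;
}
```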

A pre-depth pass avoids shading color-buffer pixels that would later be overwritten by a nearer object. The rendering then takes 2 passes, but it is, in general, more efficient :).

Here is a little graph with and without optimizations (frame time in ms against framebuffer size).

[Graph: frame time (ms) against framebuffer size, with and without optimizations]

As you can see in this graph, the gain from culling is almost constant, while the gain from the pre-depth pass depends on the size of the framebuffer's textures.
We gain nearly a factor of two with the pre-depth pass and culling combined, and with a higher number of objects (here only Sponza is drawn), the gain would be even larger :-).

If you want to see the code or its documentation:

https://github.com/qnope/Galaxy-Engine
http://qnope.github.io/Galaxy-Engine/

See you next time :-).

Kisses !

Shadows of Darkness

Hello :-). Today we are going to talk about shadows in my engine. So, why are shadows so useful in video games?

Simply because they strongly improve the realism of a scene. Here is one example with, and one without, shadows.

[Screenshot: scene with shadows]

[Screenshot: scene without shadows]

The first one is better, right?
So, how can we draw shadows in real time?
There are two ways to do it.

The first way, which we will not really discuss today, is the Stencil Shadow Volume. The main idea hidden behind this concept is that you detect the silhouette of each object and extrude it along the light rays; thus, you can know whether you are in shadow or not. If you want more explanations, here are some links to improve your knowledge about Stencil Shadow Volumes.

Silhouette Detection
Stencil Shadow Volume
Efficient Shadow Volume

I don't use shadow volumes because they don't allow soft shadowing, and they are more complicated than shadow mapping.

The second way is Shadow Mapping. We are not going to talk about soft shadow mapping yet, only hard shadow mapping, even though soft shadows improve realism further; they are more complicated too.

So, what is the main idea behind Shadow Mapping?
It is a multi-pass algorithm.
In the first pass, you render the scene from the point of view of the light, and you store in the shadow map the distance from the light position to each mesh.
In the second pass, you render the scene from the point of view of the camera, and you compare the current distance with the shadow-map distance to know whether you are in shadow. If the shadow-map distance is bigger, you are not in shadow; otherwise, you are.
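The comparison of the second pass boils down to this little sketch (the small bias term, discussed further down, avoids self-shadowing artifacts):

```cpp
// A fragment is in shadow if its distance to the light is (noticeably)
// greater than the distance stored in the shadow map; the bias absorbs
// precision errors that would otherwise cause "shadow acne".
bool inShadow(float fragmentDist, float shadowMapDist, float bias = 0.005f) {
    return fragmentDist - bias > shadowMapDist;
}
```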
If you want point-light shadows, you have to render the scene 6 times from the light's point of view, once in each direction. Here is my own code to render into a cube map.

static vec3 const look[] = {vec3(1.0, 0.0, 0.0),
                            vec3(-1.0, 0.0, 0.0),
                            vec3(0.0, 1.0, 0.0),
                            vec3(0.0, -1.0, 0.0),
                            vec3(0.0, 0.0, 1.0),
                            vec3(0.0, 0.0, -1.0)};

static vec3 const up[] = {vec3(0.0, -1.0, 0.0),
                          vec3(0.0, -1.0, 0.0),
                          vec3(0.0, 0.0, 1.0),
                          vec3(0.0, 0.0, -1.0),
                          vec3(0.0, -1.0, 0.0),
                          vec3(0.0, -1.0, 0.0)};

// Write the light's world-space position and radius into the shadow buffer.
LightShadow *lightShadowPtr = (LightShadow*)shadowBuffer->map();

vec3 position = (matrix * vec4(mPosRadius.xyz(), 1.0)).xyz();

lightShadowPtr->positionRadius = vec4(position, mPosRadius.w);

glMemoryBarrier(GL_UNIFORM_BARRIER_BIT);

// 90 degree field of view with a square aspect ratio: exactly one cube face.
mat4 projection = perspective<float>(M_PI / 2.0, 1.0, 1.0, mPosRadius.w);

mShadowMaps.bindToDraw(true, device);

// Render the scene once per cube-map face.
for(u32 i = 0; i < 6; ++i)
{
    mShadowMaps.attachCubeMapLayer(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 1);
    device->clearDepthColorBuffer();

    mat4 projectionView = projection * lookAt(position, position + look[i], up[i]);

    rootNode->renderModelsShadow(mat4(1.0f), projectionView, matrixBuffer);
    synchronize();
}

mShadowMaps.bindToDraw(false, device);


[Screenshot: cube shadow map]

This is the cube shadow map of my scene.

You have to apply a small bias; otherwise, you get some little issues ^^.

[Screenshot: shadow artifacts without bias]

You can see many little artifacts; even if it looks pretty, it's not realistic :D.

[Screenshot: hard shadows showing aliasing]

You can also see a little bit of aliasing.

Many solutions exist to avoid this aliasing. We are going to talk about variance shadow maps.

Let me remind you of the expected value and the variance:

E(x)=\int_{-\infty}^{+\infty}xp(x)dx and Var(x)=\int_{-\infty}^{+\infty}x^{2}p(x)dx-E(x)^{2}

In the shadow map, the variance can be approximated locally by \displaystyle 0.25\left(\left(\frac{\partial f}{\partial x} \right)^{2} + \left(\frac{\partial f}{\partial y} \right)^{2}\right)
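These two moments feed the one-sided Chebyshev inequality, which bounds the probability that a point at distance t from the light is lit, with \mu the stored mean distance (moments.x) and \sigma^{2} the variance (moments.y):

P(x \geq t) \leq p_{max}(t)=\frac{\sigma^{2}}{\sigma^{2}+(t-\mu)^{2}}

This p_{max} is exactly the factor p computed in the shading code below.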

So, here is my code to store the data into the variance shadow map:

moments.x = distance(position, positionRadius.xyz) / positionRadius.w; // normalized distance to the light
float dx = dFdx(moments.x);
float dy = dFdy(moments.x);
moments.y = 0.25 * (dx * dx + dy * dy); // local variance from screen-space derivatives

And here is the code to compute the shadow factor:

vec2 moments = texture(depthCube, lightToVertex).xy; // mean distance and variance
float variance = max(moments.y, 0.005);              // clamp to avoid numerical issues
float diff = dist - moments.x;
float p = variance / (variance + diff * diff);       // Chebyshev upper bound
return smoothstep(0.5, 1.0, 2 * (p - 0.5));          // rescale to reduce light bleeding

[Screenshot: result with variance shadow mapping]

No aliasing now :).

The next time, we will talk about reflections with environment map.

See you soon :).

References:

OpenGL-Tutorial Shadow mapping
Omnidirectional shadows
Variance Shadow Maps