New Project: Architecture and Performance

Hello there :-).

As I promised last time, I have rewritten my engine from scratch. I am going to explain why I needed to change my architecture, and walk through my different choices.

To begin, after thinking a lot about real-time illumination, I realized that it is not useful to have Global Illumination without « correct » direct illumination first. That is why I will implement gamma-correct Phong shading.

About performance: my old code was very bad (you will see the proof below), so I had to implement a powerful multi-stage culling. Moreover, I wanted to use the latest GPU technologies to increase performance. Therefore, I am using Compute Shaders, Bindless Textures, and Multi Draw Indirect.

To begin with, I will explain how my engine works internally. I use several big buffers: for example, one contains all the vertices, another all the indices, another the draw commands, and so on. The Matrix, Command, and AABB3D buffers are refilled each frame. Then, view frustum culling is performed on the GPU; for that, the Command buffer is bound both as a SHADER_STORAGE_BUFFER and as a DRAW_INDIRECT buffer. After culling, a pre-depth pass is performed and, finally, the final model pass. We will see the details later.
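To make this concrete, here is a minimal sketch of the idea, assuming GLEW and a GL 4.4 context; cullingProgram, commandBuffer and numCommands are hypothetical names, and the VAO with its vertex and index buffers is assumed to be already bound:

#include <GL/glew.h>

// One buffer, two roles: SSBO for the culling shader, then indirect draw source.
void cullAndDraw(GLuint cullingProgram, GLuint commandBuffer, GLuint numCommands)
{
    // 1. GPU culling: the compute shader tests each AABB against the frustum
    //    and writes instanceCount = 0 or 1 into the matching draw command.
    glUseProgram(cullingProgram);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, commandBuffer);
    glDispatchCompute((numCommands + 255) / 256, 1, 1);
    glMemoryBarrier(GL_COMMAND_BARRIER_BIT);

    // 2. A single draw call then consumes all the surviving commands.
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, commandBuffer);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                nullptr, numCommands, 0);
}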

To keep a certain coherence in the scene (think of glasses on a cupboard: if you move the cupboard, the glasses have to move as well), you need a scene graph. To do that, as I already explained, I use a system of Nodes. But now, I have improved this model with one obvious feature: if a parent Node does not have to be rendered, its child Nodes and Models are not rendered either. My system is decomposed into three objects: one SceneManager, one Node, and one part of this Node, the ModelNode (there may be several kinds of parts; lights, for example, would be another one).

ModelNode

It is a part of a Node: a class that works like a « pair » of one Model and one Matrix.
A ModelNode contains:
– One bounding box in world space (it depends on the global matrix of the parent Node).
– One matrix expressing a frame relative to the global matrix of the parent Node. If a transformation is applied to this object, the bounding box of this ModelNode and the bounding boxes of « all » its parent Nodes are recomputed.

The function « PushInPipeline » may seem strange to you, so I am going to try to explain its role. As I already said, I have several big buffers; this function modifies a few of them: the Matrix, bounding box, and Command buffers. You end up with exactly the same number of Commands, AABBs, and Matrices. It is not obvious to have exactly the same number of Matrices and Commands: for example, if you have one model with 300 meshes, you normally have 300 commands but only one matrix. In my system, you therefore have 300 matrices. You will see why below.
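Here is a rough sketch of what this could look like; every type and name below (Mesh, AABB3D, DrawElementsIndirectCommand, mModel, mMatrix) is my reading of the text, not the engine's actual interface:

#include <glm/glm.hpp>
#include <vector>

void ModelNode::pushInPipeline(std::vector<DrawElementsIndirectCommand> &commands,
                               std::vector<AABB3D> &boxes,
                               std::vector<glm::mat4> &matrices,
                               glm::mat4 const &parentGlobalMatrix) const
{
    glm::mat4 const world = parentGlobalMatrix * mMatrix;
    for (Mesh const &mesh : mModel->meshes())
    {
        commands.push_back(mesh.command());   // one command per mesh
        boxes.push_back(mesh.box().transformed(world));
        matrices.push_back(world);            // same matrix, duplicated on purpose
    }
}

Duplicating the matrix looks wasteful, but it means that command i, AABB i and matrix i always describe the same mesh, which is exactly what the GPU culling pass needs to index all three with a single invocation id.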

Node

The Node object is the root of a sub-tree in our scene graph. Each child of a Node depends on the frame of this root, and the bounding box of this root depends on its children's bounding boxes. Therefore, rebuilding this « tree » may be a top-down traversal (if a root transformation is performed) or a bottom-up one (if a child or Model transformation is performed).
This Node contains:
-a parent
-a vector of children
-a vector of Models
-a globalFactor: a global scale factor (useful for the future lights when a Node is scaled).
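In code, these two objects could look roughly like this; it is only a sketch, and every name (AABB3D, Model, the members) is my reading of the description above, not the engine's actual interface:

#include <glm/glm.hpp>
#include <memory>
#include <vector>

struct AABB3D { glm::vec3 min, max; };   // assumed layout
struct Model;                            // shared geometry, defined elsewhere

struct ModelNode
{
    Model *model;         // the Model of the « pair »
    glm::mat4 matrix;     // frame relative to the parent Node's global matrix
    AABB3D worldBox;      // bounding box in world space
};

struct Node
{
    Node *parent = nullptr;
    std::vector<std::unique_ptr<Node>> children;
    std::vector<ModelNode> models;
    float globalFactor = 1.0f;  // accumulated scale, for the future lights
    AABB3D box;                 // encloses all children and model boxes
};

A root transformation is propagated down this tree, while a child or Model transformation walks back up so the ancestors can refit their bounding boxes.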

SceneManager

As you can see, the SceneManager contains two objects: one pointer to a Node and one Camera. The latter is not important here, and it may move into Node later, for example. The objective of this class is to give the user one interface to render the whole scene. Lights don't matter here yet, but later we can add a renderLights function, for example. The function render calls successively initialize, renderModels and renderFinal: initialize updates the camera position and frustum and « resets » our pipeline, renderModels renders all models and material information into the framebuffer, and the last one draws the final texture on the screen with post-processing, as sketched below.
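In other words, something like this sketch, where only render, initialize, renderModels and renderFinal come from the description above; the Device parameter is an assumption:

void SceneManager::render(Device &device)
{
    initialize();          // update the camera position and frustum, reset the pipeline
    renderModels(device);  // draw all models and material data into the framebuffer
    renderFinal(device);   // draw the final texture on screen with post-processing
}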

Wait, this looks a lot like the « normal » pipeline, right?
Yes, but now we can improve performance with a pre-depth pass and view frustum culling.

What is view frustum culling? The idea behind this term is to reduce the number of « drawn objects » in the frame. You test whether each object is inside the camera view, and if it is not, you don't render it.
How do we perform this culling efficiently?
For a coarse, inaccurate culling, you can perform the test on the CPU. Globally, this test is performed in the Node function pushModelsInPipeline; a possible version is sketched below.
To have a really accurate and efficient culling, because you have a high number of bounding boxes, you should use your GPU. A good combination of indirect draws and shader storage buffers, together with compute shaders, will be your friend.
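For the CPU side, a classic coarse test could look like this sketch, assuming an AABB3D with min and max corners and the six frustum planes stored as vec4(normal, d):

#include <glm/glm.hpp>

bool isOutsideFrustum(AABB3D const &box, glm::vec4 const (&planes)[6])
{
    for (glm::vec4 const &p : planes)
    {
        // Take the corner of the box that is farthest along the plane normal:
        glm::vec3 positive(p.x > 0.0f ? box.max.x : box.min.x,
                           p.y > 0.0f ? box.max.y : box.min.y,
                           p.z > 0.0f ? box.max.z : box.min.z);
        // If even this corner is behind the plane, the whole box is outside.
        if (glm::dot(glm::vec3(p), positive) + p.w < 0.0f)
            return true;
    }
    return false; // conservative: may keep boxes that are actually invisible
}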

The pre-depth pass avoids shading, in the colour buffers, data that a nearer object would overwrite anyway. So the rendering takes two passes, but it is, in general, more efficient :).
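The GL state for such a pre-pass is nothing exotic; a minimal sketch, where drawScene stands for whatever submits the geometry:

void renderWithDepthPrePass()
{
    // Pass 1: depth only, the colour buffers are left untouched.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthFunc(GL_LESS);
    drawScene();

    // Pass 2: shade only the fragments that actually won the depth test.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);   // the depth buffer is already complete
    glDepthFunc(GL_LEQUAL);  // let fragments equal to the stored depth pass
    drawScene();
    glDepthMask(GL_TRUE);
}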

Here is a little chart with and without the optimizations (ms / framebuffer size).

courbe1

As you can see in this picture, the gain given by culling is almost constant, but the pre-depth pass gain depends on the size of the framebuffer's textures.
We gain nearly a factor of two with the pre-depth pass and culling, and with a higher number of objects (here only Sponza is drawn), you would gain even more :-).

If you want to see the code or its documentation:

https://github.com/qnope/Galaxy-Engine
http://qnope.github.io/Galaxy-Engine/

See you next time :-).

Kisses !

Ambient Occlusion revisited, reflections, global illumination and new project

Hello there.

Today, we are going to talk about 4 subjects.
The first one is a new computation of ambient occlusion; the second one is reflections, as I said last time.

The final part of this article will be about GI and about a new graphics engine.

Even if we are going to cover several subjects, it will be quick, because I don't have much to say about each of them.

So, let's start with ambient occlusion.

We recall that:

\displaystyle{AO=\frac{1}{\pi}\int_{\Omega}V(\overrightarrow{x},\overrightarrow{\omega})\cdot\cos(\theta)\cdot d\omega}
\displaystyle{AO=\frac{1}{\pi}\int_{[0,2\pi]}\int_{[0,\frac{\pi}{2}]}V(\overrightarrow{x},\overrightarrow{\omega})\cdot\cos(\theta)\sin(\theta)\cdot d\theta d\phi}
\displaystyle{AO=1-\frac{1}{2\pi}\int_{[0,2\pi]}\int_{[0,\frac{\pi}{2}]}\overline{V}(\overrightarrow{x},\overrightarrow{\omega})\cdot\sin(2\theta)\cdot d\theta d\phi}
\displaystyle{AO\approx 1-\frac{\pi}{2n}\sum_{i=1}^{n}\overline{V}(\overrightarrow{x},\overrightarrow{\omega_{i}})\cdot\sin(2\theta_{i})}

where \theta_{i} and \overrightarrow{\omega_{i}} are « random » variables.

If you want the « real AO » (« real » is not quite the right word, because ambient occlusion is not a physically based measure), you have to raytrace this function and take care of all of its terms.

So, we want to approximate this formula once more. We now have two textures: one is the normal map, the other is the position map.

If you want to know the AO at one point, you take a few samples around this pixel (the sampling area is a disk in my code).
If the sampled ray stays in the hemisphere, the angle between the ray and the normal is \theta\in[-\frac{\pi}{2},+\frac{\pi}{2}]. You can improve this formula with a distance factor.
We don't really need the view function: if a sample is not « empty », you have an intersection, and if the cell is empty, \overrightarrow{n}\cdot\overrightarrow{\omega_{i}}=0, so it doesn't matter.

#version 440 core

/* Uniform */
#define CONTEXT 0

layout(shared, binding = CONTEXT) uniform Context
{
    vec4 invSizeScreen;
    vec4 cameraPosition;
};

layout(binding = 0) uniform sampler2D positionSampler;
layout(binding = 1) uniform sampler2D normalSampler;

// Poisson disk
vec2 samples[16] = {
vec2(-0.114895, -0.222034),
vec2(-0.0587226, -0.243005),
vec2(0.249325, 0.0183541),
vec2(0.13406, 0.211016),
vec2(-0.382147, -0.322433),
vec2(0.193765, -0.460928),
vec2(0.459788, -0.196457),
vec2(-0.0730352, 0.494637),
vec2(-0.323177, -0.676799),
vec2(0.198718, -0.723195),
vec2(0.722377, -0.201672),
vec2(0.396227, 0.636792),
vec2(-0.372639, -0.927976),
vec2(0.474483, -0.880265),
vec2(0.911652, 0.410962),
vec2(-0.0112949, 0.999936),
};

in vec2 texCoord;
out float ao;

void main(void)
{
    vec3 positionAO = texture(positionSampler, texCoord).xyz;
    vec3 N = texture(normalSampler, texCoord).xyz;

    vec2 radius = vec2(4.0 * invSizeScreen.xy); // 4 pixels
    float total = 0.0;
    ao = 0.0; // 'ao' is the out variable: initialize it before accumulating

    for(uint i = 0; i < 16; i ++)
    {
        vec2 texSampler = texCoord + samples[i] * radius;
        vec3 dirRay = texture(positionSampler, texSampler).xyz - positionAO;
        float cosTheta = dot(normalize(dirRay), N);

        if(cosTheta > 0.01 && dot(dirRay, dirRay) < 100.0)
        {
            ao += sin(2.0 * acos(cosTheta));
            total += 1.0;
        }
    }

    // Dividing by 2 * total instead of multiplying by pi gives a nicer result
    // The + 0.01 avoids a division by zero when total == 0
    ao = 1.0 - ao / (2.0 * total + 0.01);
}

For performance, you compute this at a quarter of the screen resolution (half width, half height), and you can apply a Gaussian blur in two separable passes thanks to compute shaders, as sketched below.
For a rendering at 1920 * 1080 (the occlusion map here is 1024 * 1024), the ambient occlusion pass takes less than 2 ms.
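On the host side, the two blur passes could be dispatched like this; it is only a sketch, the program and texture names are assumptions, and each program is supposed to hold the one-dimensional kernel for its axis:

void blurOcclusionMap(GLuint blurXProgram, GLuint blurYProgram,
                      GLuint aoTexture, GLuint w, GLuint h)
{
    glBindImageTexture(0, aoTexture, 0, GL_FALSE, 0, GL_READ_WRITE, GL_R32F);

    glUseProgram(blurXProgram);               // horizontal pass
    glDispatchCompute((w + 255) / 256, h, 1);
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);

    glUseProgram(blurYProgram);               // vertical pass
    glDispatchCompute((h + 255) / 256, w, 1);
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
}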

I have little to say about reflections; I don't use the optimisations of László Szirmay-Kalos, Barnabás Aszódi, István Lazányi and Mátyás Premecz from their paper Approximate Ray-Tracing on the GPU with Distance Impostors.

So, it is only an environment map, rendered like the omnidirectional shadows (see the previous article).

reflexion

Global Illumination (GI): a very interesting subject, on which researchers spend a lot of time…

GI is the bouncing of light off different surfaces.

GI1

You can see blue, green, and red bleeding onto the floor.

I did not implement a very efficient model, so I am only going to present the path I took and the ideas I found.

So, for our mathematical model, we write \overline{\cos} for the cosine clamped between 0 and 1.

We need a BRDF (Bidirectional Reflectance Distribution Function). The BRDF is a function (really :p?) which, given two directions (\omega_{i} and \omega_{o}), returns the ratio of outgoing power to incident power. This function satisfies these properties:

\displaystyle{\int_{\Omega}BRDF(\omega_{i},\omega_{o})\cdot\cos(\theta_{r})\cdot d\omega_{r}\leq 1}
\displaystyle{BRDF(\omega_{i},\omega_{o})=BRDF(\omega_{o},\omega_{i})}

For a Lambertian surface, light is reflected with the same intensity over the whole hemisphere, so we can set BRDF=K and, using the first property:

\displaystyle{\int_{\Omega}BRDF(\omega_{i},\omega_{o})\cdot\cos(\theta_{r})\cdot d\omega_{r}\leq 1}
\displaystyle{\int_{\Omega}K\cdot\cos(\theta_{r})d\omega_{r}\leq 1}
\displaystyle{K\pi\leq 1}
\displaystyle{K\leq\frac{1}{\pi}}

so

\displaystyle{BRDF_{lambertian}=\frac{\rho}{\pi}} where \rho\in[0, 1].

\rho is called the « albedo ». For clean snow, \rho is close to 1; for a very dark surface, it is close to 0.

The basic idea behind my algorithm is based on Instant Radiosity from Keller.

We render the scene from the light source into a very small framebuffer (8 * 8, 16 * 16, 32 * 32, or 64 * 64, but that last one is already too much), and thanks to a shader storage buffer, we can store one light per pixel into this buffer.

\displaystyle{L_{o}=\frac{\rho\cdot K_{d}}{N\cdot\pi}\cdot L_{i}\cdot\overline{\cos(\theta_{i})}\cdot PHONG}
where PHONG (look up « Phong lighting » on Google) is the attenuation of the light's power at x, K_{d} is the colour of the object touched by the light, and N is the number of virtual lights.

After that, you just have to render these lights like the others.
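Concretely, the storage for these virtual lights could be set up like this; it is a sketch, and the VPL layout is my assumption of what « store our lights » means:

#include <GL/glew.h>
#include <glm/glm.hpp>

struct VPL
{
    glm::vec4 positionRadius;  // hit position and light radius
    glm::vec4 colourIntensity; // K_d * L_i, already attenuated
};

GLuint createVPLBuffer(unsigned side) // side = 8, 16, 32...
{
    GLuint buffer;
    glGenBuffers(1, &buffer);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, buffer);
    glBufferData(GL_SHADER_STORAGE_BUFFER, side * side * sizeof(VPL),
                 nullptr, GL_DYNAMIC_COPY);
    // Bound as an SSBO, the light-view fragment shader writes one VPL per
    // pixel; a barrier is needed before the main pass reads them back.
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, buffer);
    return buffer;
}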

So, for the new project, I am going to start writing an open source graphics engine. I will try to make it totally bindless (zero texture binds thanks to the bindless texture extension, and few buffer binds: only a few big buffers). I will implement a basic space partitioning tree entirely on the GPU thanks to compute shaders, and Phong shading or a Phong BRDF (if I succeed ^^).

See you !

Shadows of Darkness

Hello :-). Today we are going to talk about shadows in my engine. So, why are shadows so useful in video games?

Simply because they strongly improve the realism of a scene. One example with, and without, shadows.

Capture d'écran de 2014-12-30 14:47:55

Capture d'écran de 2014-12-30 14:49:25

The first one is better, right?
So, how can we draw shadows in real time?
You have two ways to do it.

The first way, which we won't really discuss today, is the Stencil Shadow Volume. The main idea behind this concept is that you detect the silhouette of each object and extrude it along the light rays; you can then know whether you are in shadow or not. If you want more explanations, here are some links to improve your knowledge about Stencil Shadow Volumes.

Silhouette Detection
Stencil Shadow Volume
Efficient Shadow Volume

I don't use shadow volumes because they don't allow soft shadowing, and they are more complicated than shadow mapping.

The second way is Shadow Mapping. We are not going to talk about soft shadow mapping yet, only hard shadow mapping, even though soft shadows improve realism further (they are also more complicated).

So, what is the main idea behind Shadow Mapping?
It is a multi-pass algorithm.
In the first pass, you render the scene from the point of view of the light, and you store in the shadow map the distance of each mesh from the point light position.
In the second pass, you render the scene from the point of view of the camera, and you compare the current distance with the shadow map distance to know whether you are in shadow. If the shadow map distance is bigger, you are not in shadow; otherwise, you are.
If you want point light shadows, you have to render the scene 6 times from the light's point of view, once in each direction. Here is my own code to render into a cube map.

// look / up vectors for the six cube map faces (+X, -X, +Y, -Y, +Z, -Z)
static vec3 const look[] = {vec3(1.0, 0.0, 0.0),
                            vec3(-1.0, 0.0, 0.0),
                            vec3(0.0, 1.0, 0.0),
                            vec3(0.0, -1.0, 0.0),
                            vec3(0.0, 0.0, 1.0),
                            vec3(0.0, 0.0, -1.0)};

static vec3 const up[] = {vec3(0.0, -1.0, 0.0),
                          vec3(0.0, -1.0, 0.0),
                          vec3(0.0, 0.0, 1.0),
                          vec3(0.0, 0.0, -1.0),
                          vec3(0.0, -1.0, 0.0),
                          vec3(0.0, -1.0, 0.0)};

// Send the light's world-space position and radius to the shaders
LightShadow *lightShadowPtr = (LightShadow*)shadowBuffer->map();

vec3 position = (matrix * vec4(mPosRadius.xyz(), 1.0)).xyz();

lightShadowPtr->positionRadius = vec4(position, mPosRadius.w);

glMemoryBarrier(GL_UNIFORM_BARRIER_BIT);

// 90-degree FOV with an aspect ratio of 1: the six frusta cover every direction
mat4 projection = perspective<float>(M_PI / 2.0, 1.0, 1.0, mPosRadius.w);

mShadowMaps.bindToDraw(true, device);

for(u32 i = 0; i < 6; ++i)
{
    mShadowMaps.attachCubeMapLayer(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 1);
    device->clearDepthColorBuffer();

    mat4 projectionView = projection * lookAt(position, position + look[i], up[i]);

    rootNode->renderModelsShadow(mat4(1.0f), projectionView, matrixBuffer);
    synchronize();
}

mShadowMaps.bindToDraw(false, device);
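For the second pass, the comparison itself boils down to this, bias included; it is written here in C++ with glm so it mirrors what the GLSL would do, and the names and the bias value are illustrative:

#include <glm/glm.hpp>

float hardShadowFactor(float storedDistance, glm::vec3 const &lightToVertex,
                       float lightRadius, float bias = 0.05f)
{
    // Both distances are normalized by the light radius, like the shadow map.
    float current = glm::length(lightToVertex) / lightRadius;
    // Lit if the first occluder seen by the light is not in front of us.
    return (storedDistance + bias >= current) ? 1.0f : 0.0f;
}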


Capture d'écran de 2014-12-28 18:39:16

This is the cube shadow map of my scene.

You have to apply a small bias; otherwise, you can get some little issues ^^.

Capture d'écran de 2014-12-28 18:43:54

You can see many little artefacts; even if it's pretty, it's not realistic :D.

Capture d'écran de 2014-12-28 18:49:37

You can see a bit of aliasing.

Many solutions exist to avoid this aliasing. We are going to talk about Variance Shadow Maps.

I remind you of the expectation and the variance:

E(x)=\int_{-\infty}^{+\infty}xp(x)dx and Var(x)=\int_{-\infty}^{+\infty}x^{2}p(x)dx-E(x)^{2}

Variance can be approximated by \displaystyle 0.25\left(\left(\frac{\partial f}{\partial x} \right)^{2} + \left(\frac{\partial f}{\partial y} \right)^{2}\right)

So, my code to store the data into the variance shadow map is:

// First moment: distance from the light, normalized by the light radius
moments.x = distance(position, positionRadius.xyz) / positionRadius.w;
float dx = dFdx(moments.x);
float dy = dFdy(moments.x);
// Variance, approximated with the screen-space derivatives above
moments.y = 0.25 * (dx * dx + dy * dy);

And to compute shadow factor :

vec2 moments = texture(depthCube, lightToVertex).xy;
float variance = max(moments.y, 0.005); // clamp to avoid a null variance
float diff = dist - moments.x;
// Chebyshev upper bound on the probability that this fragment is lit
float p = variance / (variance + diff * diff);
// Rescale p to reduce light bleeding
return smoothstep(0.5, 1.0, 2 * (p - 0.5));
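For the curious, that p is the one-sided Chebyshev inequality: with the mean E(x) stored in moments.x and the variance \sigma^{2} in moments.y, we have

\displaystyle{P(x\geq dist)\leq\frac{\sigma^{2}}{\sigma^{2}+(dist-E(x))^{2}}}

i.e. an upper bound on the fraction of the filtered area that lies beyond dist and therefore does not occlude us. The final smoothstep only rescales p to reduce light bleeding.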

Capture d'écran de 2014-12-28 20:29:07

No aliasing now :).

Next time, we will talk about reflections with environment maps.

See you soon :).

References:

OpenGL-Tutorial Shadow mapping
Omnidirectional shadows
Variance Shadow Maps

Ambient occlusion: explanations

Hello there. I am sorry that I have not written for a long time, but I had to take my exams ^^.

So, what is ambient occlusion?

AO_firstResult

Here we can see, respectively: no ambient occlusion, the occlusion map, and the rendering with ambient occlusion.

So, ambient occlusion improves the shadowing in a scene.

Generally, Ambient Occlusion is defined by

\displaystyle{ka=\frac{1}{\pi}\int_{\Omega}V(\overrightarrow{\omega})\cdot\cos(\theta)\cdot d\omega}

Ohhhh, it's a very difficult formula, with a strange integral.

We are going to see how we can obtain a formula of this kind.

Firstly, ambient occlusion is a « ray tracing » technique. The idea behind ambient occlusion is that you launch many rays throughout the hemisphere oriented by the normal \overrightarrow{n}.

Obviously, a ray orthogonal to the normal is less influential than a ray parallel to it, so we can introduce a dot product between the ray and the normal into the integral. We do not integrate over the whole solid hemisphere, but only over the hemisphere's surface.

I remind you that the infinitesimal surface of sphere is d\omega = R^{2}\sin(\theta) d\theta d\phi .

So, we can write:

\displaystyle{ka = K \cdot ka_{total}\hspace{2mm}where\hspace{2mm}ka_{total} = \int_{\Omega} V(\overrightarrow{\omega})\cdot \cos(\overrightarrow{n},\overrightarrow{\omega})\cdot d\overrightarrow{\omega}}

Where \Omega is the hemisphere oriented by \overrightarrow{n}, \overrightarrow{\omega} is the ray « launched » into the hemisphere, and V(\overrightarrow{\omega}) is the view function defined by

\displaystyle{V(\overrightarrow{\omega})=\left\{\begin{matrix}0&\hspace{1mm}if\hspace{1mm}occluder\\1&\hspace{1mm}if\hspace{1mm}no\hspace{1mm}occluder\end{matrix}\right.}

K is the constant we have to compute, because ka has to stay between 0 and 1. So now we can compute K by taking the view function always equal to 1, i.e. the case where no occlusion occurs.

\displaystyle{\begin{array}{lcl}&&\int_{\Omega} V(\overrightarrow{\omega})\cdot \cos(\overrightarrow{n},\overrightarrow{\omega})\cdot d\omega \\  &=&\int_{\Omega}\cos(\overrightarrow{n},\overrightarrow{\omega})\cdot d\omega\\  &=&\int_{0}^{2\pi}\int_{0}^{\frac{\pi}{2}}R^{2}\cdot\frac{\overrightarrow{n}}{||\overrightarrow{n}||}\cdot\frac{\overrightarrow{\omega}}{||\overrightarrow{\omega}||}\cdot\sin(\theta)\cdot d\theta d\phi\hspace{2mm}\left(||\overrightarrow{\omega}||=||\overrightarrow{n}||=R=1\right)\\  &=&\int_{0}^{2\pi}\int_{0}^{\frac{\pi}{2}}\cos(\theta)\cdot\sin(\theta)\cdot d\theta d\phi\\  &=&\int_{0}^{2\pi}\frac{1}{2}\cdot d\phi\\  &=&\pi \end{array}}

So K=\frac{1}{\pi}, and we get back the expression of the first integral at the beginning of this article:

\displaystyle{ka=\frac{1}{\pi}\int_{\Omega}V(\overrightarrow{\omega})\cdot \cos(\theta)\cdot d\omega}

For people who really like rigorous mathematics: I know this is not very rigorous, but it is with this « method » that we will compute our occlusion factor :).
If you prefer a more accurate derivation, you can integrate over the whole solid hemisphere (with a variable radius, and a view function that returns a value between 0 and 1 according to the distance of the occluder from the ray origin), and you get exactly the same formula, because you end up with something like \frac{1}{R}\int_{0}^{R} dr = 1, where the \frac{1}{R} comes from the view function limiting the return value to [0, 1].

So, now we can try to approximate this integral. We have two problems: we cannot launch infinitely many rays, so we only launch a few of them; and we have to use the « inverse » of the view function, \bar{V}=1-V.

\displaystyle{\begin{array}{lcl}&&\frac{1}{\pi}\int_{\Omega}V(\overrightarrow{\omega})\cdot(\overrightarrow{n}\cdot\overrightarrow{\omega})\cdot d\omega \\  &=&1-\frac{1}{\pi}\int_{\Omega}\bar{V}(\overrightarrow{\omega})\cdot(\overrightarrow{n}\cdot\overrightarrow{\omega})\cdot d\omega\\  &\approx&1-\frac{1}{N}\sum_{i=1}^{N}\bar{V}(\overrightarrow{\omega_{i}})\cdot(\overrightarrow{n}\cdot\overrightarrow{\omega_{i}}) \end{array}}

After that, we can blur the occlusion map to improve the rendering. We just use a simple blur:

#version 440 core

/* Uniform */
#define CONTEXT 0
#define MATRIX 1
#define MATERIAL 2
#define POINT_LIGHT 3
#define MATRIX_SHADOW 4

layout(local_size_x = 256)in;

layout(shared, binding = CONTEXT) uniform Context
{
    uvec4 sizeScreenFrameBuffer;
    vec4 posCamera;
    mat4 invProjectionViewMatrix;
};

layout(binding = 4) uniform sampler2D AO;
layout(binding = 4, r32f) uniform image2D imageAO;

void main(void)
{
    // 9-tap horizontal blur: the centre texel plus 4 neighbours on each side
    float blur = texture(AO, vec2(gl_GlobalInvocationID.xy) / sizeScreenFrameBuffer.zw).x;

    for(int i = 4; i > 0; --i)
        blur += texture(AO, vec2((ivec2(gl_GlobalInvocationID.xy) + ivec2(-i, 0))) / sizeScreenFrameBuffer.zw).x;

    for(int i = 4; i > 0; --i)
        blur += texture(AO, vec2((ivec2(gl_GlobalInvocationID.xy) + ivec2(i, 0))) / sizeScreenFrameBuffer.zw).x;

    imageStore(imageAO, ivec2(gl_GlobalInvocationID.xy), vec4(blur / 9, 0, 0, 0));
}

Now, we have to write the ambient occlusion shader itself.

ssao_sphere_samples

This picture is very good for understanding how we can use our approximation.
Indeed, we can see that if the point is red, we have V(\overrightarrow{\omega}) = 1, so \bar{V}(\overrightarrow{\omega}) = 0. You just have to test the depth buffer to know whether the point is an occluder or not.

#version 440 core

/* Uniform */
#define CONTEXT 0
#define MATRIX 1
#define MATERIAL 2
#define POINT_LIGHT 3
#define MATRIX_SHADOW 4

layout(shared, binding = CONTEXT) uniform Context
{
    uvec4 sizeScreenFrameBuffer;
    vec4 posCamera;
    mat4 invProjectionViewMatrix;
};

layout(local_size_x = 16, local_size_y = 16) in;

layout(binding = 1) uniform sampler2D position;
layout(binding = 2) uniform sampler2D normal;
layout(binding = 3) uniform sampler2D distSquare;

writeonly layout(binding = 4, r16f) uniform image2D AO;

void main(void)
{
    float ao = 0.0;

    const ivec2 texCoord = ivec2(gl_GlobalInvocationID.xy);
    const vec2 texCoordAO = vec2(texCoord) / sizeScreenFrameBuffer.zw;

    vec3 positionAO = texture(position, texCoordAO).xyz;
    vec3 normalAO = texture(normal, texCoordAO).xyz;
    float distSquareAO = texture(distSquare, texCoordAO).x;

    for(int j = -2; j < 3; ++j)
    {
        for(int i = -2; i < 3; ++i)
        {
            vec2 texCoordRay = vec2(texCoord + ivec2(i, j)) / sizeScreenFrameBuffer.zw;

            vec3 positionRay = texture(position, texCoordRay).xyz;
            float distSquareRay = texture(distSquare, texCoordRay).x;

            // Cosine between the normal and the direction to the sampled point
            float c = dot(normalAO, normalize(positionRay - positionAO));

            if(c < 0.0)
                c = -c;

            // The sampled point occludes us if it is closer to the camera
            if(distSquareRay < distSquareAO)
                ao += c;
        }
    }

    imageStore(AO, texCoord, vec4((1 - ao / 25), 0.0, 0.0, 0.0));
}

Strangely, in this case, if I use shared memory, I get worse results than with plain texture fetches. Maybe allocating the shared memory takes too long, so it is not very efficient here :-).

I advise you to use texture instead of imageLoad: indeed, I got 8 times better performance with texture ^^.

Bye :). Next time, we will talk about shadows!