Lighting, lighting… Did you say something about Lights?

Hello there!

I have implemented many features inside the engine: lighting, shadows, and indirect lighting (which I am going to discuss in further detail, as announced two articles ago).

I have nothing special to say about lighting.
I implemented deferred shading, with the shading pass running in compute shaders.

The good news is « Array Shadowing ». It lets the whole lighting pass be bindless: shaders, buffers, textures and framebuffers. The idea behind « Array Shadowing » is to render all the cube maps into an ARB_TEXTURE_CUBE_MAP_ARRAY, so you can access all of them during the direct lighting step. Each light stores the index of its shadow map in the array. Then you only have to apply the formula seen three articles ago (Variance Shadow Maps).
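As a rough illustration of that layout (the names, moments and layer values here are mine, not the engine's actual code): each light carries the layer index of its shadow map, and the variance shadow map test turns the stored depth moments into a visibility factor via Chebyshev's inequality.

```python
def vsm_visibility(moments, depth, min_variance=1e-4):
    """Chebyshev upper bound on the probability that the receiver is lit."""
    mean, mean_sq = moments
    if depth <= mean:                    # receiver in front of the mean occluder
        return 1.0
    variance = max(mean_sq - mean * mean, min_variance)
    d = depth - mean
    return variance / (variance + d * d)

# The "cube map array": one (E[depth], E[depth^2]) moment pair per layer,
# and each light stores which layer holds its own shadow map.
shadow_maps = [(0.3, 0.10), (0.7, 0.53)]
lights = [{"shadow_layer": 0}, {"shadow_layer": 1}]

for light in lights:
    moments = shadow_maps[light["shadow_layer"]]
    print(vsm_visibility(moments, 0.5))
```

On the GPU the same lookup is a single texture fetch into the cube map array, indexed by the layer stored in the light's buffer entry.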
To improve the quality of your rendering, you can blur your shadow maps, but I did not implement that, because I wanted to implement other features first. For example, I will try to implement specular reflections before glossy reflections: they are easier to implement and faster to render.

Today I do not want to write a long article, and we have already talked about lighting, shadows, reflections, and a little bit about global illumination; that is why we are now going to explain global illumination in more depth.

What is Global Illumination ?

In this order, you can see one bounce, two bounces, and three bounces. In this article we will only cover the first bounce, but you can iterate the following methods to approximate multi-bounce global illumination.

In 1986, James Kajiya laid out the rendering equation:

$\displaystyle{L^{O}(\bold{x}\rightarrow\omega_o)=L^{e}(\bold{x}\rightarrow\omega_o)+\int_{\omega_i\in\Omega}BRDF(\bold{x},\omega_i\rightarrow\omega_o)L^{i}(\omega_i\rightarrow\bold{x})(\bold{N}\cdot\omega_i)\cdot d\omega_i}$

The main objective is to approximate this equation, but, as you can see, it is infinitely recursive (the incoming radiance inside the integral is itself an outgoing radiance), so we have to stop the computation after a few bounces.

Explanation of each term:

The term $L^{O}(\bold{x}\rightarrow\omega_o)$ is the radiance leaving $\bold{x}$ in the direction $\omega_o$. Radiance is what a sensor measures, and colour is proportional to radiance; in our model, we will say that this radiance is the colour of the pixel.

The term $L^{e}(\bold{x}\rightarrow\omega_o)$ is also a radiance leaving $\bold{x}$ in the direction $\omega_o$, but it is the emitted radiance, without the reflected radiance. This term is null almost everywhere; it is non-zero only on the light sources themselves.

The term $\int_{\omega_i\in\Omega}BRDF(\bold{x},\omega_i\rightarrow\omega_o)L^{i}(\omega_i\rightarrow\bold{x})(\bold{N}\cdot\omega_i)\cdot d\omega_i$ is more complicated, so I am going to explain each factor one by one.

The $\int_{\omega_i\in\Omega}$ means we integrate over the whole hemisphere oriented by the normal $\bold{N}$ at the point $\bold{x}$.

The $BRDF(\bold{x},\omega_i\rightarrow\omega_o)$ is the BRDF; you can refer to my article about Global Illumination.

The $L^{i}(\omega_i\rightarrow\bold{x})$ is the radiance arriving at $\bold{x}$; it is the most important term of this equation.
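To get a feeling for the integral, here is a small numeric check (a sketch, not engine code): for a Lambertian BRDF $\frac{\rho}{\pi}$ and a constant incoming radiance $L$ over the whole hemisphere, the reflected radiance is exactly $\rho L$, and a Monte Carlo estimate with uniform hemisphere sampling converges to that value.

```python
import math, random

def estimate_reflected_radiance(rho, L_in, samples=200_000, seed=1):
    """Monte Carlo estimate of integral of (rho/pi) * L_in * cos(theta) dw."""
    rng = random.Random(seed)
    pdf = 1.0 / (2.0 * math.pi)     # constant pdf of uniform hemisphere sampling
    total = 0.0
    for _ in range(samples):
        cos_theta = rng.random()    # for uniform hemisphere sampling,
                                    # cos(theta) is uniform on [0, 1]
        total += (rho / math.pi) * L_in * cos_theta / pdf
    return total / samples

# Analytic value: rho * L = 0.8 * 2.0 = 1.6
print(estimate_reflected_radiance(0.8, 2.0))
```

This is exactly the shape of the estimator a path tracer uses; the real difficulty is that $L^{i}$ is not constant and must itself be traced.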

Inside the rendering equation: the direct lighting

The first bounce of light is also known as direct lighting.

We can write this (dropping $L^{e}$, which is null outside the light sources):

$\displaystyle{L^{O}(\bold{x}\rightarrow\omega_o)=L^{e}(\bold{x}\rightarrow\omega_o)+\int_{\omega_i\in\Omega}BRDF(\bold{x},\omega_i\rightarrow\omega_o)L^{i}(\omega_i\rightarrow\bold{x})(\bold{N}\cdot\omega_i)\cdot d\omega_i}$
$\displaystyle{L^{O}(\bold{x}\rightarrow\omega_o)=\sum_{i\in \{Lights\}}BRDF(\bold{x},\omega_i\rightarrow\omega_o)L^{i}(\omega_i\rightarrow\bold{x})(\bold{N}\cdot\omega_i)}$
$\displaystyle{L^{O}(\bold{x}\rightarrow\omega_o)=\sum_{i\in\{Lights\}}\bold{c_{diff}}\otimes L^{i}(\omega_i\rightarrow\bold{x})(\bold{N}\cdot\omega_i)}$
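The last line can be sketched in a few lines of code (scalar instead of the per-channel $\otimes$, with the cosine clamped to zero for lights below the surface; the light values are illustrative):

```python
import math

def direct_lighting(c_diff, normal, lights):
    """Sum over the lights of c_diff * L_i * max(N . w_i, 0)."""
    out = 0.0
    for radiance, omega_i in lights:
        n_dot_w = max(sum(n * w for n, w in zip(normal, omega_i)), 0.0)
        out += c_diff * radiance * n_dot_w
    return out

normal = (0.0, 1.0, 0.0)
lights = [
    (1.0, (0.0, 1.0, 0.0)),                        # straight above: N.w = 1
    (2.0, (0.0, math.sqrt(0.5), math.sqrt(0.5))),  # at 45 degrees: N.w = sqrt(1/2)
    (3.0, (0.0, -1.0, 0.0)),                       # below the surface: clamped to 0
]
print(direct_lighting(0.5, normal, lights))        # 0.5 * (1 + sqrt(2))
```

In the deferred shading pass this loop runs per pixel, with the shadow map visibility multiplied into each light's contribution.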

You can find the same result with the area formulation of the global rendering equation.

Inside the rendering equation: the indirect lighting

The idea is to compute the different $L^{i}(\omega_i\rightarrow\bold{x})$ terms.
To do that in real time, we can use Instant Radiosity (Keller, 1997).

The idea is to place a virtual point light at each intersection between a light ray and the world geometry.
We use an atomic counter with a small framebuffer, and we render the geometry to place the virtual point lights at the right positions.
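A CPU sketch of that placement step, assuming a toy scene made of a single floor plane at $y=0$ (the GPU version does this with the small framebuffer and the atomic counter):

```python
def place_vpls(light_pos, directions, plane_y=0.0):
    """Append a VPL wherever a ray from the light hits the floor plane."""
    vpls = []
    counter = 0                       # plays the role of the GPU atomic counter
    for d in directions:
        if d[1] >= 0.0:               # ray parallel to or pointing away from the floor
            continue
        t = (plane_y - light_pos[1]) / d[1]
        hit = tuple(p + t * di for p, di in zip(light_pos, d))
        vpls.append(hit)
        counter += 1
    return counter, vpls

count, vpls = place_vpls((0.0, 2.0, 0.0),
                         [(0.0, -1.0, 0.0), (1.0, -1.0, 0.0), (0.0, 1.0, 0.0)])
print(count, vpls[0])   # two hits; the first lands directly below the light
```

On the GPU the scene is rendered from the light's point of view instead of intersected analytically, but the output is the same: one list of hit positions, indexed by the counter.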

Then we have to compute the « power » of each new light, and use it as a normal light.
If we write the « differential » rendering equation, we find:

$\displaystyle{dL^{O}(\bold{x}\rightarrow\omega_o)=BRDF(\bold{x},\omega_i\rightarrow\omega_o)L^{i}(\omega_i\rightarrow\bold{x})(\bold{N}\cdot\omega_i)\cdot d\omega_i}$

For point lights and a Lambertian surface, we have

$\displaystyle{d\omega_i=\frac{4\pi}{6S},\; S=width\cdot height,\; BRDF=\frac{\rho}{\pi}}$
$\displaystyle{dL^{O}(\bold{x}\rightarrow\omega_o)=\frac{4\rho}{6S}L^{i}(\omega_i\rightarrow\bold{x})(\bold{N}\cdot\omega_i)}$

Here $S$ is the number of pixels of one cube map face, so $\frac{4\pi}{6S}$ is the solid angle covered by one pixel.
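Plugging numbers in (a sketch, assuming $S$ counts the pixels of one cube map face): with an 8×8 framebuffer, each VPL gets the weight $\frac{4\rho}{6S}$, and the cosine term is evaluated later, when the VPL is used as a normal light.

```python
import math

def vpl_weight(rho, width, height):
    """Per-VPL factor BRDF * d_omega = 4*rho / (6*S) for a Lambertian surface."""
    S = width * height                    # pixels per cube map face
    d_omega = 4.0 * math.pi / (6.0 * S)   # solid angle covered by one pixel
    brdf = rho / math.pi                  # Lambertian BRDF
    return brdf * d_omega

w = vpl_weight(1.0, 8, 8)
print(round(w, 6))                        # 4 / (6 * 64) ~ 0.010417
```

So with 64 pixels per face and an albedo of 1.0, each virtual point light carries roughly one percent of the incoming radiance, which is why many small VPLs sum up to a smooth indirect term.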

With an 8×8 pixel framebuffer, I get good results with VPLs coming from the ceiling (with an albedo of 1.0).

The first image is without the global illumination provided by the VPLs, and the second one is with GI.

What does the future hold?

1. I will try to implement a photon mapper on the GPU.
2. I will continue my engine; the next article about it will be about ray-marched ambient occlusion or about improving the VPL model.
3. I will try to implement voxel cone tracing (at least for the ambient occlusion).

So maybe the future articles will not be about « real time » in the proper sense, but about interactive rendering ^_^.