How to render a hundred thousand lights in Maia, or how I stopped worrying and learned to love the GPU.
Posted on Tue 17th July 2012 11:27PM
Maia has its own renderer: custom written, and unlike anything any other indie game has ever seen.
The gameplay relies on your careful use of lighting to guide your IMP bots, adjust your colonists' moods, and avoid attracting the attention of curious alien fauna.

To do this the game needed a renderer that lets the player place lights anywhere they want, with no limit to the number that can be displayed on screen. This system needed to work on even the most lightweight GPU without crippling the framerate and leaving the game unplayable.

I'm going to explain how the hell I did this.
"Hold on to your butts!"
Standard renderers draw fully lit 3d objects onto the screen in one go. As the pixels of an object are drawn, a program is run for each pixel to decide its colour. We call this a fragment program or a pixel shader. This shader will do things such as sample a colour from a texture and then multiply it by the lighting value of that pixel.
To calculate the lighting we need what is known as a normal vector. The normal of a surface is a vector (a direction) that specifies which way the surface of a polygon is pointing. You've probably heard of normal mapping? Normal mapping is a way of using a texture on a polygon to distort an otherwise flat normal to make the surface's lighting look more detailed.
Once we have this vector, all we need to do is an operation called a dot (or scalar) product with the direction vector of the light, and we have a value between zero and one describing how well lit that pixel is: zero being unlit, as the surface normal is facing away from the light source, and one being fully illuminated.
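In code it looks something like this. This is a plain C++ sketch of the maths rather than actual shader code, and the names are mine, not Maia's:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// Dot (scalar) product of two vectors.
float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Lambertian diffuse term: how well lit a pixel is, from 0 (facing
// away from the light) to 1 (facing it directly). Both vectors are
// assumed normalised, and lightDir points from the surface towards
// the light.
float lambert(const Vec3& normal, const Vec3& lightDir)
{
    return std::max(0.0f, dot(normal, lightDir));
}

// The pixel shader then just multiplies the sampled texture colour
// by that value, per channel, to get the final lit pixel.
Vec3 shadePixel(const Vec3& albedo, const Vec3& normal, const Vec3& lightDir)
{
    float lit = lambert(normal, lightDir);
    return { albedo.x * lit, albedo.y * lit, albedo.z * lit };
}
```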
So using this method we can draw lots of objects onto the screen, but it's hard to do lots of lights. We have to calculate which object gets lit by which light, and keep the shaders updated with that information, which takes a lot of bandwidth sending data to the GPU. We can probably do a few lights per object before things start crapping out.
What's more, when an object is drawn, we do all the expensive lighting work before we even know if it will be visible in the scene. Another object might get drawn right in front of it, completely obscuring it, which is a massive waste of resources.
So how can we avoid this? The trick is something called deferred rendering. We defer the lighting calculations and do them in 2d image space.
Instead of rendering out the final coloured and lit pixels, we render out images containing the data we need to create the image.
So first we render out the colour maps of the scene into an image.
Then we take the normals, and the normals distorted by the normal maps, and draw them all into an image too.
We also save out a depth buffer, which is a black and white image where each pixel stores its distance from the camera.
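If you've never set one up, this collection of images (a "G-buffer") is just a framebuffer with several textures attached. Here's a rough OpenGL sketch of the idea; the formats are plausible guesses on my part, not what Maia actually uses:

```cpp
#include <GL/glew.h>

// A rough sketch of a deferred G-buffer: one framebuffer with a
// colour target, a normal target and a depth target.
struct GBuffer
{
    GLuint fbo, colourTex, normalTex, depthTex;

    void create(int width, int height)
    {
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);

        // Scene colour (albedo).
        glGenTextures(1, &colourTex);
        glBindTexture(GL_TEXTURE_2D, colourTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, colourTex, 0);

        // Surface normals, including the normal-mapped distortion.
        glGenTextures(1, &normalTex);
        glBindTexture(GL_TEXTURE_2D, normalTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                     GL_RGBA, GL_FLOAT, nullptr);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                               GL_TEXTURE_2D, normalTex, 0);

        // Depth: distance from the camera per pixel.
        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, depthTex, 0);

        // Tell GL to write into both colour attachments at once.
        GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
        glDrawBuffers(2, bufs);
    }
};
```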
We test each light's area of effect (usually a bounding box or sphere) against the camera's frustum to see if it is inside. The camera's frustum is a box-like 3d shape that encompasses everything you can see on screen. Then, with the list of visible lights, we can draw the scene onto a big rectangle the size of the screen and use a pixel shader to light it as if it were a 3d scene by rebuilding that data.
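The frustum test itself is cheap. Here's the usual sphere-versus-frustum check, sketched in C++ (my own illustrative code, not pulled from the engine):

```cpp
struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };   // plane: dot(n, p) + d = 0, n normalised
struct Frustum { Plane planes[6]; }; // left, right, top, bottom, near, far

float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// A light's area of effect as a bounding sphere. If the sphere sits
// entirely outside any one frustum plane, the light can't touch
// anything on screen and we can skip it.
bool lightVisible(const Frustum& f, const Vec3& centre, float radius)
{
    for (const Plane& p : f.planes)
    {
        if (dot(p.n, centre) + p.d < -radius)
            return false; // completely outside this plane
    }
    return true; // inside or straddling every plane
}
```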
Thing is, like before, we can only render a few lights in that pixel shader, as lighting calculations are still expensive. The shader can't really decide whether it's worth rendering a light without... well... rendering it. So what do we do?
We split the screen into thousands of smaller screens (tiles) and test every light to see if it would affect each tile.
Then each tile can be given a list of the lights it has to deal with, and since we now know in advance how many lights will be rendered, we can use special shaders per tile: when we know a tile has fewer lights, we can cut down the amount of calculation involved.
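Building those per-tile light lists looks roughly like this on the CPU side. The tile size and names here are my own guesses for illustration, not Maia's actual values:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Screen-space bounding rectangle of a light's area of effect, in
// pixels (found by projecting its bounding sphere onto the screen).
struct LightRect { int minX, minY, maxX, maxY; int lightIndex; };

const int TILE_SIZE = 32;                  // pixels per tile edge -- a guess
const std::size_t LIGHTS_PER_TILE = 16;    // cap per tile

// For each tile, the indices of the lights that touch it.
std::vector<std::vector<int>>
buildTileLists(const std::vector<LightRect>& lights, int screenW, int screenH)
{
    int tilesX = (screenW + TILE_SIZE - 1) / TILE_SIZE;
    int tilesY = (screenH + TILE_SIZE - 1) / TILE_SIZE;
    std::vector<std::vector<int>> tiles(tilesX * tilesY);

    for (const LightRect& l : lights)
    {
        // Which range of tiles does this light's rectangle overlap?
        int tx0 = std::max(0, l.minX / TILE_SIZE);
        int ty0 = std::max(0, l.minY / TILE_SIZE);
        int tx1 = std::min(tilesX - 1, l.maxX / TILE_SIZE);
        int ty1 = std::min(tilesY - 1, l.maxY / TILE_SIZE);

        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
            {
                std::vector<int>& list = tiles[ty * tilesX + tx];
                if (list.size() < LIGHTS_PER_TILE)
                    list.push_back(l.lightIndex);
            }
    }
    return tiles;
}
```

Each tile's list then picks the shader variant to run: a tile touched by two lights gets a cheap two-light shader, while a full tile gets the heavyweight one.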
In the case of Maia's engine we can draw up to sixteen lights in each little tile, meaning we can have hundreds of thousands of lights affecting the screen at once. Cool, eh?
Ok, so I cheated a bit. The screens above also have what's called "screen space ambient occlusion". This brilliant idea, conceived by a genius at Crytek, allows you to use the depth buffer of a scene to guess which parts would be a bit darker: corners, nooks, crannies and the like.
I have my own method that creates a very nice effect. I multiply that occlusion with my ambient lighting (the constant light added to the whole scene) to give it added subtlety and shadowing, leaving us with the basics of Maia's lovely lighting!
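My own variant stays secret for now, but the core trick fits in a few lines: for each pixel, compare its depth against a neighbourhood of nearby pixels and darken it when lots of them sit closer to the camera, which is roughly what a corner looks like in a depth buffer. A deliberately naive C++ sketch (real SSAO uses view-space positions, smarter sampling patterns and blurring):

```cpp
#include <algorithm>
#include <vector>

// Naive depth-only ambient occlusion: one value per pixel in [0, 1],
// where 0 is fully occluded (dark) and 1 is fully open. Purely
// illustrative, not Maia's method.
std::vector<float> naiveSSAO(const std::vector<float>& depth,
                             int w, int h, int radius = 4)
{
    std::vector<float> ao(depth.size(), 1.0f);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            float centre = depth[y * w + x];
            int samples = 0, occluded = 0;
            // Compare against a small neighbourhood of pixels.
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx)
                {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    ++samples;
                    // A neighbour noticeably closer to the camera
                    // probably overhangs this pixel.
                    if (centre - depth[ny * w + nx] > 0.001f)
                        ++occluded;
                }
            ao[y * w + x] = 1.0f - float(occluded) / float(std::max(samples, 1));
        }
    return ao;
}
```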
I hope that's cleared up your questions about the lighting engine. I'll be writing some more technical blogs soon about how this improves upon many off-the-shelf renderers.
-Simo
(2) comments:
Comment by cYnborg on Wed 18th July 2012 5:04PM
Ah thanks for that, interesting. Funnily enough I came up with a game concept, before reading this, using similar principles to Maia.
I was thinking about mini light-sensitive robots, the little hobbyist toys, then thinking about them in a game context. Besides being modular with regards to augmentation, they could be led with lights!
Comment by Nork on Sat 8th September 2012 10:50PM
Would the light pass through a wall if the light is next to the wall and the light radius is larger than the wall's thickness?
Comments have been disabled.