Early in Full Bore’s development, we thought that we had painted ourselves into a corner visually. We had a look that we liked, but the clean, easy-to-read block visuals that we favored while making an arcade game quickly became boring to look at when transplanted into an open world. Locations were in danger of looking indistinguishable from one another because we were limited in the kinds of visuals we could produce. Arrangements of blocks can only go so far to create a memorable world when the player can only see 300 at a time. But we liked our distinct look; we needed something new to help define our areas, and that something was lighting.
About this Article
This write-up is meant as a conceptual tutorial: I will go over everything that needs to be in place to do lighting in Full Bore, but I will not get into specific implementation details. My hope is that anyone reading this who is familiar enough with their tools (be it C++ or Unity or what-have-you) will be able to apply some or all of what I’ve written here to their own games. For reference, Full Bore was written from scratch in C++ and has both Direct3D and OpenGL back-ends, but any tool or engine that gives you decent control over your rendering pipeline should be able to accommodate this technique. Of course, I have no experience using higher-level tools like Unity, so please do as the internet does and tell me how wrong I am in the comments!
First, here’s a quick look at what is being rendered behind the scenes in Full Bore:
The previous three images (click to embiggen), combined with a list of lights and their properties, are used to produce a final image:
For those of you familiar with graphics programming, this is pretty much a bog-standard deferred shading implementation like you would expect to see in a modern FPS game. For those of you just getting started with graphics programming, this is a reasonably radical departure from the more textbook method, so there are a few things that need to come together for this particular technique to work. First off…
Your Art Workload Just Doubled (Kinda)
In a 3D game, normal maps are used to add extra detail to models without the expense of adding in extra geometry. In a 2D game, normal maps similarly let us pretend that the scenes we’re making have depth so that we can light things in a way that reveals extra surface detail. In Full Bore just about every sprite in the game has a normal map.
It is possible to make all of the normal maps you need by making a greyscale height map and then using the NVIDIA Texture Tools for Photoshop or this excellent GIMP plugin to compute the resulting normal map. These plugins all blur the source height map at some point so small per-pixel details are usually obliterated or obscured, but if you’re working on a higher resolution game like, say, Escape Goat 2, computing normal maps from height maps could very well be all you need.
But what if you are working on a low-resolution game? Full Bore’s art is particularly low-res, so many normal maps are computed and then hand-tweaked or completely hand-drawn, and in order to do that we had to develop a good understanding of what, exactly, is encoded in a normal map.
Understanding Normal Maps
Mathematically, a normal map encodes, in the red, green, and blue color channels, a 3D vector describing the direction each pixel’s surface is facing. However, looking at a normal map you may notice that the image shows some human-comprehensible surface detail. If you separate out the color channels of a normal map this becomes even more apparent. From a purely visual point of view, a normal map is the combination of a shape lit from the right or left by a red light, from above or below by a green light, and from head-on by a blue light. The numbers are still a bit important, so, to rephrase:
- The Red channel indicates the horizontal angle of the surface. 127 is neutral, and the extreme values are facing right or left (which extreme is which direction is up to the programmer; in Full Bore 255 is facing right)
- The Green channel indicates the vertical angle of the surface. 127 is neutral, and the extreme values are facing up or down (in Full Bore 255 is facing up)
- The Blue channel is a bit different. It indicates to what degree the surface is pointing towards the viewer. 255 is pointing straight at the viewer, while 127 is pointing perpendicular to the viewer, and 0 is pointing directly away from the viewer.
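To make the encoding concrete, here is a minimal sketch of the decode step a light shader performs when it reads a normal-map texel. The function names and the `Vec3` struct are illustrative, not from Full Bore’s source; the channel conventions match the ones listed above.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Map an 8-bit channel value in [0, 255] to a component in [-1, 1].
// 127 lands very close to 0, the "neutral" value described above.
static float channelToComponent(int c) {
    return (c / 255.0f) * 2.0f - 1.0f;
}

// Decode an RGB normal-map texel into a unit surface normal.
// Convention assumed here (matching Full Bore): R = 255 faces right,
// G = 255 faces up, B = 255 points straight at the viewer.
Vec3 decodeNormal(int r, int g, int b) {
    Vec3 n = { channelToComponent(r),
               channelToComponent(g),
               channelToComponent(b) };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```

So the "flat" normal-map color (127, 127, 255) decodes to a vector pointing almost exactly at the viewer, which is why untextured normal maps look uniformly blue-purple.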
For sanity’s sake, be sure to familiarize yourself with how to turn editing and visibility on and off for color channels in your image editing program.
Building Normal Maps in Parts
When computing normal maps from height maps, high-detail and low-detail areas will look better when computed with different filters.
In this case it helps to split the height maps into layers and combine them post-normal-filter.
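As a rough sketch of what those height-to-normal filters do under the hood, here is a minimal central-difference version. The real plugins use larger kernels (Sobel, Prewitt, etc.), which is where the blurring comes from; the `strength` knob and all names here are illustrative, and the vertical sign convention depends on whether row 0 is the top of your image.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Compute a per-pixel normal from a grayscale height map (values in [0, 1])
// using central differences. 'strength' exaggerates or flattens the relief.
// Samples outside the image are clamped to the nearest edge pixel.
Vec3 heightToNormal(const std::vector<float>& height, int w, int h,
                    int x, int y, float strength) {
    auto at = [&](int px, int py) {
        if (px < 0) px = 0;
        if (px >= w) px = w - 1;
        if (py < 0) py = 0;
        if (py >= h) py = h - 1;
        return height[py * w + px];
    };
    // Slopes: how fast the height changes to the right and downward.
    float dx = (at(x + 1, y) - at(x - 1, y)) * strength;
    float dy = (at(x, y + 1) - at(x, y - 1)) * strength;
    // A surface rising to the right tilts its normal toward the left.
    Vec3 n = { -dx, -dy, 1.0f };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    n.x /= len; n.y /= len; n.z /= len;
    return n;
}
```

A flat height map yields the neutral normal (0, 0, 1) everywhere, which encodes back to that familiar (127, 127, 255) blue.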
Lastly it is possible to merge normal maps in an image editing program with mostly correct results. This lets you do things like add detailed noise to a smooth normal map, or merge different shapes.
In order to do this, you have to do the following:
- Make 2 copies of the normal map you are merging in.
- In one, replace the whole blue channel with 50% grey; in the other, replace the red and green channels with pure white.
- Set the red-green layer’s blend effect to “Overlay”, and set the blue layer’s to “Multiply”
- Tweak levels as needed
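The layer recipe above boils down to simple per-channel formulas, so the same merge can be done in code. This is a sketch of the combined effect (hence "mostly correct results", since it is a blend approximation, not a true rotation of the normal vectors); all names are illustrative and channel values are taken as floats in [0, 1].

```cpp
#include <cassert>
#include <cmath>

// Standard "Overlay" blend for one channel, inputs and output in [0, 1].
float overlay(float base, float blend) {
    return base < 0.5f ? 2.0f * base * blend
                       : 1.0f - 2.0f * (1.0f - base) * (1.0f - blend);
}

struct Rgb { float r, g, b; };

// Net effect of the two-layer stack: the red-green copy (blue forced to
// 50% grey) is Overlayed, and Overlay with 0.5 is a no-op, so it only
// touches R and G. The blue copy (R and G forced to white) is Multiplied,
// and multiplying by 1.0 is a no-op, so it only touches B.
Rgb mergeNormals(Rgb base, Rgb detail) {
    Rgb out;
    out.r = overlay(base.r, detail.r);
    out.g = overlay(base.g, detail.g);
    out.b = base.b * detail.b;
    return out;
}
```

A useful sanity check: merging in a perfectly flat detail normal (0.5, 0.5, 1.0) leaves the base normal map unchanged.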
By now you should have enough to get started making your own normal maps, so now…
You Need To Render Things Differently
Well, you don’t need to, but directly rendering sprites with normal mapped lighting severely limits the number of lights you can have on the screen. To get around this limitation, Full Bore uses deferred shading, a technique more often seen in 3D games but also one perfectly suited to what we’re trying to do here. When doing deferred shading, rendering happens in two stages.
First, you write all the information you need to run your light shader into one or more frame buffers. This can be accomplished efficiently by using multiple render targets and appropriately written shaders (how exactly you do that depends on what library or tool you’re using). In Full Bore, every texture has a corresponding normal and luminance texture which lines up exactly with the original color texture, so when a given sprite is drawn, it’s trivial to write the other data to the appropriate frame buffer with another texture lookup. The set of three screenshots near the beginning of the article shows what Full Bore’s three frame buffers look like.
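The shape of that first pass can be sketched on the CPU like this. In the real renderer the three writes happen in a single draw call via multiple render targets; here they are plain arrays, and every name (`GBuffer`, `drawSprite`, etc.) is illustrative rather than taken from Full Bore’s source.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct Pixel { uint8_t r, g, b, a; };

// Three G-buffer targets, one entry per screen pixel. In the GPU version
// these are textures bound as multiple render targets.
struct GBuffer {
    int width, height;
    std::vector<Pixel> color, normal, luminance;
    GBuffer(int w, int h)
        : width(w), height(h),
          color(w * h), normal(w * h), luminance(w * h) {}
};

// A sprite carries matching color/normal/luminance texels, mirroring how
// every Full Bore texture has corresponding normal and luminance textures.
struct Sprite {
    int x, y, w, h;
    std::vector<Pixel> colorTex, normalTex, lumTex;
};

// Geometry pass: one loop writes all three targets, so the per-pixel data
// stays aligned for the lighting pass that follows.
void drawSprite(GBuffer& gb, const Sprite& s) {
    for (int sy = 0; sy < s.h; ++sy) {
        for (int sx = 0; sx < s.w; ++sx) {
            int dx = s.x + sx, dy = s.y + sy;
            if (dx < 0 || dy < 0 || dx >= gb.width || dy >= gb.height)
                continue;                          // clip to screen
            int src = sy * s.w + sx;
            if (s.colorTex[src].a == 0) continue;  // skip transparent texels
            int dst = dy * gb.width + dx;
            gb.color[dst]     = s.colorTex[src];
            gb.normal[dst]    = s.normalTex[src];
            gb.luminance[dst] = s.lumTex[src];
        }
    }
}
```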
Second, to actually light up the game, you use your frame buffers as textures to draw your lights. Oh, did I not tell you?
Surprise, Lights Are Geometry Now
Your light shader can only be executed by drawing some geometry to the screen. There are a lot of different ways that you can go about doing this but there is some common ground.
- The light geometry needs to represent the shape of the light you’re drawing and have appropriate UV coordinates. In Full Bore we compute the UV coordinates in the shader by dividing the vertex position by the screen size. This cuts down on GPU bandwidth usage.
- Each light needs to, at least, know where the light is being emitted from. Embedding this in the geometry data is wasteful, so you will need to send this, along with color/brightness/etc in a shader uniform or as part of your geometry instance data.
- Even in a 2D game, each light will have a “height” above the screen. This parameter is useful because it behaves like you would expect it to: low lights predominantly light up edges, and high lights light things more evenly.
- Though the cost of drawing a single light is low, lights will still be your biggest performance killer. Make sure the geometry is an appropriate shape and only as big as the area that will be illuminated, and, naturally, cull any lights that aren’t visible.
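Pulling the list above together, here is a sketch of the per-pixel term a light shader might evaluate for each fragment its geometry covers, including the "height" parameter. The struct fields, names, and the linear falloff curve are all illustrative assumptions, not Full Bore’s actual shader.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Light {
    float x, y;    // emission point in screen space (sent as a uniform)
    float height;  // "height" above the screen plane
    float radius;  // illuminated extent; also sizes the light geometry
};

// Per-pixel lighting term. (nx, ny, nz) is the decoded unit normal read
// from the G-buffer at pixel (px, py).
float lightContribution(const Light& l, float px, float py,
                        float nx, float ny, float nz) {
    // Direction from the surface pixel up to the light, including height:
    // a low light skims the surface and mostly catches edges, while a
    // high light approaches (0, 0, 1) and lights everything evenly.
    float dx = l.x - px, dy = l.y - py, dz = l.height;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    dx /= dist; dy /= dist; dz /= dist;

    // Classic diffuse term: how directly the surface faces the light.
    float diffuse = std::max(0.0f, nx * dx + ny * dy + nz * dz);

    // Simple linear falloff to zero at the edge of the light's radius,
    // which is also where the light geometry should end.
    float atten = std::max(0.0f, 1.0f - dist / l.radius);
    return diffuse * atten;
}
```

This result would then be multiplied by the light’s color and the G-buffer color, and accumulated with additive blending so overlapping lights sum up naturally.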
Arranging lights in an aesthetically pleasing manner is a subject for another article, but it’s safe to say there is a lot of interesting stuff to play with once you have a working lighting system.
And that’s the basics: you need to make a lot of normal maps, and you should probably use deferred shading for rendering your lighting. Have at it!
Plea for Feedback
I’ve tried to keep this as non-platform-specific as I can, and I am sure that has created some blank spots in my explanations. Please feel free to ask me questions in the comments and I will update the article as needed. If there is any interest I can write about Direct3D or OpenGL implementation details, just be sure to let me know!