Early in Full Bore’s development, we thought that we had painted ourselves into a corner visually. We had a look that we liked, but the clean, easy-to-read block visuals that we favored while making an arcade game quickly became boring to look at when transplanted into an open world. Locations were in danger of looking indistinguishable from one another because we were limited in the kinds of visuals we could produce. Arrangements of blocks can only go so far to create a memorable world when the player can only see 300 at a time. But we liked our distinct look; we needed something new to help define our areas, and that something was lighting.

About this Article

This write-up is meant as a conceptual tutorial: I will go over everything that needs to be in place to do lighting in Full Bore, but I will not be getting into specific implementation details. My hope is that anyone reading this who is familiar enough with their tools (be it C++ or Unity or what-have-you) will be able to apply some or all of what I’ve written here to their own games. For reference, Full Bore was written from scratch in C++ and has both Direct3D and OpenGL back-ends, but any tool or engine that gives you decent control over your rendering pipeline should be able to accommodate this technique. Of course, I have no experience using higher-level tools like Unity, so please do as the internet does and tell me how wrong I am in the comments!


First, here’s a quick look at what is being rendered behind the scenes in Full Bore:





The previous three images (click to embiggen), combined with a list of lights and their properties, are used to produce the final image:

Final Lit Image

For those of you familiar with graphics programming, this is pretty much a bog-standard deferred shading implementation like you would expect to see in a modern FPS game. For those of you just getting started with graphics programming, this is a reasonably radical departure from the more textbook method, so there are a few things that need to come together for this particular technique to work. First off…

Your Art Workload Just Doubled (Kinda)

In a 3D game, normal maps are used to add extra detail to models without the expense of adding extra geometry. In a 2D game, normal maps similarly let us pretend that the scenes we’re making have depth, so that we can light things in a way that reveals extra surface detail. In Full Bore, just about every sprite in the game has a normal map.

Height map, resulting normal map, normal map’s red, green and blue channels

It is possible to make all of the normal maps you need by making a greyscale height map and then using the NVIDIA Texture Tools for Photoshop or this excellent GIMP plugin to compute the resulting normal map. These plugins all blur the source height map at some point, so small per-pixel details are usually obliterated or obscured, but if you’re working on a higher-resolution game like, say, Escape Goat 2, computing normal maps from height maps could very well be all you need.

But what if you are working on a low-resolution game?  Full Bore’s art is particularly low-res, so many normal maps are computed and then hand-tweaked or completely hand-drawn, and in order to do that we had to develop a good understanding of what, exactly, is encoded in a normal map.

Understanding Normal Maps

Mathematically, a normal map encodes, in the red, green, and blue color channels, a 3D vector describing the direction each pixel is facing. However, looking at a normal map, you may notice the image shows some human-comprehensible surface detail. If you separate out the color channels of a normal map, this becomes even more apparent. From a purely visual point of view, a normal map is the combination of a shape lit from the right or left by a red light, from above or below by a green light, and from head-on by a blue light. The numbers are still a bit important, so, to rephrase:

  • The Red channel indicates the horizontal angle of the surface. 127 is neutral, and the extreme values are facing right or left (which extreme is which direction is up to the programmer; in Full Bore 255 is facing right).
  • The Green channel indicates the vertical angle of the surface. 127 is neutral, and the extreme values are facing up or down (in Full Bore 255 is facing up).
  • The Blue channel is a bit different. It indicates to what degree the surface is pointing towards the viewer. 255 is pointing straight at the viewer, 127 is pointing perpendicular to the viewer, and 0 is pointing directly away from the viewer.
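To make the encoding concrete, here is a small sketch of packing and unpacking a normal under the convention above. This is purely illustrative (and in Python rather than shader code); the function names are mine, not Full Bore’s.

```python
def encode_normal(nx, ny, nz):
    """Pack a unit normal (components in [-1, 1]) into 8-bit RGB,
    following the convention above: red is the horizontal angle
    (255 faces right), green the vertical angle (255 faces up),
    and blue how directly the surface faces the viewer."""
    to_byte = lambda v: round((v * 0.5 + 0.5) * 255)
    return (to_byte(nx), to_byte(ny), to_byte(nz))

def decode_normal(r, g, b):
    """Unpack 8-bit RGB back into components in [-1, 1]."""
    to_unit = lambda c: c / 255.0 * 2.0 - 1.0
    return (to_unit(r), to_unit(g), to_unit(b))

# A flat surface pointing straight at the viewer becomes the familiar
# pale "normal map blue". Note that 0 lands on 127.5, so "neutral" is
# 127 or 128 depending on how your tool rounds.
print(encode_normal(0.0, 0.0, 1.0))  # (128, 128, 255)
```

This is also why untouched normal maps look predominantly light blue: most pixels face the viewer.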

For sanity’s sake, be sure to familiarize yourself with how to turn editing and visibility on and off for color channels in your image editing program.

Building Normal Maps in Parts

When computing normal maps from height maps, high-detail and low-detail areas will look better when computed with different filters.


Height Map, 4-Sample, Sobel 5×5
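The difference between those filters comes down to plain pixel math. Here is an illustrative reconstruction (a 4-sample central difference and a 3×3 Sobel, rather than the 5×5 pictured); the `strength` scale and the clamped edge handling are my assumptions, not any plugin’s exact behavior.

```python
import math

def _sample(h, x, y):
    # Clamp to edges so the filters also work at the borders.
    return h[max(0, min(y, len(h) - 1))][max(0, min(x, len(h[0]) - 1))]

def _to_normal(dx, dy, strength):
    # The surface tilts against the height gradient; +y is treated as "up".
    n = (-dx * strength, -dy * strength, 1.0)
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return tuple(c / length for c in n)

def normal_4sample(h, x, y, strength=2.0):
    """Normal from a greyscale height map (2D list of floats in [0, 1])
    using 4 neighbouring samples (central differences). Sharp, keeps
    per-pixel detail."""
    dx = _sample(h, x + 1, y) - _sample(h, x - 1, y)
    dy = _sample(h, x, y + 1) - _sample(h, x, y - 1)
    return _to_normal(dx, dy, strength)

def normal_sobel3x3(h, x, y, strength=2.0):
    """Same idea, but the Sobel kernel averages over a neighbourhood,
    which smooths the result -- good for broad shapes, but it blurs
    small details."""
    dx = (_sample(h, x + 1, y - 1) + 2 * _sample(h, x + 1, y) + _sample(h, x + 1, y + 1)
          - (_sample(h, x - 1, y - 1) + 2 * _sample(h, x - 1, y) + _sample(h, x - 1, y + 1))) / 4.0
    dy = (_sample(h, x - 1, y + 1) + 2 * _sample(h, x, y + 1) + _sample(h, x + 1, y + 1)
          - (_sample(h, x - 1, y - 1) + 2 * _sample(h, x, y - 1) + _sample(h, x + 1, y - 1))) / 4.0
    return _to_normal(dx, dy, strength)
```

The wider the kernel, the smoother (and blurrier) the resulting normals, which is exactly why high- and low-detail areas want different filters.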

In this case it helps to split the height maps into layers and combine them post-normal-filter.


Sobel 5×5, 4-sample, 4-sample, combined

Lastly, it is possible to merge normal maps in an image editing program with mostly correct results. This lets you do things like add detailed noise to a smooth normal map, or merge different shapes.


An indented ring normal map combined with a column normal map

In order to do this, you have to do the following:

  1. Make two copies of the normal map you are merging in.
  2. In one, replace the whole blue channel with 50% grey; in the other, replace the red and green channels with pure white.
  3. Set the red-green layer’s blend effect to “Overlay”, and set the blue layer’s to “Multiply”.
  4. Tweak levels as needed.
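Collapsed into per-channel arithmetic, the recipe above works out to: red and green blend via Overlay (50% grey is Overlay’s identity, which is why the blue channel gets neutralized in one copy), and blue blends via Multiply (pure white is Multiply’s identity). A sketch using the standard blend-mode formulas; the function names are mine, not any tool’s:

```python
def overlay(base, top):
    """Standard "Overlay" blend on 8-bit channel values (0-255)."""
    b, t = base / 255.0, top / 255.0
    out = 2 * b * t if b < 0.5 else 1 - 2 * (1 - b) * (1 - t)
    return round(out * 255)

def multiply(base, top):
    """Standard "Multiply" blend on 8-bit channel values."""
    return round(base * top / 255.0)

def merge_normals(base_px, top_px):
    """Merge one normal-map pixel into another, following the layer
    recipe above: red and green via Overlay, blue via Multiply."""
    (br, bg, bb), (tr, tg, tb) = base_px, top_px
    return (overlay(br, tr), overlay(bg, tg), multiply(bb, tb))

# A neutral top pixel leaves the base pixel (nearly) untouched:
print(merge_normals((100, 150, 200), (128, 128, 255)))  # (100, 150, 200)
```

“Mostly correct” is the operative phrase: the result is no longer a perfectly normalized vector, which is why the levels tweaking in step 4 is sometimes needed.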

By now you should have enough to get started making your own normal maps, so now…

You Need To Render Things Differently

Well, you don’t need to, but directly rendering sprites with normal-mapped lighting severely limits the number of lights you can have on screen. To get around this limitation, Full Bore uses deferred shading, a technique more often seen in 3D games but one perfectly suited to what we’re trying to do here. With deferred shading, rendering happens in two stages.

First, you write all the information you need to run your light shader into one or more frame buffers. This can be accomplished efficiently by using multiple render targets and appropriately written shaders (how exactly you do that depends on what library or tool you’re using). In Full Bore, every texture has a corresponding normal and luminance texture which lines up exactly with the original color texture, so when a given sprite is drawn, it’s trivial to write the other data to the appropriate frame buffer by doing another texture lookup. The set of three screenshots near the beginning of the article shows what Full Bore’s three frame buffers look like.

Second, to actually light up the game, you use those frame buffers as textures to draw your lights. Oh, did I not tell you?
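The two stages can be simulated on the CPU with plain arrays standing in for the frame buffers. This is a conceptual sketch only, not Full Bore’s renderer; the light tuple layout and the linear falloff are my own assumptions.

```python
import math

def light_pass(diffuse, normal, lights):
    """Stage two of deferred shading, in miniature: for every pixel,
    read the "G-buffer" (here, a diffuse brightness plus a surface
    normal stored in 2D lists by the first stage) and accumulate the
    diffuse N.L term of each light. A light is (x, y, z, radius,
    intensity), with z its height above the screen plane."""
    h, w = len(diffuse), len(diffuse[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nx, ny, nz = normal[y][x]
            lit = 0.0
            for lx, ly, lz, radius, intensity in lights:
                to_light = (lx - x, ly - y, lz)
                dist = math.sqrt(sum(c * c for c in to_light))
                if dist > radius:        # pixel outside the light's reach
                    continue
                ux, uy, uz = (c / dist for c in to_light)
                ndotl = max(0.0, nx * ux + ny * uy + nz * uz)
                falloff = 1.0 - dist / radius   # assumed linear falloff
                lit += intensity * ndotl * falloff
            out[y][x] = diffuse[y][x] * lit
    return out
```

On a GPU the two outer loops are the rasterizer drawing each light’s geometry and the inner body is the light shader; the whole point of deferring is that the body only runs for pixels a light actually covers.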

Surprise, Lights Are Geometry Now

Light Geometry for a Spotlight

Your light shader can only be executed by drawing some geometry to the screen.  There are a lot of different ways that you can go about doing this but there is some common ground.

  1. The light geometry needs to represent the shape of the light you’re drawing and have appropriate UV coordinates. In Full Bore we compute the UV coordinates in the shader by dividing the vertex position by the screen size, which cuts down on GPU bandwidth usage.
  2. Each light needs to, at least, know where the light is being emitted from. Embedding this in the geometry data is wasteful, so you will need to send it, along with color/brightness/etc., in a shader uniform or as part of your geometry instance data.
  3. Even in a 2D game, each light will have a “height” above the screen. This parameter is useful because it behaves like you would expect it to: low lights predominantly light up edges, and high lights light things more evenly.
  4. Though the cost of drawing lights is low, they will be your biggest performance killer. Make sure the geometry is an appropriate shape and only as big as the area that will be illuminated, and, naturally, cull any lights that aren’t visible.

Arranging lights in an aesthetically pleasing manner is a subject for another article, but it’s safe to say there is a lot of interesting stuff to play with once you have a working lighting system.

And that’s the basics: you need to make a lot of normal maps, and you should probably use deferred shading for rendering your lighting.  Have at it!

Plea for Feedback

I’ve tried to keep this as non-platform-specific as I can, and I am sure that has created some blank spots in my explanations. Please feel free to ask me questions in the comments and I will update the article as needed. If there is any interest I can write about Direct3D or OpenGL implementation details, so be sure to let me know!


10 Responses to Lighting a 2D Game

  1. Tyler Wright says:

    Great post! This gives me a lot to think about. I’ll do more research to figure out what I’m missing, but I didn’t understand the deferred shading process very well, or what the luminance texture is. But it gives me a great starting point for research, so thank you for a great contribution!

  2. Casey Carlin says:

    Deferred shading is a pretty complicated topic that I didn’t want to get in to too much detail about because it gets very implementation specific very quickly. It’s definitely what I consider the better method for lighting a 2D game but if you have a strictly constrained number of lights it’s not necessary. The main reason to use deferred shading is so that performance scales better when you have a lot of lights. When you don’t use deferred shading, you need to draw your whole scene multiple times. This can be as bad as one-scene-draw per light depending on how complicated your shaders are. With deferred shading you render all of the information your lighting shader code needs up front, so each light you render executes your lighting code once for each pixel that is actually lit by that light. It’s a pretty classic performance-flexibility trade-off. I’ve never found a single deferred rendering writeup that I was happy with so maybe I should write a followup….

    The luminance map I completely forgot to mention. Right now, it’s just a self-illumination map, for things like computer screens and such that should not be affected by external light very much or at all. Very simple :D

    • Gerald Howes says:

      I am trying to get 2D deferred lighting into my OpenGL engine. I have had little luck as everything is xna or 3d. Could you please share how you did this for your game and maybe give code samples? Thanks

      • Casey Carlin says:

        Using deferred rendering for a 2D game is actually simpler in a lot of ways, but a tutorial for a 3D game is still very applicable. You can ignore the steps where you generate the normal and position g-buffers. Instead, in order to supply normal data for sprites, you use multiple Frame Buffer Objects and shaders that write your diffuse and normal textures to those FBOs. You can then use those FBOs as textures when you draw the lights. Full Bore’s rendering code is caught up in an abstraction framework, so any code I tried to simply post would be confusing. Once the full game is released, bother me, and I might be convinced to make a proper sample.

  3. Ehab Subahi says:

    I saw a video about the game on YouTube and I was impressed by the lighting.
    It is awesome. I wanted to find out how you made it, and a quick Google search led me here. Great work, guys.

  4. Thank you for such an interesting post. It would be interesting to see how you implemented lighting.
    Going to play your game on IndieGameStand. :)

  5. Yop says:

    I made a quick test using your assets here => http://www.yopsolo.fr/wp/2013/12/24/as3-2d-lighting-with-colormatrixfilter/

    For this test I don’t use AGAL (aka Flash shader language) but a regular bitmapfilter

  6. James says:

    Very interested in the light geometry in the picture. Is the radius fall-off an exponential interpolation between the max radius and the min radius? Would love to see the for-loop code for that piece specifically.

    • Casey Carlin says:

      The falloff is computed in the shader on a per-pixel basis. The two contributing variables are that pixel’s distance from the light which is being represented by the geometry and the angle encoded in the normal map for that pixel.

      If you look at the shader here: http://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_shading_model
      the diffuse component of the light (in Full Bore’s case, the only component) is the dot-product of the surface normal and the light vector:

      float NdotL = dot( normal, lightDir );

      I think that in Photoshop terms that would make the falloff spherical.
