Last year I created a demo showing how CSS 3D transforms could be used to create 3D environments. The demo was a technical showcase of what could be achieved with CSS at the time but I wanted to see how far I could push things, so over the past few months I’ve been working on a new version with more complex models, realistic lighting, shadows and collision detection. This post documents how I did it and the techniques I used.
View the demo (best experienced in Safari)
Creating 3D objects
In today’s 3D engines an object’s geometry is stored as a collection of points (or vertices), each having an x, y and z property that defines its position in 3D space. A rectangle, for example, would be defined by four vertices, one for each corner. Each corner can be individually manipulated, allowing the rectangle to be pulled into different shapes by moving its vertices along the x, y and z axes. The 3D engine will use this vertex data and some clever math to render 3D objects on your 2D screen.
With CSS transforms this is turned on its head. We can’t define arbitrary shapes using a set of points; we’re stuck with HTML elements, which are always rectangular and have two-dimensional properties such as top, left, width and height to determine their position and size. In many ways this makes dealing with 3D easier, as there’s no complex math to deal with — just apply a CSS transform to rotate an element around an axis and you’re done!
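For example, standing a flat element up like a wall takes nothing more than a transform (the class name and values here are illustrative):

```css
.wall {
  /* stand the element upright by rotating it around the x-axis */
  transform: rotateX(90deg);
}
```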
Creating objects from rectangles seems limiting at first but you can do a surprising amount with them, especially when you start playing with PNG alpha channels. In the image below you can see the barrel top and wheel objects appear rounded despite being made up of rectangles.
Each object in the demo is built from planes, where a plane is a single <div> element. Planes can be added to assemblies (a wrapper <div> element), allowing the entire object to be rotated and moved as a single entity. A tube is an assembly containing planes rotated around an axis, and a barrel is a tube with a top plane and another for the bottom.
The following example shows this in practice – have a look at the “JS” tab:
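The idea can be sketched in a few lines of JavaScript. This is my own illustrative code, not the demo’s: each plane in a tube gets a rotateY to fan it around the axis and a translateZ to push it out to the radius.

```javascript
// Sketch: compute the transform for each plane of an n-sided tube.
// (Function and parameter names are assumptions, not from the demo.)
function tubeTransforms(sides, radius) {
  const step = 360 / sides; // angle between adjacent planes
  const transforms = [];
  for (let i = 0; i < sides; i++) {
    // rotate the plane around the y-axis, then push it out to the radius
    transforms.push(`rotateY(${i * step}deg) translateZ(${radius}px)`);
  }
  return transforms;
}

// In the browser, each transform would be applied to a plane <div>
// nested inside the assembly <div> that wraps the whole tube.
```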
Lighting

Lighting was by far the biggest challenge in this project. I won’t lie, the math nearly broke me, but it was worth the effort because lighting brings an incredible sense of depth and atmosphere to an otherwise flat and lifeless environment.
As I mentioned earlier, an object in your average 3D engine is defined by a series of vertices. To calculate lighting, these vertices are used to compute a “normal”, which determines how much light will hit the centre point of a surface. This poses a problem when creating 3D objects with HTML elements because this vertex data doesn’t exist. So the first challenge was to write a set of functions to calculate the four vertices (one for each corner) of an element that had been transformed with CSS, so that lighting could be calculated. Once that was figured out I began to experiment with different ways to light objects. In my first experiment, I used multiple background-images to simulate light hitting a surface by combining a linear-gradient with an image. The effect uses a gradient that begins and ends with the same rgba value, producing a solid block of colour. Varying the value of the alpha channel allows the underlying image to bleed through the colour block, creating the illusion of shading.
To achieve the second darkest effect in the above image I apply the following styles to an element:
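The rule takes roughly this shape (the alpha value and texture URL here are placeholders, not the demo’s actual values):

```css
/* a gradient with identical start and end colours produces a solid
   translucent black layer over the texture */
background-image:
  linear-gradient(rgba(0, 0, 0, 0.6), rgba(0, 0, 0, 0.6)),
  url("texture.png");
```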
In practice, these styles are not predefined in a stylesheet; they are calculated dynamically and applied directly to the elements with JavaScript.
This technique is referred to as flat shading. It’s an effective method, but it results in an entire surface being shaded uniformly. For example, if I created a 3D wall that extended into the distance, it would be shaded identically along its entire length. I wanted something that looked more realistic.
A second stab at lighting
To simulate real lighting, surfaces need to darken as they extend beyond the range of a light source, and if multiple lights hit the same surface it should be shaded accordingly.
To flat shade a surface I only had to calculate the light hitting the centre point, but now I need to sample the light at various points on the surface so I can determine how light or dark each point should be. The math required to create this lighting data is identical to that used for flat shading.
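The per-sample math can be sketched as a minimal Lambertian model. All the names below are my own assumptions for illustration, and distance attenuation is omitted for brevity:

```javascript
// Basic vector helpers
function sub(a, b) { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
function cross(a, b) {
  return { x: a.y * b.z - a.z * b.y,
           y: a.z * b.x - a.x * b.z,
           z: a.x * b.y - a.y * b.x };
}
function normalize(v) {
  const len = Math.hypot(v.x, v.y, v.z);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// Normal computed from three of the element's transformed corner vertices
function surfaceNormal(v0, v1, v2) {
  return normalize(cross(sub(v1, v0), sub(v2, v0)));
}

// How much light reaches a sampled point (0 = none, 1 = full):
// the dot product of the surface normal and the direction to the light.
function sampleLight(point, normal, light) {
  const toLight = normalize(sub(light.position, point));
  return Math.max(0, dot(normal, toLight));
}
```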
Initially, I tried producing a radial-gradient from the lighting data to use in place of the linear-gradient in my earlier attempt. The results were more realistic, but multiple light sources were still a problem because layering multiple gradients on top of each other causes the underlying texture to get progressively darker. If CSS supported image compositing and blending modes (they are coming) it may have been possible to make radial gradients work.
The solution was to use a <canvas> element to programmatically generate a new texture that could be used as a light map. With the calculated lighting data I could draw a series of black pixels, varying each one’s alpha channel based on the amount of light that would hit the surface at that point. Finally, the canvas.toDataURL() method was used to encode the image and use it in place of the linear-gradient in my first experiment. Repeating this process for each surface produces a realistic lighting effect for the entire environment.
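That step might look something like this. The function and variable names are my own assumptions; only the technique comes from the demo:

```javascript
// Convert a sampled light intensity (0..1) into the alpha of a black
// "shadow" pixel (0 = fully lit, so the texture shows through untouched).
function lightAlpha(intensity) {
  return Math.min(1, Math.max(0, 1 - intensity));
}

// Draw one black pixel per sampled point on a canvas 2D context.
function buildLightMap(ctx, samples) {
  for (let y = 0; y < samples.length; y++) {
    for (let x = 0; x < samples[y].length; x++) {
      ctx.fillStyle = `rgba(0, 0, 0, ${lightAlpha(samples[y][x])})`;
      ctx.fillRect(x, y, 1, 1); // one pixel per sample
    }
  }
}

// In the browser the canvas is then encoded and layered over the texture:
// buildLightMap(canvas.getContext("2d"), samples);
// el.style.backgroundImage = `url(${canvas.toDataURL()}), url("texture.png")`;
```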
Calculating and drawing these textures is intensive work. The basement ceiling and floor are both 1000 x 2000 pixels in size; creating a texture to cover this area isn’t practical, so I only sample lights every 12 pixels, which produces a light map 12 times smaller than the surface it will cover. Setting background-size: 100% causes the browser to scale the texture up using bilinear (or similar) filtering so it fits the surface area. The scaling effect produces a result that is almost identical to a light map generated for every single pixel.
The background style rule for applying a light map and texture to a surface looks something like this:
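Something along these lines, with the data URI truncated and the texture URL assumed:

```css
/* generated light map on top, tiled texture underneath */
background-image:
  url("data:image/png;base64,iVBORw0KGgo..."),
  url("floor-texture.png");
background-size: 100%;
```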
Which produces the final lit surface:
Casting shadows

Settling on canvas for lighting made casting shadows possible. The logic behind shadow casting turned out to be rather simple. Ordering surfaces based on their proximity to a light source allowed me to not only produce a light map for each surface but also determine whether a previous surface had already been hit by the current ray of light. If it had, I could set the relevant light map pixel to be in shadow. This technique allows one image to be used for both lighting and shadows.
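In sketch form, assuming each light map sample corresponds to a ray from the light (all names here are my own, invented for illustration):

```javascript
// Surfaces are processed nearest-first, so the first surface a ray hits is
// lit and any later surface on the same ray falls in shadow.
function castShadows(surfaces, light) {
  const ordered = [...surfaces].sort((a, b) =>
    distanceTo(a, light) - distanceTo(b, light));
  const blocked = new Set(); // ray ids already intercepted
  for (const surface of ordered) {
    for (const rayId of surface.rayHits) {
      if (blocked.has(rayId)) {
        surface.shadowed.push(rayId); // a nearer surface got there first
      } else {
        blocked.add(rayId); // first hit: this point is fully lit
      }
    }
  }
}

function distanceTo(surface, light) {
  const c = surface.centre, l = light.position;
  return Math.hypot(c.x - l.x, c.y - l.y, c.z - l.z);
}
```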
Collision detection

Collision detection uses a height map: a top-down image of the level that uses colour to represent the height of objects within it. White represents the deepest and black the highest position the player can reach. As the player moves around the level, I convert their position into 2D coordinates and use them to check the colour in the height map. If the colour is lighter than the one at the player’s last position, the player falls; if it’s slightly darker, the player can step up or jump onto an object. If the colour is much darker, the player comes to a stop – I use this for walls and fences. Currently this image is drawn by hand, but I will be looking into creating it dynamically.
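A sketch of that check, with the threshold and names invented for illustration:

```javascript
// Brightness sampled from the height map: 255 (white) = deepest floor,
// 0 (black) = highest point the player can reach.
const STEP_LIMIT = 40; // assumed: largest rise the player can step or jump up

function resolveMove(currentBrightness, targetBrightness) {
  // darker pixels sit higher, so height = 255 - brightness
  const rise = (255 - targetBrightness) - (255 - currentBrightness);
  if (rise < 0) return "fall";            // target is lighter: ground drops
  if (rise === 0) return "walk";          // same colour: level ground
  if (rise <= STEP_LIMIT) return "step";  // slightly darker: climb or jump up
  return "blocked";                       // much darker: wall or fence
}
```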
What’s next?

A game would be a natural next step for this project — it would be interesting to see how scalable these techniques are. In the short term, I’ve started working on a prototype CSS3 renderer for the excellent Three.js that uses these same techniques to render geometry and lights created by a real 3D engine.