CSS transforms make it easy to manipulate an element in 3D space without worrying about the complex maths involved. But what if you want to do more than transform elements? How can you shade an element or test whether two transformed elements intersect? To do that you need access to the element's vertex data. Unfortunately, that data doesn't exist.
In this post I’m going to explain how to generate vertex data for elements transformed using CSS and demonstrate how to use this data to shade elements using a light source. This research made the lighting and shadow techniques in my CSS3 FPS tech demo possible. (Also, if you recently attended Hacker News London and heard my CSS 3D talk, this is the blog post as promised).
Before we can calculate anything we need to set a few ground rules. Firstly, all elements must be absolutely positioned in the centre of the viewport and can only be moved using CSS transforms. Secondly, when an element is added to the viewport I ensure its position reference and transform origins are the same — it makes transforms easier to work with. By default, elements are positioned relative to their corners (using the top, left, bottom or right properties) while transforms are relative to the centre point. I prefer to work with the centre point as my reference, normalising the origins by pulling the element up by half its height and left by half its width using negative margins:
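The origin-normalising rule might look something like this (the .face class name and the 300 × 200 dimensions are assumptions for illustration):

```css
.face {
    position: absolute;
    top: 50%;
    left: 50%;
    width: 300px;
    height: 200px;
    /* pull the element up by half its height and left by half its
       width so the position reference matches the transform origin */
    margin-left: -150px;
    margin-top: -100px;
}
```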
With both origins aligned we can determine the 4 vertices for the corners of the element. The convention is to define the vertices in a clockwise direction as points a, b, c and d, with point a as the top-left corner of the element, b as the top right, c as the bottom right and d as the bottom left.
The first step is to ignore any CSS transforms and calculate the corner positions of the element in its flat 2D state. To do this we need to determine the element's width and height and halve them. These values are then used to set the x and y properties of each vertex. Corners above and to the left of the centre will have negative values and those below and to the right will have positive values. The z property is always 0 as this element only exists in 2D space at the moment.
These simple calculations are handled by the following function:
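A minimal sketch of such a function (reading offsetWidth and offsetHeight is an assumption about how the element's dimensions are obtained):

```javascript
// Compute the four corner vertices of an untransformed element,
// relative to its centre point. z is always 0 in flat 2D state.
function computeVertexData(el) {
    var halfWidth = el.offsetWidth / 2;
    var halfHeight = el.offsetHeight / 2;
    return {
        a: { x: -halfWidth, y: -halfHeight, z: 0 }, // top left
        b: { x:  halfWidth, y: -halfHeight, z: 0 }, // top right
        c: { x:  halfWidth, y:  halfHeight, z: 0 }, // bottom right
        d: { x: -halfWidth, y:  halfHeight, z: 0 }  // bottom left
    };
}
```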
If we call computeVertexData, passing in our 300 x 200 element, it will return the following vertex data:
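The four corners work out to half the width and height in each direction:

```javascript
// Vertex data for a 300 x 200 element (half extents: 150 and 100)
var vertexData = {
    a: { x: -150, y: -100, z: 0 },
    b: { x:  150, y: -100, z: 0 },
    c: { x:  150, y:  100, z: 0 },
    d: { x: -150, y:  100, z: 0 }
};
```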
To test the function we can add four <div> elements to the DOM and set their transform properties to the calculated values above, positioning them over the corners of the element:
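A sketch of that test — the marker class name and container are assumptions; only cornerTransform is pure logic, the rest is browser-only:

```javascript
// Build a translate3d() string that positions a marker over a vertex.
function cornerTransform(v) {
    return "translate3d(" + v.x + "px, " + v.y + "px, " + v.z + "px)";
}

// Browser-only demo: drop a marker <div> on each corner of the element.
function markCorners(vertices, container) {
    Object.keys(vertices).forEach(function (key) {
        var marker = document.createElement("div");
        marker.className = "marker";
        marker.style.position = "absolute";
        marker.style.transform = cornerTransform(vertices[key]);
        container.appendChild(marker);
    });
}
```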
Accounting for transforms
Now that we have the vertex data for a 2D element we need to determine the rotation and translation of the element in 3D space by decomposing its transform matrix. We do this by querying the transform property of the element using window.getComputedStyle:
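Something along these lines — the win parameter is an illustrative injection point; in the browser you would just use window:

```javascript
// Read the computed transform of an element as a matrix string.
function getTransformString(el, win) {
    var style = (win || window).getComputedStyle(el);
    // older engines exposed the value behind a vendor prefix
    return style.transform ||
           style.webkitTransform ||
           style.MozTransform ||
           "none";
}
```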
The resulting value (a string) will depend on the transform that was applied to the element:
none – no transform was applied to the element
matrix(m11, m12, ... , m23) – a 2D transform was applied to the element
matrix3d(m11, m12, m13, ... , m44) – a 3D transform was applied to the element
The string is split into its component parts and converted into a 4×4 matrix (the parseMatrix function shows how this is achieved), which can then be decomposed to determine the original rotation and translation of the element.
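A sketch of such a parser, returning a 16-element column-major array as used by matrix3d (the exact return format in the original code is an assumption):

```javascript
// Convert "none", "matrix(...)" or "matrix3d(...)" into a 16-element
// column-major array, as used by the CSS matrix3d() function.
function parseMatrix(str) {
    // identity matrix for untransformed elements
    if (!str || str === "none") {
        return [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1];
    }
    var values = str.match(/\(([^)]+)\)/)[1].split(",").map(Number);
    if (values.length === 16) {
        return values; // already a 3D matrix
    }
    // expand a 2D matrix(a, b, c, d, e, f) into its 3D equivalent
    var m = values;
    return [m[0], m[1], 0, 0,
            m[2], m[3], 0, 0,
            0,    0,    1, 0,
            m[4], m[5], 0, 1];
}
```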
Here is the function that handles the matrix decomposition for translation and rotation:
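A simplified sketch of that decomposition, assuming the matrix contains only rotation and translation (no scale or skew) and a Z·Y·X rotation order; note that it takes the parsed matrix directly, whereas the post's getTransform reads and parses the element's computed style first:

```javascript
// Decompose a 16-element column-major matrix into translation
// components and Euler rotation angles (in radians).
function getTransform(m) {
    return {
        translate: { x: m[12], y: m[13], z: m[14] },
        rotate: {
            x: Math.atan2(m[6], m[10]),
            y: Math.asin(-m[2]),
            z: Math.atan2(m[1], m[0])
        }
    };
}
```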
Now that we can calculate the rotation and translation values of an element we can apply a CSS transform to our element and update its flat 2D vertex data with the 3D components.
Calling getTransform(document.getElementById("face")) will return:
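The exact values depend on the transform applied to the demo element; for, say, transform: translate3d(50px, 0, 100px) rotateY(45deg), the result would be shaped like this (illustrative values only):

```javascript
// Illustrative only: the shape of the object getTransform returns
var transform = {
    translate: { x: 50, y: 0, z: 100 },       // pixels
    rotate:    { x: 0, y: Math.PI / 4, z: 0 } // radians (45 deg about Y)
};
```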
If we add the x, y and z components of the translate property to the x, y and z components of the flat vertex data we end up with:
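Adding the translation is straightforward; a sketch (the function name is mine, not from the post):

```javascript
// Add a transform's translation components to each vertex in place.
function translateVertices(vertices, translate) {
    Object.keys(vertices).forEach(function (key) {
        vertices[key].x += translate.x;
        vertices[key].y += translate.y;
        vertices[key].z += translate.z;
    });
    return vertices;
}
```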
We do the same for rotation, albeit with more complicated maths (see demo), and voila! our a, b, c and d vertices are now real coordinates in 3D space.
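For reference, rotating a vertex about each axis in turn looks something like this (X, then Y, then Z; the post's demo may use a different order or a matrix multiplication instead):

```javascript
// Rotate a point about the X, Y and Z axes by the given Euler angles.
function rotateVertex(p, r) {
    var x = p.x, y = p.y, z = p.z, t;
    // about X
    t = y;
    y = t * Math.cos(r.x) - z * Math.sin(r.x);
    z = t * Math.sin(r.x) + z * Math.cos(r.x);
    // about Y
    t = x;
    x = t * Math.cos(r.y) + z * Math.sin(r.y);
    z = -t * Math.sin(r.y) + z * Math.cos(r.y);
    // about Z
    t = x;
    x = t * Math.cos(r.z) - y * Math.sin(r.z);
    y = t * Math.sin(r.z) + y * Math.cos(r.z);
    return { x: x, y: y, z: z };
}
```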
Complex objects
Until now we have been working with a single element, but we also have to account for nesting. Elements are transformed relative to their parent, so we need to walk up the DOM tree and add parent transforms to the vertex data. Once we have accounted for ancestor transforms, our final computeVertexData function looks like this:
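A simplified sketch of the ancestor walk (translations only, for brevity; the full version also rotates the vertices by each ancestor's rotation, and resolves getTransform internally rather than taking it as a parameter as done here to keep the sketch self-contained):

```javascript
// Compute vertex data for an element, accumulating the transforms of
// the element and all of its ancestors. Simplified: translations only.
function computeVertexData(el, getTransform) {
    var w = el.offsetWidth / 2;
    var h = el.offsetHeight / 2;
    var vertices = {
        a: { x: -w, y: -h, z: 0 },
        b: { x:  w, y: -h, z: 0 },
        c: { x:  w, y:  h, z: 0 },
        d: { x: -w, y:  h, z: 0 }
    };
    // walk up the DOM tree, applying each ancestor's transform
    for (var node = el; node; node = node.parentNode) {
        var t = getTransform(node);
        Object.keys(vertices).forEach(function (key) {
            // the full version applies t.rotate to the vertex here
            vertices[key].x += t.translate.x;
            vertices[key].y += t.translate.y;
            vertices[key].z += t.translate.z;
        });
    }
    return vertices;
}
```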
The following demo shows nested transforms in action. It features an exploded cube that rotates using a CSS animation. For each animation frame the face vertex data is recalculated and repainted to reflect the transform of the parent element.
Using vertex data to shade faces
Now we have the vertex data for our element we can use established, well documented techniques for calculating light, shadows, collisions etc. I’m going to keep things simple and implement flat shading. We’re going to need a small JavaScript library to help with the vector maths (I’m using my own, vect3.js) and a tutorial to explain how to implement lighting.
Did you read the tutorial? — I didn’t think so. Well, it doesn’t matter for now. Essentially, for each element we need to determine its normal, its centre point and the direction to the light. We then take the dot product of the normal and direction vectors to determine how similar they are. The more similar, the more light the element receives. Here’s the implementation:
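A sketch of the flat-shading calculation, with the vect3.js-style vector helpers inlined so it stays self-contained (the helper names are mine):

```javascript
function subtract(a, b) { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
function cross(a, b) {
    return {
        x: a.y * b.z - a.z * b.y,
        y: a.z * b.x - a.x * b.z,
        z: a.x * b.y - a.y * b.x
    };
}
function normalize(v) {
    var len = Math.sqrt(dot(v, v));
    return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// Flat shading: returns 0 (unlit) to 1 (fully lit).
function computeLight(vertices, lightPos) {
    // surface normal from two edges of the face
    var normal = normalize(cross(
        subtract(vertices.b, vertices.a),
        subtract(vertices.d, vertices.a)
    ));
    // centre of the face (midpoint of opposite corners)
    var centre = {
        x: (vertices.a.x + vertices.c.x) / 2,
        y: (vertices.a.y + vertices.c.y) / 2,
        z: (vertices.a.z + vertices.c.z) / 2
    };
    // direction from the face to the light
    var toLight = normalize(subtract(lightPos, centre));
    // the more the face points at the light, the more light it receives
    return Math.max(0, dot(normal, toLight));
}
```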
To shade the element I'm using a black linear-gradient and varying the alpha channel values to control how much of the background-color bleeds through. See my Creating 3D worlds with HTML and CSS post for more information on the technique.
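Applying the light value could look something like this (the exact gradient string used in the post's demo may differ):

```javascript
// Darken an element by overlaying a black gradient whose alpha is the
// inverse of the light value (1 = fully lit = fully transparent overlay).
function applyShade(el, light) {
    var alpha = 1 - light;
    el.style.backgroundImage =
        "linear-gradient(rgba(0,0,0," + alpha + "), rgba(0,0,0," + alpha + "))";
}
```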
Shading complex objects
Let’s shade something a little more complicated. Recently, Julian Garnier released his CSS editor Tridiv, which comes with an X-Wing model — let’s use that.
The X-Wing model has 297 faces and a reasonably deep DOM tree. Shading this many elements at once brings the browser to its knees resulting in a dismal 2-3 FPS at best. I think we can do a little better than that.
It’s optimisation time!
Running a quick profile in Chrome developer tools reveals that calls to computeVertexData are the biggest bottleneck. The function queries the DOM, triggering multiple style recalculations (or reflows). It then decomposes the matrix of the element and its ancestors to determine the final transform calculations.
Instead of doing this work for every animation frame we can calculate most of what we need upfront. Pre-calculating the vertex data removes all computeVertexData calls from the rendering loop, eradicating the DOM reflow bottlenecks.
With the normal and centre of each face pre-calculated, the render loop only has to extract the transform components of the X-Wing wrapper element and add them to the pre-calculated face values. I'm also storing the last calculated light value for each face so I can check whether it actually changed between frames before committing to a DOM update. These changes boost rendering performance to around 30 FPS.
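The change-check can be as simple as this sketch (the names are mine):

```javascript
// Only touch the DOM when a face's light value actually changed.
function updateFaceShade(face, light, paint) {
    if (face.lastLight === light) {
        return false; // no change, skip the DOM write
    }
    face.lastLight = light;
    paint(face, light);
    return true;
}
```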
There’s still room for improvement. Recalculating the normals and translations for each face, every frame, quickly eats up precious processing time. We’re performing these calculations to determine the position of 297 faces relative to a light source, so why not just move the light instead? Decomposing and inverting the transforms applied to the X-Wing wrapper and applying them to the light source means we only need to perform a single translation calculation per frame.
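Moving the light into the model's space amounts to applying the inverse transform to it: undo the translation, then undo the rotations in reverse order. A sketch, assuming a rotate-then-translate transform with no scale (names are mine):

```javascript
function rotX(p, a) {
    return { x: p.x,
             y: p.y * Math.cos(a) - p.z * Math.sin(a),
             z: p.y * Math.sin(a) + p.z * Math.cos(a) };
}
function rotY(p, a) {
    return { x: p.x * Math.cos(a) + p.z * Math.sin(a),
             y: p.y,
             z: -p.x * Math.sin(a) + p.z * Math.cos(a) };
}
function rotZ(p, a) {
    return { x: p.x * Math.cos(a) - p.y * Math.sin(a),
             y: p.x * Math.sin(a) + p.y * Math.cos(a),
             z: p.z };
}

// Move the light into the model's local space by applying the inverse
// of the model's transform: undo translation, then rotations in reverse.
function lightToModelSpace(light, t) {
    var p = {
        x: light.x - t.translate.x,
        y: light.y - t.translate.y,
        z: light.z - t.translate.z
    };
    p = rotZ(p, -t.rotate.z);
    p = rotY(p, -t.rotate.y);
    p = rotX(p, -t.rotate.x);
    return p;
}
```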
The final change was to add a throttle to the render function so we can bail out after a specific amount of time — 5ms in this case. This allows the renderer to do as much work as it can while keeping frame rates as high as possible (60 FPS in Chrome / Safari).
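A sketch of that budget-based throttle (the now parameter is injectable for illustration; in the browser, performance.now would do):

```javascript
// Render as many faces as fit in the time budget, resuming where the
// previous frame left off so shading spreads across frames.
function makeThrottledRenderer(faces, paintFace, budgetMs, now) {
    var index = 0;
    return function render() {
        var start = now();
        var painted = 0;
        while (painted < faces.length && now() - start < budgetMs) {
            paintFace(faces[index]);
            index = (index + 1) % faces.length;
            painted += 1;
        }
        return painted; // how many faces were shaded this frame
    };
}
```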
This approach means the model will always rotate and translate as fast as the browser can manage, but the shading is progressively generated over 2 or 3 frames. This is a great trick to use if you're wrestling with multiple DOM updates.
That's it
...phew! I appreciate the content of this post may not be everyone's cup of tea but if you stuck with it, thanks for reading and I hope you found it interesting.