A while ago now, I tweeted about using cookies as a means of serving images to a browser based on the size of the device viewport. Scott Jehl has already implemented the idea into a branch of his responsive images script but now that I have a platform to document my ideas I’ve decided to write up my approach to responsive images.
CSS media queries allow us to develop flexible designs that adapt to the device rendering them. Initially designers worked in a “desktop-down” fashion, designing for the largest screen sizes first and working down to the smallest. This approach has since been challenged, and projects such as Mobile Boilerplate and ‘320 and Up’ are now encouraging a “mobile-up” approach to design.
When it comes to imagery, it doesn’t matter which way you choose to implement responsive design because you’ll always end up with the same problem: the dimensions of the images used in a page will always be dictated by the largest screen resolution you designed for.
This becomes an issue as a design scales down to fit smaller screens. The high resolution image is still downloaded by the browser before it’s scaled down to fit into a smaller space. Not only is this a waste of bandwidth, downloading oversized images can also significantly delay the initial page render time. These issues are compounded if the visitor is browsing over a mobile network.
We need a way to serve a scaled version of an image based on the dimensions of the device viewport. There have been a few approaches to this issue, most of which require JavaScript to dynamically modify the src attribute of <img> tags based on the window size of the browser. I have a different approach to solving this problem.
Enter cookies…
Whenever a browser requests a file from a server it automatically forwards any cookie data along with the request. If we use JavaScript to populate a cookie with the current screen dimensions all subsequent requests made by the browser will pass this data to the server. In other words, the server would know the screen size of the device asking for the file.
Setting the cookie is easy. A single line of JavaScript will do the trick, but it must be added to the page <head> to ensure the cookie is set before any images are requested from the server.
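A minimal sketch of that line follows; the cookie name device_dimensions and the WIDTHxHEIGHT format are assumptions based on how the server-side script is described as reading it, not the original listing:

```javascript
// Build the cookie string holding the screen dimensions; 'path=/' scopes
// it to the whole site so every subsequent request carries it.
function dimensionsCookie(width, height) {
  return 'device_dimensions=' + width + 'x' + height + '; path=/';
}

// In the page <head>, before any <img> elements:
// document.cookie = dimensionsCookie(screen.width, screen.height);
```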
I’m storing the dimensions of the screen rather than the browser window so the server can determine the maximum size an image could be rendered at. I can’t reliably use the dimensions of the browser window because it could start out at a relatively small size, causing low-res images to be downloaded on a hi-res screen; these images would look terrible if the browser window was later enlarged.
Working with the cookie on the server
Now that the server is able to detect the dimensions of the requesting device, we need a script to act on this information and send back an optimised image. For this example I’m using PHP; the code we need is listed below:
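The original listing hasn’t survived here, so below is a sketch reconstructed from the description in the article; the breakpoint widths are illustrative assumptions, and the variant suffixes follow the ‘test-med.png’ / ‘test-low.png’ naming used in the article:

```php
<?php
// Sketch of images/index.php, reconstructed from the article's description.
// The breakpoint widths (480 / 1024) are illustrative assumptions.

// Pick the best variant for the given screen width; fall back to the
// original (high resolution) file when no cookie data is available.
function pick_variant($file, $device_width) {
    $dot  = strrpos($file, '.');
    $base = substr($file, 0, $dot);
    $ext  = substr($file, $dot + 1);
    if ($device_width > 0 && $device_width <= 480 && file_exists("$base-low.$ext")) {
        return "$base-low.$ext";
    }
    if ($device_width > 0 && $device_width <= 1024 && file_exists("$base-med.$ext")) {
        return "$base-med.$ext";
    }
    return $file;
}

function serve_image() {
    $device_width = 0;
    if (isset($_COOKIE['device_dimensions'])) {
        list($device_width) = explode('x', $_COOKIE['device_dimensions']);
    }
    // The query string holds the file name, e.g. /images/?test.png
    $file = basename($_SERVER['QUERY_STRING']);
    header('Content-Type: image/png');
    readfile(pick_variant($file, (int)$device_width));
}

if (PHP_SAPI !== 'cli') {
    serve_image();
}
```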
In my website’s ‘images’ folder I created ‘index.php’ and populated it with this code. I also added my high resolution image ‘test.png’ along with two smaller versions named ‘test-med.png’ and ‘test-low.png’; these will be served to smaller screens.
And finally we need an HTML document:
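The original HTML listing is also missing, so here is a sketch of the page; the file names and the cookie format are assumptions consistent with the rest of the article:

```html
<!DOCTYPE html>
<html>
<head>
  <script>
    /* Must run before any images are requested */
    document.cookie = 'device_dimensions=' + screen.width + 'x' +
                      screen.height + '; path=/';
  </script>
</head>
<body>
  <!-- The query string tells images/index.php which file we want -->
  <img src="images/?test.png" alt="test image">
</body>
</html>
```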
How it works
The HTML page contains the JavaScript to set the cookie and an <img> element. The image points to the ‘index.php’ script, using the query string (the part after the ‘?’) to specify the file name of the required image. The cookie will be set before the image is requested, so the screen dimensions of the device will also be passed to the PHP script.
The PHP script itself is pretty simple. It checks for the existence of the cookie and sets the $device_width and $device_height variables accordingly. If the cookie isn’t set, these variables evaluate to zero, causing the script to return the high resolution version of the image. If $device_width is not zero, the script determines which image is appropriate for the screen and, if it exists, returns it to the browser.
I purposely named the PHP script ‘index.php’ so I can omit its name from file references. I also added it to the images folder so I can toggle between responsive and non-responsive images by adding or removing the ‘?’ in an image URL.
What if cookies or JavaScript are disabled?
If JavaScript or cookies are disabled then the device dimensions cookie will never be set, which means the high-resolution version of the file will be downloaded as a fallback. Nothing breaks, and we’re no worse off than we were before using this technique.
Are there any caveats?
Yes, but only one: any images will need to be hosted on a cookie-enabled domain.
Summary
This technique isn’t limited to <img> elements; it could be used to serve responsive images to your CSS. In fact, it can be used for much more than serving images – any file requested from the server could, in theory, be responsive.
We could improve this technique further by appending window.devicePixelRatio to the cookie so the PHP script can detect HD and Retina displays and serve higher resolution graphics. The PHP script could also be adapted to automatically scale down images for us.
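One way the cookie could carry the pixel ratio; the format and function name here are assumptions, not part of the original:

```javascript
// Append the device pixel ratio so the server can distinguish a
// 1024px-wide standard screen from a 1024px-wide Retina screen.
function dimensionsCookieWithRatio(width, height, ratio) {
  return 'device_dimensions=' + width + 'x' + height +
         '@' + (ratio || 1) + '; path=/';
}

// document.cookie = dimensionsCookieWithRatio(
//     screen.width, screen.height, window.devicePixelRatio);
```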
Great to finally read a bit more about your experimentation with this, Keith!
I will certainly be experimenting with this soon.
The only problem, like you said, is for those who are using a cookie-free domain (such as static.domain.com) for serving up cookie-less assets. I suppose you could configure the subdomain to send cookies with requests for image extensions.
Clever approach. I do have another caveat, however, and that is with server-side caching. Each of these different-sized images has the exact same URL, and if the images are served with far-future Expires headers (as they should be), then edge servers like Akamai and other intermediary proxies (CDNs) will want to hold onto the response for the first image URL they see. Thereafter, setting the client's cookie will have no effect on subsequent requests to that image URL from any client: each client will be served the first image encountered by the proxy. In this case, you'll need to make sure that such proxies do not cache these resources, for example via a Cache-Control: private response header. (Another option would be a Vary: Cookie response header, but this wouldn't work if multiple cookies are stored on the client, like session IDs, for obvious reasons.) Disallowing CDNs from caching images is a big disadvantage to this approach. Maybe there's a workaround I'm not thinking of?

The Vary: Cookie response header could indeed be used if only the device_dimensions cookie is included in the request. If the page is located at www.example.com then perhaps a solution would be to limit the scope of the cookie to the (formerly) cookie-free domain at images.example.com that images are loaded from. I'm not sure if the browser security model allows cookies to be set for other subdomains; this might also first need document.domain = 'example.com'. Or am I missing something?
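A sketch of how an image script could act on this advice; the predicate name is hypothetical, while the headers themselves are standard HTTP:

```php
<?php
// Decide which caching header to send for a cookie-dependent image.
// If the only cookie in play is the dimensions cookie, Vary: Cookie lets
// proxies keep one copy per cookie value; otherwise shared caches must
// be told not to store the response at all.
function caching_header($only_dimensions_cookie) {
    return $only_dimensions_cookie
        ? 'Vary: Cookie'
        : 'Cache-Control: private';
}

// In the image script, before emitting the body:
// header(caching_header(count($_COOKIE) === 1 && isset($_COOKIE['device_dimensions'])));
```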
The caching issue you've raised is important and is certainly something to think about.
In this example assets are processed by a PHP script and (as far as I know) PHP, like other server-side languages, doesn't send far-future Expires headers for its output by default, so caching shouldn't be an issue if assets are requested directly from the server. That said, your point about proxies holding onto assets based on their URL could present another issue if they decide not to honour PHP's cache settings. I guess some experimentation is required.
Yeah, I'm not sure the benefits gained from device-tailored resources outweigh the loss of both client-side caching and edge-caching: the net result might be even less responsive if we don't have the latter.
You're right about client-side caching. If it's not utilised, the browser will continually request the same image and the potential gains of “responsiveness” will be lost. Setting a far-future Expires header will work for browser caching as the device screen dimensions won't change between requests. It's proxies that present the biggest problem. Is there a way to separately set Expires headers for edge and client caching?

I understand the conceptual advantage of making the actual resources responsive, but the drawbacks, including the inability to edge-cache the resources, may be too much to bear for some. In those cases, you can move the responsiveness one step upstream into the HTML being rendered.
This would leave you with unique filenames for each set of resource sizes that could be dynamically generated and cached as needed.
The issue I see with this approach is that the cookie won't be set until the HTML has been delivered to the client and the cookie JavaScript code has executed, but by then it's too late to dynamically set the image path. The second request to the page would contain the cookie, allowing your method to work.
Keith, you can easily take care of that problem without having to wait for a second page load. Even in your example you are using PHP to set the cookie, so you obviously have the data you need at that point. This is presumably in some library file sourced early on in the execution of site code. All you need is, when you set the cookie, to ALSO set those values in a temporary global variable. In your PHP function that checks for a cookie and outputs a path if it finds data, check first for a cookie, then if there's none, check for settings in your global variable.
Caleb, he's not using PHP to set the cookie, only to read it. He's using JavaScript to set it. If PHP had the screen width then you wouldn't need to set a cookie in the first place, but I'm pretty sure it doesn't. As far as I know the browser doesn't send that in the HTTP request, so the only way you can get it is on the client side.
I still prefer your idea about having PHP outputting the real path, though. That avoids the faux URL and any caching issues. The fact that it doesn't kick in until the 2nd HTTP request seems like almost a non-issue. As long as the first request after the document itself isn't a large image then you're fine. 99% of the time the CSS file in the head will be loaded before any images in the body.
Er, nevermind. All the markup would already be generated before the cookie is set, so the effects wouldn't be realized until the second page load.
I wonder if calling flush() after </head> would get the cookie set before the body markup is generated? Probably not, but it might be worth a shot.
Either way, I think waiting until the 2nd page request to get the smaller images is an acceptable compromise.
The comment system appears to have eaten my code instead of escaping it. Here is the function I imagined in php:
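The function itself appears to have been eaten as well, so here is a hedged reconstruction of what it might look like, inferred from the usage shown in the comment; the folder names, breakpoints and global variable name are guesses:

```php
<?php
// Hypothetical reconstruction of Caleb's scaledImagePath(): returns a
// path prefix for the current device, reading the cookie first and
// falling back to a global set earlier in the same request when the
// cookie was only just created server-side.
function scaledImagePath() {
    global $device_dimensions;
    $dims = $_COOKIE['device_dimensions'] ?? $device_dimensions ?? null;
    if (!$dims) {
        return '/images';            // no data yet: high-res default
    }
    list($width) = explode('x', $dims);
    if ((int)$width <= 480)  return '/images/low';
    if ((int)$width <= 1024) return '/images/med';
    return '/images';
}
```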
Then in the <img> tag, you would add a call to it inside the quotes before the initial slash:
src="<?php echo scaledImagePath(); ?>/file.jpg"
I like Caleb's idea because it removes the need to manually create images at different sizes and instead transfers that job to the server which can do it dynamically, which is definitely a good thing when you consider images uploaded to a CMS by an editor without technical capabilities to edit images first.
I've been using Sencha.io Src with much success. I think a combination of this technique with Sencha.io Src would be really impressive.
What if you leave cookies aside and instead dynamically insert a base tag into your HTML and combine it on the server side with URL rewriting or branching? E.g.:

<base href="http://www.domain.com/low" /> + URL rewriting

or

<base href="http://low.domain.com" /> + hostname branching

The advantages: a) it should work well with CDNs and far-future Expires headers, b) request headers for static resources remain cookie-free.
And you can still have your fallback for non-JS-enabled user agents. Seems like the best solution to me.
Regards
Schepp
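A sketch of the hostname-branching variant; the domain names follow the examples in the comment above, and the breakpoint is an assumption:

```javascript
// Choose a base href for the current screen width, then point the
// document's <base> element at it so relative image URLs resolve there.
function baseHrefFor(width) {
  return width <= 480 ? 'http://low.domain.com/' : 'http://www.domain.com/';
}

// document.write('<base href="' + baseHrefFor(screen.width) + '">');
```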
This is a very clever hack, I think it would solve the problem. Here's a proof of your concept: jsbin.com/…/edit
The only caveat with this one is that all responsive images must not indicate a host in their URL, but every other URL must include the host.
Nice!
You can avoid the need for full URLs by removing the base element again before </body> or on domready / onload, see: jsbin.com/…/10. Even though your test will then no longer pass, the right image still gets loaded (exception: Firefox 3.6). And it works for me in Safari on Windows, too.
Correction on mine: it does work in Safari, and all other browsers I can test (Firefox, Safari, Chrome, Opera): jsbin.com/…/edit
And I don't know if I like removing the element onload; it seems like this would introduce race conditions, and you'd have to have all of the JS and CSS assets still include the host name in the URL as they would load before onload, and if someone tried to click a relative link before onload, it would 404. So I don't think it's an enhancement in the end, although it is clever.
I agree, that's clever. I think the <base> element would have to be in the document for non-JS environments so the high-res images will still download. The script could then dynamically change the href attribute.

I've got another improvement which avoids having to use multiple domains, and thus having to include the host name in non-image links! You can dynamically set/override the base.href to a path with JS: jsbin.com/…/edit

I had tried this before with a document.write() that wrote a base element before the static base element, and HTML5 says that only the first one should be honored, but unfortunately Safari and Opera don't respect this (though Firefox and Chrome do): jsbin.com/…/edit However, changing the base element dynamically seems to do the trick.

OK, maybe my last iteration on your idea: jsbin.com/…/edit This one dynamically changes the base.href not only when first loading the page but also on resize. This probably doesn't gain us a whole lot, but it is cool nonetheless.

Just realized another caveat: it won't allow for responsive background images, as background images with relative URLs are relative to the containing stylesheet, not the host page. So in this case, you'd have to serve a different stylesheet for each resolution.
We've got media queries for the responsive background images ;-)
I was thinking of more or less the same solution, but a couple of major things bother me about it:

* Modifying the base element using script is frowned upon (blogs.msdn.com/…/optimal-html-head-ordering-to-avoid-parser-restarts-redownloads-and-improve-performance.aspx). It probably messes up the preload parser at the very least.

* document.write is bad news in general, even though it might not be that bad in this context.
You can see my proposal (i.e. not a technique you can use today) on blog.yoav.ws/…/My-take-on-adaptive-images . It is based on the same idea (i.e. modifying the URL with a base), but is more flexible than a general base tag.
PS: You'd maybe also need a <link rel="canonical" />, or you need to remove the base tag after page load.
Different peeps were pointing me to Scott Jehl's library which already uses the base-tag: github.com/…/Responsive-Images
Nice! Just two small caveats I see there: a) it uses cookies b) it disables caching in proxies like Squid because of its query parameter usage
I see that src.sencha.io (aka tinySrc) has been alluded to - which obviously I think is a flexible & powerful alternative ;-)
But you could also take a look at my modernizr-server project which uses client property gathering in a similar way but for a far broader range of capabilities.
Suffice to say I think we are all on to something here. Thicker clients, thinner servers, and perhaps the cloud to pick up what can't be done by either.
James, I'm aware of your modernizr-server project. I see you get round the issue of setting the cookie by serving a simple page containing the JS code which then reloads itself, resulting in the server returning the original markup.
I was hoping to avoid the requirement for a page reload - especially considering the caching issues Weston brought up.
A very nice article! It's a very simple idea, but very effective.
My only concern would be the performance of PHP for reading files. A straight HTTP request without hitting a server-side scripting language would provide optimal performance. Although I haven't looked into this enough, I'm pretty sure something could be done within an nginx location{} or an Apache .htaccess file to read the value of the cookie and rewrite to the appropriate file. This could also allow for cleaner URLs without mentioning script names.
Again, a brilliant idea. I'm just suggesting that PHP might not be the best way to output files to the client as far as performance is concerned. Cheers!
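An untested sketch of what the .htaccess route suggested above might look like; it assumes Apache 2.4's integer comparison flags, and the breakpoint and ‘-low’ suffix are illustrative:

```apache
RewriteEngine On
# Capture the width from the device_dimensions cookie...
RewriteCond %{HTTP_COOKIE} device_dimensions=(\d+)x
# ...and serve the low-res variant to narrow screens, if it exists.
RewriteCond %1 -lt 481
RewriteCond %{DOCUMENT_ROOT}/$1-low.png -f
RewriteRule ^(.+)\.png$ $1-low.png [L]
```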
Another way to mitigate the performance issue when serving image data through a script is to use the script to respond with a redirect to the actual static image resource. This way clients and proxies can cache the resource and it also works with CDNs. The tradeoff is the additional HTTP request of course.
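That redirect flow could look something like this sketch; the variant-picking rule and folder layout are illustrative assumptions:

```php
<?php
// Instead of streaming image bytes, redirect to the concrete static
// file so browsers, proxies and CDNs cache it under its own URL.
function variant_url($file, $device_width) {
    $dot  = strrpos($file, '.');
    $base = substr($file, 0, $dot);
    $ext  = substr($file, $dot + 1);
    if ($device_width > 0 && $device_width <= 480) {
        return "/images/$base-low.$ext";
    }
    return "/images/$file";
}

// In the script, instead of readfile():
// header('Location: ' . variant_url(basename($_SERVER['QUERY_STRING']), $width), true, 302);
```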
I'd like to point out two effects I noticed:
When testing your cookie approach I noticed that, for example, IE9 doesn't send the cookie when requesting the image. I'm not sure why that happens, as script tags are supposed to block further evaluation of the document. Can you reproduce this?
Firefox (at least version 6 under Linux) shows another undesired behaviour. It fetches the image twice (first the default one and then it realizes that something has changed and re-fetches it). Firefox does so for the cookie approach as well as the base tag approach.