CakePHP CDN/CloudFront/asset host helper

This asset host helper (in my case made specifically to work with Amazon’s CloudFront service) can be used to improve page load speed by serving files from dedicated asset servers.
For now the supported assets are: images, JavaScript files, and style sheets.

The whole idea was inspired by the RoR asset helper:
(Please give this link a read to fully understand the purpose behind this whole thing.)

If you are lazy, like me, I’ll give you a few bullet points to consider:

  • By default, all assets are loaded from the server’s local filesystem
  • Using this helper, you can direct CakePHP to link to assets from one or more dedicated asset servers
  • This helps to improve page load speeds and frees your server from dealing with static assets
  • Browsers typically open at most two simultaneous connections to a single host, which means your assets often have to wait for other assets to finish downloading
  • Set up more than one host to avoid the issue above
  • To do this, you can either set up actual hosts, or use wildcard DNS to CNAME the wildcard to a single asset host. You can read more about setting up DNS CNAME records from your ISP
  • It is suggested to use host names like:,,, etc. (Depending on how heavy your traffic is, 4 hosts should be OK… add more if needed)
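The multiple-host idea above can be sketched roughly like this (the example.com host names and the random selection are illustrative placeholders, not the helper’s actual code):

```php
<?php
// Placeholder CNAMEs -- substitute your own asset hosts.
$hosts = array('assets0.example.com', 'assets1.example.com',
               'assets2.example.com', 'assets3.example.com');
$path = '/img/logo.png';

// Picking any of the hosts spreads downloads across more
// simultaneous browser connections.
$url = 'http://' . $hosts[array_rand($hosts)] . $path;
echo $url;
```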

Now onto some features…

Obviously, be sure to download and save the helper into app/views/helpers/cf.php
(And include it in your AppController’s helpers array.)

It works exactly the same (including all options) as the core Html and Javascript helpers.
When in production mode the files are loaded from the dedicated asset hosts; when in development mode they are loaded from the local file system.
Keep the paths on the remote and local file systems the same. It will make your life so much easier and won’t break the helper ;)
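The production/development switch can be sketched like this (function and parameter names are my own, assuming CakePHP’s convention that debug = 0 means production):

```php
<?php
// Hypothetical sketch, not the helper's actual code.
function assetUrl($path, $debug, $host) {
    if ($debug > 0) {
        // development: plain local path, served from the filesystem
        return $path;
    }
    // production: the same path, prefixed with the dedicated asset host
    return 'http://' . $host . $path;
}
```

Because the path component is identical in both branches, keeping the remote and local paths in sync is exactly what makes the switch transparent.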


//include image from sub directory
<?php echo $cf->image('icons/test.png'); ?>

//include image with some options
<?php echo $cf->image('test.png', array('id' => 'some-image')); ?>

//include multiple JS files at once
<?php echo $cf->jsLink(array('file_one.js', 'file_two.js')); ?>

//include single JS file
<?php echo $cf->jsLink('single_file.js'); ?>

//include single CSS file
<?php echo $cf->css('test.css'); ?>

//include multiple CSS files
<?php echo $cf->css(array('test.css', 'test2.css')); ?>

//include files from the view with the false param (i.e. not inline, but in the head of the page)
//CSS and JavaScript
<?php $cf->jsLink('not_inline.js', FALSE); ?>
<?php $cf->css('not_inline.css', NULL, NULL, FALSE); ?>

What about settings?

You will need to provide your own dedicated asset host(s). See the helper comments or the RoR API link above for details on how it should be set.
It is also highly recommended to provide a dedicated SSL asset host.
Be sure to force time stamps in core.php to ensure proper caching.
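For reference, forcing time stamps is a one-line setting in app/config/core.php (‘force’ appends the timestamp even when debug is 0):

```php
// app/config/core.php
Configure::write('Asset.timestamp', 'force');
```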

Please do not hesitate to ask any questions, your input and comments are greatly appreciated!

The code is relatively well documented; the helper is here:

P.S. Take a look here for more info about CloudFront and how it can help improve your app:

  • Eggo

    Your timing is impeccable! This is great, thank you yet again… CloudFront is a great service.

  • @Eggo

    Cool. Good to hear, do let me know of any feedback… as it is quite experimental at this point. Thanks ;)

  • Eggo

    I’m also working on an upload component which will incorporate uploading files to S3 buckets, too… Will post when it’s in a better spot; would love to have your feedback.

  • @Eggo

    Sounds like a plan…

    By the way, check out this class:

    You can use that as a vendor, which we do in one of our apps.
    And then write a wrapper component for it, which I can paste somewhere (as we have it), if you are interested.

    • Eggo

      That would be great and very helpful; I found that class via another site, but interested in seeing the wrapper.

      As an aside, I really can’t emphasize enough how helpful this blog is — thanks for your effort. It has really benefited me many times in the past, and the Cake community as a whole.

  • @Eggo

    Thanks for the kind words. I’m glad to give something back, considering that cake has saved my life :)

    Here’s the component (very simple):

    The s3 constants are defined in bootstrap.php
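    For example (the constant names and values here are hypothetical; match whatever the component actually reads):

```php
<?php
// app/config/bootstrap.php -- hypothetical constant names
define('S3_KEY', 'your-access-key-id');
define('S3_SECRET', 'your-secret-access-key');
define('S3_BUCKET', 'your-bucket-name');
```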

    While this works… it needs a little love to make it more “cakeable”.
    Well, hope it helps.

  • Henning

    Thanks for this great helper – it’s easy to use and works like a charm :)

    For a lightbox feature on my page I needed the plain image URL, pointing to the asset servers. So I made up $cf->imageurl, which now gives me simple links/URLs to my images:

    public function imgageurl($assets, $options = array()) {
        return $this->Html->url($this->setAssetPath($assets), $options);
    }

    I thought I’d post it here, so maybe you could include it in future releases. It’s useful for lightboxes, galleries, etc., where you just link to images.

  • Henning

    …in fact the new function should read imageurl – not imgageurl.. sorry!!

  • David

    Thanks for sharing your code. After testing your helper I found that in my particular case randomly rotating the servers does not make sense. (Actually, in my case one server for the static content is more than enough.)

    During my tests one image was loading from server1, but on page refresh it was loaded from server2, server3… So I think it makes sense to select a server based on the asset name (something like a hash: take the first char code and % it with the number of servers), so each file will stick to one server and be cached optimally on the clients.

    As it is now, you might benefit from concurrent downloads by adding more CNAMEs pointing to the cloud server, but the clients could actually be downloading the same content again and again from different URLs.
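    David’s suggestion might be sketched like this (the function name and host names are illustrative; abs() guards against negative crc32 results on 32-bit builds):

```php
<?php
// Hypothetical sketch: derive the host from a hash of the asset path, so
// a given file always maps to the same CNAME and caches once per client.
function pickHost($path, $hosts) {
    return $hosts[abs(crc32($path)) % count($hosts)];
}

$hosts = array('assets0.example.com', 'assets1.example.com',
               'assets2.example.com', 'assets3.example.com');
```

    Unlike a random pick, repeated page loads keep serving each asset from the same host.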


  • Doug

    Thanks very much for your helper.

    Just in case this is helpful for anyone using S3 as the CF origin and wanting to leverage asset compression for user agents that support it, I’ve:

    1) Taken Benjamin-Ds version
    2) Added a ‘remoteCompressedFiles’ option

    The diff of B-Ds version and mine is here (haven’t yet got up to speed with git/github):
    (there are some other minor differences to do with HtmlHelper method signatures)

    The reason for doing this is that S3 doesn’t compress on the fly for you, so instead you need to compress your css/js assets yourself and then upload them to S3. I insert .gz into the file name (see the shell loop below).

    This means that for it to work you need to make sure you have a gz version available for every css/js file on your site. And of course you’ll need to update these when they change in development.

    To create the gz files in a batch command in GNU/Linux (e.g. Ubuntu) I used:
    cd path_to_my_css_js_files
    for a in `ls -1`; do
      gzip -c "$a" > "${a%.*}.gz.${a##*.}"
    done


  • teknoid


    Thanks so much for sharing. Hopefully readers will find this improvement very helpful.

  • Doug

    You’re very welcome.
    Just an update: Benjamin-Ds has merged it with his code, so there’s no need to look at my diff.

    Thanks teknoid and Benjamin-Ds :)

  • Graham

    Thanks for publishing this. I know it’s a “relatively” old post, but it certainly helped me.

    I just wanted to check something with you.

    When using Asset.timestamp, a timestamp is appended to the asset (example.css?12345678) which changes when the file changes. That will invalidate the cache in the browser and cause it to fetch the example.css file again.

    This should work fine when serving content from an un-cached location (i.e. static asset server) but what would happen when using it in a CDN scenario where the asset is cached on several edge nodes?

    When using a CDN, my understanding is that there are three ways to purge the cache for a file stored on an edge node:

    1. TTL expires and edge node requests from origin.
    2. Rename the file.
    3. Purge cache which causes edge nodes to request all files from origin.

    How would you address this using your helper (if at all)? All it does is tell the browser that it needs to fetch a new version, but as the filename on the CDN is the same, won’t it just fetch the original version?

  • teknoid


    Your point 2 is exactly what the timestamp does… it effectively changes the file name, thus invalidating the cache. It works perfectly well on Amazon’s CloudFront; it’s been in production for years now.

    • Graham


      Thanks for the reply. I don’t think I explained it correctly…

      I tested the following using Rackspace Cloud Files (Mosso):

      1. Upload example.jpg to storage account.
      2. Publish example to CDN.
      3. Browse to http://static.cdn.url/example.jpg?12345678
      4. Browser shows example.jpg
      5. Delete example.jpg from storage account.
      6. Browse to http://static.cdn.url/example.jpg?12345678
      7. Browser shows example.jpg so it’s cached on the CDN.
      8. Upload entirely different image but call it example.jpg to storage account.
      9. Browse to http://static.cdn.url/example.jpg?87654321
      10. Browser shows original example.jpg from the CDN cache.

      In the scenario above, although my browser was expecting a different file because the filename had “effectively” changed, the CDN still served the original cached version because the TTL had not expired and the filename was the same.

      The only way I will see the new image is if the TTL expires or if I manually purge the CDN cache.

      I hope that explains my question a bit better. Also, apologies if I am misunderstanding something.

  • teknoid


    Thanks for the clarification. To me that seems like problematic behavior on the CDN’s part. For one… even without a timestamp, uploading a different file with the same name should invalidate the cache via some bit-by-bit comparison.

    Secondly, having the timestamp should signal that you are looking for a new asset; why serve the old one?

    That being said, if the only way to purge the cache is to trigger it through some other method… then you’d have to look for an additional solution to the problem.
