[SOLVED] Improving Javascript Load Times – Concatenation vs Many + Cache

Issue

I’m wondering which of the following is going to result in better performance for a page which loads a large amount of javascript (jQuery + jQuery UI + various other javascript files). I have gone through most of the YSlow and Google Page Speed stuff, but am left wondering about a particular detail.

A key thing for me here is that the site I’m working on is not on the public net; it’s a business-to-business platform where almost all users are repeat visitors (and therefore arrive with primed caches, something YSlow assumes will not be the case for a large number of visitors).

First up, the standard approach recommended by tools such as YSlow is to concatenate the JavaScript, compress it, and serve it up in a single file loaded at the end of your page. This approach sounds reasonably effective, but I think a key part of the reasoning here is to improve performance for users without cached data.

The system I currently have is something like this:

  • All javascript files are compressed and loaded at the bottom of the page
  • All javascript files have far-future cache expiration dates, so will remain (for most users) in the cache for a long time
  • Pages only load the javascript files that they require, rather than loading one monolithic file, most of which will not be required

Now, my understanding is that, if the cache expiration date for a javascript file has not been reached, then the cached version is used immediately; no HTTP request is sent to the server at all. If this is correct, I would assume that having multiple <script> tags is not causing any performance penalty, as I’m still not incurring any additional requests on most pages (recalling from above that almost all users have populated caches).
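To make that concrete, a far-future response for one of these files would carry headers along these lines (the values here are illustrative only, not from my actual setup):

Content-Type: application/javascript
Cache-Control: public, max-age=2629744
Expires: Thu, 01 Dec 2011 08:00:00 GMT

As long as the cached copy is still fresh according to those headers, the browser serves it straight from disk or memory without touching the network.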

In addition to this, not loading the unneeded JS means that the browser doesn’t have to interpret or execute additional code it isn’t going to need; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine.

Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, it would only need to be fetched once, so this is not so much of a benefit).

I’m also looking at using LABjs to allow for parallel loading of the JS when it’s not cached.
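For reference, a LABjs chain for a page like this might look something like the sketch below (the file names and the init function are hypothetical):

$LAB
    .script("/js/jquery.min.js")
    .script("/js/jquery-ui.min.js")
    .wait()                          // the libraries above must execute first
    .script("/js/orders-page.js")
    .wait(function () {
        initOrdersPage();            // hypothetical page initializer
    });

The files download in parallel, but .wait() preserves execution order where it matters.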

Specific questions

  • If there are many <script> tags, but all files are being loaded from the local cache, and less javascript is being loaded overall, is this going to be faster than one <script> tag which is also loaded from the cache, but contains all the javascript needed anywhere on the site, rather than an appropriate subset?
  • Are there any other reasons to prefer one over the other?
  • Does similar thinking apply to CSS? (I’m currently using a much more monolithic approach to CSS)

Solution

2021 Edit:
As this answer has had some recent upvotes, do notice that with HTTP/2 things have changed a lot. You no longer pay the per-request hit, as requests are multiplexed over a single TCP connection, and you also get server push. While most of this answer is still valid, take it as a picture of how things were previously done.


I would say that the most important thing to focus on is the perception of speed.

First thing to take into consideration: there is no win-win formula out there, only a threshold at which a javascript file grows to such a size that it could (and should) be split.

GWT uses this approach and calls it DFN (Dead-for-Now) code. There isn’t much magic here. You just have to manually define when you’ll need a new piece of code and, should the user need it, load that file.

How, when, and where will you need it?
Benchmark. Chrome has a great benchmarking tool. Use it extensively. See if having just a small javascript file will greatly improve the loading of that particular page. If it does, by all means start DFNing your code.
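If you do, the underlying mechanism is nothing more than dynamic script injection. A minimal sketch, assuming a hypothetical reports.js module loaded on first use (the old-IE readyState branch matters for those IE6 users):

function loadScript(src, callback) {
    var done = false;
    var script = document.createElement("script");
    script.src = src;
    // Standards browsers fire onload; old IE fires onreadystatechange.
    script.onload = script.onreadystatechange = function () {
        if (!done && (!this.readyState ||
                      this.readyState === "loaded" ||
                      this.readyState === "complete")) {
            done = true;
            callback();
        }
    };
    document.getElementsByTagName("head")[0].appendChild(script);
}

// Hypothetical: fetch the reporting module only when first requested.
document.getElementById("show-report").onclick = function () {
    loadScript("/js/reports.js", function () {
        renderReport(); // defined in the newly loaded file
    });
};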

Apart from that, it’s all about perception.

Don’t let the content jump!
If your page has images, set their widths and heights up front. As the page will load with the elements positioned right where they are supposed to be, there will be no content shifting and adjusting, and the user’s perception of speed will increase.

Defer javascript!
All major libraries can wait for page load before executing javascript. Use it. jQuery’s goes like this: $(document).ready(function(){ ... });. It doesn’t delay parsing of the code, but makes the parsed code fire exactly when it should: after the DOM is ready, before images finish loading.
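A trivial example (the enhancement inside the handler is hypothetical):

$(document).ready(function () {
    // The DOM is fully parsed at this point, even though images
    // may still be downloading, so it's safe to wire things up.
    $("#nav").addClass("js-enabled");
});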

Important things to take into consideration:

  1. Make sure js files are cached by the client (everything else pales in comparison to this one)
  2. Compile your code with the Closure Compiler (example below, after the Apache snippets)
  3. Deflate your code; it’s faster than gzipping it (on both ends)

Apache example of caching:

# Set up caching on media files for 1 month
# (requires mod_expires and mod_headers to be loaded)
ExpiresActive On
<FilesMatch "\.(gif|jpg|jpeg|png|swf|js|css)$">
    # A2629744 = expire 2,629,744 seconds (~1 month) after access
    ExpiresDefault A2629744
    Header append Cache-Control "public, proxy-revalidate"
    # Tell caches the response varies by the Accept-Encoding request header
    Header append Vary "Accept-Encoding"
</FilesMatch>

Apache example of deflating:

# Compress text-based files for faster transfer
LoadModule deflate_module modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/html text/plain text/xml font/opentype font/truetype font/woff
<FilesMatch "\.(js|css|html|htm|php|xml)$">
    SetOutputFilter DEFLATE
</FilesMatch>
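For item 2 in the list above, a typical Closure Compiler invocation looks something like this (the file names are placeholders; SIMPLE_OPTIMIZATIONS is the safe default, while ADVANCED_OPTIMIZATIONS compresses harder but requires code written with it in mind):

java -jar compiler.jar \
    --compilation_level SIMPLE_OPTIMIZATIONS \
    --js jquery.plugins.js \
    --js app.js \
    --js_output_file app.min.js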

And last, and probably least, serve your Javascript from a cookie-less domain.

And to keep your question in focus, remember that when you have DFN code you’ll have several smaller javascript files which, precisely because they are split, won’t reach the level of compression Closure can give you with a single file. The sum of the parts isn’t equal to the whole in this scenario.

Hope it helps!

Answered By – Frankie

Answer Checked By – Robin (BugsFixing Admin)
