10 Rules to Make Your Website Load Lightning Fast [Ultimate Guide]

By Aditya Samanta

Success is measured in performance. Building high-performance web applications is challenging. Web applications are nothing like traditional desktop applications: they run in the browser and must be downloaded every time they are used.

The next generation of web applications is pushing modern browsers to their limits with increasing amounts of rich multimedia content and heavy use of JavaScript and AJAX. Today we reveal the rules you need to know to deliver high-performance, next-generation web applications.

1. High-Performance Ajax

AJAX applications have a different architecture. They are built around a series of exchanges between the browser and the server in an expressive shared language. The shared language is most often JSON.

The work is divided between two systems: the browser and the server. Sharing the workload between the two is the key to performance.

The browser sends a packet of data to the server which then sends another packet in exchange. Both packets are in JSON format. For best performance results, the packets should be as small as possible. Smaller packets mean less work for the server, less work for the browser and less time needed to transfer the data.
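As a hypothetical sketch (the field names and values are made up for illustration), trimming a JSON packet can be as simple as shortening verbose keys:

```javascript
// Hypothetical packets: the field names and values are made up for illustration.
const verbose = JSON.stringify({
  customerFirstName: 'Ada',
  customerLastName: 'Lovelace',
  customerEmailAddress: 'ada@example.com'
});

// The same data with shorter keys means fewer bytes to generate,
// transfer and parse on every exchange.
const compact = JSON.stringify({
  fn: 'Ada',
  ln: 'Lovelace',
  em: 'ada@example.com'
});

console.log(verbose.length, compact.length);
```

The trade-off is readability, so this matters most on high-traffic endpoints where the same packet shape is exchanged thousands of times.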

2. Responsive User Experience in Web Application

Load times don’t give the complete performance measure of web applications. Along with fast load times, the application must respond to user actions as fast as possible.  The page will feel sluggish or may completely freeze if the JavaScript takes too long to execute.

Browsers handle user events one at a time, on a first-come, first-served basis. Responding to user events means processing JavaScript, and any time the browser is processing JavaScript, it can’t respond to other user events. So, for responsive web applications, JavaScript must execute as fast as possible.


The best way to test how fast the code executes is to use tools called profilers. Google Chrome has one built in. Such tools are used to find the amount of time needed to execute functions.

Profilers help us find the bottleneck in the code, i.e. the slowest-executing piece of code. Always target the slowest piece of code first, as that is where the most can be gained.

Any piece of code that is already executing fast is not worth optimizing as there is nothing noticeable to be gained.

Web Workers

Some tasks like complex graphics rendering are time-consuming and simply need a long time to complete. There is nothing that a developer can do to make them fast. Such tasks are best performed using web workers.

Web workers are a means of executing long-running scripts in a background thread. The background thread does not interfere with the user interface, which keeps the user experience responsive.

A worker is an object that runs a named JavaScript file. Once a worker thread is spawned, communication between the calling script and the spawned thread takes place using the postMessage API call and the onmessage event handler.

Although web workers don’t block the user interface, they still need hardware resources and may slow down the system.

3. Non-Blocking JavaScript

Script tags have serious performance implications. As soon as a script tag is encountered, the browser hands control over to the JavaScript run-time. DOM construction stops and continues only when the script has completely executed.

If the script is external, the browser also has to wait for the script to be fetched from the cache or a remote server. Script tags block parallel downloads and stop the browser from doing anything else.

The reason browsers block parallel script downloads and execution is that scripts may change the DOM, which would affect everything that follows.

Though the browser does not know what the script is planning to do, the developers do! The developers can tell the browser that the script does not need to be executed at the exact moment it is referenced. Doing so will let the browser carry on DOM construction and parallel downloads with pleasure.

All modern browsers support async and defer attributes for the script tag which offer major performance benefits.

  • async – Enables parallel downloads for script tags. Once the script is downloaded, DOM construction is paused while the script executes, then continues when execution is complete. Please note that scripts using the async attribute may not execute in the order that they appear in the document.
  • defer – Also enables parallel downloads for script tags. The difference from async is that the script is executed only after the DOM construction is complete. Scripts with the defer attribute are guaranteed to execute in the order that they appear in the document.
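In markup, the two attributes look like this (the file names are placeholders):

```html
<!-- Independent scripts (e.g. analytics) that don't rely on each other:
     downloaded in parallel, executed as soon as each arrives -->
<script async src="analytics.js"></script>

<!-- Application scripts that rely on the full DOM and on each other:
     downloaded in parallel, executed in document order after parsing -->
<script defer src="vendor.js"></script>
<script defer src="app.js"></script>
```

A reasonable rule of thumb: use defer by default, and async only for scripts with no dependencies on the page or on other scripts.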

4. Script and Styles Position

Script and style position is very important for performance. Both external and inline scripts block DOM construction and parallel downloads. This delays the page rendering.

Placing all scripts at the top stops all rendering. It is best to place all scripts near the end of the document just before the closing body tag.

The main reason to place scripts at the bottom is to enable progressive rendering. The browser can render the content above the script tags sooner. If that content happens to be the first thing the user sees (above the fold content) then it creates an illusion that the page is loading faster than it actually is.

Another reason to place script tags at the end is to benefit from parallel downloads. Placing scripts at the end of the document considerably speeds up the time needed to render the web page as the browser can carry on parallel downloads without any pause.

Just like scripts should be placed at the bottom of the document, all external stylesheets and style tags should be placed in the head section. That’s because browsers block web page rendering until all external stylesheets have been downloaded.

Placing all external stylesheets and style tags in the head section makes sure that all stylesheets are downloaded and parsed first. This means the browser can perform progressive rendering of the page content.
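A typical document skeleton following this rule looks like this (the file names are placeholders):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Stylesheets first, so the browser can render progressively -->
  <link rel="stylesheet" href="styles.css">
</head>
<body>
  <p>Above-the-fold content renders without waiting for scripts.</p>

  <!-- Scripts last, just before the closing body tag -->
  <script src="app.js"></script>
</body>
</html>
```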

5. Efficient JavaScript

Modern web browsers are highly efficient at processing JavaScript. But even so, modern web applications are pushing them to the limit. Complex web applications execute thousands of lines of JavaScript code each time the user interacts. Writing JavaScript as efficiently as possible will make sure that the user experience is not compromised.

JavaScript Scopes and Closures

Scope refers to the current context of the code. JavaScript scopes can be local or global. A global scope is created on page load, and each function creates a new local scope as it executes.

Understanding how scopes work in JavaScript is very important. When the browser resolves properties and variables, each level of the scope chain needs to be checked. If a variable is available in the local scope, access to it will be fastest.

var foo = 'foo';

function functionWithClosure() {
    var bar = 'bar';
    return function() {
        var baz = 'baz';
        return {
            foo: foo,
            bar: bar,
            baz: baz
        };
    };
}

var f = functionWithClosure();

In the above example, when function f is invoked, accessing foo is slower than accessing bar which in turn is slower than accessing baz.

A closure is a special object which contains a function and an environment containing all local variables that were present when the closure was created.

Avoid creating closure functions. An inner function without a closure is way faster than creating a closure. The fastest option, however, is a static function.

Here is an example from Google Developers:

function setupAlertTimeout() {
  var msg = 'Message to alert';
  window.setTimeout(function() { alert(msg); }, 100);
}
is slower than:

function setupAlertTimeout() {
  window.setTimeout(function() {
    var msg = 'Message to alert';
    alert(msg);
  }, 100);
}

which is slower than:

function alertMsg() {
  var msg = 'Message to alert';
  alert(msg);
}

function setupAlertTimeout() {
  window.setTimeout(alertMsg, 100);
}

Avoid With Statement

The with statement modifies the scope chain. This makes accessing identifiers which are not a part of the specified object slower inside a with block.

The with statement is also a known source of confusing bugs and compatibility issues (it is disallowed entirely in strict mode). It is best to leave with statements out entirely.

Optimizing Loops

Loops are the most common source of performance issues. Optimizing loops can significantly reduce execution time. The goal should always be to minimize the amount of work done in the loop. The less amount of work the loop has to do, the faster it will execute.

Look at the simple for loop below

for (var i = 0; i < arr.length; i++) {

Each time the above loop iterates, it reads the length of the array. Simply caching the length in a variable reduces the amount of work that the loop needs to do.

for (var i = 0, len = arr.length; i < len; i++) {

The ++ operator is a tricky one: the postfix form conceptually copies the old value into a temporary before incrementing. In some engines, replacing the ++ operator with the += operator can speed up the loop further.

for (var i = 0, len = arr.length; i < len; i+=1) {

If the order of iteration does not matter, then we can perform a reverse loop, removing the comparison against the length entirely. The loop will automatically terminate when the value of i reaches 0 (zero), as 0 in JavaScript is considered false.

for (var i = arr.length; i--;) {

The final step is to replace the -- operator with the -= operator. The following loop has the fastest execution time of the variants shown. Be careful, though: with i -= 1 as the condition, the loop stops as soon as i reaches 0, so index 0 is never visited. Use this form only when skipping the first element is acceptable; otherwise stick with the i-- version above.

for (var i = arr.length; i-=1;) {

Here is a complete benchmark test you can run in the browser console. The gap is most visible in Firefox, but even on Chrome with the V8 engine (a beast of an engine) you will still notice the performance gain.

'use strict';

var arr, time, i, len;
arr = [];

for (i = 100000; i -= 1;) {
    arr.push(i);
}

time = Number(new Date());
for (i = 0; i < arr.length; i++) {}
console.log( Number(new Date()) - time );

time = Number(new Date());
for (i = 0, len = arr.length; i < len; i++) {}
console.log( Number(new Date()) - time );

time = Number(new Date());
for (i = 0, len = arr.length; i < len; i += 1) {}
console.log( Number(new Date()) - time );

time = Number(new Date());
for (i = arr.length; i--;) {}
console.log( Number(new Date()) - time );

time = Number(new Date());
for (i = arr.length; i -= 1;) {}
console.log( Number(new Date()) - time );

Limit DOM interaction

DOM manipulation is a very expensive JavaScript operation. Reducing DOM manipulation will drastically reduce JavaScript run time.

Running DOM manipulation inside loops is probably the worst thing you can do. Always try to avoid DOM manipulation inside loops; build the changes up first and apply them to the document once, after the loop.

Cache, cache, cache…

Caching variables and selectors is a very simple way to improve the performance of your JavaScript code. Personally, I cache anything that I use more than once. Caching is also highly effective when using a library like jQuery to select DOM elements.

There are other caching techniques too. For example, we can cache a variable further up the chain locally. When used inside a loop, this technique is extremely useful. Run the following benchmark test in Firefox to see the performance gain.

'use strict';

var foo = 'foo-bar-baz',
    time;

function bar() {
    var result = '';
    for (var i = 100000; i -= 1;) {
        result += foo;
    }
    return result;
}

function baz() {
    var local_foo = foo,
        result = '';
    for (var i = 100000; i -= 1;) {
        result += local_foo;
    }
    return result;
}

time = Number(new Date());
bar();
console.log(Number(new Date()) - time);

time = Number(new Date());
baz();
console.log(Number(new Date()) - time);

6. Enable GZIP

The response time can be significantly improved by reducing the response file size. This is where GZIP kicks in. GZIP is an extremely popular and highly efficient compression standard developed by the GNU project. On average, GZIP reduces the file size by about 70%.

All modern browsers come with GZIP support. When performing an HTTP request, the client tells the web server that it supports GZIP using the Accept-Encoding header:

Accept-Encoding: gzip, deflate

When the web server sees this header, it may compress the response using one of the methods listed. The server uses the Content-Encoding header to inform the client of the compression used.

Content-Encoding: gzip

GZIP works great on all text-based files like HTML, CSS and JavaScript. It is also worth compressing JSON and XML files. SVG files are XML-based, so you can GZIP SVG files too.

GZIP needs to be enabled on the server side. So you must have access to server configuration files to enable GZIP.

7. Image Optimization

Images are normally the most byte-heavy resources on any web page. This makes them an easy target for improving site performance.

Optimized images will give you the highest byte savings you can get. The lower the number of bytes that the browser has to download, the faster the site will load.

Removing and Replacing Images

A website on a diet will always be faster. The first thing you should decide is whether the image you are adding is actually needed or not. If you can remove the image entirely, then it is always the best strategy.

If removing the image is not an option, then how about replacing it?

  • You can use CSS effects like gradients, shadows and animations to create resolution independent assets which look sharp on all devices and at all zoom levels. The added benefit is that they consume only a fraction of the bytes needed by an image.
  • You can use web fonts to create beautiful typography while retaining the ability to select, search and resize text, instead of baking it into images.

Vector vs. Raster images

Vector images consist of points, lines and polygons. Raster images are represented by encoding values of individual pixels.

Vector images look sharp at all resolutions and zoom levels. They are best suited for simple shapes like logos, icons, text and so on.

Though vector images look beautiful on high-resolution displays, they are not suitable for complex graphics. Complex graphics are best stored in raster image formats like GIF, PNG and JPEG.

Raster images, on the other hand, do not scale up. If you zoom the image beyond its original size, it will look jagged and blurry.

Optimizing vector images

All modern browsers support SVG, an XML-based image file format. SVG can be embedded directly into the web page or loaded as an external asset.

SVG files contain lots of metadata, comments and other information which are not needed to render it. Such information can be safely removed using a tool like svgo.

As SVG is an XML based image format, file size can be further reduced using GZIP compression. Configure your server to enable GZIP support for SVG files.

Optimizing Raster Images

To understand how to optimize raster images, we first need to understand a few things about how raster images work.

How channels work

Raster images are a grid of pixels arranged in 2 dimensions. Each pixel can store RGBA values: R – RED, G – Green, B – Blue and A – Alpha (transparency).

Browsers allocate 8 bits per channel, which translates to 256 values ( 2^8 = 256 ). Since there are four channels – R, G, B and A – each pixel takes 4 bytes to represent ( 8 bits x 4 channels = 32 bits = 4 bytes ).

Since each channel can store 256 values, a 24-bit RGB image can store 16,777,216 (256 x 256 x 256) colors in total. That means each pixel takes 3 bytes.

By reducing the number of colors, we can directly reduce the number of bits needed to represent the channels. For example, only 8 bits are needed to represent a 256-color palette, so each pixel can be represented using 1 byte. That gives a savings of 3 bytes per pixel compared to a 32-bit RGBA pixel.

Because of the 4 bytes requirement of every pixel in RGBA channel, be careful when loading large images on small memory devices. They may run out of memory and freeze entirely.
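The arithmetic above can be sketched as a small helper (the resolutions are arbitrary examples):

```javascript
// Uncompressed (decoded) size of a raster image:
// width x height x bytes per pixel.
function decodedSize(width, height, bytesPerPixel) {
  return width * height * bytesPerPixel;
}

console.log(decodedSize(1920, 1080, 4)); // 32-bit RGBA: 8294400 bytes (~8 MB)
console.log(decodedSize(1920, 1080, 3)); // 24-bit RGB:  6220800 bytes
console.log(decodedSize(1920, 1080, 1)); // 256-color palette: 2073600 bytes
```

Note that this is the in-memory cost after decoding; formats like JPEG and PNG compress the bytes on the wire, but the browser still pays the full decoded size in RAM.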


Large images, when downloaded over the internet, are drawn one row at a time from top to bottom. Some image formats support interlacing, which improves the user experience: interlaced images are shown in successive passes, each improving upon the previous one, so the user sees a rough version of the image first while waiting for more detail.

Lossless vs lossy image compression

Raster image optimization can be performed in two ways:

  • Lossy – Image is compressed by discarding some of the pixel data.
  • Lossless – Image is compressed by encoding the pixel data more compactly, without discarding any of it.

The available algorithms depend on the image format being used. This is one of the main differences between image formats like GIF, JPEG and PNG: they differ in the selection of algorithms they use or omit when performing lossy or lossless compression.

Selecting the right raster image format

Apart from compression algorithms, different image formats also offer different features such as animation support and alpha channel support. Selecting the right image format comes down to the required visual looks and functional features.

Currently, there are three universally supported image formats – GIF, JPEG, PNG.

  • GIF – The only universally supported image format with animation support. The color palette is limited to 256 colors so they are not suitable for storing most images. PNG-8 offers better compression than GIF for images with a small color palette. GIF should only be used when animation support is needed.
  • PNG – The only universally supported image format with lossless compression and alpha channel support. If you have images with transparency or you want to preserve fine details in the image, then you need to save your images in the PNG format. Be careful when storing large images as the file size will be huge.
  • JPEG – If you are optimizing a color-rich photo, then use JPEG. You will need to play with the quality settings to find the best match between quality and file size.

Newer browsers offer support for WebP and JPEG XR, which provide better compression and more features. You should consider adding alternate versions encoded in these formats for browsers that support them.

Tools to optimize raster images

No tool or settings are suitable for all image formats and all tasks. Best results will depend on the image itself and the technical requirements. Here are the tools you can use

  • gifsicle – Create, edit and optimize GIF images.
  • jpegtran – JPEG optimizer.
  • optipng – PNG Optimizer with lossless compression.
  • pngquant – PNG Optimizer with lossy compression.

Scaled images

The uncompressed size of an image is simply the number of pixels in the image multiplied by the number of bytes needed to store each pixel. Reducing the number of pixels is the easiest way to decrease the file size of the image.

Serving images at the resolution they are displayed is a simple yet highly effective optimization technique. It reduces the number of bytes to be transferred and also relieves the CPU of the extra overhead needed to scale down a large image.

Don’t miss this optimization opportunity. Make sure that you are serving images as close as possible to the display size.

Optimizing Image Sprites

An image sprite is a number of images combined into a single image. Image sprites rely on CSS to display the needed part of the image.

The main goal of sprites is to reduce the number of HTTP requests. An HTTP request is extremely expensive. Any reduction in the number of HTTP requests will help improve the page load time.

The Google search page has long used image sprites to reduce the number of HTTP requests and speed up page loads.
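As a hypothetical sketch (the file name and icon positions are made up), CSS displays the needed part of a sprite by offsetting background-position:

```css
/* icons.png holds a 16x16 "home" icon at (0,0) and a 16x16 "search"
   icon at (-16,0). One HTTP request serves both icons. */
.icon {
  width: 16px;
  height: 16px;
  background-image: url('icons.png');
}
.icon-home   { background-position: 0 0; }
.icon-search { background-position: -16px 0; }
```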


8. Domain Sharding

Domain sharding is also known as hostname parallelization. It is the practice of splitting resources across multiple domains, which reduces page load time.

Why Domain Sharding Works

Web browsers open a limited number of connections for each domain. When the number of resources to be downloaded is more than the connection limit, then there is a backlog of resources waiting to be downloaded.

Browsers apply the connection limit per domain. That means each additional domain or sub-domain allows additional connections to be created, letting the browser download more resources in parallel.

Improve load time with domain sharding

Web browsers treat each domain as a different server even if they resolve to the same IP address. This allows domain sharding on a single server. As long as the server has enough resources to serve the concurrent requests, creating additional CNAME records is sufficient to enable parallel downloads using domain sharding.
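A hypothetical zone-file fragment (the hostnames are made up) shows how two shard subdomains can point at the same server via CNAME records:

```
; Both shards resolve to the same server, but browsers treat them as
; separate domains, each with its own connection limit.
static1.example.com.  IN  CNAME  www.example.com.
static2.example.com.  IN  CNAME  www.example.com.
```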

Negative impact of Domain Sharding

Too many concurrent connections may actually hurt performance, because each additional domain adds extra DNS lookup time.

HTTP/2 will make Domain Sharding obsolete

When HTTP/2 gains mainstream popularity, domain sharding will become obsolete. Later in this article, we will take a look at the HTTP/2 protocol.

9. Browser Repaint and Reflow

Repaint occurs when changes made to an element affect its visual appearance but not its position. Such changes include changing the color, opacity, visibility and so on. The browser applies the new styles and repaints the element.

Reflow happens when changes made to an element affect its position or dimensions in the document. This is a very expensive process: changing a single element can affect all of its children, its ancestors or, worse, the entire document.

Some changes that cause the browser to reflow are:

  • DOM manipulation
  • Adding CSS classes
  • Changing the computed style
  • Resizing the browser window
  • CSS3 animations
  • User actions that trigger the :hover pseudo-class

Different browsers are good at different things. Some operations are more expensive than others. Keeping this in mind, the following guidelines will help you to minimize reflow in general.

  • Keep the DOM as simple as possible. Changes at one level of the DOM tree may cause changes at all levels of the tree – from the root to the innermost child.
  • Keep your CSS rules simple and remove unused CSS rules. If you use a CSS framework like Bootstrap, import only the modules that you need instead of using the entire library.
  • Perform complex rendering changes like animations out of the flow, using position absolute or position fixed. This makes sure that such changes do not affect the parent elements.
  • Elements with fixed position cause a repaint whenever the user scrolls. Don’t use position fixed unless needed.

10. HTTP/2

After more than a decade and a half, in May 2015, the HTTP protocol was finally updated to 2.0. HTTP/2 is built on top of Google’s SPDY protocol. Support for HTTP/2 has already landed in all major web servers (Apache and Nginx) and all modern web browsers.

Let’s look at the benefits of using HTTP/2

Single Multiplexed Connection

HTTP/2 uses a single, multiplexed connection per domain. It uses this single connection to serve all files instead of creating a new connection per file. This saves valuable connection setup time.

Header compression

On an HTTP/1.1 connection, every time the browser requests an asset, it includes an HTTP header with the request. When the server sends back the response, it also includes an HTTP header. As HTTP/1.1 does not provide header compression, a lot of bandwidth is wasted as the resource request count increases.

In HTTP/2, all header information is compressed using HPACK, a compression format built specifically for efficiently compressing HTTP headers.

Server Push

Server push is a new feature in HTTP/2 which allows web servers to send assets to the browser even before the browser requests them. The files sent are stored in the browser cache.

For example, when a browser requests an HTML page, the server may push all CSS, JavaScript, images and other assets used in that page. This way, the assets will be already available in the cache when the browser needs them.
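Server push is configured on the server side. As a hypothetical sketch (the server block and file paths are made up), Nginx 1.13.9+ can push assets alongside a page using the http2_push directive:

```nginx
# When index.html is requested, push the stylesheet and script
# before the browser asks for them.
server {
    listen 443 ssl http2;

    location = /index.html {
        http2_push /css/styles.css;
        http2_push /js/app.js;
    }
}
```

Push works best for a small set of critical assets; pushing everything can waste bandwidth on files the browser already has cached.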

Stream Priority

Stream priority is another new feature available only in HTTP/2. It is a prioritization mechanism which allows browsers to request the most needed files first.

This is a great feature for the modern web, which serves a lot of mixed content – CSS, JavaScript, images and multimedia. Browsers can request some assets first to render parts of the page while waiting for heavier resources to load later.

Faster TLS – Secure Web

The single connection setup is especially beneficial with TLS as creating a TLS connection is more expensive compared to a normal connection. HTTP/2 only needs a single TLS handshake. Multiplexing makes the most of this.

By improving TLS performance, HTTP/2 allows more web applications to benefit from TLS security, making the web more secure.

Simple Web Applications

HTTP/2 multiplexing makes many of the optimization techniques used for HTTP/1.1 obsolete. This simplifies the development process and saves lots of extra work.

All of the following optimizations aim at reducing the number of HTTP requests.

  • Combining CSS and JavaScript files
  • Using image sprites to combine multiple small images into a single large image
  • Using Domain sharding to increase parallel downloads
  • Using inline assets like base-64 encoded versions of images

There’s still a long way to go before best development practices for HTTP/2 are established. Still, HTTP/2 apps should be written to take maximum advantage of caching instead of optimizing connections using the techniques above.


True performance is more than just load time. HTML, CSS, JavaScript and server-side optimization must all play together to provide the user experience expected from a next-generation web application.

Browsers like Chrome and Firefox offer lots of tools – profilers, debuggers, network analysis, timeline analysis – which you can use to measure the performance of your website.

Remember, you will still need to optimize your server-side code too, including database queries, to gain maximum performance.