Apr 30, 2013

Client-Tier Performance Tuning of Single Page Applications

JavaScript (browser-based/single-page) applications are a new generation of applications that tap into the browser's processing power to provide a rich user experience and fast user interaction. However, these applications do have a slower initial page load than traditional applications. As the client tier handles more of the processing in these apps, it plays a bigger role in application responsiveness; client factors like the browser, the network, and processing power can slow down a request before it ever reaches the server, so even a well-performing back-end might deliver bad performance to certain clients. Along with server optimization (lean architecture), these applications require client-tier performance tuning to achieve fast application response times.


A browser's main function is to fetch requested web resources, parse the received content, execute JavaScript, and display the content in its window. It has a browser engine, a rendering engine, a JavaScript engine, and a networking component that fetches content from server resources via HTTP requests. An HTTP request for a resource can involve a DNS lookup, SSL negotiation, TCP connection setup, transmission of the HTTP request, content download, or fetching the resource from cache. Browser processing (rendering/JavaScript execution) is single threaded; only network operations are done in parallel. However, browsers do limit the number of parallel network connections to a domain, typically between 2 and 6. Each browser has its own implementation of these components, which is why response time and rendering of the same content differ from browser to browser.

Before JavaScript can be executed in the browser, the JavaScript source code first needs to be transferred from the server to the browser. This not only causes delays due to network latency, but also leads to synchronous execution of the page.

When the browser processes a downloaded JavaScript resource (script tag), it blocks all other JavaScript/CSS resources until that particular JavaScript is parsed and executed. This becomes a major issue when using AJAX toolkits that have large JavaScript libraries. In addition, browser-based applications carry a large amount of application-specific JavaScript code, which is usually modularized into smaller, more manageable JS files. This means a large body of JavaScript code, distributed across many small JS files, needs to be transferred from the server to the browser.

So the download and processing of JavaScript code itself adds network latency and can be a major bottleneck during page load.
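To make the cost concrete, consider a page that includes its scripts individually (the file names here are illustrative, not our actual layout). Each tag is a separate HTTP request, fetched through the browser's small connection pool, and each one blocks the ones after it:

    <!-- four HTTP requests, parsed and executed strictly in order -->
    <script src="/js/ext/ext-all.js"></script>
    <script src="/js/app/grid.js"></script>
    <script src="/js/app/forms.js"></script>
    <script src="/js/app/navigation.js"></script>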

The most efficient solution for this is to combine multiple JavaScript/CSS files into a single large file, which is minified and compressed before being transferred to the browser.

There are several frameworks available for this; we used JAWR. JAWR provides server-side configuration to combine multiple files (JS or CSS) into a single file called a bundle. These bundles are generated at server startup and are included by calling JAWR tag libs in place of the <script> tags. JAWR also applies minification and compression to the bundles.
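As a rough sketch, the JAWR setup looks something like this (the bundle names and paths are illustrative, not our actual configuration):

    # jawr.properties -- define bundles on the server side
    jawr.debug.on=false
    jawr.gzip.on=true
    # one JS bundle combining ExtJS with all application code
    jawr.js.bundle.app.id=/bundles/app.js
    jawr.js.bundle.app.mappings=/js/ext/ext-all.js,/js/app/**
    # one CSS bundle for all application stylesheets
    jawr.css.bundle.all.id=/bundles/all.css
    jawr.css.bundle.all.mappings=/css/**

In the JSP, the JAWR tags then replace the individual script/link tags:

    <%@ taglib uri="http://jawr.net/tags" prefix="jwr" %>
    <jwr:script src="/bundles/app.js"/>
    <jwr:style src="/bundles/all.css"/>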

We bundled all of the application JS code together with ExtJS into a single JS bundle. 
We created another bundle for our client-side translation code: I18N JavaScript resource bundles and ExtJS locales. The bundle for each language had to be defined separately; otherwise properties would be overwritten. JAWR does have an I18N message generator that can optimize translation bundling, but we did not use it because we had a custom solution for client-side translation.
We used OWA for client-side event tracking. It has a few JavaScript files that need to be included in the application pages so that OWA can track events on those pages and send them to a centralized OWA server. We had to do some custom coding, but we were able to include the OWA JS in our application JAWR bundles.
We had a single CSS bundle for all the CSS files in the application. This reduced the number of server calls from the page and also reduced the blocking of JS execution.
We saw a major performance boost to application page load with this change.

The next performance tuning step was to manage the number of HTTP requests from the page. Since browsers limit the number of parallel requests they can send to a single domain, extra requests can block the AJAX requests on the page and cause delays.

Typically, most of the resources on a web page are images, so we needed to find ways to reduce network calls for images. The easiest approach is to cache the images, because they rarely change. We configured the Cache-Control header so the browser caches images and other non-changing static content. We also use a Content Delivery Network (Akamai) as a web proxy cache and for geo-accelerated content delivery.
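On the HTTPD side this comes down to a header directive; a minimal sketch using mod_headers (the match pattern and lifetime are illustrative):

    # httpd.conf -- let browsers cache static, non-changing content
    <LocationMatch "\.(png|gif|jpe?g|css|js)$">
        # 2592000 seconds = 30 days
        Header set Cache-Control "public, max-age=2592000"
    </LocationMatch>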

For the no-cache situation (the first application run), one approach we tried to reduce the number of image-related HTTP calls was the JAWR image sprite; we faced a few UI complexities with this approach and did not use it.

We were serving static content and dynamic content from the same domain, so we tried moving the images to a separate domain (cookie-less and non-SSL). This led to a mixed content (HTTPS/HTTP) issue, which is not secure and which certain browsers flag with a user prompt. Changing the image domain to be secure (HTTPS) increased the SSL negotiation time, which negated any gain from the additional parallel network calls from the browser.

We finally implemented a solution to preload/pre-cache the images on a previous page (the login page). The images are preloaded and available in the browser cache before the page that displays them is requested. This reduced the number of HTTP calls on the main page. We also moved the images to an unauthenticated host, which reduced load on the authenticated back-end server and saved a network hop. This gave us a major performance improvement, especially in IE browsers. This may not be a generic solution, but I would guess image sprites give similar results.
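The preloading itself is only a few lines of JavaScript on the login page; a minimal sketch, with placeholder image URLs:

    // Warm the browser cache with the main page's images ahead of time.
    function preloadImages(urls) {
        for (var i = 0; i < urls.length; i++) {
            var img = new Image(); // never attached to the DOM; the request alone populates the cache
            img.src = urls[i];
        }
    }

    preloadImages([
        '/static/images/toolbar-icons.png',
        '/static/images/header-logo.png'
    ]);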

Images need to be optimized at UX design time to have a small size and the proper format. However, at times one might see more latency for smaller images than for larger ones. This can happen when the client has asymmetric bandwidth and the response is not proportionately bigger than the request, as the asymmetric (upload:download) ratio assumes; for example, on a link with a 1:10 upload:download ratio, transmitting a 1 KB request costs as much time as receiving 10 KB, so for a small image the request itself can dominate the transfer time.

Along with these changes, we also tuned the back-end (HTTPD/tcServer) to handle a large number of concurrent HTTP requests by configuring KeepAlive, connection timeout, and maxThreads/maxClients. On tcServer, we configured the non-blocking IO (NIO) connector, which is optimized to handle a large number of concurrent HTTP connections.
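For reference, on the tcServer (Tomcat) side this amounts to a connector definition along these lines; the numbers are illustrative, not our production values:

    <!-- server.xml: NIO connector with a tuned thread pool and keep-alive -->
    <Connector port="8080"
               protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="400"
               connectionTimeout="20000"
               keepAliveTimeout="15000"
               maxKeepAliveRequests="100"/>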

With all these optimizations, our large-scale enterprise web application now has a high-performing, rich front-end backed by a lean and scalable back-end.

Note: Check out Google Chrome Frame if you need to provide new functionality on legacy browsers.