Even Faster Web Sites

Steve Souders


Provides information on ways to optimize a Web site's performance, covering such topics as loading scripts without blocking, writing efficient JavaScript, using Comet, using iframes, and simplifying CSS selectors.


Mentioned in questions and answers.

I have seen it done both ways; both implementations work, the structures are just a bit different. In your experience, which works better and why?

The problem with putting scripts at the head of a page is blocking. The browser must stop processing the page until the script is downloaded, parsed and executed. The reason for this is pretty clear: these scripts might insert more into the page, changing the result of the rendering, and they may also remove things that don't need to be rendered, etc.

Some of the more modern browsers violate this rule by not blocking on the download of the scripts (IE8 was the first), but overall the download isn't where the majority of the blocking time is spent.

Check out Even Faster Web Sites; I just finished reading it and it goes over all of the fast ways to get scripts onto a page, including putting scripts at the bottom of the page to allow rendering to complete first (better UX).

This question is sort of a tangent to "Which browsers support <script async="async" />?".

I've seen a few scripts lately that do something like this:

var s = document.createElement('script');
s.type = 'text/javascript';
s.async = true;
s.src = 'http://www.example.com/script.js';
document.getElementsByTagName('head')[0].appendChild(s);

This is a common way to add a script to the DOM dynamically, which, IIRC from Steve Souders's book "Even Faster Web Sites," prompts all modern browsers to load the script asynchronously (i.e., not blocking page rendering or downloading of subsequent assets).

If I'm correct in that, does the s.async = true statement have any use? Wouldn't it be redundant, even for the browser(s) that support that property, since dynamically appending a script should already trigger asynchronous downloading?

Interesting - I think it turns out that I was wrong in my assumptions.

Based on this thread in the jQuery developers' forum:

http://forum.jquery.com/topic/jquery-ajax-async-vs-html5-script-async

it looks like the async property has been discovered to have an effect on dynamically-appended scripts, at least in Firefox (and potentially Opera, though it doesn't yet support the property).

The forum thread also cites Google's asynchronous tracking code implementation, which, although it makes use of the async property in the appropriate context, appears to get the syntax wrong. Google uses:

ga.async = true;

when apparently that doesn't work; the proper method would be to use either:

ga.async = 'async';

or

ga.setAttribute('async', 'async');

So, based on my current understanding, not all browsers will actually execute dynamically-appended scripts immediately upon their insertion into the DOM in all cases; Firefox (and eventually Opera) will need the async property to be set to ensure that this always happens.
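Putting the question's snippet together with this finding, a belt-and-braces version of the dynamic loader might look like the sketch below (purely illustrative; the URL is the example one from the question, and setting the attribute explicitly is per the jQuery forum thread cited above):

// Hedged sketch only: create the script element dynamically, and also set the
// async attribute explicitly so browsers that honour it (per the thread above)
// treat the dynamically-appended script as asynchronous.
var s = document.createElement('script');
s.type = 'text/javascript';
s.setAttribute('async', 'async');             // explicit attribute, per the forum thread's finding
s.src = 'http://www.example.com/script.js';   // example URL from the question above
document.getElementsByTagName('head')[0].appendChild(s);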

More info on Firefox's implementation of async here:

https://bugzilla.mozilla.org/show_bug.cgi?id=503481

Most web frameworks and "best practices" are not suitable for very high performance sites and the whitepapers from vendors out there ain't worth the paper they are printed on.

So where should someone look to find books, tutorials or other resources on this subject?

I have done quite a bit of searching on this. It finally boiled down to these three books:

  1. High Performance Web Sites

  2. Even Faster Web Sites

  3. The Art of Scalability

Have a look at Cal Henderson's 'Building Scalable Web Sites' from O'Reilly:

http://www.amazon.com/Building-Scalable-Web-Sites-Applications/dp/0596102356

He's the guy behind Flickr.

Also have a look at highscalability.com; they cover the architectures of some of the most heavily loaded sites out there.

Sorry if this is waaayyy too basic a question to be asked here. But here goes...

Ok so in another question something was being discussed, and this link was mentioned:

https://developer.mozilla.org/en/Writing_Efficient_CSS

In that article, they say some things I didn't know, but before I ask about them, I should ask this... Does that apply to CSS interpreted by Firefox? Forgive my noobness, but I wasn't sure what they meant by Mozilla UI. (don't hurt me!)

If it does apply, when they say:

Avoid the descendant selector!

The descendant selector is the most expensive selector in CSS. It is dreadfully expensive, especially if a rule using the selector is in the tag or universal category. Frequently what is really desired is the child selector. The use of the descendant selector is banned in UI CSS without the explicit approval of your skin's module owner.

* BAD - treehead treerow treecell { }
* BETTER, BUT STILL BAD (see next guideline) - treehead > treerow > treecell { }

The descendant selector is just a space? And then what would the difference be between child and descendant? A child is an element inside another, but isn't that the same as a descendant? Oh wait - as I'm writing this I think I might have figured it out. A descendant could be a child/grandchild/great-grandchild/etc., and a child is only one level deep?
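So, to check my own understanding, something like the snippet below (the element IDs and markup are just made up for illustration, and I'm using the JS selector API since the selector syntax is the same) seems to show the difference:

// Hypothetical markup: <div id="outer"><p id="child"><em id="deep"></em></p></div>
document.querySelectorAll('#outer em');    // descendant selector: matches #deep at any depth
document.querySelectorAll('#outer > em');  // child selector: matches nothing, #deep is not a direct child of #outer
document.querySelectorAll('#outer > p');   // child selector: matches #child, a direct child of #outer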

Sorry again for the stupid level of my question... just wondering, because I have been constantly using descendants in my CSS for my site. But yeah, if this isn't about Firefox then this whole question is pointless...

If it's not about Firefox, does anyone have a link to an article explaining efficiency for Firefox or browsers in general?

O'Reilly's "Even Faster Web Sites" has a whole chapter on this entitled "Simplifying CSS Selectors". It references your link on Mozilla.

I think two points are worth bearing in mind.

  1. Yes, if you did this as far as possible, your HTML and CSS would be a mess of styles and possibly even less efficient due to the added file size. It is up to the developer to pick the best balance. Don't agonize over optimizing every line as you write it; get it working, then see what can be beneficial.

  2. As another commenter noted, it takes the browser milliseconds to figure out how to apply your styles on page load. However, where this can have a much bigger impact is with DHTML. Every time you change the DOM, the browser re-applies your whole style sheet to the page. In this scenario many inefficient selectors could make a visible impact on your page (perceived lagginess/unresponsiveness).

I'm into my first 3 months of web development and I have been dabbling with some server-side scripting in the form of ColdFusion, along with some JavaScript, jQuery and CSS.

I have read about CSS optimization and would like to know what other pertinent factors contribute to the better performance of a website. What factors can a developer profile and optimize?

How big a part does picking (or rather, recommending) a particular browser play in this performance hunt?

cheers

I'd recommend reading Best Practices for Speeding Up Your Web Site and all of the content on Yahoo!'s Exceptional Performance page.

If you like books, you may be interested in High Performance Web Sites (note that a lot of its content is in the Best Practices for Speeding Up Your Web Site article) and Even Faster Web Sites.

Here are a few of my favourite rules from Best Practices for Speeding Up Your Web Site (a small sketch of the caching and gzip rules follows the list):

  • Minimize HTTP Requests
  • Add an Expires or a Cache-Control Header
  • Gzip Components
  • Make JavaScript and CSS External
  • Minify JavaScript and CSS
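
As a purely illustrative sketch of the "Add a Cache-Control Header" and "Gzip Components" rules (assuming a Node.js server here only for the example; the file name and port are made up):

// Minimal sketch: serve a static asset gzipped and with a far-future cache header.
var http = require('http');
var zlib = require('zlib');
var fs = require('fs');

http.createServer(function (req, res) {
  var css = fs.readFileSync('./style.css');        // hypothetical static asset
  var body = zlib.gzipSync(css);                   // "Gzip Components"
  res.writeHead(200, {
    'Content-Type': 'text/css',
    'Content-Encoding': 'gzip',                    // real code should first check the Accept-Encoding header
    'Cache-Control': 'public, max-age=31536000'    // "Add a Cache-Control Header"
  });
  res.end(body);
}).listen(8080);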

Also, smush.it is good for compressing images (which have a big impact on how fast a webpage loads).

As far as browsers go, Safari 4 claims it is "the world's fastest browser", and I can say that the Mac version is certainly nice and fast (not to mention elegant!). However, the above suggestions will make much more of a difference than which browser you're using.

Steve

I'm involved in the kind of work where I think knowledge about large-scale applications and the large-scale web will help me a lot. What do you think I should take up? I mean books to read, courses to take, etc. Thanks in advance for any suggestions.

PS: Maybe the applications I mean are not large enough :D, something like a social network for >100k users or a realtime online game for 5,000 concurrent users. I'm interested in architecture and design, and the things to consider when you build these kinds of applications.

security: http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project

scalability (db layer): http://oreilly.com/catalog/9780596003067 (half of the book is very relevant for any database)

scalability (app layer): http://www.javaconcurrencyinpractice.com/ (half of the book is very relevant for any language with shared state and threads)

front end: http://www.amazon.com/High-Performance-Web-Sites-Essential/dp/0596529309/ and http://www.amazon.com/Even-Faster-Web-Sites-Performance/dp/0596522304

Also, just to get a grasp of how difficult things sometimes are, you should start reading those two blogs:

Are there any good books on the subject worth reading and still up-to-date with current technologies?

I'm mostly interested in back-end architecture and the things I should consider when choosing a clustering and database solution, as I plan to use GWT for the front-end and therefore won't be able to control a lot there.

I'm looking for a book which will answer questions like: How to choose load balancing strategy? What DB model to choose? How to scale data? How to scale request handling? What are common problems when building web application able to handle huge traffic?

About GWT: Google Web Toolkit Applications.

In general, Even Faster Web Sites and Building Scalable Web Sites are very nice.

I have heard good words about The Art of Capacity Planning too, but I don't have it, so I can't speak from first-hand experience.

Performance Analysis for Java Web Sites by Stacey Joines et al.?

My take is that Ajax doesn't fundamentally affect the overall approach to scalability. It may place even greater emphasis on the intelligent use of caching, but overall everything we knew about scalability remains true.

I am looking at the Facebook news feed/ticker right now and I am wondering what technology/architecture it uses to pull in data asynchronously when any of my connections makes an update. One possibility that I can think of is a JavaScript setInterval on a function that aggressively polls the server for new data.

I wonder how efficient that is.

Another possible technology that I can think of is something like Comet/NodeJS architecture that pings the client when there is an update on the server. I am not too familiar with this technology.

If I wanted to create something similar to this. What should I be looking into? Is the first approach the preferred way to do this? What technologies are available out there that will allow me to do this?

There are several technologies to achieve this:

  • polling: the app makes a request every x milliseconds to check for updates
  • long polling: the app makes a request to the server, but the server only responds when it has new data available (usually if no new data is available in X seconds, an empty response is sent or the connection is killed)
  • forever frame: a hidden iframe is opened in the page and the request is made for a doc that relies on HTTP 1.1 chunked encoding
  • XHR streaming: allows successive messages to be sent from the server without requiring a new HTTP request after each response
  • WebSockets: this is the best option; it keeps the connection alive at all times
  • Flash WebSockets: if WebSockets are not natively supported by the browser, you can include a Flash-based fallback to provide that functionality

Usually people use Flash WebSockets or long polling (sketched below) when WebSockets (the most efficient transport) are not available in the browser.

A perfect example on how to combine many transport techniques and abstract them away is Socket.IO.
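
For a concrete picture of the long-polling approach mentioned above, here is a minimal client-side sketch (the '/updates' endpoint and the 5-second back-off are made-up placeholders):

// Hedged sketch of client-side long polling: the server is expected to hold each
// request open until it has new data (or a timeout), then the client asks again immediately.
async function poll() {
  while (true) {
    try {
      var res = await fetch('/updates');          // hypothetical long-poll endpoint
      if (res.ok) {
        var updates = await res.json();
        console.log('new data', updates);         // update the UI here
      }
    } catch (e) {
      await new Promise(function (r) { setTimeout(r, 5000); }); // back off on errors before retrying
    }
  }
}
poll();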

Additional resources:

http://en.wikipedia.org/wiki/Push_technology
http://en.wikipedia.org/wiki/Comet_(programming)
http://www.leggetter.co.uk/2011/08/25/what-came-before-websockets.html
Server polling with JavaScript
Is there a difference between long-polling and using Comet
http://techoctave.com/c7/posts/60-simple-long-polling-example-with-javascript-and-jquery
Video discussing different techniques: http://vimeo.com/27771528

The book Even Faster Web Sites has a full chapter (ch. 8) dedicated to 'Scaling with Comet'.

I read High Performance Web Sites: Essential Knowledge for Front-End Engineers and in it the author suggests that all JavaScript code should be externalized and put at the bottom of the page instead of putting it in the head.

This is illustrated in this example. The external script tag blocks both downloading and progressive rendering of a page, so the solution was to put it at the bottom of the page.

However, in his second book Even Faster Web Sites: Performance Best Practices for Web Developers he talks about Inline JavaScript tags.

Inline scripts also block downloading and rendering of a page, so he suggests moving them to the bottom of the page as well. However, this still blocks the rendering of the page entirely, as illustrated in this example.

Why does moving external scripts to the bottom of the page let the page render progressively, while moving inline scripts blocks rendering completely until the script is executed?


PS:

The question isn't about why add JavaScript to the bottom of the page instead of putting them in the head. It's about why bottom inline scripts block rendering while bottom external scripts don't.

In the inline script, the time is taken up running the script, which might change the DOM. Trying to render the DOM while it's mutating is a recipe for a mess. So rendering only happens at points when the JS is stalled, and therefore the DOM is stable.

While waiting for an external script to download, the running of scripts is stalled, so the DOM can be rendered safely. The downloaded JS won't be run until the rendering is complete.

I am currently a placement student (web developer) working for a university, and I have been assigned a few big web projects. The projects include a total revamp of the university IT help site, which draws around 14k hits a month from within the uni campus and around 4k externally. I also have a second project, which is a mobile version of the first. The projects will be sharing some resources.

To generalise this question so the answers could be useful to more people:

  • I have two websites that will share some resources, let's say index.php, functions.js and style.css, and these scripts will be used on almost all pages on the websites.
  • I have two audiences to cater for (in terms of download speed): the users within the same network that the sites are hosted on (approx. 100 Mb/s) and external users.

I would like to know what would be the best way to cache each kind of script (.js, .css, .php) and examples of how this would be done with their pros and cons over other methods if possible. By caching I mean locally, network and server caching.

Note: index.php is a dynamic page which should be refreshed from cache every 2 hours. It would be cool if you start your answer with .js, .css, .php or a combination so I can easily see what type of script you are talking about caching.

Thanks All!

Performance tuning through caching can be categorized into multiple layers:

A good introduction and practical code examples can be found in Chapter 9 (Performance) of Developing Large Web Applications. It covers caching CSS, JavaScript, modules, pages, Ajax and Expires headers.

If you need to keep things simple on the server side, do the following:

  1. Install the APC extension, which will make PHP faster for you through so-called opcode caching. No special configuration is needed; it will work silently for you.
  2. Cache the full page for two hours using the simple PEAR::Cache_Lite library.
  3. For each database SELECT query, cache the result in APC with a TTL of 5 minutes: md5-hash the SELECT statement and use the hash as the key for the APC cache (see the APC docs; a sketch of the idea follows this list).
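
The answer above recommends APC in PHP; purely to illustrate the pattern in item 3 (TTL-based caching keyed by an md5 of the query), here is a hedged JavaScript sketch with a hypothetical runQuery helper and an in-memory Map standing in for APC:

// Illustrative only: cache query results for 5 minutes, keyed by md5(sql).
var crypto = require('crypto');
var cache = new Map();                         // stands in for APC's shared-memory cache
var TTL_MS = 5 * 60 * 1000;

async function cachedQuery(sql) {
  var key = crypto.createHash('md5').update(sql).digest('hex');
  var hit = cache.get(key);
  if (hit && Date.now() - hit.storedAt < TTL_MS) {
    return hit.rows;                           // fresh enough, skip the database
  }
  var rows = await runQuery(sql);              // runQuery is a hypothetical DB helper
  cache.set(key, { rows: rows, storedAt: Date.now() });
  return rows;
}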

In the future, if you have multiple servers and performance becomes crucial, you will need to look at:

  1. Shared-memory caching between servers. Check Memcached or even Membase.
  2. A reverse proxy solution: this is basically a layer between your users and your server that serves the HTTP requests instead of your server. You can use Varnish, Squid or Apache Traffic Server for that.
  3. MySQL's InnoDB engine is slow; you may need to go for a faster engine such as XtraDB.
  4. Then maybe you will find that relational databases are still too slow for you. Then you will go for a key-value solution such as MongoDB.

Finally as references in web application performance check:

  1. Front-end Performance: High Performance Web Sites, Even Faster Web Sites and High Performance JavaScript.
  2. Back-end Performance: Pro PHP Application Performance and High Performance MySQL

This is an example of what I am talking about. Doesn't this just make it more complicated, or is that the point?

var ga = "appendChild",
        ha = "shift",
        ia = "exec",
        ja = "width",
        s = "replace",
        ka = "concat",
        la = "charAt",
        ma = "match",

EDIT:

This is where I found the code.

https://apis.google.com/js/platform.js

Google is simply minifying their JS files. There are two main reasons to minify: bandwidth savings and fewer TCP round trips.

Bandwidth Savings: Minifying a file can more than halve its size. So bandwidth savings are a given.

TCP Round Trips: I'm not going to go into too much detail about TCP round trips (read Even Faster Web Sites by Steve Souders). JS files in the head of a page are extremely important; JS files can block the rest of the page's assets from being downloaded. Minifying allows for fewer round trips: the platform JS could transfer in as few as 2 round trips, while the standard version could take upwards of 6.

In short, minifying is a smart thing to do. It's very difficult to edit minified files, so be sure to edit the original file, minify it, and then upload the minified version to your server. In addition to minifying JS, you should consider minifying your HTML and CSS as well.
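As a tiny, hand-made illustration of what a minifier does (the function itself is invented for the example):

// Original, readable source:
function calculateTotal(itemPrices) {
  var total = 0;
  for (var i = 0; i < itemPrices.length; i++) {
    total += itemPrices[i];
  }
  return total;
}

// Roughly what a minifier emits: same behaviour, shorter names, whitespace stripped.
function calculateTotal(a){for(var b=0,c=0;c<a.length;c++)b+=a[c];return b}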

Javascript/CSS: YUI Compressor

HTML: PageSpeed Insights

I am interested in adding a landscape footer to my website, but the image size is 115 KB and it will load on every page... Is there any effective way to load a huge image such as this one:

http://gyazo.com/5b1b7312ec4370873368df0181e41b13.png

Here's a few things that may help you:

EDIT: I tried the second tip in the list below (tinypng.com) and it reduced the size of your image by 71%, to 39.1 KB. So that's a big win.

  1. Make sure to set the cache headers on your webserver so that the browser can cache the file. Also use the same URL for all other times you use the image. Doing these two simple things will make sure that the image will only get downloaded the first time the browser requests it. All other times it will be loaded from the browser's cache.

  2. Make sure to check that the image is as small as it can be. If you use a PNG, then use tools like https://tinypng.com/ to squash all the metadata out of the image. If you use a JPEG, then maybe lower its quality. If you use Photoshop, make sure to "save the image for web"; this will also reduce the size. For photographs you are mostly better off using JPEGs; for text or other images that need to be lossless, use PNG or GIF.

  3. Loading images will not really slow down your page that much, not like JavaScript anyway. Loading JavaScript will block the rendering of the page until the JS file is downloaded, unless you use special loading techniques. That is not the case for images: the page will continue being rendered and the user can start using the page.

  4. If the image is placed using an IMG tag, then make sure to set the width and the height of the image in the CSS (or using the img width and height attributes). That will make sure that the browser does not need to reflow the page when the image is downloaded; it will know what size the image needs to be even before it is downloaded.

  5. There is a maximum number of parallel requests per domain that the browser will make. If the image has a very low priority, you could postpone its loading and wait for the onLoad event. This will make sure the other resources (with a higher priority) are downloaded first. This requires a little JavaScript, but not much (use an image lazy loader, there are many; see the sketch after this list).

  6. As I said in the previous item, there is a maximum number of requests PER DOMAIN. This is why you could also create a new (sub)domain and load some content from there; it will increase the total number of resources that can be downloaded in parallel. Using a CDN will also help here, because a CDN is on a separate domain (and it is optimised as well).

  7. If you want to read some REALLY GOOD books about this kind of optimising, read these: http://www.amazon.com/High-Performance-Web-Sites-Essential/dp/0596529309 http://www.amazon.com/Even-Faster-Web-Sites-Performance/dp/0596522304
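
For item 5, a minimal lazy-loading sketch (the #footer-banner id and the data-src attribute are hypothetical placeholders):

// Hedged sketch: defer loading a low-priority image until after the window 'load' event,
// so higher-priority resources are downloaded first.
window.addEventListener('load', function () {
  var img = document.querySelector('#footer-banner');  // e.g. <img id="footer-banner" data-src="footer.png">
  if (img && img.dataset.src) {
    img.src = img.dataset.src;                          // swapping in the real URL starts the download now
  }
});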

I am a new learner of JavaScript. From a book, I know there are three places to put JavaScript in HTML: one is in <head>...</head>, the second is under <body>, and the last is under </body>. Which is best? What's the difference between them?

The current popular convention is to place scripts before the closing </body> tag.

In general, you may want to place a script in the head that will modify the html/css behavior, libraries such as modernizr, html5shiv, etc. are meant to supplant the behavior of CSS, so should be in the <head>. Those are the only scripts I would place in the head section of the document.

Designing your page/site so that scripting isn't needed (to as much of an extent as possible) is the best guideline you can have for a public facing site that may rely on search traffic. If you are developing an intranet/extranet/saas application it isn't as important. What will go along with this is placing scripts at the bottom before the closing body tag, so that all other downloads on the page occur first.

Beyond any of this, limiting your scripts (through minification and merging) to 6 on a page will allow scripts to be downloaded simultaneously on browsers that support that behavior. There are tools/techniques that improve on this. Some scripts will bind to the window load or document ready events, so they can be placed anywhere (see the sketch below). Having your scripts, CSS and images on a separate CDN domain can help (or spread across a few CDN domains). Avoid having more than 4 total domains in use on a given page as much as possible (excluding common ones such as Google Analytics, which is likely to already be in the local DNS cache).
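
As a small sketch of the "bind to document ready" idea mentioned above (the class name added here is just an invented example):

// Code wrapped in DOMContentLoaded runs only once the document has been parsed,
// so a script written this way can safely live anywhere in the page.
document.addEventListener('DOMContentLoaded', function () {
  document.body.classList.add('js-enabled');  // hypothetical: flag that JS is available
});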

I would suggest reading "Even Faster Web Sites" by Steve Souders for more coverage of this area. Optimizing performance is about more than just where to put your JS.

I'm compiling great ideas on how to optimize and improve your CSS. What are the best practices that you would like to share? Do you use frameworks? Do you re-use your CSS? Do you document your CSS?

Please share. Thanks!

Additional Question,

How do you generate IDs and classes when naming your markup... are they generic or specific? Do most of your projects have similar classes and IDs?

Thanks!

Be sure to read Steve Souders's book "Even Faster Web Sites." There's at least one chapter in there that deals with the performance aspects of CSS selectors. It may surprise you which kinds of selectors are "high performance" and which are not - it's completely unintuitive.