Steve Souders

Our favorite ex-Yahoo, now-at-Google, fast-driving, all-around web performance guru Steve Souders recently took a look at @font-face performance:

There have been a number of great posts about @font-face performance issues:

* Paul Irish: Fighting the @font-face FOUT
* Stoyan Stefanov: Gzip your @font-face files
* Zoltan Hawryluk: More @font-face fun

This blog post summarizes Paul, Stoyan, and Zoltan’s findings plus some very important discoveries of my own.

Among these discoveries are:

* IE doesn’t render anything in the page until the font file is done downloading.
* In all major browsers, ...no files were blocked [by font downloads].
* Busy indicators... are triggered [differently in each] browser

Steve also found that neither IE nor Chrome times out while trying to download a font, which means that in IE the page never displays while it waits for the font, and in Chrome the text never displays.

Steve's conclusions are interesting:

* Only use @font-face if you’re absolutely certain you need it.
* If you have multiple font files, consider sharding them across multiple domains.
* Don’t include unused @font-face declarations - IE will download them whether they’re used or not.
* Gzip the font files and give them a future Expires header.
* Consider lazy loading the font files, at least in IE.
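
As a rough sketch of that last point, the @font-face rule could be injected from script after the page has rendered, so the font download can never block the initial display. This is only an illustration, and the font name and file path here are made up:

  <script>
  window.onload = function () {
    // Hypothetical font name and URL, for illustration only
    var css = "@font-face { font-family: 'MyWebFont'; " +
              "src: url('/fonts/mywebfont.ttf'); }";
    var style = document.createElement('style');
    style.type = 'text/css';
    document.getElementsByTagName('head')[0].appendChild(style);
    if (style.styleSheet) {
      style.styleSheet.cssText = css;                  // IE
    } else {
      style.appendChild(document.createTextNode(css)); // everyone else
    }
    // Switch to the web font only once the rule is in place
    document.body.style.fontFamily = "'MyWebFont', serif";
  };
  </script>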


Steve Souders posted about the Runtime Page Optimizer (RPO), a tool you can think of as a performance proxy. It sits on the server side and cleans up content before it is sent back to the browser.

What can it do? Steve let us know:

RPO automatically implements many of the best practices from my book and YSlow, so the guys from Aptimize contacted me and showed me an early version. Here are the performance improvements RPO delivers:

  • minifies, combines and compresses JavaScript files
  • minifies, combines and compresses stylesheets
  • combines images into CSS sprites
  • inlines images inside the stylesheet
  • turns on gzip compression
  • sets far future Expires headers
  • loads scripts asynchronously

RPO reduces the number of HTTP requests as well as reducing the amount of data that is transmitted, resulting in a page that loads faster. In doing this the big question is, how much overhead does this add at runtime? RPO caches the resources it generates (combined scripts, combined stylesheets, sprites). The primary realtime cost is changing the HTML markup. Static pages, after they are massaged, are also cached. Dynamic HTML can be optimized without a significant slowdown, much less than what’s gained by adding these performance benefits.
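
To make that concrete, here is a hypothetical before/after sketch of what markup might look like once a proxy like RPO has rewritten it; the file names and combined URLs are invented for illustration:

  <!-- Before: four separate requests, none minified -->
  <link rel="stylesheet" href="/css/reset.css">
  <link rel="stylesheet" href="/css/site.css">
  <script src="/js/utils.js"></script>
  <script src="/js/app.js"></script>

  <!-- After: one combined, minified, far-future-cacheable resource per type -->
  <link rel="stylesheet" href="/assets/combined-styles-a1b2c3.css">
  <script src="/assets/combined-scripts-d4e5f6.js"></script>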

Steve had another couple of interesting posts recently:

  • Say no to IE6 discusses how we need to do something to help upgrade IE6 users (to IE7 is fine!)
  • Raising the bar talks about results from Steve's UA Profiler tests and how new browsers are pushing forward

AJAX Libraries API

I just got to announce the Google AJAX Libraries API, which exists to make Ajax applications that use popular frameworks such as Prototype, Script.aculo.us, jQuery, Dojo, and MooTools faster and easier for developers.

Whenever I wrote an application that used one of these frameworks, I would picture a user accessing my application who already had 33 copies of prototype.js in their cache, yet was downloading another one from my site. It would make me squirm. What a waste!

At the same time, I was reading research from Steve Souders and others in the performance space that showed just how badly we are doing at providing these libraries. As developers we should set up caching correctly so we only send that file down when absolutely necessary. We should also gzip the files to browsers that accept them. Oh, and we should probably use a minified version to get that little bit more out of the system. We should also follow the practice of versioning the files nicely. Instead, we find a lot of jquery.js files with no version, that often have little tweaks added to the end of the files, and caching that is not set up well at all, so the file keeps getting sent down for no reason.

When I joined Google I realised that we could help out here. What if we hosted these files? Everyone would see some instant benefits:

  • Caching can be done correctly, and once, by us... and developers have to do nothing
  • Gzip works
  • We can serve minified versions
  • The files are hosted by Google which has a distributed CDN at various points around the world, so the files are "close" to the user
  • The servers are fast
  • By using the same URLs, if a critical mass of applications use the Google infrastructure, when someone comes to your application the file may already be loaded!
  • A subtle performance (and security) issue revolves around the headers that you send up and down. Since you are using a special domain (NOTE: not google.com!), no cookies or other verbose headers will be sent up, saving precious bytes.

This is why we have released the AJAX Libraries API. We sat down with a few of the popular open source frameworks and they were all excited about the idea, so we got to work with them, and now you have access to their great work from our servers.

Details of what we are launching

You can access the libraries in two ways, and either way we take the pain out of hosting the libraries, correctly setting cache headers, staying up to date with the most recent bug fixes, etc.

The first way to access the scripts is simply by using a standard <script src=".."> tag that points to the correct place.

For example, to load Prototype version 1.6.0.2 you would place the following in your HTML:

HTML:

  <script src="http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.2/prototype.js"></script>

The second way to access the scripts is via the Google AJAX API Loader's google.load() method.

Here is an example using that technique to load and use jQuery for a simple search mashup:

HTML:

  <script src="http://www.google.com/jsapi"></script>
  <script>
    // Load jQuery
    google.load("jquery", "1");

    // on page load complete, fire off a jQuery JSON-P query
    // against Google web search
    google.setOnLoadCallback(function() {
      $.getJSON("http://ajax.googleapis.com/ajax/services/search/web?q=google&v=1.0&callback=?",

        // on search completion, process the results
        function (data) {
          if (data.responseData.results &&
              data.responseData.results.length > 0) {
            renderResults(data.responseData.results);
          }
        });
    });
  </script>

You will notice that the version used was just "1". This is a smart versioning feature that allows your application to specify a desired version with as much precision as it needs. By dropping version fields, you end up wildcarding the dropped fields. For instance, consider a set of versions: 1.9.1, 1.8.4, 1.8.2.

Specifying a version of "1.8.2" will select the obvious version. This is because a fully specified version was used. Specifying a version of "1.8" would select version 1.8.4 since this is the highest versioned release in the 1.8 branch. For much the same reason, a request for "1" will end up loading version 1.9.1.

Note that these versioning semantics work the same way when using google.load and when using direct script URLs.
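
To illustrate the resolution rule, here is a small sketch in plain JavaScript of our own (not Google's code) that picks the highest available release matching a possibly partial requested version:

  // Pick the highest available release whose version begins with the
  // requested (possibly partial) version.
  function resolveVersion(available, requested) {
    var req = requested.split('.');
    var best = null, bestParts = null;
    for (var i = 0; i < available.length; i++) {
      var parts = available[i].split('.');
      var matches = true;
      for (var j = 0; j < req.length; j++) {
        if (parts[j] !== req[j]) { matches = false; break; }
      }
      if (matches && (best === null || compareVersions(parts, bestParts) > 0)) {
        best = available[i];
        bestParts = parts;
      }
    }
    return best; // null if nothing matches
  }

  // Numeric, field-by-field version comparison
  function compareVersions(a, b) {
    for (var i = 0; i < Math.max(a.length, b.length); i++) {
      var na = parseInt(a[i] || '0', 10), nb = parseInt(b[i] || '0', 10);
      if (na !== nb) return na - nb;
    }
    return 0;
  }

  var versions = ["1.9.1", "1.8.4", "1.8.2"];
  resolveVersion(versions, "1.8.2");  // "1.8.2"
  resolveVersion(versions, "1.8");    // "1.8.4"
  resolveVersion(versions, "1");      // "1.9.1"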

By default, the JavaScript that gets sent back by the loader will be minified, if a minified version is available. Thus, for the example above we would return the minified version of jQuery. If you specifically want the raw JavaScript itself, you can add the "uncompressed" parameter like so:

JAVASCRIPT:

  google.load("jquery", "1.2", {uncompressed: true});

Today we are starting with the current versions of each library, but moving forward we will be archiving all versions so you can be sure they remain available.

For a full listing of the currently supported libraries, see the documentation.

Here I am, talking about what we are doing in two short slides: [embedded video]

The Future

This is just the beginning. We obviously want to add more libraries as you find them useful. Also, if you squint a little you can see how this can extend even further.

If we see good usage, we can work with browser vendors to automatically ship these libraries. Then, if they see the URLs that we use, they could auto load the libraries, even special JIT'd ones, from their local system. Thus, no network hit at all! Also, the browser could have the IP addresses for this service available, so they don't have the hit of a DNS lookup. Longer lived special browser caches for JavaScript libraries could also use these URLs.

The bottom line, and what I am really excited about, is what this could all mean for Web developers if this happens. We could be relieved of the constant burden of having to re-download our standard libraries all the time. What other platform makes you do this?! Imagine if you had to download the JRE every time you ran a Java app! If we can remove this burden, we can spend more time fleshing out the functionality we need, and less time worrying about the actual download bits. I am all for lean, but there is more to life.

Acknowledgements

I want to acknowledge the other work that has been done here. Some libraries, such as jQuery and Dean Edwards' Base, were already kind of doing this by hotlinking to their Google Code project hosting repository. We thought this was great, but we wanted to make it more official, and open it up to libraries that don't use our project hosting facilities.

Also, AOL does a great job of hosting Dojo already. We recommend using them for your Dojo needs, but are proud to also offer the library. Choice is good. Finally, Yahoo! placed the YUI files on their own CDN for all to use.
