Steve Souders posted about Runtime Page Optimizer, a tool you can think of as a performance proxy. It sits on the server side and cleans up content before it is sent back to the browser.

What can it do? Steve let us know:

RPO automatically implements many of the best practices from my book and YSlow, so the guys from Aptimize contacted me and showed me an early version. Here are the performance improvements RPO delivers:

  • minifies, combines and compresses JavaScript files
  • minifies, combines and compresses stylesheets
  • combines images into CSS sprites
  • inlines images inside the stylesheet
  • turns on gzip compression
  • sets far future Expires headers
  • loads scripts asynchronously

RPO reduces the number of HTTP requests as well as the amount of data that is transmitted, resulting in a page that loads faster. The big question is: how much overhead does this add at runtime? RPO caches the resources it generates (combined scripts, combined stylesheets, sprites). The primary realtime cost is changing the HTML markup. Static pages, after they are massaged, are also cached. Dynamic HTML can be optimized without a significant slowdown, much less than what's gained from these performance improvements.
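One of the practices in that list, loading scripts asynchronously, is easy to illustrate. Here is a minimal sketch of script injection; it is not RPO's actual output, and the combined.min.js filename is just a placeholder:


  // Minimal sketch of asynchronous script loading (not RPO's actual output).
  // "combined.min.js" is a placeholder for a combined, minified file.
  function loadScriptAsync(url, callback) {
    var script = document.createElement("script");
    script.src = url;
    script.async = true;        // don't block HTML parsing where supported
    script.onload = callback;   // run once the script has executed
    document.getElementsByTagName("head")[0].appendChild(script);
  }

  loadScriptAsync("combined.min.js", function () {
    // code that depends on the combined file goes here
  });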

Steve had another couple of interesting posts recently:

  • Say no to IE6 discusses how we need to do something to help upgrade IE6 users (to IE7 is fine!)
  • Raising the bar talks about results from Steve's UA Profiler tests and how new browsers are pushing forward

Michael Carter et al. have been working on a client library that gives you networking, including Comet-like support, via JavaScript.

The low-level work can sit on top of Comet APIs and, in the future, Web Sockets, and you get high-level APIs to protocols such as:

  • amqp
  • imap
  • irc
  • ldap
  • smtp
  • ssh
  • stomp
  • telnet
  • xmpp

There are some demos, such as LiveHelp, which uses Orbited as the backend.


Michal Zalewski, of Google, has released ratproxy, a tool to test your Web application against attacks such as XSS and XSRF:

Ratproxy is a semi-automated, largely passive web application security audit tool. It is meant to complement active crawlers and manual proxies more commonly used for this task, and is optimized specifically for an accurate and sensitive detection, and automatic annotation, of potential problems and security-relevant design patterns based on the observation of existing, user-initiated traffic in complex web 2.0 environments. The approach taken with ratproxy offers several important advantages over more traditional methods:

What about other solutions?

There are numerous alternative proxy tools meant to aid security auditors - most notably WebScarab, Paros, Burp, ProxMon, and Pantera. Stick with whatever suits your needs, as long as you get the data you need in the format you like.

That said, ratproxy is there for a reason. It is designed specifically to deliver concise reports that focus on prioritized issues of clear relevance to contemporary web 2.0 applications, and to do so in a hands-off, repeatable manner. It should not overwhelm you with raw HTTP traffic dumps, and it goes far beyond simply providing a framework to tamper with the application by hand.

Ratproxy implements a number of fairly advanced and unique checks based on our experience with these applications, as well as all the related browser quirks and content handling oddities. It features a sophisticated content-sniffing functionality capable of distinguishing between stylesheets and JavaScript code snippets, supports SSL man-in-the-middle and on-the-fly Flash ActionScript decompilation, and even offers an option to confirm high-likelihood flaw candidates with a very lightweight, built-in active testing module.

Last but not least, if you are undecided, the proxy may be easily chained with third-party security testing proxies of your choice.


Steven Levithan has come up with a simple way to remove nested patterns using a while loop and a replace:


  var str = "abc<1<2<>3>4>def";
  while (str != (str = str.replace(/<[^<>]*>/g, "")));
  // str -> "abcdef"

Notice that the regex in this one-liner doesn't try to deal with nested patterns at all. The while loop's condition replaces instances of <…> (where angled brackets are not allowed in the inner pattern) with an empty string. This repeats from the inside out, until the regex no longer matches. At that point, the result of the replacement is the same as the subject string, and the loop ends.

You can use a similar approach to grab nested patterns rather than delete them, as shown below.


  var str = "abc(d(e())f)(gh)ijk()",
      re = /\([^()]*\)/,
      output = [],
      match, parts, last;

  while (match = re.exec(str)) {
      parts = match[0].split("\uFFFF");
      if (parts.length < 2)
          last = output.push(match[0]) - 1;
      else
          output[last] = parts[0] + output[last] + parts[1];
      str = str.replace(re, "\uFFFF");
  }

  // output -> ["(d(e())f)", "(gh)", "()"]

AJAX Libraries API

I just got to announce the Google AJAX Libraries API, which exists to make Ajax applications that use popular frameworks such as Prototype, jQuery, Dojo, and MooTools faster and easier for developers.

Whenever I wrote an application that used one of these frameworks, I would picture a user accessing my application, already having 33 copies of prototype.js, and downloading yet another one from my site. It would make me squirm. What a waste!

At the same time, I was reading research from Steve Souders and others in the performance space that showed just how badly we are doing at providing these libraries. As developers we should set up caching correctly so we only send the file down when absolutely necessary. We should also gzip the files to browsers that accept them. Oh, and we should probably use a minified version to get that little bit more out of the system. We should also follow the practice of versioning the files nicely. Instead, we find a lot of jquery.js files with no version, often with little tweaks added to the end of the file, and caching set up so poorly that the file keeps getting sent down for no reason.
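For a rough picture of what doing this right looks like, a well-served library file would come back with response headers along these lines (illustrative values, not taken from any particular server), under a versioned filename such as jquery-1.2.6.min.js so the URL changes whenever the library does:


  Content-Type: application/javascript
  Content-Encoding: gzip
  Cache-Control: public, max-age=31536000
  Expires: Thu, 31 Dec 2009 23:59:59 GMT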

When I joined Google I realised that we could help out here. What if we hosted these files? Everyone would see some instant benefits:

  • Caching can be done correctly, and once, by us... and developers have to do nothing
  • Gzip works
  • We can serve minified versions
  • The files are hosted by Google which has a distributed CDN at various points around the world, so the files are "close" to the user
  • The servers are fast
  • By using the same URLs, if a critical mass of applications use the Google infrastructure, when someone comes to your application the file may already be loaded!
  • A subtle performance (and security) issue revolves around the headers that you send up and down. Since the libraries are served from a separate domain rather than your own site's, no cookies or other verbose headers will be sent up, saving precious bytes.

This is why we have released the AJAX Libraries API. We sat down with a few of the popular open source frameworks and they were all excited about the idea, so we got to work with them, and now you have access to their great work from our servers.

Details of what we are launching

You can access the libraries in two ways, and either way we take the pain out of hosting the libraries, correctly setting cache headers, staying up to date with the most recent bug fixes, etc.

The first way to access the scripts is simply by using a standard <script src=".."> tag that points to the correct place.

For example, to load a given version of Prototype you would place the following in your HTML:


  <script src=""></script>
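For illustration, assuming the hosted files follow the ajax.googleapis.com/ajax/libs/<library>/<version>/<file> URL pattern, a jQuery include would look something like this (the specific version here is just an example):


  <!-- illustrative example, assuming the ajax.googleapis.com/ajax/libs URL pattern -->
  <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>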

The second way to access the scripts is via the Google AJAX API Loader's google.load() method.

Here is an example using that technique to load and use jQuery for a simple search mashup:


  <script src=""></script>
  <script>
    // Load jQuery
    google.load("jquery", "1");

    // on page load complete, fire off a jQuery JSON-P query
    // against Google web search
    google.setOnLoadCallback(function() {
      $.getJSON(";v=1.0&;callback=?",
        // on search completion, process the results
        function (data) {
          if (data.responseData.results &&
              data.responseData.results.length > 0) {
            renderResults(data.responseData.results);
          }
        });
    });
  </script>

You will notice that the version used was just "1". This is a smart versioning feature that allows your application to specify a desired version with as much precision as it needs. By dropping version fields, you end up wildcarding a field. For instance, consider a set of versions: 1.9.1, 1.8.4, 1.8.2.

Specifying a version of "1.8.2" will select the obvious version. This is because a fully specified version was used. Specifying a version of "1.8" would select version 1.8.4 since this is the highest versioned release in the 1.8 branch. For much the same reason, a request for "1" will end up loading version 1.9.1.

Note that these versioning semantics work the same way when using google.load and when using direct script URLs.
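To make that concrete, here is how the three requests above would resolve via google.load, sticking with the hypothetical version set of 1.9.1, 1.8.4, and 1.8.2:


  // assuming the available versions are 1.9.1, 1.8.4 and 1.8.2
  google.load("jquery", "1.8.2");  // fully specified -> loads 1.8.2
  google.load("jquery", "1.8");    // highest release in the 1.8 branch -> 1.8.4
  google.load("jquery", "1");      // highest 1.x release -> 1.9.1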

By default, the JavaScript that gets sent back by the loader will be minified, if a minified version is available. Thus, for the example above we would return the minified version of jQuery. If you specifically want the raw JavaScript itself, you can add the "uncompressed" parameter like so:


  google.load("jquery", "1.2", {uncompressed:true});

Today we are starting with the current versions of the libraries, but moving forward we will be archiving all versions from now on so you can be sure they are available.

For a full listing of the currently supported libraries, see the documentation.

Here I am, talking about what we are doing in two short slides:

The Future

This is just the beginning. We obviously want to add more libraries as you find them useful. Also, if you squint a little you can see how this can extend even further.

If we see good usage, we can work with browser vendors to automatically ship these libraries. Then, if they see the URLs that we use, they could auto load the libraries, even special JIT'd ones, from their local system. Thus, no network hit at all! Also, the browser could have the IP addresses for this service available, so they don't have the hit of a DNS lookup. Longer lived special browser caches for JavaScript libraries could also use these URLs.

The bottom line, and what I am really excited about, is what this could all mean for Web developers if this happens. We could be relieved of the constant burden of having to re-download our standard libraries all the time. What other platform makes you do this?! Imagine if you had to download the JRE every time you ran a Java app! If we can remove this burden, we can spend more time fleshing out the functionality that we need, and less time worrying about the actual download bits. I am all for lean, but there is more to life.


I want to acknowledge the other work that has been done here. Some libraries such as jQuery and Dean Edwards' Base were already kind of doing this by hot-linking to their Google Code project hosting repository. We thought this was great, but we wanted to make it more official, and open it up to libraries that don't use our project hosting facilities.

Also, AOL does a great job of hosting Dojo already. We recommend using them for your Dojo needs, but are proud to also offer the library. Choice is good. Finally, Yahoo! placed the YUI files on their own CDN for all to use.
