Three years ago in these pages, ALA technical editor Ethan Marcotte fired the shot heard ’round the web. ALA designer Mike Pick thought it might be fun to celebrate the third anniversary of “Responsive Web Design” (A List Apart Issue No. 306, May 25, 2010) by secreting an Easter Egg in the original article; our illustrator, Kevin Cornell, rose to the challenge.

To see it in action, visit alistapart.com/article/responsive-web-design, grab the edge of the browser window (device permitting), and perform the responsive resize mambo. (ALA’s Tim Murtaugh, who coded the Easter Egg, has provided a handy video demo of what you’ll see.)


We’ve all been there: that bit of JavaScript functionality that started out as just a handful of lines grows to a dozen, then two dozen, then more. Along the way, a function picks up a few more arguments; a conditional picks up a few more conditions. And then one day, the bug report comes in: something’s broken, and it’s up to us to untangle the mess.

As we ask our client-side code to take on more and more responsibilities—indeed, whole applications are living largely in the browser these days—two things are becoming clear. One, we can’t just point and click our way through testing that things are working as we expect; automated tests are key to having confidence in our code. Two, we’re probably going to have to change how we write our code in order to make it possible to write tests.

Really, we need to change how we code? Yes—because even if we know that automated tests are a good thing, most of us are probably only able to write integration tests right now. Integration tests are valuable because they focus on how the pieces of an application work together, but what they don’t do is tell us whether individual units of functionality are behaving as expected.

That’s where unit testing comes in. And we’ll have a very hard time writing unit tests until we start writing testable JavaScript.

Unit vs. integration: what’s the difference?

Writing integration tests is usually fairly straightforward: we simply write code that describes how a user interacts with our app, and what the user should expect to see as she does. Selenium is a popular tool for automating browsers. Capybara for Ruby makes it easy to talk to Selenium, and there are plenty of tools for other languages, too.

Here’s an integration test for a portion of a search app:

def test_search
  fill_in('q', :with => 'cat')
  find('.btn').click
  assert( find('#results li').has_content?('cat'), 'Search results are shown' )
  assert( page.has_no_selector?('#results li.no-results'), 'No results is not shown' )
end

Whereas an integration test is interested in a user’s interaction with an app, a unit test is narrowly focused on a small piece of code:

When I call a function with a certain input, do I receive the expected output?
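To make that concrete, here is a minimal sketch of a unit test for a trivial pure function, written in the Mocha/Chai style introduced later in this article (the add function is hypothetical):

// A hypothetical pure function, and a unit test that asks one narrow
// question about its input and output.
function add (a, b) {
  return a + b;
}

test('add sums two numbers', function () {
  assert.equal(add(2, 3), 5);
});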

Apps that are written in a traditional procedural style can be very difficult to unit test—and difficult to maintain, debug, and extend, too. But if we write our code with our future unit testing needs in mind, we’ll find not only that writing the tests is more straightforward than we might have expected, but also that we simply write better code along the way.

To see what I’m talking about, let’s take a look at a simple search app:

[Figure: the Srchr demo app]

When a user enters a search term, the app sends an XHR to the server for the corresponding data. When the server responds with the data, formatted as JSON, the app takes that data and displays it on the page, using client-side templating. A user can click on a search result to indicate that he “likes” it; when this happens, the name of the person he liked is added to the “Liked” list on the right-hand side.

A “traditional” JavaScript implementation of this app might look like this:

var tmplCache = {};

function loadTemplate (name) {
  if (!tmplCache[name]) {
    tmplCache[name] = $.get('/templates/' + name);
  }
  return tmplCache[name];
}

$(function () {

  var resultsList = $('#results');
  var liked = $('#liked');
  var pending = false;

  $('#searchForm').on('submit', function (e) {
    e.preventDefault();

    if (pending) { return; }

    var form = $(this);
    var query = $.trim( form.find('input[name="q"]').val() );

    if (!query) { return; }

    pending = true;

    $.ajax('/data/search.json', {
      data : { q: query },
      dataType : 'json',
      success : function (data) {
        loadTemplate('people-detailed.tmpl').then(function (t) {
          var tmpl = _.template(t);
          resultsList.html( tmpl({ people : data.results }) );
          pending = false;
        });
      }
    });

    $('<li>', {
      'class' : 'pending',
      html : 'Searching &hellip;'
    }).appendTo( resultsList.empty() );
  });

  resultsList.on('click', '.like', function (e) {
    e.preventDefault();
    var name = $(this).closest('li').find('h2').text();
    liked.find('.no-results').remove();
    $('<li>', { text: name }).appendTo(liked);
  });

});

My friend Adam Sontag calls this Choose Your Own Adventure code—on any given line, we might be dealing with presentation, or data, or user interaction, or application state. Who knows! It’s easy enough to write integration tests for this kind of code, but it’s hard to test individual units of functionality.

What makes it hard? Four things:

  • A general lack of structure; almost everything happens in a $(document).ready() callback, and then in anonymous functions that can’t be tested because they aren’t exposed.
  • Complex functions; if a function is more than 10 lines, like the submit handler, it’s highly likely that it’s doing too much.
  • Hidden or shared state; for example, since pending is in a closure, there’s no way to test whether the pending state is set correctly.
  • Tight coupling; for example, a $.ajax success handler shouldn’t need direct access to the DOM.

Organizing our code

The first step toward solving this is to take a less tangled approach to our code, breaking it up into a few different areas of responsibility:

  • Presentation and interaction
  • Data management and persistence
  • Overall application state
  • Setup and glue code to make the pieces work together

In the “traditional” implementation shown above, these four categories are intermingled—on one line we’re dealing with presentation, and two lines later we might be communicating with the server.

[Figure: Code Lines]

While we can absolutely write integration tests for this code—and we should!—writing unit tests for it is pretty difficult. In our functional tests, we can make assertions such as “when a user searches for something, she should see the appropriate results,” but we can’t get much more specific. If something goes wrong, we’ll have to track down exactly where it went wrong, and our functional tests won’t help much with that.

If we rethink how we write our code, though, we can write unit tests that will give us better insight into where things went wrong, and also help us end up with code that’s easier to reuse, maintain, and extend.

Our new code will follow a few guiding principles:

  • Represent each distinct piece of behavior as a separate object that falls into one of the four areas of responsibility and doesn’t need to know about other objects. This will help us avoid creating tangled code.
  • Support configurability, rather than hard-coding things. This will prevent us from replicating our entire HTML environment in order to write our tests.
  • Keep our objects’ methods simple and brief. This will help us keep our tests simple and our code easy to read.
  • Use constructor functions to create instances of objects. This will make it possible to create “clean” copies of each piece of code for the sake of testing.

To start with, we need to figure out how we’ll break our application into different pieces. We’ll have three pieces dedicated to presentation and interaction: the Search Form, the Search Results, and the Likes Box.

[Figure: Application Views]

We’ll also have a piece dedicated to fetching data from the server and a piece dedicated to gluing everything together.

Let’s start by looking at one of the simplest pieces of our application: the Likes Box. In the original version of the app, this code was responsible for updating the Likes Box:

var liked = $('#liked');

var resultsList = $('#results');


// ...


resultsList.on('click', '.like', function (e) {
  e.preventDefault();

  var name = $(this).closest('li').find('h2').text();

  liked.find( '.no-results' ).remove();

  $('<li>', { text: name }).appendTo(liked);

});

The Search Results piece is completely intertwined with the Likes Box piece and needs to know a lot about its markup. A much better and more testable approach would be to create a Likes Box object that’s responsible for manipulating the DOM related to the Likes Box:

var Likes = function (el) {
  this.el = $(el);
  return this;
};

Likes.prototype.add = function (name) {
  this.el.find('.no-results').remove();
  $('<li>', { text: name }).appendTo(this.el);
};

This code provides a constructor function that creates a new instance of a Likes Box. The instance that’s created has an .add() method, which we can use to add new results. We can write a couple of tests to prove that it works:

var ul;

setup(function(){
  ul = $('<ul><li class="no-results"></li></ul>');
});

test('constructor', function () {
  var l = new Likes(ul);
  assert(l);
});

test('adding a name', function () {
  var l = new Likes(ul);
  l.add('Brendan Eich');

  assert.equal(ul.find('li').length, 1);
  assert.equal(ul.find('li').first().html(), 'Brendan Eich');
  assert.equal(ul.find('li.no-results').length, 0);
});

Not so hard, is it? Here we’re using Mocha as the test framework, and Chai as the assertion library. Mocha provides the test and setup functions; Chai provides assert. There are plenty of other test frameworks and assertion libraries to choose from, but for the sake of an introduction, I find these two work well. You should find the one that works best for you and your project—aside from Mocha, QUnit is popular, and Intern is a new framework that shows a lot of promise.
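For completeness, here is a minimal sketch of the wiring a browser test page needs; mocha.setup, chai.assert, and mocha.run are the libraries’ documented API, but this exact arrangement is an assumption of mine rather than code from the article:

// Load mocha.js, chai.js, the code under test, and the test files, then:
mocha.setup('tdd');        // exposes the suite/test/setup/teardown globals used here
var assert = chai.assert;  // the assert-style interface these tests rely on

// ... tests are defined here ...

mocha.run();               // run everything and report the results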

Our test code starts out by creating an element that we’ll use as the container for our Likes Box. Then, it runs two tests: one is a sanity check to make sure we can make a Likes Box; the other is a test to ensure that our .add() method has the desired effect. With these tests in place, we can safely refactor the code for our Likes Box, and be confident that we’ll know if we break anything.

Our new application code can now look like this:

var liked = new Likes('#liked');
var resultsList = $('#results');



// ...



resultsList.on('click', '.like', function (e) {
  e.preventDefault();

  var name = $(this).closest('li').find('h2').text();

  liked.add(name);
});

The Search Results piece is more complex than the Likes Box, but let’s take a stab at refactoring that, too. Just as we created an .add() method on the Likes Box, we also want to create methods for interacting with the Search Results. We’ll want a way to add new results, as well as a way to “broadcast” to the rest of the app when things happen within the Search Results—for example, when someone likes a result.

var SearchResults = function (el) {
  this.el = $(el);
  this.el.on( 'click', '.btn.like', _.bind(this._handleClick, this) );
};

SearchResults.prototype.setResults = function (results) {
  var templateRequest = $.get('people-detailed.tmpl');
  templateRequest.then( _.bind(this._populate, this, results) );
};

SearchResults.prototype._handleClick = function (evt) {
  var name = $(evt.target).closest('li.result').attr('data-name');
  $(document).trigger('like', [ name ]);
};

SearchResults.prototype._populate = function (results, tmpl) {
  var html = _.template(tmpl, { people: results });
  this.el.html(html);
};

Now, our old app code for managing the interaction between Search Results and the Likes Box could look like this:

var liked = new Likes('#liked');
var resultsList = new SearchResults('#results');


// ...


$(document).on('like', function (evt, name) {
  liked.add(name);
});

It’s much simpler and less entangled, because we’re using the document as a global message bus, and passing messages through it so individual components don’t need to know about each other. (Note that in a real app, we’d use something like Backbone or the RSVP library to manage events. We’re just triggering on document to keep things simple here.) We’re also hiding all the dirty work—such as finding the name of the person who was liked—inside the Search Results object, rather than having it muddy up our application code. The best part: we can now write tests to prove that our Search Results object works as we expect:

var ul;
var data = [ /* fake data here */ ];

setup(function () {
  ul = $('<ul><li class="no-results"></li></ul>');
});

test('constructor', function () {
  var sr = new SearchResults(ul);
  assert(sr);
});

test('display received results', function () {
  var sr = new SearchResults(ul);
  sr.setResults(data);

  assert.equal(ul.find('.no-results').length, 0);
  assert.equal(ul.find('li.result').length, data.length);
  assert.equal(
    ul.find('li.result').first().attr('data-name'),
    data[0].name
  );
});

test('announce likes', function() {
  var sr = new SearchResults(ul);
  var flag;
  var spy = function () {
    flag = [].slice.call(arguments);
  };

  sr.setResults(data);
  $(document).on('like', spy);

  ul.find('li').first().find('.like.btn').click();

  assert(flag, 'event handler called');
  assert.equal(flag[1], data[0].name, 'event handler receives data' );
});

The interaction with the server is another interesting piece to consider. The original code included a direct $.ajax() request, and the callback interacted directly with the DOM:

$.ajax('/data/search.json', {
  data : { q: query },
  dataType : 'json',
  success : function( data ) {
    loadTemplate('people-detailed.tmpl').then(function(t) {
      var tmpl = _.template( t );
      resultsList.html( tmpl({ people : data.results }) );
      pending = false;
    });
  }
});

Again, this is difficult to write a unit test for, because so many different things are happening in just a few lines of code. We can restructure the data portion of our application as an object of its own:

var SearchData = function () { };

SearchData.prototype.fetch = function (query) {
  var dfd;

  if (!query) {
    dfd = $.Deferred();
    dfd.resolve([]);
    return dfd.promise();
  }

  return $.ajax( '/data/search.json', {
    data : { q: query },
    dataType : 'json'
  }).pipe(function( resp ) {
    return resp.results;
  });
};

Now, we can change our code for getting the results onto the page:

var resultsList = new SearchResults('#results');

var searchData = new SearchData();

// ...

// _.bind keeps `this` pointing at resultsList when the promise invokes setResults
searchData.fetch(query).then( _.bind(resultsList.setResults, resultsList) );

Again, we’ve dramatically simplified our application code, and isolated the complexity within the Search Data object, rather than having it live in our main application code. We’ve also made our search interface testable, though there are a couple caveats to bear in mind when testing code that interacts with the server.

The first is that we don’t want to actually interact with the server—to do so would be to reenter the world of integration tests, and because we’re responsible developers, we already have tests that ensure the server does the right thing, right? Instead, we want to “mock” the interaction with the server, which we can do using the Sinon library. The second caveat is that we should also test non-ideal paths, such as an empty query.

test('constructor', function () {
  var sd = new SearchData();
  assert(sd);
});

suite('fetch', function () {
  var xhr, requests;

  setup(function () {
    requests = [];
    xhr = sinon.useFakeXMLHttpRequest();
    xhr.onCreate = function (req) {
      requests.push(req);
    };
  });

  teardown(function () {
    xhr.restore();
  });

  test('fetches from correct URL', function () {
    var sd = new SearchData();
    sd.fetch('cat');

    assert.equal(requests[0].url, '/data/search.json?q=cat');
  });

  test('returns a promise', function () {
    var sd = new SearchData();
    var req = sd.fetch('cat');

    assert.isFunction(req.then);
  });

  test('no request if no query', function () {
    var sd = new SearchData();
    var req = sd.fetch();
    assert.equal(requests.length, 0);
  });

  test('return a promise even if no query', function () {
    var sd = new SearchData();
    var req = sd.fetch();

    assert.isFunction( req.then );
  });

  test('no query promise resolves with empty array', function () {
    var sd = new SearchData();
    var req = sd.fetch();
    var spy = sinon.spy();

    req.then(spy);

    assert.deepEqual(spy.args[0][0], []);
  });

  test('returns contents of results property of the response', function () {
    var sd = new SearchData();
    var req = sd.fetch('cat');
    var spy = sinon.spy();

    requests[0].respond(
      200, { 'Content-type': 'text/json' },
      JSON.stringify({ results: [ 1, 2, 3 ] })
    );

    req.then(spy);

    assert.deepEqual(spy.args[0][0], [ 1, 2, 3 ]);
  });
});

For the sake of brevity, I’ve left out the refactoring of the Search Form, and also simplified some of the other refactorings and tests, but you can see a finished version of the app here if you’re interested.
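Still, to give a sense of its shape, here is a hedged sketch of what the Search Form piece might look like, following the same patterns as the Likes Box and Search Results; the details are my own guesses, not the finished version:

var SearchForm = function (el) {
  this.el = $(el);
  this.el.on('submit', _.bind(this._handleSubmit, this));
};

SearchForm.prototype._handleSubmit = function (e) {
  e.preventDefault();

  var query = $.trim( this.el.find('input[name="q"]').val() );

  if (!query) { return; }

  // broadcast the query on the global message bus, just like 'like'
  $(document).trigger('search', [ query ]);
};

The application code below also calls searchResults.pending(), which we haven’t defined; a plausible sketch reuses the “Searching …” indicator from the original implementation:

SearchResults.prototype.pending = function () {
  this.el.empty();
  $('<li>', {
    'class' : 'pending',
    html : 'Searching &hellip;'
  }).appendTo(this.el);
};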

When we’re done rewriting our application using testable JavaScript patterns, we end up with something much cleaner than what we started with:

$(function() {
  var pending = false;

  var searchForm = new SearchForm('#searchForm');
  var searchResults = new SearchResults('#results');
  var likes = new Likes('#liked');
  var searchData = new SearchData();

  $(document).on('search', function (event, query) {
    if (pending) { return; }

    pending = true;

    searchData.fetch(query).then(function (results) {
      searchResults.setResults(results);
      pending = false;
    });

    searchResults.pending();
  });

  $(document).on('like', function (evt, name) {
    likes.add(name);
  });
});

Even more important than our much cleaner application code, though, is the fact that we end up with a codebase that is thoroughly tested. That means we can safely refactor it and add to it without the fear of breaking things. We can even write new tests as we find new issues, and then write the code that makes those tests pass.

Testing makes life easier in the long run

It’s easy to look at all of this and say, “Wait, you want me to write more code to do the same job?”

The thing is, there are a few inescapable facts of life about Making Things On The Internet. You will spend time designing an approach to a problem. You will test your solution, whether by clicking around in a browser, writing automated tests, or—shudder—letting your users do your testing for you in production. You will make changes to your code, and other people will use your code. Finally: there will be bugs, no matter how many tests you write.

The thing about testing is that while it might require a bit more time at the outset, it really does save time in the long run. You’ll be patting yourself on the back the first time a test you wrote catches a bug before it finds its way into production. You’ll be grateful, too, when you have a system in place that can prove that your bug fix really does fix a bug that slips through.

Additional resources

This article just scratches the surface of JavaScript testing, but if you’d like to learn more, check out:

  • My presentation from the 2012 Full Frontal conference in Brighton, UK.
  • Grunt, a tool that helps automate the testing process and lots of other things.
  • Test-Driven JavaScript Development by Christian Johansen, the creator of the Sinon library. It is a dense but informative examination of the practice of testing JavaScript.

Great design is a product of care and attention applied to areas that matter, resulting in a useful, understandable, and hopefully beautiful user interface. But don’t be fooled into thinking that design is left only for designers.

There is a lot of design in code, and I don’t mean code that builds the user interface—I mean the design of code.

Well-designed code is much easier to maintain, optimize, and extend, making for more efficient developers. That means more focus and energy can be spent on building great things, which makes everyone happy—users, developers, and stakeholders.

There are three high-level, language-agnostic aspects to code design that are particularly important.

  1. System architecture—The basic layout of the codebase. Rules that govern how various components, such as models, views, and controllers, interact with each other.
  2. Maintainability—How well can the code be improved and extended?
  3. Reusability—How reusable are the application’s components? How easily can each implementation of a component be customized?

In looser languages, specifically JavaScript, it takes a bit of discipline to write well-designed code. The JavaScript environment is so forgiving that it’s easy to throw bits and pieces everywhere and still have things work. Establishing system architecture early (and sticking to it!) provides constraints to your codebase, ensuring consistency throughout.

One approach I’m fond of consists of a tried-and-true software design pattern, the module pattern, whose extensible structure lends itself to a solid system architecture and a maintainable codebase. I like building modules within a jQuery plugin, which makes for beautiful reusability, provides robust options, and exposes a well-crafted API.

Below, I’ll walk through how to craft your code into well-organized components that can be reused in projects to come.

The module pattern

There are a lot of design patterns out there, and just as many resources on them. Addy Osmani wrote an amazing (free!) book on design patterns in JavaScript, which I highly recommend to developers of all levels.

The module pattern is a simple structural foundation that can help keep your code clean and organized. A “module” is just a standard object literal containing methods and properties, and that simplicity is the best thing about this pattern: even someone unfamiliar with traditional software design patterns would be able to look at the code and instantly understand how it works.

In applications that use this pattern, each component gets its own distinct module. For example, to build autocomplete functionality, you’d create a module for the textfield and a module for the results list. These two modules would work together, but the textfield code wouldn’t touch the results list code, and vice versa.

That decoupling of components is why the module pattern is great for building solid system architecture. Relationships within the application are well-defined; anything related to the textfield is managed by the textfield module, not strewn throughout the codebase—resulting in clear code.

Another benefit of module-based organization is that it is inherently maintainable. Modules can be improved and optimized independently without affecting any other part of the application.

I used the module pattern for the basic structure of jPanelMenu, the jQuery plugin I built for off-canvas menu systems. I’ll use that as an example to illustrate the process of building a module.

Building a module

To begin, I define three methods and a property that are used to manage the interactions of the menu system.

var jpm = {
    animated: true,
    openMenu: function( ) {
        …
        this.setMenuStyle( );
    },
    closeMenu: function( ) {
        …
        this.setMenuStyle( );
    },
    setMenuStyle: function( ) { … }
};

The idea is to break down code into the smallest, most reusable bits possible. I could have written just one toggleMenu( ) method, but creating distinct openMenu( ) and closeMenu( ) methods provides more control and reusability within the module.

Notice that calls to module methods and properties from within the module itself (such as the calls to setMenuStyle( )) are prefixed with the this keyword—that’s how modules access their own members.
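As a quick illustration of both points, a toggleMenu( ) method, had we wanted one, could be composed from the two finer-grained methods. This is a sketch of my own, assuming a hypothetical isOpen flag that openMenu( ) and closeMenu( ) would maintain:

var jpm = {
    isOpen: false, // hypothetical state flag, maintained by openMenu/closeMenu
    toggleMenu: function( ) {
        // reuse the finer-grained methods via the this keyword
        if ( this.isOpen ) {
            this.closeMenu( );
        } else {
            this.openMenu( );
        }
    },
    …
};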

That’s the basic structure of a module. You can continue to add methods and properties as needed, but it doesn’t get any more complex than that. After the structural foundations are in place, the reusability layer—options and an exposed API—can be built on top.

jQuery plugins

The third aspect of well-designed code is probably the most crucial: reusability. This section comes with a caveat. While there are obviously ways to build and implement reusable components in raw JavaScript (we’re about 90 percent of the way there with our module above), I prefer to build jQuery plugins for more complex things, for a few reasons.

Most importantly, it’s a form of unobtrusive communication. If you used jQuery to build a component, you should make that obvious to those implementing it. Building the component as a jQuery plugin is a great way to say that jQuery is required.

In addition, the implementation code will be consistent with the rest of the jQuery-based project code. That’s good for aesthetic reasons, but it also means (to an extent) that developers can predict how to interact with the plugin without too much research. Just one more way to build a better developer interface.

Before you begin building a jQuery plugin, ensure that the plugin does not conflict with other JavaScript libraries using the $ notation. That’s a lot simpler than it sounds—just wrap your plugin code like so:

(function($) {
    // jQuery plugin code here
})(jQuery);

Next, we set up our plugin and drop our previously built module code inside. A plugin is just a method defined on the jQuery ($) object.

(function($) {
    $.jPanelMenu = function( ) {
        var jpm = {
            animated: true,
            openMenu: function( ) {
                …
                this.setMenuStyle( );
            },
            closeMenu: function( ) {
                …
                this.setMenuStyle( );
            },
            setMenuStyle: function( ) { … }
        };
    };
})(jQuery);

All it takes to use the plugin is a call to the function you just created.

var jpm = $.jPanelMenu( );

Options

Options are essential to any truly reusable plugin because they allow for customizations to each implementation. Every project brings with it a slew of design styles, interaction types, and content structures. Customizable options help ensure that you can adapt the plugin to fit within those project constraints.

It’s best practice to provide good default values for your options. The easiest way to do that is to use jQuery’s $.extend( ) method, which accepts (at least) two arguments.

As the first argument of $.extend( ), define an object with all available options and their default values. As the second argument, pass the options object that was passed into the plugin. This merges the two objects, overriding the defaults with any passed-in options.

(function($) {
    $.jPanelMenu = function(options) {
        var jpm = {
            options: $.extend({
                'animated': true,
                'duration': 500,
                'direction': 'left'
            }, options),
            openMenu: function( ) {
                …
                this.setMenuStyle( );
            },
            closeMenu: function( ) {
                …
                this.setMenuStyle( );
            },
            setMenuStyle: function( ) { … }
        };
    };
})(jQuery);

Beyond providing good defaults, options become almost self-documenting—someone can look at the code and see all of the available options immediately.

Expose as many options as is feasible. The customization will help in future implementations, and flexibility never hurts.

API

Options are terrific ways to customize how a plugin works. An API, on the other hand, enables extensions to the plugin’s functionality by exposing methods and properties for the implementation code to take advantage of.

While it’s great to expose as much as possible through an API, the outside world shouldn’t have access to all internal methods and properties. Ideally, you should expose only the elements that will be used.

In our example, the exposed API should include calls to open and close the menu, but nothing else. The internal setMenuStyle( ) method runs when the menu opens and closes, but the public doesn’t need access to it.

To expose an API, return an object with any desired methods and properties at the end of the plugin code. You can even map returned methods and properties to those within the module code—this is where the beautiful organization of the module pattern really shines.

(function($) {
    $.jPanelMenu = function(options) {
        var jpm = {
            options: $.extend({
                'animated': true,
                'duration': 500,
                'direction': 'left'
            }, options),
            openMenu: function( ) {
                …
                this.setMenuStyle( );
            },
            closeMenu: function( ) {
                …
                this.setMenuStyle( );
            },
            setMenuStyle: function( ) { … }
        };

        return {
            open: jpm.openMenu,
            close: jpm.closeMenu,
            someComplexMethod: function( ) { … }
        };
    };
})(jQuery);

API methods and properties will be available through the object returned from the plugin initialization.

var jpm = $.jPanelMenu({
    duration: 1000,
    …
});
jpm.open( );

Polishing developer interfaces

With just a few simple constructs and guidelines, we’ve built ourselves a reusable, extensible plugin that will help make our lives easier. Like any part of what we do, experiment with this structure to see if it works for you, your team, and your workflow.

Whenever I find myself building something with a potential for reuse, I break it out into a module-based jQuery plugin. The best part about this approach is that it forces you to use—and test—the code you write. By using something as you build it, you’ll quickly identify strengths, discover shortcomings, and plan changes.

This process leads to battle-tested code ready for open-source contributions, or to be sold and distributed. I’ve released my (mostly) polished plugins as open-source projects on GitHub.

Even if you aren’t building something to be released in the wild, it’s still important to think about the design of your code. Your future self will thank you.


I know you’ve been asked this plenty of times already, but: no new vendor prefixes, right? Right?

Nope, none! They’re great in theory, but it turns out they fail in practice, so we’re joining Mozilla and the W3C CSS WG and moving away from them. There are a few parts to this.

Firstly, we won’t be migrating the existing -webkit- prefixed properties to a -chrome- or -blink- prefix; that’d just make extra work for everyone. Secondly, we inherited some existing properties that are prefixed. Some, like -webkit-transform, are on the standards track, and we’ll work with the CSS WG to move those standards ahead while we fix any remaining issues in our implementation; we’ll unprefix them when they’re ready. Others, like -webkit-box-reflect, are not on the standards track, and we’ll bring them to standards bodies or responsibly deprecate them on a case-by-case basis. Lastly, we’re not introducing any new CSS properties behind a prefix.

Pinky swear?

Totes. New stuff will be available to experiment with behind a flag you can turn on in about:flags called “Experimental Web Platform Features”. When the feature is ready, it’ll graduate to Canary, and then follow its ~12-week path down through Dev Channel and Beta to all users at Stable.

The Blink prefix policy is documented and, in fact, WebKit just nailed down their prefix policy going forward. If you’re really into prefix drama (and who isn’t!), Chris Wilson and I discussed this a lot more on the Web Ahead podcast [37:20].

How long before we can try Blink out in Chrome?

Blink’s been in Chrome Canary since the day we announced it. The codebase was 99.9% the same when Blink launched, so no need to rush out and check everything. All your sites should be pretty much the same.

Chrome 27 has the Blink engine, and that’s available on the beta channel for Win, Mac, Linux, ChromeOS, and Android. (See the full beta/stable/dev/canary view.)

While the internals are apt to be fairly different, will there be any radical changes to the rendering side of things in the near future?

Nothing too alarming: layout and CSS stuff is all staying the same. Grid layout is still in development, though, and our Windows text rendering has been getting a new backend that we can hook up soon, greatly boosting the quality of webfont rendering there.

We’re also interested in better taking advantage of multiple cores on machines: the more we can move painting, layout (aka reflow), and style recalculation to a separate thread, the faster everyone’s sites will render. We’re already doing multi-threaded painting on ChromeOS and Android, and looking into doing it on Mac & Windows. If you’re interested in these experimental efforts or watching new feature proposals, take a look at the blink-dev mailing list. A recently proposed experiment is called Oilpan, where we’ll look into the advantages of moving the implementation of Chrome’s DOM into JavaScript.

Will features added to Blink be contributed back to the WebKit project? Short term; long term?

Since Blink launched, a few patches have landed in both Blink and WebKit, though we expect this to decline over the long term as the codebases diverge.

When are we likely to start seeing Blink-powered versions of Chrome on Android? Is it even possible on iOS, or is iOS Chrome still stuck with a Safari webview due to Apple’s policies?

Blink is now in the Chrome Beta for Android. Chrome for iOS, due to platform limitations, is based on the WebKit-based WebView that’s provided by iOS.

Part of this move seems to be giving Google the freedom to remove old or disused features that have been collecting dust in WebKit for ages. There must be a few things high on that list—what are some of those things, and how can we be certain their removal won’t lead to the occasional broken website?

A few old ’n crusty things that we’re looking at removing: the isindex attribute, RangeException, and XMLHttpRequestException. Old things that have little use in the wild and just haven’t gotten a spring cleaning from the web platform for ages.

Now, we don’t want to break the web, and that’s something that web browser engineers have always been kept very aware of. We carefully gauge real-world usage of things like CSS and DOM features before deprecating anything. At Google we have a copy of the web that we run queries against, so we have a pretty good idea of which CSS and JavaScript features the web out there is using.

Blink also has over 32,000 tests in its test suite, and manual confirmation that over 100 sites work great before every release ships. And we’re working closely with the W3C and Adobe to share tests and testing infrastructure across browsers, with the goals of reducing maintenance burden, improving interoperability, and increasing test coverage. Eventually we’d like all new features to ship with shared conformance tests, ensuring interoperability even as we add cutting-edge stuff.

Still, any deprecation has to be done responsibly. There’s now a draft Blink process for deprecating features, which includes:

  • Anonymous metrics to understand how much any specific feature is used “in the wild”
  • “Intent to deprecate” emails that hit blink-dev months before anything is removed
  • Warnings that you’ll find in your DevTools console if you’re using anything deprecated
  • Mentions on the Chromium blog, like this Chrome 27 wrap-up.

Did part of the decision to branch away from WebKit involve resistance to adding a Dart VM? WebKit’s goals explicitly mention JavaScript, and Apple representatives have been fairly vocal about not seeing a need.

Nope, not at all. The decision was made by the core web platform engineers. Introducing a new VM to a browser introduces considerable maintenance cost (we saw this with V8 and JavaScriptCore both in WebKit), and right now Dart isn’t yet ready to be considered for an integration with Blink (more on that in a sec). Blink’s got strong principles around compatibility risk, and this guides a lot of the decisions around our commitments to potential features as they are proposed. You can hear a more complete answer here from Darin Fisher, one of the Chrome web platform leads.

Have any non-WebKit browsers recently expressed an interest in Dart? A scripting language that only stands to work in one browser sounds a little VBScript-y.

Not yet, but since Dart compiles to JavaScript and runs across the modern web, it’s not gated on other browsers integrating the VM. But it’s still early days: Dart has not yet reached a stable 1.0 milestone, and there are still technical challenges with the Dart VM around performance and memory management. Still, it’s important to point out that Dart is an open source project, with a bunch of external contributors and committers.

Let me take a moment to provide my own perspective on Dart. :) Now, as you know, I’m a JavaScript guy, so early on, I took a side and considered Dart an enemy. JavaScript should win; Dart is bad! But then I came to realize the Dart guys aren’t just setting out to improve the authoring and scalability of web application development. They also really want the web to win. I’ve recently spoken about how The Mobile Web Is In Trouble, and clarified that my priority is seeing it provide a fantastic user experience to everyone. For me, seeing the mobile web be successful trumps language wars and certainly quibbling over syntax. So I’m happy to see developers embrace the authoring advantages of CoffeeScript, the smart subset of JavaScript strict mode, the legendary Emscripten & asm.js combo, the compiler feedback of TypeScript, and the performance ambitions of Dart. It’s worth trying out technologies that can leapfrog the current expectations of the user experience we can deliver. Our web is worth it.

Will Opera be using the Chromium version of Blink wholesale, as far as you know? Are we likely to see some divergence between Opera and Chrome?

As I understand it, Opera Mobile, Opera Desktop, and Opera Mini will all be based on Chromium. This means that they’ll not only share the exact version of Blink that Chrome uses, but also the same graphics stack, JavaScript engine, and networking stack. Already, Opera has contributed some great things to Blink and we’re excited about what’s next.

Why the name “Blink,” anyway?

Haha. Well… it’s a two-parter. First, Blink evokes a certain feeling of speed and simplicity—two core principles of Chrome. Then, Chrome has a little tradition of slightly ironic names. Chrome itself is all about minimizing the browser chrome, and the Chromebook Pixel is all about not seeing any pixels at all. So naturally, it fits that Blink will never support the infamous <blink> tag. ;)

<3z


Ilya Grigorik discusses in detail how to construct a mobile website that loads as quickly as possible. A site that not only renders in 1 second, but one that is also visible in 1 second. With hard statistics as evidence to show why this matters, Ilya discusses techniques to deliver a 1000 millisecond experience.


In this 60-minute video from An Event Apart Boston, Scott Berkun tackles designer disempowerment. He discusses how power actually works, and why developing salesmanship skills is a must, even if your job isn’t public-facing.


Web maps have come a long way. Improved data, cleaner design, better performance, and more intuitive controls have made web maps a ubiquitous and critical component of many apps. They’ve also become one of the mobile space’s most successful transplants as more and more apps are powered by location-aware devices. The core web map UI paradigm itself—a continuous, pannable, zoomable surface—has even spread beyond mapping to interfaces everywhere.

Despite all this, we’ve barely begun to work web maps into our design practice. We create icon fonts, responsive grids, CSS frameworks, progressive enhancement strategies, and even new design processes. We tear down old solutions and build new ones, and even take an extra second to share battle stories in prose and in person. Yet nearly five years since Paul Smith’s article, “Take Control of Your Maps,” web maps are still a blind spot for most designers.

Have you ever taken apart a map? Worked with a map as a critical part of your design? Developed tricks, hacks, workarounds, or progressive enhancements for maps?

This article is a long overdue companion to Paul’s piece. Where he goes on a whirlwind survey of the web mapping stack at 10,000 feet, we’re going to walk through a single design process and implement a modern-day web map. By walking this path, I hope to begin making maps part of the collective conversation we have as designers.

Opinionated about open

Paul makes a strong case for why you might want to use open mapping tools instead of the established incumbent. I won’t retread his reasons here, but I would like to expand on his last: Open tools are the ones we hack best.

There is nothing mysterious about web maps. Take any spatial plane, split it up into discrete tiles, position them in the DOM, and add event handlers for panning and zooming. The basic formula can be applied to Portland, Mars, or Super Mario Land. It works for displaying large street maps, but nothing stops us from tinkering with it to explore galleries of art, create fictional game worlds, learn human anatomy, or simply navigate a web page. Open tools bare the guts of this mechanism to us, allowing us to see a wider range of possibilities.
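To make “split it up into discrete tiles” concrete, here is the standard slippy-map tile arithmetic. This sketch is my own illustration, not code from any of the demos:

// At zoom level z, the world is a 2^z-by-2^z grid of square tiles (spherical
// Mercator). A longitude/latitude pair maps to a tile column x and row y.
function lonLatToTile (lon, lat, z) {
  var n = Math.pow(2, z);
  var latRad = lat * Math.PI / 180;
  return {
    x: Math.floor( (lon + 180) / 360 * n ),
    y: Math.floor( (1 - Math.log( Math.tan(latRad) + 1 / Math.cos(latRad) ) / Math.PI) / 2 * n ),
    z: z
  };
}

// e.g. lonLatToTile(-0.15591514, 51.51830379, 15) yields the z/x/y address of
// the tile covering Baker St., which a tile server turns into an image URL.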

[Figure: the mechanics of web maps are not limited to street maps: character navigation, Mars, and Super Mario Land.]

We should know the conditions under which map images are loaded and destroyed; we should argue whether map tiles are best positioned with CSS transforms or not; and we should care whether vector elements are drawn with SVG or Canvas. Open tools let us know and experiment with these working details of our maps. If you wouldn’t have it any other way with your HTML5, CSS, or JavaScript libraries, then you shouldn’t settle for less when it comes to maps.

In short, we’ll be working with a fully open mapping stack. MapBox, where I work, has pulled together several open source libraries into a single API that we publish under mapbox.js. Other open mapping libraries that are worth your time include Leaflet and D3.js.

Starting out

I’m a big fan of Sherlock Holmes. Between the recent Hollywood movies starring Robert Downey Jr. and the BBC’s contemporary series, I’m hooked. But as someone who has never been to London, I know I’m missing the richness of place and setting that Sir Arthur Conan Doyle meant to be read into his short stories.

A typical approach would be to embed a web map with pins of various locations alongside one of the Sherlock stories. With this approach the map becomes an appendix—a dispensable element that plays little part in Doyle’s storytelling. Instead, we’re going to expand the role of our map, integrating it fully into the narrative. It will set the stage, provide pace, and affect the mood of our story.

[Figure: comparing a map used as embedded media versus one used as a critical design element.]

A tale of places

To establish a baseline for our tale, I restructured The Adventure of the Bruce-Partington Plans to be told around places. I picked eight key locations from the original text, pulled out the essential details of the mystery, and framed them out with HTML, CSS, and JavaScript.

[Figure: a Sherlock Holmes story in text only. View Demo 1.]

  • The story is broken up into section elements for each key location. A small amount of JavaScript implements a scrolling flow that highlights a single section at a time.
  • Our page is not responsive yet, but it contains scaffolding to guard against bad choices that could thwart us. The main text column is fluid at 33.33% and pins to a min-width: 320px. If our content and design flow reasonably within these constraints, we’re in good shape.

Next, we’ll get started mapping. Initially we’ll work on our map separately from our story page to focus on learning key elements of a new technology.

Maps are data

The mapping equivalent of our abridged Sherlock story is a dataset of eight geographic points. GeoJSON, a format for describing geographic data in JSON, is the perfect starting point for capturing this data:

[{
    "geometry": { "type": "Point", "coordinates": [-0.15591514, 51.51830379] },
    "properties": { "title": "Baker St." }
}, {
    "geometry": { "type": "Point", "coordinates": [-0.07571203, 51.51424049] },
    "properties": { "title": "Aldgate Station" }
}, {
    "geometry": { "type": "Point", "coordinates": [-0.08533793, 51.50438536] },
    "properties": { "title": "London Bridge Station" }
}, {
    "geometry": { "type": "Point", "coordinates": [0.05991101, 51.48752939] },
    "properties": { "title": "Woolwich Arsenal" }
}, {
    "geometry": { "type": "Point", "coordinates": [-0.18335806, 51.49439521] },
    "properties": { "title": "Gloucester Station" }
}, {
    "geometry": { "type": "Point", "coordinates": [-0.19684993, 51.5033856] },
    "properties": { "title": "Caulfield Gardens" }
}, {
    "geometry": { "type": "Point", "coordinates": [-0.10669358, 51.51433123] },
    "properties": { "title": "The Daily Telegraph" }
}, {
    "geometry": { "type": "Point", "coordinates": [-0.12416858, 51.50779757] },
    "properties": { "title": "Charing Cross Station" }
}]

Each object in our JSON array has a geometry—data that describe where this object is in space—and properties—freeform data of our own choosing to describe what this object is. Now that we have this data, we can create a very basic map.

[Figure: the basics of web mapping. View Demo 2.]

  • Note that the coordinates are a pair of longitude and latitude degrees: GeoJSON puts longitude first. In the year 2013, it is still not possible to find a consistent order for these values across mapping APIs. Some use lat,lon to meet our expectations from grade-school geography. Others use lon,lat to match x,y coordinate order: horizontal, then vertical.
  • We’re using mapbox.js as our core open source mapping library. Each map is best understood as the key parameters passed into mapbox.map():
    1. A DOM element container
    2. One or more Photoshop-like layers that position tiles or markers
    3. Event handlers that bind user input to actions, like dragging to panning
  • Our map has two layers. Our tile layer is made up of 256x256 square images generated from a custom map on MapBox. Our spots layer is made up of pin markers generated from the GeoJSON data above. (A minimal construction sketch follows this list.)
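To orient you, here is roughly how those pieces assemble, in the mapbox.js v0.x style the demos use; the tile layer id and starting coordinates below are hypothetical, and Demo 2 has the real wiring:

var map = mapbox.map('map');                           // the DOM element container
map.addLayer( mapbox.layer().id('examples.map-id') );  // tile layer (hypothetical id)

var spots = mapbox.markers.layer().features(geojson);  // pin markers from our data
map.addLayer(spots);

map.centerzoom({ lat: 51.51, lon: -0.12 }, 13);        // start over central London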

This is a good start for our code, but nowhere near our initial goal of using a map to tell our Sherlock Holmes story.

Beyond location

According to our first map, the eight items in our GeoJSON dataset are just places, not settings in a story full of intrigue and mystery. From a visual standpoint, pins anonymize our places and express them as nothing more than locations.

To overcome this, we can use illustrations for each location—some showing settings, others showing key plot elements. Now our audience can see right away that there is more to each location than its position in space. As a canvas for these, I’ve created another map with a custom style that blends seamlessly with the images.

[Figure: illustrations and a custom style help our map become part of the storytelling. View Demo 3, and then read the diff.]

  • The main change here is that we define a custom factory function for our markers layer. The job of the factory function is to take each GeoJSON object and convert it to a DOM element—an a, div, img, or whatever—that the layer will then position on the map. (See the sketch after this list.)
  • Here we generate divs and switch from using a title attribute in our GeoJSON to an id. This provides us with useful CSS classes for displaying illustrations with our custom markers.
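A sketch of such a factory function follows; the element and class names are assumptions of mine, and Demo 3’s diff has the real code:

// Convert each GeoJSON feature into the DOM element the layer will position.
spots.factory(function (f) {
  var el = document.createElement('div');
  // the feature's id doubles as a CSS hook for its illustration
  el.className = 'marker ' + f.properties.id;
  return el;
});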

Bringing it all together

Now it’s time to combine our story with our map. By using the scroll events from before, we can coordinate sections of the story with places on the map, crafting a unified experience.

[Figure: as the user reads each section, the map pans to a new location. View Demo 4, then read the diff.]

  • The bridge between the story and the map is a revamped setActive() function. Previously it only set an active class on a particular section based on scrolling position. Now it also finds the active marker, sets an active class, and eases the map to the marker’s location.
  • Map animation uses the easey library in the mapbox.js API, which implements animations and tweening between geographic locations. The API is dead simple—we pass it the lon,lat of the marker we want to animate to, and it handles the rest. (A sketch of the ease call follows this list.)
  • We disable all event handlers on our map by passing an empty array into mapbox.map(). Now the map can only be affected by the scrolling position. If users wanted to deviate from the storyline or explore London freeform, we could reintroduce event handlers—but in this case, less is more.
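Here is a rough sketch of that ease call; the argument values are assumptions of mine, and Demo 4’s diff has the real thing:

// Ease the map to the active marker's position.
function easeToMarker (feature) {
  var pos = feature.geometry.coordinates;    // GeoJSON order: [lon, lat]
  map.ease
    .location({ lat: pos[1], lon: pos[0] })  // easey takes named lat/lon keys
    .zoom( map.zoom() )                      // hold the current zoom level
    .optimal(0.9);                           // tween at a pleasant speed
}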

Displaying our fullscreen map together with text presents an interesting challenge: our map viewport should be offset to the right to account for our story on the left. The solution I’m using here is to expand our map viewport off canvas purely using CSS. You could use JavaScript, but as we’ll see later, a CSS-only approach gives us elegant ways to reapply and adjust this technique on mobile devices.

[Figure: using an off-canvas map width to offset the viewport center.]
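In CSS, the trick amounts to something like the following sketch; the exact percentage depends on the story column’s width, and the demo’s stylesheet has the real values:

/* With a 33.33% story column pinned to the left, widen the fullscreen map by
   a third so a third of it hangs off the right edge of the viewport. The
   center of the map element then aligns with the center of its visible part. */
#map {
  position: absolute;
  top: 0;
  bottom: 0;
  left: 0;
  width: 133.3333%;
}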

At this stage, our map and story complement each other nicely. Our map adds spatial context, visual intrigue, and an interesting temporal element as it eases between long and short distances.

Maps in responsive design

The tiled, continuous spatial plane represented by web maps is naturally well-suited to responsive design. Web maps handle different viewport sizes easily by showing a bit more or a bit less map. For our site, we adjust the layout of other elements slightly to fit smaller viewports.

[Figure: tweaking layout with web maps. View Demo 5, then read the diff.]

  • With less screen real estate, we hide non-active text sections and pin the active text to the top of the screen.
  • We use the bottom half of the screen for our map and use media queries to adjust the map’s center point to be three-fourths the height of the screen, using another version of our trick from Demo 4.

With a modest amount of planning and minimal adjustments, our Sherlock story is ready to be read on the go.

Solve your own case

If you’ve been following the code between these steps, you’ve probably noticed at least one or two things I haven’t covered, like the parameters of ease.optimal(), or how tooltips picked up on the title attribute of our GeoJSON data. The devil’s in the details, so head to this GitHub repository, where you will find the code and the design.

You should also check out:

  • The MapBox site, which includes an overview of tiling and basic web map concepts, and MapBox.js docs and code examples.
  • Leaflet, another powerful open source mapping library.
  • D3.js, a library for powering data-driven documents that has a broad range of applications, including mapping.

This example shows just one path to integrating web maps into your designs. Don’t stick to it. Break it apart. Make it your own. Do things that might be completely genius or utterly stupid. Even if they don’t work out, you’ll be taking ownership of maps as a designer—and owning something is the only way we’ll improve on it.


For ideal typography, web designers need to know as much as possible about each user’s reading environment. That may seem obvious, but the act of specifying web typography is currently like ordering slices of pizza without knowing how large the slices are or what toppings they are covered with.

If someone asked me how many slices of pizza I wanted for lunch, I would probably say it depends on how large the slices are. Then—even if they told me that each slice was one eighth of a whole pie, or that they themselves were ordering two slices, or even that the slices were coming from Joe’s Pizza—any answer I might give would still be based on relative knowledge and inexact assumptions.

Such is the current situation with the physical presentation of responsive typography on the web. The information at a designer’s disposal for responsive design is virtually nonexistent outside the realm of software. Very little knowledge about the physical presentation of content is available to inform the design. The media query features of today can only relay a very fragmented view of the content’s actual presentation, and related terms from CSS are confusing if not downright misleading.

The immeasurable pachyderm

Among all the physical qualities of web typography, the elephant in the room is the issue of size. I’m not talking about em or rem or “reference pixels” ¹ or even device pixels. I’m talking about real, actual, physical, bona fide, measurable size!

It’s ridiculous that we can send robots to Mars yet it’s still virtually impossible to render a glyph on a web page and say with confidence: “If you measure this glyph on your screen with a ruler, it will be exactly 10 millimeters wide.” Although actual physical size isn’t always the most important factor in web design, in some cases it is critical. For example, consider content for partially-sighted or low-vision readers: the ability to tweak designs according to physical sizes would enable designers to make conscious design decisions with much more sensitivity to how the type is actually being seen. And even where physical sizing is secondary to relative sizing, why shouldn’t we nevertheless be able to factor in physical size when establishing the relationships between different elements?

Physical considerations ≠ print design

I don’t believe web typography should be a screen-based imitation of print typography. One of the greatest benefits of web typography, and web design in general, is that it is flexible, adaptable, fluidly adjustable, without being locked into any one specific configuration. However(!), that doesn’t mean web designers should be forced to design without any means to address the issues of physical presentation. On the contrary, responsive design will not reach its full potential until it allows the ability to respond to the very important physical variables of digital media.

Please pardon the cliché, but when it comes to typography, on screens or otherwise, size matters. Physical size affects optical issues that change how the eye and brain process typographic images. Not surprisingly, typographers and typeface designers have been compensating for optical size-related issues as far back as Gutenberg.

You can’t expect a paragraph of type with the same relative line-height, column width, letter-spacing, and glyph proportions to function just as well on two different displays that have the same number of pixels but completely different physical sizes. It’s great that designers can adjust proportions between typographic elements if the canvas varies in relative size, but any such compensation is still based on guesswork and assumptions about the physical size of that canvas. When people disagree about the size or spacing of type on a website, there’s a very good chance that their opinions are based on completely different physical manifestations of the same content, even if their software and settings are identical.

Resolute resolution, absolute absolution

One of the most crucial factors in the size equation is resolution. And when I say resolution, I don’t just mean “how many pixels is this?”, or even “how many device pixels is this?”, but also “how large are these pixels?”

This is very different from the W3C’s “resolution” media feature in the current draft of the Media Queries Level 4 spec. You will note that the spec refers to resolution in terms of “CSS ‘inches’”—the quotes around “inches” are theirs, implying that they are not actually inches at all.
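For reference, here is a sketch using units from the current spec. Both express ratios against CSS pixels and CSS “inches,” not anything you could verify with a ruler:

/* Matches high-density displays: two device pixels per CSS pixel */
@media (min-resolution: 2dppx) { … }

/* The same threshold in CSS-“inch” terms: 1dppx = 96dpi, so 2dppx = 192dpi */
@media (min-resolution: 192dpi) { … }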

To see why physical resolution matters, imagine you are rendering text on a digital billboard with a physical resolution of one pixel per inch (1 PPI). Now imagine you are rendering the same text on a 200 PPI mobile device display. Even if you knew the actual number of device pixels that would be used to render your type (which is itself difficult to determine with confidence these days), you would want to treat the two compositions very differently, in terms of both the typeface and the typographic layout. The billboard type would likely require less space between letters. The letterforms themselves would benefit from narrower proportions, and could endure a higher ratio between thick and thin strokes. The type might even require different colors to optimize contrast at that size. These are all basics of typography and typeface design.

Unfortunately, in the current landscape of media query features, there is no way to know the difference between 16 device pixels on a crude LED billboard and 16 device pixels on a high-density mobile display. Heck, there isn’t even a reliable way to know if your type is 16 device pixels at all, regardless of how large the pixels are!

Pixels still rule, for better or worse

I know what some em-based enthusiasts might be thinking: “But you shouldn’t be specifying type sizes in pixel units to start with! All type sizes should be spec’d abstractly in relation to each other or a base font size!” However, in the current world of web typography, no matter what unit of measure you use to spec your onscreen type sizes—em, rem, px, pt, in, %, vh, or whatever else—at the end of the line, your specification is being mapped to pixels. Even if you leave the base size of your document at the defaults and specify everything else with em, there is still a base size to which all other sizes ultimately refer, and it is defined in pixels.
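To see how the mapping plays out, consider a minimal sketch, assuming the common browser default base size of 16px:

html { font-size: 100%; }    /* 100% of the browser default, typically 16px */
body { font-size: 0.875em; } /* 0.875 × 16px = 14px */
h1   { font-size: 2em; }     /* 2 × 14px = 28px, relative to the body size */

However abstract the units, every em in the chain ultimately resolves to a pixel value.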

This is because, currently, the only unit of measure that can be rendered onscreen by any operating system with absolute confidence is the lowly pixel. Until we have media query features that allow us to spec for situations like:

@media (physical-resolution: 1device-pixels-per-physical-inch) { … }

or:

@media (device-width: 10physical-centimeters) { … }

… any compensation for physical size is based entirely on rough guesses about the devices our content will be presented on.²

It’s absurd that the official CSS spec allows so-called “absolute” units of measure like inches, points, and centimeters to be mapped to anything but actual physical units. Ironically, previous versions of CSS treated these units as you would hope and expect, but a change was made “because too much existing content relies on the assumption of 96dpi, and breaking that assumption breaks the content.” Call me idealistic if you will, but I am of the mind that a spec should be written based on what is best for the future, not to cater to things that were made in the past.³
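The numbers below follow from the current CSS definition, which fixes 1in at 96px regardless of the actual display:

.ruler  { width: 1in; }  /* always 96 CSS pixels, by definition */
.point  { width: 1pt; }  /* 1/72 of a CSS “inch” = 96px ÷ 72 ≈ 1.33px */
.metric { width: 1cm; }  /* 96px ÷ 2.54 ≈ 37.8px */

None of these is guaranteed to measure a physical inch, point, or centimeter on any given display.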

Getting physical

Any ability to leverage physical variables for web design will require a joint effort by several groups:

  • Device manufacturers will need to provide APIs that can inform the operating system—and, by extension, web browsers and web designers—of the actual physical properties of the hardware being used to present content to the user. Some device APIs are already beginning to show up in the world, but there is a long way to go before functionality and adoption are anywhere near dependable.
  • Standards organizations—the W3C in particular—will need to establish specifications for how to reference physical properties when formatting content. They will need to update (or at least augment) their existing “absolute” units of measure to be more meaningful, so they are more than just multipliers of sizeless pixels.
  • Software manufacturers will need to implement support for new specs relating to physical media features. Browsers are the most obvious software that will need to implement support, but the biggest challenge might be in getting native support for device APIs in operating system software.
  • Type manufacturers and type services will need to provide more diverse ranges of typefaces that have been optimized for a variety of physical properties. Ideally, many of the needed variations could even be provided on the fly using a broader approach to the ideas of font hinting.
  • Web designers and developers, last but not least, will need to build their sites to respond to physical properties, leveraging all variables to the benefit of their users.

Size and resolution are just the tip of the iceberg of physical variables that could be considered when improving web typography. Viewing distance, ambient light, display luminance, contrast ratio, black levels, and more could all be factored in to improve the reading experience. Even the ability to know some variables within the realm of software, like the user’s rendering engine or the presence of subpixel positioning, would go a long way toward helping web typographers design a better reading experience.

In the meantime, I’d love to see more of the players mentioned above start to at least experiment with what’s possible when physical features can be specified, detected, and factored into responsive designs in structured, meaningful, and predictable ways. Until we can do that, we’re all just ordering pizza without knowing exactly what will end up on our plate.


The W3C, the group that oversees the development of HTML and other web standards, is moving beyond dry, boring specifications with a new venture into developer documentation. The W3C has just launched an alpha preview of Web Platform Docs, a community-driven site the W3C is hoping will become the go-to source for learning how to build the web.

In other words, the W3C is competing with Webmonkey.

We’re okay with that, because there’s no such thing as too much high-quality, accurate info on the latest in HTML5, CSS 3, and other W3C standards. And quite frankly, HTML and CSS have grown considerably since the days when Webmonkey’s cheat sheets could tell you everything you needed to know.

Now there’s a plethora of resources for learning how to build the web — the Mozilla Developer Network, the Opera Developer Network and Microsoft’s developer site, to say nothing of the thousands of tutorials on developer blogs. But while there’s plenty out there to teach you what you need to know, finding exactly what you need to know can be challenging. You’re a web developer; you should spend your time building a better web, not searching Google for accurate info on web standards.

The goal of the W3C’s Web Platform Docs is twofold: get tons of great documentation all under one roof, and then — the most challenging part — make sure that it stays up to date. Solving the second problem is no small task. The web is currently littered with great tutorials on CSS Flexbox that are, unfortunately, all wrong now that the Flexbox spec has changed. The same is true of Web Sockets tutorials, IndexedDB tutorials, and any other tutorial on a spec that has changed or might change in the future.

These days, keeping up with the rapidly changing world of web standards is a full-time job, and who better to tell you what’s going on, how you can use new tools, and when browsers will support them than the people actually writing the standards and building the browsers?

The W3C has managed to bring together some of the biggest names on the web to help create Web Platform Docs. Representatives from Opera, Adobe, Facebook, Google, Microsoft, Mozilla and Nokia will all be lending their expertise to the new site.

While the list of participating companies is impressive, Web Platform Docs is also a wiki that anyone, not just W3C reps, can edit. As the W3C’s Douglas Schepers tells Webmonkey, “anyone can contribute and everyone is on equal footing.”

If your head is bursting with tutorials, code snippets, examples or solutions to common development problems, head on over to the site and sign up so you can contribute. Bear in mind that this is an alpha release. While the site looks great and has the basic features of a wiki up and running, many of the features Schepers mentions in the intro blog post aren’t available just yet. “In the spirit of ‘release early, release often’, we decided to announce the site at the earliest possible point and improve it in public,” writes Schepers.

Right now the content of the site is primarily documentation, but the plan is to include tips and best practices along with up-to-date information on standardization progress and browser support for individual features. Other cool planned features include a way to share code snippets and an API for some aspects of the site.

For an overview of the site and to learn a bit more about where the W3C plans to go with Web Platform Docs, check out the video below.
