

Simple video chat in your browser.
Image: Screenshot/Webmonkey.

WebRTC is changing the web, making possible things which just a few short months ago would have been not just impossible, but nearly unthinkable. Whether it’s a web-based video chat that requires nothing more than visiting a URL, or sharing files with your social networks, WebRTC is quickly expanding the horizons of what web apps can do.

WebRTC is a proposed standard — currently being refined by the W3C — with the goal of providing a web-based set of tools that any device can use to share audio, video and data in real time. It’s still in the early stages, but WebRTC has the potential to supplant Skype, Flash and many device-native apps with web-based alternatives that work on any device.

Cool as WebRTC is, it isn’t always the easiest to work with, which is why the Mozilla Hacks blog has partnered with developers at &yet to create a demo that shows off a number of tools designed to simplify working with WebRTC. The demo is a working group video chat app. All you need to do is point your WebRTC-enabled browser to the site, give your chat room a name and you can video chat with up to 6 people — no logins, no joining a new service, just video chat in your browser.

Currently only two web browsers support the WebRTC components necessary to run the demo: Chrome and Firefox’s Nightly channel (and you’ll need to head to about:config in Firefox to enable the media.peerconnection.enabled preference). As such, while the demo is very cool, WebRTC is in its infancy and working with it is sometimes frustrating — that’s where the libraries behind the demo come in.

As &yet’s Henrik Joreteg writes on the Hacks blog, “the purpose of [the app] is twofold. First, it’s a useful communication tool…. Second, it’s a demo of the SimpleWebRTC.js library and the little signaling server that runs it, signalmaster.”

Both tools, which act as wrappers for parts of WebRTC, are designed to simplify the process of writing WebRTC apps — think jQuery for WebRTC. Both libraries are open source (MIT license) and available on GitHub for tinkering and improving.

If you’d like to learn more about SimpleWebRTC and signalmaster and see some example code, head on over to the Mozilla Hacks blog for the full details.


How to Animate Transitions Between Multiple Charts

Sometimes one chart just isn’t good enough. Sometimes you need more.

Perhaps the story you are telling with your visualization needs to be told from different perspectives, and different charts draw out these different angles nicely. Maybe you need to support different types of users, and different plots appeal to these separate sets. Or maybe you just want to add a bit of flair to your visualization with a chart toggle.

In any case, done right, transitioning between multiple chart types in the same visualization can add more insights and depth to the experience. Smooth, animated transitions make it easier for the user to follow what is changing and how the data presented in different formats relates to one another.

This tutorial will use D3.js and its built-in transitioning capabilities to nicely contort our data into a variety of graph types.

If you're a FlowingData member, you might be familiar with creating these chart types in R.

We will focus on visualizing time series data and will allow for transitioning between 3 types of charts: Area Chart, Stacked Area Chart, and Streamgraph. The data being visualized will be New York City 311 request calls around the time Hurricane Sandy hit the area.

Check out the demo to see what we will be creating, then download the source code and follow along!

The Setup

Before we dive in, let’s take a look at the ingredients that will go into making this visualization.

D3 v3

Recently, the third major version of D3.js was released: d3.v3.js. Updates include tweaks and improvements to how transitions work, making them easier overall to use.

If you have used D3.js previously, one significant change you will want to know about is that the signature of the callback function for loading data has changed. Specifically, when you load data, you now get passed any errors that have occurred during the data request first, and then the actual data array. So this:

d3.json('data', (data) -> console.log(data.length))

Becomes this:

d3.json('data', (error, data) -> console.log(data.length))

Not a huge change, as the old API is still supported (though deprecated), but the explicit error handling gives you enough advantages that you should start using the new signature.
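The new signature follows the Node-style "error-first" callback convention. Here is a plain JavaScript sketch of the idea, with a stand-in loader function (the names here are hypothetical, just for illustration):

```javascript
// Node-style "error-first" callback convention, as adopted by d3.v3:
// the first argument is an error (or null), the second is the data.
function loadData(url, callback) {
  // a stand-in for a real request; always succeeds here
  callback(null, [1, 2, 3]);
}

var result;
loadData("data/requests.json", function (error, data) {
  // check the error before touching the data
  result = error ? -1 : data.length;
});
console.log(result); // 3
```

Checking the error argument before using the data is the whole point of the change: failures no longer pass silently.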

For more on D3 v3, check out the 3.0 upgrade guide on the D3.js wiki.

A Big Cup of CoffeeScript

As in my previous tutorial on interactive networks, I’ll be writing the code in CoffeeScript. And again, please feel free to compile to JavaScript if that will make you happy. But before you do, give CoffeeScript 5 minutes of your time – who knows, you just might fall in love. I recommend going back to that tutorial if you aren’t familiar with CoffeeScript to get some notes on its syntax. But just in case you don’t want to click that link, here’s the 3-second version:

functions look like this:

functionName = (input1, input2) ->
  console.log("hey! I'm a function")

We see that white space matters – the indentation indicates the lines of code inside a function, loop, or conditional statement. Also, semicolons are left off, and parentheses are sometimes optional, though I usually leave them in.
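For comparison, the CoffeeScript function above compiles to roughly this JavaScript (a sketch; the compiler's actual output differs slightly in return handling):

```javascript
// roughly what the CoffeeScript compiler produces for the function above
var functionName = function (input1, input2) {
  return console.log("hey! I'm a function");
};

functionName(); // prints: hey! I'm a function
```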

A Little Python Web Server

The README in the source code also has instructions for using Ruby.

Because of how D3.js loads data, we need to run it from a web server, even when developing on our own local machine. There are lots of web servers out there, but probably the easiest to use for our development purposes is Python’s built-in simple server.

From the Terminal, first navigate to the source code directory for this tutorial. Then check which version of Python you have installed using:

python --version

If it is Python 3.x then use this line to start the server:

python -m http.server

If it is Python 2.x then use

python -m SimpleHTTPServer

In either case, you should have a basic web server that can serve up any file from the directory you are in to your web browser, just by navigating to the address the server prints (http://localhost:8000 by default).

The simple Python web server running on my machine.

But What About Windows?

On Linux or Mac systems, you will already have Python installed. However, it takes some moxie to get it working on a Windows machine. I would suggest looking over this blog post to make sure you don't overlook something.

If you aren't in the mood for some Python wrangling, you might take the advice of this getting started with D3 guide and try out EasyPHP. With this installed and running, you can host your D3 projects out of the www/ directory in the root of the installation location.

A Dash of Bootstrap

While we will use D3.js for the actual visualization implementation, we will take advantage of Twitter’s Bootstrap framework to make our vis just a bit more attractive.

Mostly, it will be used to make a nice toggle button for switching between charts. This might not be the most efficient method for getting a decent looking toggle on a site, but it is very easy to implement and will give you a chance to check out Bootstrap, if you haven’t already. It is quite lovely.


D3 Transitions

Before we start using them, let’s talk a bit about what D3 transitions are and how they work.

Think of a transition as an animation. The starting point of this animation is the current state of whatever you are transitioning: its position, color, etc. When creating a new transition, we tell it what the elements should end up looking like, and D3 fills in the gap from the current state to the final one.
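Conceptually, "filling in the gap" is interpolation: at each tick of the animation, D3 computes an in-between value for every animated attribute. A toy sketch of that idea (not D3's actual implementation):

```javascript
// Linear interpolation between a start and end value;
// t runs from 0 (start of the transition) to 1 (end).
function interpolate(start, end, t) {
  return start + (end - start) * t;
}

// e.g. animating a circle's cx attribute from 40 to 500 in 5 frames
var steps = [];
for (var i = 0; i <= 4; i++) {
  steps.push(interpolate(40, 500, i / 4));
}
console.log(steps); // [40, 155, 270, 385, 500]
```

D3 generalizes this beyond numbers: it can interpolate colors, transforms, and even entire path strings, which is what makes the chart morphing later in this tutorial possible.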

D3 is built around working with selections. Selections are arrays of elements that you work with as a group. For example, this code selects all the circle elements in the SVG and colors them red:

svg.selectAll("circle")
  .attr("fill", "red")

It might then come as little surprise that transitions in D3 are a special kind of selection, meaning you can affect a group of multiple elements on a page concisely within a single transition. This is great because if you are already familiar with selections, then you already know how to create and work with transitions.

There are a few more differences between selections and transitions – mainly due to the fact that some element attributes cannot be animated.

The main difference between regular selections and transitions is that selections modify the appearance of the elements they contain immediately. As soon as the .attr("fill", "red") code is executed, those circles become red. Transitions, on the other hand, smoothly modify the appearance over time.

Here is an example of a transition that changes the position and color of the circles in a SVG:

# First we set an initial position and color for these circles.
# This is NOT a transition
svg.selectAll("circle")
  .attr("fill", "red")
  .attr("cx", 40)
  .attr("cy", height / 2)

# Here is the transition that changes the circles'
# position and color.
svg.selectAll("circle").transition()
  .delay(500)
  .duration(1000)
  .attr("fill", "green")
  .attr("cx", 500)
  .attr("cy", (d, i) -> 100 * (i + 1))

I’ve coded up a live version of this demo (in JavaScript), to get a better feel for what is going on.

The functions called on the transition can be separated into 2 groups: those modifying the transition itself, and those indicating what the appearance of the selected elements should be when the transition completes.

The delay() and duration() functions are in the former category. They indicate how long to wait to start the transition, and how long the transition will take.

The attr() calls on the transition are in the latter category. They indicate that once the animation is done, the circles should be green, and they should be in new positions. As you can see from the live example, D3 does the hard work of interpolating between starting and ending appearance in the duration you’ve provided.

There are lots of interesting details you can learn about transitions. For a more thorough introduction, I’d recommend Jerome Cukier’s introduction to transitions.

Custom interpolation, start and end triggers, transition life cycles, and more await you in this great guide!

To really rip off the covers, check out Mike Bostock’s Transitions Guide, which exposes more of the nitty-gritty details of transitions and is required reading once you start needing their more advanced capabilities.

For now, let’s stop with the prep work and get going on more of the specifics of how this visualization works.

A Peek at the Data

When I discovered the NYC OpenData site provided access to raw 311 service request data, I had visions of recreating the classic 311 streamgraph from Wired Magazine originally created by Pitch Interactive.

Alas, my dreams were dashed upon the realization that the times reported for all the requests were set to midnight! I assume some sort of bug in the export process is currently preventing the time from being encoded correctly.

Not wanting to give up on this interesting dataset, I decided to switch gears and instead look at a daily aggregation of requests during an interesting period of recent New York history: Hurricane Sandy. This tells, I think, an interesting, if not surprising, story. Priorities change when a natural disaster strikes.

Here is what the data looks like:

[
  {
    "key": "Heating",
    "values": [
      { "date": "10/14/12", "count": 428 },
      { "date": "10/15/12", "count": 298 },
      // ...
    ]
  },
  {
    "key": "Damaged tree",
    "values": [
      // ...
    ]
  },
  // ...
]

In words, our array of data is organized by 311 request type. Each request object has a key string and then an array called values. Values has an entry for each day in the visualization. Each day object has a string representation of the date as well as the number of this type of request for that day, stored in count.
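To make that concrete, here is a small JavaScript sketch working with data of this shape, computing the largest daily count for one request type (the tutorial later does the same thing with d3.max):

```javascript
// a tiny slice of data in the same shape as requests.json
var data = [
  {
    key: "Heating",
    values: [
      { date: "10/14/12", count: 428 },
      { date: "10/15/12", count: 298 }
    ]
  }
];

// the largest daily count for the first request type
var maxCount = Math.max.apply(null,
  data[0].values.map(function (d) { return d.count; }));
console.log(maxCount); // 428
```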

You could use d3.nest to convert a simple table into a similar array of objects, but that is a tutorial for another day.

This format was chosen to match up with how the visualization will be built. As we will see, the root-level request objects will be represented as SVG groups. Inside each group, the values array will be converted into line and area paths.

A Static Starting Point

To create movement, one must begin with stillness. How’s that for sage advice? Not great? Well, it will work well enough for us in this tutorial.

Transitions don’t deal with the creation of new elements. An element needs to exist already in order to be animated. So to begin our visualization, we will create a starting point from which the visualization can transition.

Layouts and Generators

First let’s set up the generators and layout we will use to create the visualization. We will be using an area generator to create the areas of each chart, a line generator for the detail on the regular area chart, and the stack layout for the streamgraph and stacked area chart, as well as some scales for x, y, and color.

Here is what the initialization code looks like:

x = d3.time.scale()
  .range([0, width])

y = d3.scale.linear()
  .range([height, 0])

color = d3.scale.category10()

# area generator to create the
# polygons that make up the
# charts
area = d3.svg.area()
  .x((d) -> x(d.date))

# line generator to be used
# for the Area Chart edges
line = d3.svg.line()
  .x((d) -> x(d.date))

# stack layout for streamgraph
# and stacked area chart
stack = d3.layout.stack()
  .values((d) -> d.values)
  .x((d) -> d.date)
  .y((d) -> d.count)
  .out((d, y0, y) -> d.count0 = y0)

The stack layout could use a bit more explanation.

Unlike what its name might imply, this layout doesn’t actually move any elements itself – that would be very un-D3 like. Instead, its main purpose in this visualization is to calculate the location of the baseline – which is to say the bottom – of the area paths. It computes the baseline for all the elements in the values array based on the stack’s offset() algorithm.

The out() function allows us to see this calculated baseline value and capture it in an attribute of our value objects. In the code above, we assign count0 to this baseline value. After the stack is executed on a set of data, we will be able to use count0 along with the area generator to create areas in the right location.
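To see what the stack layout is computing, here is a toy JavaScript version of its simplest offset ("zero", used later for the stacked area chart): each series' baseline per day is the running total of the series stacked below it.

```javascript
// Two series of daily counts, in the same shape as our data's values arrays.
var series = [
  { values: [{ count: 3 }, { count: 5 }] },
  { values: [{ count: 2 }, { count: 1 }] }
];

// A toy "zero" offset: the baseline (count0) of series i on day j is the
// sum of count0 + count of the series below it on that day.
series.forEach(function (s, i) {
  s.values.forEach(function (d, j) {
    d.count0 = i === 0 ? 0 :
      series[i - 1].values[j].count0 + series[i - 1].values[j].count;
  });
});

console.log(series[1].values[0].count0); // 3
```

The real layout does more (the "wiggle" offset solves for a baseline that minimizes visual wiggle, which is what gives a streamgraph its shape), but the output is the same kind of thing: a count0 per value object, ready for the area generator.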

Loading the Data

Ok, we need to load the JSON file that contains all our data.

This is done in D3 by using d3.json:

$ ->
  d3.json("data/requests.json", display)

Load the requests.json file, then call the display function with the results.

Here is display:

display = (error, rawData) ->
  # a quick way to manually select which calls to display. 
  # feel free to pick other keys and explore the less frequent call types.
  filterer = {"Heating": 1, "Damaged tree": 1, "Noise": 1, "Traffic signal condition": 1, "General construction":1, "Street light condition":1}

  data = rawData.filter((d) -> filterer[d.key] == 1)

  # a parser to convert our date string into a JS time object.
  parseTime = d3.time.format.utc("%x").parse

  # go through each data entry and set its
  # date and count property
  data.forEach (s) ->
    s.values.forEach (d) ->
      d.date = parseTime(d.date)
      d.count = parseFloat(d.count)

    # precompute the largest count value for each request type
    s.maxCount = d3.max(s.values, (d) -> d.count)

  data.sort((a,b) -> b.maxCount - a.maxCount)

  start()

The requests.json file has data for every request type, which would overload our visualization. Here we perform a basic filter to cherry pick some interesting types.

d3.time.format and the other time formatting capabilities of D3.js are great for converting strings into JavaScript Date objects. Here, our parser is expecting a date string in the %m/%d/%y format (which is what %x is shorthand for). We use this formatter when we iterate through the raw data to convert each string into a date and save it back in the object.
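For illustration, here is a rough plain-JavaScript equivalent of what the %m/%d/%y parse does (d3.time.format handles this for us, including two-digit-year rules; the "2000 +" below is an assumption that fits this dataset):

```javascript
// A hand-rolled sketch of parsing a "%m/%d/%y" string like "10/14/12".
function parseDate(str) {
  var parts = str.split("/");  // ["10", "14", "12"]
  var month = +parts[0] - 1;   // JS Date months are zero-indexed
  var day = +parts[1];
  var year = 2000 + +parts[2]; // two-digit year; this dataset is all 20xx
  return new Date(year, month, day);
}

var d = parseDate("10/14/12");
console.log(d.getFullYear(), d.getMonth() + 1, d.getDate()); // 2012 10 14
```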

Then we call start() to get the display ball rolling.

The Start of the Visualization

Finally, we are ready to create the elements needed to get our charts going. Here is the start() function which sets up these elements:

start = () ->
  # x domain setup
  minDate = d3.min(data, (d) -> d.values[0].date)
  maxDate = d3.max(data, (d) -> d.values[d.values.length - 1].date)
  x.domain([minDate, maxDate])

  # I want the starting chart to emanate from the
  # middle of the display. 
  area.y0(height / 2)
    .y1(height / 2)

  # now we bind our data to create
  # a new group for each request type
  g = svg.selectAll(".request")
    .data(data)

  requests = g.enter().append("g")
    .attr("class", "request")

  # add some paths that will
  # be used to display the lines and
  # areas that make up the charts
  requests.append("path")
    .attr("class", "area")
    .style("fill", (d) -> color(d.key))
    .attr("d", (d) -> area(d.values))

  requests.append("path")
    .attr("class", "line")
    .style("stroke-opacity", 1e-6)

  # default to streamgraph display
  streamgraph()

We still haven’t drawn anything, but we are getting close.

The data array is bound to the empty .request selection. Then, as mentioned in the data section above, a g element is created for each request type.

Finally, two path elements are appended to the group. One of which is for drawing the areas of the three charts. The other, with the class .line, will be used to draw lines in the regular area chart.

As a little detail, I’ve started the .area paths in the center of the display, so the first transition will grow out from the center. Without this, the first transition would just cause the areas to appear immediately.

A Movement in Three Parts

Now that we have the basic visualization framework, we can focus on developing the code for each chart.

We want the user to be able to switch back and forth between all the graph styles, in a non-linear manner. To accomplish this, the function implementing each chart needs to do 3 things:

  1. Recompute values that might get changed by switching to the other charts.
  2. Reset shared layouts and scales to handle the selected chart.
  3. Create a new transition on the elements making up each chart.

With this consistent structure in mind, let’s start coding up some charts.


Streamgraph

The initial streamgraph display.

We will start with the streamgraph – because of my original dreams to emulate Wired, and because it is pretty easy to create with the stack layout.

streamgraph = () ->
  # 'wiggle' is the streamgraph offset
  stack.offset("wiggle")

  # recompute the count0 baselines
  stack(data)

  # reset our y domain and range so that it
  # accommodates the highest value + offset
  y.domain([0, d3.max(data[0].values, (d) -> d.count0 + d.count)])
    .range([height, 0])

  # setup the area generator to utilize
  # the count0 values created from the layout
  area.y0((d) -> y(d.count0))
    .y1((d) -> y(d.count0 + d.count))

  # here we create the transition
  t = svg.selectAll(".request").transition()

  # D3 will take care of the details of transitioning"path.area")
    .style("fill-opacity", 1.0)
    .attr("d", (d) -> area(d.values))
It’s all a bit anticlimactic, right? The shape of the path is defined by the attribute d; see the MDN tutorial if you aren’t familiar with SVG paths.

Look at that. We didn’t even have to get our hands dirty with creating SVG paths. The area generator did it all for us. Nor did we have to deal with any of the animation from current state to final streamgraph. The transition helped us out there. So what did we do?

The initial call to stack(data) causes the stack layout to run on our data. It’s set up to use wiggle as the offset, which is the offset to use for streamgraphs.

The y scale needs to be updated to ensure the tallest ‘stream’ is accounted for in its calculation.

Again, check out that Transitions Guide for more clarity on how this works.

The last section of the streamgraph function is the transition. We create a new transition selection on the .request groups. Then we select the .area paths inside each group and set the path data and opacity they should end up with using the attr() and style() calls.

D3 will interpolate the path’s values smoothly over the duration of the transition to end up at a nice looking streamgraph for our data. The great thing is that this same code will work for transitioning from the initial blank display as well as from the other chart types!

Stacked Area Chart

The stacked area chart provides a new view with little code.

I’m not going to go over the code for the stacked area chart – as it is nearly identical to the streamgraph.

The only real difference is that the offset used for the stack layout calculations is switched from wiggle to zero. This modifies the count0 values when the stack is executed on the data, which then adjusts the area paths to be stacked instead of streamed.

Area Chart

With the overlapping area chart, we reduce opacity to prevent tall areas from obscuring ‘short’ ones.

Our last chart is a basic overlapping area chart. This one is a little different, as we won’t need to use the stack layout for area positioning. Also, we will finally get to use that .line path we created during the setup.

Here is the relevant code for this chart:

areas = () ->
  g = svg.selectAll(".request")

  # as there is no stacking in this chart, the maximum
  # value of the input domain is simply the maximum count value,
  # which we precomputed in the display function
  y.domain([0, d3.max(data, (d) -> d.maxCount)])
    .range([height, 0])

  # the baseline of this chart will always
  # be at the bottom of the display, so we
  # can set y0 to a constant.
  area.y0(height)
    .y1((d) -> y(d.count))

  line.y((d) -> y(d.count))

  t = g.transition()

  # partially transparent areas"path.area")
    .style("fill-opacity", 0.5)
    .attr("d", (d) -> area(d.values))

  # show the line"path.line")
    .style("stroke-opacity", 1)
    .attr("d", (d) -> line(d.values))

The main difference between this chart and the previous two is that we are not using the count0 values in any of the area layouts. Instead, the bottom line of the areas is set to the height of the visualization, so it will always stay at the bottom of the display.

The .line is adjusted in the other charts too (just not shown in these snippets); it is simply always set to be invisible in their transitions.

In the transition, we set the opacity of the area paths to 0.5 so that all the areas are still visible. Then we do another selection to set the .line path so that it appears as the top outline of our areas.

Switching Back and Forth

As each of these charts is contained in its own function, transitioning between charts becomes as easy as just executing the right function.

Here is the code that does just that when the toggle button is pushed:

transitionTo = (name) ->
  if name == "stream"
    streamgraph()
  if name == "stack"
    stackedAreas() # the stacked area chart function (not shown above)
  if name == "area"
    areas()

Each of these functions creates and starts a new transition, meaning switching to a new chart will halt any transition currently running, and then immediately start the new transition from the current element locations.

The Little Details

There are some finishing touches that I’ve made to the visualization that I won’t go into too much depth on. D3’s axis component was used to create the background lines marking every other day.

Shameless plug: check out my tutorial on small multiples if you want to take a deeper look into the implementation of this great piece.

A little legend, inspired by the legend in the Manifest Destiny visualization, was also added. It is also an SVG element, and a mouseover event causes a transition that shifts the key into view. The details are in the code.

Finally, like I mentioned above, the toggle button to switch between charts was created using Bootstrap. Check out the button documentation for the details.

Wrapping Up

Well, hopefully now you have a better grasp on using transitions to switch between different displays for your data. We can really see the power of D3 in how little code it takes to create these different charts and interactively move between them.

Thanks again to Mike Bostock, the creator of D3. His presentation on flexible transitions served as the main inspiration for this tutorial.

Now get out there and start transitioning! Let me know when you create your own face melting (and functional) animations.


Image: Webmonkey

HTML5′s native audio and video tools promise to eventually make it possible to create sophisticated audio and video editing apps that run in the browser. Unfortunately much of that promise has thus far been marred by a battle over audio and video codecs. Right now what works in one browser on one operating system will not necessarily work on another.

Until the codec battle plays itself out, developers looking to build native HTML audio apps are in a bit of a bind. One way around the problem is to bypass the browser and provide your own decoder.

That’s exactly what the developers at Labs have been hard at work doing. The latest impressive release is FLAC.js, a FLAC audio decoder written in pure JavaScript. FLAC.js joins the group’s earlier efforts, which include decoders for MP3, AAC and ALAC.

Used in conjunction with the nascent Web Audio API, the new FLAC decoder means you could serve up high-quality, lossless audio to browsers that support HTML5 audio. But beyond just playback the Web Audio API opens the door to a whole new range of audio applications in the browser — think GarageBand on the web or DJ applications.

To that end, Labs has been working on a framework it calls Aurora.js (CoffeeScript) to help make it easier to build audio applications for the web.

If you’d like to experiment with Aurora.js or check out the new FLAC decoder, head on over to the group’s GitHub account, where you’ll find all the code available under an MIT license.


Announcing CreateJS

We’ve been working really hard on a lot of great stuff over the last couple months, and I’m thrilled finally to be able to share it with the world.


We’re going to be releasing EaselJS and a number of companion libraries under the new umbrella name “CreateJS”. CreateJS will be a suite of modular libraries and tools which work together to enable rich interactive content on open web technologies (aka HTML5). These libraries will be designed so that they can work completely independently, or you can mix and match as suits your needs. The initial offerings will be: EaselJS, TweenJS, SoundJS, and PreloadJS.

Along with the new name, we’ll also be launching a new site, which will consolidate demos, docs, tutorials, community, and showcases for all of the libraries and tools. If you have a project or tutorial you’d like to see featured, tweet it to us: @createjs.


EaselJS

EaselJS provides a display list and interactive model for working with rich graphics on top of the HTML5 canvas (and beyond). It provides an API that is familiar to Flash developers, but embraces JavaScript sensibilities.

We’re planning a minor v0.4.1 release soon, which includes bug fixes and some minor feature additions. Following that, work will commence on v0.5, which will focus on some larger features, API clean up and consistency, and improved documentation. If you have features you’d like to see in v0.5, add them to the issue list, or tweet them to @createjs, and we’ll see what we can do.

Along with the CreateJS site launch, we will be releasing much improved examples, and links to resources and tutorials developed by the community. Again, let us know if you’ve written a tutorial, or have something cool you’ve built with EaselJS you’d like us to showcase.


TweenJS

TweenJS is a tweening and animation library that works with EaselJS or independently. It offers a deceptively simple interface, and a huge amount of power with support for delays, easing, callbacks, non-numeric properties, sequencing, and plugins.

TweenJS v0.2 will be tagged soon. It will incorporate some fixes and tweaks, along with a full plugin model. After v0.2 our next focus will be on performance and providing better demos and documentation in preparation for the CreateJS launch.


SoundJS

Audio is currently a mess in HTML5, but SoundJS works to abstract away the problems and makes adding sound to your games or rich experiences much easier.

We have a huge v0.2 release in testing right now. It is a ground up rewrite that incorporates a target plugin model that allows you to prioritize what APIs you’d like to use to play audio. For example, you could choose to prioritize WebAudio, then audio tags, then Flash audio. You can query for capabilities (depending on the plugin that is used), and it offers seamless progressive enhancement (for example, panning will work in WebAudio, but will be quietly ignored in HTML audio). Following v0.2 our focus will move to fixing bugs, and delivering plugins for mobile and application platforms like PhoneGap and Win8 Metro for a v0.2.1 release.


PreloadJS

The newest addition to the suite, PreloadJS will make it easy to preload your assets: images, sounds, JS, data, or others. It will use XHR2 to provide real progress information when available, or fall back to tag loading and eased progress when it isn’t. It allows multiple queues, multiple connections, pausing queues, and a lot more. We’re hoping to get a v0.1 build out in the next couple weeks for people to start playing with, and then will focus on fixing bugs, improving documentation, and just generally maturing things for v0.1.1.


Zoë

Zoë is an open source AIR application that converts SWF animations to sprite sheets. It supports some advanced features, like configurable frame reuse and multi-image sheets (important for mobile).

For Zoë v0.2 we’re planning to add support for reading the symbols in a SWF, and letting you select multiple symbols to include in your exported sprite sheet. It’s also worth mentioning here that Flash Pro CS6 will include direct support for exporting sprite sheets for EaselJS, offering a more integrated workflow than Zoë can provide.

Toolkit for CreateJS

We’ve partnered with Adobe to build a fantastic tool for leveraging all of the existing Flash Pro skill that’s out there to build amazing HTML5 experiences. The Toolkit for CreateJS is an extension for Flash Pro that allows you to publish content (including symbols, vectors, animations, bitmaps, sound, and text) for CreateJS & HTML5 as a library of scriptable, instantiable objects.

We’ve worked really hard to develop a workflow that makes sense, and to generate code that is completely human readable, and very small (in some cases the output is smaller than SWF when zlib compressed). You can even write JS code on the Flash timeline, and it will be injected into your published tweens.

Exciting times! If you’d like to stay on top of CreateJS updates, please follow @createjs on Twitter.


US Wind Patterns

Nicolas Garcia Belmonte, author of the JavaScript InfoVis Toolkit, mapped 72 hours of weather data from 1,200 stations across the country.

Gathering the data from the National Weather Service was pretty interesting, I didn’t know USA had so many weather stations! The visualization shows wind direction encoded in line angles, wind speed encoded in line lengths and disk radius, and temperature encoded in hue.

Press play, and watch it glow. You can also easily switch between symbols: discs with lines, lines only, or filled circles.
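The encodings Belmonte describes can be sketched as a small JavaScript helper (hypothetical names, not from the actual visualization): wind direction becomes a line's angle and wind speed becomes its length.

```javascript
// Encode a station's wind reading as a line segment: the line's angle
// encodes direction, its length encodes speed.
function windLine(cx, cy, speed, directionDeg) {
  var rad = directionDeg * Math.PI / 180;
  return {
    x2: cx + speed * Math.cos(rad),
    y2: cy + speed * Math.sin(rad)
  };
}

// a station at (100, 100) with a 10-unit wind at 0 degrees
var end = windLine(100, 100, 10, 0);
console.log(end.x2, end.y2); // 110 100
```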

[Nicolas Garcia Belmonte via @janwillemtulp]


As Max for Live has matured, this tool for extending the functionality of Ableton Live has played host to a growing wave of brilliant custom tools – enough so that it can be hard to keep track. This month saw a few that deserve special mention. In particular, two tools help make MIDI mapping and automation recording easier in Live, and point the way for what the host itself could implement in a future update. (Live 9, we’re looking at you.) And in a very different vein, from Max for Live regular Protofuse, we see an intriguing alternative approach to sequencing.

Clip Automation does something simple: it patches a limitation in Live itself, by allowing you to record mapped automation controls directly in the Session View clips. (As the developer puts it, it grabs your “knob-twisting craziness in Session View.”) The work of Tête De Son (Jul), it’s an elegant enough solution that I hope the Abletons take note.

Clip Automation

Mapulator goes even further, re-conceiving how mapping in general works in Ableton – that is, how Live connects a change in an input (like a knob) to a change in a parameter (like a filter cutoff). Live does allow you to set minimum and maximum mappings, and to reverse the direction of those mappings. But the interpolation between the two is always linear. Mapulator lets you ramp in curves, or even up and down again.

There’s more: you can also control multiple parameters, each at different rates. And that can be a gateway into custom devices, all implemented in control mappings. BentoSan writes:

For example, if you wanted to create a delay effect that morphs into a phaser, then cuts out and finally morphs into a reverb with an awesome freeze effect, you would be able to do this with just a single knob…

Again, this seems to me not just a clever Max for Live hack, but an illustration of how Ableton itself might work all the time, in that it’s a usable and general solution to a need many users have. Sometimes the itch Max for Live patchers scratch is an itch other people have, too.
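
The difference between Live’s built-in behavior and Mapulator’s comes down to the interpolation function. A minimal sketch of the idea (my own code, not Mapulator’s):

```javascript
// Linear vs. curved control mapping (illustrative, not Mapulator's code).
// input is normalized 0..1; min/max are the parameter's mapped range.
function mapLinear(input, min, max) {
  return min + (max - min) * input;
}

// A power curve: exponent > 1 ramps in slowly, < 1 ramps in fast.
function mapCurve(input, min, max, exponent) {
  return min + (max - min) * Math.pow(input, exponent);
}

// One knob driving several parameters, each at its own rate.
function mapMulti(input, targets) {
  return targets.map(function (t) {
    return mapCurve(input, t.min, t.max, t.exponent);
  });
}
```

With a power curve per target, a single knob can sweep several parameters at different rates, which is the single-knob morphing described in the quote above.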

Lots of additional detail and the full download on the excellent DJ TechTools:
Mapulator: An Advanced MIDI Mapping Tool for Ableton

Protoclidean

We’ve seen Euclidean rhythms many times before, but this takes the notion of these evenly-spaced rhythmic devices to a novel sequencer. Developed by Julien Bayle, aka artist Protofuse, the Max for Live device is also a nice use of JavaScript in Max patching. See it in action in the video above. There are custom display options for added visual feedback, and whereas we’ve commonly seen Euclidean patterns applied to percussion, the notion here is melodic gestures. Additional features:

  • Eight channels
  • Independent pitch, velocity, and offset controls
  • Scale mapping
  • For percussion, map to General MIDI drum maps (Eep – darn you, English, we’re using the word “map” a lot!)
  • Randomization
  • MIDI thru, transport sync, more…
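
A Euclidean rhythm spreads k onsets as evenly as possible across n steps. A compact JavaScript sketch of the standard construction (not Protoclidean’s actual source, which is a Max patch):

```javascript
// Distribute k onsets as evenly as possible over n steps
// (a rotation of the classic Euclidean rhythm; illustrative only).
function euclid(k, n) {
  var pattern = [];
  for (var i = 0; i < n; i++) {
    // Step i is an onset when i*k, taken modulo n, wraps past a multiple of n.
    pattern.push((i * k) % n < k ? 1 : 0);
  }
  return pattern;
}

// euclid(3, 8) yields the familiar tresillo: [1,0,0,1,0,0,1,0]
var tresillo = euclid(3, 8);
```

For some (k, n) pairs this construction produces a rotation of the canonical Euclidean pattern, which is rhythmically equivalent once the sequence loops.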

More information:

Also, if you’re looking for more goodness to feed your Live rig, Ableton has added a new section to their own site called Library. You can find specific Max for Live content in that area, as well:

This is in addition to the community-hosted, community-run, not-officially-Ableton Max for Live library, which is the broadest resource online for Max for Live downloads:


Google has released an experimental version of the Chromium web browser with support for the company’s new Dart programming language. Dart, which is Google’s attempt to improve on JavaScript, has thus far not enjoyed much support outside of Google, but the company continues to push forward with its own efforts.

The new development preview version of the Chromium browser, the open source version of Google’s Chrome browser, contains the Dart Virtual Machine. This release, which Google is calling “Dartium,” can be downloaded from the Dart language website. At the moment it’s available only for Mac OS X and Linux. Google says a Windows version is “coming soon.” Keep in mind that this is a preview release and intended for developer testing, not everyday use.

Google originally created Dart to address the shortcomings of JavaScript and ostensibly speed up the development of complex, large-scale web applications.

While there is much programmers might like about Dart, it is, like Microsoft’s VBScript before it, a nonstandard language from a single vendor created without any regard for the existing web standards process. The new Dartium release is the first browser to include a Dart Virtual Machine and, based on the response from other browser makers to the initial release of Dart, likely the only browser that will ever ship with a Dart VM. For its part, Google says it plans to incorporate the experimental Dart VM into Chrome proper in the future.

The company also has a plan for all those browsers that aren’t jumping on the Dart bandwagon — a compiler that translates Dart to good old JavaScript. In this scenario Dart ends up somewhat like CoffeeScript, a JavaScript abstraction that makes more sense to some programmers.

For more details on the new Dartium browser and the latest improvements to the Dart VM, be sure to check out the Google Code Blog announcement.

A handful of the many screens your site needs to handle. Photo: Ariel Zambelich/

This week’s Consumer Electronics Show in Las Vegas has seen the arrival of dozens of new devices from tablets to televisions. Some of these newfangled gadgets will soon be in the hands of consumers who will use them to access your website. Will your site work? Or will it end up mangled by a subpar web browser, odd screen size or slow network connection?

No one wants to rewrite their website every time a new device or browser hits the web. That’s why approaches like responsive design, and the even broader efforts of the future-friendly group, are trying to develop tools and techniques for building adaptable websites. That way, when a dozen new tablets suddenly appear on the scene, you can relax knowing your site will look and perform as intended, no matter which devices your audience is using.

Even if you aren’t a gadget lover, CES should help drive home the fundamental truth of today’s web — devices, they are a comin’. Webmonkey has compiled helpful resources for creating responsive designs in the past, but the field is new and evolving rapidly, so here’s an updated list of links to help you get started building responsive, future-friendly sites that serve your audience’s needs whether they’re browsing with a tiny phone, a huge television or the web-enabled toaster of tomorrow.


  • Use @media to scale your layout for any screen, but remember that this alone isn’t really responsive design.
  • Use liquid layouts that can accommodate any screen size. Don’t simply design one look for 4-inch screens, one for 7-inch, one for 10-inch and one for desktop. Keep it liquid, otherwise what happens when the 11.4-inch screen suddenly becomes popular?
  • Roll your own grids based on the specifics of your site’s content. Canned grid systems will rarely fit the bill. The problem with canned grids is that they don’t fit your unique content. Create layouts from the content out, rather than the canvas (or grid) in.
  • Start small. Begin with the smallest screen and work your way up, adding @media rules to float elements into the larger windows of tablet and desktop browsers. Start with a narrow, single-column layout to handle mobile browsers and scale up from there rather than the other way around. Working this way means it’s the desktop browsers that need to handle @media; make sure older browsers cope by using polyfills like Respond.
  • Forget Photoshop; build your comps in the browser. It’s virtually impossible to mock up liquid layouts in Photoshop, so start in the browser instead.
  • Scale images using img { max-width: 100%; }. For very large images, consider using something like Responsive Images to offer the very smallest screens smaller image downloads and then use JavaScript to swap in larger images for larger screens. Similar techniques can be used to scale video.
  • Forget about perfect. If you haven’t already, abandon the notion of pixel perfect designs across devices. An iPad isn’t a laptop isn’t a television. Build the perfect site for each.
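
The responsive-image advice above (serve the smallest screens small downloads, then swap in larger files) boils down, on the scripting side, to picking a source by viewport width. A simplified sketch, with breakpoints and a file-naming scheme of my own invention:

```javascript
// Pick an image variant by viewport width (simplified sketch; the
// breakpoints and file-naming scheme here are illustrative, not the
// conventions used by the Responsive Images library).
function pickImageSrc(basePath, viewportWidth) {
  if (viewportWidth <= 480) return basePath + "-small.jpg";   // phones
  if (viewportWidth <= 1024) return basePath + "-medium.jpg"; // tablets
  return basePath + "-large.jpg";                             // desktop and up
}

var src = pickImageSrc("/img/hero", 320); // "/img/hero-small.jpg"
```

In a browser you would call this with window.innerWidth and assign the result to an img element’s src; libraries like Responsive Images layer caching and markup conventions on top of the same basic idea.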

Further Reading:

  • Future Friendly — An overview of how some of the smartest people in web design are thinking about the ever-broadening reach of the web: “We can’t be all things on all devices. To manage in a world of ever-increasing device complexity, we need to focus on what matters most to our customers and businesses. Not by building lowest common-denominator solutions but by creating meaningful content and services. People are also increasingly tired of excessive noise and finding ways to simplify things for themselves. Focus your service before your customers and increasing diversity do it for you.”
  • Building a Future-Friendly Web — Brad Frost’s excellent advice: “Think of your core content as a fluid thing that gets poured into a huge number of containers.”
  • There is no mobile web — “There is no mobile web, nor desktop web. It is just the web. Start with the content and meet people halfway.”
  • Responsive by default — Andy Hume on why the web has always been responsive, but was temporarily sidetracked by the fad of fixed-width sites.
  • COPE: Create Once, Publish Everywhere — NPR’s Director of Application Development, Daniel Jacobson, walks through how NPR separates content from display and uses a single data source for all its apps, sites, APIs and feeds. A great example of what Frost talks about regarding content as a fluid thing.
  • Support Versus Optimization — It can seem daunting to support dozens of mobile browsers, but if you aren’t up to the challenge of a few mobile browsers now what are you going to do when you need to support car dashboards, refrigerators, televisions and toasters, all with dozens of varying browsers?
  • The Coming Zombie Apocalypse — Not satisfied thinking a few years ahead? Scott Jenson explores further into the future and tries to imagine what the web might look like when devices are all but invisible.


This year’s FOTB was special. At the end of my session I showed a sneak preview of project Hiddenwood. I demonstrated complete playback of Audiotool tracks on stage — in a browser. Now that does not sound too special…

But then again, the playback was done using JavaScript only and calculated in realtime.

Audiotool is a complex piece of software, so you might ask how one could torture oneself by implementing it in JavaScript. We didn’t. Instead we started building our own vision of a cross-platform application framework a couple of months ago.

Introducing project Hiddenwood.

Hiddenwood is a collection of libraries and tools specifically designed to support different devices and platforms. The core libraries form the “driver layer”: they are always platform-specific but expose a platform-independent interface.
On top of that we provide a basic layer of libraries, like our UI system, animation framework and managed collections, which guarantee 0% garbage-collection activity and have been battle-tested in Audiotool.

The framework is all about speed and consistency. The rendering pipeline is optimized for OpenGL, and although we offer something similar to Flash’s display list, a lot of features are not available because they would compromise speed.

Speaking about speed: we are always interested in staying as native as possible on our target platform. So for the browser we emit JavaScript, for Android you get the full DalvikVM performance and for the desktop you get JVM performance. This approach also has another very important benefit: if you want to go platform-specific for certain features, you can do that.
For instance, if we want to render Audiotool songs on the server using a fork-join pool for our audio calculation, that is possible, even though it might not make sense on an Android device.

You write Java code, and the supported platforms are native desktop applications, Android (minimum requirements are Gingerbread and OpenGL ES 2.0) and modern browsers. For browsers we even go one step further and support multiple options: if WebGL is not available we simply fall back to a normal canvas-based render engine. The same applies to some of the Android drivers.
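
The WebGL-to-canvas fallback described here is, at heart, capability detection behind a uniform driver interface. A hedged JavaScript sketch of the pattern (Hiddenwood itself is written in Java; this is my own illustration):

```javascript
// Capability-based renderer selection (illustrative; not Hiddenwood code).
// Both drivers expose the same platform-independent interface, so the
// rest of the application never needs to know which one it got.
function chooseRenderer(capabilities) {
  if (capabilities.webgl) {
    return { name: "webgl", draw: function (scene) { /* GL path */ } };
  }
  // Fall back to the plain 2D canvas driver.
  return { name: "canvas2d", draw: function (scene) { /* canvas path */ } };
}

var renderer = chooseRenderer({ webgl: false }); // canvas2d fallback
```

In a real browser the capabilities object would be filled in by probing, e.g., whether a WebGL context can actually be created.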

iOS is of course important as well and we are actively researching the best option that will give us the most flexibility and performance.

We are currently working on two real applications built with Hiddenwood. So far it is a real pleasure to enjoy quick build times and simply test what you want on the desktop with great debugging capabilities. When you are ready you can try the same app on Android or in the browser — which might take a little bit longer to compile.

Because we see Hiddenwood as an application framework there are a lot of goodies built-in like a sprite-sheet based class generator. Think Image mixerBackground = Textures.mixer.background(); where mixer was the folder name and background the name of the file.

We believe that as a developer you really do not care about what kind of technology you are using and just want a great result. We also think that you should be able to reuse platform-independent code across multiple projects. However, we do not want to take power away from the developer: if you know what you are doing, go for it.

Of course we are not the only ones with this idea. Nicolas Cannasse saw the signs years ago and invented haXe which gives you a comparable experience and Google released playN a couple of weeks ago which takes a similar approach (and requires another 25 committers :P).

But when we started Hiddenwood we wanted the Java tooling experience, and playN was not public at that time. We also think that a game engine is not what you want to use for all kinds of applications. So we like to give people the freedom to build their own game engine on top of Hiddenwood — and, perhaps, calculate physics in a different thread.
Speaking about threading: the only solution that works across all platforms is a shared-nothing architecture, which is what we put in place. However, if you write platform-specific code you can of course use everything the platform offers, and a lot of the Hiddenwood core libraries, like the network or cache layer, make use of multiple threads.

In the end, what makes Hiddenwood special in my opinion is that we do not believe in write once, run anywhere, because that just does not make sense. The essence and philosophy behind Hiddenwood is to write platform-agnostic code using kickass libraries and being able to reuse it. Audiotool on a tablet would look completely different from Audiotool running in a browser. And Audiotool on iOS would probably also be a little different from Audiotool on an Android device, because there are simply different paradigms you should respect.

I hope that we can share more information with you soon. With the news of mobile Flash Player being deprecated and the ongoing demand for cross-platform development we have exciting times ahead of us. I am also super excited about the (beautiful <3) applications which we are going to release in the not so distant future.

Alexander Chen, he of Kinect hacks and subways turned to strings, is back with another string visualization. Built in the browser (an interactive version is available), this work makes a visual accompaniment to Bach’s First Prelude from the Cello Suites. If you read music notation fluently, you may find the score itself suffices, but even so, the math to make this work – and the dance of circles across strings – is compelling. Alex, whose day job is with Google’s Creative Lab, talks to us a bit about the mathematics and process. First, his description: visualizes the first Prelude from Bach’s Cello Suites. Using the math behind string length and pitch, it came from a simple idea: what if all the notes were drawn as strings? Instead of a stream of classical notation on a page, this interactive project highlights the music’s underlying structure and subtle shifts.

Built in: HTML5 Canvas, JavaScript, SoundManager
Made while a resident at Eyebeam

CDM: How did you settle on this particular visualization of this famous work? And how did you work out the maths, that is, why this specific number of dots, the distance from the strings, and the length of the strings themselves?

Alex: When I listened to the opening of the Bach, where it repeats the same bar twice, it made me think of a call and response. So I immediately pictured two wheels that echo each other, instead of just one wheel with four dots.

Figuring out the symbolic string lengths in pixels was a fun research project. I wanted to explore the simple math behind string length. I learned that you can derive an entire chromatic scale just by using two fractions: 2/3 and 1/2. These correspond to the fifth and octave intervals. It’s called Pythagorean tuning. I stumbled onto this great little worksheet [PDF link] which seems to be intended for students.
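
The two fractions really are enough: repeatedly shortening a string to 2/3 of its length (a fifth) and doubling it (an octave) to stay within one octave yields twelve distinct length ratios. A short sketch of that arithmetic (my own code, following the worksheet’s idea):

```javascript
// Derive 12 Pythagorean string-length ratios from just two fractions:
// 2/3 (a fifth shortens the string) and 2 (an octave doubles it).
function pythagoreanLengths() {
  var lengths = [];
  var l = 1; // the open string
  for (var i = 0; i < 12; i++) {
    lengths.push(l);
    l = l * 2 / 3;          // go up a fifth
    if (l < 0.5) l = l * 2; // fold back into one octave: lengths stay in (1/2, 1]
  }
  return lengths.sort(function (a, b) { return b - a; }); // longest (lowest pitch) first
}

var scale = pythagoreanLengths();
// scale[0] is 1 (the open string); the other eleven fall between 1/2 and 1
```

Each ratio, multiplied by a string length in pixels, gives one of the symbolic string lengths the visualization draws.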

Were there other things you tried, any failed experiments?

There were important learnings. It used to begin playing the piece right away. I started the opening tuning animation as an afterthought while I was preloading the strings. But that sequence became really critical.

What’s your sense of the music now, having done this? Did it change your hearing of the piece?

A lot of music visualization these days is linear, like reading a score. Logic’s editor, or even games like Guitar Hero, all follow that structure. And there’s a reason for that, as it’s convenient, for both computers and humans, since we can read it (and edit it) like a book. But I wanted to try something different. I think some of the magic of watching a performer is seeing such subtle, intricate finger movements produce such moving sounds. When I watch these strings morph, it feels more like the computer is performing, not just checking off notes one by one.

Seeing the Bach Prelude in groups of 8 notes gives me a bigger-picture view of the piece. Instead of focusing on the individual notes, you can see each bar as a group. The strings start shifting very subtly, but as the piece builds, the strings seem to be panicking to me, shifting more rapidly. The computer is not expressive. All notes are played at equal volume. But the notes themselves, the data of the song, are inherently expressive. [interactive - grab the ... circles ("grab the balls" doesn't sound quite right)]

Oddly enough, I found another – non-digital – visualization of the same work. Photo (CC-BY) by Brooklyn-based player and architect George Showman, who explains the process thusly: “Basically it’s strings attached to my wrists, that run around the room to connect to a pen hanging from the ceiling in such a way that the left hand controls up-down, and the right (bow) hand controls left-right. I.e. it turns me into a plotter. Then, when I play cello, the gestures of the playing are transmitted into the line in the drawing.” Compare this to the image above – in particular, two different ways of treating time, each distinct from a conventional score.
