luanna is a beautiful new application out of Tokyo-based visual/sound collective Phontwerp_. Amidst a wave of audiovisual iPad toys, luanna is notable for its elegance, connecting swirling flurries of particles with gestures for manipulation. I imagine I’m not alone when I say I have various sample manipulation patches lying around, many in Pd, lacking visualization, and wonder what I might use in place of a knob or fader to manipulate them. In the case of luanna, these developers find one way of “touching” the sound.

As the developers put it:

luanna is an audio-visual application designed for the iPad
that allows you to create and control music through the manipulation of moving images.

The luanna app has been designed to be visually simple and intuitive, whilst retaining a set of rich and comprehensive functions. Through hand gestures you can touch, tap and manipulate the image, as if you were touching the sound. The image changes dynamically with your hand movements, engaging you with the iPad’s environment.

The interface is multi-modal, with gestures activating different modes. This allows you to select samples, play in reverse, swap different playback options, mute, and add a rhythm track or crashing noises. It’s sort of half-instrument, half-generative.
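A multi-modal interface like the one described can be thought of as a dispatch table mapping (mode, gesture) pairs to actions. The sketch below is purely illustrative: the mode, gesture, and action names are invented, and luanna's actual implementation is not public.

```python
# Hypothetical sketch of a multi-modal gesture interface, loosely inspired
# by the description above. All names here are invented for illustration;
# this does not reflect luanna's actual code.

class GestureSampler:
    MODES = ("select", "playback", "rhythm")

    def __init__(self):
        self.mode = "select"
        self.reversed = False
        self.muted = False
        self.sample = None
        self.rhythm_hits = []
        # The heart of a multi-modal interface: the same gesture means
        # different things depending on the active mode.
        self.bindings = {
            ("select", "tap"): self.pick_sample,
            ("playback", "swipe_left"): self.toggle_reverse,
            ("playback", "two_finger_tap"): self.toggle_mute,
            ("rhythm", "tap"): self.add_rhythm_hit,
        }

    def handle(self, gesture, **kwargs):
        """Dispatch a gesture according to the current mode."""
        handler = self.bindings.get((self.mode, gesture))
        if handler:
            handler(**kwargs)

    def set_mode(self, mode):
        if mode in self.MODES:
            self.mode = mode

    def pick_sample(self, index=0):
        self.sample = index

    def toggle_reverse(self):
        self.reversed = not self.reversed

    def toggle_mute(self):
        self.muted = not self.muted

    def add_rhythm_hit(self, beat=0):
        self.rhythm_hits.append(beat)
```

The design point is that the gesture vocabulary stays tiny while the mode switch multiplies what it can express, which is presumably why this style of interface can feel rich without cluttering the screen.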

Phontwerp_ themselves are an interesting shop, described as a “unit” that will “create tangible/intangible products all related to sound.” Cleverly named after chord symbols, its four divisions ∆7, -7, add9, and +5 handle sound art, merch, music performance / composition / sound design, and code, respectively. That nexus of four dimensions sounds a familiar one for our age.

Sadly, this particular creation is one of a growing number of applications that skips over the first-generation iPad and its lower-powered processor and less-ample RAM. Given Apple can make some hefty apps run on that hardware, though, I hope that if independent developers find success supporting the later models, they back-port some of their apps.

See the tutorial for more (including a reminder that Apple’s multitasking gestures are a no-no).

US$16.99 on the App Store. (Interested to see the higher price, as price points have been low for this sort of app – but I wonder if going higher will eventually be a trend, given that some of the audiovisual stuff we love has a more limited audience!)

Readers request Audio Copy and sample import right away. I think sample import, at least, could easily justify a higher price, by making this a more flexible tool.

Find it on our own directory, CDM Apps:

Very similar in its approach is the wonderful Thicket, well worth considering:

See our recent, extensive profile of that application’s development:
Thicket for iOS Thickens; Artists Describe the Growth of an Audiovisual Playground

See also, in a similar vein, Julien Bayle’s recent US$4.99 release Digital Collisions:



Compare the complex model of what a computer can use to control sound and musical pattern in real-time to the visualization. You see knobs, you see faders that resemble mixers, you see grids, you see – bizarrely – representations of old piano rolls. The accumulated ephemera of old hardware, while useful, can be quickly overwhelmed by a complex musical creation, or visually can fail to show the musical ideas that form a larger piece. You can employ notation, derived originally from instructions for plainsong chant and scrawled for individual musicians – and quickly discover how inadequate it is for the language of sound shaping in the computer.

Or, you can enter a wild, three-dimensional world of exploded geometries, navigated with hand gestures.

Welcome to the sci-fi-made-real universe of Portland-based Christian Bannister’s subcycle. Combining sophisticated, beautiful visualizations, elegant mode shifts that move from timbre to musical pattern, and two-dimensional and three-dimensional interactions, it’s a complete visualization and interface for live re-composition. A hand gesture can step from one musical section to another, or copy a pattern. Some familiar idioms are here: the grid of notes, a la piano roll, and the light-up array of buttons of the monome. But other ideas are exploded into spatial geometry, so that you can fly through a sound or make a sweeping rectangle or circle represent a filter.

Ingredients, coupling free and open source software with familiar, musician-friendly tools:

Another terrific video, which gets into generating a pattern:

Now, I could say more, but perhaps it’s best to watch the videos. Normally, when you see a demo video with 10 or 11 minutes on the timeline, you might tune out. Here, I predict you’ll be too busy trying to get your jaw off the floor to skip ahead in the timeline.

At the same time, to me this kind of visualization of music opens a very, very wide door to new audiovisual exploration. Christian’s eye-popping work is the result of countless decisions – which visualization to use, which sound to use, which interaction to devise, which combination of interfaces, of instruments – and, most importantly, what kind of music. Any one of those decisions represents a branch that could lead elsewhere. If I’m right – and I dearly hope I am – we’re seeing the first future echoes of a vast, expanding audiovisual universe yet unseen.

Subcycle: Multitouch Sound Crunching with Gestures, 3D Waveforms

And lots more info on the blog for the project:



Last night I was sitting in a child psychologist’s office watching my son undergo a whole heap of cognitive testing (he has a rare condition called Trisomy 8 Mosaicism), and in that moment I had what others would call a “flash” or “epiphany” (i.e., the theory being that we get ideas based on a network of pre-existing ideas).

The flash came about from watching my son do a few Perceptual Reasoning Index tests. The idea in these tests is to present a group of images in grid form, and the child has to identify semantic similarities between them (given ball, bat, fridge, dog, and plane, the answer would be ball and bat).

This, for me, was one of those aha! moments. You see, when I first saw the Windows 8 opening screen of boxes / tiles, presented with a mixed message about letting the user interface “breathe” combined with a rant about ensuring a uniform grid / golden ratio … I just didn’t like it.

There was something about this approach that I instantly disliked. Was it because I was jaded? Was it because I wanted more? There was something I didn’t get about it.


Over the past few days I’ve thought more about what I don’t like about it, and my most obvious reaction was that we’re now going to rely on imagery to process which apps to load and not load. Think about that: you now have images, some static and others animated, to help you gauge which of these elements you need to touch or mouse-click in order to load an app.

Re-imagining or re-engineering the problem?

This isn’t re-imagining the problem; it’s simply taking a broken concept from Apple and making it bigger, so instead of icons we now have bigger imagery to process.

Just like my son, you’re now being tested at the Perceptual Reasoning level on which of these “items are the same or similar,” and given we also have full control over how these boxes are clustered, we in turn will put our own internal taxonomy into play here as well…. Arrghh…

Now I’m starting to form the opinion that the grid box layout approach is not only failing to solve the problem, but probably has a usability issue lurking as well (more testing needs to happen here to prove it, I think).

OK, I’ve arrived at a conscious opinion on why I don’t like the front screen – now what? The more I thought about it, the more I kept coming back to the question: “Why do we have apps, and why do we cluster them on screens like this?”

The answer isn’t just a prospective-memory rationale; the answer really lies in the context in which we as humans lean on software for our daily activities. Context is the thread we need to explore on this screen, not “Look, I can move apps around and dock them.” That’s part of the equation, but in reality all you are doing is mucking around with grouping information or data once you’ve isolated the context to an area of comfort – that, or you’re still hunting / exploring for the data and aren’t quite ready to release (in short, you’re accessing information in working memory and processing the results in real time).

As the idea begins to brew, I think back to sources of inspiration – the user interfaces I have loved, and continue to love, that get my design mojo happening. User interfaces such as the ones that I think capture the concept of Metro better than what Microsoft has produced today – the Microsoft Health / Productivity videos.


Back to the Fantasy UI for Inspiration

If you analyze the attractive elements within these videos what do you notice the most? For me it’s a number of things.


I notice that the UI is simple, in a sense “metro-paint-by-numbers,” which despite its basic composition is actually quite well done.


I notice that the user interface is never just one composition; the UI appears to react to the context of usage for the person, and not the other way around. Each user interface has a role or approach that takes a very simple approach to a problem, but does so in a way that feels a lot more organic.

In short, I notice context over and over.

I then think back to a user interface design I saw years ago at Adobe MAX. It’s one of my favorites: Adobe were showing off what they thought could be the future of entertainment UI, with simply a search box at the top of the screen. The default user interface is somewhat blank, acting as a passive “forcing function” that pushes the end user to provide some clues as to what they want.

The user types the word “spid,” as their intent is Spiderman. The user interface reacts to this word, and the entire screen changes to the theme of Spiderman whilst spitting out movies, books, games, etc. – basically, you are overwhelmed with context.

Crazy huh?

I look at Zune. I type the words “the Fray” and hit search; again, contextual relevance plays a role, and the user interface is now reacting to my clues.
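The behavior in both examples – an interface that narrows to a context as you type – can be sketched as a small intent matcher: score catalog entries against the partial query, find the dominant theme, then surface everything related to it. The catalog and the scoring here are invented for illustration; neither Adobe’s demo nor Zune works this way internally, as far as I know.

```python
# Minimal sketch of a context-reacting search. As the user types, we match
# catalog entries against the partial query, pick the dominant theme among
# the hits, and return everything sharing that theme -- movies, books,
# games alike. The catalog is a made-up stand-in for real metadata.

CATALOG = [
    {"title": "Spider-Man", "kind": "movie", "theme": "spiderman"},
    {"title": "Spider-Man 2", "kind": "game", "theme": "spiderman"},
    {"title": "The Amazing Spider-Man", "kind": "book", "theme": "spiderman"},
    {"title": "The Fray - How to Save a Life", "kind": "album", "theme": "the fray"},
]

def react(partial_query):
    """Return (dominant_theme, related_items) for a partial query."""
    q = partial_query.lower()
    hits = [item for item in CATALOG if q in item["title"].lower()]
    if not hits:
        return None, []
    # Pick the most common theme among the hits...
    themes = [h["theme"] for h in hits]
    dominant = max(set(themes), key=themes.count)
    # ...then pull in everything that shares it, regardless of kind.
    related = [item for item in CATALOG if item["theme"] == dominant]
    return dominant, related
```

Calling `react("spid")` already resolves to the spiderman theme and returns the movie, game, and book together – the “overwhelmed with context” effect, in miniature.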


I look back now at the Microsoft Health videos and then back to the Windows 8 screens. The videos are one and the same with Windows 8 in a lot of ways, but the huge difference is that one doesn’t have context – it has apps.

The reality is, most of the apps you have carry semantic data behind them (except games?), so in short: why are we fishing around for “apps” or “hubs” when we should all be re-imagining how an operating system of tomorrow, like Windows 8, accommodates a personal level of both taxonomy and contextually driven usage that also respects each of our own cognitive processing capabilities?

Now I know why I dislike the Windows 8 user interface. The more I explore this thread, the more I look past the design elements and “wow” effects, and the more I come to the realization that, in short, this isn’t a work of innovation; it’s simply a case of taking existing broken models on the market today and declaring victory over them because they’re now either bigger or easier to approach from a NUI perspective.

There isn’t much re-imagination going on here; it’s more re-engineering instead. There is a lot of potential for smarter, more innovative and relevant improvements in the way we interact with the software of tomorrow.

I once gave a similar talk at a local Seattle design user group. Here are the slides (“Usability Broken”); I still think the argument holds water today, especially in a Windows 8 futures discussion.

