
Imagine any acoustic instrument able to act as a synth, and you begin to appreciate the potential instrumental pioneer Paul Vo may be about to unlock.

As we reported last month, music-technological innovation can absolutely involve guitars, not just synths with keyboards. So, it’s fitting that we turn now to a lover of keyboards and guitars alike, Chris Stack, for a video look at the work of Paul Vo.

Vo may not be a household name in sound tech, but he should be, as the inventor of the impressive Moog Guitar. Here, we get a look back at what came before — and what’s next.

Below, Chris gets his hands on a one-of-a-kind prototype that came before the Moog Guitar: a fretless model. You can see the fruits of those labors on the Moog Guitar in the video at bottom, which demonstrates what a versatile electronic instrument this can be – as much a “synth with strings” as anything, going well beyond what you might think of as guitar tone.

But having done fretless, electric, bass, and lap steel instruments, Paul Vo’s tech now reaches a truly new frontier: the acoustic guitar and other stringed instruments. And that could be very big news. Watch, at top. It’s still too early to fully grasp what this instrument may become, but already there’s something really special going on:

The Vo-96 Acoustic Synth is the newest innovation from Paul Vo, the inventor of The Moog Guitar. It opens a new method of musical expression called Acoustic Synthesis. Will Rayan and Vincent Crow of The Electric Jazz Project try it out for the first time.

Code-named LEV-96, the concept instrument here uses harmonic content from strings as its source material. The inventor explains:

The numeral 96 refers to the number of individual harmonic control channels. Each channel is capable of controlling the behavior of one harmonic partial of a string’s timbre. 16 such channels are instantiated per string: 6 × 16 = 96.
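
To make that arithmetic concrete, here’s a minimal sketch – in Python, and entirely hypothetical, since Vo hasn’t published his implementation – of how such a channel layout might be modeled in software:

```python
# Hypothetical model of the Vo-96 channel layout: 6 strings x 16 partials.
# Bookkeeping only; the level semantics are an assumption, not Vo's design.

NUM_STRINGS = 6
PARTIALS_PER_STRING = 16

class HarmonicChannel:
    """One control channel: the behavior of a single partial of one string."""
    def __init__(self, string_index, partial_number):
        self.string_index = string_index      # 0..5
        self.partial_number = partial_number  # 1 = fundamental, 2 = octave, ...
        self.level = 0.0                      # assumed: -1.0 damps, +1.0 excites

channels = [
    HarmonicChannel(s, p)
    for s in range(NUM_STRINGS)
    for p in range(1, PARTIALS_PER_STRING + 1)
]

assert len(channels) == 96  # 6 strings x 16 channels each
```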

And if your mind isn’t blown yet, here’s more from Paul on how he’s thinking:

Add-on hardware, says Vo, will unlock the harmonic content of acoustic instruments in a way you haven’t ever heard before. Photo courtesy Vo Inventions.

With Acoustic Synthesis™ any acoustic musical instrument – any object that makes a sound – can be enhanced to bring out its hidden acoustic voice. Think also of potential new instruments – playable objects of acoustic art.
So far I’ve worked mostly with vibrating strings. The musical instrument string is arguably the most ubiquitous means of making music. It’s also the most difficult to vibrate coherently using electronic control. One idea I had back in 1979 turned out to be a great solution. I was amazed to find it was still unknown and patentable 20 years later.
Over the past 50 years or so we have accepted and become familiar with using synthesizers to create an endless variety of sounds electronically. I’m saying we are now beginning to extend this idea into the physical realm. We can make the virtual become real. We can artistically create new sounds by bringing out modes of vibration that have up to now remained hidden within the material objects we call musical instruments. Through Acoustic Synthesis™ the same sonic exploration is possible for other acoustic instruments and even creative objects of acoustic art that no one has imagined – not just yet anyway.
Analog Synthesis. Digital Synthesis. Acoustic Synthesis™: it isn’t empty hype, this really is a distinctly different and new method of voicing instruments, designing new sounds, and making music.

He covers this on his site:

Vo Inventions

Finally, a look back at the best-known Vo project, the Moog Guitar:

Chris’ site has recently been improved, so it’s well worth exploring all that he’s doing with creative instrument adventures and sound design.

http://experimentalsynth.com/


AudioGL presents a new vision of visual music creation, extended into space. Images courtesy the developer.

Here in flatland, ideas for musical interfaces may have become largely well-trodden. Not so in the third dimension. And so, one of the most unusual audiovisual interfaces has now hit beta, ready for you to explore. And that does mean “explore”: think navigation through spinning, animated galaxies of musical objects in this spatial modular sound environment. With the beta available, you can determine whether that’s a bold, new final frontier, or just the wheel, reinvented.

The work of Toronto-based artist and engineer Jonathan Heppner, AudioGL is a stunning vision of music creation in 3D space, with modular synths, advanced user-editable modulation, and a freely-navigable, open-ended spatial workspace.

There is a ticket for entry. While marked “beta,” the developer has admitted he needs money, and so a trip up the space elevator will cost you US$80 for a fully-enabled license. You can try a save-disabled version for free, however, which isn’t necessarily a deal-killer for software of this nature; I’d mark this one down practically as crowd-funding for those who like the concept. (For an open-source take on graphical, spatial music sequencing, check out IanniX – and it does seem this sort of experimentalism could benefit from open licenses.) One caveat on the beta licenses: they won’t apply to the finished version. (It seems working something out there, and talking about it publicly, would encourage more beta users.)

This is the first beta; upcoming betas are due every 2-3 months, says the author. There’s already a lot there:

  • Immersive 3D interface
  • Preset instruments
  • Modular synth
  • Sample-accurate automation
  • Envelopes
  • Project-wide modulation
  • MIDI support
  • Sample import
  • Audio export

AudioGL isn’t limited to compelling 3D ideas. Project-wide modulation means networks of transformations that work across a scene.

For fine-grained editing of user envelopes, AudioGL does offer a more conventional 2D view.

At the top of the to-do list: ReWire, VST instruments and effects, and enhanced tempo change and modulation. Further down the line, says the developer, are DAW-style features like arrangement and project management.

No new videos of this build, but an impressive previous video is available below.

http://www.audiogl.com/


Vincent Fournier is a photographer of the future—both the one that’s actually happened, and the science-fiction future that we hoped would come to be. In his earlier work, the French artist plucked robots out of laboratories and staged portraits of artificial life forms like Honda’s Asimo going about their business in the human world, drinking from a water fountain or playing basketball. In his sprawling “Space Project,” Fournier—who used to visit the Paris museum of science as a child—traveled to the world’s centers of space exploration, places like the Gagarin Cosmonaut Training Center in Russia and NASA’s venerable Kennedy Space Center in Florida.

Fournier’s photographs make the Mars Desert Research Station in Utah look like the forbidding alien landscape it was meant to stand in for, while his shots of technicians in bubble-helmeted space suits are mined from the same visual vein as Stanley Kubrick’s 2001: A Space Odyssey. These are glimpses of Tomorrowland, the space age that never quite took off. Even his work on Brasilia—the custom-built capital of Brazil, that perpetual “country of the future”—shows an obsession with classic visions of tomorrow, with humankind’s effort to bring the universe to heel. “I love machines, the ones that fly, speak, count or observe,” Fournier has written. “I’m fascinated by the magical aspect of science, which seems to reduce the complexity of the world to a few mathematical formulae.”

In his new work, Fournier is still looking to the future—to the hard lines of the man-made—but he’s moved on to things that are living. Or at least, things that may live. In his “Engineered Species” project, part of his recently released book Past Forward, Fournier explores how life’s design itself can be tinkered with, changing DNA to make species better, faster and stronger. Fournier took pictures of taxidermy specimens—stuffed and pinned animals—and brought them to animal geneticists to find out how these species might evolve in real time as the environment, thanks largely to human action, keeps changing.

The result is a series of new engineered species, like a global warming-tolerant pangolin, a scale-covered Asian mammal with a tougher keratin skin that enables it to maintain a constant body temperature, even in a hotter climate. An ibis—a long-legged wading bird—evolves longer, stronger claws that help make it more resistant to both drought and frost. A rabbit—one that stares at the viewer with expressive blue eyes—is engineered for higher intelligence thanks to neural stem cell treatment.

None of these species are real yet, and like Fournier’s earlier space-age work, they may turn out to be a vision of a future that does not come to pass. But I doubt it. We’re already on our way to engineering new life forms, to tinkering with the DNA of the species around us—and eventually ours as well. We may have no other choice—the environment is changing more rapidly than wildlife can adapt to, and the result is a wave of extinction happening faster than any this planet has witnessed for millions of years. For nature to survive, it may have to become artificial—though even Fournier, who says he loves machines, has his doubts about our ability to control these metamorphoses. “The universe is not as well ordered as our machines,” he writes. “It acts in an irrational, chaotic, violent and mysterious way, and even though there are computers that can design our forests, the control remains artificial.” Our engineering, after all, can exceed our wisdom.

Vincent Fournier’s limited edition monograph Past Forward was recently released by IDEA BOOKS.

Additionally, Fournier’s photographic work will be on display as part of the Les Rencontres d’Arles photography festival in France through September.


[ By WebUrbanist in Architecture & Cities & Urbanism. ]

This is a conceptual project with a breathtaking scope: to create a perimeter of citadels surrounding Japan, protecting the island nation above, on and below the ground.

Like the turrets of an impossibly large fortress, the idea (perhaps unintentionally) borrows from a neighboring nation (the Great Wall of China) the notion that one can create a barrier spanning an entire empire. Were it constructed, this project by Victor Kopeikin and Pavlo Zabotin could certainly rival the other Wonders of the World.

The idea is part of a larger reorganizational notion tackling all levels of infrastructure:  “Economic Zone: (a city in a modern context), infrastructure and government buildings will be left untouched - Residential functions will be taken outside the cities - Economic (“agrarian”) zone will be located between the two previous zones (orchards, gardens, greenhouses).” But it is also born out of a more immediate and pragmatic response to the recent tsunami that devastated the country: “[The] skyscrapers themselves are connected by a system of breakwaters and drainage channels, able to withstand waves up to 50 meters that prevent the destructive waves to reach the shoreline.”

Finally, they are also intended to be self-sufficient, using “the energy of the tides in the design of such facilities. The system of underwater routers situated along the perimeter of the building in-between breakwater corridors, where because of an artificially flow increases the velocity of the water masses. The building also has an autonomous supply of drinking water (boreholes) and the system of the ground floors with live fish tanks and warehouses storing food for the inhabitants of a skyscraper (a system of bunkers).”



What you’re watching in the video above doesn’t involve cameras or motion sensors. It’s the kind of brain-to-machine, body-to-interaction interface most of us associate with science fiction. And while the technology has made the occasional appearance in unusual, niche commercial applications, it’s poised now to blow wide open for music – open as in free and open source.

Erasing the boundary between contracting a muscle in the bio-physical realm and producing electronic sound in the virtual realm is what Xth Sense is all about. Capturing biological data is all the rage these days, seen primarily in commercial form in fitness products, but it’s also part of a growing trend toward making our computers accessories for our bodies as well as our minds. (Or is that the other way around?) This goes one step further: the biological becomes the interface.

Artist and teacher Marco Donnarumma took first prize with this project in the prestigious Guthman Musical Instrument Competition at Georgia Tech in the US. Born in Italy and based in Edinburgh, Scotland, Marco explains to us how the project works and why he took it up. It should whet your appetite as we await an open release for other musicians and tinkerers to try next month. (By the way, if you’re in the New York City area, Marco will be traveling to the US – a perfect chance to collaborate, meet, or set up a performance or workshop; shout if you’re interested.)

Hypo Chrysos live at Trendelenburg AV Festival, Gijon, Spain, December 2011.

CDM: Tell us a bit about yourself. You’re working across disciplines, so how do you describe what you do?

Marco: People would call me a media and sound artist. I would say what I love is performing, but at the same time, I’m really curious about things. So, most of the time I end up coding my software, developing devices, and now even designing wearable tech. For some years now I have worked only with free and open source tools, and this is naturally reflected in what I do and how I do it. (Or at least I hope so!)

I just got back from Atlanta, US, where the Xth Sense (XS) was awarded first prize in the Margaret Guthman New Musical Instrument Competition, as what they named the “world’s most innovative new musical instrument.” [See announcement from Georgia Tech.]

It’s an encouraging achievement and I’m still buzzing, especially because the other 20 finalists all presented great ideas. Overall, it has been an inspiring event, and I warmly recommend that musicians and inventors participate next year. My final performance:

Make sure to use a proper sound system [when watching the videos]; most of the sound spectrum lives between 20 and 60 Hz.

Music for Flesh II live at Georgia Tech Center for Music Technology, Atlanta, USA, February 2012. Photo courtesy the artist.

You’re clenching your muscles, and something is happening – can you tell us how this XS system works?

Marco: My definition of it goes like “a biophysical framework for musical performance and responsive milieux.” In other words, it is a technology that extends some intrinsic sonic capabilities of the human body through a computer system that senses the physical energy released by muscle tissues.

I started developing it in September 2010 at the SLE, the Sound Lab at the University of Edinburgh, and got it ready to go in March 2011. It has evolved a lot in many ways ever since.

The Xth Sense wearable biosensors by Chris Scott.

The XS is composed of custom biophysical sensors and custom software.

At the onset of a muscle contraction, energy is released in the form of acoustic sound. This is to say, similarly to the string of a violin, each muscle tissue vibrates at specific frequencies and produces a sound (called a mechanomyographic signal, or MMG). It is not audible to the human ear, but it is indeed a soundwave that resonates from the body.

The MMG data is quite different from the locative data you can gather with accelerometers and the like; whereas the latter reports the consequence of a movement, the former directly represents the energy impulse that causes that movement. If you add to this a high sampling rate (up to 192,000 Hz, if your sound card supports it) and very low latency (measured at 2.3ms), you can see why the responsiveness of the XS can be highly expressive.

The XS sensors capture the low-frequency acoustic vibrations produced by a performer’s body and send them to the computer as an audio input. The XS software analyzes the MMG in order to extract the characteristics of the movements, such as dynamics of a single gesture, maximum amplitude of a series of gestures in time, etc.

These are fed to some algorithms that produce the control data (12 discrete and continuous variables for each sensor) to drive the sound processing of the original MMG.
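
The real XS analysis runs in Pure Data (more on that below), so treat the following as illustration only: a rough numpy sketch of the kind of feature extraction Marco describes – an amplitude envelope plus a couple of per-gesture descriptors. Every name and threshold here is an assumption, not XS code.

```python
import numpy as np

SAMPLE_RATE = 48000  # the XS supports up to 192 kHz if your card does
FRAME = 64           # small blocks keep latency low; 2.3 ms is ~110 samples at 48 kHz

def rms_envelope(signal, frame=FRAME):
    """Amplitude envelope: one RMS value per frame of MMG input."""
    n = len(signal) // frame
    frames = signal[: n * frame].reshape(n, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def gesture_features(envelope, threshold=0.02):
    """Crude per-gesture descriptors: peak amplitude and active duration."""
    active = envelope > threshold  # frames where a contraction registers
    return {
        "max_amplitude": float(envelope.max()),
        "active_frames": int(active.sum()),
    }

# Synthetic 'muscle' burst: a decaying 40 Hz tone standing in for real MMG.
t = np.linspace(0, 1, SAMPLE_RATE)
mmg = 0.5 * np.sin(2 * np.pi * 40 * t) * np.exp(-5 * t)
print(gesture_features(rms_envelope(mmg)))
```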

Eventually, the system plays back both the raw muscle sounds (slightly transposed to be more audible, to around 50-60 Hz) and the processed muscle sounds.
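
That transposition happens inside Pd in the real system; purely as a back-of-the-envelope illustration, the naive approach – resampling, which raises pitch at the cost of shortening the sound – looks like this:

```python
import numpy as np

def transpose_up(signal, ratio):
    """Naive transposition by resampling: reading through the signal faster
    raises pitch (ratio 1.5 takes a 40 Hz rumble to 60 Hz) but shortens it."""
    positions = np.arange(0, len(signal) - 1, ratio)
    return np.interp(positions, np.arange(len(signal)), signal)
```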

I like to term this model of performance biophysical music, in contrast with biomusic, which is based on the electrical impulses of muscles and brainwaves.

By differently contracting muscles (which has a different meaning than simply “moving”) one can create and sculpt musical material in real-time. One can design a specific gesture that produces a specific sonic result, what I call a sound-gesture. These can be composed in a score, or improvised, or also improvised on a more or less fixed score.

The XS software also has a sensing sequencing timeline: with a little machine learning (implemented just a few days ago), the system understands when you’re still or moving, when you’re being fast or slow, and can use this data to change global parameters or functions, or to play with the timing of events. For example, the computer can track your behaviour in time and wait for you to stop whatever you’re doing before switching to a different set of functions.
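
Marco doesn’t detail the algorithm, but the behavior he describes could be approximated by something as simple as a threshold with hysteresis over that same amplitude envelope. The sketch below is guesswork on my part, not actual XS code:

```python
def motion_states(envelope, on=0.05, off=0.02):
    """Label each envelope frame 'moving' or 'still', with hysteresis so
    brief dips inside a gesture don't read as stopping."""
    states, moving = [], False
    for value in envelope:
        if moving and value < off:
            moving = False
        elif not moving and value > on:
            moving = True
        states.append("moving" if moving else "still")
    return states

def first_safe_switch(states, hold=30):
    """Frame index after `hold` consecutive 'still' frames -- the moment
    the system could switch to a different set of functions."""
    run = 0
    for i, state in enumerate(states):
        run = run + 1 if state == "still" else 0
        if run >= hold:
            return i
    return None
```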

The XS sensors are wearable devices, so the computer can be forgotten in a corner of the stage; the performer has complete freedom on stage, and the audience is not exposed to the technology, but rather to the expressivity of the performance. What I like most about the XS is that it is a flexible and multi-modal instrument. One can use it to:

  • capture and playback acoustic sounds of the body,
  • control audio and video software on the computer, or
  • capture body sounds and control them through the computer simultaneously.

This opens up an interesting perspective on the applications of the XS to musical performance, dance, theatre and interaction design. The XS can also be used only as a gestural controller, although I never use it exclusively this way. We have thousands of controllers out there.

Besides, I wanted the XS to be accessible, usable, hackable and redistributable. Unfortunately, the commercial products dealing with biosignals are mostly not cheap and — most importantly — closed to the community. See the Emotiv products (US$299 Neuro Headset, not for developers), or the BioFlex (US$392.73). One could argue that the technology is complex, and that’s why those devices are expensive and closed. This could make sense, but who says we can’t produce new technologies that openly offer similar or new capabilities at a much lower cost?

The formal recognition of the XS as an innovative musical instrument, and the growing effort of the community in producing DIY EEG, ECG and biohacking devices, are a clear statement in this sense. I find this movement encouraging, and possibly indispensable nowadays, as the information technology industry increasingly deploys biometric data for adverts and security systems. For the geeky ones, there are some examples in a recent paper of mine for the 2012 CHI workshop on Liveness.

For those reasons, the XS hardware design has been implemented in the simplest form I could think of; the parts needed to build an XS sensor cost about £5 altogether, and the schematic looks purposely dumb. The sensors can be worn on any part of the body. I’ve worked with dancers who wore them on the neck and legs, a colleague stuck one to his throat to capture the resonances of his voice, and I use them on the arms, or to capture the pumping of the blood flow and the heart rate.

The XS software is free, built in Pd, aka Pure Data, and comes with a proper, user-friendly graphical user interface (GUI) and its own library, which includes over one hundred objects with help files. It is developed on Linux and is Mac OS X compatible; I’m not developing for Windows, but some people have got it working there too. A big thumbs-up goes to our wonderful Pd community; if I had not been reading and learning through the Pd mailing list for the past 5 years, I would never have been able to code this stuff.

The Xth Sense software Graphical User Interface. Built in Pd.

The public release of the project will be in April. The source code, schematics, and tutorials will be freely available online, and there will be DIY kits for the lazier ones. I’m already collecting orders for the first batch of DIY kits, so if anybody is interested, please get in touch:
http://marcodonnarumma.com/contact

I do hope to see the system hacked and extended, especially because the sensors were initially built with the support of the folks at the Dorkbot ALBA/Edinburgh Hacklab. I’m also grateful to the community around me – friends, musicians, artists, devs and researchers – for contributing to the success of the project by giving feedback, inspiring and sharing (you know who you are!).

Thanks, Marco! We’ll be watching!

More on the Work

http://marcodonnarumma.com/works/xth-sense/
http://marcodonnarumma.com/works/music-for-flesh-ii/
http://res.marcodonnarumma.com/blog/

And the Edinburgh hack lab:
http://edinburghhacklab.com/

Biological Interfaces for Music

There isn’t space here to recount the various efforts to do this; Marco’s design, to me, is notable mainly in its simplicity and – hopefully, as we’ll see next month – its accessibility to other users. I’ve seen a number of brain interfaces just in the past year, but perhaps someone with more experience on the topic would like to share; that could be a topic for another post.

Entirely unrelated to music, but here’s the oddest demo video I’ve seen of human-computer interfacing, which I happened to see today. (Well, unrelated to music until you come up with something this crazy. Go! I want to see your band playing with interactive animal ears.)

Scientific American’s blog tackles the question this week (bonus 80s sci-fi movie reference):
Brain-Machine Interfaces in Fact and Fiction

I’ve used up my Lawnmower Man reference quota for the month, so tune in in April.


FRACT is a curious combination of music studio and puzzle game, merging elements of games like Myst with the sorts of synths and pattern editors you’d expect in something like Ableton Live. You have to make sounds and melodies to solve puzzles; by the end of the game, say the creators, you’re even producing original music. The work of a small student team out of Montreal, FRACT looks like it has all the makings of an underground indie hit – at least for music nerds.

As the creators describe it:

FRACT is a first person adventure game for Windows & Mac much in the vein of the Myst titles, but with an electro twist. Gameplay boils down to three core activities: Explore, Rebuild, Create. The player is let loose into an abstract world built on sound and structures inspired by electronic music. It’s left to the player to explore the environment to find clues to resurrect and revive the long-forgotten machinery of this musical world, in order to unlock its inner workings. Drawing inspiration from Myst, Rez and Tron, the game is also influenced by graphic design, data visualization, electronic music and analog culture.

The hub of the game is a virtual studio, collecting patterns and timbres. It’s right now in the prototype phase, but it already looks visually stunning: an alien, digital world in which more-conventional step-sequencer views seem to emerge from futuristic landscapes. And you can spot Pd in the background (the free and open source patching tool, Pure Data), so libpd seems a likely foundation. That enables synths with phase modulation and classic virtual analog sounds, all modulated and generated in-game.
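
Phase modulation, for reference, is the technique behind most classic “FM” synths, and a single voice takes only a few lines of numpy. This sketch is purely illustrative – it has nothing to do with FRACT’s actual Pd patches:

```python
import numpy as np

def pm_voice(carrier_hz, mod_ratio, index, seconds=1.0, sr=44100):
    """One phase-modulation voice: a modulator sine bends the carrier's phase.
    `index` controls brightness; `mod_ratio` sets the harmonic flavor."""
    t = np.arange(int(seconds * sr)) / sr
    modulator = np.sin(2 * np.pi * carrier_hz * mod_ratio * t)
    return np.sin(2 * np.pi * carrier_hz * t + index * modulator)

# A 220 Hz voice with a 2:1 modulator: odd harmonics, hollow and reedy.
tone = pm_voice(220.0, mod_ratio=2.0, index=3.0)
```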

The developers have also published plenty of sound samples so you can experience the musical side of this. Via SoundCloud:

While never released, one place similar ideas have shown up is a prototype game inspired by Deadmau5. As in this title, two-dimensional editing screens and synth parameters are mapped to a first-person, three-dimensional environment. However, FRACT appears to take the concept much further, expanding the world, building more instruments, and actually turning those interactions into gameplay elements. The video of the Deadmau5 project – apparently done in-house for fun, and not endorsed by the mouse-headed artist:

That title was the work of a game house called Float Hybrid; music blog Synthtopia got the scoop on this in August:
http://www.floathybrid.com
Mau5Bot Sequencer Lets You Make Music In A 3D World [Synthtopia]

We’ll be watching this one develop, certainly; good luck to the team!
http://fractgame.com/


Electronic music making has had several major epochs. There was the rise of the hardware synth, first with modular patch cords and later streamlined into encapsulated controls, in the form of knobs and switches. There was the digital synth, in code and graphical patches. And there was the two-dimensional user interface.

We may be on the cusp of a new age: the three-dimensional paradigm for music making.

AudioGL, a spectacularly ambitious project by Toronto-based engineer and musician Jonathan Heppner, is one step closer to reality. Three years in the making, the tool is already surprisingly mature. And a crowd-funding campaign promises to bring beta releases as soon as this summer. In the demo video above, you can see an overview of some of its broad capabilities:

  • Synthesis, via modular connections
  • Sample loading
  • The ability to zoom into more conventional 2D sequences, piano roll views, and envelopes/automation
  • Grouping of related nodes
  • Patch sharing
  • Graphical feedback for envelopes and automation, tracked across z-axis wireframes, like circuitry

All of this is presented in a mind-boggling visual display, resembling nothing more than constellations of stars.

Is it just me, or does this make anyone else want to somehow combine modular synthesis with a space strategy sim like Galactic Civilizations? Then again, that might cause some sort of nerd singularity that would tear apart the fabric of the space-time continuum – or at least ensure we never have any normal human relationships again.

Anyway, the vitals:

  • It runs on a lowly Lenovo tablet right now, with integrated graphics.
  • The goal is to make it run on your PC by the end of the year. (Mac users hardly need a better reason to dual boot. Why are you booting into Windows? Because I run a single application that makes it the future.)
  • MIDI and ReWire are onboard, with OSC and VST coming.
  • With crowd funding, you’ll get a Win32/64 release planned by the end of the year, and betas by summer (Windows) or fall/winter (Mac).

I like this quote:

Some things which have influenced the design of AudioGL:
Catia – Dassault Systèmes
AutoCAD – Autodesk
Cubase – Steinberg
Nord Modular – Clavia
The Demoscene

Indeed. And with computer software now reaching a high degree of maturity, such mash-ups could open new worlds.

Learn about the project, and contribute by the 23rd of March, via the (excellent) IndieGoGo campaign:

http://audiogl.com


Compare the complex model a computer can use to control sound and musical pattern in real time to how we visualize it. You see knobs; you see faders that resemble mixers; you see grids; you see – bizarrely – representations of old piano rolls. The accumulated ephemera of old hardware, while useful, can be quickly overwhelmed by a complex musical creation, or can visually fail to show the musical ideas that form a larger piece. You can employ notation – derived originally from instructions for plainsong chant, scrawled for individual musicians – and quickly discover how inadequate it is for the language of sound shaping in the computer.

Or, you can enter a wild, three-dimensional world of exploded geometries, navigated with hand gestures.

Welcome to the sci-fi-made-real universe of Portland-based Christian Bannister’s subcycle. Combining sophisticated, beautiful visualizations, elegant mode shifts that move from timbre to musical pattern, and two-dimensional and three-dimensional interactions, it’s a complete visualization and interface for live re-composition. A hand gesture can step from one musical section to another, or copy a pattern. Some familiar idioms are here: the grid of notes, a la piano roll, and the light-up array of buttons of the monome. But other ideas are exploded into spatial geometry, so that you can fly through a sound, or make a sweeping rectangle or circle represent a filter.

Ingredients, coupling free and open source software with familiar, musician-friendly tools:

Another terrific video, which gets into generating a pattern:

Now, I could say more, but perhaps it’s best to watch the videos. Normally, when you see a demo video with 10 or 11 minutes on the timeline, you might tune out. Here, I predict you’ll be too busy trying to get your jaw off the floor to skip ahead in the timeline.

At the same time, to me this kind of visualization of music opens a very, very wide door to new audiovisual exploration. Christian’s eye-popping work is the result of countless decisions – which visualization to use, which sound to use, which interaction to devise, which combination of interfaces, of instruments – and, most importantly, what kind of music. Any one of those decisions represents a branch that could lead elsewhere. If I’m right – and I dearly hope I am – we’re seeing the first future echoes of a vast, expanding audiovisual universe yet unseen.

Previously:
Subcycle: Multitouch Sound Crunching with Gestures, 3D Waveforms

And lots more info on the blog for the project:
http://www.subcycle.org/


AudioGL, a project teased in videos first in April and then again last week, is a new concept in designing a user interface for real-time music creation. Visuals and sound alike are generative, with the rotating, 3D-wireframe graphics and symbolic icons representing a kind of score for live synthesized music. The tracks in the video may sound like they’ve been pre-synthesized, polished, and sampled from elsewhere, but according to the creator, they’re all produced in the graphical interface you see – what you see is what you hear.

The newest video, released this week, reveals in detail the project’s notions of how to make a 3D, live music interface work. The UI itself is similar to other graphical patching metaphors, but here, like exploding a circuit diagram in space, routings and parameter envelopes are seen and edited in a freely-rotating view in three dimensions rather than on a flat plane.

There’s a reason interfaces like this have been few. Computer displays and pointing methods tend to be heavily biased toward two-dimensional use, modeled as flat planes like pieces of paper. Working in two dimensions is simply easier; there’s no reason you can’t take another layer of parameters and represent it on a two-dimensional interface. And rotating around in 3D space can make it difficult to keep your bearings.

Those challenges, though, don’t make this less interesting – they make it juicier and more delicious as a design problem and as a stunning, futuristic musical model. Freed in three dimensions, a complex set of envelopes and parameters has room to spread out visually, making a kind of spatial score. This particular project strikes an interesting balance between traditional, iconic UI – operators are represented with graphic symbols – and more free-flowing geometry representing the sequencing and envelopes. To me, the latter is more compelling, but putting the two together may make the program more flexible and familiar to users of other music software.

What could knock you out of your chair, though, is the sheer depth of the software teased in the video. This is no simple tech demo: it’s an attempt to build an entirely new, live-synthesizing music tool from scratch in 3D. It’s like the International Space Station of music software, assembled in some void. I got a couple of tips on this today, and some are even wondering if it’s real.

It appears to be very real. Whether this particular tool is usable or not to me almost isn’t important: a spectacular failure in this arena would even be useful. Anyone waiting for some sort of “singularity” in music tech, I think it’s coming: it’s just going to be a singularity of human software ingenuity, explosive creativity and invention from independent developers. I can’t wait.

Stay tuned to find out more about this particular project.

See also the earlier video (not able to grab the embed code for some reason).

Thanks, Bodo Peeters, among others, for the tip.
