Original author: Peter Kirn


CDM and yours truly team up with Berlin arts collective Mindpirates next week for a learning event we hope will be a little different from most. The idea behind the gathering is to combine learning in some new ways. The evenings begin with more traditional instruction, as I cover, step by step, how you'd assemble beat machines, instruments, effects, and video mixers using free software (Pure Data and Processing).

But we'll go a little further, opening up sessions to hacking and jamming, and finally using the event space at Mindpirates to try out ideas on the PA and projectors. By the last night, we'll all get to play together for the public before opening things up to a party. When I've gotten to do this personally, I've gotten more out of the learning experience. Doing it here with the aim of creating useful instruments, beats, and visuals, then, makes perfect sense to me.

Working with free software in this case means that anyone can participate, without the need for special software or even the latest computers. (What we’re doing will work on Raspberry Pi, for instance, or old netbooks, perfect for turning small and inexpensive hardware into a drum machine.) No previous experience is required: everyone will get to brush up on the basics, with beginning users getting the essentials and more advanced users able to try out other possibilities in the hack sessions.

If you're within easyJet distance of Berlin, of course, we'd love to see you and jam with you. To keep this affordable for Berliners, we've priced it at 40 € total for three nights, including a meal each evening and a guest-list spot for the Saturday night party.

But I hope this is the sort of format we can try out elsewhere, too. If you have ideas of what you’d like to see in this kind of instruction – in-person events being ideal, but also perhaps in online tutorials – let us know.

Create Digital Music + Mindpirates present: Laptops on Acid
Facebook event

Pre-registration required; spots limited – Eventbrite
Register while spots are still available!

(fellow European residents, I’m as annoyed at the absence of bank transfer/EC payment at Eventbrite as you are – we’re working on an alternative, so you should email elisabeth (at) mindpirates [dot] org to register if you don’t want to use that credit card system!)


Could a piece of software make you think differently about sound? Could it reflect ideas, the culture of listening?

The developers of the SUFI series of plug-ins seem to think so. In place of screencasts showing which knob to turn which way, they head with a video crew to Morocco. The “instruction” might be about the value of reflection or call to prayer, about living as much as how to use a tool. You can see the first two examples: a meditation on the idea of daily interruptions in the soundscape coming from God, and a collection of electronic drones set to a beautifully-shot backdrop. The interfaces are rendered in graphics and (for the vast majority of us) a foreign language, and instead of reverting to the conventions of plug-in design, they assimilate ideas from another culture about tonality and function.

The plug-ins will be released for Max for Live on the 8th of May, with VST plug-ins to follow later. (Some versions of the Max for Live plug-ins are available now – links at bottom.) The collection includes:

  • DEVOTION, lowering your volume five times a day at the time of call to prayer
  • A drone machine (in the second video, sounding quite nice)
  • Four soft synths tuned to Arabic maqam scales. (They describe these as “North African maqams,” but I believe the tuning should be consistent with the use of maqam elsewhere around the Mediterranean and Arab world; a rough tuning sketch follows this list.)
  • One drum machine amidst the synths, Palmas, with a hand-clapping UI (see screenshot).
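For the curious, here's roughly what "tuned to maqam" can imply in practice. This is a hedged sketch, not the plug-ins' actual code: the scale degrees, the choice of a Bayati-like scale, and the tonic are my own illustrative assumptions, but they show how quarter-tone steps, expressed in cents, become frequencies that a plain 12-TET synth can't reach.

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Cents above the tonic for a Bayati-like scale; the 150-cent step is
       the three-quarter-tone ("half-flat") second that 12-TET instruments
       normally can't play. These values are illustrative, not the plug-ins'. */
    const double cents[] = {0, 150, 300, 500, 700, 800, 1000, 1200};
    const double tonic_hz = 146.83;  /* D3, an arbitrary choice of tonic */
    const int n = sizeof cents / sizeof cents[0];

    for (int i = 0; i < n; i++)
        printf("degree %d: %6.0f cents -> %8.2f Hz\n",
               i, cents[i], tonic_hz * pow(2.0, cents[i] / 1200.0));
    return 0;
}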

You have a week to practice reading Neo-Tifinagh Amazigh script.

Updated: There are in fact no references in the videos here to Sufism, but the creators respond to questions about why they selected this name on their FAQ. As with the videos above, collaborations and friendship inspired their thinking. They write:

The title is an homage to several Moroccan Sufi musicians we’ve worked with over the years who influenced our thinking about musicianship & sound itself, as well as a way of foregrounding the complex but largely unremarked relationship between faith and technology. We’re fascinated with how software and digital environments encode cultural values and beliefs by conditioning choices and framing possibilities. For example, if Apple is a secular religion, selling contemporary magic, then should that change the way we feel about – and engage with – its operating system? The spirit of Sufi aphorisms, we hope, is manifest in these plug-ins. At a literal level, many of the roll-over infotexts come from Sufi verse.

Apart from being an interesting “cross-cultural” exercise, though, these plug-ins can serve as a reminder of two things. First, design choices are constrained only by your imagination. Aside from any perceived cultural values, you can really make software do, theoretically, anything – and make any sound. Convention can be a useful tool, but it can also become a prison. Second, the creators consider VST compatibility a way to reach users in the Middle East and Africa. Whether this particular effort is successful or not, those are massive and growing audiences. (To anyone reading there, by the way, hello from way up at this end of the Northern Hemisphere!) Of course, these plug-ins will be just as foreign to nearly all of that audience as they are to, say, producers in Melbourne or London, but as we watch the videos from Morocco, it’s worth considering just how small our Internet-connected planet is – and how wonderfully vast the spaces between us, and the possibility contained there, remain.

Software can serve as a medium for collaboration, as in this case, tying together a variety of backgrounds, from traditional producer to Amazigh musician. The Amazigh people, connecting modern Arabic culture and language with Phoenician roots (much like my own Lebanese ancestry), represent a rich musical practice. Just as the remote, historical world of J.S. Bach might direct a modern software plug-in, these traditions can, too – and in living fashion.

The work is led by Jace Clayton (DJ Rupture), with programmer Bill Bowen, designer Rosten Woo, Amazigh musician Hassan Wargui, and videographers Maggie Schmitt and Juan Alcón Durán. The creators report that “a physical Sufi Plug Ins Forever Box is expected for late 2012, and Clayton is currently preparing an installation version of the Sufi Plug Ins.”

Mark your calendar for next Tuesday, or join the mailing list at the site. More information:

http://www.beyond-digital.org/sufiplugins/

Thanks, Jesse Engel!

As seen on maxforlive.com (thanks, David):

Devotion: http://www.maxforlive.com/library/device/1140/devotion
Drone: http://www.maxforlive.com/library/device/1139/drone
Palmas: http://www.maxforlive.com/library/device/1138/palmas
Hijaz: http://www.maxforlive.com/library/device/1137/hijaz
Bayati: http://www.maxforlive.com/library/device/1136/bayati
Saba: http://www.maxforlive.com/library/device/1134/saba
Khomasi: http://www.maxforlive.com/library/device/1133/khomasi


What you’re watching in the video above doesn’t involve cameras or motion sensors. It’s the kind of brain-to-machine, body-to-interaction interface most of us associate with science fiction. And while the technology has made the occasional appearance in unusual, niche commercial applications, it’s poised now to blow wide open for music – open as in free and open source.

Erasing the boundary between contracting a muscle in the bio-physical realm and producing electronic sound in the virtual realm is what Xth Sense is all about. Capturing biological data is all the rage these days, seen primarily in commercial fitness products, but it's also part of a growing trend toward making our computers accessories for our bodies as well as our minds. (Or is that the other way around?) This goes one step further: the biological becomes the interface.

Artist and teacher Marco Donnarumma took first prize with this project in the prestigious Guthman Musical Instrument Competition at Georgia Tech in the US. Born in Italy and based in Edinburgh, Scotland, Marco explains to us how the project works and why he took it up. It should whet your appetite as we await an open release for other musicians and tinkerers to try next month. (By the way, if you’re in the New York City area, Marco will be traveling to the US – a perfect chance to collaborate, meet, or set up a performance or workshop; shout if you’re interested.)

Hypo Chrysos live at Trendelenburg AV Festival, Gijon, Spain, December 2011.

CDM: Tell us a bit about yourself. You’re working across disciplines, so how do you describe what you do?

Marco: People would call me a media and sound artist. I would say what I love is performing, but at the same time, I'm really curious about things. So, most of the time I end up coding my own software, developing devices, and now even designing wearable tech. For some years now I've worked only with free and open source tools, and this is naturally reflected in what I do and how I do it. (Or at least I hope so!)

I just got back from Atlanta, US, where the Xth Sense (XS) was awarded first prize in the Margaret Guthman New Musical Instrument Competition, as what they named the “world’s most innovative new musical instrument.” [See announcement from Georgia Tech.]

It's an encouraging achievement and I'm still buzzing, especially because the other 20 finalists all presented great ideas. Overall, it has been an inspiring event, and I warmly recommend that musicians and inventors participate next year. My final performance:

Make sure to use a proper sound system [when watching the videos]; most of the sound spectrum lives between 20 and 60 Hz.

Music for Flesh II live at Georgia Tech Center for Music Technology, Atlanta, USA, February 2012. Photo courtesy the artist.

You’re clenching your muscles, and something is happening – can you tell us how this XS system works?

Marco: My definition of it goes like “a biophysical framework for musical performance and responsive milieux.” In other words, it is a technology that extends some intrinsic sonic capabilities of the human body through a computer system that senses the physical energy released by muscle tissues.

I started developing it in September 2011 at the SLE, the Sound Lab at the University of Edinburgh, and got it ready to go in March 2011. It has evolved a lot in many ways ever since.

The Xth Sense wearable biosensors by Chris Scott.

The XS is composed of custom biophysical sensors and custom software.

At the onset of a muscle contraction, energy is released in the form of acoustic sound. That is to say, similarly to the string of a violin, each muscle tissue vibrates at specific frequencies and produces a sound (called the mechanomyographic signal, or MMG). It is not audible to the human ear, but it is indeed a soundwave that resonates from the body.

The MMG data is quite different from the locative data you can gather with accelerometers and the like; whereas the latter reports the consequence of a movement, the former directly represents the energy impulse that causes that movement. If you add to this a high sampling rate (up to 192 kHz if your sound card supports it) and very low latency (measured at 2.3 ms), you can see why the responsiveness of the XS can be highly expressive.

The XS sensors capture the low-frequency acoustic vibrations produced by a performer’s body and send them to the computer as an audio input. The XS software analyzes the MMG in order to extract the characteristics of the movements, such as dynamics of a single gesture, maximum amplitude of a series of gestures in time, etc.

These are fed to some algorithms that produce the control data (12 discrete and continuous variables for each sensor) to drive the sound processing of the original MMG.

Eventually, the system plays back both the raw muscle sounds (slightly transposed to be more audible, to around 50-60 Hz) and the processed muscle sounds.
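Ed.: To make that signal chain a little more concrete, here's a minimal sketch of the kind of per-block analysis Marco describes – RMS as a stand-in for gesture dynamics, plus a running peak over a series of gestures. It's illustrative C under my own assumptions, not the Xth Sense code, which is implemented in Pure Data. -PK

#include <math.h>
#include <stddef.h>

typedef struct {
    float rms;   /* energy of the current block: gesture "dynamics" */
    float peak;  /* running maximum amplitude across a series of gestures */
} mmg_features;

/* Analyze one block of mono MMG samples; f->peak should start at 0. */
void mmg_analyze(const float *block, size_t n, mmg_features *f) {
    double sum_sq = 0.0;
    for (size_t i = 0; i < n; i++) {
        sum_sq += (double)block[i] * block[i];
        if (fabsf(block[i]) > f->peak)
            f->peak = fabsf(block[i]);
    }
    f->rms = (float)sqrt(n ? sum_sq / (double)n : 0.0);
}

Values like these would then be scaled into the control data Marco mentions (12 discrete and continuous variables per sensor) that drive processing of the raw muscle sound.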

I like to term this model of performance biophysical music, in contrast with biomusic, which is based on the electrical impulses of muscles and brainwaves.

By contracting muscles in different ways (which has a different meaning than simply “moving”), one can create and sculpt musical material in real time. One can design a specific gesture that produces a specific sonic result, what I call a sound-gesture. These can be composed in a score, improvised, or improvised over a more or less fixed score.

The XS software also has a sensing, sequencing timeline: with a little machine learning (implemented just a few days ago), the system understands when you're still or moving, when you're being fast or slow, and can use this data to change global parameters or functions, or to play with the timing of events. For example, the computer can track your behaviour in time and wait for you to stop whatever you're doing before switching to a different set of functions.
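Ed.: Again purely as a sketch, and not Marco's implementation: one simple way to approximate that "still or moving, fast or slow" tracking is a smoothed envelope plus thresholds, something like this. The thresholds and coefficients here are made up. -PK

#include <math.h>

typedef enum { STILL, MOVING_SLOW, MOVING_FAST } xs_activity;

typedef struct {
    float envelope;      /* smoothed amplitude envelope */
    float attack_coef;   /* smoothing when the signal rises, e.g. 0.5 */
    float release_coef;  /* smoothing when the signal falls, e.g. 0.05 */
} xs_tracker;

/* Feed in one block's RMS; get back a rough behaviour classification.
   Thresholds are arbitrary and would be calibrated per performer. */
xs_activity xs_update(xs_tracker *t, float block_rms) {
    float prev = t->envelope;
    float coef = (block_rms > t->envelope) ? t->attack_coef : t->release_coef;
    t->envelope += coef * (block_rms - t->envelope);

    if (t->envelope < 0.02f) return STILL;
    return (fabsf(t->envelope - prev) > 0.01f) ? MOVING_FAST : MOVING_SLOW;
}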

The XS sensors are wearable devices, so the computer can be forgotten in a corner of the stage; the performer has complete freedom on stage, and the audience is not exposed to the technology, but rather to the expressivity of the performance. What I like most about the XS is that it is a flexible and multi-modal instrument. One can use it to:

  • capture and playback acoustic sounds of the body,
  • control audio and video software on the computer, or
  • capture body sounds and control them through the computer simultaneously.

This opens up an interesting perspective on the applications of the XS to musical performance, dance, theatre and interaction design. The XS can also be used only as a gestural controller, although I never use it exclusively this way. We have thousands of controllers out there.

Besides, I wanted the XS to be accessible, usable, hackable and redistributable. Unfortunately, the commercial products dealing with biosignals are mostly not cheap and — most importantly — closed to the community. See the Emotiv products (US$299 Neuro Headset, not for developers), or the BioFlex (US$392.73). One could argue that the technology is complex, and that's why those devices are expensive and closed. That could make sense, but who says we can't produce new technologies that openly offer similar or new capabilities at a much lower cost?

The formal recognition of the XS as an innovative musical instrument and the growing effort of the community in producing DIY EEG, ECG and Biohacking devices are a clear statement in this sense. I find this movement encouraging and possibly indispensable nowadays, as the information technology industry is increasingly deploying biometric data for adverts and security systems. For the geeky ones there are some examples in a recent paper of mine for the 2012 CHI workshop on Liveness.

For those reasons, the XS hardware design has been implemented in the simplest form I could think of; the parts needed to build an XS sensor cost about £5 altogether, and the schematics look purposely dumb. The sensors can be worn on any part of the body. I worked with dancers who wore them on the neck and legs, a colleague stuck one to his throat to capture the resonances of his voice, and I use them on the arms or to capture the pumping of the blood flow and the heart rate.

The XS software is free, based on Pd, aka Pure Data, and comes with a proper, user-friendly graphical user interface (GUI) and its own library, which includes over one hundred objects with help files. It is developed on Linux and is Mac OS X compatible; I'm not developing for Windows, but some people have got it working there too. A big thumbs up goes to our wonderful Pd community; if I had not been reading and learning through the Pd mailing list for the past five years, I would never have been able to code this stuff.

The Xth Sense software Graphical User Interface. Built in Pd.

The public release of the project will be in April. The source code, schematics, and tutorials will be freely available online, and there will be DIY kits for the lazier ones. I'm already collecting orders for the first batch of DIY kits, so if anybody is interested, please get in touch:
http://marcodonnarumma.com/contact

I do hope to see the system hacked and extended, especially because the sensors were initially built with the support of the folks at the Dorkbot ALBA/Edinburgh Hacklab. I'm also grateful to the community around me, friends, musicians, artists, devs and researchers, for contributing to the success of the project by giving feedback, inspiring and sharing (you know who you are!).

Thanks, Marco! We’ll be watching!

More on the Work

http://marcodonnarumma.com/works/xth-sense/
http://marcodonnarumma.com/works/music-for-flesh-ii/
http://res.marcodonnarumma.com/blog/

And the Edinburgh hack lab:
http://edinburghhacklab.com/

Biological Interfaces for Music

There isn’t space here to recount the various efforts to do this; Marco’s design to me is notable mainly in its simplicity and – hopefully, as we’ll see next month – accessibility to other users. I’ve seen a number of brain interfaces just in the past year, but perhaps someone with more experience on the topic would like to share; that could be a topic for another post.

Entirely unrelated to music, but here’s the oddest demo video I’ve seen of human-computer interfacing, which I happened to see today. (Well, unrelated to music until you come up with something this crazy. Go! I want to see your band playing with interactive animal ears.)

Scientific American’s blog tackles the question this week (bonus 80s sci-fi movie reference):
Brain-Machine Interfaces in Fact and Fiction

I’ve used up my Lawnmower Man reference quota for the month, so tune in in April.


Pure Data (Pd) is already a free, convenient tool for making synths, effects, sequencers, and other musical generators. But imagine stripping away all the things that tie it to a platform – UI, specific hardware support – so it will run just about anywhere, on anything, in any context.

That's what libpd, a free, embeddable, open source (BSD) tool for making interactive music, does. Coders can take their favorite language and their favorite platform, and just plug in the power of Pd. They hardly have to know anything about Pd – they can let an intrepid Pd patcher create the interactive sound effects and dynamic music for their game and just drop a patch into their assets.
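To make that concrete, here's a minimal sketch of what the host-side code can look like against libpd's C API. The patch name, channel counts, and the stub processing loop are my own assumptions; in a real app the processing call sits inside your audio callback, and error handling is omitted.

#include "z_libpd.h"

#define TICKS 1              /* one tick = one Pd block of 64 frames */

static float inbuf[64 * 2];  /* stereo input for one tick, interleaved */
static float outbuf[64 * 2]; /* stereo output for one tick, interleaved */

int main(void) {
    libpd_init();
    libpd_init_audio(2, 2, 44100);       /* 2 in, 2 out, 44.1 kHz */

    /* turn on DSP -- the equivalent of the message [; pd dsp 1( */
    libpd_start_message(1);
    libpd_add_float(1.0f);
    libpd_finish_message("pd", "dsp");

    void *patch = libpd_openfile("synth.pd", ".");  /* hypothetical patch */

    /* in a real app, this runs inside the audio callback */
    for (int i = 0; i < 1000; i++)
        libpd_process_float(TICKS, inbuf, outbuf);  /* render 64 frames */

    libpd_closefile(patch);
    return 0;
}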

One of the most powerful applications for this is the ability to add interactive music and sound to mobile apps, on iOS and Android, without writing and testing a bunch of custom DSP code. And that has enabled the use of libpd in apps as successful as Inception: The App. With music by Hans Zimmer and a custom “dream” experience created by RjDj, that app racked up millions of downloads in under a couple of months, and then, far from sitting idle on the app launch screen, went on to clock in over a century of user “dreamtime.”

Okay, so, you’re sold. You want to see what this thing can do, and maybe try it out, and you’re wondering where to start. So, here’s some good news: there’s a new site and a new book to get you going.

The site: libpd.cc

libpd has a new home on the Web, both in the form of a new GitHub repository to organize all the code, docs, and samples, and a site that brings together a showcase of what apps built with libpd can do and points you to where to learn more. The single destination is now hosted here by CDM:

http://libpd.cc

I built that site, so please, if there’s anything you’d like to see or you’ve got your own work created with libpd, let me know about it.

Even with just a few key highlights of apps built with libpd selected, it's impressive what people are already doing with this tool:

libpd Showcase

The book, and a chat with its author

A new book published by O’Reilly focuses on building mobile apps using libpd, for iOS and Android. (iPhone, iPod touch, Android phones and tablets, and yes, even that “new iPad” introduced yesterday are therefore all fair game.)

You can read a section of the book right here on CDM, for a taste of what’s in store:
How to Make a Music App for iOS, Free, with libpd: Exclusive Book Excerpt

It’s an exceptional, comprehensive look at development using libpd, covering iOS and Android, but also a complete look at the libpd API and how to use it. For Pd patchers just getting started with iOS and Android, it includes all of the basics of how to use libpd in your mobile development environment. For mobile developers new to Pd and patching, it makes clear how you’d communicate with Pd, so you can either dive into Pd yourself or properly interface with patches made by musicians, composers, and sound designers with whom you may be collaborating. It’s an ideal title for anyone interested in taking a game and giving it a more dynamic soundtrack – in sound effects, music or both – or for people building mobile musical instruments and effects, sonic toys, interactive albums, or, really, anything at all that involves sound or music. Since it walks you through the entire development experience, you can sit down with it in the course of a few evenings, and get a complete picture of how to integrate Pd with your development workflow.

Dr. Peter Brinkmann, the principal developer of libpd, is the author of the title. I asked Peter to explain a little bit about the book, who it’s for (hint: you!), and what’s in it (hint: stuff you want to read!) …

CDM: How did this book come about? And the book process really helped drive improvements to libpd, too?

Peter B.: Shawn Wallace, an editor at O’Reilly, contacted me last summer and asked whether I would be interested in writing a short book on libpd. I was interested, and so I talked to my [Google] manager (“No conflict — we all have time-consuming hobbies!”) as well as a couple of colleagues who had written books for O’Reilly. They made a token attempt to dissuade me, but it was clear that they had enjoyed writing their books, and they seemed quite proud of the result, too.

Once I had made up my mind to write a book, the next question was whether to self-publish or go with O’Reilly. Self-publishing is a viable option these days, but then I decided that I really wanted an animal on the cover. Besides, I had never written a book before, and having the support of O’Reilly’s editorial staff made the prospect seem less daunting.

The first draft was done in mid-November, but at that time it was basically science fiction because it presented libpd the way I wanted it to be, not the way it was at the time.

So, after the bulk of the writing was done, libpd needed to be revised so that it would actually be in agreement with the book. In particular, Rich Eakin and I rewrote the iOS components for better performance and usability. That delayed the book by a month or so, which turned out to be a great stroke of luck, because that was when I discovered that Xcode 4.2 had changed the entire development model by introducing automatic reference counting, instantly rendering existing texts obsolete. That included my chapter on iOS, and so I had to sit down and rewrite it.

After that, the rest happened rather quickly — getting reviews, revising the draft, going through the production process. O’Reilly’s toolchain is remarkably efficient, using asciidoc and docbook in a Subversion repository. The editorial staff is great, too. I’m amazed to see how quickly it all came together.

How did you approach writing the book?

For the first draft, I just imagined that I was teaching a class on libpd. When you’re lecturing in front of an audience, you don’t have time to polish every sentence; you just have to talk and maintain some sense of momentum. That approach helps a lot when facing a blank page. After that, it’s many, many rounds of revisions to eliminate weak or redundant sentences.

For the sample code, I picked one project that uses all major components of libpd. That provided a natural progression from idea to completion, while touching on all important points in their proper context. I'm basically providing running commentary on my thought process when making an app, including common mistakes and pitfalls. This way, readers will know how to recognize and work around most problems.

Another trick is to write more than necessary. The first draft contained a lot of gratuitous editorializing. Those parts were never meant to make it into the finished text, but they were fun to write and they kept me going when I wasn’t quite sure what to write next.

Who is it for?

The book explains how to patch for libpd, and how to write apps with libpd, with special emphasis on the interface between Pd patches and application code. It’s for mobile developers who want to add real-time audio synthesis to their projects, as well as sound designers who want to deploy their work on mobile devices. It’s light on prerequisites; if you know how to write a basic app for Android or iOS, you’re ready to read the book.

Ed.: I'd add to that, given that there are such great tutorials on app development for Android and iOS – many of them free, including some very worthwhile documentation from Google and Apple – if you've messed with Pd, you should give the book a try. And if you haven't messed with Pd, this could be a great excuse. This book won't teach you Pd, but it'll make very clear how to glue everything together. -PK
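One more aside: to give a flavor of that patch/app boundary in the plainest possible terms, here's a hedged sketch against libpd's C API (the book itself works through the Objective-C and Java wrappers). The receiver names are made up for illustration.

#include "z_libpd.h"

/* The app pokes named receivers that the patch exposes via [r volume],
   [r trigger], and so on; the names here are hypothetical. */
void set_volume(float v) { libpd_float("volume", v); }   /* -> [r volume]  */
void trigger_note(void)  { libpd_bang("trigger"); }      /* -> [r trigger] */

/* The patch can talk back, too: bind a symbol, then values sent from
   [s toApp] inside the patch arrive in a hook you register. */
void on_float(const char *source, float value) {
    /* react to values coming out of the patch */
    (void)source; (void)value;
}

void setup_hooks(void) {
    libpd_set_floathook(on_float);
    libpd_bind("toApp");
}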

Why does a book like this matter? What do you hope will come out of it?

I hope that the book will help popularize real-time procedural audio, in games and other apps. I'm thrilled to see all the projects that use libpd, and I hope that the book will help people create even more awesomeness of this kind. One thing I only fully realized when writing the book is that libpd lets developers use DSP code like a media file: an audio developer creates a Pd patch, and the app developer just drops it into the resources of the app and loads and triggers it as needed. I guess this was implicit in a blog post I wrote on workflow and prototyping a year ago, but I think the DSP-as-media angle is even more powerful. I hope that the book will bring this out.

The book project has already improved libpd. Whenever I faced the choice between fixing an awkward bit of code or explaining the awkwardness in the book, I chose to fix the code. That took care of all the little things that were sort of bothering me but didn’t seem significant enough to spend time on. It also gave us a deadline for a number of related things that we wanted to do, such as migrating to GitHub and launching the new website, libpd.cc. Ed.: Cough. Yes, glad that gave me that deadline – and thanks to Peter B. for the extra push! -PK

Congrats to Peter on his first animal-on-a-cover! It’s really a great book: you read it, and feel like making more new things, inventing new creations that produce sound and music. And that’s a very good thing.


As Max for Live has matured, this tool for extending the functionality of Ableton Live has played host to a growing wave of brilliant custom tools – enough so that it can be hard to keep track. This month saw a few that deserve special mention. In particular, two tools help make MIDI mapping and automation recording easier in Live, and point the way for what the host itself could implement in a future update. (Live 9, we’re looking at you.) And in a very different vein, from Max for Live regular Protofuse, we see an intriguing alternative approach to sequencing.

Clip Automation does something simple: it patches a limitation in Live itself, by allowing you to record mapped automation controls directly in the Session View clips. (As the developer puts it, it grabs your “knob-twisting craziness in Session View.”) The work of Tête De Son (Jul), it’s an elegant enough solution that I hope the Abletons take note.

Clip Automation

Mapulator goes even further, re-conceiving how mapping in general works in Ableton – that is, how Live pairs a change in an input (like a knob) with a change in a parameter (like a filter cutoff). Live does allow you to set minimum and maximum mappings, and to reverse the direction of those mappings. But the interpolation between the two is linear. Mapulator allows you to ramp in curves, or even up and down again.

There’s more: you can also control multiple parameters, each at different rates. And that can be a gateway into custom devices, all implemented in control mappings. BentoSan writes:

For example, if you wanted to create a delay effect that morphs into a phaser, then cuts out and finally morphs into a reverb with an awesome freeze effect, you would be able to do this with just a single knob…
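Mapulator's internals aren't shown here, but the general idea is easy to sketch: instead of a straight line between minimum and maximum, the incoming control value is pushed through a breakpoint curve. The breakpoints below are illustrative, not Mapulator's.

/* Map a normalized control value x (0..1) through a breakpoint curve
   instead of a straight line. Breakpoints must be sorted by .in. */
typedef struct { float in, out; } breakpoint;

float map_curve(float x, const breakpoint *bp, int n) {
    if (x <= bp[0].in) return bp[0].out;
    for (int i = 1; i < n; i++) {
        if (x <= bp[i].in) {
            float t = (x - bp[i - 1].in) / (bp[i].in - bp[i - 1].in);
            return bp[i - 1].out + t * (bp[i].out - bp[i - 1].out);
        }
    }
    return bp[n - 1].out;
}

/* Example: a cutoff that rises and then falls again across a single 0..1
   knob sweep -- something a plain linear min/max MIDI mapping can't do. */
static const breakpoint cutoff_curve[] = { {0.0f, 0.0f}, {0.6f, 1.0f}, {1.0f, 0.3f} };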

Again, this seems to me not just a clever Max for Live hack, but an illustration of how Ableton itself might work all the time, in that it’s a usable and general solution to a need many users have. Sometimes the itch Max for Live patchers scratch is an itch other people have, too.

Lots of additional detail and the full download on the excellent DJ TechTools:
Mapulator: An Advanced MIDI Mapping Tool for Ableton

Protoclidean: We've seen Euclidean rhythms many times before, but this takes the notion of these evenly-spaced rhythmic devices into a novel sequencer. Developed by Julien Bayle, aka artist Protofuse, the Max for Live device is also a nice use of JavaScript in Max patching. See it in action in the video above; a quick sketch of the Euclidean idea itself follows the feature list below. There are custom display options for added visual feedback, and whereas we've commonly seen Euclidean rhythms used for percussion, the notion here is melodic gestures. Additional features:

  • Eight channels
  • Independent pitch, velocity, and offset controls
  • Scale mapping
  • For percussion, map to General MIDI drum maps (Eep – darn you, English, we’re using the word “map” a lot!)
  • Randomization
  • MIDI thru, transport sync, more…
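As promised above, here's a quick sketch of the Euclidean idea itself: spreading k onsets as evenly as possible over n steps. This is the general technique (patterns may come out as rotations of the textbook versions), not Protoclidean's own JavaScript.

#include <stdio.h>

/* Fill pattern[0..n-1] with a Euclidean rhythm: k onsets spread as evenly
   as possible over n steps. E(3,8) comes out as the familiar tresillo. */
void euclid(int k, int n, int *pattern) {
    for (int i = 0; i < n; i++)
        pattern[i] = ((i * k) % n) < k ? 1 : 0;
}

int main(void) {
    int pat[8];
    euclid(3, 8, pat);
    for (int i = 0; i < 8; i++) putchar(pat[i] ? 'x' : '.');
    putchar('\n');   /* prints: x..x..x. */
    return 0;
}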

More information:
http://designthemedia.com/theprotoclidean

Also, if you’re looking for more goodness to feed your Live rig, Ableton has added a new section to their own site called Library. You can find specific Max for Live content in that area, as well:
http://www.ableton.com/library
http://www.ableton.com/library/tags/mfl/

This is in addition to the community-hosted, community-run, not-officially-Ableton Max for Live library, which is the broadest resource online for Max for Live downloads:
http://maxforlive.com/library/


FRACT is a curious combination of music studio and puzzle game, merging elements of games like Myst with the sorts of synths and pattern editors you’d expect somewhere like Ableton Live. You have to make sounds and melodies to solve puzzles; by the end of the game, say the creators, you’re even producing original music. The work of a small student team out of Montreal, FRACT looks like it has all the makings of an underground indie hit – at least for music nerds.

As the creators describe it:

FRACT is a first person adventure game for Windows & Mac much in the vein of the Myst titles, but with an electro twist. Gameplay boils down to three core activities: Explore, Rebuild, Create. The player is let loose into an abstract world built on sound and structures inspired by electronic music. It’s left to the player to explore the environment to find clues to resurrect and revive the long-forgotten machinery of this musical world, in order to unlock its inner workings. Drawing inspiration from Myst, Rez and Tron, the game is also influenced by graphic design, data visualization, electronic music and analog culture.

The hub of the game is a virtual studio, collecting patterns and timbres. It's in prototype phase right now, but it already looks visually stunning: an alien, digital world in which more-conventional step-sequencer views seem to emerge from futuristic landscapes. And you can spot Pd (the free and open source patching tool, Pure Data) in the background, so libpd seems a safe bet. That enables synths covering phase modulation and classic virtual analog sounds, all modulated and generated in-game.
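Phase modulation itself is simple enough to sketch: the output of one oscillator is added directly to the phase of another. The snippet below is the textbook idea with made-up parameter values, not FRACT's actual Pd patches.

#include <math.h>

#define TWO_PI 6.283185307179586

/* One output sample of phase modulation at time t (seconds): a modulator
   sine is added to the carrier's phase; index sets how far the phase is
   pushed, i.e. how bright the timbre gets. */
float pm_sample(double t, double carrier_hz, double mod_hz, double index) {
    double mod = index * sin(TWO_PI * mod_hz * t);
    return (float)sin(TWO_PI * carrier_hz * t + mod);
}

/* e.g. filling a buffer at 44.1 kHz (values here are arbitrary):
   for (int i = 0; i < n; i++) buf[i] = pm_sample(i / 44100.0, 220.0, 440.0, 2.0); */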

The developers have also published plenty of sound samples so you can experience the musical side of this. Via SoundCloud:

Though never released, one place similar ideas have shown up is a prototype game inspired by Deadmau5. As in this title, two-dimensional editing screens and synth parameters are mapped onto a first-person, three-dimensional environment. However, FRACT appears to take the concept much further, expanding the world, building more instruments, and actually turning those interactions into gameplay elements. The video of the Deadmau5 project – apparently done in-house for fun and not endorsed by the mouse-headed artist:

That title was the work of a game house called Float Hybrid; music blog Synthtopia got the scoop on this in August:
http://www.floathybrid.com
Mau5Bot Sequencer Lets You Make Music In A 3D World [Synthtopia]

We’ll be watching this one develop, certainly; good luck to the team!
http://fractgame.com/


Above: Cycling '74's just-released video highlights enhanced audio quality; our friend, French artist Protofuse, has a go at working with the beta and showing off the new user interface. (See C74's official take on the new UI below.)

Max 6 in Public Beta; For Home-brewing Music Tools Graphically, Perhaps the Biggest Single Update Yet

Just because a music tool fills your screen with tools and options doesn't mean it's any easier to realize your ideas. From the beginning, the appeal of Max – as with other tools that let you roll your own musical tools from a set of pre-built building blocks – has been the blank canvas.

Max 6 appears to aim to make the gap between your ideas and those tools still narrower, and to make the results more sonically pleasing. The reveal: it could also change how you work with patches in performance and production. I was surprised when early teasers failed to impress some users, perhaps owing to scant information. Now, Max 6 is available in public beta, and the details are far clearer. Even if Max 5 was the biggest user interface overhaul in many years, Max 6 appears to be the biggest leap in actual functionality.

It's what I'd describe as a kitchen-sink approach, adding to every aspect of the tool, so there are almost certain to be some things here you won't use. What could appeal to new users, though, are, I think, two major changes.

More visual patching feedback and discoverability. First, building on what we saw in Max 5, Max's approach is to provide as much visual information as possible about what you're doing. It's probably the polar opposite of what we saw earlier this week in something like the live-coding environment Overtone: Max's UI is actively involved with you as you patch. There are visual tools for finding the objects you want, then visual feedback to tell you what those objects do, plus an always-visible reference bar and rewritten help. This more-active UI should make Max more accessible to people who like this sort of visual reference as they work. No approach will appeal to everyone – some people will find all that UI a bit more than they like – but Max's developers appear to be exploiting interactive visual patching as much as they can.

Multiple patches at once. New objects for filters and data, a 64-bit audio engine, and low-level programming are all well and good. But the change that may more profoundly impact users and workflow is the way Max 6 handles multiple patches. Max – and by extension Pd – has in the past made each patch operate independently. Sound may stop when you open a patch, and there's no easy or fully reliable way to use multiple patches at once. (Compare, for example, SuperCollider, which uses a server/client model that lacks this limitation.) That changes with Max 6: you can now operate multiple patches at the same time, mix them together with independent volume, mute, and solo controls, and open and close them without interrupting your audio flow. (At least one reader notes via Twitter that you can open more than one patch at once – I'd just say this makes it better, with more reliable sound and essential mixing capabilities.) Update: since I mentioned Pd, Seppo notes that the pd~ object provides similar functionality with regard to multiple patches and multi-core operation. This has been an ongoing discussion in the libpd group, so I think we'll revisit that separately!

One upshot of this change: some users have turned to Ableton Live just to host multiple patches. For users whose live performance set involves Ableton, that’s a good thing. But it could be overkill if all you want to do is bring up a few nifty patches and play with them. Now, I think we’ll start to see more people onstage with only Max again. (Check back in a few months to see if I’m right.)

Here’s an overview of what’s new:

  • Discoverability: A “wheel” makes the mysterious functions of different objects immediately visible; Object Explorer makes them easier to find; and a new help and reference sidebar keeps documentation close at hand.

  • 64-bit audio engine

  • Open multiple patches, solo and mute them, open and close them without stopping audio, mix audio between them with independent volume, and take advantage of multiple processors with multiple patches.

  • Low-level building blocks: You don’t get new synth objects, but you could build them yourself. New low-level data-crunching goodness works with MSP audio, Jitter matrices, and OpenGL textures.

  • More JavaScript: An overhauled JavaScript engine makes JS scripting faster and more flexible, and there’s a proper text editor with syntax highlighting (though, of course, you may still prefer your own).

  • New visuals: Vector graphics and “HTML5 Canvas-like” UI scripting (though to me it’s a shame this isn’t just the HTML5 Canvas). There are also massively-expanded Jitter powers, but those are best left to our sister site Create Digital Motion.

  • Filters: New filter-visualizing tools for audio filter construction and manipulation.

  • Dictionary data type and associated objects let you describe information in a more structured way (all kinds of potential here from control to composition)

  • Projects now let you organize data, media, and scripts in the manner more associated with conventional development environments

  • What about Ableton? No news on that front, but I expect more soon. Max for Live users will at the very least get the advantages above, since Max for Live is really Max inside Live.

Looking over all that Max does, I have to say, I’m really amazed. I wonder if computer musicians ever pause to consider how fortunate we are. Even if this isn’t the tool for you, its availability – compounded by the availability of a range of other tools – is itself worth reflection.

Max is a program that shouldn’t exist, doing a number of things it shouldn’t do, for a user base that shouldn’t exist, doing things they shouldn’t be doing.

It doesn’t make sense that you could maintain a commercial project for this kind of audience, that you’d wind up with something this mature and powerful that had a continuous lineage stretching back to the 1980s. It doesn’t make sense that musicians would embrace such a tool and produce invention. The only explanation is sheer love.

Then, even as Max reaches new heights, some of the alternatives you have for making your own music tools are simultaneously growing by leaps and bounds. They provide very different approaches to music making (compare Overtone and SuperCollider, or Pd and libpd, or AudioMulch, or new Web audio tools). There really aren’t many fields that have this kind of choice, free and commercial, in their medium. In science and engineering, there’s private and public funding, producing some amazing tools but nothing with this kind of meeting of power and accessibility. There’s just something about music.

The fact that Cycling ‘74 can maintain a business model – just as open source projects maintain volunteer contributions – is a testament to sheer passion and love for music, and a commitment to perpetually re-imagining how that music is made from an atomic level up. There was a wonderful piece on C creator and UNIX co-creator Dennis Ritchie, whom I remembered yesterday, that observed that what he did was to do what others said couldn’t be done. From Max itself to what people make with it, I think that fits nicely.

So, have a look at the public beta, and let us know what you think. The release of Max 6 has caused more people to ask what this means for Pd and other tools, or even whether to patch things from scratch at all, but I’ll leave that question to a bit later. (I do have my own opinion about which tool fits which circumstance and user, but that’s best left to a separate discussion.) For now, you can try Max yourself and see what the fuss is about. If it doesn’t fit your means of music-making, know that you have a wide array of other options – pre-built to low-level code to old-fashioned tape-and-mic approaches, and everything in between. Go out and listen and see what you discover.

http://cycling74.com/downloads/max-6-public-beta/
