

Original author: Chris McDonnell

Encyclopedia Pictura

Encyclopedia Pictura is the creative association of Isaiah Saxon, Sean Hellfritsch and Daren Rabinovitch that has been producing striking, playful work since its inception. One of their early shorts, “Grow,” shows off the power of a simple, clever idea executed well:

The team has produced several music videos including work for Björk and Grizzly Bear. Here are a few stills from Grizzly Bear’s “Knife” video, which features their multimedia, practical/digital effects combination approach to direction:


There is also a load of interesting behind-the-scenes footage and photos on their website, such as this video:

Their claim of working in “film, art, game design, community building, and agriculture” is not a bit of bombast. From their about page:

From 2008-2011, EP led an effort to build a unique hillside neighborhood and farm called Trout Gulch. They lived and worked there along with 15 others. In 2012, they co-founded DIY in San Francisco, with Vimeo co-founder Zach Klein and OmniCorp Detroit co-founder Andrew Sliwinski. Saxon also volunteers as Media Advisor to Open Source Ecology.

They are passionate about gardening, farming, construction, villages, augmented reality, science visualization, social ecology, technological empowerment, adventure, and country living.

DIY is both a feature film in development and, more recently, a new and growing online community that encourages young people to become “Makers” and share their work, gaining confidence in their creativity and earning digital badges for their profiles as they go. DIY meets kids where they already are, on connected devices, and encourages their natural creativity while they learn real-world skills, both offline and online. The DIY “anthem”:

The Do It Yourself/Maker attitude is perhaps the most valuable thing that is being nourished as young people challenge themselves to new experiences inspired by the site.

When a person grows up understanding that they can create and mold the media and environment around them, they don’t have to resign themselves to an existence of passively consuming at the corporate trough. An individual’s confidence in their own creativity is an essential survival skill for the future.


For new readers just joining us, this is the fourth in a series of articles on getting your hands dirty by setting up a personal Web server and some popular Web applications. We've chosen a Linux server and Nginx as our operating system and Web server, respectively; we've given it the capability to serve encrypted pages; and we've added the capability to serve PHP content via PHP-FPM. Most popular Web apps, though, require a database to store some or all of their content, and so the next step is to get one spun up.

But which database? There are many, and every single one of them has its advantages and disadvantages. Ultimately we're going to go with the MySQL-compatible replacement MariaDB, but understanding why we're selecting this is important.

To SQL or NoSQL, that is the question

In most cases these days, when someone says "database" they're talking about a relational database, which is a collection of different sets of data, organized into tables. An individual record in a database is stored as a row in a table of similar records—for example, a table in a business's database might contain all of that business's customers, with each record consisting of the customer's first name, last name, and a customer identification number. Another table in this database might contain the states where the customers live, with each row consisting of a customer's ID number and the state associated with it. A third table might contain all the items every customer has ordered in the past, with each record consisting of a unique order number, the ID of the customer who ordered it, and the date of the order. In each example, the rows of the table are the records, and the columns of the table are the fields each record is made of.
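The customer/state/order layout described above maps directly onto a small relational schema. Here's a minimal illustration using Python's built-in sqlite3 module; the table and column names follow the article's example but are otherwise our own invention:

```python
import sqlite3

# In-memory database holding the three example tables described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT);
    CREATE TABLE customer_states (customer_id INTEGER REFERENCES customers(id), state TEXT);
    CREATE TABLE orders (order_number INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         order_date TEXT);
""")
conn.execute("INSERT INTO customers VALUES (1, 'Ada', 'Lovelace')")
conn.execute("INSERT INTO customer_states VALUES (1, 'CA')")
conn.execute("INSERT INTO orders VALUES (1001, 1, '2012-03-01')")

# A join ties the rows back together through the shared customer ID.
row = conn.execute("""
    SELECT c.first_name, s.state, o.order_number
    FROM customers c
    JOIN customer_states s ON s.customer_id = c.id
    JOIN orders o ON o.customer_id = c.id
""").fetchone()
print(row)  # ('Ada', 'CA', 1001)
```

The customer ID is what makes the design "relational": each table stays narrow and focused, and queries reassemble the pieces on demand.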



If you’re dreaming of creating your own controller from scratch, there are certain basic elements you’ll need – and a strong case for reusing, not reinventing, the wheel. A range of products out there caters to DIYers; Livid’s Builder line is certainly one of the most comprehensive. It’s a line of hardware accessories that helps you piece together MIDI controllers with all the requisite knobs and buttons and sensors you might like, and its brain just got an upgrade.

The soul of any controller is the electronics and microcontroller that read all of those inputs and let them talk to a computer. And it’s that “brain” that Livid recently upgraded, with their Builder Brain v2. Messages from controls go in, messages to devices like lights go out, all via a connection to your computer that’s USB powered, class-compliant MIDI. (That means you won’t need any drivers – not on Mac, not on Windows, and not on Linux. You could even plug this into one of those Raspberry Pi devices, if you’re lucky enough to have one!) They also operate standalone with a 5V power supply.
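The MIDI messages flowing over that class-compliant USB connection are just short byte strings defined by the MIDI spec. As a rough sketch (plain Python, implying nothing about Livid's own firmware), here are the Note On and Control Change messages a controller brain would emit when you press a button or turn a knob:

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message (status byte 0x90 + channel)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def control_change(channel, controller, value):
    """Build a 3-byte MIDI Control Change message (status byte 0xB0 + channel)."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

# A button press on channel 1: Note On, middle C, full velocity.
print(note_on(0, 60, 127).hex())        # 903c7f
# A knob at half travel on channel 1, sent as CC #7 (volume).
print(control_change(0, 7, 64).hex())   # b00740
```

Because these messages are standardized, any class-compliant host (Mac, Windows, Linux, or a Raspberry Pi) can interpret them without vendor drivers.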

The Brain v2 is for some seriously large and complex controllers, with support for up to 64 analog inputs, 128 Buttons, and 192 LEDs. (Fortunately, a companion board called the Omni, and connections via ribbon cables, mean that you won’t create complete spaghetti trying to do that.) In fact, it’s so powerful I’d recommend considering something simpler for less-ambitious projects, but if you’re planning a big controller, it’s tough to beat Livid’s offerings.

New in v2:

  • A Bus Board for easier control connections
  • LED support up from 48 to 192, with extra circuitry for ultra-brights
  • LED encoder ring support, so you can make a big circle of ultra-bright lights around an encoder
  • RGB LED support
  • 5V standalone power

Add those features to cool extras from the original, like accelerometer and velocity-sensitive surface support and programmable MIDI settings.

CDM asks Livid’s Jay Smith to tell us what this is all about.

CDM: Who is this for?

Jay: That’s kind of a loaded question! It’s really for anyone wanting to create a class-compliant MIDI device of their own. An artist, a maker of commercial products, a musician, a visualist? With Brain version 1 we’ve seen a MIDI-controlled electric mandolin, Moldover’s Mojo, and The Choppertone, to name a few. We’ve also powered some pretty sophisticated commercial devices for other companies with it, so it’s not just a DIY solution.

With v2 we’ve really expanded the functionality by adding support for almost any kind of control you’d want to hook up to it, and made the process of doing that much easier. If you’re talking about standard MIDI controller-type controls, our Omni board supports thousands of configurations with just one circuit board. This isn’t just for building “controllers” for software, either. We’ve added external power so you can use it to control analog gear and other MIDI-controlled devices.

Apart from those examples, what can you build with Builder and the Brain?

Anything with buttons, LEDs, potentiometers, encoders, FSRs, accelerometers, other sensors, and more. Single LEDs, RGB LEDs, and “groups” of 6, 12, or 24 LEDs can be created and controlled with one MIDI note or CC, or controlled locally with an encoder or pot. As a result, inventive designs with interesting lighting feedback are possible. VU meters driven by CCs, or a clever array of LEDs that makes glyphs or patterns, can be arranged with your controls to provide novel, custom feedback that would never make it onto Guitar Center’s shelves, but means something special to you. The Omni board provides enough physical structure that you can think about a “chunk” of a controller at a time; it isolates your project into digestible parts and lets you sensibly expand and modify your control surface with only one Brain.
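The CC-driven VU meter idea is easy to reason about: a 7-bit CC value (0–127) is scaled to the number of lit LEDs in a group. A quick sketch of that mapping (the 12-LED group size matches the ring sizes mentioned above; the function itself is our own illustration, not Livid's API):

```python
def leds_lit(cc_value, group_size=12):
    """Map a 7-bit MIDI CC value (0-127) to how many LEDs in a group light up."""
    if not 0 <= cc_value <= 127:
        raise ValueError("CC values are 7-bit: 0-127")
    # Round so that full-scale 127 lights every LED and 0 lights none.
    return round(cc_value * group_size / 127)

print(leds_lit(0))    # 0
print(leds_lit(64))   # 6
print(leds_lit(127))  # 12
```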

Why would you choose this over another platform?

Frankly, there is no other platform for controller building that is this packed with features, this well documented and supported, and this easy to use. Since the release of Brain v1 three years ago, we’ve spent a lot of time listening to our users’ requests, thinking about the features we’d like for our own use, and developing them into a platform for others to use. We didn’t spend much time looking at what else was out there; we looked for what wasn’t, and tried to fill in those gaps. When it comes to building your own device, whether for creating music, controlling lights, or something else completely, there are really only other “solutions” out there, not platforms, which is what we intended to create.

Who is this not for?

If you are looking for an all-in-one solution for your dream controller but don’t want to do any of the labor, this is definitely not for you. We’ve set out to create the most comprehensive platform with the smallest learning curve. There are some other great solutions out there, but some of them either have a big learning curve or require programming to achieve results. If you have a smaller project and don’t care about MIDI or the ability to edit, expand, and have a long-term solution, there are certainly cheaper options out there. We’ve tried to make the process more streamlined and feature-packed, and we’ve taken a lot of the guesswork out of it with Brain v2. With the addition of the Bus Board, we’ve added things like resistors, transistors, and chips that make the building process much easier.

Quick start video:

Find out more:



The corporate social web still sucks

Expert Labs, the non-profit organization behind ThinkUp, a web-based data-liberation and analytics application, is rebooting into a commercial entity.

No need to panic if you use ThinkUp to back up your social network life; the application will remain open source and freely available.

But Expert Labs is going away and ThinkUp is refocusing on a larger goal — liberating your online social life from the clutches of corporate web entities.

In its own words the new ThinkUp wants to build “an information network that connects to today’s social networks, but isn’t centralized and dependent on a company or investors.”

That’s not an entirely new idea. Diaspora and some other projects are trying to do the same thing, but ThinkUp is taking a different approach — it wants to build an app first and focus on the user experience rather than the underlying technology.

In fact, ThinkUp is already pretty close to what it’s aiming for: a web-based app that pulls your data out of social silos like Facebook or Twitter and stores it on your own server. You control your own data, and you have a record of your conversations potentially long after Facebook, Twitter, and the rest have become mere footnotes in the history of the web.
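The "store it on your own server" idea boils down to writing fetched posts into a local database you control. A minimal sketch of that pattern in Python's built-in sqlite3 (the schema and field names are illustrative, not ThinkUp's actual schema):

```python
import sqlite3

def archive_posts(db_path, posts):
    """Store a list of social posts (dicts with 'id', 'service', 'text',
    'created_at') in a local SQLite database, skipping duplicates.
    Illustrative schema only -- not ThinkUp's real data model."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS posts
                    (id TEXT, service TEXT, text TEXT, created_at TEXT,
                     PRIMARY KEY (id, service))""")
    conn.executemany(
        "INSERT OR IGNORE INTO posts VALUES (:id, :service, :text, :created_at)",
        posts)
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM posts").fetchone()[0]

posts = [{"id": "1", "service": "twitter", "text": "hello", "created_at": "2012-04-01"}]
print(archive_posts(":memory:", posts))  # 1
```

The `INSERT OR IGNORE` plus the composite primary key means re-running a fetch never duplicates posts, which matters when you're polling an API on a schedule.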

For more on how ThinkUp works and how you can use it be sure to check out our earlier coverage and then grab the code and try it for yourself.

So what of ThinkUp’s new, loftier goals? Is any attempt to replace Facebook doomed to failure? Of course not. Everything is replaceable, just ask MySpace. And ThinkUp believes its approach is different. “Prior attempts have tried to solve this problem based on the assumption that it is a technical challenge,” says ThinkUp’s Knight News Challenge application. “We believe it to be a social one.” ThinkUp’s focus going forward will be on the social and the interface:

We will draw people in through a compelling media site that encourages participation via our decentralized platform… a peer-to-peer network that powers a great media property with broad appeal — imagine if Digg or Reddit were open, decentralized and powered by a network instead of votes.

If you’re curious to know what that might look like, head on over to the ThinkUp proposal for the Knight News Challenge and click the heart icon to “like” it (incidentally, if the Knight News Challenge sounds familiar, that might be because it’s also the birthplace of EveryBlock). In the meantime, work on the ThinkUp app continues with a new release that improves the charts and graphs and paves the way for the coming Foursquare support. Check out the ThinkUp GitHub page for more details.


We've talked here before about the extremely important (and often-overlooked) DIY aspect of science. Scientists are makers. They have to be. The tools they need often aren't available any other way. Other times, the tools are available, but they're far more expensive than what you could construct out of your own ingenuity.

In this video, researchers at Cambridge build LEGO robots that automate time-consuming laboratory processes at a fraction of the cost of a "real" robot.

Video Link

Via Karyn Traphagen


monotribe, in limited silver and gold. Photo by Marsha Vdovin for CDM.

It’s a beautiful thing when music hardware improves with age. And lately, that’s what’s been happening to Korg’s monotribe and monotron. Over the past few months, we’ve seen a major update from Korg for the monotribe that makes its sequencing functions easier and more useful. To save you the trouble of navigating the Korg Japan site – a difficulty for those of us who don’t speak Japanese – here on CDM we’ve got a number of downloads for saving monotron patches, and the Japan-exclusive overlay for the monotribe update. And, courtesy of enterprising hackers in Brazil unaffiliated with Korg, a MIDI update gives the monotribe hardware the feature it sorely lacks.

And how many videos do we have of all of this? Too many videos.

Grab some downloads, and see what’s new:

The monotribe update: Over the new year, Korg updated their monotribe drum machine/synth, with steps expanded (at last) to 16, volume automation, easier sequencing, drum rolls, gate time hold, and sample-and-hold, along with sync. Oddly, you update the monotribe by playing it an audio file. (Better hope it doesn’t contain a Cylon virus.)

More on the System Version 2 update (in English):

And in Japanese:

And some words of wisdom in mangled English translation, courtesy Google Translate:

Monotribe stuck to the analog sound, even how to update the analog stick to technique. Past, as had been loaded by the cassette tape to PC data, has adopted a voice in how to update using monotribe.

(Real translation: because there aren’t any ports on the monotribe, the hack is playing it an audio file.)

And on the availability of the overlays, see if you can make sense of this:

Get in the music stores nationwide !
Reversal from heavy image of monotribe so far, has started distribution of the national musical instrument dealers in sequential overlay of vivid yellow color, such as the intensity of the synth sounds tell. Because there is limited number of people you want to soon.

(Real translation: if you don’t live in Japan, or simply missed out, print out this PDF.)

Get your circuit diagrams, patch storage sheets, and overlays. [monotron/monotribe] Thanks to reader Mutis Mayfield, we’ve got a whole bundle of PDFs for monotribe and monotron owners to enjoy. You can get your own overlays – apparently otherwise available only from dealers in Japan – provided you can work out how to print them so they look nice. And you get some terrific other additions, including the latest circuit schematics (in case you’ve missed their intentional appearance on the Interwebs), and even patch sheets. (Prior to the MeeBlip’s recent addition of patch storage, we referred to these cheekily as Hipster Patch Storage. You need a marker.)

Via Scribd, we’ve got all those downloads for you, so enjoy.

KORG monotron and monotribe goodies [cdmblogs @Scribd]

Updated: Seems Scribd couldn’t handle the complexity of those schematics. (What, no one taught their plug-in Electrical Engineering?) So here they are, swiftly downloading from our servers:

monotron DELAY schematic [PDF]

monotron DUO schematic [PDF]

(Please link to this page on CDM and not to these files directly, unless you hate us.)

These PDFs are marked for public distribution, courtesy Korg. Speaking of which, it’s really nice to see Korg releasing that overlay under a Creative Commons license. (I suppose that means you could translate it and release the translated version, too, if you’re an especially big, multi-lingual monotribe fan!)

Adding MIDI to the monotribe

From Brazil, Amazing Machines have done a clever MIDI input and output mod for the monotribe. Now, some of us (cough, cough) think this should have been on the hardware in the first place, but the mod really is quite clever, so lovers of the monotribe get something that they should really love.

Even though it’s a mod, you just plug the thing in – no soldering required. And while you may have seen this mod before, the Brazilians have been busy working on improving it. New features, introduced late in February and shipping now:

  • MIDI output: MIDI clock, the arpeggiator from the synth section, trigger info from the rhythm section, and even the ability to use the ribbon controller for note, volume, and gate time.
  • CC output.
  • Conversion between MIDI clock and sync pulses, in either direction, via the monotribe’s sync I/O.
  • Improved DIN connectors.
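The clock conversion is conceptually just a divide-down: MIDI clock runs at 24 pulses per quarter note, while analog sync boxes expect far fewer. A sketch (the 2-pulses-per-quarter output rate, i.e. one pulse per eighth-note step, is our assumption about the monotribe's expected sync, not a published spec):

```python
def midi_clock_to_sync(ticks, ppq_in=24, ppq_out=2):
    """Divide a stream of MIDI clock ticks (24 PPQN) down to analog sync
    pulses, returning the tick indices at which a pulse fires.
    ppq_out=2 is an assumed sync rate, not a documented monotribe figure."""
    divisor = ppq_in // ppq_out   # emit one pulse every `divisor` ticks
    return [t for t in range(ticks) if t % divisor == 0]

# One bar of 4/4 at 24 PPQN is 96 ticks -> 8 sync pulses.
print(len(midi_clock_to_sync(96)))  # 8
```

Going the other way (sync pulse to MIDI clock) requires interpolating 12 evenly spaced ticks between incoming pulses, which is why tempo-tracking hardware needs a little smoothing logic.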

All of this is now pre-assembled at US$64. You can even get US$10 off if you ordered the previous version.

Owners’ manual, more info:

Videos: monotribe v2

Korg Japan shows off those new features:

Videos: monotribe + MIDITRIBE

A look at what’s new in the revised hardware:

And from our friend Nick at Sonic State, a video review of the unit:



Electronic music making has had several major epochs. There was the rise of the hardware synth, first with modular patch cords and later streamlined into encapsulated controls, in the form of knobs and switches. There was the digital synth, in code and graphical patches. And there was the two-dimensional user interface.

We may be on the cusp of a new age: the three-dimensional paradigm for music making.

AudioGL, a spectacularly ambitious project by Toronto-based engineer and musician Jonathan Heppner, is one step closer to reality. Three years in the making, the tool is already surprisingly mature. And a crowd-funding campaign promises to bring beta releases as soon as this summer. In the demo video above, you can see an overview of some of its broad capabilities:

  • Synthesis, via modular connections
  • Sample loading
  • The ability to zoom into more conventional 2D sequences, piano roll views, and envelopes/automation
  • Grouping of related nodes
  • Patch sharing
  • Graphical feedback for envelopes and automation, tracked across z-axis wireframes, like circuitry

All of this is presented in a mind-boggling visual display, resembling nothing so much as constellations of stars.

Is it just me, or does this make anyone else want to somehow combine modular synthesis with a space strategy sim like Galactic Civilizations? Then again, that might cause some sort of nerd singularity that would tear apart the fabric of the space-time continuum – or at least ensure we never have any normal human relationships again.

Anyway, the vitals:

  • It runs on a lowly Lenovo tablet right now, with integrated graphics.
  • The goal is to make it run on your PC by the end of the year. (Mac users hardly need a better reason to dual boot. Why are you booting into Windows? Because I run a single application that makes it the future.)
  • MIDI and ReWire are onboard, with OSC and VST coming.
  • With crowd funding, a Win32/64 release is planned by the end of the year, with betas by summer (Windows) or fall/winter (Mac).

I like this quote:

Some things which have influenced the design of AudioGL:
Catia – Dassault Systèmes
AutoCAD – Autodesk
Cubase – Steinberg
Nord Modular – Clavia
The Demoscene

Indeed. And with computer software now reaching a high degree of maturity, such mash-ups could open new worlds.

Learn about the project, and contribute by the 23rd of March via the (excellent) IndieGogo:



Compare the complex model a computer uses to control sound and musical pattern in real time with how that model is visualized. You see knobs; you see faders that resemble mixers; you see grids; you see – bizarrely – representations of old piano rolls. The accumulated ephemera of old hardware, while useful, can be quickly overwhelmed by a complex musical creation, or can visually fail to show the musical ideas that form a larger piece. You can employ notation, derived originally from instructions for plainsong chant and scrawled for individual musicians – and quickly discover how inadequate it is for the language of sound shaping in the computer.

Or, you can enter a wild, three-dimensional world of exploded geometries, navigated with hand gestures.

Welcome to the sci-fi-made-real universe of Portland-based Christian Bannister’s subcycle. Combining sophisticated, beautiful visualizations, elegant mode shifts that move from timbre to musical pattern, and two-dimensional and three-dimensional interactions, it’s a complete visualization and interface for live re-composition. A hand gesture can step from one musical section to another, or copy a pattern. Some familiar idioms are here: the grid of notes, a la piano roll, and the light-up array of buttons of the monome. But other ideas are exploded into spatial geometry, so that you can fly through a sound or make a sweeping rectangle or circle represent a filter.

Ingredients, coupling free and open source software with familiar, musician-friendly tools:

Another terrific video, which gets into generating a pattern:

Now, I could say more, but perhaps it’s best to watch the videos. Normally, when you see a demo video with 10 or 11 minutes on the timeline, you might tune out. Here, I predict you’ll be too busy trying to get your jaw off the floor to skip ahead in the timeline.

At the same time, to me this kind of visualization of music opens a very, very wide door to new audiovisual exploration. Christian’s eye-popping work is the result of countless decisions – which visualization to use, which sound to use, which interaction to devise, which combination of interfaces, of instruments – and, most importantly, what kind of music. Any one of those decisions represents a branch that could lead elsewhere. If I’m right – and I dearly hope I am – we’re seeing the first future echoes of a vast, expanding audiovisual universe yet unseen.

Subcycle: Multitouch Sound Crunching with Gestures, 3D Waveforms

And lots more info on the blog for the project:



With great power come great learning curves – or maybe not. Csound for Live, just announced this weekend and shipping on Tuesday, brings one of the great sound design tools into the Ableton Live environment. You can use it without any actual knowledge of Csound, without a single line of code — or, for those with the skills, it could transform how you use Csound.

If you think music creation software has to be disposable, you’ve never seen Csound. With a lineage going back literally to the dawn of digital synthesis and Max Mathews, Csound has managed to stay compatible without being dated, host to a continuous stream of composition and sonic imagination that has kept it at the bleeding edge of what computers can do with audio.

Csound for Live does two things. First, it makes Csound run in real-time in ways that are more performative and, well, “live” than ever before, inside the Live environment. Second, its release marks a kind of “greatest hits” of Csound, pulling some of the platform’s best creators into building new and updated work that’s more usable.

If you’re not a Csound user, you just dial up their work and see what your music can do. If you are, of course, you can go deeper. And if you’re somewhere in between, you can dabble first before modifying, hacking, or making your own code. And that means for everybody, you get:

  • Spectral processors
  • Phase vocoders
  • Granular processors
  • Physical models
  • Classic instruments
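To give a flavor of what's behind that "granular processors" bullet: granular processing chops a source signal into tiny overlapping "grains" and reassembles them, which is what lets such tools stretch time independently of pitch. A deliberately naive sketch in plain Python (nothing to do with Csound's actual opcodes):

```python
def granular_stretch(signal, grain=64, hop=32, stretch=2.0):
    """Naive granular time-stretch: overlap-add fixed-size windowed grains,
    advancing the read position more slowly than the write position.
    A toy illustration, not Csound's granular implementation."""
    out = [0.0] * int(len(signal) * stretch + grain)
    write = 0
    read = 0.0
    while int(read) + grain <= len(signal):
        start = int(read)
        for i in range(grain):
            # Triangular window smooths the grain boundaries.
            w = 1.0 - abs(2.0 * i / grain - 1.0)
            out[write + i] += signal[start + i] * w
        write += hop              # output advances a full hop...
        read += hop / stretch     # ...while the read head crawls
    return out[:write]

stretched = granular_stretch([0.5] * 1000, stretch=2.0)
print(len(stretched))  # 1888 -- roughly twice the 1000-sample input
```

Real granular engines add randomized grain positions, pitch-shifted playback inside each grain, and hundreds of simultaneous grains, but the overlap-add skeleton is the same.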

More description:

It looks great. It works great. It sounds… beyond great.

CsoundForLive is a collection of over 120 real-time audio plug-ins that brings the complexity and sound quality of Csound to the fingertips of ANY Ableton Live user – without ANY prior Csound knowledge.

Capitalizing on the design power of Max For Live, what once took pages of text in Csound can now be accomplished in a few clicks of your mouse.

Move a slider on your APC40 and deconstruct your audio through professional quality granular synthesis…

Touch a square of your Launchpad and warp pitch and time with real time FFT processing…

Press letters on your keyboard and create sonically intricate melodies through wave terrain synthesis…

And Dr. Richard Boulanger, unofficial Jedi Master of the Csound movement, instigator of this project, and sound and music wizard at the Berklee College of Music, posts a bit more:

With my former student, and now partner, Colman O’Reilly, I have been working around the clock for months to collect, adapt, create, wrap, and simplify a huge collection of Csound instruments and make them all work simultaneously and interchangeably in Ableton Live. In this guise, I am able to “hot-swap” the most complex Csound instruments in and out of an arrangement or composition – on the fly. This is something Csound could never do (and still can’t!), but CsoundForLive can, and it makes a huge difference in the playability and the usability of Csound.

Two weeks ago, I played a solo concert in Hanover Germany, at the first International Csound Conference. There, all of my compositions, from 20 years ago to 20 minutes ago, were performed in real-time using CsoundForLive. Tonight, at the Cycling ’74 Expo in Brooklyn, NY, I will be demonstrating the program; and next week, I will be releasing this huge collection (on Tuesday, October 17th, at 12:01am).

A huge part of the complete collection is FREE, and I hope it will make the creative difference in your (and your student’s) lives that it is making in mine. This is a serious game changer for Csound. Check it out. Dr. B.

If you’re at Expo ’74, do say hello to Dr. B for us (and I think you’ll get some nice surprises with this project).

I’ve got a copy in for testing, so stay tuned. And I’ll be doing some follow-ups with Dr. Boulanger and company.

The only bad news here, of course, is that both a supported version of Ableton Live and Max for Live are required to run Csound in this way. In fact, it sounds like we have a nice four-horse race going. Max 6 overhauls how multiple patches work (on top of Max for Live), SuperCollider has its own possibilities for multiple real-time patch loading, someone suggested in comments using pd~ inside Pd to manage multiple Pd creations (something fairly new even to most experienced Pd users), and now we have Csound in Live.

But overall, Csound for Live looks like a no-brainer for Max for Live owners, no question, and an exciting taste of the ongoing convergence of cutting-edge creative sound and code with live music making for everybody. As I hinted at in the Max 6 post, I think it’s suddenly a Renaissance for all these platforms.

Silly geeky footnote: With pd~ for Max, I know it’s possible to run Pd for Max. And via another external, Pd can also run Csound. So we could theoretically run Csound in Pd in Max in Live. But let’s not get carried away.

More Videos



Above: Cycling ’74’s just-released video highlights enhanced audio quality; our friend, French artist protofuse, has a go at working with the beta and showing off the new user interface. (See C74’s official take on the new UI below.)

Max 6 in Public Beta; For Home-brewing Music Tools Graphically, Perhaps the Biggest Single Update Yet

Just because a music tool fills your screen with tools and options doesn’t necessarily make it easier to realize your ideas. From the beginning, the appeal of Max – as with other tools that let you roll your own musical tools from a set of pre-built building blocks – has been the blank canvas.

Max 6 aims to make the gap between your ideas and those tools still narrower, and to make the results more sonically pleasing. The real reveal: it could also change how you work with patches in performance and production. I was surprised when early teasers failed to impress some users, perhaps owing to scant information. Now, Max 6 is available in public beta, and the details are far clearer. Even if Max 5 was the biggest user interface overhaul in many years, Max 6 appears to be the biggest leap in actual functionality.

It’s what I’d describe as a kitchen-sink approach, adding to every aspect of the tool, so there are almost certainly some things here you won’t use. What could appeal to new users, I think, are two major changes.

More visual patching feedback and discoverability. First, building upon what we saw in Max 5, Max’s approach is to provide as much visual information as possible about what you’re doing. It’s probably the polar opposite of what we saw earlier this week in something like the live-coding environment Overtone: Max’s UI is actively involved with you as you patch. There are visual tools for finding the objects you want, then visual feedback to tell you what those objects do, plus an always-visible reference bar and rewritten help. This more-active UI should make Max more accessible to people who like this sort of visual reference as they work. No approach will appeal to everyone – some people will find all that UI a bit more than they like – but Max’s developers appear to be exploiting as much as they can with interactive visual patching.

Multiple patches at once. New objects for filters and data, a 64-bit audio engine, and low-level programming are all well and good. But the change that may more profoundly impact users and workflow is the way Max 6 handles multiple patches. Max – and by extension Pd – have in the past made each patch operate independently. Sound may stop when you open a patch, and there’s no easy or fully reliable way to use multiple patches at once. (Compare, for example, SuperCollider, which uses a server/client model that lacks this limitation.) That changes with Max 6: you can now operate multiple patches at the same time, mix them together with independent volume, mute, and solo controls, and open and close them without interrupting your audio flow. (At least one reader notes via Twitter that you can open more than one patch at once – I’d just say this makes it better, with more reliable sound and essential mixing capabilities.) Update: since I mentioned Pd, Seppo notes that the pd~ object provides similar functionality in regards to multiple patches and multi-core operation. This has been an ongoing discussion in the libpd group, so I think we’ll revisit that separately!
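The mixing model being described, several independent patches summed through per-patch volume, mute, and solo, is conceptually simple. A sketch in plain Python (nothing to do with Max's actual internals; the classes and names are our own):

```python
class Patch:
    """Stand-in for an independently running patch with mixer controls."""
    def __init__(self, name, volume=1.0, mute=False, solo=False):
        self.name, self.volume, self.mute, self.solo = name, volume, mute, solo

    def render(self, n):
        # Stand-in for real DSP: a constant unit signal per patch.
        return [1.0] * n

def mix(patches, n):
    """Sum patch outputs with volume/mute/solo semantics:
    if any patch is soloed, only soloed patches sound."""
    soloed = [p for p in patches if p.solo]
    active = soloed if soloed else [p for p in patches if not p.mute]
    out = [0.0] * n
    for p in active:
        for i, s in enumerate(p.render(n)):
            out[i] += s * p.volume
    return out

patches = [Patch("synth", volume=0.5), Patch("drums", mute=True), Patch("fx", solo=True)]
print(mix(patches, 4))  # [1.0, 1.0, 1.0, 1.0] -- only the soloed "fx" patch sounds
```

The point of hosting this inside one audio engine, rather than one process per patch, is exactly what the paragraph describes: patches can be opened and closed without tearing down the shared audio stream.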

One upshot of this change: some users have turned to Ableton Live just to host multiple patches. For users whose live performance set involves Ableton, that’s a good thing. But it could be overkill if all you want to do is bring up a few nifty patches and play with them. Now, I think we’ll start to see more people onstage with only Max again. (Check back in a few months to see if I’m right.)

Here’s an overview of what’s new:

  • Discoverability: A “wheel” makes the mysterious functions of different objects immediately visible; the Object Explorer makes them easier to find; and a new help and reference sidebar keeps documentation close at hand.

  • 64-bit audio engine

  • Open multiple patches, solo and mute them, open and close them without stopping audio, mix audio between them with independent volume, and take advantage of multiple processors with multiple patches.

  • Low-level building blocks: You don’t get new synth objects, but you could build them yourself. New low-level data-crunching goodness works with MSP audio, Jitter matrices, and OpenGL textures.

  • More JavaScript: An overhauled JavaScript engine makes JS scripting faster and more flexible, and there’s a proper text editor with syntax highlighting (though, of course, you may still prefer your own).

  • New visuals: Vector graphics and “HTML5 Canvas-like” UI scripting (though to me it’s a shame this isn’t just the HTML5 Canvas). There are also massively-expanded Jitter powers, but those are best left to our sister site Create Digital Motion.

  • Filters: New filter-visualizing tools for audio filter construction and manipulation.

  • Dictionary data type and associated objects let you describe information in a more structured way (all kinds of potential here from control to composition)

  • Projects now let you organize data, media, and scripts in the manner more associated with conventional development environments

  • What about Ableton? No news on that front, but I expect more soon. Max for Live users will at the very least get the advantages above, since Max for Live is really Max inside Live.

Looking over all that Max does, I have to say, I’m really amazed. I wonder if computer musicians ever pause to consider how fortunate we are. Even if this isn’t the tool for you, its availability – compounded by the availability of a range of other tools – is itself worth reflection.

Max is a program that shouldn’t exist, doing a number of things it shouldn’t do, for a user base that shouldn’t exist, doing things they shouldn’t be doing.

It doesn’t make sense that you could maintain a commercial project for this kind of audience, that you’d wind up with something this mature and powerful that had a continuous lineage stretching back to the 1980s. It doesn’t make sense that musicians would embrace such a tool and produce invention. The only explanation is sheer love.

Then, even as Max reaches new heights, some of the alternatives you have for making your own music tools are simultaneously growing by leaps and bounds. They provide very different approaches to music making (compare Overtone and SuperCollider, or Pd and libpd, or AudioMulch, or new Web audio tools). There really aren’t many fields that have this kind of choice, free and commercial, in their medium. In science and engineering, there’s private and public funding, producing some amazing tools but nothing with this kind of meeting of power and accessibility. There’s just something about music.

The fact that Cycling ’74 can maintain a business model – just as open source projects maintain volunteer contributions – is a testament to sheer passion and love for music, and a commitment to perpetually re-imagining how that music is made from the atomic level up. There was a wonderful piece on C creator and UNIX co-creator Dennis Ritchie, whom I remembered yesterday, observing that what he did was to do what others said couldn’t be done. From Max itself to what people make with it, I think that fits nicely.

So, have a look at the public beta, and let us know what you think. The release of Max 6 has caused more people to ask what this means for Pd and other tools, or even whether to patch things from scratch at all, but I’ll leave that question to a bit later. (I do have my own opinion about which tool fits which circumstance and user, but that’s best left to a separate discussion.) For now, you can try Max yourself and see what the fuss is about. If it doesn’t fit your means of music-making, know that you have a wide array of other options – pre-built to low-level code to old-fashioned tape-and-mic approaches, and everything in between. Go out and listen and see what you discover.

