Original author: Marius Watz

From the Catenary Madness series (created with Toxiclibs, see code on OpenProcessing)

Workshop: Advanced Processing – Geometry and animation
Sat June 29th, Park Slope, NYC

Processing is a great tool for producing complex and compelling visuals, but computational geometry can be a challenge for many coders because of its unfamiliar logic and reliance on mathematics. In this workshop we’ll break down some of the underlying principles, making them more comprehensible and showing that we can create amazing output while relying on a set of relatively simple techniques.

Participants will learn advanced strategies for creating generative visuals and motion in 2D/3D. This will include how to describe particle systems and generate 3D mesh geometry, as well as useful techniques for code-based animation and kinetic behaviors. We will use the power of libraries like Modelbuilder and Toxiclibs, not just as convenient workhorses but as providers of useful conceptual approaches.
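
As a taste of the particle-system side of the material, here is a minimal sketch in plain Processing (my illustrative example, not workshop code, and deliberately not using Modelbuilder or Toxiclibs): each particle stores its position and velocity as PVectors and is pulled downward by a constant acceleration.

// Minimal particle system sketch (illustrative only).
ArrayList<Particle> particles = new ArrayList<Particle>();

void setup() {
  size(600, 400);
  smooth();
}

void draw() {
  background(255);
  particles.add(new Particle(width/2, 50));        // emit one particle per frame
  for (int i = particles.size()-1; i >= 0; i--) {  // iterate backwards so removal is safe
    Particle p = particles.get(i);
    p.update();
    p.display();
    if (p.pos.y > height) particles.remove(i);     // cull particles that leave the screen
  }
}

class Particle {
  PVector pos, vel, acc;

  Particle(float x, float y) {
    pos = new PVector(x, y);
    vel = new PVector(random(-1, 1), random(-2, 0)); // small random initial velocity
    acc = new PVector(0, 0.05);                      // constant downward "gravity"
  }

  void update() {
    vel.add(acc);   // acceleration changes velocity...
    pos.add(vel);   // ...velocity changes position
  }

  void display() {
    noStroke();
    fill(0, 120);
    ellipse(pos.x, pos.y, 6, 6);
  }
}

The same update loop (accumulate acceleration into velocity, velocity into position) scales up to the kinetic behaviors we will cover, only with more interesting forces.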

The workshop will culminate in the step-by-step recreation of the Catenary Madness piece shown above, featuring a dynamic mesh animated by physics simulation and shaded with vertex-by-vertex coloring. For that demo we’ll be integrating Modelbuilder and Toxiclibs to get the best of both worlds.

Suitable for: Intermediate to advanced. Participants should be familiar with Processing or have previous coding experience allowing them to understand the syntax. Creating geometry means relying on vectors and simple trigonometry as building blocks, so some math is unavoidable. I recommend that participants prepare by going through Shiffman’s excellent Nature of Code chapter on vectors and Ira Greenberg’s Processing.org tutorial on trig.
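
If you want a quick self-test of that background, here is a tiny illustrative snippet (my own, not from either tutorial) that uses sin/cos to place points evenly around a circle and stores them as PVectors, which is essentially the first step of most of the geometry we will build:

// Warm-up: place n points evenly around a circle using sin/cos.
int n = 12;
float radius = 150;
PVector[] pts = new PVector[n];

void setup() {
  size(400, 400);
  for (int i = 0; i < n; i++) {
    float angle = TWO_PI * i / n;   // angle in radians
    pts[i] = new PVector(cos(angle)*radius, sin(angle)*radius);
  }
}

void draw() {
  background(255);
  translate(width/2, height/2);     // draw relative to the center
  fill(0);
  noStroke();
  for (int i = 0; i < n; i++) ellipse(pts[i].x, pts[i].y, 8, 8);
}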

Practical information

Venue + workshop details: My apartment in Park Slope, Brooklyn. Workshops run from 10am to 5pm, with a 1 hour break for lunch (not included). Workshops have a maximum of 6 participants, keeping them nice and intimate.

Price: $180 for artists and freelancers, $250 for design professionals and institutionally affiliated academics. Students (incl. recent graduates) and repeat visitors enjoy a $30 discount. The price scale works by the honor system and there is no need to justify your decision.

Basically, if you’re looking to gainfully apply the material I teach in the commercial world or enjoy a level of financial stability not shared by independent artists like myself, please consider paying the higher price. In doing so you are supporting the basic research that is a large part of my practice, producing knowledge and tools I invariably share by teaching and publishing code. It’s still reasonable compared to most commercial training, plus you might just get your workplace to pay the bill.

Booking: To book a spot on a workshop please email info@mariuswatz.com with your name, address and cell phone # as well as the name of the workshop you’re interested in. If you’re able to pay the higher price level please indicate that in your email. You will be sent a PayPal URL where you can complete your payment.

Attendance is confirmed once payment is received. Keep in mind that there is a limited number of seats on each workshop.

GCirc01E-025

Update: In my eagerness to announce these workshops I made a scheduling error, incorrectly thinking the dates would be March 15+16 rather than 16+17. As a result I need to move one of the workshops to the weekend before, and since the Intro workshop should happen before the Advanced the new dates will be:

  • Saturday March 9: Introduction to Processing and Generative Art
  • Saturday March 16: Generative Art, Advanced Topics

Sorry for the confusion! On the plus side the Intro workshop might now be a smaller group which should make it nice and intimate.

I haven’t done any workshops in New York since November, so I have decided to offer my Intro and Advanced Generative Art workshops on consecutive weekends, Saturday March 9 and Saturday March 16.

The venue will be my apartment in comfortable Park Slope, Brooklyn. As usual I have 8 spots available for each workshop, and they do tend to reach capacity, so get in touch sooner rather than later. Reservation is by email and your spot is confirmed once I receive payment via PayPal.

The workshops will be taught using the most recent Processing 2.0 beta version (2.0b8 as of this moment), and as usual I will be using my own Modelbuilder library as a toolkit for solving the tasks we’ll look at. Familiarizing yourself with Processing 2.0 and Modelbuilder would be good preparation.

Make sure to download Modelbuilder-0019 and Control-P5 2.0.4, then run through the provided examples. Check OpenProcessing.org for more Modelbuilder examples.

Note about dataviz: I know there is a lot of interest in data visualization and I do get asked about it frequently in workshops. I can’t promise to cover data in detail since it’s a pretty big topic.

If you’re specifically looking for data techniques I would recommend the excellent workshop series taught by my friend Jer Thorp. He currently offers two such workshops, titled “Processing and Data Visualization” and “Archive, Text, & Character(s)”.


While experimenting with ways to calculate organic mesh surfaces I’ve tried to avoid 3D Bezier patches, since setting up control points programmatically is a bit of a pain. 2D is bad enough. But, as so often happens, I’ve found myself in a situation where I need a structure that is best described as a Bezier patch.

Paul Bourke comes to the rescue with sample code written in C, which took all of 5 minutes to port to Processing. The code below is all Bourke’s apart from the rendering logic. If you don’t know his repository of miscellaneous geometry code and wisdom, run and have a look. It’s proven invaluable over the years.

An applet version of this sketch can be seen on OpenProcessing.org.
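
For reference (my own summary of the standard tensor-product Bezier patch definition, not something taken from Bourke's page), the surface that build() evaluates below is

  P(u,v) = \sum_{i=0}^{ni} \sum_{j=0}^{nj} B_i^{ni}(u) \, B_j^{nj}(v) \, P_{ij}, \qquad B_k^{n}(t) = \binom{n}{k} \, t^{k} (1-t)^{n-k}

where the P_{ij} are the (ni+1) x (nj+1) control points. BezierBlend() computes exactly B_k^n(t): the while loop accumulates the binomial coefficient, and the two pow() calls supply t^k and (1-t)^(n-k).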

Code: bezPatch.pde

// bezPatch.pde by Marius Watz
// Direct port of sample code by Paul Bourke.
// Original code: http://paulbourke.net/geometry/bezier/

int ni=4, nj=5, RESI=ni*10, RESJ=nj*10;
PVector outp[][], inp[][];

void setup() {
  size(600,600,P3D);
  build();
  smooth();
}

void draw() {
  background(255);
  translate(width/2,height/2);
  lights();
  scale(0.9);
  rotateY(map(mouseX,0,width,-PI,PI));
  rotateX(map(mouseY,0,height,-PI,PI));

  fill(255);
  stroke(0);
  for(int i=0; i<RESI-1; i++) {
    beginShape(QUAD_STRIP);
    for(int j=0; j<RESJ; j++) {
      vertex(outp[i][j].x,outp[i][j].y,outp[i][j].z);
      vertex(outp[i+1][j].x,outp[i+1][j].y,outp[i+1][j].z);
    }
    endShape();
  }
}

void keyPressed() {
  if(key==' ') build();
  if(!online)saveFrame("bezPatch.png");
}

// Build the patch: scatter random control points on a regular grid,
// then evaluate the Bezier surface at RESI x RESJ sample points.
void build() {
  int i, j, ki, kj;
  double mui, muj, bi, bj;

  outp=new PVector[RESI][RESJ];   // evaluated surface points
  inp=new PVector[ni+1][nj+1];    // control points

  // Control points sit on a regular (ni+1) x (nj+1) grid in x/y,
  // with a random z offset to make the surface undulate.
  for (i=0;i<=ni;i++) {
    for (j=0;j<=nj;j++) {
      inp[i][j]=new PVector(i,j,random(-3,3));
    }
  }

  // Evaluate the patch: each output point is the sum of all control
  // points weighted by the product of the Bernstein blends in u and v.
  for (i=0;i<RESI;i++) {
    mui = i / (double)(RESI-1);
    for (j=0;j<RESJ;j++) {
      muj = j / (double)(RESJ-1);
      outp[i][j]=new PVector();

      for (ki=0;ki<=ni;ki++) {
        bi = BezierBlend(ki, mui, ni);
        for (kj=0;kj<=nj;kj++) {
          bj = BezierBlend(kj, muj, nj);
          outp[i][j].x += (inp[ki][kj].x * bi * bj);
          outp[i][j].y += (inp[ki][kj].y * bi * bj);
          outp[i][j].z += (inp[ki][kj].z * bi * bj);
        }
      }
      // Center the patch around the origin and scale it up for display.
      outp[i][j].add(new PVector(-ni/2,-nj/2,0));
      outp[i][j].mult(100);
    }
  }
}

// Bernstein blending function: binomial(n,k) * mu^k * (1-mu)^(n-k)
double BezierBlend(int k, double mu, int n) {
  int nn, kn, nkn;
  double blend=1;

  nn = n;
  kn = k;
  nkn = n - k;

  while (nn >= 1) {
    blend *= nn;
    nn--;
    if (kn > 1) {
      blend /= (double)kn;
      kn--;
    }
    if (nkn > 1) {
      blend /= (double)nkn;
      nkn--;
    }
  }
  if (k > 0)
    blend *= Math.pow(mu, (double)k);
  if (n-k > 0)
    blend *= Math.pow(1-mu, (double)(n-k));

  return(blend);
}


Compare the complex model a computer uses to control sound and musical pattern in real time with how that model is typically visualized. You see knobs, you see faders that resemble mixers, you see grids, you see – bizarrely – representations of old piano rolls. The accumulated ephemera of old hardware, while useful, can be quickly overwhelmed by a complex musical creation, or can fail visually to show the musical ideas that form a larger piece. You can employ notation, derived originally from instructions for plainsong chant and scrawled for individual musicians, and quickly discover how inadequate it is for the language of sound shaping in the computer.

Or, you can enter a wild, three-dimensional world of exploded geometries, navigated with hand gestures.

Welcome to the sci fi-made-real universe of Portland-based Christian Bannister’s subcycle. Combining sophisticated, beautiful visualizations, elegant mode shifts that move from timbre to musical pattern, and two-dimensional and three-dimensional interactions, it’s a complete visualization and interface for live re-composition. A hand gesture can step from one musical section to another, or copy a pattern. Some familiar idioms are here: the grid of notes, a la piano roll, and the light-up array of buttons of the monome. But other ideas are exploded into spatial geometry, so that you can fly through a sound or make a sweeping rectangle or circle represent a filter.

Ingredients, coupling free and open source software with familiar, musician-friendly tools:

Another terrific video, which gets into generating a pattern:

Now, I could say more, but perhaps it’s best to watch the videos. Normally, when you see a demo video with 10 or 11 minutes on the timeline, you might tune out. Here, I predict you’ll be too busy trying to get your jaw off the floor to skip ahead in the timeline.

At the same time, to me this kind of visualization of music opens a very, very wide door to new audiovisual exploration. Christian’s eye-popping work is the result of countless decisions – which visualization to use, which sound to use, which interaction to devise, which combination of interfaces, of instruments – and, most importantly, what kind of music. Any one of those decisions represents a branch that could lead elsewhere. If I’m right – and I dearly hope I am – we’re seeing the first future echoes of a vast, expanding audiovisual universe yet unseen.

Previously:
Subcycle: Multitouch Sound Crunching with Gestures, 3D Waveforms

And lots more info on the blog for the project:
http://www.subcycle.org/

