Original author: 
Marius Watz

From the Catenary Madness series (created with Toxiclibs, see code on OpenProcessing)

Workshop: Advanced Processing – Geometry and animation
Sat June 29th, Park Slope, NYC

Processing is a great tool for producing complex and compelling visuals, but computational geometry can be a challenge for many coders because of its unfamiliar logic and reliance on mathematics. In this workshop we’ll break down some of the underlying principles, making them more comprehensible and showing that we can create amazing output while relying on a set of relatively simple techniques.

Participants will learn advanced strategies for creating generative visuals and motion in 2D/3D. This will include how to describe particle systems and generating 3D mesh geometry, as well as useful techniques for code-based animation and kinetic behaviors. We will use the power of libraries like Modelbuilder and Toxiclibs, not just as convenient workhorses but as providers of useful conceptual approaches.

The workshop will culminate in the step-by-step recreation of the Catenary Madness piece shown above, featuring a dynamic mesh animated by physics simulation and shaded with vertex-by-vertex coloring. For that demo we’ll be integrating Modelbuilder and Toxiclibs to get the best of both worlds.
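For readers unfamiliar with the name: a catenary is the curve a hanging chain settles into under gravity, y = a·cosh(x/a). The piece itself gets its shape from a physics simulation rather than the closed form, but a quick sketch (plain Java, illustration only, not workshop code) shows how simple the ideal curve is:

```java
// Sample points along an ideal catenary y = a * cosh(x / a), the curve a
// chain hangs in under its own weight. Illustration only -- the Catenary
// Madness mesh is driven by a physics sim, not this closed-form equation.
public class Catenary {
    // y-coordinate of the catenary with scale parameter a at position x
    static double catenaryY(double a, double x) {
        return a * Math.cosh(x / a);
    }

    public static void main(String[] args) {
        double a = 2.0;
        for (int i = -5; i <= 5; i++) {
            System.out.printf("x=%5.1f  y=%7.3f%n", (double) i, catenaryY(a, i));
        }
    }
}
```

The parameter a controls how taut the chain is: the lowest point sits at y = a, and larger a gives a flatter curve.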

Suitable for: Intermediate to advanced. Participants should be familiar with Processing or have previous coding experience allowing them to understand the syntax. Creating geometry means relying on vectors and simple trigonometry as building blocks, so some math is unavoidable. I recommend that participants prepare by going through Shiffman’s excellent Nature of Code chapter on vectors and Ira Greenberg’s Processing.org tutorial on trig.
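As a taste of the kind of math involved (a hypothetical warm-up, not workshop material): placing n points evenly around a circle takes nothing more than sin and cos.

```java
// Evenly spaced points on a circle of radius r -- the kind of basic trig
// that generative geometry is built from. Illustrative sketch only.
public class CirclePoints {
    // Returns {x, y} for point i of n on a circle of radius r, centered at origin
    static double[] pointOnCircle(int i, int n, double r) {
        double theta = 2 * Math.PI * i / n; // angle in radians
        return new double[] { r * Math.cos(theta), r * Math.sin(theta) };
    }

    public static void main(String[] args) {
        int n = 8;
        for (int i = 0; i < n; i++) {
            double[] p = pointOnCircle(i, n, 100);
            System.out.printf("p%d = (%8.3f, %8.3f)%n", i, p[0], p[1]);
        }
    }
}
```

Everything from radial meshes to spiral particle emitters starts from this one idiom, swapping in different radii and angle increments.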

Practical information

Venue + workshop details: My apartment in Park Slope, Brooklyn. Workshops run from 10am to 5pm, with a 1 hour break for lunch (not included). Workshops have a maximum of 6 participants, keeping them nice and intimate.

Price: $180 for artists and freelancers, $250 for design professionals and institutionally affiliated academics. Students (incl. recent graduates) and repeat visitors enjoy a $30 discount. The price scale works by the honor system and there is no need to justify your decision.

Basically, if you’re looking to gainfully apply the material I teach in the commercial world or enjoy a level of financial stability not shared by independent artists like myself, please consider paying the higher price. In doing so you are supporting the basic research that is a large part of my practice, producing knowledge and tools I invariably share by teaching and publishing code. It’s still reasonable compared to most commercial training, plus you might just get your workplace to pay the bill.

Booking: To book a spot on a workshop please email info@mariuswatz.com with your name, address and cell phone # as well as the name of the workshop you’re interested in. If you’re able to pay the higher price level please indicate that in your email. You will be sent a PayPal URL where you can complete your payment.

Attendance is confirmed once payment is received. Keep in mind that there is a limited number of seats on each workshop.

Original author: 
Stack Exchange

This Q&A is part of a weekly series of posts highlighting common questions encountered by technophiles and answered by users at Stack Exchange, a free, community-powered network of 100+ Q&A sites.

Ankit works in J2SE (core Java). During code reviews he's frequently asked to reduce his lines of code (LOC). "It's not about removing redundant code," he writes; to his colleagues, "it's about following a style." Style over substance. Ankit says the readability of his code is suffering under the dogmatic demands of his reviewers. So how do you find the right balance of brevity and readability?

See the original question here.



GCirc01E-025

Update: In my eagerness to announce these workshops I made a scheduling error, incorrectly thinking the dates would be March 15+16 rather than 16+17. As a result I need to move one of the workshops to the weekend before, and since the Intro workshop should happen before the Advanced the new dates will be:

  • Saturday March 9: Introduction to Processing and Generative Art
  • Saturday March 16: Generative Art, Advanced Topics

Sorry for the confusion! On the plus side the Intro workshop might now be a smaller group which should make it nice and intimate.

I haven’t done any workshops in New York since November, so I have decided to offer my Intro and Advanced Generative Art workshops on consecutive weekends, Saturday March 9 and Saturday March 16.

The venue will be my apartment in comfortable Park Slope, Brooklyn. As usual I have 8 spots available for each workshop; they do tend to reach capacity, so get in touch sooner rather than later. Reservation is by email, and your spot is confirmed once I receive payment via PayPal.

The workshops will be taught using the most recent Processing 2.0 beta version (2.0b8 as of this moment), and as usual I will be using my own Modelbuilder library as a toolkit for solving the tasks we tackle. Familiarizing yourself with Processing 2.0 and Modelbuilder would be good preparation.

Make sure to download Modelbuilder-0019 and Control-P5 2.0.4, then run through the provided examples. Check OpenProcessing.org for more Modelbuilder examples.

Note about dataviz: I know there is a lot of interest in data visualization, and I do get asked about it frequently in workshops. I can’t promise to cover data in detail since it’s a pretty big topic.

If you’re specifically looking for data techniques I would recommend looking at the excellent workshops series taught by my friend Jer Thorp. He currently offers two such workshops, titled “Processing and Data Visualization” and “Archive, Text, & Character(s)”.


How do you prefer to compose? Pen and manuscript paper? Recording ideas from a piano? Firing up your favorite music software? How about … coding in 65c816 Assembly language?

The trio behind this video prefers the latter, more intensive approach: getting close to the chip hardware by communicating directly with the Super NES. It’s one heck of a way to make an invitation to an event, but that’s just what they’ve done, in celebration of Blip Festival Tokyo 2012, in a kind of audiovisual spectacular. With code by Batsly Adams, music by Zabutom, and graphics by KeFF, the result is a throwback to the demoscene of yore. (Kris Keyser notes that I should point out that the SNES is sample-based, not synthesis-based as you might have with the NES. It’s still … a lot of work.)

YouTube looks good, but running this directly looks better, so you can point your SNES emulator at this free file:

BlipFestivalTokyo2012.sfc

Quoth Andrew: “The hardware is a bitch – but it has some really sweet features … the 65c816 – 8/16 bit selection blows.”

Amen, brother. Thanks to music hacker Todd Bailey for the heads-up.


Peter Curet: Generative Origami (built w/ Modelbuilder)

Corrected: The 3D printed trophy is by Chevalvert.

I just found this by accident, and what a nice accident it was: Peter Curet took my Processing Paris Master Class back in March, and subsequently produced the above video of origami structures continuously being created and unfolding. Not only is it a great piece, it was apparently built with Modelbuilder. (Soft rendering courtesy of joons-renderer, which plugs the Sunflow radiosity renderer into Processing.)

I guess I finally get to feel a fraction of the pride Karsten Schmidt must feel seeing people doing awesome things with Toxiclibs. Not that I’m anywhere near reaching the awesomeness of his Toxiclibs Community showreel, but it’s a very good start.

Peter’s origami isn’t the first time Modelbuilder has been spotted in the wild, though. Last year Paris studio Chevalvert used it to produce this 3D printed trophy for a dance award, and Greg Borenstein’s O’Reilly book Making Things See demonstrates how to combine Modelbuilder with a Kinect.

Do you know of any other examples of Modelbuilder being part of a project that made it past the “Messy Sketch” stage and on to the next level of “Thing of Beauty”? Let me know: marius at mariuswatz com.

Better yet, post it to Flickr in the new Modelbuilder group I just created: http://www.flickr.com/groups/modelbuilder/.

Peter Curet: Conway Origami City

BLoc-D: IDILL 2011


This Q&A is part of a biweekly series of posts highlighting common questions encountered by technophiles and answered by users at Stack Exchange, a free, community-powered network of 80+ Q&A sites.

golergka asks:

I'm working on a project solo and have to maintain my own code. Usually code review is done by someone other than the author, so the reviewer can look at the code with fresh eyes. I don't have that luxury. What practices can I employ to more effectively review my own code?

Answer: Checklist & Refresh (7 Votes)

Aditya Sahay replies:

Read more on Ars Technica…


While experimenting with ways to calculate organic mesh surfaces I’ve tried to avoid 3D Bezier patches, since setting up control points programmatically is a bit of a pain. 2D is bad enough. But, as so often happens, I’ve found myself in a situation where I need a structure that is best described as a Bezier patch.

Paul Bourke comes to the rescue with sample code written in C, which took all of 5 minutes to port to Processing. The code below is all Bourke’s apart from the rendering logic. If you don’t know his repository of miscellaneous geometry code and wisdom, run and have a look. It’s proven invaluable over the years.

An applet version of this sketch can be seen on OpenProcessing.org.

Code: bezPatch.pde

// bezPatch.pde by Marius Watz
// Direct port of sample code by Paul Bourke.
// Original code: http://paulbourke.net/geometry/bezier/

int ni=4, nj=5, RESI=ni*10, RESJ=nj*10;
PVector outp[][], inp[][];

void setup() {
  size(600,600,P3D);
  build();
  smooth();
}

void draw() {
  background(255);
  translate(width/2,height/2);
  lights();
  scale(0.9);
  rotateY(map(mouseX,0,width,-PI,PI));
  rotateX(map(mouseY,0,height,-PI,PI));

  fill(255);
  stroke(0);
  for(int i=0; i<RESI-1; i++) {
    beginShape(QUAD_STRIP);
    for(int j=0; j<RESJ; j++) {
      vertex(outp[i][j].x,outp[i][j].y,outp[i][j].z);
      vertex(outp[i+1][j].x,outp[i+1][j].y,outp[i+1][j].z);
    }
    endShape();
  }
}

void keyPressed() {
  if(key==' ') build(); // space bar generates a new random patch
  if(!online) saveFrame("bezPatch.png"); // save a frame unless running as an applet
}

// Generate random control points, then evaluate the patch surface by
// weighting every control point with the Bernstein blend functions.
void build() {
  int i, j, ki, kj;
  double mui, muj, bi, bj;

  outp=new PVector[RESI][RESJ];
  inp=new PVector[ni+1][nj+1];

  for (i=0;i<=ni;i++) {
    for (j=0;j<=nj;j++) {
      inp[i][j]=new PVector(i,j,random(-3,3));
    }
  }

  for (i=0;i<RESI;i++) {
    mui = i / (double)(RESI-1);
    for (j=0;j<RESJ;j++) {
      muj = j / (double)(RESJ-1);
      outp[i][j]=new PVector();

      for (ki=0;ki<=ni;ki++) {
        bi = BezierBlend(ki, mui, ni);
        for (kj=0;kj<=nj;kj++) {
          bj = BezierBlend(kj, muj, nj);
          outp[i][j].x += (inp[ki][kj].x * bi * bj);
          outp[i][j].y += (inp[ki][kj].y * bi * bj);
          outp[i][j].z += (inp[ki][kj].z * bi * bj);
        }
      }
      outp[i][j].add(new PVector(-ni/2,-nj/2,0));
      outp[i][j].mult(100);
    }
  }
}

// Bernstein basis function: C(n,k) * mu^k * (1-mu)^(n-k), with the
// binomial coefficient computed incrementally to avoid overflow.
double BezierBlend(int k, double mu, int n) {
  int nn, kn, nkn;
  double blend=1;

  nn = n;
  kn = k;
  nkn = n - k;

  while (nn >= 1) {
    blend *= nn;
    nn--;
    if (kn > 1) {
      blend /= (double)kn;
      kn--;
    }
    if (nkn > 1) {
      blend /= (double)nkn;
      nkn--;
    }
  }
  if (k > 0)
    blend *= Math.pow(mu, (double)k);
  if (n-k > 0)
    blend *= Math.pow(1-mu, (double)(n-k));

  return(blend);
}
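A quick way to sanity-check the blend function is to verify that the Bernstein weights form a partition of unity: for any mu in [0,1], the n+1 blend values sum to 1, which is why the patch interpolates smoothly between control points. A standalone check (plain Java, with a lowercase-named copy of the function above, not part of the sketch):

```java
// Standalone sanity check for the Bezier blend weights: for any parameter
// mu, the n+1 Bernstein weights should sum to 1 (a partition of unity).
public class BlendCheck {
    // Same computation as BezierBlend above: C(n,k) * mu^k * (1-mu)^(n-k)
    static double bezierBlend(int k, double mu, int n) {
        double blend = 1;
        int nn = n, kn = k, nkn = n - k;
        while (nn >= 1) {
            blend *= nn; nn--;                     // accumulate n!
            if (kn > 1)  { blend /= kn;  kn--; }   // divide out k!
            if (nkn > 1) { blend /= nkn; nkn--; }  // divide out (n-k)!
        }
        if (k > 0)     blend *= Math.pow(mu, k);
        if (n - k > 0) blend *= Math.pow(1 - mu, n - k);
        return blend;
    }

    // Sum of all n+1 blend weights at parameter mu -- should always be 1.0
    static double weightSum(double mu, int n) {
        double sum = 0;
        for (int k = 0; k <= n; k++) sum += bezierBlend(k, mu, n);
        return sum;
    }

    public static void main(String[] args) {
        for (double mu = 0; mu <= 1.0001; mu += 0.25)
            System.out.printf("mu=%.2f  sum=%.6f%n", mu, weightSum(mu, 4));
    }
}
```

If you modify the blend while experimenting, this invariant is the first thing to re-check: break it and the patch will drift away from its control cage.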


Html css book

Most books on code are visually boring, but HTML & CSS, designed by Jon Duckett, makes the reading experience much more interesting and fun. I haven’t read the book, but I’d guess that the good design helps with the learning experience.


Compare the complex model of what a computer can use to control sound and musical pattern in real-time to the visualization. You see knobs, you see faders that resemble mixers, you see grids, you see – bizarrely – representations of old piano rolls. The accumulated ephemera of old hardware, while useful, can be quickly overwhelmed by a complex musical creation, or visually can fail to show the musical ideas that form a larger piece. You can employ notation, derived originally from instructions for plainsong chant and scrawled for individual musicians – and quickly discover how inadequate it is for the language of sound shaping in the computer.

Or, you can enter a wild, three-dimensional world of exploded geometries, navigated with hand gestures.

Welcome to the sci-fi-made-real universe of Portland-based Christian Bannister’s subcycle. Combining sophisticated, beautiful visualizations, elegant mode shifts that move from timbre to musical pattern, and two-dimensional and three-dimensional interactions, it’s a complete visualization and interface for live re-composition. A hand gesture can step from one musical section to another, or copy a pattern. Some familiar idioms are here: the grid of notes, a la piano roll, and the light-up array of buttons of the monome. But other ideas are exploded into spatial geometry, so that you can fly through a sound or make a sweeping rectangle or circle represent a filter.

Ingredients, coupling free and open source software with familiar, musician-friendly tools:

Another terrific video, which gets into generating a pattern:

Now, I could say more, but perhaps it’s best to watch the videos. Normally, when you see a demo video with 10 or 11 minutes on the timeline, you might tune out. Here, I predict you’ll be too busy trying to get your jaw off the floor to skip ahead in the timeline.

At the same time, to me this kind of visualization of music opens a very, very wide door to new audiovisual exploration. Christian’s eye-popping work is the result of countless decisions – which visualization to use, which sound to use, which interaction to devise, which combination of interfaces, of instruments – and, most importantly, what kind of music. Any one of those decisions represents a branch that could lead elsewhere. If I’m right – and I dearly hope I am – we’re seeing the first future echoes of a vast, expanding audiovisual universe yet unseen.

Previously:
Subcycle: Multitouch Sound Crunching with Gestures, 3D Waveforms

And lots more info on the blog for the project:
http://www.subcycle.org/



With great power comes great learning curves – or maybe not. Csound for Live, just announced this weekend and shipping on Tuesday, brings one of the great sound design tools into the Ableton Live environment. You can use it without any actual knowledge of Csound, without a single line of code — or, for those with the skills, it could transform how you use Csound.

For anyone who thinks music creation software has to be disposable, you’ve never seen Csound. With a lineage going literally to the dawn of digital synthesis and Max Mathews, Csound has managed to stay compatible without being dated, host to a continuous stream of composition and sonic imagination that has kept it at the bleeding edge of what computers can do with audio.

Csound for Live does two things. First, it makes Csound run in real-time in ways that are more performative and, well, “live” than ever before, inside the Live environment. Second, its release marks a kind of “greatest hits” of Csound, pulling some of the platform’s best creators into building new and updated work that’s more usable.

If you’re not a Csound user, you just dial up their work and see what your music can do. If you are, of course, you can go deeper. And if you’re somewhere in between, you can dabble first before modifying, hacking, or making your own code. And that means for everybody, you get:

  • Spectral processors
  • Phase vocoders
  • Granular processors
  • Physical models
  • Classic instruments

More description:

It looks great. It works great. It sounds… beyond great.

CsoundForLive is a collection of over 120 real time audio-plugins that brings the complexity and sound quality of Csound to the fingertips of ANY Ableton Live user – without ANY prior Csound knowledge.

Capitalizing on the design power of Max For Live, what once took pages of text in Csound can now be accomplished in a few clicks of your mouse.

Move a slider on your APC40 and deconstruct your audio through professional quality granular synthesis…

Touch a square of your Launchpad and warp pitch and time with real time FFT processing…

Press letters on your keyboard and create sonically intricate melodies through wave terrain synthesis…

And Dr. Richard Boulanger, unofficial Jedi Master of the Csound movement, instigator of this project, and Berklee School of Music sound and music wizard, posts a bit more:

With my former student, and now partner, Colman O’Reilly, I have been working around the clock for months to collect, adapt, create, wrap, and simplify a huge collection of Csound instruments and make them all work simultaneously and interchangeably in Ableton Live. In this guise, I am able to “hot-swap” the most complex Csound instruments in and out of an arrangement or composition – on the fly. This is something Csound could never do (and still can’t!), but CsoundForLive can, and it makes a huge difference in the playability and the usability of Csound.

Two weeks ago, I played a solo concert in Hanover, Germany, at the first International Csound Conference. There, all of my compositions, from 20 years ago to 20 minutes ago, were performed in real-time using CsoundForLive. Tonight, at the Cycling ’74 Expo in Brooklyn, NY, I will be demonstrating the program; and next week, I will be releasing this huge collection (on Tuesday, October 17th, at 12:01am).

A huge part of the complete collection is FREE, and I hope it will make the creative difference in your (and your student’s) lives that it is making in mine. This is a serious game changer for Csound. Check it out. Dr. B.

If you’re at Expo ’74, do say hello to Dr. B for us (and I think you’ll get some nice surprises with this project).

I’ve got a copy in for testing, so stay tuned. And I’ll be doing some follow-ups with Dr. Boulanger and company.

The only bad news here, of course, is that both a supported version of Ableton Live and Max for Live are required to be able to run Csound in this way. In fact, sounds like we have a nice four-horse race going. Max 6 overhauls how multiple patches work (on top of Max for Live), SuperCollider has its own possibilities for multiple real-time patch loading, someone suggested in comments using pd~ inside Pd to manage multiple Pd creations (something fairly new even to most experienced Pd users), and now we have Csound in Live.

But overall, Csound for Live looks like a no-brainer for Max for Live owners, no question, and an exciting taste of the ongoing convergence of cutting-edge creative sound and code with live music making for everybody. As I hinted at in the Max 6 post, I think it’s suddenly a Renaissance for all these platforms.

http://www.csoundforlive.com/

Silly geeky footnote: With pd~ for Max, I know it’s possible to run Pd for Max. And via another external, Pd can also run Csound. So we could theoretically run Csound in Pd in Max in Live. But let’s not get carried away.

More Videos

