[Image: Hacker Dojo]

Suddenly programming is sexy. Codecademy is drawing hundreds of thousands to its online programming tutorials. “Those jumping on board say they are preparing for a future in which the Internet is the foundation for entertainment, education and nearly everything else … ensuring that they are not left in the dark ages,” says a recent New York Times piece.

The NYT’s Randall Stross went on to write about how “many professors of computer science say college graduates in every major should understand software fundamentals.” At parties these days, people are more impressed when I say I write apps than when I say I’ve had a few novels published. How weird is that?

Is this the long-fabled Triumph of the Geek? If so, it should seem unreservedly great to those of us who started programming when we were ten and haven’t much stopped since. So why does this sudden surge of enthusiasm make me feel so uneasy?

Partly, I suppose, because something like this happened once before, and it didn’t end well. Remember how hackers were hot in the late ’90s, and would-be dot-commers flooded computer-science classes everywhere? Demand for programmers back then was so high — sound familiar? — that companies hired hordes of freshly minted coders whose ability did not match their ambition. Half of every team I worked in back then was composed of people who couldn’t be trusted with anything beyond basic programming grunt work, if that. It’s no coincidence that the best technical team I ever worked with was in 2002, right after the dot-bust weeded out all of the chaff.

But mostly, I think, I’m uneasy because it seems like the wrong people are taking up coding, for the wrong reasons.

It’s disconcerting that everyone quoted in the articles above says they want to be “literate” or “fluent”, to “understand” or to teach “computational thinking.” Nobody says they want to do something. But coding is a means, not an end. Learning how to program for its own sake is like learning French purely on the off chance that you one day find yourself in Paris. People who do that generally become people who think they know some French, only to discover, once in France, that they can’t actually communicate worth a damn. Shouldn’t people who want to take up programming have some kind of project in mind first? A purpose, however vague?

That first cited piece above begins with “Parlez-vous Python?”, a cutesy bit that’s also a pet peeve. Non-coders tend to think of different programming languages as, well, different languages. I’ve long maintained that while programming itself — “computational thinking”, as the professor put it — is indeed very like a language, “programming languages” are mere dialects; some crude and terse, some expressive and eloquent, but all broadly used to convey the same concepts in much the same way.

Like other languages, though, or like music, it’s best learned by the young. I am skeptical of the notion that many people who start learning to code in their 30s or even 20s will ever really grok the fundamental abstract notions of software architecture and design.

Stross quotes Michael Littman of Rutgers: “Computational thinking should have been covered in middle school, and it isn’t, so we in the C.S. department must offer the equivalent of a remedial course.” Similarly, the Guardian recently ran an excellent series of articles on why all children should be taught how to code. (One interesting if depressing side note there: the older the students, the more likely it is that girls will be peer-pressured out of the technical arena.)

That I can get behind. Codecademy and the White House teaming up to target a youthful audience? Awesome. So let’s focus on how we teach programming to the next generation. But tackling a few online tutorials in your 20s or later when you have no existing basis in the field, and/or learning a few remedial dumbed-down concepts in college? I fear that for the vast majority of people, that’s going to be much too little, far too late.

Of course there will always be exceptions. Joseph Conrad didn’t speak a word of English until his 20s, and he became one of the language’s great stylists. But most of us need to learn other languages when we’re young. I’m sorry to say that I think the same is true for programming.

Image credit: Jeff Keyzer, Flickr.


Anti-aliasing has an intimidating name, but what it does for our computer displays is rather fundamental. Think of it this way -- a line has infinite resolution, but our digital displays do not. So when we "snap" a line to the pixel grid on our display, we can compensate by imagineering partial pixels along the line, pretending we have a much higher resolution display than we actually do. Like so:

[Image: 2D anti-aliasing example]

As you can see on these little squiggly black lines I drew, anti-aliasing produces a superior image by using grey pixels to simulate partial pixels along the edges of the line. It is a hack, but as hacks go, it's pretty darn effective. Of course, the proper solution to this problem is to have extremely high resolution displays in the first place. But outside of tiny handheld devices, I wouldn't hold your breath for that to happen any time soon.
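
To make the "partial pixels" idea concrete, here is a minimal sketch in Python. It shades each pixel by how close its center is to an ideal line, which roughly approximates the coverage a real anti-aliased rasterizer would compute; the function names and the distance-based model are my own illustration, not anything from a real renderer.

```python
import math

def line_coverage(px, py, x0, y0, x1, y1):
    """Approximate how much of pixel (px, py) the ideal line covers, in [0, 1]."""
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    # Distance from the pixel center to the line through the two endpoints.
    dist = abs(dy * (px - x0) - dx * (py - y0)) / length
    # Fade from fully covered to empty over roughly one pixel's width.
    return max(0.0, min(1.0, 1.0 - dist))

def render(width=32, height=12):
    x0, y0, x1, y1 = 1.0, 1.5, 30.0, 10.5   # endpoints of a test line
    shades = " .:-=+*#%@"                    # more coverage -> darker character
    for y in range(height):
        row = ""
        for x in range(width):
            c = line_coverage(x + 0.5, y + 0.5, x0, y0, x1, y1)
            row += shades[int(c * (len(shades) - 1))]
        print(row)

if __name__ == "__main__":
    render()
```

Each edge pixel ends up an intermediate shade rather than a hard on/off step, which is exactly what the grey pixels in the figure above are doing.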

This also applies to much more complex 3D graphics scenes. Perhaps even more so, since adding motion amplifies the aliasing effects of all those crawling lines that make up the edges of the scene.

[Image: no anti-aliasing vs. 4x anti-aliasing in a 3D scene]

But anti-aliasing, particularly at 30 or 60 frames per second in a complex, state-of-the-art game with millions of polygons and effects active, is not cheap. Per my answer here, you can generally expect a performance cost of at least 25% for proper 4x anti-aliasing. And that is for the most optimized version of anti-aliasing we've been able to come up with:

  1. Super-Sampled Anti-Aliasing (SSAA). The oldest trick in the book -- I list it as universal because you can use it pretty much anywhere: forward or deferred rendering, it also anti-aliases alpha cutouts, and it gives you better texture sampling at high anisotropy too. Basically, you render the image at a higher resolution and down-sample with a filter when done (see the sketch after this list). Sharp edges become anti-aliased as they are down-sized. Of course, there's a reason why people don't use SSAA: it costs a fortune. Whatever your fill rate bill, it's 4x for even minimal SSAA.

  2. Multi-Sampled Anti-Aliasing (MSAA). This is what you typically have in hardware on a modern graphics card. The graphics card renders to a surface that is larger than the final image, but in shading each "cluster" of samples (that will end up in a single pixel on the final screen) the pixel shader is run only once. We save a ton of fill rate, but we still burn memory bandwidth. This technique does not anti-alias any effects coming out of the shader, because the shader runs at 1x, so alpha cutouts are jagged. This is the most common way to run a forward-rendering game. MSAA does not work for a deferred renderer because lighting decisions are made after the MSAA is "resolved" (down-sized) to its final image size.

  3. Coverage Sample Anti-Aliasing (CSAA). A further optimization on MSAA from NVIDIA [ed: ATI has an equivalent]. Besides running the shader at 1x and the framebuffer at 4x, the GPU's rasterizer is run at 16x. So the depth buffer produces better anti-aliasing, and the intermediate shades of blending produced are finer still.
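
Here is the SSAA sketch referenced in item 1: render the scene at twice the resolution in each axis (4x the samples, hence the 4x fill rate bill), then average each 2x2 block of samples down to one output pixel. The toy render_pixel() "scene" and the box filter are my own illustration, not code from a real renderer.

```python
def render_pixel(x, y):
    """Toy scene: white above a diagonal edge, black below -- a hard, aliased edge."""
    return 1.0 if y < x * 0.37 else 0.0

def render(width, height, scale=1):
    """Render the scene on a (width * scale) x (height * scale) sample grid."""
    return [[render_pixel(x / scale, y / scale)
             for x in range(width * scale)]
            for y in range(height * scale)]

def downsample_2x2(samples):
    """Box filter: average each 2x2 block of samples into one output pixel."""
    out = []
    for y in range(0, len(samples), 2):
        row = []
        for x in range(0, len(samples[0]), 2):
            block = (samples[y][x] + samples[y][x + 1] +
                     samples[y + 1][x] + samples[y + 1][x + 1])
            row.append(block / 4.0)
        out.append(row)
    return out

aliased = render(16, 8)                               # hard 0/1 edge, jagged staircase
antialiased = downsample_2x2(render(16, 8, scale=2))  # edge pixels become greys
```

The edge pixels in antialiased come out as 0.25, 0.5, or 0.75 grey instead of a hard 0 or 1 -- and the price is having rendered four samples for every pixel that survives.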

Pretty much all "modern" anti-aliasing is some variant of the MSAA hack, and even that costs a quarter of your framerate. That's prohibitively expensive, unless you have so much performance you don't even care, which will rarely be true for any recent game. While the crawling lines of aliasing do bother me, I don't feel anti-aliasing alone is worth giving up a quarter of my framerate and/or turning down other details to pay for it.

But that was before I learned that there are some emerging alternatives to MSAA. And then, much to my surprise, these alternatives started showing up as actual graphics options in this season's PC games -- Battlefield 3, Skyrim, Batman: Arkham City, and so on. What is this FXAA thing, and how does it work? Let's see it in action:

[Images: the same scene rendered with No AA, 4x MSAA, and FXAA, plus zoomed close-up crops of each; click through to see the full screens]

FXAA stands for Fast Approximate Anti-Aliasing, and it's an even more clever hack than MSAA, because it ignores polygons and line edges, and simply analyzes the pixels on the screen. It is a pixel shader program documented in this PDF that runs every frame in a scant millisecond or two. Where it sees pixels that create an artificial edge, it smooths them. It is, in the words of the author, "the simplest and easiest thing to integrate and use".

[Image: diagram of the FXAA algorithm]
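
To give a feel for how a purely pixel-based pass can work, here is a heavily simplified, CPU-side sketch in the spirit of FXAA. It is not the actual algorithm from the PDF; this toy version just looks for strong local luminance contrast and averages such pixels with their neighbors. The threshold and names are my own.

```python
def luma(rgb):
    """Approximate perceived brightness of an (r, g, b) tuple in [0, 1]."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def fxaa_like_pass(image, contrast_threshold=0.2):
    """Return a smoothed copy of image, a 2D list of (r, g, b) tuples."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbors = [image[y - 1][x], image[y + 1][x],
                         image[y][x - 1], image[y][x + 1]]
            lumas = [luma(p) for p in neighbors] + [luma(image[y][x])]
            if max(lumas) - min(lumas) > contrast_threshold:
                # High local contrast ("artificial edge"): blend the pixel
                # with its neighbors.
                pixels = neighbors + [image[y][x]]
                out[y][x] = tuple(sum(p[c] for p in pixels) / len(pixels)
                                  for c in range(3))
    return out

# A small test image: a white square on black, i.e. nothing but hard edges.
img = [[(1.0,) * 3 if 2 <= x < 5 and 2 <= y < 5 else (0.0,) * 3
        for x in range(8)] for y in range(8)]
smoothed = fxaa_like_pass(img)
```

The real FXAA goes further: it estimates the direction of the edge from the luminance neighborhood and blends along it, which is why it preserves detail better than a naive blur while still costing only a single full-screen pass.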

FXAA has two major advantages:

  1. FXAA smooths edges in all pixels on the screen, including those inside alpha-blended textures and those resulting from pixel shader effects, which were previously immune to the effects of MSAA without oddball workarounds.
  2. It's fast. Very, very fast. Version 3 of the FXAA algorithm takes about 1.3 milliseconds per frame on a $100 video card. Earlier versions were found to be double the speed of 4x MSAA, so you're looking at a modest 12 or 13 percent cost in framerate to enable FXAA -- and in return you get a considerable reduction in aliasing.
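
As a quick back-of-the-envelope check on those numbers, assuming the ~25% hit for proper 4x MSAA cited earlier and a 60 fps target (rough estimates, not benchmarks):

```python
msaa_cost = 0.25                  # ~25% of framerate for proper 4x MSAA (see above)
early_fxaa_cost = msaa_cost / 2   # "double the speed of 4x MSAA" -> ~12.5%
print(f"early FXAA: ~{early_fxaa_cost:.1%} of framerate")

frame_budget_ms = 1000.0 / 60.0   # ~16.7 ms per frame at 60 fps
fxaa3_ms = 1.3                    # FXAA 3 pass time quoted above
print(f"FXAA 3:     ~{fxaa3_ms / (frame_budget_ms + fxaa3_ms):.1%} of frame time")
```

By that rough math, FXAA 3 at 1.3 ms per frame is even cheaper than the 12-13 percent figure for the earlier versions.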

The only downside, and it is minor, is that you may see a bit of unwanted edge "reduction" inside textures or in other places. One caveat, though it isn't exactly a downside: FXAA can't simply be bolted onto older games. A game has to be coded to run the FXAA pixel shader before it draws its user interface, otherwise the pass will happily smooth the edges of on-screen HUD elements, too.
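
That integration requirement is just an ordering constraint in the frame loop. A hypothetical sketch -- ToyEngine and its method names are stand-ins, not a real engine API:

```python
class ToyEngine:
    """Stub engine; each method just records the order of operations."""
    def __init__(self):
        self.log = []

    def render_scene(self):
        self.log.append("render 3D scene (aliased edges and all)")
        return "scene buffer"

    def run_fxaa_pass(self, scene):
        self.log.append("run FXAA full-screen pass on the scene only")
        return scene

    def draw_hud(self, scene):
        self.log.append("draw HUD/UI on top (its crisp edges stay untouched)")

    def present(self, scene):
        self.log.append("present the finished frame")

def draw_frame(engine):
    scene = engine.render_scene()
    scene = engine.run_fxaa_pass(scene)   # must run before the UI goes on
    engine.draw_hud(scene)
    engine.present(scene)

engine = ToyEngine()
draw_frame(engine)
print("\n".join(engine.log))
```

Running it prints the four steps in order; the important part is simply that the FXAA pass happens before the HUD is drawn.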

The FXAA method is so good, in fact, it makes all other forms of full-screen anti-aliasing pretty much obsolete overnight. If you have an FXAA option in your game, you should enable it immediately and ignore any other AA options.

FXAA is an excellent example of the power of simple hacks and heuristics. But it's also a great demonstration of how attacking a programming problem from a different angle -- treating the screen as a collection of pixels rather than a collection of polygons and lines -- can let you solve computationally difficult problems faster and arguably better than anyone thought possible.



Addison-Wesley has published a new book in my Signature Series. It’s by Robert Daigneau and it’s called Service Design Patterns. It’s a topic that’s already had too many books on it, but I added this one to the series because I think Robert has done a particularly good job of collecting together the best advice on the topic and organizing it into a useful handbook. This is the book that I think ought to become the standard book on the topic.
