Drupal's theming system offers developers and designers a flexible way to override default HTML output when specific portions of the page are rendered. Everything from the name of the currently logged in user to the HTML markup of the entire page can be customized by a plugin "theme".

Unfortunately, this system can be its own worst enemy. Themes are very powerful, but in many cases they're the only place where specific output can be changed without hacking core. Because of this, themes on highly customized production sites can easily turn into code-monsters, carrying the weight of making 'Drupal' look like 'My Awesome Site.'

This can make maintenance difficult, and it also makes sharing these tweaks with other Drupal developers tricky. In fact, some downloadable modules even come with instructions on how to modify a theme to 'complete' the module's work. Wouldn't it be great if certain re-usable theme overrides could be packaged up and distributed for use on any Drupal site? As it turns out, that is possible. In this article, we'll be exploring two ways to do it: a tweaky, hacky approach for Drupal 5, and a clean and elegant approach that's only possible in Drupal 6.


Well I finally got Ogre to compile. Bugger knows how I’m going to graft Openframeworks libraries into it. I only really need sound and the VideoGrabber, but those are still tall orders.

If you want to play with Ogre on a PC you’ll need the following:

Visual Studio 9. Trying to get it to work with Code Blocks is more trouble than it’s worth, especially as it breaks Openframeworks when you get the bare minimum working. Plus VS9 is pretty damn good. I’m amazed that Microsoft made it.
The Ogre SDK
DirectX Runtime

Then you’ll want to read the following:

Installing the Ogre SDK
Setting up an application. No, the Ogre Application Wizard doesn’t work on VS9. This means we have to do some twiddling with project settings to get things working. But despite being a pain, it teaches you how to set up projects properly and will help you in the long run.
My thread in the Ogre Forum about trying to get VC9 to compile Ogre. I didn’t quite read the instructions in the previous link thoroughly, but those instructions also assumed some knowledge I didn’t have. Most of the difficulty of getting Ogre to compile comes down to correct project settings and correct file placement, so you have to use your initiative a little to figure it out. Now I’m at the following stage:

Ogre Tutorials

I’m going to settle in to this stuff now to put off the nightmare that combining Ogre and Openframeworks will be.

Bruce Sterling on the future of interaction design
Magic Pen
Artificial Stupidity
Fruit Mystery game
Cat with Bow Golf game
Floating Head
The Control Master (a Run Wrake film)
Metal Gear Solid 4 game play demo

C++ optimisation strategies
Processing for Javascript

As a side note I got a new phone recently. So I downloaded the latest Mobile Processing and spent one Sunday writing a new and more clever game of Snake vs the Computer for it. It uses the new A* algorithm I built and shows the Snake’s thoughts about which path to take ahead of it.

Snake AI 2


Steven Levithan has come up with a simple way to remove nested patterns using a while loop and a replace:

JAVASCRIPT:

var str = "abc<1<2<>3>4>def";

while (str != (str = str.replace(/<[^<>]*>/g, "")));

// str -> "abcdef"

Notice that the regex in this one-liner doesn't try to deal with nested patterns at all. The while loop's condition replaces instances of <…> (where angled brackets are not allowed in the inner pattern) with an empty string. This repeats from the inside out, until the regex no longer matches. At that point, the result of the replacement is the same as the subject string, and the loop ends.
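To see the inside-out behavior concretely, here is the same one-liner instrumented to record the string after each pass (the `steps` array is my addition for illustration, not part of the original snippet):

```javascript
var str = "abc<1<2<>3>4>def";
var steps = [];

// Each pass removes only the innermost <…> pairs, so the string
// shrinks from the inside out until the regex no longer matches.
while (str != (str = str.replace(/<[^<>]*>/g, ""))) {
  steps.push(str);
}

// steps -> ["abc<1<23>4>def", "abc<14>def", "abcdef"]
// str   -> "abcdef"
```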

You can use a similar approach to grab nested patterns rather than delete them, as shown below.

JAVASCRIPT:

var str = "abc(d(e())f)(gh)ijk()",
    re = /\([^()]*\)/,
    output = [],
    match, parts, last;

while (match = re.exec(str)) {
    parts = match[0].split("\uFFFF");
    if (parts.length < 2)
        last = output.push(match[0]) - 1;
    else
        output[last] = parts[0] + output[last] + parts[1];
    str = str.replace(re, "\uFFFF");
}

// output -> ["(d(e())f)", "(gh)", "()"]

Two Directory Source Control is a work flow where you utilize one directory from source control, one directory that you work out of, and you then shuttle between them at your convenience. In this article, I’ll describe how traditional work flows work, the problems with them, and how you can use the Two Directory option to overcome them. This article assumes you know what source control is, what a merging tool is, and the value they bring to your work flow.

Traditional Source Control Work Flows

Many tools such as Flex Builder, Flash, and Visual Studio have plug-ins that allow integration with source control such as Subversion, CVS, Perforce, and Visual Source Safe. This allows you to check out code & assets, edit them, and check your changes back into source control directly from your IDE of choice. Tools such as Subclipse, a plug-in for Eclipse, allow you to check out & check in code from an SVN repo directly into your project. It also has nice built-in "diffing", the ability to show 2 pieces of code side by side to allow you to see what’s changed, and merge them if you so desire.

The reality, however, is that these work flows aren’t as easy as the marketing makes it sound. The main flaw they all have is conflict resolution.

Conflict Resolution

Conflicts arise when the code you have on your machine is different from what is in source control, whether because the file in source control is newer, someone already has the file checked out in Visual Source Safe, or a folder/package structure has been re-arranged, leaving your structure out of sync. My main gripe with the tools I’ve used is that they do not do an adequate job of making conflict resolution easy. I define easy as simple to understand what the conflict is, how to resolve it, and not interrupting your current work by forcing you to deal with the conflict immediately. To expound on that last part, I agree that you must deal with conflicts when checking in code, but a conflict should not immediately break your current build until you resolve it. Nor should it prevent you from working.

There are 2 scenarios where this irritation shows up. The first is where you check out the latest code from SVN for your project. You then make your Flex project in that directory. Assuming you’ve figured out how to prevent Tortoise SVN from copying .svn folders into your bin directories, you download the latest, and get to work. How long you work, however, is based on how often your team checks in code. Even if you are working on different portions of the code, if someone checks in code that relates to yours, you’ll have to deal with the conflict. Typically, you’ll do an update first, and once you’ve ensured none of the code you are committing needs to be merged via resolved conflicts, you commit your changes.

As your team size grows, however, many people will be checking in code. Since dealing with conflicts isn’t fun, teams will sometimes check in more often. This reinforces the behavior, prompting others to check in sooner as well. While I’m a firm believer that having something compiling sooner rather than later is a great Agile practice to follow to reduce risk, it’s hard to really make headway on your code if you’re focused on getting it compiling quickly so you can check in vs. making progress.

It may seem subtle, but it’s a major attitude adjustment that I perceive as harmful for developers to have. They should enjoy coding their masterpieces, and assuming they are communicating with others and/or their team lead, they should be focusing. A developer’s focus is a very important commodity that shouldn’t be taken lightly. Instead of coding by positive reinforcement, "I can’t wait to check this in for others to see!", they are instead coding by negative reinforcement, "I need to hurry up and check in what I have working so I don’t have to deal with a confusing merge operation that’ll slow me and my team down".

Eclipse combined with Subclipse has some good merging and history management tools. Eclipse has its own local history where it stores multiple versions of your files. You can choose to revert if you need to, and it’ll provide a diffing tool right there for you. My first problem is that the diffing tool sucks. Subclipse is not much better. I last used it in February of 2007; if it has improved since then, let me know. This wouldn’t be a big deal, except when it’s 5:59 pm on a Friday evening and you really, really want to go home. Unless you resolve the conflict, your build will be broken, and you cannot commit till you do so. Suddenly, a good diff tool becomes very important. It’s really frustrating for a developer, and removes motivation to work later. To compound it, most developers I know refuse to leave until their code is compiling. Suddenly an update interferes with this.

The second scenario, which is worse than the above, is package re-factoring. This is when you change the directory of a file, or a whole folder structure. If it is minute enough, you can usually get by, provided it’s not a directory you were working in. If, however, it is AND files have changed, this can be a really long and frustrating process… all the while your code no longer works. You cannot compile JUST because you wanted to do the right thing and check your code in. This is also where you can truly tell whether a diffing tool is good or not: how it helps you resolve folder changes and their relations to files.

One solution to this is to check your code into a branch and merge in later when it is convenient for you.

The last problem with working directly from the repo is actually caused by a feature of a lot of source control applications. This is the ability to only check in a directory with files that you are working on. For example, if your team is using Cairngorm, and all you are doing is ValueObject and Factory work, you can pretty much stay in those 2 directories, and not worry about someone stepping on your toes.

It is a false sense of security, however. The cardinal rule for source control is to only check in code that compiles. This doesn’t mean that it compiles on your machine. What this means is: if you were to check out the entire build from source control, could you get it to compile? Updating just your area of interest from source control, and then committing your changes successfully in that area, in this case the "vo" and "factories" packages, does not mean you have followed this rule. If another developer working on the Cairngorm "delegates" was using your Factories, her/his code would break even though you successfully checked code in. In effect, you broke the build. At one of my older jobs, whoever broke the build had to change their IM icon to a monkey. It sounds trivial, but it really sucked when people asked you why your IM icon was a monkey…

To prevent this, you need to update the entire repo. For some repos, this is not easy to do. The one I’m working with currently is over 12 gigs, with hundreds of thousands of files, both ASCII and binary. That means that if at least 12 developers are working on it at the same time, chances are high a lot of files will be added and changed. Regardless, you only take the hit in the beginning; if you update frequently and have a good diff tool, you increase your ability to ensure your code won’t break the build when you check in. Enterprise tools installed on the source control server itself, like Cruise Control, can actually automate this process, and send emails with information on who checked in code, whether it compiled or not, and what was changed. To be fair, this isn’t a problem with a lot of independent server-side code.

Another minor point is how some source control systems handle locking files. Some allow developers to check out code and "lock" it, which prevents others from checking it in. Some make it impossible to work with because the code then becomes read-only. By having another directory under your control, away from the source folder, you won’t have this problem.

To summarize, there are a lot of problems, even with built-in IDE tools, to working directly in the repo:

  1. conflict resolution prevents you from compiling. Manager says, "Can I see the latest?" Dev replies, "No… I’m merging code."
  2. folder change conflict resolution. "Can you re-factor later?"
  3. pain avoidance code committing. "I’d better hurry up and check in what I have working so I don’t have to merge… to heck with focusing".
  4. not checking out the entire repo, committing code, and breaking the build. "Works on my box…"

Two Directory Source Control Work Flow

To reiterate, Two Directory Source Control is a work flow where you utilize one directory from source control, one directory that you work out of, and you then shuttle & merge between them at your convenience. I’ve been using this work flow for about 2 1/2 years now, and everyone I encounter usually finds it extremely odd that I don’t work directly in the repo, hence the need for this blog entry. I was taught this work flow by Thomas Burleson, one of the people responsible for the Universal Mind Cairngorm Extensions. Beyond the above, his additional justification at the time was that with Flex 1.5, SVN and Tomcat or JRun would do weird things together every so often, resulting in strange compilation bugs. Before Flex 2, you had to compile on the server (it was the law), so you’d set up a local web server on your box, and refresh a web browser page to see your app.

This work flow solves the problems above.

The first thing is that conflict resolution is now done at your convenience. This is a big deal. Suddenly getting the latest from the repo is not a grand affair. You can just get the latest, and go back to work. You can also see what has changed without having to immediately integrate it into your code base if you’re not ready to do so. You can also keep working; since the code you updated is in a totally different directory, the code you’re working on isn’t affected. About to catch a plane and losing wireless? Just update, and you can still focus on coding instead of resolving your changes with someone else when you are incapable of giving them a phone call to discuss. Finally, you can have confidence that when you update, your code will still work. That’s a good feeling.

Secondly, file conflict resolution is challenging, yet fun. Folder conflict resolution, however, is very challenging, and requires a lot of meticulous concentration. Deleting directories can delete a lot of files and folders within them, and thus tick a lot of other developers off if you screw up and check in your screw-up. So you have to be careful. Rather than updating some massive package re-factoring and preparing for a battle with Subclipse, you can instead just let Tortoise SVN do its job; you just right-click and update; done. When you’re ready, and you know your version of code works, you can then merge the 2 pieces using Beyond Compare, or whatever merging tool you use. It’s a lot easier to do this when you have 2 real, physical folder directories vs. a virtual one.

Third, this convenience extends to code committing. Rather than being motivated to commit code out of fear of merging, you can do so when you’re good and ready. Even if you wait 3 days, you know you’ll have plenty of time to merge the changes into your code, get it working, THEN check in, all inside a safe, non-repo area. This confidence in your ability to merge, with the assurance that you have a nice sequestered area away from your repo, motivates you to focus on creating good code vs. creating compiling code for the mere sake of making merging easier.

Fourth, because this work flow forces you to continually get the latest code from the entire repo, you always have the latest build, not just a portion of the build. This ensures that when you check in code, you know it won’t break the build, because you already have the entire build and have tested it on your local box. Typically anything wrong from there is a configuration issue, not a code-out-of-sync issue.

You need 2 things to utilize this work flow. The first is your folder structure and the second is a good merge / diffing tool.

Folder Structure

When setting up a project, you create your project folder. Inside of that, you make 2 directories called "dev" and "local". The "dev" folder is where your repo goes. This can be an SVN/CVS repo, or a Perforce work space. In the case of Subversion, I’ll usually have Tortoise check out trunk, tags, and branch here.

To say it another way, "dev" is where your source control code goes. The "local" folder is where your copy goes. You play and work inside of local. When you are ready to merge in changes, you use a merge tool to do so.

dev and local folders

Setting Up Beyond Compare

For the merge tool, I use Beyond Compare. While it’s only for PC, I purchased CrossOver, a Windows emulation tool for Mac, just so I could retain the same work flow on both my PC and Mac.

BeyondCompare has a few setup steps that you only have to do once. These are:

  1. Change default keyboard short cuts.
  2. Creating a new "session" to have your repo on the left, local code on the right.
  3. Showing only mismatches.
  4. Making your session expand folders & expand only folders with differences.
  5. Setting up BeyondCompare to filter (aka not show) certain files and folders
  6. Adjusting the file comparison control to be Size & CRC based
  7. Full Refresh!
  8. Synchronize to Right.

Let’s go over these steps in detail.

1. Change Default Keyboard Shortcuts

When you get the latest code, you’ll want to shuttle it over to your local repository when you’re ready. If it’s new or unchanged files, you’ll want to do this quickly. I find Control + Right Arrow and Control + Left Arrow (for putting local code to dev) are the best shortcuts for this.

Go to Tools > Options, and select "Keyboard" from the bottom left. Scroll down till you see "Copy to Right". Click it and press Control + Right Arrow key. Click "Copy to Left" and press Control + Left Arrow key.

BeyondCompare keyboard shortcuts

2. Creating a new "session" to have your repo on the left, local code on the right.

The easiest way to create a new session in BeyondCompare is to go "Session > New Empty". A session in BC is your saved settings with a name. A session will store what directory you are viewing on the left, what directory you are comparing to on the right, and other comparison options. You typically create 1 session per project. I typically put the dev, your source code repository on the left, and local, your copy, on the right.

Dev on the left, Local on the right

3. Showing only Mismatches

Click the "Only Mismatches" button. This will filter the files to only show files and folders that aren’t the same. Blue ones will be files that do not exist on the other side. Red indicates files that are different, and gray indicates a file that’s older than the file on the other side.

Show only mismatches

4. Making your session expand folders & expand only folders with differences

You want to have the views automatically expand folders if they are different. To do this, you need to check some check boxes in 2 places. The first place is in the Session settings under "Session > Session Manager". For your session, you’ll see a section called "Folder Handling". Check all 3 check boxes.

The second place is in "Session > Comparison Control". You’ll see the same 3 check boxes at the bottom of the dialogue; click all 3.

Expand folders automatically and only with differences

You’ll have to do this again in another dialogue; see step #6.

5. Setting up Beyond Compare to filter (aka not show) certain files and folders

There are some files you don’t care about. For Subversion, for example, these are ".svn". For Flex Builder, this includes .actionScriptProperties, .settings, etc. For Mac users who show all files, this is .DS_Store. You can filter these files out by clicking the glasses icon, and typing in the files and folders you want to filter out, 1 per line, and clicking OK.

While this will filter out files in the view, it WILL copy them if you shuttle folders back and forth. I never copy folders across in Beyond Compare, only files. The exception to this rule is if you do an "Actions > Synchronize Folders". This will obey your filter settings. If weird things start happening in your code, make sure you didn’t accidentally copy a folder with a bunch of .svn folders and files in it.

6. Adjusting the file comparison control to be Size & CRC based

To ensure you get a decent difference between files, choose both size & crc checking for file comparisons. You can choose this radio button in "Session > Comparison Control". See above picture.

modify file comparison and re-do folder expansion

7. Do a full refresh!

This will compare your entire folder & file structure. You can do so via "View > Full Refresh".

8. Synchronize to Right

If this is your first time shuttling files from your dev folder to your local folder, you can go "Actions > Synchronize Folders > Synchronize to Right".

Conclusions

That’s it, you’re ready to go! Get the latest revision, do a full refresh, merge, code. Later, you can get the latest revision, merge your code left, and check in with confidence! In the end, Two Directory Source Control allows you to still compile your code when checking out the latest revision, makes it easier to resolve large re-factoring efforts, allows code committing to be a pleasurable endeavor, and ensures you always check out the whole repo so you can confirm it builds on your machine with your latest changes. I hope it helps you out too.

Let me digress here a minute and point out how disappointed I am in the Mac platform’s support of development tools. Tortoise SVN and Beyond Compare are fantastic tools for the PC. I have not found the equivalent for Mac. SvnX and the Tortoise equivalent for Mac really, really suck, especially when you grew up with Tortoise SVN. Same thing on the merge front; I have yet to find a good GUI-based merge tool for Mac. I’ll probably install Parallels on my new Mac, and stick to the PC versions of Tortoise SVN & Beyond Compare since I still can’t shake FlashDevelop.


AJAX Libraries API

I just got to announce the Google AJAX Libraries API, which exists to make Ajax applications that use popular frameworks such as Prototype, Script.aculo.us, jQuery, Dojo, and MooTools faster and easier for developers.

Whenever I wrote an application that uses one of these frameworks, I would picture a user accessing my application, already having 33 copies of prototype.js, and yet downloading yet another one from my site. It would make me squirm. What a waste!

At the same time, I was reading research from Steve Souders and others in the performance space that showed just how badly we are doing at providing these libraries. As developers, we should set up the caching correctly so we only send that file down when absolutely necessary. We should also gzip the files to browsers that accept them. Oh, and we should probably use a minified version to get that little bit more out of the system. We should also follow the practice of versioning the files nicely. Instead, we find a lot of jquery.js files with no version, that often have little tweaks added to the end of the files, and caching that is not set up well at all, so the file keeps getting sent down for no reason.

When I joined Google I realised that we could help out here. What if we hosted these files? Everyone would see some instant benefits:

  • Caching can be done correctly, and once, by us... and developers have to do nothing
  • Gzip works
  • We can serve minified versions
  • The files are hosted by Google which has a distributed CDN at various points around the world, so the files are "close" to the user
  • The servers are fast
  • By using the same URLs, if a critical mass of applications use the Google infrastructure, when someone comes to your application the file may already be loaded!
  • A subtle performance (and security) issue revolves around the headers that you send up and down. Since you are using a special domain (NOTE: not google.com!), no cookies or other verbose headers will be sent up, saving precious bytes.

This is why we have released the AJAX Libraries API. We sat down with a few of the popular open source frameworks and they were all excited about the idea, so we got to work with them, and now you have access to their great work from our servers.

Details of what we are launching

You can access the libraries in two ways, and either way we take the pain out of hosting the libraries, correctly setting cache headers, staying up to date with the most recent bug fixes, etc.

The first way to access the scripts is simply by using a standard <script src=".."> tag that points to the correct place.

For example, to load Prototype version 1.6.0.2 you would place the following in your HTML:

HTML:

<script src="http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.2/prototype.js"></script>

The second way to access the scripts is via the Google AJAX API Loader's google.load() method.

Here is an example using that technique to load and use jQuery for a simple search mashup:

HTML:

<script src="http://www.google.com/jsapi"></script>
<script>
  // Load jQuery
  google.load("jquery", "1");

  // on page load complete, fire off a jQuery JSON-P query
  // against Google web search
  google.setOnLoadCallback(function() {
    $.getJSON("http://ajax.googleapis.com/ajax/services/search/web?q=google&v=1.0&callback=?",

      // on search completion, process the results
      function (data) {
        if (data.responseData.results &&
            data.responseData.results.length > 0) {
          renderResults(data.responseData.results);
        }
      });
  });
</script>

You will notice that the version used was just "1". This is a smart versioning feature that allows your application to specify a desired version with as much precision as it needs. By dropping version fields, you end up wildcarding a field. For instance, consider a set of versions: 1.9.1, 1.8.4, 1.8.2.

Specifying a version of "1.8.2" will select the obvious version. This is because a fully specified version was used. Specifying a version of "1.8" would select version 1.8.4 since this is the highest versioned release in the 1.8 branch. For much the same reason, a request for "1" will end up loading version 1.9.1.

Note, these versioning semantics work the same way when using google.load and when using direct script urls.
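The selection rule described above can be sketched as a small function (a hypothetical illustration of the semantics, not Google's actual resolver):

```javascript
// Resolve a possibly-partial version request ("1", "1.8", "1.8.2")
// to the highest available version whose leading fields match.
function resolveVersion(requested, available) {
  var req = requested.split(".");
  var matches = available.filter(function (v) {
    var parts = v.split(".");
    return req.every(function (field, i) { return parts[i] === field; });
  });
  // sort candidates numerically, highest first
  matches.sort(function (a, b) {
    var pa = a.split(".").map(Number), pb = b.split(".").map(Number);
    for (var i = 0; i < Math.max(pa.length, pb.length); i++) {
      var diff = (pb[i] || 0) - (pa[i] || 0);
      if (diff) return diff;
    }
    return 0;
  });
  return matches[0];
}

var versions = ["1.9.1", "1.8.4", "1.8.2"];
// resolveVersion("1.8.2", versions) -> "1.8.2"
// resolveVersion("1.8", versions)   -> "1.8.4"
// resolveVersion("1", versions)     -> "1.9.1"
```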

By default, the JavaScript that gets sent back by the loader will be minified, if a minified version is available. Thus, for the example above we would return the minified version of jQuery. If you specifically want the raw JavaScript itself, you can add the "uncompressed" parameter like so:

JAVASCRIPT:

google.load("jquery", "1.2", {uncompressed: true});

Today we are starting with the current versions of the libraries, but moving forward we will be archiving all versions from now onwards so you can be sure they are available.

For a full listing of the currently supported libraries, see the documentation.

Here I am, talking about what we are doing in two short slides:

The Future

This is just the beginning. We obviously want to add more libraries as you find them useful. Also, if you squint a little you can see how this can extend even further.

If we see good usage, we can work with browser vendors to automatically ship these libraries. Then, if they see the URLs that we use, they could auto load the libraries, even special JIT'd ones, from their local system. Thus, no network hit at all! Also, the browser could have the IP addresses for this service available, so they don't have the hit of a DNS lookup. Longer lived special browser caches for JavaScript libraries could also use these URLs.

The bottom line, and what I am really excited about, is what this could all mean for Web developers if this happens. We could be relieved of the constant burden of having to re-download our standard libraries all the time. What other platform makes you do this?! Imagine if you had to download the JRE every time you ran a Java app! If we can remove this burden, we can spend more time fleshing out functionality that we need, and less time worrying about the actual download bits. I am all for lean, but there is more to life.

Acknowledgements

I want to acknowledge the other work that has been done here. Some libraries such as jQuery and Dean Edwards' Base were already kind of doing this by hot-linking to their Google Code project hosting repository. We thought this was great, but we wanted to make it more official, and open it up to libraries that don't use our project hosting facilities.

Also, AOL does a great job of hosting Dojo already. We recommend using them for your Dojo needs, but are proud to also offer the library. Choice is good. Finally, Yahoo! placed the YUI files on their own CDN for all to use.


Tink posted a great library manager for using external assets in your Flash projects. When you get deep into projects, you end up having to roll your own, which might end up being project-specific; instead, you can just grab this from Tink, and it is nice and standardized for you.

Here’s an example of our Library & LibraryManager classes that we use in some of our Flex and AS 3.0 projects to manage our external assets stored in SWF’s.

The classes enable you to create multiple libraries of embedded (retaining and giving access to code) or loaded SWF’s.

You can create instances of Library wherever you want, but you can also create them through the LibraryManager, giving you a single class to gain access to all your Library instances.

As you develop more and more Flash/Flex projects with AS3, these types of utilities come in handy. Others that come to mind are Arthur Debert’s BulkLoader and polygonal labs’ Data Structures for Game Developers, which are all great kits.

Add Tink’s Library and LibraryManager to your arsenal today! Thanks Tink.


I’ve been playing with openFrameworks (currently in prerelease) for the past couple of days for a project I’m currently doing. I have to say that it is extremely powerful and a great introduction to C++ programming.

“OpenFrameWorks is a new open source, cross platform, c++ library, which was designed by Zachary Lieberman (US) and Theo Watson (UK) to make programming in c++ for students accessible and easy. In it, the developers wrap several different libraries like opengl for graphics, quicktime for movie playing and capturing, and free type for font rendering, into a convenient package in order to create a simple, intuitive framework for creating projects using c++. It is designed to work in freely available compilers, and will run under any of the current operating systems.”

Coming from a processing.org background, which has satisfied all my needs till now, I really got interested in openFrameworks because of my increasing use of computer vision. While I never had the confidence to use C++ from scratch to build computer vision software, openFrameworks offers a lot of the building blocks to let you go straight to the design phase. Check out the video above to find out more about the scope of openFrameworks, and join the mailing list.


Update: Bryan Veloso recently re-launched his personal site Avalonstar using the 16 column Photoshop template from the 960 Grid System. Special thanks to Bryan, for letting me use a screenshot of his work on 960.gs.

960 Grid System

As you might have heard, I recently launched a site for the templates that I created to use in my personal and professional projects. For lack of a better name, and because the numerals made for a nice logo, I’ve decided to simply refer to it as the 960 Grid System. It’s funny, because even though all I did was post a single tweet about it before going to bed Sunday night, viral word of mouth kicked in, and I’m already getting inquiring emails.

I should note that the 180 KB download size isn’t for the CSS framework alone. That download bundle is for all the files: printable sketch sheets, design templates for Fireworks, Photoshop, OmniGraffle and Visio, plus the CSS and HTML. The 960.css file itself is only 4 KB when compressed.

First of all, I made this for myself, to scratch a design itch. Not everyone will like it, and I’m okay with that. There’s an old saying: “You can please everyone some of the time, and some people all of the time.” I am not trying to accomplish either. I am simply sharing the grid dimensions that I have found myself gravitating towards over the past year or so. If others can benefit from that, then all the better. That being said, read on.

Browsers

I built the code in my grid system to accommodate all A-Grade browsers, as determined by Yahoo. At the time of this writing, that chart looks like…

A-Grade Browser Chart

Note that IE5.x is nowhere to be found. That’s for good reason. There’s not enough market share for it to be important to Yahoo. Not only that, but even Microsoft has discontinued support for it. If you’re stuck in a position where you’re still having to code for that ancient browser, the best suggestion I have is to go peruse Authentic Jobs and find better employment.

Background

I first became interested in grid design via reading articles by Khoi Vinh and Mark Boulton. Initially, I must admit that I didn’t fully grasp the concept. But, the more I thought about it, the more I realized that this time-tested practice from printing works well on the web. Like it or not, web pages essentially revolve around boxy shapes. Inevitably, these rectangles have to come together in some way or another to form a design.

As long as we’re using shapes consisting of right angles, we might as well make some logical sense of it all. Some time after the initial work of Khoi and Mark, I happened upon an article by Cameron Moll, heralding a width of nine-hundred and sixty pixels as the optimal size for design. Basically, 1024×768 is the new 800×600, and 960 makes for a good magic number to utilize the wider canvas on which we paint.

After that seminal article, I have been basically designing on some subdivided variant of 960 pixels ever since. Last spring (2007), I found my groove, so to speak. It happened when I began to redesign my personal site. It’s still in the works, but here’s a sneak peek. As you can see, I’m eating my own dog food, as the future version of my site will use a 16 column grid.

Sneak Peek

I have yet to sit down and complete the redesign, since I am still finishing up my master’s degree via correspondence, and because other paying freelance work came up, plus day job, etc. Chronologically though, I technically “began” work on my grid system before the release of Blueprint.

That’s not a value judgement, but helps to explain “why another grid framework,” because I had already started down that path by the time Blueprint was announced. I have used BP, on a project where I was required to use a documented code base, to ease future maintenance of the site. Though it was agreed upon, the designer on the project didn’t design according to BP’s dimensions. That was partially my fault, for not better communicating how it worked. By the end of the project, I had basically overridden nearly everything BP brought to the table.

That’s when I got to thinking, wouldn’t it be great if there was a way to streamline things, to get designers and developers thinking and communicating better? And, what if IAs were included in that workflow? That inevitably led up to what is now the 960 Grid System.

What it’s not

By far, the most common question I have received via email has been “How does this differ from Blueprint?” People seem to ask the question almost antagonistically, as if to say — “You’re wasting your time, because Blueprint already exists, and I like it better, so nanny boo-boo.”

For those people, I say, bravo. Please, continue to use whatever works best for you, and whatever you are most familiar and/or comfortable with. I am certainly not looking to draw up battle lines, or trying to change anyone’s mind about what framework to use, or even if one should be used.

One of the glaring omissions, or nice features, depending on how you look at it, is the way the 960 handles (or doesn’t) typography. There is a text.css file included, but this is mainly to ensure that there is at least something in place, so that as you do rapid prototyping, common elements such as headings, paragraphs and lists have basic styling.

I haven’t gone out of my way to establish a vertical rhythm for text, as is described in the Baseline article on ALA. It’s not that I don’t see the value in it, I do. I think it’s an awesome idea, and a noble pursuit. However, it is fragile. All it takes is for a content editor to upload an arbitrarily sized, 173 pixel tall image, and all the subsequent elements are off-beat.

I see this as being more of a design choice, than something that needs to be standardized. In fact, most things that might be considered unique to a site design have been left out. I have specifically not set any text or page background colors. Call me lazy if you want, but that’s one thing I found myself consistently overriding when using BP. I’d wonder things like: “Why is there a <th> background color?”

I have also not put in pull or push image/quote styles, as those are things I rarely use, and I consider that to be a bit more design and contextually content oriented than related to page layout and prototyping. It’d be easy enough to write as a one-off in your own design stylesheet.

So, I hope you’ll forgive me if my grid system isn’t all you think it should be. Much as I like the Beatles, I don’t exactly want to hold your hand.

Texting

After that underwhelming explanation of what the 960 Grid System doesn’t do, let me highlight what I consider to be the features. First off, some love for Linux users. The default font-family order is Helvetica, Arial, Liberation Sans and FreeSans, with a generic fallback of sans-serif.

One thing I’ve found about Ubuntu is that the default sans-serif font is actually closer in width to Verdana than Helvetica or Arial. It’s not a big deal, but if you want text that looks relatively the same cross-browser, you need more than just a generic fallback for Linux users. This can be especially important if using the size of a particular font to approximate a certain width. It stinks to see carefully crafted navigation break to the next line.

In talking to my friend Yannick, he advised me that Liberation Sans is available by default on both Fedora and Red Hat distributions of Linux. It is also freely available under the GPL, so what’s not to like? If I had to describe it, I’d say the numerals are reminiscent of Verdana, but the rest of the characters map nicely to the dimensions (if not glyphs) of Helvetica.

From what I could tell via Jon Christopher’s article, FreeSans is the nicest looking equivalent to Helvetica available by default on Ubuntu. So, the text.css font-family list handles OS X, Windows, and Linux builds.
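Stated as CSS, the stack described above might look like this (a sketch; the exact ordering matches the prose, but the quoting of multi-word family names is my own):

```css
/* Cross-platform sans-serif stack: OS X and Windows get Helvetica/Arial,
   Fedora/Red Hat get Liberation Sans, Ubuntu gets FreeSans,
   with a generic fallback for everything else. */
body {
  font-family: Helvetica, Arial, "Liberation Sans", FreeSans, sans-serif;
}
```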

Sizing + Spacing

Before we get into the pedantic debate about pixel font sizes, let me just say — Let’s not go there. I’m aware of all the arguments, and have even toyed with making elastic layouts. For me, it’s about ROI. You can spend countless hours toying with font-size inheritance, like I did on the Geniant Blog, or you can set sizes in pixels and get on with your life.

I’m aware that Blueprint’s text equates to 12px, but 13px is what YUI‘s fonts.css file specifies, and I actually quite like the way they’ve done their typography, aside from percentage based sizing being a bear to wrangle. We used it on the Geniant Blog, to great (albeit time-consuming) effect.

While I haven’t painstakingly set a vertical baseline, body text is 13px with a line-height of 1.5 (unit-less) = 19.5px. Most block-level elements have a bottom margin of 20px. So, that right there gets you pretty close to a consistent baseline. Just need to tweak line-heights on headings, etc.
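As a sketch, the sizing described above boils down to a few rules (the selector list for block-level elements is illustrative, not the exact one in text.css):

```css
/* 13px body text with a unit-less line-height of 1.5 = 19.5px per line. */
body {
  font-size: 13px;
  line-height: 1.5;
}

/* A 20px bottom margin on most block-level elements lands close to a
   consistent baseline without painstaking vertical-rhythm math. */
p, ul, ol, dl, pre, table {
  margin-bottom: 20px;
}
```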

The two block-level elements to which I did not apply bottom margin are blockquote and form. In strict doctypes, both of these require other block level elements to be inside of them. For instance, paragraph tag(s) inside a blockquote or fieldsets in a form. By not having any margins themselves, other blocky children can stack neatly inside.

I’ve also re-indented list items by 30 pixels on the left, so if you want hanging bullets, be sure to set that back to zero. I actually think hanging punctuation is pretty cool, but have yet to meet a client who agrees, so I erred on the side of caution in my CSS, in hopes of mitigating future arguments.

:Focus

I just wanted to briefly mention this, because it has to do with accessibility. While I personally think that Eric Meyer’s reset.css removal of :focus outlines is aesthetically pleasing, I think the right thing to do is to retain focus borders, for users that navigate links via a keyboard instead of a mouse.

This has been a longstanding gripe of mine with the Opera browser — the utter non-existence of any sort of :focus indicator. Anyway, you can remove the a:focus code in text.css if you want to have cleaner looking links, but know that you do so at the expense of accessibility.
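A minimal way to retain a focus indicator, per the reasoning above (the exact rule shipped in text.css may differ):

```css
/* Keep a visible indicator for users who tab through links with a
   keyboard. Remove this at the expense of accessibility. */
a:focus {
  outline: 1px dotted;
}
```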

Columns

One of the key differences between Blueprint and 960, aside from nomenclature, is the way the grid columns receive their gutters. In Blueprint, all the gutters are 10px wide (a bit too narrow for my taste), and are to the right side. The last column in each row needs class="last" to remove the margin. This means that neither the left nor right sides of a container have any spacing. For the most part, this is no big deal, but if a user’s browser is constrained, body text actually touches the browser chrome.

In the 960 Grid System, each column has 10px margin on both the left and right sides. This allows for the overall container to have a 10px buffer at the edges, and the margins combine between columns to form 20px gutters (a bit more room to breathe). Additionally, there is no need to add an extra class for the last column in a row.

In some unique cases, you might want grid units nested within a parent grid unit. In this case, you will need class="alpha" and class="omega" on the first/last child in each row. It’s a bit more work, but is more of a fringe situation, so it won’t come up as often.

I specifically chose grid_XX as the naming convention because for some reason, having a class named span-XX bothered my brain, since <span>, <td colspan="X"> and <colgroup span="X"> already exist in HTML. I guess I just like the idea of there not already being several tag/attributes with the same naming conventions. I also disliked the monotony of typing class="column ..." repeatedly, though to their credit, the BP guys have eliminated that necessity in their most recent build.

Much like with Blueprint, you can add empty columns before/after a grid unit, by specifying a class of prefix_XX and/or suffix_XX. The naming convention there was simply one of personal preference, as I admittedly tend to get confused by the word append, forgetting if it goes before or after.
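Putting the column rules together, a few units from the 12-column variant might be sketched like this (widths assume a 960px container; the real 960.css generates rules for every grid_XX):

```css
/* Every grid unit carries 10px margins on both sides, so adjacent
   columns combine into 20px gutters, and the container keeps a
   10px buffer at its edges. No class="last" needed. */
.grid_3, .grid_4, .grid_6 {
  float: left;
  margin-left: 10px;
  margin-right: 10px;
}

/* One column of 12 is 960 / 12 = 80px including its gutter,
   so each unit's width is (columns × 80) − 20. */
.grid_3 { width: 220px; }
.grid_4 { width: 300px; }
.grid_6 { width: 460px; }

/* For nested grid units only: strip the outer margins on the
   first and last child in a row. */
.alpha { margin-left: 0; }
.omega { margin-right: 0; }

/* Empty leading/trailing columns via padding; one column = 80px. */
.prefix_1 { padding-left: 80px; }
.suffix_1 { padding-right: 80px; }
```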

Internet Explorer

Someone emailed me today asking about IE compatibility, and why there is no ie.css file included with 960. Simply put, it wasn’t needed to fix any of the grid aspects. IE6 has a wicked bug which doubles the margin on any floated elements. This could be a huge problem, but is easily fixed by adding display: inline to that which is floated. This causes no side-effects in any other browsers, so it simply lives in the main 960.css file.

You may find that IE6 and IE7 both get the spacing around <hr /> slightly off, by 7 pixels above and beneath, which is easily fixed by adjusting margins accordingly. To me, that’s an acceptable issue, and not really worth adding another CSS file with conditional logic in the markup. If you do end up creating a separate IE stylesheet, keep that in mind, and address it.
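The two IE issues just described can be expressed in a couple of lines (the hr margin value is illustrative; measure against your own spacing):

```css
/* IE6 doubles the margin on floated elements. Adding display: inline
   cancels the bug and has no side-effects in other browsers, so it
   can live in the main 960.css rather than a conditional file. */
.grid_4 {
  float: left;
  margin: 0 10px;
  display: inline;
}

/* If you do ship a conditional IE stylesheet, pull back the ~7px
   that IE6/IE7 add above and below horizontal rules. */
hr {
  margin: 13px 0; /* 20px intended, minus the 7px IE adds */
}
```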

In the Clear

Lastly, I wanted to talk about the clearing methods included in the 960.css. First off is my personal preference, adding an innocuous bit of markup to clear all elements. I have already written about it extensively, so I won’t go over all that again. Basically, add class="clear" to a <span>, <div>, <li> or <dd> that you want to disappear. The only effect it will have is to clear floats.

The other method is for all you markup purists out there, who don’t want to dirty your HTML. Instead, you can insert markup via your CSS. This technique is well documented already as well. Essentially, add class="clearfix" to an element to make it clear its own floated children.
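Both clearing approaches can be sketched as follows (class names match the ones described; the :after technique is the widely documented “clearfix” hack, and the exact declarations in 960.css may vary):

```css
/* Method 1: an explicit clearing element in the markup, e.g.
   <div class="clear"></div>, which collapses to nothing visible
   and exists only to clear floats. */
.clear {
  clear: both;
  display: block;
  overflow: hidden;
  visibility: hidden;
  width: 0;
  height: 0;
}

/* Method 2: markup-free clearing via generated content. */
.clearfix:after {
  content: ".";
  display: block;
  clear: both;
  visibility: hidden;
  height: 0;
}

/* Old IE ignores :after; the non-standard zoom property triggers
   hasLayout so the element still contains its floats. */
.clearfix {
  zoom: 1;
}
```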

Licensing

The whole bundled package is free of charge, dual licensed under GPL and MIT. While I’m no suit, it is my understanding that these two licenses ensure that you can use the grid system in just about any situation. I must especially thank my friend Michael Montgomery for his informal advice, helping me to wrap my mind around all the legalese. He is a patent attorney by day, web ninja by night, and helped me wordsmith the 960 copy.

Finito

That’s all folks. Hopefully you’ll find the 960 Grid System to be helpful in sketching, wireframing, designing and/or coding. Future plans include a tutorial on how to use jQuery to add styling hooks to form elements, since I know from experience that is no cup of tea. I’m not sure JS belongs in a CSS framework, so I might just make that a separate tutorial series.
