


Welcome to the preview release of codename "Alchemy." Alchemy is a research project that allows users to compile C and C++ code that is targeted to run on the open source ActionScript Virtual Machine (AVM2). The purpose of this preview is to assess the level of community interest in reusing existing C and C++ libraries in Web applications that run on Adobe® Flash® Player and Adobe AIR®.

With Alchemy, Web application developers can now reuse hundreds of millions of lines of existing open source C and C++ client or server-side code on the Flash Platform.  Alchemy brings the power of high performance C and C++ libraries to Web applications with minimal degradation on AVM2.  The C/C++ code is compiled to ActionScript 3.0 as a SWF or SWC that runs on Adobe Flash Player 10 or Adobe AIR 1.5.

Alchemy is primarily intended to be used with C/C++ libraries that have few operating system dependencies. It is ideally suited for computation-intensive use cases, such as audio/video transcoding, data manipulation, XML parsing, cryptographic functions, or physics simulation; performance can be considerably faster than ActionScript 3.0 and anywhere from 2-10x slower than native C/C++ code. Alchemy is not intended for general development of SWF applications using C/C++.

Download and Discuss

With Alchemy, it is easy to bridge between C/C++ and ActionScript 3.0 to expand the capabilities of applications on the Flash Platform, while ensuring that the generated SWCs and SWFs cannot bypass existing Flash Player security protections.

Adobe is providing some example libraries, and developers are encouraged to share their ported libraries.


Why Robotlegs for Games?

The following covers why you would utilize the Robotlegs MVC architecture in a lightweight, Lua game for Corona. I’ll discuss the reasoning behind refactoring, the common problems you run across in game architecture, and their solutions, with the pros and cons of each. Finally, I conclude with how Robotlegs helps solve these common issues, specifically in a Lua & Corona context.

I’ve started the port of Robotlegs to Lua to work in Corona if you’re interested in learning more.

Refactor First

In learning Lua & Corona, I quickly ran into 2 problems with the game I’m working on. The main code controller became way too large to manage. It wasn’t quite a God object (yes, the anti-pattern) because a lot of the Views themselves handled some of the necessary messaging & logic. Regardless, time to refactor.

The first thing was to start using classes. Lua doesn’t have classes. There are a bunch of resources out there that explain how to do them in various incarnations. I went with a version by Darren Osadchuk, creator of the Corona bundle for TextMate.
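To illustrate the general idea, here’s a minimal sketch of the common metatable-based class style in Lua (the names are illustrative; this isn’t the exact TextMate-bundle implementation referenced above):

```lua
-- Minimal metatable-based "class" sketch. __index makes method lookup on
-- instances fall through to the Player table.
local Player = {}
Player.__index = Player

function Player.new(name, hitPoints)
  local self = setmetatable({}, Player)
  self.name = name
  self.hitPoints = hitPoints
  return self
end

function Player:takeDamage(amount)
  self.hitPoints = self.hitPoints - amount
end

local hero = Player.new("hero", 100)
hero:takeDamage(30)
print(hero.hitPoints) -- 70
```

The colon syntax (`hero:takeDamage(30)`) is sugar for passing the instance as the implicit `self` parameter.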

The second thing was to start using packages. Lua doesn’t have packages. There are various ways of implementing them which are all confusing as hell. The easiest way is to just put your code in a folder and use that path in the require function, but you can’t utilize folders for Android in the latest build of Corona at the time of this writing.
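As a sketch of the folder-path approach, `require` maps dots to folder separators. The module name below is hypothetical, and `package.preload` is used here only so the example runs standalone; normally `scripts.utils` would live in `scripts/utils.lua`:

```lua
-- package.preload lets us register a module in-memory for demonstration;
-- in a real project this body would be the file scripts/utils.lua.
package.preload["scripts.utils"] = function()
  local utils = {}
  function utils.clamp(x, lo, hi) return math.max(lo, math.min(hi, x)) end
  return utils
end

-- Dotted require path corresponds to the scripts/ folder on disk.
local utils = require("scripts.utils")
print(utils.clamp(150, 0, 100)) -- 100
```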

These are the low-hanging fruit of refactoring, and they have made a huge difference in code readability & maintainability.

Burdens of Encapsulation

The downside of OOP is that it’s encapsulated. While making something a black box with internally managed state & implied reusability is good on the surface, it pushes the burden of managing communication between those encapsulated objects onto someone else.

You have 3 options: globals, messaging dependencies, or mediation.

Global Variables: The Good

Most people agree global variables are bad… in larger software development. In smaller-scoped game software, not so much. Let’s explore their benefits.

In a lot of languages, globals are part of the language design, irrespective of how the runtime and tools/IDEs handle them. Some languages make them first-class citizens with mechanisms and rules around how they’re used. This makes them formalized, accepted, AND useful. This is important to understand.

If it’s formalized, it means the language designers deem their usage acceptable, AND expected.

If it’s accepted, it means others will use them, incorporate them into their projects, libraries, and training & writing materials. This also means it’s then promoted amongst the community, solutions and coding strategies/solutions are built around it. This doesn’t make it inherently “good”, just accepted in that particular community.

If it’s useful, the majority of people whose job is using that language will use it, not just hobbyists or those who continue to explore the language and try different ways to solve similar problems off their employer’s clock.

In Lua, globals are all 3. Formalized via _G, very similar to ActionScript 1’s _global; accepted in that people do in fact use them (unknowingly a lot, I’d wager); and useful since the syntax to utilize globals is terse, a Lua hallmark. Notice both have the preceding underscore to ensure they don’t interfere with the developer’s potential to create a variable called G or global. While ActionScript 1 had some security implementations to prevent certain loaded SWFs from writing/modifying existing definitions, it had the same mechanism for storage, first via _levels on a per-SWF basis, and later a formalized _global. Lua takes it a step further with a series of facilities to change “which” globals you’re using, etc. It’s also used in package naming strategies, OOP, etc.

As a globally accessible table, access to _G is fast. Fast is good in game development. You want to be part of the solution, not part of the problem.
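A quick sketch of what that formalization looks like in practice: a bare global and its `_G` entry are the same table slot, and `_G` can be inspected like any other table.

```lua
-- A plain assignment and an explicit _G write hit the same slot.
score = 0          -- equivalent to _G.score = 0
_G.score = 10
print(score)       -- 10

-- _G is just a table, so globals can be enumerated like any other table.
local count = 0
for name, value in pairs(_G) do
  count = count + 1
end
print(count > 0)   -- true
```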

Global Variables: The Bad

This differs from traditional software development. If you come from any formal software development background, globals are considered extremely bad practice. There is a common strategy of using the Singleton design pattern to “get away with it”. Even those are considered bad practice when overused. In fact, Singletons are often created merely because the language doesn’t offer a formalized way of doing global variables (like ActionScript 3 vs. ActionScript 1).

Why? The 3 main reasons are application state coupling, access control, and preventing testability.

Application State Coupling

Application State Coupling refers to the state of your application being dependent on, usually, a series of global variables. I won’t get into the whys, but MVC and its MVP derivatives (Passive View, Supervising Controller, Presentation Model) have solved most of the state problems. They keep visual state in the Views (or anything that’s a GUI that you can see), and application logic & state in the Model layer somewhere. Your application is no longer coupled to a single variable being a specific value for your application to run, to exhibit a certain bug, etc. Each piece can (usually, hehe) be run independently, and if there is a problem, you know where to look, or at least where NOT to look.

An example: if your player is dying too quickly, you know the problem isn’t in the player code, or the controller code, but rather somewhere in the model where his life state is being accessed/set/updated. If you used a global, it’d probably be set in the player itself, or perhaps by an enemy bullet that hit him, etc. No clue where to look.

Another side effect is adding new global variables. This leads to certain parts of your application expecting the globals to be a certain value at certain times. If they aren’t, things don’t work. Because they are set in many different places, it becomes extremely time-consuming to debug and track down, because you don’t necessarily know who is expecting what to be set to what, and why. This is more caused by access control, however…

Access Control

Access Control refers to who can access your data, and how. In Flash & Flex apps that are smaller, we’ll usually use the server as access control. Meaning, while we provide a GUI to allow people to create, edit, and delete users from a database for example, the actual PHP/Ruby/Python/Java on the server is the access control. No matter what we do in the Flex/Flash, we can be assured the server (usually… *sigh*) will ensure we don’t delete people we’re not allowed to, delete users en masse, put illegal characters into their names, etc. Basically anything that would corrupt the data and screw up the GUI.

In Flex/Flash apps, and nowadays in larger JavaScript web applications, you have a notion of “application logic” or “application state”. This is because this state is no longer on the server. It used to be with some web applications, usually in the session. When you add something to your cart, it’s not stored on the page much beyond showing you a slight graphical change. Once you change pages, that JavaScript & HTML page state is gone and replaced with a new one… but it remembers what you added. It does this because the state is saved on the server, since the client needs to be stateless in case the user refreshes the page or goes to another one.

We don’t have that problem as much in Flash/Flex apps. We don’t go to pages so much as to new “Views”. These views, or graphical representations of a GUI that have internal state, are still in the same SWF embedded in the same page in a browser that doesn’t go anywhere (same thing for an AIR app on the desktop or on a mobile device). Therefore, someone has to be responsible for holding the data in a variable or series of variables, and updating it when the server gives you new data.

These are usually classes called “Models”, and they’ll usually mirror their server-side equivalents in terms of CRUD operations: create, read, update, and delete. Where they differ is how much of the server-side provided data is parsed, error handling, data access control, and how that server-side data is set internally.

Suffice to say, good Model classes are encapsulated, ensure application logic is accessed by a public API that ensures security, don’t allow the internal data to be corrupted, and do all this within reason. An example is our Player’s health. If every enemy collision and bullet out there has access to the player’s health, we have no easy way to track down who may be potentially doing the math wrong on calculating the new hit-points and setting them to invalid values, or thinking the player is dead when she’s not, etc. If you have a PlayerModel, you can ensure your problems regarding hit-points and letting the world know about player death are in one place, and problems with said state (how much health does the player currently have) start in that class and those who access it. Access control in this case refers to ensuring the PlayerModel only exposes methods to increment/decrement the Player’s hit-points vs. allowing them to be set to -16, or NaN/Nil, etc. You don’t even have to worry about this; you just don’t allow it to happen.

Thus, the burden on accessing and setting hit-points correctly is pushed to someone else; those who access PlayerModel. THIS is where you look for bugs, but again, you need to ensure only a set few have access to the data access layer (PlayerModel).

In short, Models centralize data access; ensure it’s correct, testable, and DRY; and prevent incorrect access errors (i.e. global.playerHitPoints = “dude, pimp!”).
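As a sketch of such a data access layer (a hypothetical PlayerModel, not from any particular framework), note the public API only allows increment/decrement, and the setters clamp and validate so invalid hit-point values simply can’t happen:

```lua
-- Hypothetical PlayerModel: the single gateway to hit-point state.
local PlayerModel = {}
PlayerModel.__index = PlayerModel

function PlayerModel.new(maxHitPoints)
  local self = setmetatable({}, PlayerModel)
  self._max = maxHitPoints
  self._hitPoints = maxHitPoints
  return self
end

-- Increment/decrement only; no way to set hit points to -16 or "dude, pimp!".
function PlayerModel:damage(amount)
  assert(type(amount) == "number" and amount >= 0, "invalid damage amount")
  self._hitPoints = math.max(0, self._hitPoints - amount)
end

function PlayerModel:heal(amount)
  assert(type(amount) == "number" and amount >= 0, "invalid heal amount")
  self._hitPoints = math.min(self._max, self._hitPoints + amount)
end

function PlayerModel:getHitPoints() return self._hitPoints end
function PlayerModel:isDead() return self._hitPoints == 0 end

local model = PlayerModel.new(100)
model:damage(120)          -- clamped at 0, never -20
print(model:isDead())      -- true
```

Anyone misusing the Player’s health now has to go through these two methods, so that’s the only place to look when the math goes wrong.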

Preventing Testability

This third problem isn’t necessarily about unit-testing and TDD (Test Driven Development), but more about just ripping the class out, putting it in a simple main.lua, and seeing if it works. If it has dependencies, they get dragged with it… and you have to instantiate and set up those guys. Suddenly, quickly confirming whether some class or method works becomes a time-consuming, painful process. If you can’t test something in isolation, you’ll have huge problems down the road isolating problems in a timely fashion.

Globals by their very nature aren’t testable because they’re variables, not encapsulated classes. While Lua has some neat mechanisms to encapsulate globals in certain contexts, somewhere, someone is setting a global to some value in their own way. If a class depends on this global, that global must be set the same way before you test the class that relies on it. Since you may not know who set the global a certain way, that global may not be the same in your test environment vs. in the game proper.

Global Conclusions

While globals in Lua are fast, formalized, and accepted, you must be aware of the rules so you can break them with confidence. First, keep in mind if you want to get something done, premature optimization is the root of all evil. With that in mind, ask yourself the quick question: Does creating a data access control class/layer around your globals to ensure it’s DRY and secure add so much overhead that the speed makes your game un-playable?

I’ve found this not to be the case in my implementations in Lua and ActionScript. They run fine, are encapsulated, and succeed in good access control. Thus, while globals are accepted in Lua, implementing a good coding practice via a data access layer does not adversely affect my performance while increasing my productivity in DRY’er code that’s easier to test with good encapsulation.

Messaging Dependencies

We’ve seen how globals can work to allow multiple Views to talk to each other directly or indirectly via variables they all reference, or even methods/tables. We’ve also seen how this leads to bad things like harder-to-find bugs and spaghetti code (you change 1 thing that adversely affects something else seemingly unrelated), as well as making refactoring more challenging than it needs to be.

One way to allow encapsulated objects to communicate is through messaging systems, also known as notifications or events (we’ll just call them events because that’s what ActionScript and Corona use). When something happens inside the object, such as a user clicking something, or some internal property changing, it’ll emit an event. There are 3 things we’re concerned with: local events, global events, and event context.

A local event is an event issued from an object/table. In ActionScript, anything that extends EventDispatcher, including DisplayObjects (things you can see), can emit events. In Lua, only DisplayObjects can; you have to build your own for regular tables as Lua doesn’t have an event system. So, if I create a wrapper table in Lua, I can then register for events from that table, and later when it emits them, I’ll hear them. Those are local events; events dispatched from an object/table directly. Although event objects/tables have the target property on them so I know where they come from, USUALLY event responders are not that encapsulated, and make the assumption that the event is coming from a specific object/table.

A global event is one that’s received, but we have no clue where it came from. At this point, we’re inspecting the event object itself to find context. If it’s a collision, then who collided with whom? If it’s a touch event, then what was touched, and how? Global messaging systems are used for higher-level events and concerns. While you usually won’t have global events for touches because you’re mainly concerned with handling the touch event locally, a pause event, for example, could have been issued from anywhere, and while you don’t care who issued it, you’re more than capable of handling it (stopping your game loop, pausing your player’s movement, etc).

To properly handle an event, we need context. Context is basically the data the event, or message, has stored internally so when the handler gets it, he knows how to handle it. If the event is a touch event, cool, start moving. If it’s a touch release event, cool, stop moving. If it’s a collision, is it a bullet? If so, take less damage than say a missile hitting. Is the bullet’s velocity higher? If so, take more damage than usual. This context is what allows the encapsulation to happen. You issue a message, provide context, and whoever gets it doesn’t necessarily HAVE to know where it came from; they’ll have enough information to know how to handle it.

For Corona DisplayObjects, this is easy. You just dispatch an event. But for Model/data access objects, not so much. You don’t want the overhead of a DisplayObject just to send messages, so you build your own. This implies another class/table with the capability of sending/receiving messages. This means your classes will have a dependency on this object. For example, if I want to test my Model class in a brand new main.lua, it’ll bring along/require the messaging class. This is a dependency. One of the threads (just like OOP, encapsulation, keeping things DRY, don’t optimize too early, etc) you want to keep in the back of your head is to keep dependencies low. You don’t want a class to have too many dependencies. Like globals, dependencies can cause spaghetti code, hard-to-test & debug situations, and overall just hard-to-manage code.

Corona makes the event dispatching dependency invisible with DisplayObjects because it’s built in. Not so for regular tables. Somehow, you have to put this dependency in every class. You can either create a public setter for it, and pass in an instance, or just reference a class globally in a base class. Both are a dependency, the latter just requires a lot less work and code.
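A sketch of what that built-in mechanism might look like: a small dispatcher table mimicking Corona’s addEventListener/dispatchEvent API for plain tables (sketch only; no bubbling or removeEventListener shown, and Corona’s own implementation differs):

```lua
-- Minimal dispatcher for plain tables, modeled on the DisplayObject event API.
local EventDispatcher = {}
EventDispatcher.__index = EventDispatcher

function EventDispatcher.new()
  return setmetatable({ _listeners = {} }, EventDispatcher)
end

function EventDispatcher:addEventListener(eventName, listener)
  self._listeners[eventName] = self._listeners[eventName] or {}
  table.insert(self._listeners[eventName], listener)
end

function EventDispatcher:dispatchEvent(event)
  for _, listener in ipairs(self._listeners[event.name] or {}) do
    listener(event)   -- the event table itself carries the context
  end
end

-- Usage: a model emits an event; the handler never needs to know the sender.
local model = EventDispatcher.new()
local heard
model:addEventListener("healthChanged", function(event)
  heard = event.hitPoints
end)
model:dispatchEvent({ name = "healthChanged", hitPoints = 70 })
print(heard) -- 70
```

A Model class would either hold one of these or borrow its methods via a base table, which is exactly the dependency trade-off discussed above.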

Downside? It’s a dependency. You also force this implementation on everyone who wishes to utilize your class. If you model the messaging system on Corona’s events, 2 things happen. First, people using Corona get it, because it’s the exact same API and functionality (excluding bubbling) that developers are used to. Secondly, it’s interchangeable and works with Corona DisplayObjects. The best solution would be Dependency Injection/Inversion of Control… but I haven’t figured out how to do this in Lua yet, and I’m not sure Lua supports metadata annotations. Considering it’s dynamic, you CAN inject things like this at runtime, but someone, somewhere has to do so. Why do so when every object requiring messaging needs the same mechanism? Thus, the pragmatic thing is to build it in.

Also, more importantly, DOES this messaging system add more performance overhead than using simple globals? And to make the question harder to answer: could you couple your code more tightly and still get things done?

It’s been my experience, if you make things easier to use with less coupling, you’re more productive in the last 10% of your project. While coupling makes things easy in the beginning, it’s a farce; it has more than an exponential cost at the end.

Besides, you can always increase coupling easily; it’s removing coupling later that’s hard. If you need to optimize, great, you can. If not, great, no more work to do; you can instead focus on features. Usually, though, if you do iterative builds, you’ll know pretty quickly whether a messaging system is making a huge performance impact on your game or not. The only surprises you’ll get are when you spend days on something without a build. Do a small test, see if there is a noticeable difference. Sometimes you’ll have problems you won’t find till your game gets to a specific size, which is unfortunate. An example is a charting component that works great building up to 5000 data points, but suddenly DRASTICALLY slows down beyond that.

These problems aren’t the fault of the API. If you take a step back, the “30,000ft view”, usually it’s the implementation that’s at fault, not the code itself.

For example, a little history lesson. Back when ActionScript 1 development was increasing in scope, we went through 4 different messaging systems. The first was the built-in one for Mouse and Keyboard, etc. It was only partially available; there wasn’t a MovieClip one. So, we used ASBroadcaster, an undocumented one inside of Flash Player. It had a known bug with removing a listener during a dispatch, among other things. Then Bokel released a version on top of it that fixed those issues. Then Adobe created EventDispatcher, mirroring the ECMA one. They then built this into Flash Player for Flash Player 9’s new virtual machine.

There were others that came out after ActionScript 3 was born. PureMVC had notifications in it for an easier port to other platforms & languages. Robert Penner created Signals for a light-weight, object poolable, quick to develop in C#-esque messaging system.

As you can see, even though the bottleneck for most larger Flash & Flex 1.x applications at the time was EventDispatcher, and even after it was built into the runtime in C, developers opted for slower systems they built themselves to solve different problems.

So why the continuous talk about performance in this section when I rail against premature optimization? Because messaging is the core of any app or game. The choice you make has the largest impact on what you’re building, both from a performance and from a development perspective. Yes, you can just use direct access to objects, or callbacks, but that’s not very encapsulated, nor DRY, and is a pain to refactor later if you need to. Messages via Events are more encapsulated and less coupled, but have a performance impact. Additionally, only DisplayObjects use the internal C implementation; your own uses interpreted Lua with whatever JIT’ing/machine/voodoo conversion Corona does.

Traditionally, it’s easy to switch to events from callbacks, but not the other way around. And while the performance impact of utilizing a DisplayObject just for its built-in event messaging isn’t that large, based on some benchmarks it isn’t a good foundation to build upon.


Mediation

The third option is using the Mediator pattern. Once you have 2 encapsulated objects that need to talk to each other, you utilize a class that handles the communication. It takes the message from ViewA and sends it to ViewB. When ViewB wishes to talk to ViewA, it sends the message and the Mediator relays it.

All the MVC/MVP articles go over A LOT about the different things a View should/should not do, particularly with its needs for data and how to show it. Regardless of the implementation, it’s generally understood that someone such as a Mediator or a Presenter handles the responses from a View. If it’s a Presenter, it’s a dependency in the View, and the View calls methods on it. This allows the actual logic behind responding to View user gestures (like when I touch a button on the phone) to not muddy up the View (muddy up means TONS of code that has nothing to do with displaying graphics, putting things into groups, setting up Sprite Sheets, etc), and allows you to test the behavior in isolation.

Mediators are similar, although the View has no idea it has a Mediator, and the Mediator has the View reference passed into it by someone else. Again, more encapsulation, yet SOMEONE has to have the burden of setting up this relationship. In ActionScript this is easy; Robotlegs just listens for the Event.ADDED_TO_STAGE/REMOVED_FROM_STAGE events and creates/destroys based on an internal hash/table. In Lua, you don’t have any such DisplayObject events, so you have to do it manually.
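That manual wiring can be sketched like this (all names hypothetical); note the view never references its mediator, and whoever creates both does the hookup:

```lua
-- Hypothetical mediator: relays model changes to a view that never has to
-- know the mediator exists. Wiring is done manually by the creator of both.
local HealthBarMediator = {}
HealthBarMediator.__index = HealthBarMediator

function HealthBarMediator.new(view, model)
  local self = setmetatable({ view = view, model = model }, HealthBarMediator)
  -- Push current state immediately, then keep listening for updates.
  view.setHealth(model.hitPoints)
  model.onHealthChanged = function(hitPoints)
    view.setHealth(hitPoints)
  end
  return self
end

-- Stub view and model, just to show the wiring:
local shown
local view = { setHealth = function(hp) shown = hp end }
local model = { hitPoints = 100 }
HealthBarMediator.new(view, model)
model.onHealthChanged(42)
print(shown) -- 42
```

Swapping out the HealthBar view means touching only this mediator, not the model or any other view.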

Either way, if you DON’T mediate your Views, you’ll eventually have a lot of code in there that has to “know” about other objects. This’ll be either a global reference or a dependency… and we’ve already talked about why those things are bad. Additionally, tons of code in general is bad; having a View just focus on graphical things and emitting events people outside would care about, while the logic is put elsewhere, makes it easier to manage. When you open a View class, you know pretty quickly that what you’re looking at is just GUI-specific code.

There’s another more important aspect of Mediation that is similar to messaging and that is system events that affect Application Logic.

For example, many things in a game care about a Player’s health.

  • the Sprite that represents the player; it needs to show different graphics for how much health it has
  • a health bar at the top right of the screen that fills up with green the more health the player has
  • sounds that play when the player loses and gains health

If you use globals, you’d have all 3 handled by the actual Player sprite class. It’d update its internal health and update the global variables. Those who have references to the globals would be informed when that value changes. Additionally, you’ve now wired them together if you do this. If you change one, you’ll break the other.

Using a Mediator allows:

  • the player and the health bar both the capability to react to the change in hit points in an encapsulated way. If you change how the Player and HealthBar look/work, the actual logic on how you do that is centralized to either them or their Mediators… and usually Mediators are like 1 line of actual code to change.
  • The application logic of how hit points are updated and changed to be done in 1 place. As your game grows and different enemies need to do different types of damage to your player, and sometimes react differently depending on how much health the player has, this is all done and updated in 1 place. It’s DRY, and easy to find where the logic bugs are.
  • A side effect of this is… the Mediator pattern (lolz). You give Views the capability of talking to each other without tight coupling.

The most important feature is solving the classic race condition of whether the data is ready for a View when it’s created, or whether it’s null and the View has to wait. Using Mediators, you don’t have this problem. In the onRegister, you just set it if you have it on whatever Model(s) you need, else just wait for the Model(s) to be updated and inform the view.

…I wouldn’t say totally solved; handling “null” or “nil” is still a challenge for developers even in simple Views. Those are good problems to have, though, vs. race conditions.

If you enter the optimization phase of your game and want to remove events, use callbacks, and have hard-wired references, that’s fine if benchmarks truly identify that you need the performance gains. Usually, though, Mediator communication isn’t your bottleneck; it’s collisions, lack of object pooling, and how messages are handled.

Quickie on Commands

Commands are just formalized Controller logic. In fact, in other frameworks, they’re simply Controller classes with methods. They’re called “Commands” because they do 1 thing… and can potentially undo it. If you’ve ever used an Adobe Product, a few of them like Dreamweaver, Fireworks, Flash, and Photoshop will have a Command list, also called “History Panel”. In Flash and Fireworks, you can even get the code for these Commands. The line blurs here once you lump them all together, but in my experience, good Controllers have 2 things in common:

  1. They contain the only code in the application that updates the Models (update/delete in CRUD for internal data)
  2. They contain all application logic… or at least share the burden with the Models.

For #1, this is important in tracking down bugs and keeping your code DRY. You always know who’s changing your Player’s hit points, who’s actually applying your level up rewards, etc. If you can test your Models in isolation, and they’re good… then you know who to blame. This is good because again, they tend to often start by being 1 line functions with the potential to grow to 30 to 60… not a bad way to start life, nor grow. Small, readable code.

For #2, whether you have a Controller class function or a Command, SOMEONE in your application needs to “load the level data” from your level editor. SOMEONE needs to handle the fact that your player just drank a health potion, but she’s also wearing a Ring of Health Boost + 2, and this positively affects the effects of the health potions she drinks. SOMEONE needs to handle calling the loadSaveGame service, ensuring it worked, and updating all the Models with their relevant Memento data.

These orchestrators pull everything together, read multiple Models, and “make decisions”; they’re the brains of your application. For performance reasons, a lot of Controller logic is often done directly in game views, even if it just references a Controller class to ensure the logic is somewhat DRY.
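As a sketch (hypothetical names throughout), here’s the health-potion rule as a command: the one place the “drink potion plus ring bonus” logic lives, so no View ever duplicates the math:

```lua
-- Hypothetical command: the single home of the "drink health potion" rule.
local DrinkHealthPotionCommand = {}

function DrinkHealthPotionCommand.execute(playerModel, potion)
  -- Application logic lives here, not in the sprite or the HUD.
  local bonus = playerModel.ringOfHealthBoost or 0
  local amount = potion.healAmount + bonus
  playerModel.hitPoints = math.min(playerModel.maxHitPoints,
                                   playerModel.hitPoints + amount)
end

-- Usage with a stub model (a real one would guard its state as shown earlier):
local player = { hitPoints = 50, maxHitPoints = 100, ringOfHealthBoost = 2 }
DrinkHealthPotionCommand.execute(player, { healAmount = 25 })
print(player.hitPoints) -- 77
```

If level-up rewards ever apply hit points wrong, this command is the only suspect.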

There’s a lot of loathing of Commands in the community by those with a more pragmatic bent, or just on smaller code bases with shorter deadlines. The overhead associated with them completely negates their perceived value when you can just call a centralized Controller logic function via… a function call, vs. some event that magically spawns some other class that has 1 function with a bunch of injected dependencies. It just depends on whether you read that last sentence and go “Makes sense to me, don’t see what the issue with it is…” or “w…t…f… why!?”.


Remember, the above is all complete bs. If you or your team have a way of building games that works for you, great. This is just a helpful tool I’ve found in Flash & Flex application development, and it seems to help me in Corona games, specifically around the GUI/HUD portions. I still have globals in my collision routines for speed purposes, hehe.

Additionally, it’s a great teaching tool, too. Joel Hooks got similar flak, like I did, for using PureMVC when he was first starting out in Objective-C for iPhone. Providing a comfortable & familiar framework of development really helped him learn and have context for how the new language & platform handle certain concerns; it’s how he shipped his first app on the App Store. Same with me in Corona… and game development.

Finally, this isn’t an all-or-nothing approach. You can just use it on the parts you need, or perhaps just to learn. I find it’s helped me learn a lot about Lua and Corona, as well as given me the flexibility to change the API of my Player and Enemies without affecting how the GUI around the game works. In the true spirit of total disclosure, I’ll report on any adverse overhead if I find it.


I recently did a complete rewrite of my graph-based A* pathfinder example because I received a lot of questions on how to implement path-finding using the new ds library. So here is the updated version which works with ds 1.32:

I’m transforming the point cloud with delaunay triangulation into a graph structure. Then the system computes and draws the shortest path between two selected points.

Compile instructions

Running and examining the example is really easy:

  1. Use the automated installer to install haXe from
  2. Download the latest haXe nightly build and overwrite the existing ‘haxe.exe’ and ‘std’ folder with the downloaded version.
  3. Install the polygonal library by opening the command prompt and typing:
    haxelib install polygonal.

Sources should now be in {haxe_install_dir}/lib/polygonal/1,18/src/impl/sandbox/ds/astar, where {haxe_install_dir} is usually C:/Motion-Twin/haxe on Win7.
The demo can be compiled with:
cd C:\Motion-Twin\haxe\lib\polygonal\1,18\build
haxe.exe compile-ds-examples.hxml

Extending the Graph class

You basically have two options to extend the functionality of the Graph object: composition or inheritance. While I highly recommend using composition whenever possible, I’ve also included a version using inheritance – just so you see the difference.

The composition version looks like this:
astar using composition
The Graph object manages GraphNode objects, and each GraphNode holds a Waypoint object, which defines the world position of the waypoint as well as intermediate data used by the A* algorithm. Notice that GraphNode and Waypoint are cross-referencing each other as a Waypoint object has to query the graph for adjacent nodes. As a result, you have a clean separation between the data structure (Graph, GraphNode) and the algorithm (AStar, Waypoint) and don’t need object casting, which is good news because casting is a rather slow operation.

Now let’s look at the version using inheritance:
astar using inheritance
Here, Waypoint directly subclasses GraphNode. Since the Graph is defined to work with GraphNode objects, we need a lot of (unsafe) down-casts to access the Waypoint class. Furthermore, the use of haxe.rtti.Generic will be very restricted or even impossible (implementing this marker interface generates unique classes for each type to avoid dynamic).
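The down-cast cost is easy to see in a sketch (again illustrative names, not the library’s code):

```haxe
// Sketch: with inheritance, Waypoint extends GraphNode, so graph traversal
// yields GraphNode references and every access to waypoint data needs an
// unchecked down-cast.
class GraphNode {
    public var neighbors:Array<GraphNode>;
    public function new() { neighbors = []; }
}

class Waypoint extends GraphNode {
    public var x:Float;
    public var y:Float;
    public function new(x:Float, y:Float) {
        super();
        this.x = x; this.y = y;
    }
}

class Demo {
    static function visit(node:GraphNode):Void {
        var wp:Waypoint = cast node; // down-cast on every visit - slow on AVM2
        trace(wp.x + "/" + wp.y);
    }
}
```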


At last, the vector/font rendering library is done – the HaXe sources and SWC files for ActionScript 3.0 are available on the polygonal google code project page. As a reminder – the project started as an experiment to see whether it’s possible to render fonts using the FP10 drawing API without loading or embedding any additional assets.

For obvious reasons, I can only include free fonts. At the moment the package contains Microsoft’s TrueType core fonts hosted on sourceforge, the Bitstream Vera fonts as well as the famous bitmap04 pixel fonts.

The pros and cons


  • No font embedding required :) Import a font class and you are ready to go
  • Provides high quality font rendering; best used for extra smooth text animation
  • Seriously fast!
  • Seamless integration into the FP 10 drawing API


  • An ASCII set of printable characters adds about 20kb-30kb to the swf file – I’ll try to reduce this in a future release.
  • Not very readable at small font sizes (except pixel fonts) because it does not include any hinting information for improving the quality – text remains legible down to about 12 points (viewed at 100%)
  • You need a copy of Fontographer 4.1 to convert ttf files
  • No text field functionality yet
  • Only supports ASCII characters (a latin character set like ISO-8859 is planned)


Here are the ‘MS core fonts for the web’ rendered with the font library:

MS core fonts for the web

This also works amazingly well for pixel fonts (which I didn’t expect at all):

Bitmap04 pixel font

How it works

First a .ttf file is loaded into Fontographer and exported as a postscript file (I tried other methods but stuck with this approach because postscript files are easy to understand). A parser reads this file and generates a HaXe class that contains the glyph data. The result is something like this: Arial.hx.

If using HaXe, the font_inline compiler flag gives you control over compilation time vs runtime performance. If omitted, compilation is fast so it’s best suited for frequent testing and debugging and the compiler-based auto-completion remains responsive. If compiled with -D font_inline, compilation is slow but results in the best performance.
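In practice the trade-off comes down to one `-D` line in the .hxml build file (file and class names here are placeholders):

```hxml
# day-to-day build: fast compile, responsive auto-completion
-main Main
-swf test.swf
-swf-version 10

# release build: slow compile, best runtime performance
# -main Main
# -swf release.swf
# -swf-version 10
# -D font_inline
```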

The actual rendering is done by a class named VectorRenderer. It uses the FP 10 drawing API in conjunction with ‘alchemy memory’ as a temporary buffer to gain some extra speed. At first all drawing commands are written into a chunk of memory, then copied into a vector and finally sent to the screen via graphics.drawGraphicsData(…). Depending on the CPU this is roughly 1.5-4x faster than using only a vector. Note that this only accelerates the process of preparing the data, not the rendering itself (everything beyond drawGraphicsData()).

ActionScript 3.0 usage

Grab the SWC file, add it to the library and the following code should (hopefully) compile fine:

  package
  {
    import de.polygonal.ds.mem.MemoryManager;
    import flash.display.MovieClip;
    import flash.Boot;

    public class Main extends MovieClip
    {
      public function Main():void
      {
        new Boot(this);

        var vr:VectorRenderer = new VectorRenderer(512);
        vr.setLineStyle(0, 1, 0);

        var font:Arial = new Arial();
        font.bezierThreshold = 0.001;
        font.write("Hello World!", 0, 100, false);
      }
    }
  }

The source code reads like this:

  • initialize HaXe specific things
  • allocate 4 megs of alchemy memory to be on the safe side
  • create a vector renderer using a buffer size of 512kb
  • assign a line style (rgb, alpha, thickness)
  • create a font object
  • define curve smoothness, the smaller, the better (0=linear approx. using 2 segments/curve)
  • set the font size: 100 equals 72pt or one inch.
  • assign a renderer so the font can send drawing commmands to it
  • draw “Hello, World!” at the coordinates 0,100 (x,y), if the last parameter is true, the text will be centered around (x,y)
  • flush the buffer which draws everything to the screen

Glyph and text bounds

You can compute axis aligned bounding boxes for the whole text block or individual characters using the getBounds() and getIndividualBounds() methods prior to drawing the text:

Different ways of computing boundaries

Creating font classes

Converting fonts can be done using


I’ve just released the first official version of ‘ds’ (aka data structures), the successor of as3ds, which is written in the HaXe language. (The name hx3ds was used during development and is obsolete because ds is now a module of the polygonal library).

The library is found here: There is also a wiki page describing how to use it with ActionScript 3.0 (there are still some issues with the HaXe generated SWC files; I hope Nicolas finds some time to address them in a future release).

What’s new

The new release (changelog) contains many bug fixes, new features and a refined documentation. While there are still many things on my TODO list, I think the library is stable enough so it deserves to be the first major release. And since I’m using the library in all of my projects, you can expect some regular updates.

Why the heck HaXe!?

Firstly, ActionScript 3.0 doesn’t support any kind of generics/templates like Java does, for example. The typed vectors introduced in FP10 are too crippled and limited, so in the end you are still stuck with dynamic containers, which is bad for complex projects. HaXe supports type parameters – in the image below you see that I’ve created a graph of stacks of hash maps and everything is typed. The nice thing is that you still get auto-completion:

FlashDevelop HaXe AutoCompletion
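In code, the nested container from the screenshot stays typed at every level (a sketch; Graph and Stack stand in for the ds classes, and the exact construction calls may differ by version):

```haxe
// Each type parameter is preserved, so the compiler knows the element
// type at every nesting level - no casts, and auto-completion works.
class Demo {
    static function main() {
        var hash = new Hash<Int>();        // String -> Int map (Haxe std)
        hash.set("answer", 42);

        var stack = new Stack<Hash<Int>>();    // stack of typed hash maps
        stack.push(hash);

        var graph = new Graph<Stack<Hash<Int>>>(); // graph of stacks of maps
        // node creation omitted - the exact ds calls differ by version
    }
}
```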

Secondly, HaXe provides a lot of syntax sugar that sweetens your daily coding experience. As an example, consider iterating over a collection. In HaXe it’s just:

//iterate over all elements of a 2D array:
var a = new Array2<Int>(3, 3);
for (i in a) trace(i);

In AS3 you have to create an iterator object and explicitly call the next() and hasNext() methods:

var a:Array2 = new Array2(3, 3);
var itr:Object = a.iterator();
while (itr.hasNext())
{
  var element:* =;
}

In general, HaXe implicitly does a lot of things for you! Probably one of the best language features is type inference/implicit typing, which is absolutely brilliant for quick prototyping. It’s like writing AS1 but without compromising speed and type safety. At first it’s hard to get used to because all ActionScript text books never get tired of repeating how important typing is and that everything should be typed. But if you think about it, the compiler should handle it where possible and relieve the developer from writing clumsy types over and over again:

var n:Number = 1;
var i:int = 1;

var f = 1.0; //compiler infers that f is a Float
var i = 1; //compiler infers that i is an Int

Thirdly, because as3ds was all about efficiency, I couldn’t resist HaXe because it’s much faster. From my experience, ds performs ~2-6x better than as3ds. Same runtime, huge difference!


The following is a quick start guide on how to get RobotLegs working on top of the Gaia Flash Framework written by Steven Sacks.  In later articles, I’ll show how to use per-Gaia-page Contexts, a visual example, as well as cover Enterprise scenarios.  For now this should get you up and running in 10 minutes.

What is RobotLegs?

RobotLegs is an MVCS framework for Flex & pure AS3 applications.  You can use it with Flash as well.  I call it PureMVC done right.  She’s still not at version 1, but is really close (docs in progress).

What is Gaia?

The Gaia Flash Framework is the Ruby on Rails for building Flash websites.  You can code generate your entire Flash website in seconds.  It has modules, SEO, page navigation, and deep-linking all built-in, amongst other things.


Gaia is great for building websites, but it’s just a platform for you to build on, a bunch of helpful scaffolding; it doesn’t prescribe a way to build your applications.  Specifically, you have to write your own code for Business logic and Application/Domain logic.  You still have to write code to hit back-end services, parse the returning XML/JSON/AMF, and act on that data.

This is where an application framework like RobotLegs comes in.  Your domain model goes in the Model classes, your Business logic goes in the Service classes, and your Domain/Application logic goes in your Commands & Mediators.  You setup your shiz in your Context class(es), like PureMVC’s ApplicationFacade.

Gaia builds your site, and RobotLegs makes it work with dynamic data.


Before we get to the “How”, you need to know 2 requirements first.  This may scare some people off.

  1. You need both mxmlc & Flash CS4 or CS3
  2. Your application must be AS3

RobotLegs requires compiling in mxmlc because of the custom metadata tags.  Flash CS3 & CS4 do not currently support the -keep-as3-metadata mxmlc compiler parameter, thus, you must compile your RobotLeg’s implementation code using mxmlc (via Flex Builder 2, 3, Flash Builder 4, FDT, FlashDevelop, etc.).

While you can compile Gaia to an SWC in Flash CS4, and then re-compile via a Library linkage in mxmlc, this defeats the purpose of using the Gaia workflow in Flash.  I understand this is a deal killer for many automated build/continuous integration setups for Enterprise applications, so I’ll cover optional build scenarios in a future article.

Flash CS4 is required because it allows you to link to external libraries, in this case, Flex created SWC’s.  You could utilize Flash CS3, and just add a class path to the RobotLegs classes since you’ll typically only be using the interfaces in your Flash/Gaia specific code.  Both Flash CS3/CS4 will be exporting AS3 code since RobotLegs is for AS3, so you can’t use Flash 8 and export to AS2.

I currently do not have access to the Flash CS5 alpha/beta to determine if its integration with Flash Builder 4, demoed at MAX 2009, would help in this instance, nor do I know if it can -keep-as3-metadata.

Quickstart Preface

The Quickstart may appear intimidating at 15 steps.  If you know Flex Builder/FlashDevelop, Flash, Gaia, and RobotLegs, you’ll do just fine; it’s not that bad.  Additionally, you only need to do this stuff once.

The rest of your project, you’ll spend in your code editor.  You can also link the Main file in Flex Builder to get code hinting on it.  The only time you go to Flash is to do Control + Enter.



1. Setup your Gaia site.

2. Open up your main.fla in Flash.

3. In Flash CS4, go to File > Publish Settings, click the Flash tab, click the Settings button next to AS version, click the Library Path tab, and link to the RobotLegs.swc.  In Flash CS3, just add a class path to the RobotLegs source code directory.


4. Save your main.fla.

5. Create an ActionScript 3 project in Flex/Flash Builder, FDT, or FlashDevelop.  Point it to the same directory your Gaia project lives in.  I suggest changing bin-debug to bin since that’s what Gaia uses.  Although it’ll generate a SWF, it’s hereafter referred to as a “module” SWF since Gaia will load it in and use its pre-compiled classes.

6. Create your own main class with a different name (i.e. not “”), and put it next to Gaia’s.  This will be where your RobotLegs code lives.


7. Link to the RobotLegs.swc as a Library.  If you are in Flex/Flash Builder, you may wish to link to the RobotLegsLib Library project instead.  If so, I put this in Gaia’s lib folder next to the FLA’s that Gaia puts there.  The image below shows linking to the Library Project.


8. Create a “MainContext” ActionScript class where ever you’d like in your package structure.  Might I recommend something other than Gaia’s pages directory, like “”.  In this class, you register your modules, in this case, your Gaia pages that actually need Mediators.  Here’s mine:

package com.jxl.gaiarobotlegs.robotlegs.contexts
{
        import com.jxl.gaiarobotlegs.pages.IAboutPage;
        import com.jxl.gaiarobotlegs.robotlegs.mediators.AboutMediator;

        import flash.display.DisplayObjectContainer;

        import org.robotlegs.mvcs.Context;

        public class MainContext extends Context
        {
                public function MainContext(contextView:DisplayObjectContainer)
                {
                        super(contextView);
                }

                public override function startup():void
                {
                        mediatorMap.mapModule('com.jxl.gaiarobotlegs.pages::AboutPage', IAboutPage, AboutMediator);
                }
        }
}
Notice the mapModule method goes “Your Gaia Page class as a String”, “The interface the Gaia Page class and the Mediator share”, and “The RobotLegs Mediator class”.  NOTE: In older builds of RobotLegs, they are using the fully qualified class name which is ::AboutPage, not .AboutPage (more info).  I have a hacked version locally which accepts a normal package path of “pages.AboutPage” vs. “pages::AboutPage”.  Yes, I’ve asked the RobotLegs authors to fix this.

9. Create 1 more class and 1 corresponding interface: a Mediator class for whatever Gaia page you’ll be mediating, and an interface of the same name with the I prefix.  Example: If you’re creating an “About Us” page for your site, you’ll probably have an about page node in your site.xml, and thus a corresponding FLA.  Create an “IAboutUs” interface, and an “AboutUsMediator” class that implements the interface.  Your Gaia “AboutUsPage” class will also implement the “IAboutUs” interface.  This is how RobotLegs will communicate to your Gaia code via the Bridge Pattern (more info on why).

Here’s the interface:

package com.jxl.gaiarobotlegs.pages
{
        public interface IAboutPage
        {
                function setAboutText(value:String):void;
        }
}

Here’s the Mediator:

package com.jxl.gaiarobotlegs.robotlegs.mediators
{
        import com.jxl.gaiarobotlegs.pages.IAboutPage;

        import flash.events.TimerEvent;
        import flash.utils.Timer;

        import org.robotlegs.mvcs.Mediator;

        public class AboutMediator extends Mediator
        {
                [Inject]
                public var aboutPage:IAboutPage;

                private var timer:Timer;

                public function AboutMediator()
                {
                }

                public override function onRegister():void
                {
                        timer = new Timer(3 * 1000);
                        timer.addEventListener(TimerEvent.TIMER, onTick, false, 0, true);
                        timer.start();
                }

                private function onTick(event:TimerEvent):void
                {
                        timer.removeEventListener(TimerEvent.TIMER, onTick);
                        timer = null;

                        aboutPage.setAboutText("Blah blah blah,\nthis is from RobotLeg's 'AboutMediator'");
                }
        }
}

The thing to note in the above is that the Dependency Injection via the [Inject] tag uses IAboutPage vs. AboutPage; this ensures mxmlc doesn’t attempt to compile Gaia code into your module SWF.

10. Any events your Gaia About Us page will emit, put in the IAboutUs interface.  Any data your Gaia About Us page needs to have set on itself, implement as a setter or a method in the IAboutUs interface.  This’ll ensure neither your About Us page class in Gaia nor your AboutUsMediator RobotLegs class will compile until you implement those methods.  Yes, I know events in interfaces aren’t enforced, but that doesn’t mean you shouldn’t do it.

Here’s the Gaia AboutPage class:

package com.jxl.gaiarobotlegs.pages
{
        import com.gaiaframework.api.*;
        import com.gaiaframework.debug.*;
        import com.gaiaframework.templates.AbstractPage;

        import flash.display.*;
        import flash.text.TextField;

        import gs.*;

        public class AboutPage extends AbstractPage implements IAboutPage
        {
                public var copyTextField:TextField;

                public function AboutPage()
                {
                        alpha = 0;
                        //new Scaffold(this);
                }

                // called by RobotLegs's AboutPageMediator
                public function setAboutText(value:String):void
                {
                        copyTextField.text = value;
                }

                override public function transitionIn():void
                {
              , 0.3, {alpha:1, onComplete:transitionInComplete});
                }

                override public function transitionOut():void
                {
              , 0.3, {alpha:0, onComplete:transitionOutComplete});
                }
        }
}
Notice the implementation of IAboutPage.  Since Gaia FLA’s by default have “../src” set in their class path, it’ll share the same class path with your ActionScript project.  The only class it’s importing from that code, however, is the interface, which is a few hundred bytes or so once compiled into the SWF.  If you’re clever, you could use External Libraries in CS4, but that’s out of scope for this article.

11. Open up your file in your editor of choice.  First, create a mainContext class reference, like:

private var mainContext:IContext;

12. Override init and do not call super.init.  Instead, write code to load in the RobotLegs SWF that your ActionScript project creates in bin.  You can use a Loader, your own wrapper class, extend Main to abstract away these details in a base class… whatever you want.  I used a Loader for this example.  Ensure you load the classes into the same ApplicationDomain so Gaia can share and use these classes, as well as any loaded SWF’s that need them.

var loaderContext:LoaderContext = new LoaderContext(false, ApplicationDomain.currentDomain);
moduleLoader.load(new URLRequest('GaiaRobotLegs.swf'), loaderContext);

13. In your Event.COMPLETE function, snatch your MainContext class from the loaded SWF, instantiate it, and pass the stage in, and call super.init to let Gaia know you’re done, like so:

private function onComplete(event:Event):void
{
        const mainContextClassName:String = "com.jxl.gaiarobotlegs.robotlegs.contexts.MainContext";
        try
        {
                var mainContextClass:Class = moduleLoader.contentLoaderInfo.applicationDomain.getDefinition(mainContextClassName) as Class;
                mainContext = new mainContextClass(stage) as IContext;
        }
        catch (err:Error)
        {
                trace("Failed to find class: " + err);
        }
}

You use the stage so any DisplayObject added to the DisplayList will be checked to see if it has an associated Mediator.  Yes, I know OOP Purists will groan at this; don’t worry, I’ll offer a more pure solution later.  Remember, Gaia is typically used in Agencies with 2 week deadlines vs. 4 months; this is more than good enough for that crowd.

14. In your main ActionScript project class, define a dependency variable; this is strictly to ensure your RobotLegs code is compiled into the “module SWF”.  Assuming you have Build Automatically on in Flex Builder, it should make your SWF in the bin folder for you once you save.  Here’s mine:


package
{
        import com.jxl.gaiarobotlegs.robotlegs.contexts.MainContext;

        import flash.display.Sprite;

        public class GaiaRobotLegs extends Sprite
        {
                private var mainContext:MainContext;

                public function GaiaRobotLegs()
                {
                }
        }
}

15. Test your Gaia project in Flash.


As you can see, all you need to do now is write all your code in Flex Builder (or your editor of choice), and anytime you need to compile your site, you just go back to main.fla in Flash and hit Control + Enter.

You’ll also notice I create one “global” context here.  Reiterating, this is good enough for most design agencies since not every Gaia page will need a Mediator, and most Gaia pages aren’t that complex as Views anyway.

For the purists, in a future article I’ll explain how you can modify the Gaia template files and configure your Context classes via the Gaia site.xml externally.  This ensures that the details of a Gaia page registering its own Context, or simply creating a single Context + Mediator for itself, are self-contained and as strongly-typed as possible.

For those building Enterprise sized sites with automated build processes, I’ll later go into how you can integrate a Gaia site into your build process.


Just a quick thanks to Steven for helping answer my Gaia architecture questions, and to the RobotLegs crew for helping me work out the details of how it could be done.


Working with text in Flash can be a painstaking procedure, especially if you just want to quickly draw and animate some simple characters on screen using ActionScript. It usually takes many lines of code to set up and adjust the appearance and alignment of text fields, and you need to take care of other little annoying details like the whole font embedding procedure.

Lately I was working on a new interactive testbed for the motor physics engine where I’m solely using the new Flash Player 10 vector drawing API, but I needed to add some text. I was curious how hard it would be to add text rendering capabilities to the drawing API, and after some investigation I found an easy way, using an old copy of Fontographer to extract the necessary data from a TrueType font file (much easier than trying to parse a .ttf file directly):

  1. Export the font data as an EPS postscript file.
  2. Write a parser to transform each glyph into a bunch of lineTo, moveTo and curveTo commands.
  3. Export and parse the spacing and kerning table (proportional fonts only).
  4. Write a method for drawing cubic bézier curves.
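Step 4 is needed because the Flash drawing API natively supports only quadratic curves (curveTo), while outlines exported via postscript contain cubic béziers. One simple approach, sketched below in Haxe, is to flatten each cubic curve into short line segments using the standard cubic Bézier formula (a fixed-step approximation; the library appears to control quality via its bezierThreshold setting instead):

```haxe
import flash.display.Graphics;

class CubicBezier {
    // Flatten a cubic bezier with control points (x0,y0)..(x3,y3) into
    // `steps` line segments using the standard cubic formula:
    // B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3
    public static function draw(g:Graphics,
            x0:Float, y0:Float, x1:Float, y1:Float,
            x2:Float, y2:Float, x3:Float, y3:Float, steps:Int):Void {
        g.moveTo(x0, y0);
        for (i in 1...steps + 1) {
            var t = i / steps;
            var u = 1 - t;
            var x = u*u*u*x0 + 3*u*u*t*x1 + 3*u*t*t*x2 + t*t*t*x3;
            var y = u*u*u*y0 + 3*u*u*t*y1 + 3*u*t*t*y2 + t*t*t*y3;
            g.lineTo(x, y);
        }
    }
}
```

More segments give a smoother curve; adaptive subdivision (splitting the curve until each piece is flat enough) gives better quality for the same segment count.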

Following these steps I ended up with a simple text rendering engine. Here’s my result:

Monospace/Proportional font rendering (Consolas/Arial)

As you can see, small font sizes won’t be as sharp and readable as a TextField set to “Anti-alias for readability”, but on the other hand the results are very smooth and perfect for animation. Placing and aligning text blocks is also much easier since it’s straightforward to compute bounding boxes for glyphs and text blocks.

So now drawing text is a matter of writing:

graphics.beginFill(0, 1);
new Arial(10.0).print("Hello World", 100, 100);

This will draw “Hello World” at x,y=100,100 and a font size of 10 points. KISS!
An exciting thing is that the FP10 drawing API is actually very fast; the following demo scrolls all ASCII characters back and forth. Press space to toggle between regular text field and vector rendering. In the first case a TextField object is created once and then moved by adjusting its position, whereas in the second case the whole text is redrawn in every frame at a new position:

TextField vs. Graphics

If (hopefully) more people besides me find this useful, I’ll invest some extra time to polish the code and publish it as open source. It made my coding life simpler :-)


Mike Chambers posted that Flash Player 10 is officially live. This completes your 1-2 punch of RIA/game platform releases of Silverlight and Flash this week.

We have just released the shipping version of Flash Player 10 (Mac, Windows and Linux). You can find more information on all of the new features on the Flash Player product page.

You can download the player for Mac, Windows and Linux players from here.

You can grab debug and standalone players from here.

You can grab the release notes from here.

Flash Player 10 is great news. There are so many things in it, from a new data structure (Vector), to local FileReference, to Matrix and 3D helpers, to speed improvements and video enhancements like being able to play other video types and more (this was actually in a late version of Flash Player 9 as well, but will be used more here). It does take time for Flash versions to get out in the wild, about 9 months until they are in the 90%-95% range where you can convince people to use them in production, but getting those skills now is good.  The scripting platform is still ActionScript 3, so anyone still making Flash Player 8 and AS2 stuff is now two revolutions behind.

Another thing I am looking forward to soon (next week), missing from both Flash and Silverlight, is the ability to develop for the iPhone: Unity3D is dropping its iPhone kit on Oct 22nd. Unity3D has effectively taken Director’s hardware-accelerated 3D game development market lead away this year and late last year, and is a great platform. Director who?

Lots of great tools and platforms to create the innovative new applications, games and markets that are so needed right now. Go create!


Finally I found a bit of time this weekend to do some more tests with expression evaluation.

The results are pretty interesting, even if obvious from some point of view. I took the ExpressionEvaluator I wrote as an example for the first post about this topic, and then edited the code a bit, adding just-in-time AS bytecode compilation. Thanks to that, expression evaluation is much faster and always safe because it runs in its own ApplicationDomain.

You can download the sources here; they include the edited code and a test file. The test file runs 1 million iterations and may hang your browser or, at worst, your system. Reduce the value of the ITERATIONS constant if you are not sure about the power of your machine.

If you want to read a bit more details about that, click on the link below to continue reading.

I made some simple modifications to the existing classes:

  • Added a compile method to the IExpression interface;
  • Added the symbolNames getter to the SymbolTable class, to retrieve a list of all the symbols defined inside the symbol table.
  • Converted hxasm to ActionScript (fixing some small issues to avoid having to include all the Haxe support classes);
  • Implemented the compile method on all the AST nodes using HxASM;
  • Added an SWFCompile class that takes an IExpression and a SymbolTable and produces an SWF;
  • Edited the CompiledExpression class adding a compile method that generates an SWF starting from an IExpression.

Adding just-in-time compilation was easy and really quick thanks to HxASM. HxASM is a Haxe library written by Haxe's author (Nicolas Cannasse) that lets you write low-level bytecode instructions in ActionScript and generate a ByteArray that contains a (hopefully) valid SWF. The SWF can then be loaded by Loader.loadBytes and executed.
It was easy to port HxASM to ActionScript too, thanks again to the Haxe compiler, which is able to generate AS3 code starting from Haxe code.

Compilation was achieved easily because the expressions we are compiling are really simple. The compiler generates a class called CompiledExpression that has an execute method that can be called to execute the expression. All the values inside the symbol table are exposed as public properties.

To make this clearer, given this expression:

sin( x * 10 + y - 11 )

SWFCompiler will generate the bytecode for this class:

class CompiledExpression
{
  public var sin: *;
  public var x: *;
  public var y: *;
  public function execute(): Number
  {
    return sin( x * 10 + y - 11 );
  }
}

As you can see the type of the class variables is generic, but you could improve the generator to infer whether a variable should contain a number or a function.

The generated SWF can be loaded and run easily:

var loader: Loader = new Loader();
var parser: Parser = new Parser(
  new Scanner( "sin( x / ( 4 / 2 * 2 - 2 + 2 * x / x ) ) * 100" ) );

var compiled: CompiledExpression = parser.parse();
var data: ByteArray = compiled.compile();

loader.contentLoaderInfo.addEventListener( Event.COMPLETE,
  function( event: Event ): void
  {
    var info: LoaderInfo = ( as LoaderInfo );
    var klass: Class = (
      info.applicationDomain.getDefinition( "CompiledExpression" ) as Class );

    var cp: Object = new klass();

    cp.x = 10;
    cp.sin = Math.sin;

    trace( "Result", cp.execute() );

  } );

loader.loadBytes( data, new LoaderContext( false,
  new ApplicationDomain( ApplicationDomain.currentDomain ) ) );

I built a simple test (shipped with the sources) that runs an expression 1 million times using 3 methods: Evaluation, Compiled and Native.

Results (on my Mac Pro 8 Core, 4GB RAM) are obvious:

eval 1000000 iterations: 12939
compiled 1000000 iterations: 848
native 1000000 iterations: 371

As you can see native execution is fastest, but the compiled version runs really well (you might want to optimize the generated code to make it run even faster) and is roughly 15x faster than AST evaluation.

If you want to run a complex expression many times and don't want to stress your Flash plugin, use the compiled version.

Gotta run, see you next time!


Two Directory Source Control is a work flow where you utilize one directory from source control, one directory that you work out of, and you then shuttle between them at your convenience. In this article, I’ll describe how traditional work flows work, the problems with them, and how you can use the Two Directory option to overcome them. This article assumes you know what source control is, what a merging tool is, and the value they bring to your work flow.

Traditional Source Control Work Flows

Many tools such as Flex Builder, Flash, and Visual Studio have plug-ins that allow integration with source control such as Subversion, CVS, Perforce, and Visual SourceSafe. These allow you to check out code & assets, edit them, and check your changes back into source control directly from your IDE of choice. Tools such as Subclipse, a plug-in for Eclipse, allow you to check out & check in code from an SVN repo directly into your project. It also has nice built-in "diffing", the ability to show 2 pieces of code side by side so you can see what's changed, and merge them if you so desire.

The reality, however, is that these work flows aren’t as easy as the marketing makes it sound. The main flaw they all have is conflict resolution.

Conflict Resolution

Conflicts are when the code you have on your machine differs from what is in source control, whether because the file in source control is newer, someone already has the file checked out in Visual SourceSafe, or a folder/package structure has been re-arranged, leaving your structure out of sync. My main gripe with the tools I've used is that they do not do an adequate job of making conflict resolution easy. I define easy as: simple to understand what the conflict is, how to resolve it, and not interrupting your current work by forcing you to deal with the conflict immediately. To expound on that last part: I agree that you must deal with conflicts when checking in code, but a conflict should not break your current build until you resolve it, nor should it prevent you from working.

There are 2 scenarios where this irritation shows. The first is where you check out the latest code from SVN for your project. You then make your Flex Project in that directory. Assuming you've figured out how to prevent TortoiseSVN from copying .svn folders into your bin directories, you download the latest and get to work. How long you work, however, depends on how often your team checks in code. Even if you are working on different portions of the code, if someone checks in code that relates to yours, you'll have to deal with the conflict. Typically, you'll do an update first, and once you've ensured none of the code you are committing needs to be merged via resolved conflicts, you commit your changes.

As your team size grows, however, many people will be checking in code. Since dealing with conflicts isn't fun, teams will sometimes check in more often. This results in behavior reinforcement for others to check in sooner as well. While I'm a firm believer that having something compiling sooner rather than later is a great Agile practice for reducing risk, it's hard to really make headway on your code if you're focused on getting code compiling quickly so you can check in vs. making progress on your code.

It may seem subtle, but it's a major attitude adjustment that I perceive as harmful for developers to have. They should enjoy coding their masterpieces, and assuming they are communicating with others and/or their team lead, they should be focusing. A developer's focus is a very important commodity that shouldn't be taken lightly. Instead of coding by positive reinforcement, "I can't wait to check this in for others to see!", they are instead coding by negative reinforcement, "I need to hurry up and check in what I have working so I don't have to deal with a confusing merge operation that'll slow me and my team down".

Eclipse combined with Subclipse has some good merging and history management tools. Eclipse has its own local history where it stores multiple versions of your files; you can revert if you need to, and it’ll provide a diffing tool right there for you. My first problem is that the diffing tool sucks. Subclipse is not much better. I last used it in February of 2007; if it has improved since then, let me know. This wouldn’t be a big deal, except when it’s 5:59 pm on a Friday evening and you really, really want to go home. Unless you resolve the conflict, your build will be broken, and you cannot commit until you do so. Suddenly, a good diff tool becomes very important. It’s really frustrating for a developer, and removes motivation to work later. To compound it, most developers I know refuse to leave until their code is compiling. Suddenly an update interferes with this.

The second scenario, which is worse than the above, is package re-factoring. This is when you change the directory of a file, or a whole folder structure. If the change is minor enough, you can usually get by, provided it’s not a directory you were working in. If it is, AND files have changed, this can be a really long and frustrating process… all the while your code no longer works. You cannot compile JUST because you wanted to do the right thing and check your code in. This is also where you can truly tell whether a diffing tool is good or not: how it helps you resolve folder changes and their relations to files.

One solution to this is to check your code into a branch and merge in later when it is convenient for you.
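With Subversion, that branch-and-merge-later approach is a couple of commands. The sketch below is hedged: the repository URL and branch name are hypothetical placeholders, and the actual `svn` calls are shown in comments so you can substitute your own paths.

```shell
# A sketch of branching so you can merge back at your convenience.
# The repository URL and branch name here are hypothetical.
REPO="http://svn.example.com/myproject"
BRANCH="$REPO/branches/my-feature"

# Create the branch server-side (cheap and instant in Subversion):
#   svn copy "$REPO/trunk" "$BRANCH" -m "Branching for feature work"
# Point your working copy at it:
#   svn switch "$BRANCH"
# Later, when it's convenient, pull trunk changes into the branch:
#   svn merge "$REPO/trunk"

echo "$BRANCH"
```

The trade-off is that the merge still has to happen eventually; you are only deferring it to a time of your choosing.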

The last problem with working directly from the repo is actually caused by a feature of a lot of source control applications: the ability to check out only the directories you are working in. For example, if your team is using Cairngorm, and all you are doing is ValueObject and Factory work, you can pretty much stay in those 2 directories and not worry about someone stepping on your toes.

…it is a false sense of security, however. The cardinal rule of source control is to only check in code that compiles. This doesn’t just mean that it compiles on your machine; it means that if you were to check out the entire build from source control, could you get it to compile? Updating just your area of interest from source control, and then committing your changes successfully in that area, in this case the "vo" and "factories" packages, does not mean you have followed this rule. If another developer working on the Cairngorm "delegates" was using your Factories, her/his code would break even though you successfully checked code in. In effect, you broke the build. At one of my older jobs, whoever broke the build had to change their IM icon to a monkey. It sounds trivial, but it really sucked when people asked you why your IM icon was a monkey…

To prevent this, you need to update the entire repo. For some repos, this is not easy to do. The one I’m working with currently is over 12 GB, with hundreds of thousands of files, both ASCII and binary. With at least 12 developers working on it at the same time, chances are high that a lot of files will be added and changed. Regardless, you only take the hit in the beginning; if you update frequently and have a good diff tool, you greatly improve your odds of ensuring your code won’t break the build when you check in. Continuous integration tools installed on the source control server itself, like CruiseControl, can automate this process and send emails with information on who checked in code, whether it compiled, and what was changed. To be fair, this isn’t a problem with a lot of independent server-side code.

Another minor point is how some source control systems handle locking files. Some allow developers to check out code and "lock" it, which prevents others from checking it in; some make the code impossible to work with because it becomes read-only. Since you have another directory under your control, away from the source folder, you won’t have this problem.

To summarize, there are a lot of problems, even with built-in IDE tools, to working directly in the repo:

  1. conflict resolution prevents you from compiling. Manager says, "Can I see the latest?" Dev replies, "No… I’m merging code."
  2. folder change conflict resolution. "Can you re-factor later?"
  3. pain avoidance code committing. "I’d better hurry up and check in what I have working so I don’t have to merge… to heck with focusing".
  4. not checking out the entire repo, committing code, and breaking the build. "Works on my box…"

Two Directory Source Control Work Flow

To reiterate, Two Directory Source Control is a work flow where you keep one directory from source control and one directory that you work out of, and you shuttle & merge between them at your convenience. I’ve been using this work flow for about 2 1/2 years now, and everyone I encounter usually finds it extremely odd that I don’t work directly in the repo, hence the need for this blog entry. I was taught this work flow by Thomas Burleson, one of the people responsible for the Universal Mind Cairngorm Extensions. Beyond the above, his additional justification at the time was that with Flex 1.5, SVN and Tomcat or JRun would do weird things together every so often, resulting in strange compilation bugs. Before Flex 2, you had to compile on the server (it was the law), so you’d set up a local web server on your box and refresh a web browser page to see your app.

This work flow solves the problems above.

The first benefit is that conflict resolution is now done at your convenience. This is a big deal. Suddenly getting the latest from the repo is not a grand affair; you can just get the latest and go back to work. You can see what has changed without having to immediately integrate it into your code base if you’re not ready to do so. You can also keep working: since the code you updated is in a totally different directory, the code you’re working on isn’t affected. About to catch a plane and lose wireless? Just update, and you can still focus on coding instead of resolving your changes with someone else when you are incapable of giving them a phone call to discuss. Finally, you can have confidence that when you update, your code will still work. That’s a good feeling.

Secondly, file conflict resolution is challenging, yet fun. Folder conflict resolution, however, is very challenging, and requires a lot of meticulous concentration. Deleting directories can delete a lot of files and folders within them, and thus tick a lot of other developers off if you screw up and check in your screw-up. So you have to be careful. Rather than updating some massive package re-factoring and preparing for a battle with Subclipse, you can instead just let Tortoise SVN do its job: you right-click and update; done. When you’re ready, and you know your version of the code works, you can merge the 2 pieces using Beyond Compare, or whatever merging tool you use. It’s a lot easier to do this when you have 2 real, physical folder directories vs. a virtual one.
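Because both directories are real folders on disk, even a plain `diff -rq` between them gives you the same mismatch list a GUI tool would. A minimal sketch, with a fabricated sample tree standing in for your real dev and local folders:

```shell
# Sketch: list mismatches between the repo checkout (dev) and your
# working copy (local) with plain diff. The sample tree is made up.
workdir=$(mktemp -d)
mkdir -p "$workdir/dev/src" "$workdir/local/src"
echo 'trace("v2");' > "$workdir/dev/src/Main.as"    # changed in the repo
echo 'trace("v1");' > "$workdir/local/src/Main.as"  # your older copy
echo 'new'          > "$workdir/dev/src/Util.as"    # only in the repo

# -r recurses; -q names the files that differ instead of printing diffs:
diff -rq "$workdir/dev" "$workdir/local" || true
```

That output tells you which files differ and which exist on only one side, which is exactly the list you walk through when merging.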

Third, this convenience extends to code committing. Rather than being motivated to commit code out of fear of merging, you can do so when you’re good and ready. Even if you wait 3 days, you know you’ll have plenty of time to merge the changes into your code and get it working, THEN check in, all from a safe, non-repo area. This confidence in your ability to merge, with the assurance that you have a nice sequestered area away from your repo, motivates you to focus on creating good code vs. creating compiling code for the mere sake of making merging easier.

Fourth, because this work flow forces you to continually get the latest code from the entire repo, you always have the latest build, not just a portion of it. This ensures that when you check in code, you know it won’t break the build, because you already have the entire build and have tested it on your local box. Typically anything wrong from there is a configuration issue, not a code-out-of-sync issue.

You need 2 things to utilize this work flow. The first is your folder structure and the second is a good merge / diffing tool.

Folder Structure

When setting up a project, you create your project folder. Inside of that, you make 2 directories called "dev" and "local". The "dev" folder is where your repo goes. This can be an SVN/CVS repo or a Perforce workspace. In the case of Subversion, I’ll usually have Tortoise check out trunk, tags, and branches here.

To say it another way, "dev" is where your source-controlled code goes, and "local" is where your copy goes. You play and work inside of local. When you are ready to merge in changes, you use a merge tool to do so.

dev and local folders
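The layout above can be sketched in a few commands. This is a minimal sketch: "myproject" is a placeholder name, and the svn checkout and seed-copy steps are shown in comments so you can swap in your own repo URL.

```shell
# Sketch of the two-directory layout; "myproject" is a placeholder.
root=$(mktemp -d)/myproject
mkdir -p "$root/dev" "$root/local"

# dev holds the source-controlled checkout, e.g. (not run here):
#   svn checkout http://svn.example.com/myproject/trunk "$root/dev"
# local is your working copy; seed it from dev once, then work there:
#   cp -R "$root/dev/." "$root/local/"

ls "$root"
```

From then on, your IDE and builds only ever point at local; dev exists purely to talk to source control.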

Setting Up Beyond Compare

For the merge tool, I use Beyond Compare. While it’s only for PC, I purchased CrossOver, a Windows emulation tool for Mac, just so I could retain the same work flow on both my PC and Mac.

Beyond Compare has a few setup steps that you only have to do once. These are:

  1. Change default keyboard shortcuts.
  2. Create a new "session" with your repo on the left, local code on the right.
  3. Show only mismatches.
  4. Make your session expand folders & expand only folders with differences.
  5. Set up Beyond Compare to filter out (aka not show) certain files and folders.
  6. Adjust the file comparison control to be Size & CRC based.
  7. Do a full refresh!
  8. Synchronize to Right.

Let’s go over these steps in detail.

1. Change Default Keyboard Shortcuts

When you get the latest code, you’ll want to shuttle it over to your local copy when you’re ready. For new or unchanged files, you’ll want to do this quickly. I find Control + Right Arrow for copying dev code over to local, and Control + Left Arrow for putting local code to dev, are the best shortcuts for this.

Go to Tools > Options, and select "Keyboard" from the bottom left. Scroll down till you see "Copy to Right". Click it and press Control + Right Arrow key. Click "Copy to Left" and press Control + Left Arrow key.

BeyondCompare keyboard shortcuts

2. Creating a new "session" to have your repo on the left, local code on the right.

The easiest way to create a new session in Beyond Compare is to go to "Session > New Empty". A session in BC is your saved settings with a name: it stores what directory you are viewing on the left, what directory you are comparing to on the right, and other comparison options. You typically create 1 session per project. I put dev, your source control repository, on the left, and local, your copy, on the right.

Dev on the left, Local on the right

3. Showing only Mismatches

Click the "Only Mismatches" button. This will filter the view to show only files and folders that aren’t the same. Blue indicates files that do not exist on the other side, red indicates files that are different, and gray indicates a file that’s older than the file on the other side.

Show only mismatches

4. Making your session expand folders & expand only folders with differences

You want to have the views automatically expand folders if they are different. To do this, you need to check some check boxes in 2 places. The first place is in the Session settings under "Session > Session Manager". For your session, you’ll see a section called "Folder Handling". Check all 3 check boxes.

The second place is in "Session > Comparison Control". You’ll see the same 3 check boxes at the bottom of the dialogue; click all 3.

Expand folders automatically and only with differences

You’ll have to do this again in another dialogue; see step #6.

5. Setting up Beyond Compare to filter (aka not show) certain files and folders

There are some files you don’t care about. For Subversion, for example, these are ".svn". For Flex Builder, this includes .actionScriptProperties, .settings, etc. For Mac users who show all files, this is .DS_Store. You can filter these files out by clicking the glasses icon, and typing in the files and folders you want to filter out, 1 per line, and clicking OK.

While this will filter out files in the view, Beyond Compare WILL copy them if you shuttle folders back and forth. I never copy folders across in Beyond Compare, only files. The exception to this rule is if you do an "Actions > Synchronize Folders", which will obey your filter settings. If weird things start happening in your code, make sure you didn’t accidentally copy a folder with a bunch of .svn folders and files in it.
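If you suspect stray .svn folders did sneak into local, a quick `find` will flush them out. The sketch below fabricates a sample tree to demonstrate; point the same commands at your real local folder.

```shell
# Sketch: find stray .svn folders that were accidentally copied into
# local, then remove them. The sample tree here is fabricated.
workdir=$(mktemp -d)
mkdir -p "$workdir/local/src/.svn" "$workdir/local/com/.svn"
touch "$workdir/local/src/.svn/entries"

# List every stray .svn directory under local first, to sanity-check:
find "$workdir/local" -type d -name .svn

# -prune stops find from descending into a directory it just matched,
# so the removal runs cleanly:
find "$workdir/local" -type d -name .svn -prune -exec rm -rf {} +
```

Run the listing step first and eyeball it; a recursive `rm -rf` against the wrong path is not a mistake you get to undo.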

6. Adjusting the file comparison control to be Size & CRC based

To ensure you get a decent difference between files, choose both Size & CRC checking for file comparisons. You can choose this radio button in "Session > Comparison Control". See the picture above.

modify file comparison and re-do folder expansion

7. Do a full refresh!

This will compare your entire folder & file structure. You can do so via "View > Full Refresh".

8. Synchronize to Right

If this is your first time shuttling files from your dev folder to your local folder, you can go "Actions > Synchronize Folders > Synchronize to Right".


That’s it, you’re ready to go! Get the latest revision, do a full refresh, merge, code. Later, get the latest revision, merge your code left, and check in with confidence! In the end, Two Directory Source Control lets you keep compiling your code while checking out the latest revision, makes it easier to resolve large re-factoring efforts, makes code committing a pleasurable endeavor, and ensures you always check out the whole repo so you can confirm it builds on your machine with your latest changes. I hope it helps you out too.

Let me digress here a minute and point out how disappointed I am in the Mac platform’s support of development tools. Tortoise SVN and Beyond Compare are fantastic tools for the PC. I have not found their equivalents for Mac. SvnX and the Tortoise equivalent for Mac really, really suck, especially when you grew up on Tortoise SVN. Same thing on the merge front; I have yet to find a good GUI-based merge tool for Mac. I’ll probably install Parallels on my new Mac and stick to the PC versions of Tortoise SVN & Beyond Compare, since I still can’t shake FlashDevelop.
