Original author: 
Ben Cherian

Image copyright isak55

In every emerging technology market, hype seems to wax and wane. One day a new technology is red hot, the next day it’s old hat. Sometimes the hype tends to pan out and concepts such as “e-commerce” become a normal way to shop. Other times the hype doesn’t meet expectations, and consumers don’t buy into paying for e-commerce using Beenz or Flooz. Apparently, Whoopi Goldberg and a slew of big name VCs ended up making a bad bet on the e-currency market in the late 1990s. Whoopi was paid in cash and shares of Flooz. At least, she wasn’t paid in Flooz alone! When investing, some bets are great and others are awful, but often, one only knows the awful ones in retrospect.

What Does “Software Defined” Mean?

In the infrastructure space, there is a growing trend of companies calling themselves “software defined (x).” Often, it’s a vendor repositioning a decades-old product. On occasion, though, it’s smart, nimble startups and wise incumbents seeing a new way of delivering infrastructure. Either way, the term “software defined” is here to stay, and there is real meaning and value behind it if you look past the hype.

There are three software defined terms that seem to be bandied around quite often: software defined networking, software defined storage, and the software defined data center. I suspect new terms will soon follow, like software defined security and software defined management. What all these “software defined” concepts really boil down to is virtualization of the underlying component, plus accessibility through a documented API to provision, operate, and manage that low-level component.

This trend started once Amazon Web Services came onto the scene and convinced the world that the data center could be abstracted into much smaller units and could be treated as disposable pieces of technology, which in turn could be priced as a utility. Vendors watched Amazon closely and saw how this could apply to the data center of the future.

Since compute was already virtualized by VMware and Xen, projects such as Eucalyptus were launched with the intention to be a “cloud controller” that would manage the virtualized servers and provision virtual machines (VMs). Virtualized storage (a.k.a. software defined storage) was a core part of the offering and projects like OpenStack Swift and Ceph showed the world that storage could be virtualized and accessed programmatically. Today, software defined networking is the new hotness and companies like Midokura, VMware/Nicira, Big Switch and Plexxi are changing the way networks are designed and automated.

The Software Defined Data Center

The software defined data center encompasses all the concepts of software defined networking, software defined storage, cloud computing, automation, management and security. Every low-level infrastructure component in a data center can be provisioned, operated, and managed through an API. Not only are there tenant-facing APIs, but also operator-facing APIs that help the operator automate tasks that were previously manual.

An infrastructure superhero might think, “With great accessibility comes great power.” The data center of the future will be the software defined data center, where every component can be accessed and manipulated through an API. The proliferation of APIs will change the way people work. Programmers who have never formatted a hard drive will be able to provision terabytes of storage. A web application developer will be able to set up complex load balancing rules without ever logging into a router. IT organizations will start automating their most mundane tasks. Eventually, beautiful applications will be created that mimic an organization’s processes and workflows and automate infrastructure management.
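
To make that concrete, here is a minimal, purely illustrative sketch of what API-driven provisioning could look like from code. The endpoint, payload fields, and token below are hypothetical, invented for this example; every real product exposes its own API.

// Hypothetical example: the endpoint, fields, and auth scheme are invented.
// The point is the shape of the workflow: provisioning storage becomes an
// HTTP call rather than a console or CLI session.
async function provisionVolume() {
  const response = await fetch('https://sddc.example.com/api/v1/volumes', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer ' + process.env.SDDC_TOKEN
    },
    // Ask the storage layer for a replicated 2 TB volume.
    body: JSON.stringify({ sizeGB: 2048, replicas: 3 })
  });
  const volume = await response.json();
  console.log('provisioned volume', volume.id);
}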

IT Organizations Will Respond and Adapt Accordingly

Of course, this means the IT organization will have to adapt. The new base level of knowledge in IT will eventually include some sort of programming knowledge. Scripting languages like Ruby and Python will soar even higher in popularity. The network administrators will become programmers. The system administrators will become programmers. During this time, DevOps (development + operations) will make serious inroads in the enterprise, and silos will be refactored, restructured or flat-out broken down.

Configuration management tools like Chef and Puppet will be the glue for the software defined data center. If done properly, the costs around delivering IT services will be lowered. “Ghosts in the system” will watch all the components (compute, storage, networking, security, etc.) and adapt to changes in real-time to increase utilization, performance, security and quality of service. Monitoring and analytics will be key to realizing this software defined future.

Big Changes in Markets Happen With Very Simple Beginnings

All this amazing innovation comes from two very simple concepts — virtualizing the underlying components and making them accessible through an API.

The IT world might look at the software defined data center and say, “This is nothing new. We’ve been doing this since the ’80s.” I disagree. What’s changed is our universal thinking about accessibility. Ten years ago, we wouldn’t have blinked if a networking product came out without an API. Today, an API is part of what we consider a 1.0 release. This thinking now pervades every component of the data center. It’s Web 2.0 thinking that shaped cloud computing, and now cloud computing is bleeding into enterprise thinking. We’re no longer constrained by the need for deep, specialized knowledge of the low-level components just to get basic access to this technology.

With well documented APIs, we have now turned the entire data center into many instruments that can be played by the IT staff (musicians). I imagine the software defined data center to be a Fantasia-like world where Mickey is the IT staff and the brooms are networking, storage, compute and security. The magic is in the coordination, cadence and rhythm of how all the pieces work together. Amazing symphonies of IT will occur in the near future and this is the reason the software defined data center is not a trend to overlook. Maybe Whoopi should take a look at this market instead.

Ben Cherian is a serial entrepreneur who loves playing in the intersection of business and technology. He’s currently the Chief Strategy Officer at Midokura, a network virtualization company. Prior to Midokura, he was the GM of Emerging Technologies at DreamHost, where he ran the cloud business unit. Prior to that, Ben ran a cloud-focused managed services company.

Original author: 
Jon Brodkin

Todd Kuehnl has been a developer for nearly 20 years and says he's tried "pretty much every language under the sun."

But it was only recently that Kuehnl discovered Go, a programming language unveiled by Google almost four years ago. Go is still a new kid on the block, but for Kuehnl, the conversion was quick. Now he says "Go is definitely by far my favorite programming language to work in." Kuehnl admitted he is "kind of a fanboy."

I'm no expert in programming, but I talked to Kuehnl because I was curious what might draw experienced coders to switch from proven languages to a brand new one (albeit one co-invented by the famous Ken Thompson, creator of Unix and the B programming language). Google itself runs some of its back-end systems on Go, no surprise for a company that designs its own servers and much of the software (right down to the operating systems) that its employees use. But why would non-Google engineers go with Go?

Read 17 remaining paragraphs | Comments

Original author: 
Stack Exchange

This Q&A is part of a weekly series of posts highlighting common questions encountered by technophiles and answered by users at Stack Exchange, a free, community-powered network of 100+ Q&A sites.

Where have all the user-defined operators gone? tjameson can't "find any procedural languages that support custom operators in the language," he writes. "There are hacks (such as macros in C++), but that's hardly the same as language support." Is there any good reason we can't write our own operators? See the original question here.

Are you experienced?

Karl Bielefeldt answers (94 votes): There are two diametrically opposed schools of thought in programming language design. One is that programmers write better code with fewer restrictions, and the other is that they write better code with more restrictions. In my opinion, the reality is that good experienced programmers flourish with fewer restrictions, but that restrictions can benefit the code quality of beginners. User-defined operators can make for very elegant code in experienced hands, and utterly awful code by a beginner. So whether your language includes them or not depends on your language designer's school of thought. Related: "How are operators saved in memory?"
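
For readers wondering what language support would actually buy: most mainstream languages offer only workarounds. As a purely illustrative sketch (mine, not from the answer), JavaScript has no user-defined operators either, so the closest idiom is a named method that reads like one:

// Sketch: simulating a vector "plus" operator with a named method,
// since JavaScript provides no way to define a real custom operator.
function Vec(x, y) {
  this.x = x;
  this.y = y;
}

Vec.prototype.plus = function (other) {
  return new Vec(this.x + other.x, this.y + other.y);
};

var v = new Vec(1, 2).plus(new Vec(3, 4)); // what we'd like to write: v1 + v2
console.log(v.x, v.y); // 4 6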

Read 4 remaining paragraphs | Comments

Original author: 
(author unknown)

We’ve all been there: that bit of JavaScript functionality that started out as just a handful of lines grows to a dozen, then two dozen, then more. Along the way, a function picks up a few more arguments; a conditional picks up a few more conditions. And then one day, the bug report comes in: something’s broken, and it’s up to us to untangle the mess.

As we ask our client-side code to take on more and more responsibilities—indeed, whole applications are living largely in the browser these days—two things are becoming clear. One, we can’t just point and click our way through testing that things are working as we expect; automated tests are key to having confidence in our code. Two, we’re probably going to have to change how we write our code in order to make it possible to write tests.

Really, we need to change how we code? Yes—because even if we know that automated tests are a good thing, most of us are probably only able to write integration tests right now. Integration tests are valuable because they focus on how the pieces of an application work together, but what they don’t do is tell us whether individual units of functionality are behaving as expected.

That’s where unit testing comes in. And we’ll have a very hard time writing unit tests until we start writing testable JavaScript.

Unit vs. integration: what’s the difference?

Writing integration tests is usually fairly straightforward: we simply write code that describes how a user interacts with our app, and what the user should expect to see as she does. Selenium is a popular tool for automating browsers. Capybara for Ruby makes it easy to talk to Selenium, and there are plenty of tools for other languages, too.

Here’s an integration test for a portion of a search app:

def test_search
  fill_in('q', :with => 'cat')
  find('.btn').click
  assert( find('#results li').has_content?('cat'), 'Search results are shown' )
  assert( page.has_no_selector?('#results li.no-results'), 'No results is not shown' )
end

Whereas an integration test is interested in a user’s interaction with an app, a unit test is narrowly focused on a small piece of code:

When I call a function with a certain input, do I receive the expected output?

Apps that are written in a traditional procedural style can be very difficult to unit test—and difficult to maintain, debug, and extend, too. But if we write our code with our future unit testing needs in mind, we will not only find that writing the tests becomes more straightforward than we might have expected, but also that we’ll simply write better code, too.

To see what I’m talking about, let’s take a look at a simple search app:

Srchr

When a user enters a search term, the app sends an XHR to the server for the corresponding data. When the server responds with the data, formatted as JSON, the app takes that data and displays it on the page, using client-side templating. A user can click on a search result to indicate that he “likes” it; when this happens, the name of the person he liked is added to the “Liked” list on the right-hand side.

A “traditional” JavaScript implementation of this app might look like this:

var tmplCache = {};

function loadTemplate (name) {
  if (!tmplCache[name]) {
    tmplCache[name] = $.get('/templates/' + name);
  }
  return tmplCache[name];
}

$(function () {

  var resultsList = $('#results');
  var liked = $('#liked');
  var pending = false;

  $('#searchForm').on('submit', function (e) {
    e.preventDefault();

    if (pending) { return; }

    var form = $(this);
    var query = $.trim( form.find('input[name="q"]').val() );

    if (!query) { return; }

    pending = true;

    $.ajax('/data/search.json', {
      data : { q: query },
      dataType : 'json',
      success : function (data) {
        loadTemplate('people-detailed.tmpl').then(function (t) {
          var tmpl = _.template(t);
          resultsList.html( tmpl({ people : data.results }) );
          pending = false;
        });
      }
    });

    $('<li>', {
      'class' : 'pending',
      html : 'Searching &hellip;'
    }).appendTo( resultsList.empty() );
  });

  resultsList.on('click', '.like', function (e) {
    e.preventDefault();
    var name = $(this).closest('li').find('h2').text();
    liked.find('.no-results').remove();
    $('<li>', { text: name }).appendTo(liked);
  });

});

My friend Adam Sontag calls this Choose Your Own Adventure code—on any given line, we might be dealing with presentation, or data, or user interaction, or application state. Who knows! It’s easy enough to write integration tests for this kind of code, but it’s hard to test individual units of functionality.

What makes it hard? Four things:

  • A general lack of structure; almost everything happens in a $(document).ready() callback, and then in anonymous functions that can’t be tested because they aren’t exposed.
  • Complex functions; if a function is more than 10 lines, like the submit handler, it’s highly likely that it’s doing too much.
  • Hidden or shared state; for example, since pending is in a closure, there’s no way to test whether the pending state is set correctly.
  • Tight coupling; for example, a $.ajax success handler shouldn’t need direct access to the DOM.

Organizing our code

The first step toward solving this is to take a less tangled approach to our code, breaking it up into a few different areas of responsibility:

  • Presentation and interaction
  • Data management and persistence
  • Overall application state
  • Setup and glue code to make the pieces work together

In the “traditional” implementation shown above, these four categories are intermingled—on one line we’re dealing with presentation, and two lines later we might be communicating with the server.

Code Lines

While we can absolutely write integration tests for this code—and we should!—writing unit tests for it is pretty difficult. In our functional tests, we can make assertions such as “when a user searches for something, she should see the appropriate results,” but we can’t get much more specific. If something goes wrong, we’ll have to track down exactly where it went wrong, and our functional tests won’t help much with that.

If we rethink how we write our code, though, we can write unit tests that will give us better insight into where things went wrong, and also help us end up with code that’s easier to reuse, maintain, and extend.

Our new code will follow a few guiding principles:

  • Represent each distinct piece of behavior as a separate object that falls into one of the four areas of responsibility and doesn’t need to know about other objects. This will help us avoid creating tangled code.
  • Support configurability, rather than hard-coding things. This will prevent us from replicating our entire HTML environment in order to write our tests.
  • Keep our objects’ methods simple and brief. This will help us keep our tests simple and our code easy to read.
  • Use constructor functions to create instances of objects. This will make it possible to create “clean” copies of each piece of code for the sake of testing.

To start with, we need to figure out how we’ll break our application into different pieces. We’ll have three pieces dedicated to presentation and interaction: the Search Form, the Search Results, and the Likes Box.

Application Views

We’ll also have a piece dedicated to fetching data from the server and a piece dedicated to gluing everything together.

Let’s start by looking at one of the simplest pieces of our application: the Likes Box. In the original version of the app, this code was responsible for updating the Likes Box:

var liked = $('#liked');

var resultsList = $('#results');

// ...

resultsList.on('click', '.like', function (e) {
  e.preventDefault();

  var name = $(this).closest('li').find('h2').text();

  liked.find( '.no-results' ).remove();

  $('<li>', { text: name }).appendTo(liked);

});

The Search Results piece is completely intertwined with the Likes Box piece and needs to know a lot about its markup. A much better and more testable approach would be to create a Likes Box object that’s responsible for manipulating the DOM related to the Likes Box:

var Likes = function (el) {
  this.el = $(el);
  return this;
};

Likes.prototype.add = function (name) {
  this.el.find('.no-results').remove();
  $('<li>', { text: name }).appendTo(this.el);
};

This code provides a constructor function that creates a new instance of a Likes Box. The instance that’s created has an .add() method, which we can use to add new results. We can write a couple of tests to prove that it works:

var ul;

setup(function(){
  ul = $('<ul><li class="no-results"></li></ul>');
});

test('constructor', function () {
  var l = new Likes(ul);
  assert(l);
});

test('adding a name', function () {
  var l = new Likes(ul);
  l.add('Brendan Eich');

  assert.equal(ul.find('li').length, 1);
  assert.equal(ul.find('li').first().html(), 'Brendan Eich');
  assert.equal(ul.find('li.no-results').length, 0);
});

Not so hard, is it? Here we’re using Mocha as the test framework, and Chai as the assertion library. Mocha provides the test and setup functions; Chai provides assert. There are plenty of other test frameworks and assertion libraries to choose from, but for the sake of an introduction, I find these two work well. You should find the one that works best for you and your project—aside from Mocha, QUnit is popular, and Intern is a new framework that shows a lot of promise.
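
The harness itself isn’t shown in this article. As a minimal sketch, assuming mocha.js and chai.js have already been loaded on a browser test page, the wiring might look like this:

// setup.js: minimal harness wiring (a sketch; the file name is arbitrary).
// Assumes mocha.js and chai.js are loaded on the test page first.
mocha.setup('tdd');        // exposes suite(), test(), setup(), teardown()
var assert = chai.assert;  // the assert() style used in the tests above

// Once the test files have been loaded, kick off the run:
// mocha.run();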

Our test code starts out by creating an element that we’ll use as the container for our Likes Box. Then, it runs two tests: one is a sanity check to make sure we can make a Likes Box; the other is a test to ensure that our .add() method has the desired effect. With these tests in place, we can safely refactor the code for our Likes Box, and be confident that we’ll know if we break anything.

Our new application code can now look like this:

var liked = new Likes('#liked');
var resultsList = $('#results');

// ...

resultsList.on('click', '.like', function (e) {
  e.preventDefault();

  var name = $(this).closest('li').find('h2').text();

  liked.add(name);
});

The Search Results piece is more complex than the Likes Box, but let’s take a stab at refactoring that, too. Just as we created an .add() method on the Likes Box, we also want to create methods for interacting with the Search Results. We’ll want a way to add new results, as well as a way to “broadcast” to the rest of the app when things happen within the Search Results—for example, when someone likes a result.

var SearchResults = function (el) {
  this.el = $(el);
  this.el.on( 'click', '.btn.like', _.bind(this._handleClick, this) );
};

SearchResults.prototype.setResults = function (results) {
  var templateRequest = $.get('people-detailed.tmpl');
  templateRequest.then( _.bind(this._populate, this, results) );
};

SearchResults.prototype._handleClick = function (evt) {
  var name = $(evt.target).closest('li.result').attr('data-name');
  $(document).trigger('like', [ name ]);
};

SearchResults.prototype._populate = function (results, tmpl) {
  var html = _.template(tmpl, { people: results });
  this.el.html(html);
};

Now, our old app code for managing the interaction between Search Results and the Likes Box could look like this:

var liked = new Likes('#liked');
var resultsList = new SearchResults('#results');

// ...

$(document).on('like', function (evt, name) {
  liked.add(name);
});

It’s much simpler and less entangled, because we’re using the document as a global message bus, and passing messages through it so individual components don’t need to know about each other. (Note that in a real app, we’d use something like Backbone or the RSVP library to manage events. We’re just triggering on document to keep things simple here.) We’re also hiding all the dirty work—such as finding the name of the person who was liked—inside the Search Results object, rather than having it muddy up our application code. The best part: we can now write tests to prove that our Search Results object works as we expect:
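
If triggering events on document feels too global, the same pattern works with a dedicated bus object. Here’s a minimal sketch using the jQuery idiom of wrapping a plain object to get .on() and .trigger(); the bus name is mine, not part of the app:

// A tiny message bus: an empty jQuery-wrapped object works as an event
// emitter, so components can share a bus instead of the whole document.
var bus = $({});

bus.on('like', function (evt, name) {
  liked.add(name);
});

bus.trigger('like', [ 'Brendan Eich' ]);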

var ul;
var data = [ /* fake data here */ ];

setup(function () {
  ul = $('<ul><li class="no-results"></li></ul>');
});

test('constructor', function () {
  var sr = new SearchResults(ul);
  assert(sr);
});

test('display received results', function () {
  var sr = new SearchResults(ul);
  sr.setResults(data);

  assert.equal(ul.find('.no-results').length, 0);
  assert.equal(ul.find('li.result').length, data.length);
  assert.equal(
    ul.find('li.result').first().attr('data-name'),
    data[0].name
  );
});

test('announce likes', function() {
  var sr = new SearchResults(ul);
  var flag;
  var spy = function () {
    flag = [].slice.call(arguments);
  };

  sr.setResults(data);
  $(document).on('like', spy);

  ul.find('li').first().find('.like.btn').click();

  assert(flag, 'event handler called');
  assert.equal(flag[1], data[0].name, 'event handler receives data' );
});

The interaction with the server is another interesting piece to consider. The original code included a direct $.ajax() request, and the callback interacted directly with the DOM:

$.ajax('/data/search.json', {
  data : { q: query },
  dataType : 'json',
  success : function( data ) {
    loadTemplate('people-detailed.tmpl').then(function(t) {
      var tmpl = _.template( t );
      resultsList.html( tmpl({ people : data.results }) );
      pending = false;
    });
  }
});

Again, this is difficult to write a unit test for, because so many different things are happening in just a few lines of code. We can restructure the data portion of our application as an object of its own:

var SearchData = function () { };

SearchData.prototype.fetch = function (query) {
  var dfd;

  if (!query) {
    dfd = $.Deferred();
    dfd.resolve([]);
    return dfd.promise();
  }

  return $.ajax( '/data/search.json', {
    data : { q: query },
    dataType : 'json'
  }).pipe(function( resp ) {
    return resp.results;
  });
};

Now, we can change our code for getting the results onto the page:

var resultsList = new SearchResults('#results');

var searchData = new SearchData();

// ...

searchData.fetch(query).then( _.bind(resultsList.setResults, resultsList) ); // bind so setResults keeps its context

Again, we’ve dramatically simplified our application code and isolated the complexity within the Search Data object, rather than letting it live in our main application code. We’ve also made our search interface testable, though there are a couple of caveats to bear in mind when testing code that interacts with the server.

The first is that we don’t want to actually interact with the server—to do so would be to reenter the world of integration tests, and because we’re responsible developers, we already have tests that ensure the server does the right thing, right? Instead, we want to “mock” the interaction with the server, which we can do using the Sinon library. The second caveat is that we should also test non-ideal paths, such as an empty query.

test('constructor', function () {
  var sd = new SearchData();
  assert(sd);
});

suite('fetch', function () {
  var xhr, requests;

  setup(function () {
    requests = [];
    xhr = sinon.useFakeXMLHttpRequest();
    xhr.onCreate = function (req) {
      requests.push(req);
    };
  });

  teardown(function () {
    xhr.restore();
  });

  test('fetches from correct URL', function () {
    var sd = new SearchData();
    sd.fetch('cat');

    assert.equal(requests[0].url, '/data/search.json?q=cat');
  });

  test('returns a promise', function () {
    var sd = new SearchData();
    var req = sd.fetch('cat');

    assert.isFunction(req.then);
  });

  test('no request if no query', function () {
    var sd = new SearchData();
    var req = sd.fetch();
    assert.equal(requests.length, 0);
  });

  test('return a promise even if no query', function () {
    var sd = new SearchData();
    var req = sd.fetch();

    assert.isFunction( req.then );
  });

  test('no query promise resolves with empty array', function () {
    var sd = new SearchData();
    var req = sd.fetch();
    var spy = sinon.spy();

    req.then(spy);

    assert.deepEqual(spy.args[0][0], []);
  });

  test('returns contents of results property of the response', function () {
    var sd = new SearchData();
    var req = sd.fetch('cat');
    var spy = sinon.spy();

    requests[0].respond(
      200, { 'Content-type': 'text/json' },
      JSON.stringify({ results: [ 1, 2, 3 ] })
    );

    req.then(spy);

    assert.deepEqual(spy.args[0][0], [ 1, 2, 3 ]);
  });
});

For the sake of brevity, I’ve left out the refactoring of the Search Form, and also simplified some of the other refactorings and tests, but you can see a finished version of the app here if you’re interested.
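
That said, purely as a sketch of where the Search Form refactoring leads (my guess at its shape, not the finished code), it could follow the same pattern as the other views: own its piece of the DOM, and broadcast a search event for the glue code to pick up:

var SearchForm = function (el) {
  this.el = $(el);
  this.el.on( 'submit', _.bind(this._handleSubmit, this) );
};

SearchForm.prototype._handleSubmit = function (e) {
  e.preventDefault();
  var query = $.trim( this.el.find('input[name="q"]').val() );

  // Announce the query on the document bus, just as SearchResults
  // announces likes; the glue code below listens for 'search'.
  if (query) {
    $(document).trigger('search', [ query ]);
  }
};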

When we’re done rewriting our application using testable JavaScript patterns, we end up with something much cleaner than what we started with:

$(function() {
  var pending = false;

  var searchForm = new SearchForm('#searchForm');
  var searchResults = new SearchResults('#results');
  var likes = new Likes('#liked');
  var searchData = new SearchData();

  $(document).on('search', function (event, query) {
    if (pending) { return; }

    pending = true;

    searchData.fetch(query).then(function (results) {
      searchResults.setResults(results);
      pending = false;
    });

    searchResults.pending();
  });

  $(document).on('like', function (evt, name) {
    likes.add(name);
  });
});

Even more important than our much cleaner application code, though, is the fact that we end up with a codebase that is thoroughly tested. That means we can safely refactor it and add to it without the fear of breaking things. We can even write new tests as we find new issues, and then write the code that makes those tests pass.

Testing makes life easier in the long run

It’s easy to look at all of this and say, “Wait, you want me to write more code to do the same job?”

The thing is, there are a few inescapable facts of life about Making Things On The Internet. You will spend time designing an approach to a problem. You will test your solution, whether by clicking around in a browser, writing automated tests, or—shudder—letting your users do your testing for you in production. You will make changes to your code, and other people will use your code. Finally: there will be bugs, no matter how many tests you write.

The thing about testing is that while it might require a bit more time at the outset, it really does save time in the long run. You’ll be patting yourself on the back the first time a test you wrote catches a bug before it finds its way into production. You’ll be grateful, too, when you have a system in place that can prove that your bug fix really does fix a bug that slips through.

Additional resources

This article just scratches the surface of JavaScript testing, but if you’d like to learn more, check out:

  • My presentation from the 2012 Full Frontal conference in Brighton, UK.
  • Grunt, a tool that helps automate the testing process and lots of other things.
  • Test-Driven JavaScript Development by Christian Johansen, the creator of the Sinon library. It is a dense but informative examination of the practice of testing JavaScript.
Original author: 
Douglas Rushkoff

Nice short movie about Present Shock by Abe Riesman of the New York Observer’s Betabeat. Here’s the piece he wrote to go with it:

Douglas Rushkoff On The Terror of Time
"Are we gonna give all our bloggers Adderall and stick ‘em in a room and tell ‘em to just churn out more shit?”

By Abraham Riesman 3/26/13

Over the course of 20 years, 15 books, and countless speeches and articles, media theorist Douglas Rushkoff has established quite a following among technophiles.

With titles like 2010’s Program or Be Programmed: Ten Commands for a Digital Age, it’s no wonder folks like Andy Weissman quoted Mr. Rushkoff to explain why Union Square Ventures led a $2.5 million investment in Codecademy, a startup that teaches anyone programming languages like Ruby and JavaScript. (Not long after, Codecademy announced Mr. Rushkoff would be joining their team as a “code literacy” evangelist.)

But while tomes like Program or Be Programmed or 1994’s Cyberia: Life in the Trenches of Cyberspace warned about the perils and responsibilities of digital citizenship, his latest book is all about time. Rather, it’s about how people are driving themselves insane by approaching time in unhealthy ways.

In Present Shock: When Everything Happens Now, which just hit shelves a few days ago, Mr. Rushkoff coins ominous terms like “digiphrenia” (the kind of disorientation you get when you’re trying to process something as fast as Twitter and something as slow as a news article in the same sitting) and “fractalnoia” (the mistakes organizations make when they try to predict major future trends using small bits of data from the recent past).

Sound a bit heady? Don’t worry. To slow things down a bit, we took Mr. Rushkoff to Sutton Clocks, a sales and repair shop for antique timepieces on the Upper East Side. There, amidst the ticks and tocks, Mr. Rushkoff told us how misunderstanding time can cripple a workforce and tempt you to make dumb decisions.

Try to rein in your digiphrenia as you watch him explain what happens to our biological clock when the digital world treats day and night interchangeably and why the 24-hour blog-cycle is a terrible idea.


I've been a Microsoft developer for decades now. I weaned myself on various flavors of home computer Microsoft Basic, and I got my first paid programming gigs in Microsoft FoxPro, Microsoft Access, and Microsoft Visual Basic. I have seen the future of programming, my friends, and it is terrible CRUD apps running on Wintel boxes!

Of course, we went on to build Stack Overflow in Microsoft .NET. That's a big reason it's still as fast as it is. So one of the most frequently asked questions after we announced Discourse was:

Why didn't you build Discourse in .NET, too?

Let me be clear about something: I love .NET. One of the greatest thrills of my professional career was getting the opportunity to place a Coding Horror sticker in the hand of Anders Hejlsberg. Pardon my inner fanboy for a moment, but oh man I still get chills. There are maybe fifty world class computer language designers on the planet. Anders is the only one of them who built Turbo Pascal and Delphi. It is thanks to Anders' expert guidance that .NET started out such a remarkably well designed language – literally what Java should have been on every conceivable level – and has continued to evolve in remarkably practical ways over the last 10 years, leveraging the strengths of other influential dynamically typed languages.


All that said, it's true that I intentionally chose not to use .NET for my next project. So you might expect to find an angry, righteous screed here about how much happier I am leaving the oppressive shackles of my Microsoft masters behind. Free at last, free at last, thank God almighty I'm free at last!

Sorry. I already wrote that post five years ago.

Like any pragmatic programmer, I pick the appropriate tool for the job at hand. And as much as I may love .NET, it would be an extraordinarily poor choice for a 100% open source project like Discourse. Why? Three reasons, mainly:

  1. The licensing. My God, the licensing. It's not so much the money, as the infernal, mind-bending tax code level complexity involved in making sure all your software is properly licensed: determining what 'level' and 'edition' you are licensed at, who is licensed to use what, which servers are licensed... wait, what? Sorry, I passed out there for a minute when I was attacked by rabid licensing weasels.

    I'm not inclined to make grand pronouncements about the future of software, but if anything kills off commercial software, let me tell you, it won't be open source software. They needn't bother. Commercial software will gleefully strangle itself to death on its own licensing terms.

  2. The friction. If you want to build truly viable open source software, you need people to contribute to your project, so that it is a living, breathing, growing thing. And unless you can download all the software you need to hack on your project freely from all over the Internet, no strings attached, there's just … too much friction.

    If Stack Overflow taught me anything, it is that we now live in a world where the next brilliant software engineer can come from anywhere on the planet. I'm talking places this ugly American programmer has never heard of, where they speak crazy nonsense moon languages I can't understand. But get this. Stand back while I blow your mind, people: these brilliant programmers still code in the same keywords we do! I know, crazy, right?

    Getting up and running with a Microsoft stack is just plain too hard for a developer in, say, Argentina, or Nepal, or Bulgaria. Open source operating systems, languages, and tool chains are the great equalizer, the basis for the next great generation of programmers all over the world who are going to help us change the world.

  3. The ecosystem. When I was at Stack Exchange we strove mightily to make as much of our infrastructure open source as we could. It was something that we made explicit in the compensation guidelines, this idea that we would all be (partially) judged by how much we could do in public, and try to leave behind as many useful, public artifacts of our work as we could. Because wasn't all of Stack Exchange itself, from the very first day, built on your Creative Commons contributions that we all share ownership of?

    You can certainly build open source software in .NET. And many do. But it never feels natural. It never feels right. Nobody accepts your patch to a core .NET class library no matter how hard you try. It always feels like you're swimming upstream, in a world of small and large businesses using .NET that really aren't interested in sharing their code with the world – probably because they know it would suck if they did, anyway. It is just not a native part of the Microsoft .NET culture to make things open source, especially not the things that suck. If you are afraid the things you share will suck, that fear will render you incapable of truly and deeply giving back. The most, uh, delightful… bit of open source communities is how they aren't afraid to let it "all hang out", so to speak.

    So as a result, for any given task in .NET you might have – if you're lucky – a choice of maybe two decent-ish libraries. Whereas in any popular open source language, you'll easily have a dozen choices for the same task. Yeah, maybe six of them will be broken, obsolete, useless, or downright crazy. But hey, even factoring in some natural open source spoilage, you're still ahead by a factor of three! A winner is you!

As I wrote five years ago:

    I'm a pragmatist. For now, I choose to live in the Microsoft universe. But that doesn't mean I'm ignorant of how the other half lives. There's always more than one way to do it, and just because I chose one particular way doesn't make it the right way – or even a particularly good way. Choosing to be provincial and insular is a sure-fire path to ignorance. Learn how the other half lives. Get to know some developers who don't live in the exact same world you do. Find out what tools they're using, and why. If, after getting your feet wet on both sides of the fence, you decide the other half is living better and you want to join them, then I bid you a fond farewell.

I no longer live in the Microsoft universe any more. Right, wrong, good, evil, that's just how it turned out for the project we wanted to build.


However, I'd also be lying if I didn't mention that I truly believe the sort of project we are building in Discourse does represent most future software. If you squint your eyes a little, I think you can see a future not too far in the distance where .NET is a specialized niche outside the mainstream.

But why Ruby? Well, the short and not very glamorous answer is that I had narrowed it down to either Python or Ruby, and my original co-founder Robin Ward has been building major Rails apps since 2006. So that clinched it.

I've always been a little intrigued by Ruby, mostly because of the absolutely gushing praise Steve Yegge had for the language way back in 2006. I've never forgotten this.

    For the most part, Ruby took Perl's string processing and Unix integration as-is, meaning the syntax is identical, and so right there, before anything else happens, you already have the Best of Perl. And that's a great start, especially if you don't take the Rest of Perl.

    But then Matz took the best of list processing from Lisp, and the best of OO from Smalltalk and other languages, and the best of iterators from CLU, and pretty much the best of everything from everyone.

    And he somehow made it all work together so well that you don't even notice that it has all that stuff. I learned Ruby faster than any other language, out of maybe 30 or 40 total; it took me about 3 days before I was more comfortable using Ruby than I was in Perl, after eight years of Perl hacking. It's so consistent that you start being able to guess how things will work, and you're right most of the time. It's beautiful. And fun. And practical.

Steve is one of those polyglot programmers I respect so much that I basically just take whatever his opinion is, provided it's not about something wacky like gun control or feminism or T'Pau, and accept it as fact.

I apologize, Steve. I'm sorry it took me 7 years to get around to Ruby. But maybe I was better off waiting a while anyway:

  • Ruby is a decent performer, but you really need to throw fast hardware at it for good performance. Yeah, I know, interpreted languages are what they are, and caching, database, network, blah blah blah. Still, we obtained the absolute fastest CPUs you could buy for the Discourse servers, 4.0 GHz Ivy Bridge Xeons, and performance is just … good on today's fastest hardware. Not great. Good.

    Yes, I'll admit that I am utterly spoiled by the JIT compiled performance of .NET. That's what I am used to. I do sometimes pine away for the bad old days of .NET when we could build pages that serve in well under 50 milliseconds without thinking about it too hard. Interpreted languages aren't going to be able to reach those performance levels. But I can only imagine how rough Ruby performance had to be back in the dark ages of 2006 when CPUs and servers were five times slower than they are today! I'm so very glad that I am hitting Ruby now, with the strong wind of many solid years of Moore's law at our backs.

  • Ruby is maturing up nicely in the 2.0 language release, which happened not more than a month after Discourse was announced. So, yes, the downside is that Ruby is slow. But the upside is there is a lot of low hanging performance fruit in Ruby-land. Like.. a lot a lot. On Discourse we got an across the board 20% performance improvement just upgrading to Ruby 2.0, and we nearly doubled our performance by increasing the default Ruby garbage collection limit. From a future performance perspective, Ruby is nothing but upside.

  • Ruby isn't cool any more. Yeah, you heard me. It's not cool to write Ruby code any more. All the cool people moved on to slinging Scala and Node.js years ago. Our project isn't cool, it's just a bunch of boring old Ruby code. Personally, I'm thrilled that Ruby is now mature enough that the community no longer needs to bother with the pretense of being the coolest kid on the block. That means the rest of us who just like to Get Shit Done can roll up our sleeves and focus on the mission of building stuff with our peers rather than frantically running around trying to suss out the next shiny thing.

    And of course the Ruby community is, and always has been, amazing. We never want for great open source gems and great open source contributors. Now is a fantastic time to get into Ruby, in my opinion, whatever your background is.

(However, it's also worth mentioning that Discourse is, if anything, even more of a JavaScript project than a Ruby on Rails project. Don't believe me? Just go to try.discourse.org and view source. A Discourse forum is not so much a website as it is a full-blown JavaScript application that happens to run in your browser.)

Even if done in good will and for the best interests of the project, it's still a little scary to totally change your programming stripes overnight after two decades. I've always believed that great programmers learn to love more than one language and programming environment – and I hope the Discourse project is an opportunity for everyone to learn and grow, not just me. So go fork us on GitHub already!

