Like the Roman god Janus (and many a politician), every web application has two faces: Its human face interacts with people, while its machine face interacts with computer systems, often as a result of those human interactions. Showing too much of either face to the wrong audience creates opportunity for error.

When a user interface—intended for human consumption—reflects too much of a system’s internals in its design and language, it’s likely to confuse the people who use it. But at the same time, if data doesn’t conform to a specific structure, it’s likely to confuse the machines that need to use it—so we can’t ignore system requirements, either.

People and machines parse information in fundamentally different ways. We need to find a way to balance the needs of both.

Enter the Robustness Principle

In 1980, computer scientist Jon Postel published an early specification for the Transmission Control Protocol, which remains the fundamental communication mechanism of the internet. In this spec, he gave us the Robustness Principle:

Be conservative in what you do, be liberal in what you accept from others.

Although often applied to low-level technical protocols like TCP, this golden rule of computing has broad application in the field of user experience as well.

To create a positive experience, we need to give applications a human face that’s liberal: empathetic, flexible, and tolerant of any number of actions the user might take. But for a system to be truly robust, its machine face must also take great care with the data it handles—treating user input as malicious by default, and validating the format of everything it sends to downstream systems.

Building a system that embraces these radically different sets of constraints is not easy. At a high level, we might say that a robust web application is one that:

  1. Accepts input from users in a variety of forms, based first on the needs and preferences of humans rather than machines.
  2. Accepts responsibility for translating that human input to meet the requirements of computer systems.
  3. Defines boundaries for what input is reasonable in a given context.
  4. Provides clear feedback to the user, especially when the translated input exceeds the defined boundaries.

Whether it’s a simple form or a sophisticated application, anytime we ask users for input, their expectations are almost certainly different from the computer’s in some way. Our brains are not made of silicon. But thinking in terms of the Robustness Principle can help us bridge the gap between human and machine in a wide range of circumstances.

Numbers

Humans understand the terms “one,” “1,” and “1.00” to be roughly equivalent. They are very different to a computer, however. In most programming languages, each is a different type of data with unique characteristics. Trying to perform math on the wrong kind of data could lead to unexpected results. So if a web application needs the user to enter a number, its developers want to be sure that input meets the computer’s definition. Our users don’t care about such subtleties, but they can easily bubble up into our user interfaces.
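A quick JavaScript sketch (the values are purely illustrative) shows how fast the human view and the machine view diverge:

// To a person, these are all "one"; to the machine, the string and the number
// are different types, and mixing them produces surprises.
const asString = "1";
const asNumber = 1;

console.log(asNumber + asNumber);   // 2, arithmetic on numbers
console.log(asString + asNumber);   // "11", string concatenation rather than addition
console.log(asNumber === 1.00);     // true, JavaScript stores 1 and 1.00 identically
console.log(asString === asNumber); // false, same "one" but different types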

When you buy something over the phone, the person taking your order never has to say, “Please give me your credit card number using only digits, with no spaces or dashes.” She is not confused if you pause while speaking or include a few “umms.” She knows a number when she hears one. But such prompts commonly litter web forms, instructing users to cater to the computer’s needs. Wouldn’t it be nice if the computer could cater to the person’s needs instead?

It often can, if we put the Robustness Principle to work to help our application take a variety of user input and turn it into something that meets the needs of a machine.

For example, we could do this right at the interface level by modifying fields to pre-process the user’s input, providing immediate feedback to the user about what’s happening. Consider an input field that’s looking for a currency value:

Form input requesting a currency value

HTML5 introduces some new attributes for the input element, including a type of number and a pattern attribute, intended to give developers a way to define the expected format of information. Unfortunately, browser support for these attributes remains limited and inconsistent. But a bit of JavaScript can do the same work. For example:

<!-- Strip anything that isn't a digit or a decimal point as the user types -->
<input onkeyup="value=value.replace(/[^0-9\.]/g,'')" />
<!-- Or leave the typing alone and flag a problem when the field loses focus -->
<input onblur="if(value.match(/[^0-9\.]/)) raise_alert(this)" />

The first input simply blocks any characters that are not digits or decimal points from being entered by the user. The second triggers a notification instead.
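For comparison, the declarative route mentioned above might look something like the sketch below. The attribute values are illustrative only, and browsers without support simply fall back to a plain text field, so the server still has to check the value:

<!-- Browsers that understand these attributes constrain or validate the value;
     everything else treats them as ordinary text inputs. -->
<input type="number" min="1" max="10000" step="0.01" />
<input type="text" pattern="[0-9]+(\.[0-9]{1,2})?" title="A dollar amount, such as 10.00" />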

We can make these simple examples far more sophisticated, but such techniques still place the computer’s rules in the user’s way. An alternative might be to silently accept anything the user chooses to provide, and then use the same regular expressions to process it on the server into a decimal value. Following guideline number three, the application would perform a sanity check on the result and report an error if a user entered something incomprehensible or out of the expected range.
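As a rough sketch of that server-side pass (in JavaScript; the function name, bounds, and messages are invented for illustration, not prescribed by any particular framework):

// Liberal face: accept whatever the user typed and reduce it to a decimal.
// Conservative face: sanity-check the result before acting on it (guideline 3)
// and report a clear problem when it falls outside the expected range (guideline 4).
function normalizeCurrency(rawInput, min, max) {
  const cleaned = String(rawInput).replace(/[^0-9.]/g, '');
  const amount = Number.parseFloat(cleaned);

  if (Number.isNaN(amount)) {
    return { ok: false, message: 'Please enter an amount, such as 10.00.' };
  }
  if (amount < min || amount > max) {
    return { ok: false, message: 'Amounts must be between ' + min + ' and ' + max + '.' };
  }
  return { ok: true, amount: amount };
}

normalizeCurrency('$10.00', 1, 10000);    // { ok: true, amount: 10 }
normalizeCurrency('ten bucks', 1, 10000); // { ok: false, message: '...' }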

Our application’s liberal human face will assume that these events are the exception: If we’ve designed and labeled our interfaces well, most people will provide reasonable input most of the time. Although precisely what people enter (“$10.00” or “10”) may vary, the computer can easily process the majority of those entries to derive the decimal value it needs, whether inline or server-side. But its cautious, machine-oriented face will check that assumption before it takes any action. If the transaction is important, like when a user enters the amount of a donation, the system will need to provide clear feedback and ask for confirmation before proceeding, even if the value falls within the boundaries of normalcy. Otherwise, aggressive reduction of text to a number could result in an unexpected—and potentially very problematic—result for our user:

Overly aggressive reduction of text input to a number leads to unexpected results

Dates

To a computer, dates and times are just a special case of numbers. In UNIX-based systems, for example, time is often represented as the number of seconds that have elapsed since January 1, 1970.
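In JavaScript, for instance, that machine-friendly representation is just a large number (the value shown is only a placeholder):

// Seconds elapsed since January 1, 1970 (UTC): perfectly precise,
// and perfectly meaningless to most humans.
const unixSeconds = Math.floor(Date.now() / 1000);
console.log(unixSeconds); // something like 1346025600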

For a person, however, context is key to interpreting dates. When Alice asks, “Can we meet on Thursday?” Bob can safely assume that she means the next Thursday on the calendar, and he certainly doesn’t have to ask if she means Thursday of last week. Interface designers should attempt to get as close to this human experience as possible by considering the context in which a date is required.

We can do that by revisiting some typical methods of requesting a date from users:

  • A text input, often with specific formatting requirements (MM/DD/YYYY, for example)
  • A miniature calendar widget, arranging dates in a month-by-month grid

These patterns are not mutually exclusive, and a robust application might offer either or both, depending on the context.

There are cases where the calendar widget may be very helpful, such as identifying a future date that’s not known (choosing the second Tuesday of next February). But much of the time, a text input probably offers the fastest path to entering a known date, especially if it’s in the near future. If Bob wants to make a note about Thursday’s meeting, it seems more efficient for him to type the word “Thursday” or even the abbreviation “Thu” than to invoke a calendar and guide his mouse (or worse, his fingertip on a touchscreen) to the appropriate tiny box.

But when we impose overly restrictive formatting requirements on the text, we undermine that advantage—if Bob has to figure out the correct numeric date, and type it in a very specific way, he might well need the calendar after all. Or if an application requests Alice’s birthdate in MM/DD/YYYY format, why should it trigger an error if she types 1/1/1970, omitting the leading zeroes? In her mind, it’s an easily comprehensible date.

An application embracing the Robustness Principle would accept anything from the user that resembles a date, again providing feedback to confirm her entry, but only reporting it as a problem if the interpretation fails or falls out of bounds. A number of software libraries exist to help computers translate human descriptions of dates like “tomorrow,” “next Friday,” or “11 April” into their structured, machine-oriented equivalents. Although many are quite sophisticated, they do have limitations—so when using them, it’s also helpful to provide users with examples of the most reliable patterns, even though the system can accept other forms of input.
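With one such library (chrono-node is used here purely as an example; the fallback message and the shape of the return value are assumptions, not a recommendation from this article):

// Translate human phrasing ("Thursday", "tomorrow", "11 April", "1/1/1970")
// into a Date relative to a reference date, then confirm or complain.
const chrono = require('chrono-node');

function parseHumanDate(text, referenceDate) {
  const parsed = chrono.parseDate(text, referenceDate || new Date());
  if (!parsed) {
    return { ok: false, message: 'Sorry, we could not read that as a date. Try something like "Thu" or "4/11".' };
  }
  return { ok: true, date: parsed };
}

parseHumanDate('next Friday'); // { ok: true, date: <the coming Friday> }
parseHumanDate('gibberish');   // { ok: false, message: '...' }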

Addresses

Perhaps more often than any other type of input, address fields tend to be based on database design rather than the convenience of human users. Consider this common layout:

Typical set of inputs for capturing an address

This set of fields may cover the majority of cases for U.S. addresses, but it doesn’t begin to scratch the surface for international users. And even in the U.S., there are legitimate addresses it won’t accommodate well.

An application that wants to accept human input liberally might take the daring approach of using a single textarea to capture the address, allowing the user to structure it just as he or she would when composing a letter. And if the address will only ever be used in its entirety, storing it as a single block of text may be all that’s required. It’s worth asking what level of detail is truly needed to make use of the data.

Often we have a clear business need to store the information in discrete fields, however. There are many web-based and local services that can take a variety of address formats and standardize them, whether they were collected through a single input or a minimal set of structured elements.

Consider the following address:

Avenue Appia 20
1211 Genève 27
SUISSE

The Google Geocoding API, for example, might translate it to something like the following, with a high level of granularity for mapping applications:

"address_components" : [
  {
     "long_name" : "20",
     "short_name" : "20",
     "types" : [ "street_number" ]
  },
  {
     "long_name" : "Avenue Appia",
     "short_name" : "Avenue Appia",
     "types" : [ "route" ]
  },
  {
     "long_name" : "Geneva",
     "short_name" : "Geneva",
     "types" : [ "locality", "political" ]
  },
  {
     "long_name" : "Genève",
     "short_name" : "Genève",
     "types" : [ "administrative_area_level_2", "political" ]
  },
  {
     "long_name" : "Geneva",
     "short_name" : "GE",
     "types" : [ "administrative_area_level_1", "political" ]
  },
  {
     "long_name" : "Switzerland",
     "short_name" : "CH",
     "types" : [ "country", "political" ]
  },
  {
     "long_name" : "1202",
     "short_name" : "1202",
     "types" : [ "postal_code" ]
  }
]
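A sketch of how an application might hand a free-form address to such a service and flatten the response into discrete fields (Node 18+ for the global fetch; the API key, error handling, and field mapping are placeholders, and the provider’s terms still apply):

// Send whatever the user wrote to the geocoding endpoint, then index the
// returned address_components by their first type (street_number, route,
// locality, postal_code, and so on).
async function standardizeAddress(freeformAddress) {
  const url = 'https://maps.googleapis.com/maps/api/geocode/json' +
    '?address=' + encodeURIComponent(freeformAddress) +
    '&key=' + process.env.GEOCODING_API_KEY;

  const response = await fetch(url);
  const data = await response.json();
  if (data.status !== 'OK' || data.results.length === 0) {
    return null; // fall back to asking the user rather than guessing
  }

  const fields = {};
  for (const component of data.results[0].address_components) {
    fields[component.types[0]] = component.long_name;
  }
  return fields; // e.g. { street_number: '20', route: 'Avenue Appia', ... }
}

standardizeAddress('Avenue Appia 20, 1211 Genève 27, SUISSE').then(console.log);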

The details (and license terms) of such standardization systems will vary and may not be appropriate for all applications. Complex addresses may be a problem, and we’ll need to give the application an alternate way to handle them. It will be more work. But to achieve the best user experience, it should be the application’s responsibility to first try to make sense of reasonable input. Users aren’t likely to care whether a CRM database wants to hold their suite number separately from the street name.

The exception or the rule

Parsing human language into structured data won’t always work. Under guideline number four, a robust system will detect and handle edge cases gracefully and respectfully, while working to minimize their occurrence. This long tail of user experience shouldn’t wag the proverbial dog. In other words, if we can create an interface that works flawlessly in 95 percent of cases, reducing the time to complete tasks and showing a level of empathy that surpasses user expectations, it’s probably worth the effort it takes to build an extra feedback loop to handle the remaining five percent.

Think again about the process of placing an order over the phone, speaking to a human being. If she doesn’t understand something you say, she may ask you to clarify. Even when she does understand, she may read the details back to you to confirm. Those interactions are normal and usually courteous. In fact, they reassure us all that the end result of the interaction will be what we expect.

She is not, however, likely to provide you with a set of rigid instructions as soon as she answers the phone, and then berate you for failing to meet some of them. And yet web applications create the equivalent interaction all the time (sometimes skipping past the instructions and going directly to the berating).

For most developers, system integrity is an understandably high priority. Better structure in user-supplied data means that we can handle it more reliably. We want reliable systems, so we become advocates for the machine’s needs. When input fails to pass validation, we tend to view it as a failure of the user—an error, an attempt to feed bad data into our carefully designed application.

But whether or not our job titles include the phrase “user experience,” we must advocate at least as much for the people who use our software as we do for computer systems. Whatever the problem a web application is solving, ultimately it was created to benefit a human being. Everyone who has a hand in building an application influences the experience, so improving it is everyone’s job. Thinking in terms of robustness can help us balance the needs of both people and computers.

Postel’s Law has proven its worth by running the internet for more than three decades. May we all hold our software—and the experiences it creates—to such a high standard.


The advent of Salesforce Marketing Cloud and Adobe Marketing Cloud demonstrates the need for enterprises to develop new ways of harnessing the vast potential of big data. Yet these marketing clouds raise the question of who will help marketers, the frontline of businesses, maximize marketing spending and ROI and help their brands win in the end. Simply moving software from onsite to hosted servers does not change the capabilities marketers require — real competitive advantage stems from intelligent use of big data.

Marc Benioff, who is famous for declaring that “Software Is Dead,” may face a similar fate with his recent bets on Buddy Media and Radian6. These applications provide data to people who must then analyze, prioritize and act — often at a pace much slower than the digital world. Data, content and platform insights are too massive for mere mortals to handle without costing a fortune. Solutions that leverage big data are poised to win — freeing up people to do the strategy and content creation that is best done by humans, not machines.

Big data is too big for humans to work with, at least in the all-important analytical construct of responding to opportunities in real time — formulating efficient and timely responses to opportunities generated from your marketing cloud, or pursuing the never-ending quest for perfecting search engine optimization (SEO) and search engine marketing (SEM). The volume, velocity and veracity of raw, unstructured data is overwhelming. Big data pioneers such as Facebook and eBay have moved to massive Hadoop clusters to process their petabytes of information.

In recent years, we’ve gone from analyzing megabytes of data to working with gigabytes, and then terabytes, and then petabytes and exabytes, and beyond. Two years ago, James Rogers wrote in The Street: “It’s estimated that 1 Petabyte is equal to 20 million four-door filing cabinets full of text.” We’ve become jaded to seeing such figures. But 20 million filing cabinets? If those filing cabinets were a standard 15 inches wide, you could line them up, side by side, all the way from Seattle to New York — and back again. One would need a lot of coffee to peruse so much information, one cabinet at a time. And a lot of marketing staff.

Of course, we have computers that do the perusing for us, but as big data gets bigger, and as analysts, marketers and others seek to do more with the massive intelligence that can be pulled from big data, we risk running into a human bottleneck. Just how much can one person — or a cubicle farm of persons — accomplish in a timely manner from the dashboard of their marketing cloud? While marketing clouds do a fine job of gathering data, it still comes down to expecting analysts and marketers to interpret and act on it — often with data that has gone out of date by the time they work with it.

Hence the rise of big data solutions that leverage machine learning, language models and prediction: self-learning systems that go from using algorithms to harvest information from big data to using algorithms to initiate actions based on that data.

Yes, this may sound a bit frightful: Removing the human from the loop. Marketers indeed need to automate some decision-making. But the human touch will still be there, doing what only people can do — creating great content that evokes emotions from consumers — and then monitoring and fine-tuning the overall performance of a system designed to take actions on the basis of big data.

This isn’t a radical idea. Programmed trading algorithms already drive significant activity across stock markets. And, of course, Amazon, eBay and Facebook have become generators of — and consummate users of — big data. Others are jumping on the bandwagon as well. RocketFuel uses big data about consumers, sites, ads and prior ad performance to optimize display advertising. Turn.com uses big data from consumer Web behavior, on-site behaviors and publisher content to create, optimize and buy advertising across the Web for display advertisers.

The big data revolution is just beginning as it moves beyond analytics. If we were building CRM again, we wouldn’t just track sales-force productivity; we’d recommend how you’re doing versus your competitors based on data across the industry. If we were building marketing automation software, we wouldn’t just capture and nurture leads generated by our clients, we’d find and attract more leads for them from across the Web. If we were building a financial application, it wouldn’t just track the financials of your company, it would compare them to public filings in your category so you could benchmark yourself and act on best practices.

Benioff is correct that there’s an undeniable trend that most marketing budgets today are betting more on social and mobile. The ability to manage social, mobile and Web analysis for better marketing has quickly become a real focus — and a big data marketing cloud is needed to do it. However, the real value and ROI comes from the ability to turn big data analysis into action, automatically. There’s clearly big value in big data, but it’s not cost-effective for any company to interpret and act on it before the trend changes or is over. Some reports find that 70 percent of marketers are concerned with making sense of the data and more than 91 percent are concerned with extracting marketing ROI from it. Incorporating big data technologies that create action means that your organization’s marketing can get smarter even while you sleep.

Raj De Datta founded BloomReach with 10 years of enterprise and entrepreneurial experience behind him. Most recently, he was an Entrepreneur-In-Residence at Mohr-Davidow Ventures. Previously, he was a Director of Product Marketing at Cisco. Raj also worked in technology investment banking at Lazard Freres. He holds a BSE in Electrical Engineering from Princeton and an MBA from Harvard Business School.


CowboyRobot writes "Dr. Dobb's has an editorial on the problem of using return values and exceptions to handle errors. Quoting: 'But return values, even in the refined form found in Go, have a drawback that we've become so used to we tend to see past it: Code is cluttered with error-checking routines. Exceptions here provide greater readability: Within a single try block, I can see the various steps clearly, and skip over the various exception remedies in the catch statements. The error-handling clutter is in part moved to the end of the code thread. But even in exception-based languages there is still a lot of code that tests returned values to determine whether to carry on or go down some error-handling path. In this regard, I have long felt that language designers have been remarkably unimaginative. How can it be that after 60+ years of language development, errors are handled by only two comparatively verbose and crude options, return values or exceptions? I've long felt we needed a third option.'"
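For readers who want the contrast made concrete, here is a small JavaScript sketch of the two conventional options the editorial describes (the function is invented for illustration):

// Style 1: return values. Every caller interleaves its logic with checks.
function parseQuantityResult(text) {
  const n = Number.parseInt(text, 10);
  if (Number.isNaN(n)) return { ok: false, error: 'not a number' };
  if (n < 0) return { ok: false, error: 'negative quantity' };
  return { ok: true, value: n };
}

const result = parseQuantityResult('42');
if (!result.ok) {
  console.error(result.error);
} else {
  console.log(result.value);
}

// Style 2: exceptions. The happy path reads straight through, and the
// remedies are pushed down into the catch block.
function parseQuantityThrow(text) {
  const n = Number.parseInt(text, 10);
  if (Number.isNaN(n)) throw new Error('not a number');
  if (n < 0) throw new Error('negative quantity');
  return n;
}

try {
  console.log(parseQuantityThrow('42'));
} catch (err) {
  console.error(err.message);
}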





A pervasive myth exists among tech founders: If they build a product that consumers will love, it will magically trickle into Fortune 500 companies.

The logic works something like this: Devote the bulk of your funding to designing the product. CIOs will fork over a piece of their sizable budget if enough employees get hooked and use it at work. Founders often tell me that just like cloud storage company Dropbox or enterprise social network Yammer, their product will be a hit with large organizations if it’s well-designed and easy to deploy.

Do a search for “Dropbox problem” or “Dropbox effect” and you’ll find thousands of articles. I agree that Dropbox has inspired more enterprise founders to experiment with freemium models or to build intuitive products, but it is not proof that a consumer-focused company can simply change focus to the enterprise without having to reengineer its technology from the ground up.

You can’t just ‘pivot’ to the enterprise

“Dropbox’s message is that business users want products that are simple and sexy,” said Ray Wang, the principal analyst and CEO of Constellation Research. That may be true, but according to Wang, to meet the needs of IT, you have to “do a lot more.”

For example, an enterprise startup needs a sales and support infrastructure to handle requests, and the product must be significantly more scalable and secure than a consumer product.

The “consumerization of the enterprise” trend is very real; it means that employees are embracing the latest mobile and social technology and applications, and they are bringing their own devices to work.

But this trend has not replaced traditional enterprise sales cycles. Even new-age startups like Yammer (recently acquired by Microsoft for $1.2 billion), which once spread the notion that big companies will embrace new technologies the same way that people do with consumer products, later hired a full enterprise sales and customer support team.

“It’s a beautiful story that has been spread by investors and founders,” said Mike Driscoll, the CEO of Metamarkets, a “big data startup” in San Francisco. Driscoll said that he is already on the hunt for a new sales executive, preferably with experience working for a legacy vendor like IBM or EMC.

Likewise Box, a Dropbox competitor, had to make sweeping changes before approaching the enterprise. It brought on adult supervision in the form of Whitney Tidmarsh Bouck, a former chief marketing officer at enterprise technology company EMC. To land big-name customers like The Gap and Volkswagen, Bouck said the startup needed “dedicated product marketers and resources.”

“It is our central point of focus,” she added. The product team had to incorporate scalability, integration and security controls, mobile technology, Active Directory support, and so on. Most importantly, she said, “it’s a long-term consultative sales approach that is a world apart from a consumer or SMB [small to medium-sized business] play.”

Ben Horowitz, the cofounder and general partner of Andreessen Horowitz, was one of the first venture capitalists to dispel the myth. As he put it in a blog post:

Encouraged by the new trend, innovative entrepreneurs imagine a world where consumers find great solutions to help their employers in the same way that they find great products to help themselves. In the imaginary enterprise, these individuals will then take the initiative to convince their colleagues to buy the solution. Through this method, if the product is truly great, there will be little or no need to actually sell it.

The actual enterprise works a bit differently. Meet the new enterprise customer. He’s a lot like the old enterprise customer.

Indeed, when employees set up accounts for consumer-focused services without permission, the IT department is at risk of losing control over corporate data, whether it’s emails, reports, or instant-messaging chatter. However, this does not mean that the IT executives will strike deals with these tech providers to preserve security and governance.

Sand Hill is part of the problem

Greg Piesco Putnam, cofounder of Aktana, an enterprise sales startup, told me that most venture capital firms accepted the Dropbox myth without question when he was raising funds.

“They were looking for stories of the consumerization of IT, and the entrepreneurs who told those stories raised big rounds,” he recalled. “The question that was not asked was whether IT departments would actually respond to these user demands.” He explained that in the enterprise, startups need to convince at least three key decision-makers: IT, business, and operations.

Wang told me he often hears about high-performing, early-stage consumer startups that shift gears once their investors demand to see a solid business plan. Entrepreneurs are aware that their investors are angling for a piece of the trillion-dollar market for enterprise software.

“You get folks saying, I’m going to enterprise now to cover my butt, but the product might not have been designed for that,” said Wang, who draws a useful comparison to the adoption of email programs Lotus and Outlook. The latter was widely used in the enterprise despite its design flaws. “In the enterprise, the best sales and marketing wins, not the best product,” he said.

At the Disrupt conference in San Francisco, young enterprise founders from startups like Asana contested this point, clearly demonstrating that the myth is still pervasive.

“The distribution model has changed,” Asana’s CEO Justin Rosenstein said, and he argued that the CIO is the end-user for enterprise software. “You don’t have to be sales-driven or marketing-driven; you have to be product-driven,” Rosenstein said. “It will be the best product that wins,” he added. Asana is task management software started by Facebook cofounder Dustin Moskovitz and Rosenstein, a former Google employee.

“Nothing is relatively different, it’s just evolved,” hit back Cloudera COO Kirk Dunn. Dunn is right to advise caution: a young company will not succeed without a full customer support and sales team. In the enterprise, product simply isn’t enough. “You can have a great product and great sales-focused company,” Todd McKinnon, the CEO of cloud startup Okta, offered as a conciliatory response.

At startup demo days and hackathons, young founders are slowly waking up to the importance of traditional enterprise sales. “At the enterprise level, a great product doesn’t sell itself; it takes a great sales and marketing organization to engage buyers, procurement organizations, and IT departments to close a large enterprise deal,” said Mark Trang, the cofounder of SocialPandas, a CRM startup that recently debuted at Founders Den.

The roots of the ‘Dropbox myth’

Dropbox is a consumer startup and wasn’t built to store and share terabytes of sensitive data for a Fortune 500 company. As VentureBeat reported earlier, with its third major security breach this year, the fast-growing private company has become a problem child for chief information officers.

“We’re consistently replacing Dropbox in the enterprise,” Vineet Jain, the CEO of enterprise cloud storage startup Egnyte, told VentureBeat. “It’s incessantly used in enterprise until IT shuts it down.”

If you are selling to consumers or small companies that behave like consumers, moving away from the old enterprise sales and channel models may make perfect sense. However, if you plan to strike multimillion-dollar deals with enterprise companies, the chief information officer is still the chief decision-maker.

In short, the Dropbox model didn’t even work for Dropbox.




Look at this incredible thing Ian Baker created. Look at it!

The PHP hammer

What you're seeing is not Photoshopped. This is an actual photo of a real world, honest to God double-clawed hammer. Such a thing exists. Isn't that amazing? And also, perhaps, a little disturbing?

That wondrous hammer is a delightful real-world acknowledgement of the epic blog entry PHP: A Fractal of Bad Design.

I can’t even say what’s wrong with PHP, because – okay. Imagine you have uh, a toolbox. A set of tools. Looks okay, standard stuff in there.

You pull out a screwdriver, and you see it’s one of those weird tri-headed things. Okay, well, that’s not very useful to you, but you guess it comes in handy sometimes.

You pull out the hammer, but to your dismay, it has the claw part on both sides. Still serviceable though, I mean, you can hit nails with the middle of the head holding it sideways.

You pull out the pliers, but they don’t have those serrated surfaces; it’s flat and smooth. That’s less useful, but it still turns bolts well enough, so whatever.

And on you go. Everything in the box is kind of weird and quirky, but maybe not enough to make it completely worthless. And there’s no clear problem with the set as a whole; it still has all the tools.

Now imagine you meet millions of carpenters using this toolbox who tell you “well hey what’s the problem with these tools? They’re all I’ve ever used and they work fine!” And the carpenters show you the houses they’ve built, where every room is a pentagon and the roof is upside-down. And you knock on the front door and it just collapses inwards and they all yell at you for breaking their door.

That’s what’s wrong with PHP.

Remember the immediate visceral reaction you had to the double-clawed hammer? That's exactly the reaction most sane programmers have to their first encounter with the web programming language PHP.

This has been going on for years. I published my contribution to the genre in 2008 with PHP Sucks, But It Doesn't Matter.

I'm no language elitist, but language design is hard. There's a reason that some of the most famous computer scientists in the world are also language designers. And it's a crying shame none of them ever had the opportunity to work on PHP. From what I've seen of it, PHP isn't so much a language as a random collection of arbitrary stuff, a virtual explosion at the keyword and function factory. Bear in mind this is coming from a guy who was weaned on BASIC, a language that gets about as much respect as Rodney Dangerfield. So I am not unfamiliar with the genre.

Except now it's 2012, and fellow programmers are still writing long screeds bemoaning the awfulness of PHP!

What's depressing is not that PHP is horribly designed. Does anyone even dispute that PHP is the worst designed mainstream "language" to blight our craft in decades? What's truly depressing is that so little has changed. Just one year ago, legendary hacker Jamie Zawinski had this to say about PHP:

I used to think that PHP was the biggest, stinkiest dump that the computer industry had taken on my life in a decade. Then I started needing to do things that could only be accomplished in AppleScript.

Is PHP so broken as to be unworkable? No. Clearly not. The great crime of PHP is its utter banality. Its continued popularity is living proof that quality is irrelevant; cheap and popular and everywhere always wins. PHP is the Nickelback of programming languages. And, yes, out of frustration with the status quo I may have recently referred to Rasmus Lerdorf, the father of PHP, as history's greatest monster. I've told myself a million times to stop exaggerating.

The hammer metaphor is apt, because at its core, this is about proper tooling. As presciently noted by Alex Papadimoulis:

A client has asked me to build and install a custom shelving system. I'm at the point where I need to nail it, but I'm not sure what to use to pound the nails in. Should I use an old shoe or a glass bottle?

How would you answer the question?

  1. It depends. If you are looking to pound a small (20lb) nail in something like drywall, you'll find it much easier to use the bottle, especially if the shoe is dirty. However, if you are trying to drive a heavy nail into some wood, go with the shoe: the bottle will shatter in your hand.
  2. There is something fundamentally wrong with the way you are building; you need to use real tools. Yes, it may involve a trip to the toolbox (or even to the hardware store), but doing it the right way is going to save a lot of time, money, and aggravation through the lifecycle of your product. You need to stop building things for money until you understand the basics of construction.

What we ought to be talking about is not how terrible PHP is – although its continued terribleness is a particularly damning indictment – but how we programmers can culturally displace a deeply flawed tool with a better one. How do we encourage new programmers to avoid picking up the double clawed hammer in favor of, well, a regular hammer?

This is not an abstract, academic concern to me. I'm starting a new open source web project with the goal of making the code as freely and easily runnable to the world as possible. Despite the serious problems with PHP, I was forced to consider it. If you want to produce free-as-in-whatever code that runs on virtually every server in the world with zero friction or configuration hassles, PHP is damn near your only option. If that doesn't scare you, then check your pulse, because you might be dead.

Everything goes with PHP sauce! Including crushing depression.

Therefore, I'd like to submit a humble suggestion to my fellow programmers. The next time you feel the urge to write Yet Another Epic Critique of PHP, consider that:

  1. We get it already. PHP is horrible, but it's used everywhere. Guess what? It was just as horrible in 2008. And 2005. And 2002. There's a pattern here, but it's subtle. You have to look very closely to see it. On second thought, never mind. You're probably not smart enough to figure it out.
  2. The best way to combat something as pervasively and institutionally awful as PHP is not to point out all its (many, many, many) faults, but to build compelling alternatives and make sure these alternatives are equally pervasive, as easy to set up and use as possible.

We've got a long way to go. One of the explicit goals of my next project is to do whatever we can to buff up a … particular … open source language ecosystem such that it can truly compete with PHP in ease of installation and deployment.

From my perspective, the point of all these "PHP is broken" rants is not just to complain, but to help educate and potentially warn off new coders starting new codebases. Some fine, even historic work has been done in PHP despite the madness, unquestionably. But now we need to work together to fix what is broken. The best way to fix the PHP problem at this point is to make the alternatives so outstanding that the choice of the better hammer becomes obvious.

That's the PHP Singularity I'm hoping for. I'm trying like hell to do my part to make it happen. How about you?




Editor’s note: Aaron Levie is CEO of Box. Follow him on Twitter @levie.

In 1997, Larry Ellison had a vision for a new paradigm of computing which he called the Network Computer (NC). The idea was simple: a group of partners would build devices and services that leveraged the power of the Internet to compete against the growing Windows monopoly.

Ellison believed that the computer in the client/server era had evolved into too complex a machine for most tasks. With the NC, the ‘heavy’ computation of software and infrastructure would be abstracted from the actual device and delivered instead to thinner terminals via the web, thus radically simplifying access and enabling all new applications and mobility.

But the NC never made it mainstream. Microsoft and its allies had already amassed considerable power, and the cost of personal computers was dropping rapidly, making them even more attractive and ubiquitous. Furthermore, many of the applications were too immature to compete with the desktop software experience at the time; and few people, as it turned out, wanted to buy a device championed by Oracle.

The NC fell short on execution, but Ellison was right about the vision: “It’s the first step beyond personal computing, to universal computing.” In many ways, he was the first to glimpse a future resembling the post-PC world we are rapidly moving towards today.

Fifteen years later, it is Apple that has brought its version of this vision to life. And Apple’s rising tide – already 172 million devices strong, sold in the last year alone – has in turn given rise to a massive, vibrant ecosystem: companies generating hundreds of millions and billions of dollars in value in under a few years, revolutionizing industries like gaming, social networking, entertainment and communications in the process. Then of course there’s Instagram. All of which proves that the value created in this mobile and Post-PC world will rival traditional computing categories.

But the post-PC transformation isn’t limited to the consumer landscape. In the enterprise, we’re transitioning to a way of working that is far more fluid, boundary-less and social. And mobile pushes computing to the cloud and rewrites all applications in its wake. Those who saw it coming (Oracle) and those who initially resisted its arrival (Microsoft) have equally been taken by surprise by the power and speed of the post-PC shift within today’s enterprises, and it’s creating one of the biggest opportunities ever.

Why the change is so profound

We recently met with the IT leadership team of a fairly conservative 50,000-person organization where all the participants had iPads. No big surprise there. But the apps they were using were radically different from what you would have found in their organization only a few years back – a mix of apps from a new set of vendors that together supplant the traditional Microsoft Office stack.

Post-PC devices are driving enterprises to rethink their entire IT architecture, thanks to a wildly unpredictable and improbable chain reaction set off by a new consumer device from Apple.  For the first time in decades, CIOs have the opportunity – and necessity – to completely re-imagine and rebuild their technology strategy from the ground up. Catalyzing this change is the fact that the technology switching costs are often less than the price of maintaining existing solutions. A shipment of 1,000 new iPads requires applications to run on these devices – and choosing all-new applications and vendors is generally cheaper than the service fees, infrastructure, and operational costs of legacy software.

And thus, the Post-PC era drives the final nail in the coffin of the traditional enterprise software hegemony. Microsoft, in particular, built up a practical monopoly that lasted nearly twenty years, and forced an entire industry to conform to its way of seeing the world.  Yet this arrangement served its benefactor far more than the ecosystem, as the Redmond giant built up leadership positions across nearly every application category.

In the Post-PC era, businesses will shift from deploying and managing end-to-end enterprise solutions from a single vendor, to consuming apps a la carte both as individuals and en masse. But which apps and vendors will help define this new world?

What’s coming won’t look like what came before

Change always begins incrementally at first. Predicting specifically what will happen in the next year or two is a far more realistic undertaking than anticipating where we’ll be in a decade. In shifting from one technology generation to the next, we minimize disruption by porting the old way of doing things to newer mediums or channels. Not until the new model settles in do we see the real results that rise from these foundational shifts.

Mobility is such a foundational shift, and it’s still very, very early. Even when the Microsofts and Oracles of the world relent and build applications for post-PC devices, these apps will carry much of the DNA of their desktop predecessors. We can imagine that each of the enterprise mainstays – ERP, HR management, supply chain, business intelligence, and office productivity – will be painstakingly moved to mobile. But that’s just the first phase.

Emerging CRM startups like Base will challenge longstanding assumptions about where and how you manage customer interactions. Data visualization software like Roambi will make business analytics more valuable by making it available everywhere. Entire industries are already being transformed: mobile healthcare apps will enable cutting-edge health outcomes, and construction sites will eventually be transformed by apps like PlanGrid.  Companies like CloudOn and Onlive aim to virtualize applications that we never imagined would be available outside the office walls. Evernote’s 20+ million users already make it one of the most popular independent productivity software apps of all time, whose value is dramatically amplified by this revolution.  In a mobile and Post-PC world, the very definition of the office suite is transformed.

And with this transformation, much of the $288B spent annually on enterprise software is up for grabs.  The post-PC era is about no longer being anchored to a handful of solutions in the PC paradigm. Instead, we’re moving to a world where we mix and match best-of-breed solutions. This means more competition and choice, which means new opportunities for startups, which should mean more innovation for customers. As soon as individual workers look to the App Store for an immediate solution to their problem instead of calling IT (who in turn calls a vendor) you can tell things will never be the same.

In many ways, the enterprise software shift mirrors that of the media and cable companies fighting for relevance in a world moving to digital content (HT @hamburger). If users and enterprises can select apps that are decoupled from an entire suite, we might find they’d use a completely different set of technology, just as many consumers would only subscribe to HBO or Showtime if given the option.

Of course, every benefit brings a new and unique challenge. In a world where users bring their own devices into the workplace, connect to any network, and use a mix of apps, managing and securing business information becomes an incredibly important and incredibly challenging undertaking. Similarly, how do we get disparate companies to build apps that work together, instead of spawning more data silos?  And as we move away from large purchases of suites from a single provider, what is the new business model that connects vendors with customers (both end users and IT departments) with minimal friction?

And then there’s the inherent fragmentation of devices and platforms that defines the post-PC era. Android, iOS, and Windows 7 and 8 all have different languages and frameworks, UI patterns, and marketplaces. The fate of mobile HTML5 is still indeterminate. Fragmentation and sprawl of apps and data is now the norm. And while this fragmentation is creating headaches for businesses and vendors alike, it’s also opening a window for the next generation of enterprise software leaders to emerge and redefine markets before the industry settles into this new paradigm.

It would appear that Larry Ellison’s vision for the NC was right all along, just 15 years early. Welcome to the post-PC enterprise.
