
Ilya Grigorik discusses in detail how to construct a mobile website that loads as quickly as possible. A site that not only renders in 1 second, but one that is also visible in 1 second. With hard statistics as evidence to show why this matters, Ilya discusses techniques to deliver a 1000 millisecond experience.


In this 60-minute video from An Event Apart Boston, Scott Berkun tackles designer disempowerment. He discusses how power actually works, and why developing salesmanship skills is a must, even if your job isn’t public-facing.


Web maps have come a long way. Improved data, cleaner design, better performance, and more intuitive controls have made web maps a ubiquitous and critical component of many apps. They’ve also become one of the mobile space’s most successful transplants as more and more apps are powered by location-aware devices. The core web map UI paradigm itself—a continuous, pannable, zoomable surface—has even spread beyond mapping to interfaces everywhere.

Despite all this, we’ve barely begun to work web maps into our design practice. We create icon fonts, responsive grids, CSS frameworks, progressive enhancement strategies, and even new design processes. We tear down old solutions and build new ones, and even take an extra second to share battle stories in prose and in person. Yet nearly five years since Paul Smith’s article, “Take Control of Your Maps,” web maps are still a blind spot for most designers.

Have you ever taken apart a map? Worked with a map as a critical part of your design? Developed tricks, hacks, workarounds, or progressive enhancements for maps?

This article is a long overdue companion to Paul’s piece. Where he goes on a whirlwind survey of the web mapping stack at 10,000 feet, we’re going to walk through a single design process and implement a modern-day web map. By walking this path, I hope to begin making maps part of the collective conversation we have as designers.

Opinionated about open

Paul makes a strong case for why you might want to use open mapping tools instead of the established incumbent. I won’t retread his reasons here, but I would like to expand on his last: Open tools are the ones we hack best.

There is nothing mysterious about web maps. Take any spatial plane, split it up into discrete tiles, position them in the DOM, and add event handlers for panning and zooming. The basic formula can be applied to Portland, Mars, or Super Mario Land. It works for displaying large street maps, but nothing stops us from tinkering with it to explore galleries of art, create fictional game worlds, learn human anatomy, or simply navigate a web page. Open tools bare the guts of this mechanism to us, allowing us to see a wider range of possibilities.
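To make that basic formula concrete, here is a minimal, illustrative sketch in JavaScript (the tile server URL and container are placeholders; a real library adds panning, zooming, and tile loading and recycling on top of this):

// A toy version of the basic formula: convert a location to "slippy map" tile
// coordinates, then absolutely position a 3x3 grid of 256px tile images.
var TILE_SIZE = 256;

function lonLatToTile(lon, lat, zoom) {
  var n = Math.pow(2, zoom);
  var x = Math.floor((lon + 180) / 360 * n);
  var latRad = lat * Math.PI / 180;
  var y = Math.floor((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2 * n);
  return { x: x, y: y };
}

function drawTiles(container, lon, lat, zoom) {
  var center = lonLatToTile(lon, lat, zoom);
  for (var dx = -1; dx <= 1; dx++) {
    for (var dy = -1; dy <= 1; dy++) {
      var img = document.createElement('img');
      // Placeholder tile server URL, using the common z/x/y scheme.
      img.src = 'https://tiles.example.com/' + zoom + '/' + (center.x + dx) + '/' + (center.y + dy) + '.png';
      img.style.position = 'absolute';
      img.style.left = (dx + 1) * TILE_SIZE + 'px';
      img.style.top = (dy + 1) * TILE_SIZE + 'px';
      container.appendChild(img);
    }
  }
}

drawTiles(document.getElementById('map'), -0.12, 51.5, 12); // central London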

The mechanics of web maps are not limited to street maps.

We should know the conditions under which map images are loaded and destroyed; we should argue whether map tiles are best positioned with CSS transforms or not; and we should care whether vector elements are drawn with SVG or Canvas. Open tools let us know and experiment with these working details of our maps. If you wouldn’t have it any other way with your HTML5, CSS, or JavaScript libraries, then you shouldn’t settle for less when it comes to maps.

In short, we’ll be working with a fully open mapping stack. MapBox, where I work, has pulled together several open source libraries into a single API that we publish under mapbox.js. Other open mapping libraries that are worth your time include Leaflet and D3.js.

Starting out

I’m a big fan of Sherlock Holmes. Between the recent Hollywood movies starring Robert Downey Jr. and the BBC’s contemporary series, I’m hooked. But as someone who has never been to London, I know I’m missing the richness of place and setting that Sir Arthur Conan Doyle meant to be read into his short stories.

A typical approach would be to embed a web map with pins of various locations alongside one of the Sherlock stories. With this approach the map becomes an appendix—a dispensable element that plays little part in Doyle’s storytelling. Instead, we’re going to expand the role of our map, integrating it fully into the narrative. It will set the stage, provide pace, and affect the mood of our story.

Comparing a map used as embedded media versus one used as a critical design element.

A tale of places

To establish a baseline for our tale, I restructured The Adventure of the Bruce-Partington Plans to be told around places. I picked eight key locations from the original text, pulled out the essential details of the mystery, and framed them out with HTML, CSS, and JavaScript.

A Sherlock Holmes story in text only. View Demo 1.

  • The story is broken up into section elements for each key location. A small amount of JavaScript implements a scrolling flow that highlights a single section at a time (a rough sketch of this flow follows the list).
  • Our page is not responsive yet, but it contains scaffolding to guard against bad choices that could thwart us. The main text column is fluid at 33.33% and pins to a min-width: 320px. If our content and design flow reasonably within these constraints, we’re in good shape.
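A rough sketch of that scrolling flow, with illustrative element and class names:

// Highlight whichever section currently sits at the middle of the viewport.
var sections = document.querySelectorAll('section');

function setActive() {
  var middle = window.scrollY + window.innerHeight / 2;
  for (var i = 0; i < sections.length; i++) {
    var top = sections[i].offsetTop;
    var bottom = top + sections[i].offsetHeight;
    sections[i].classList.toggle('active', middle >= top && middle < bottom);
  }
}

window.addEventListener('scroll', setActive);
setActive();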

Next, we’ll get started mapping. Initially we’ll work on our map separately from our story page to focus on learning key elements of a new technology.

Maps are data

The mapping equivalent of our abridged Sherlock story is a dataset of eight geographic points. GeoJSON, a format for describing geographic data in JSON, is the perfect starting point for capturing this data:

[{
    "geometry": { "type": "Point", "coordinates": [-0.15591514, 51.51830379] },
    "properties": { "title": "Baker St." }
}, {
    "geometry": { "type": "Point", "coordinates": [-0.07571203, 51.51424049] },
    "properties": { "title": "Aldgate Station" }
}, {
    "geometry": { "type": "Point", "coordinates": [-0.08533793, 51.50438536] },
    "properties": { "title": "London Bridge Station" }
}, {
    "geometry": { "type": "Point", "coordinates": [0.05991101, 51.48752939] },
    "properties": { "title": "Woolwich Arsenal" }
}, {
    "geometry": { "type": "Point", "coordinates": [-0.18335806, 51.49439521] },
    "properties": { "title": "Gloucester Station" }
}, {
    "geometry": { "type": "Point", "coordinates": [-0.19684993, 51.5033856] },
    "properties": { "title": "Caulfield Gardens" }
}, {
    "geometry": { "type": "Point", "coordinates": [-0.10669358, 51.51433123] },
    "properties": { "title": "The Daily Telegraph" }
}, {
    "geometry": { "type": "Point", "coordinates": [-0.12416858, 51.50779757] },
    "properties": { "title": "Charing Cross Station" }
}]

Each object in our JSON array has a geometry—data that describe where this object is in space—and properties—freeform data of our own choosing to describe what this object is. Now that we have this data, we can create a very basic map.

The basics of web mapping. View Demo 2.

  • Note that the coordinates are a pair of longitude and latitude degrees. In the year 2013, it is still not possible to find a consistent order for these values across mapping APIs. Some use lat,lon to meet our expectations from grade-school geography. Others use lon,lat to match x,y coordinate order: horizontal, then vertical.
  • We’re using mapbox.js as our core open source mapping library. Each map is best understood as the key parameters passed into mapbox.map():
    1. A DOM element container
    2. One or more Photoshop-like layers that position tiles or markers
    3. Event handlers that bind user input to actions, like dragging to panning
  • Our map has two layers. Our tile layer is made up of 256x256 square images generated from a custom map on MapBox. Our spots layer is made up of pin markers generated from the GeoJSON data above (a sketch of how these pieces fit together follows).
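Putting those pieces together might look roughly like this. It is a sketch based on the description above and the original, pre-Leaflet mapbox.js API; the map id is a placeholder and exact method names may differ from the demo code:

// geojson holds the array of point objects shown earlier.
var tiles = mapbox.layer().id('your-account.your-map');   // 256x256 tile layer from a custom MapBox map
var spots = mapbox.markers.layer().features(geojson);     // pin markers generated from the GeoJSON

// A DOM element container plus the layers; event handlers can also be passed in.
var map = mapbox.map(document.getElementById('map'), [tiles, spots]);
map.centerzoom({ lat: 51.507, lon: -0.128 }, 13);         // roughly central London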

This is a good start for our code, but nowhere near our initial goal of using a map to tell our Sherlock Holmes story.

Beyond location

According to our first map, the eight items in our GeoJSON dataset are just places, not settings in a story full of intrigue and mystery. From a visual standpoint, pins anonymize our places and express them as nothing more than locations.

To overcome this, we can use illustrations for each location—some showing settings, others showing key plot elements. Now our audience can see right away that there is more to each location than its position in space. As a canvas for these, I’ve created another map with a custom style that blends seamlessly with the images.

Illustrations and a custom style help our map become part of the storytelling. View Demo 3, and then read the diff.

  • The main change here is that we define a custom factory function for our markers layer. The job of the factory function is to take each GeoJSON object and convert it to a DOM element—an a, div, img, or whatever—that the layer will then position on the map.
  • Here we generate divs and switch from using a title attribute in our GeoJSON to an id. This provides us with useful CSS classes for displaying illustrations with our custom markers (a sketch of the factory function follows).
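A sketch of such a factory function, assuming the markers layer exposes a factory() hook; the class names here are illustrative:

// spots is the markers layer created earlier. The factory converts each
// GeoJSON object into the DOM element the layer will position on the map.
spots.factory(function(feature) {
  var el = document.createElement('div');
  el.className = 'marker ' + feature.properties.id;   // e.g. "marker baker-st"
  return el;
});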

Bringing it all together

Now it’s time to combine our story with our map. By using the scroll events from before, we can coordinate sections of the story with places on the map, crafting a unified experience.

As the user reads each section, the map pans to a new location. View Demo 4, then read the diff.

  • The bridge between the story and the map is a revamped setActive() function. Previously it only set an active class on a particular section based on scrolling position. Now it also finds the active marker, sets an active class, and eases the map to the marker’s location (sketched after this list).
  • Map animation uses the easey library in the mapbox.js API, which implements animations and tweening between geographic locations. The API is dead simple—we pass it the lon,lat of the marker we want to animate to, and it handles the rest.
  • We disable all event handlers on our map by passing an empty array into mapbox.map(). Now the map can only be affected by the scrolling position. If users wanted to deviate from the storyline or explore London freeform, we could reintroduce event handlers—but in this case, less is more.
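A sketch of the revamped setActive() bridge. It assumes easey is exposed as map.ease (the article refers to ease.optimal() later), and the zoom level and selectors are illustrative:

// Highlight the active section and ease the map to the matching marker.
// map and geojson are the objects created in the earlier sketches.
function setActive(index) {
  var sections = document.querySelectorAll('section');
  for (var i = 0; i < sections.length; i++) {
    sections[i].classList.toggle('active', i === index);
  }
  var coords = geojson[index].geometry.coordinates;   // [lon, lat]
  map.ease.location({ lat: coords[1], lon: coords[0] }).zoom(15).optimal();
}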

Displaying our fullscreen map together with text presents an interesting challenge: our map viewport should be offset to the right to account for our story on the left. The solution I’m using here is to expand our map viewport off canvas purely using CSS. You could use JavaScript, but as we’ll see later, a CSS-only approach gives us elegant ways to reapply and adjust this technique on mobile devices.

Using an off-canvas map width to offset the viewport center.

At this stage, our map and story complement each other nicely. Our map adds spatial context, visual intrigue, and an interesting temporal element as it eases between long and short distances.

Maps in responsive design

The tiled, continuous spatial plane represented by web maps is naturally well-suited to responsive design. Web maps handle different viewport sizes easily by showing a bit more or a bit less map. For our site, we adjust the layout of other elements slightly to fit smaller viewports.

Tweaking layout with web maps. View Demo 5, then read the diff.

  • With less screen real estate, we hide non-active text sections and pin the active text to the top of the screen.
  • We use the bottom half of the screen for our map and use media queries to adjust the map’s center point to be three-fourths the height of the screen, using another version of our trick from Demo 4.

With a modest amount of planning and minimal adjustments, our Sherlock story is ready to be read on the go.

Solve your own case

If you’ve been following the code between these steps, you’ve probably noticed at least one or two things I haven’t covered, like the parameters of ease.optimal(), or how tooltips picked up on the title attribute of our GeoJSON data. The devil’s in the details, so head to this GitHub repository, where you will find the code and the design.

You should also check out:

  • The MapBox site, which includes an overview of tiling and basic web map concepts, and MapBox.js docs and code examples.
  • Leaflet, another powerful open source mapping library.
  • D3.js, a library for powering data-driven documents that has a broad range of applications, including mapping.

This example shows just one path to integrating web maps into your designs. Don’t stick to it. Break it apart. Make it your own. Do things that might be completely genius or utterly stupid. Even if they don’t work out, you’ll be taking ownership of maps as a designer—and owning something is the only way we’ll improve on it.


For ideal typography, web designers need to know as much as possible about each user’s reading environment. That may seem obvious, but the act of specifying web typography is currently like ordering slices of pizza without knowing how large the slices are or what toppings they are covered with.

If someone asked me how many slices of pizza I wanted for lunch, I would probably say it depends on how large the slices are. Then—even if they told me that each slice was one eighth of a whole pie, or that they themselves were ordering two slices, or even that the slices were coming from Joe’s Pizza—any answer I might give would still be based on relative knowledge and inexact assumptions.

Such is the current situation with the physical presentation of responsive typography on the web. The information at a designer’s disposal for responsive design is virtually nonexistent outside the realm of software. Very little knowledge about the physical presentation of content is available to inform the design. The media query features of today can only relay a very fragmented view of the content’s actual presentation, and related terms from CSS are confusing if not downright misleading.

The immeasurable pachyderm

Among all the physical qualities of web typography, the elephant in the room is the issue of size. I’m not talking about em or rem or “reference pixels” ¹ or even device pixels. I’m talking about real, actual, physical, bona fide, measurable size!

It’s ridiculous that we can send robots to Mars yet it’s still virtually impossible to render a glyph on a web page and say with confidence: “If you measure this glyph on your screen with a ruler, it will be exactly 10 millimeters wide.” Although actual physical size isn’t always the most important factor in web design, in some cases it is critical. For example, consider content for partially-sighted or low-vision readers: the ability to tweak designs according to physical sizes would enable designers to make conscious design decisions with much more sensitivity to how the type is actually being seen. And even where physical sizing is secondary to relative sizing, why shouldn’t we nevertheless be able to factor in physical size when establishing the relationships between different elements?

Physical considerations ≠ print design

I don’t believe web typography should be a screen-based imitation of print typography. One of the greatest benefits of web typography, and web design in general, is that it is flexible, adaptable, fluidly adjustable, without being locked into any one specific configuration. However(!), that doesn’t mean web designers should be forced to design without any means to address the issues of physical presentation. On the contrary, responsive design will not reach its full potential until it allows the ability to respond to the very important physical variables of digital media.

Please pardon the cliché, but when it comes to typography, on screens or otherwise, size matters. Physical size affects optical issues that change how the eye and brain process typographic images. Not surprisingly, typographers and typeface designers have been compensating for optical size-related issues as far back as Gutenberg.

You can’t expect a paragraph of type with the same relative line-height, column width, letter-spacing, and glyph proportions to function just as well on two different displays that have the same number of pixels but completely different physical sizes. It’s great that designers can adjust proportions between typographic elements if the canvas varies in relative size, but any such compensation is still based on guesswork and assumptions about the physical size of that canvas. When people disagree about the size or spacing of type on a website, there’s a very good chance that their opinions are based on completely different physical manifestations of the same content, even if their software and settings are identical.

Resolute resolution, absolute absolution

One of the most crucial factors in the size equation is resolution. And when I say resolution, I don’t just mean “how many pixels is this?”, or even “how many device pixels is this?”, but also “how large are these pixels?”

This is very different from the W3C’s “resolution” media feature in the current draft of the Media Queries Level 4 spec. You will note that the spec refers to resolution in terms of “CSS ‘inches’”—the quotes around “inches” are theirs, implying that they are not actually inches at all.

For an example of why physical resolution matters, imagine you are rendering text on a digital billboard with a physical resolution of one pixel per inch (1 PPI). Now imagine you are rendering the same text on a 200 PPI mobile device display. Even if you knew the actual number of device pixels that would be used to render your type (which itself is difficult to do with confidence these days), you would want to treat the two compositions very differently, both in terms of the typeface as well as typographic layout. The billboard type would likely require less space between letters. The letterforms themselves would benefit from narrower proportions, and could endure a higher ratio between thick and thin strokes. The type might even require different colors to optimize contrast at that size. These are all basics of typography and typeface design.

Unfortunately, in the current landscape of media query features, there is no way to know the difference between 16 device pixels on a crude LED billboard and 16 device pixels on a high-density mobile display. Heck, there isn’t even a reliable way to know if your type is 16 device pixels at all, regardless of how large the pixels are!

Pixels still rule, for better or worse

I know what some em-based enthusiasts might be thinking: “But you shouldn’t be specifying type sizes in pixel units to start with! All type sizes should be spec’d abstractly in relation to each other or a base font size!” However, in the current world of web typography, no matter what unit of measure you use to spec your onscreen type sizes—em, rem, px, pt, in, %, vh, or whatever else—at the end of the line, your specification is being mapped to pixels. Even if you leave the base size of your document to the defaults and specify everything else with em, there is still a base size which all other sizes will ultimately refer to, and it is defined in pixels.

This is because, currently, the only unit of measure that can be rendered onscreen by any operating system with absolute confidence is the lowly pixel. Until we have media query features that allow us to spec for situations like:

@media (physical-resolution: 1device-pixels-per-physical-inch) { … }

or:

@media (device-width: 10physical-centimeters) { … }

… any compensation for physical size is based entirely on rough guesses about the devices our content will be presented on.²

It’s a complete farce that the official CSS spec allows so-called “absolute” units of measure like inches, points, and centimeters to be mapped to anything but actual physical units. Ironically, previous versions of CSS treated these things as you would hope and expect, but a change was made “because too much existing content relies on the assumption of 96dpi, and breaking that assumption breaks the content.” Call me idealistic if you will, but I am more of the mind that a spec should be written based on what is best for the future, not to cater to things that were made in the past.³

Getting physical

Any ability to leverage physical variables for web design will require a joint effort by several groups:

  • Device manufacturers will need to provide APIs that can inform the operating system—and, by extension, web browsers and web designers—of the actual physical properties of the hardware being used to present content to the user. Some device APIs are already beginning to show up in the world, but there is a long way to go before functionality and adoption are anywhere near dependable.
  • Standards organizations—the W3C in particular—will need to establish specifications for how to reference physical properties when formatting content. They will need to update (or at least augment) their existing “absolute” units of measure to be more meaningful, so they are more than just multipliers of sizeless pixels.
  • Software manufacturers will need to implement support for new specs relating to physical media features. Browsers are the most obvious software that will need to implement support, but the biggest challenge might be in getting native support for device APIs in operating system software.
  • Type manufacturers and type services will need to provide more diverse ranges of typefaces that have been optimized for a variety of physical properties. Ideally, many of the needed variations could even be provided on the fly using a broader approach to the ideas of font hinting.
  • Web designers and developers, last but not least, will need to build their sites to respond to physical properties, leveraging all variables to the benefit of their users.

Size and resolution are just the tip of the iceberg of physical variables that could be considered when improving web typography. Things like viewing distance, ambient light, display luminance, contrast ratio, black levels, etc., etc., could all be factored in to improve the reading experience. Even the ability to know some variables within the realm of software, like the user’s rendering engine or the presence of subpixel positioning, would go a long way toward helping web typographers design a better reading experience.

In the meantime, I’d love to see more of the players mentioned above start to at least experiment with what’s possible when physical features can be specified, detected, and factored into responsive designs in structured, meaningful, and predictable ways. Until we can do that, we’re all just ordering pizza without knowing exactly what will end up on our plate.


By Dominique Hazaël-Massieux: People used to stare at me and laugh, back in 2005 when W3C launched its Mobile Web Initiative to advocate the importance of the web to the mobile world. Now I am the one smiling much of the time, as I did most recently during the 2013 edition of the Mobile World Congress (MWC) in Barcelona, one of the largest events to focus on mobile devices and networks.

This year W3C had a huge HTML5 logo splashed across its booth to emphasize the impact of the Open Web Platform across industries and devices. But the real adoption story was told by the HTML5 logos prominent at many, many other booths. The web has gained real visibility on mobile, and we should all be smiling because we are all getting closer to a platform for reaching more people on more devices at lower cost.

MWC 2013 also confirmed that HTML5 has broken out of the browser. We are seeing more and more HTML5-based development platforms, such as PhoneGap, Windows 8, Blackberry, and Tizen. Mozilla’s big announcement at MWC 2013 centered on FirefoxOS, Mozilla’s mobile operating system entirely based on web technologies. W3C and Intel partnered to create a T-shirt that says “I See HTML5 Everywhere.” And indeed, I do.

The challenge of mobile

Not only does the web have a big role to play on mobile; mobile also has a key role to play for the web. As more and more of our connected interactions start or end on mobile devices, we must ensure that the web platform adapts to our mobile lives. I believe this is critical for the future of the web.

For many years W3C has designed technology to make the experience of web users on mobile ever more rich, adapted, and integrated. For example, CSS media queries provide the basis for responsive web design. There is already a lot for mobile, and a lot more is coming. To help people follow all the activity, every quarter I publish an overview of web technologies that are most relevant to mobile.

These technologies are the tools designers can rely on to build the user experience they need. But technologies are only a small piece of the puzzle when it comes to making the web user experience work on mobile devices. The number of A List Apart articles about mobile development provides a clear sign that this challenge is driving creativity in the design community. Responsive web design, mobile first, future friendly, and just-in-time interactions are some of the trends that have resonated with me over the years. The creativity is fantastic, but we still want our lives to be easier. Where web technologies do not yet provide the hooks you need to practice your craft, please let us know. Feel free to write me directly: dom@w3.org.

Closing the gap

Another challenge that we, the web community, face on mobile is the amazing energy devoted to native development.

The web has displaced a lot of the native software development on traditional computers; on mobile, the reverse trend has happened. Content that users had enjoyed on the web for years started to migrate to native applications: newspapers, social networking, media sharing, government services, to name a few. And to add insult to injury, a number of these content providers are pushing their users away from their website toward their native application, with obtrusive banners or pop-ups.

It is unclear where the world is going on mobile: some statistics and reports show a strong push toward moving back to the web (e.g., the recent Kendo UI survey), while others argue the opposite. What is clear to me, though, is that we cannot afford to let mobile become a native-entrenched ecosystem.

What has made the web unique and popular in so many hearts is not the technology (some great, some terrible) nor even the ubiquity (since interoperability can reduce it). I believe the much more fundamental importance of the web comes from its structural openness: anyone can publish the content they see fit and anyone can participate in defining the future of the web as a platform.

Native ecosystems on mobile have historically been very closed ecosystems, under the control of single commercial entities. A world where the majority of our information and infrastructure would be trapped inside these ecosystems is not something we should accept lightly. Mind you, I appreciate the innovations spawned by these platforms, but we need to encourage the cycle where innovations become standards, and those standards prime the platform for the next innovations.

Of course the best way to shift the balance to the web is to make the web the best platform for mobile. Achieving this will require ideas and energy from many people, and web developers and designers play a critical role in shaping the next generation of web user experiences. I am leading a focused effort in W3C to assess what we can and should do to make the web more competitive on mobile, and welcome feedback and ideas on what the missing pieces in the puzzle are.

Beyond mobile

I believe a key part in making the web the “king of mobile” is to realize that mobile devices are a means to an end. In our connected world—computers, phones, tablets, TVs, cars, glasses, watches, refrigerators, lightbulbs, sensors and more to come—mobile phones will most likely remain the hub for a while. The only platform that can realistically be made available on all these devices is the web.

We have a unique opportunity to make the Open Web Platform a success. I realize getting it right will not be trivial. Building user experiences that scale from mobile (or watches!) to TV is complex. Building user experiences that adapt to these very different types of interactions will be hard. Matching the needs of users in a growing diversity of contexts will make us cringe. Creating user experiences that abolish the barriers between devices (as I explored some months ago) is guaranteed to create more than a few headaches.

But there is unprecedented momentum to create an open platform for the planet. And that has me smiling a lot.


Imagine you’re at an intersection waiting for your turn to walk across the street. You push the button to call the walk signal, and you take out your phone. You want to accomplish one thing: maybe check your e-mail, add an item to your to-do list, or check Twitter. You have a limited amount of time to accomplish that one thing.

That amount of time is how long users have to finish what they want to do on your site. And it matters.

Adding half a second to a search results page can decrease traffic and ad revenues by 20 percent, according to a Google study. The same article reports Amazon found that every additional 100 milliseconds of load time decreased sales by 1 percent. Users expect pages to load in two seconds—and after three seconds, up to 40 percent will simply leave.

Can you keep up? If you’re designing sites with rich content, lots of dynamic elements, larger JavaScript files, and complex graphics—like so many of us are—the answer might be “no.”

It’s time we make performance optimization a fundamental part of how we design, build, and test every single site we create—for every single device.

Designing for performance

Website performance starts with design. Weigh a design choice’s impact on page speed against its impact on your site’s conversion rate. Do you really need eight different shades of blue? What value does this 1,000px-wide background image add? Will replacing a sprite with an icon font actually add more page weight and slow rendering, or will it be faster than the original image?

Not every design decision will favor performance. I’ve found that a button style that slightly slows page speed can still increase conversions, and it’s worth the small web performance sacrifice.

But sometimes, performance will win. I once had a landing page redesign that added a significant number of images to a page, and I wasn’t sure whether the performance hit would have a negative impact on conversions, so I rolled the redesign out to a small subset of users in an A/B test to see what the impact would be. The new design took twice as long to load, and I immediately saw a high exit rate and lower conversion rate, so we kept the original lightweight design. Being wrong is okay—it’s what gives you a benchmark.

In another experiment, the dyn.com homepage featured a thumbnail image section with 26 images that rotated in and out of 10 slots.


dyn.com homepage.

My teammate at the time put all 26 images into a sprite, which:

  • Increased the total homepage size by 60K with the increased CSS, JavaScript, and image size needed to recreate this effect with the sprite
  • Decreased the number of requests by 21 percent
  • Cut the total homepage load time by a whopping 35 percent

This proves that it’s worth experimenting: We weren’t sure whether or not this would be a page speed success, but we felt it was worth it to learn from the experiment.

Coding for performance

Clean your HTML, and everything else will follow.

Start by renaming non-semantic elements in your HTML. This is probably the toughest step, but once you start thinking about theming in terms of semantics like “nav” or “article” and less in terms of design or grid names, you’ll make significant headway. We often end up with non-semantically named elements because we need more weight in CSS selectors, and instead of cleaning our CSS and adding specificity the right way, we add unnecessary IDs and elements to our HTML.

Then, clean up your CSS. Remove inefficient selectors first. In a study I performed for writegoodcode.com, I found that adding inefficient selectors to a CSS file actually increased page load time by 5.5 percent. More efficient CSS selectors also make future redesigns and style customizations easier, since they are easier to read in your stylesheet and carry semantic meaning. Repurposable, editable code often goes hand-in-hand with good performance. In that case study, I saved 39 percent of the CSS file size by cleaning my CSS files.

Next, focus on curing your HTML of div-itis. Typically the cleaner your markup ends up, the smaller your CSS will be, and the easier redesigning and editing will be in the future. It saves you not just page load time, but development time too.

Last, focus on creating repurposable code, which saves time and results in smaller CSS and HTML files. Less HTML and CSS will be significantly easier to maintain and redesign later, and the smaller page sizes will have a positive impact on page speed.

Optimizing requests

A request happens whenever your browser has to fetch something, such as a file or a DNS record. The cleaner your markup, the fewer requests the browser has to make—and the less time users will spend waiting for their browser to make those round trips.

In addition to clean markup, minimize JavaScript requests by loading scripts only when absolutely necessary. Don’t call a file on every page if you don’t need it on every page. Don’t load a JavaScript file in a responsive design if it’s only needed for larger screens; replace social scripts with simple links instead, for example. You can also load JavaScript asynchronously so that it won’t block any content from rendering.
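For instance, one way to load a script asynchronously (the file path is a placeholder):

// Inject a script element with the async flag so it won't block rendering.
var script = document.createElement('script');
script.src = '/js/social-widgets.js';
script.async = true;
document.head.appendChild(script);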

Though a responsive design typically means more CSS and images (larger page weight), you can still get faster load times from larger page sizes if you cut requests.

Optimizing images

Save as many image requests as you can, too. First, focus on creating sprites. In my writegoodcode.com study, I found that a sprite for the icons in the example cut page load time by 16.6 percent. I like to start cleaning up images by creating one sprite for repeating backgrounds. You may need to create one for vertical repeats and one for horizontal repeats.

Next, create one transparent sprite for no-repeat backgrounds. This will include things like your logo and icons. As you get more advanced, you can also use tools like Grunticon, which takes SVG icon and background images and figures out how best to serve them based on the user’s browser’s capabilities.

After you regenerate your images, run them through an optimizer like ImageOptim. Similarly, retina-sized images can still be made smaller with extensive compression that isn’t noticeable in the end result.

Now see which images you can replace with CSS3 gradients. This will not only make a dent in page load time, but it will also make it infinitely easier to edit the site later, as developers won’t have to find original image files to edit, regenerate, or re-optimize in the future.

Last, look at using Base64 encoding, which allows you to embed an image into your CSS file instead of calling it using a separate URL. It ends up looking like this:

#nav li:after {
  content: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAI0lEQVQIW2P4//8/w8yZM//DMIjPAGPAMIiPWxCIMQQxzAQAoFpF7lGFr24AAAAASUVORK5CYII=);
}

The random letters and numbers add up to a small circle that’s used in many places within dyn.com. Embedding the image allows you to save an image request each time you want to use it in your design. Embedding images using this method will make your CSS file larger, so it’s worth testing the page load time before and after to ensure you’re making improvements.

Measuring performance

Now for the fun part: determining whether your efforts are paying off.

Both Google’s PageSpeed and Yahoo!’s YSlow offer suggestions on how you can improve page load time, including identifying which elements block page rendering and the size of your page’s different components like CSS or HTML.

I also recommend the YSlow extension 3PO, which checks your site for integration with popular third-party scripts like Twitter, Facebook, and Google+. The plugin can give you recommendations on how to further optimize the social scripts on your page to improve page load time.

WebPageTest.org has been my go-to benchmarking tool ever since I first started making improvements based on PageSpeed and YSlow’s suggestions. It gives very detailed information about requests, file size, and timing, and it offers multiple locations and browsers to test in.

Benchmarking can help you troubleshoot as you design. Measuring performance and analyzing the results will help you make both your large- and small-screen designs faster. You can also test and benchmark techniques like conditional loading of images as you get more comfortable developing for performance.

The impact of web performance

Web performance affects your users—and that means it’s everyone’s job to understand it, measure it, and improve it. All of these techniques will lead to better page load time, which significantly improves your site’s user experience.

Happier users mean better conversion rates, whether you’re measuring in revenue, signups, returning visits, or downloads. With a fast page load time, people can use your site and accomplish what they want in a short amount of time—even if it’s just while they’re waiting for a walk signal.


Close your eyes and touch your nose. How did you do it? How did you sense where your hand was, and direct it to the right point? You’re not using sight, hearing, taste, smell, or touch (except right at the end). Instead, you’re relying on proprioception: the sense of your body’s position in space, and the position of various parts of the body in relation to each other.

Proprioception—sometimes regarded as the sixth sense—helps us understand our orientation, coordinate our movements, and make sense of the world around us. It helps us turn space into place, turning an abstract set of dimensions into an environment that we understand and can manipulate accordingly.

Digital space, of course, doesn’t offer physical proprioception, so it falls to designers to provide cues about the user’s position. The most obvious way to do this is with explicit visual components. Throughout the web’s history, we’ve adopted many physical metaphors for these navigational building blocks: breadcrumbs, navigation menus, and homepages have become familiar to millions of users.

Unfortunately, these explicit navigational components are difficult to handle in the responsive era. They don’t fit well on small screens, and are falling out of favor as recent trends demand their deprecation in the interface.

To revitalize these components, enterprising designers have started to explore new small-screen navigation patterns. These are a useful start, but to help users navigate comfortably on small screens, it’s worth looking beyond pure signposting. We can bring the sense of proprioception into the digital world by focusing on the transitions between screens. Here are just a few examples.

Horizontal for hierarchy

By (largely unspoken) Western convention, right is the direction of horizontal progression–just think of the number line or written text. Left is consequently seen as backward, even regressive. It’s no accident that the Latin for left–“sinistra”–found its home in the English language as “sinister.” This historical prejudice may upset my fellow lefties, but it offers a helpful convention for digital product designers.

Horizontal transitions are seen throughout smartphone operating systems. In this example from the iOS Music app, as content moves right to left, the user appears to move right, and thus down the hierarchy. He can then move left to return up the tree. The iOS designers use directional indicators to back up these transitions. Controls that lead down the hierarchy are given right arrow shapes, and those that lead up (or “Back”) point left. Position further reinforces the concept, with right arrows placed on the right, and left arrows on the left.

The design works by offering users a mental origin—the home screen–then using transitions to establish a sense of displacement from that origin. Every step right takes the user farther from home, every step left brings them closer. This effect therefore needs a clear anchor or landmark against which the user can orient. The home screen of an app is usually at the top of the tree: the horizontal hierarchy effect isn’t so successful if the user is thrown straight into the middle of a complex hierarchy.

Perpendicular for modals

Not every action in an otherwise hierarchical structure sits within the hierarchy. Modal actions like warnings, “select account” disambiguation screens, or login flows need different treatment. It’s useful here to use perpendicular transitions that go against the horizontal flow.

Using the y-axis (i.e. vertical transitions) is the most common choice. The Twitter mobile app uses it, for example, for composing new tweets, an activity that breaks out of the timeline/detail view hierarchy.

The modal window slides into view from the bottom of the app. This doesn’t suggest progress within the hierarchy. Instead, it feels disruptive: the screen intrudes into the usual flow. Once the user has completed or dismissed the screen, it disappears from whence it came, and we return to horizontal business as usual.

Flip for configuration

An alternative for non-hierarchical interactions is a flip.

The physical metaphor of the flip gives it a slightly different feel. This isn’t interruption so much as configuration: reaching around the back of the screen to play with the wiring. As such, a flip is ideal for Settings screens, where the options chosen directly affect the initial view. In this clip, the Me screen flips to allow the user to choose their preferred account.

Card swap for entering a new hierarchy

In the card swap transition, the active screen moves aside and then behind another, like a deck of cards being cut.

This transition feels more absolute than the others. Just this simple piece of animation wipes the slate clean and tells the user she’s moved to a new structure, or even a new app. As such, it also feels less reversible. The user can’t simply press Back to undo this switch. The only way to reverse is to repeat the action, bringing the other screen back to the foreground.

These four transitions alone communicate significantly different navigational patterns; yet transitions are frequently overlooked in favor of bulkier on-screen components. Intelligent transitions could make a significant difference to your app’s user experience. If they support and confirm your app’s structure, your navigational model can fall into place more easily. However, if your transitions conflict with on-screen components, you may just stir up a vague feeling that something’s not quite right.

The rise of small screens rewards time spent on the small details. When on-screen territory is at a premium, the gaps between screens become far more interesting.


Like the Roman god Janus (and many a politician), every web application has two faces: Its human face interacts with people, while its machine face interacts with computer systems, often as a result of those human interactions. Showing too much of either face to the wrong audience creates opportunity for error.

When a user interface—intended for human consumption—reflects too much of a system’s internals in its design and language, it’s likely to confuse the people who use it. But at the same time, if data doesn’t conform to a specific structure, it’s likely to confuse the machines that need to use it—so we can’t ignore system requirements, either.

People and machines parse information in fundamentally different ways. We need to find a way to balance the needs of both.

Enter the Robustness Principle

In 1980, computer scientist Jon Postel published an early specification for the Transmission Control Protocol, which remains the fundamental communication mechanism of the internet. In this spec, he gave us the Robustness Principle:

Be conservative in what you do, be liberal in what you accept from others.

Although often applied to low-level technical protocols like TCP, this golden rule of computing has broad application in the field of user experience as well.

To create a positive experience, we need to give applications a human face that’s liberal: empathetic, flexible, and tolerant of any number of actions the user might take. But for a system to be truly robust, its machine face must also take great care with the data it handles—treating user input as malicious by default, and validating the format of everything it sends to downstream systems.

Building a system that embraces these radically different sets of constraints is not easy. At a high level, we might say that a robust web application is one that:

  1. Accepts input from users in a variety of forms, based first on the needs and preferences of humans rather than machines.
  2. Accepts responsibility for translating that human input to meet the requirements of computer systems.
  3. Defines boundaries for what input is reasonable in a given context.
  4. Provides clear feedback to the user, especially when the translated input exceeds the defined boundaries.

Whether it’s a simple form or a sophisticated application, anytime we ask users for input, their expectations are almost certainly different from the computer’s in some way. Our brains are not made of silicon. But thinking in terms of the Robustness Principle can help us bridge the gap between human and machine in a wide range of circumstances.

Numbers

Humans understand the terms “one,” “1,” and “1.00” to be roughly equivalent. They are very different to a computer, however. In most programming languages, each is a different type of data with unique characteristics. Trying to perform math on the wrong kind of data could lead to unexpected results. So if a web application needs the user to enter a number, its developers want to be sure that input meets the computer’s definition. Our users don’t care about such subtleties, but they can easily bubble up into our user interfaces.

When you buy something over the phone, the person taking your order never has to say, “Please give me your credit card number using only digits, with no spaces or dashes.” She is not confused if you pause while speaking or include a few “umms.” She knows a number when she hears one. But such prompts commonly litter web forms, instructing users to cater to the computer’s needs. Wouldn’t it be nice if the computer could cater to the person’s needs instead?

It often can, if we put the Robustness Principle to work to help our application take a variety of user input and turn it into something that meets the needs of a machine.

For example, we could do this right at the interface level by modifying fields to pre-process the user’s input, providing immediate feedback to the user about what’s happening. Consider an input field that’s looking for a currency value:

Form input requesting a currency value

HTML5 introduces some new attributes for the input element, including a type of number and a pattern attribute, intended to give developers a way to define the expected format of information. Unfortunately, browser support for these attributes remains limited and inconsistent. But a bit of JavaScript can do the same work. For example:

<input onkeyup="value=value.replace(/[^0-9\.]/g,'')" />
<input onblur="if(value.match(/[^0-9\.]/)) raise_alert(this)" />

The first input simply blocks any characters that are not digits or decimal points from being entered by the user. The second triggers a notification instead.

We can make these simple examples far more sophisticated, but such techniques still place the computer’s rules in the user’s way. An alternative might be to silently accept anything the user chooses to provide, and then use the same regular expressions to process it on the server into a decimal value. Following guideline number three, the application would perform a sanity check on the result and report an error if a user entered something incomprehensible or out of the expected range.
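A rough sketch of that liberal, server-side approach (the boundaries and messages are illustrative):

// Strip everything except digits and decimal points, then sanity-check the result.
function parseCurrency(input) {
  var cleaned = String(input).replace(/[^0-9.]/g, '');
  var value = parseFloat(cleaned);
  if (isNaN(value) || value <= 0 || value > 10000) {
    return { ok: false, message: 'Please enter an amount such as 10 or $10.00.' };
  }
  return { ok: true, value: value };
}

parseCurrency('$10.00');  // { ok: true, value: 10 }
parseCurrency('ten');     // { ok: false, message: 'Please enter an amount ...' }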

Our application’s liberal human face will assume that these events are the exception: If we’ve designed and labeled our interfaces well, most people will provide reasonable input most of the time. Although precisely what people enter (“$10.00” or “10”) may vary, the computer can easily process the majority of those entries to derive the decimal value it needs, whether inline or server-side. But its cautious, machine-oriented face will check that assumption before it takes any action. If the transaction is important, like when a user enters the amount of a donation, the system will need to provide clear feedback and ask for confirmation before proceeding, even if the value falls within the boundaries of normalcy. Otherwise, aggressive reduction of text to a number could result in an unexpected—and potentially very problematic—result for our user:

Overly aggressive reduction of text input to a number leads to unexpected results

Dates

To a computer, dates and times are just a special case of numbers. In UNIX-based systems, for example, time is often represented as the number of seconds that have elapsed since January 1, 1970.
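In JavaScript, for instance, that machine-oriented representation is one line away:

// Whole seconds elapsed since January 1, 1970 (the UNIX epoch).
var secondsSinceEpoch = Math.floor(Date.now() / 1000);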

For a person, however, context is key to interpreting dates. When Alice asks, “Can we meet on Thursday?” Bob can safely assume that she means the next Thursday on the calendar, and he certainly doesn’t have to ask if she means Thursday of last week. Interface designers should attempt to get as close to this human experience as possible by considering the context in which a date is required.

We can do that by revisiting some typical methods of requesting a date from users:

  • A text input, often with specific formatting requirements (MM/DD/YYYY, for example)
  • A miniature calendar widget, arranging dates in a month-by-month grid

These patterns are not mutually exclusive, and a robust application might offer either or both, depending on the context.

There are cases where the calendar widget may be very helpful, such as identifying a future date that’s not known (choosing the second Tuesday of next February). But much of the time, a text input probably offers the fastest path to entering a known date, especially if it’s in the near future. If Bob wants to make a note about Thursday’s meeting, it seems more efficient for him to type the word “Thursday” or even the abbreviation “Thu” than to invoke a calendar and guide his mouse (or worse, his fingertip on a touchscreen) to the appropriate tiny box.

But when we impose overly restrictive formatting requirements on the text, we undermine that advantage—if Bob has to figure out the correct numeric date, and type it in a very specific way, he might well need the calendar after all. Or if an application requests Alice’s birthdate in MM/DD/YYYY format, why should it trigger an error if she types 1/1/1970, omitting the leading zeroes? In her mind, it’s an easily comprehensible date.

An application embracing the Robustness Principle would accept anything from the user that resembles a date, again providing feedback to confirm her entry, but only reporting it as a problem if the interpretation fails or falls out of bounds. A number of software libraries exist to help computers translate human descriptions of dates like “tomorrow,” “next Friday,” or “11 April” into their structured, machine-oriented equivalents. Although many are quite sophisticated, they do have limitations—so when using them, it’s also helpful to provide users with examples of the most reliable patterns, even though the system can accept other forms of input.
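As a toy illustration of the idea (parseHumanDate is hypothetical; real libraries handle far more patterns, languages, and edge cases):

// Translate a handful of human date phrases into Date objects.
function parseHumanDate(text, now) {
  now = now || new Date();
  var days = ['sunday', 'monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday'];
  var t = text.trim().toLowerCase();
  if (t === 'today') return now;
  if (t === 'tomorrow') return new Date(now.getTime() + 86400000);
  var idx = days.indexOf(t);                // "thursday" means the next Thursday
  if (idx !== -1) {
    var ahead = (idx - now.getDay() + 7) % 7 || 7;
    return new Date(now.getTime() + ahead * 86400000);
  }
  var parsed = new Date(text);              // fall back to the native parser
  return isNaN(parsed) ? null : parsed;
}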

Addresses

Perhaps more often than any other type of input, address fields tend to be based on database design rather than the convenience of human users. Consider this common layout:

Typical set of inputs for capturing an address

This set of fields may cover the majority of cases for U.S. addresses, but it doesn’t begin to scratch the surface for international users. And even in the U.S., there are legitimate addresses it won’t accommodate well.

An application that wants to accept human input liberally might take the daring approach of using a single textarea to capture the address, allowing the user to structure it just as he or she would when composing a letter. And if the address will only ever be used in its entirety, storing it as a single block of text may be all that’s required. It’s worth asking what level of detail is truly needed to make use of the data.

Often we have a clear business need to store the information in discrete fields, however. There are many web-based and local services that can take a variety of address formats and standardize them, whether they were collected through a single input or a minimal set of structured elements.

Consider the following address:

Avenue Appia 20
1211 Genève 27
SUISSE

The Google Geocoding API, for example, might translate it to something like the following, with a high level of granularity for mapping applications:

"address_components" : [
  {
     "long_name" : "20",
     "short_name" : "20",
     "types" : [ "street_number" ]
  },
  {
     "long_name" : "Avenue Appia",
     "short_name" : "Avenue Appia",
     "types" : [ "route" ]
  },
  {
     "long_name" : "Geneva",
     "short_name" : "Geneva",
     "types" : [ "locality", "political" ]
  },
  {
     "long_name" : "Genève",
     "short_name" : "Genève",
     "types" : [ "administrative_area_level_2", "political" ]
  },
  {
     "long_name" : "Geneva",
     "short_name" : "GE",
     "types" : [ "administrative_area_level_1", "political" ]
  },
  {
     "long_name" : "Switzerland",
     "short_name" : "CH",
     "types" : [ "country", "political" ]
  },
  {
     "long_name" : "1202",
     "short_name" : "1202",
     "types" : [ "postal_code" ]
  }
]
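A sketch of how an application might request that standardization for a free-form address (the endpoint follows Google’s Geocoding API; the key is a placeholder and error handling is omitted):

// Send the user's free-form address to a geocoding service and read back
// the structured components shown above.
var address = 'Avenue Appia 20, 1211 Genève 27, SUISSE';
var url = 'https://maps.googleapis.com/maps/api/geocode/json' +
          '?address=' + encodeURIComponent(address) + '&key=YOUR_API_KEY';

fetch(url)
  .then(function (res) { return res.json(); })
  .then(function (data) {
    var components = data.results[0].address_components;
    // Map the pieces back onto the discrete fields the database expects.
  });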

The details (and license terms) of such standardization systems will vary and may not be appropriate for all applications. Complex addresses may be a problem, and we’ll need to give the application an alternate way to handle them. It will be more work. But to achieve the best user experience, it should be the application’s responsibility to first try to make sense of reasonable input. Users aren’t likely to care whether a CRM database wants to hold their suite number separately from the street name.

The exception or the rule

Parsing human language into structured data won’t always work. Under guideline number four, a robust system will detect and handle edge cases gracefully and respectfully, while working to minimize their occurrence. This long tail of user experience shouldn’t wag the proverbial dog. In other words, if we can create an interface that works flawlessly in 95 percent of cases, reducing the time to complete tasks and showing a level of empathy that surpasses user expectations, it’s probably worth the effort it takes to build an extra feedback loop to handle the remaining five percent.

Think again about the process of placing an order over the phone, speaking to a human being. If she doesn’t understand something you say, she may ask you to clarify. Even when she does understand, she may read the details back to you to confirm. Those interactions are normal and usually courteous. In fact, they reassure us all that the end result of the interaction will be what we expect.

She is not, however, likely to provide you with a set of rigid instructions as soon as she answers the phone, and then berate you for failing to meet some of them. And yet web applications create the equivalent interaction all the time (sometimes skipping past the instructions and going directly to the berating).

For most developers, system integrity is an understandably high priority. Better structure in user-supplied data means that we can handle it more reliably. We want reliable systems, so we become advocates for the machine’s needs. When input fails to pass validation, we tend to view it as a failure of the user—an error, an attempt to feed bad data into our carefully designed application.

But whether or not our job titles include the phrase “user experience,” we must advocate at least as much for the people who use our software as we do for computer systems. Whatever the problem a web application is solving, ultimately it was created to benefit a human being. Everyone who has a hand in building an application influences the experience, so improving it is everyone’s job. Thinking in terms of robustness can help us balance the needs of both people and computers.

Postel’s Law has proven its worth by running the internet for more than three decades. May we all hold our software—and the experiences it creates—to such a high standard.


For fifteen years, A List Apart has published long-form magazine articles written by the whip-smart web community (AKA you). But we don’t always have grand, 2,000-word arguments to make. Sometimes we just want to tell you about a new technique, share an article, or post a quick note.

And that’s what you’ll see here: short posts and interesting links from our staff and contributors.

You got blog in my RSS

A few of you noted that your RSS readers have been heaving under the weight of the new ALA. Don’t worry! We had some RSS hiccups during launch. Expect a blog post or two a day, more or less, from here on out.

Starting now, you can also select from two RSS subscription options.

A word from our sponsors

If this redesign says anything, it’s that we care about reading—and we don’t want advertising (or anything else) to get in the way.

That won’t change.

But we do need to cover our costs—mostly paying our staff and contributors a small honorarium for their countless hours of work—so we’re going to start including a sponsor on our RSS feed. (If you read Daring Fireball, it’ll look a lot like that.) Much like The Deck ads we already run on ALA, we’ll only accept sponsors that offer something we think is relevant, and it’ll never be intrusive.

Have a blog post idea?

The best way to tell us is to tweet it @alistapart.

Please only e-mail us about feature article submissions—we can only sift through our inbox a couple times a week, and we don’t want your awesome idea to get lost in a pile of press releases.

Also, please, please don’t send us any more press releases.


User-centered design has served the digital community well. So well, in fact, that I’m worried its dominance may actually be limiting our field.

The terms “user experience design” (UX) and “user-centered design” (UCD) are often used interchangeably. But there’s an important distinction.

UX design is the discipline: what we do. Precise definition is elusive, but most attempts focus on experience as an explicit design objective.

User-centered design is a process: how we do it. Again the specifics vary, but usually form shades of the same hue:

  • Research. Immerse yourself in your users’ worlds to understand what they do and why they do it.
  • Sketch ideas that address these learned needs.
  • Prototype the most promising ideas to evaluate them more accurately.
  • Iterate through testing, repeating steps as required.

Other design processes

UCD is the dominant design approach within UX, so pervasive that some UX designers regard it as the Platonic ideal of design. Deviation from the UCD faith is even met with derision. A naive recruiter whose job specs aren’t explicit about direct user contact soon learns not to reoffend.

But other design processes are available. Jared Spool’s article 5 Design Decision Styles explores alternatives to UCD, including:

  • Self design, aka “scratching your own itch.” The designer acts as a surrogate for the audience. It’s convenient and quick, but clearly only reliable in narrow circumstances.
  • Genius design. There is no first-hand research phase; to anticipate user behavior, the designer draws upon stockpiled experience, imaginative analogy, and psychological fundamentals.
  • Activity-focused design. Here, the designer addresses users’ primary tasks rather than any underlying needs. Tasks are derived a priori, from a logical interpretation of the domain, rather than from research.1

It seems arbitrary to regard these alternative design processes as inferior substitutes. Surely other modes can fulfill the broad UX mandate of creating experiences?

Weaknesses of UCD

UCD’s ascendancy deserves historical context. Its success came largely as an antidote to what preceded: the Wild West of the early web, dominated by Hey-This-Looks-Cool hackery. UCD offered rigor (or at least the perception of rigor; see Scientism below) that helped the immature web refocus on its audience. But that phase is long past, and the more experience I earn, the more flaws I see in UCD’s finery.

Heaviness

UCD simply takes longer than genius or self design. Clients typically identify research as the culprit, meaning the research phase is usually targeted when time is short. The UX industry has countered this variously through client education, seeking shortcuts, or by slipping research in without formal consent. But—whisper it quietly—some design research is wasted effort. For research to be valuable, it must:

  • be free from sampling or cognitive biases;
  • address issues that are central to the product;
  • offer genuinely new insight; and
  • be used to forge new ideas, not to validate predetermined decisions.

In these circumstances UCD is unparalleled, enabling breakthroughs other modes can’t. But I think UCD advocates overstate how often these planets align. I argue that genius design and iteration will often achieve better results in the same time.

Someone with experience not only as a designer but also as an attentive user has built up an unconscious repertoire of patterns and approaches that suit various contexts. As this library grows, it frees the designer from the need to research every problem.

The UX industry appears to acknowledge the relevance of genius design by its adoption of the expert review—a tool that epitomizes the approach—but often feels it has to prop this review up with user validation. It’s hard to escape the thought that the primary function of this redundancy is to retain the appearance of neutrality.

Negation of style

Among the UX community’s favorite quotes of late:

“[Good design] dissolves in behavior.” —Naoto Fukasawa
“The best interface is no interface.” —various
“Great design is invisible.” —various

At first glance, these are elegant statements of aesthetic intent: the user should never notice the designer’s influence. This “disappearing designer” motif holds self-sacrificial appeal, and for many interactions it’s great advice. I don’t want my tax forms to bear any trademark flourishes. However, when we extend this line of thought to its logical conclusions, these quotes start to look like mere slogans.

By negating the idea of a designer’s influence, we also negate the idea of style within the UX discipline. We’re saying that, done properly, it should be impossible to tell one UX designer’s work from another’s. There should be no signature elements, no philosophical movements, no overarching tenets except that of transparency.

The commoditization of designers that this idea suggests is troubling. Moreover, style is crucial for a creative discipline’s evolution. The best writers and architects—whose work, just like UX design, has function and engenders experience—have unmistakable styles. Throughout history, daring work from iconoclasts has sparked entire movements, and thus transformed creative practice. The transition between the Neoclassical and Modernist architectural eras, for example, wasn’t simply a case of replacing Doric columns with perpendicular glass. It was a total reframing of architecture and its values. Modernity usurped antiquity.

Is our form of functional art any different? In a system that deprecates style, is there room for a designer to pioneer entirely new approaches?2 If not, are we happy with the resultant ideological homogeneity?

Of course our designs must put users first. But there is never just a single way to meet user goals. Instead of trying to deprecate style, we should embrace it as a way to drive our practice forward and lend personality to the things we make. In a marketplace of bewildering clutter, products with a damn opinion are by far the most interesting.

Scientism

Given its academic influences, it’s not surprising that UCD has been sold as a science. Empiricism runs through its discourse, to the unfortunate extent that the UX industry often oversells the certainty it can offer.

Scientism—akin to Colbert’s “truthiness”—is the veneer of science where little scientific validity exists. While UCD is methodical, it is manifestly not scientific. There can never be a universal truth to design. Solutions applied in one context may fail magnificently in another, and the few governing principles (Fitts’s Law, the Gestalt principles, and affordance, say) give at best partial guidance. Some supposed laws, such as the “magical number 7±2,” persist in ill-informed fringes of UX, despite being largely rebutted.

While researchers and designers can learn plenty from the scientific method, design simply does not yield to scientific analysis in the way its scientistic proponents wish.

To treat design as a science is to retreat to the illusory safety of numbers, where designers are seen mostly as a way to skew the odds in your favor. This can start a race to the bottom as design gets less and less leeway. Weak leaders overtest in lieu of trusting designers to make decisions: it’s just a small step from there to the infamous forty-one shades of blue.

Imbalance

Finally, I’m concerned about the mindset that UCD can instill in its practitioners.

It’s unsurprising that a user-centered process can skew inexperienced designers’ loyalties away from business priorities. Some claim that this serves as a counterweight to the business-first leanings of other employees. The argument strikes me as infantile. When a designer adopts simplistic, reductive arguments that ignore business reality, it undermines him. It limits his potential influence. Only the well-rounded designer who can fight for what’s right while accommodating business reality will be seen as a true leader.

What next?

I don’t expect UCD’s pre-eminence to change. Nor do I think it necessarily should. But a design community is most healthy when it shares a respectful variety of opinions. I don’t see that in the UX industry today, and I hope we can begin to appreciate the value of alternative design approaches.

The designers who will stand out in future will be those who are familiar with many modes of design. These designers may have a favorite, of course—and UCD is an excellent candidate—but they also have the versatility to draw on other processes when suitable. Perhaps they will even pioneer new approaches to add to our toolkits.
