
For ideal typography, web designers need to know as much as possible about each user’s reading environment. That may seem obvious, but the act of specifying web typography is currently like ordering slices of pizza without knowing how large the slices are or what toppings they are covered with.

If someone asked me how many slices of pizza I wanted for lunch, I would probably say it depends on how large the slices are. Then—even if they told me that each slice was one eighth of a whole pie, or that they themselves were ordering two slices, or even that the slices were coming from Joe’s Pizza—any answer I might give would still be based on relative knowledge and inexact assumptions.

Such is the current situation with the physical presentation of responsive typography on the web. Outside the realm of software, the information at a designer’s disposal is virtually nonexistent: almost nothing about the physical presentation of content is available to inform the design. Today’s media query features can relay only a fragmented view of the content’s actual presentation, and the related CSS terms are confusing if not downright misleading.

The immeasurable pachyderm

Among all the physical qualities of web typography, the elephant in the room is the issue of size. I’m not talking about em or rem or “reference pixels” ¹ or even device pixels. I’m talking about real, actual, physical, bona fide, measurable size!

It’s ridiculous that we can send robots to Mars yet it’s still virtually impossible to render a glyph on a web page and say with confidence: “If you measure this glyph on your screen with a ruler, it will be exactly 10 millimeters wide.” Although actual physical size isn’t always the most important factor in web design, in some cases it is critical. For example, consider content for partially-sighted or low-vision readers: the ability to tweak designs according to physical sizes would enable designers to make conscious design decisions with much more sensitivity to how the type is actually being seen. And even where physical sizing is secondary to relative sizing, why shouldn’t we nevertheless be able to factor in physical size when establishing the relationships between different elements?

Physical considerations ≠ print design

I don’t believe web typography should be a screen-based imitation of print typography. One of the greatest benefits of web typography, and web design in general, is that it is flexible, adaptable, fluidly adjustable, without being locked into any one specific configuration. However(!), that doesn’t mean web designers should be forced to design without any means to address the issues of physical presentation. On the contrary, responsive design will not reach its full potential until it allows the ability to respond to the very important physical variables of digital media.

Please pardon the cliché, but when it comes to typography, on screens or otherwise, size matters. Physical size affects optical issues that change how the eye and brain process typographic images. Not surprisingly, typographers and typeface designers have been compensating for optical size-related issues as far back as Gutenberg.

You can’t expect a paragraph of type with the same relative line-height, column width, letter-spacing, and glyph proportions to function just as well on two different displays that have the same number of pixels but completely different physical sizes. It’s great that designers can adjust proportions between typographic elements if the canvas varies in relative size, but any such compensation is still based on guesswork and assumptions about the physical size of that canvas. When people disagree about the size or spacing of type on a website, there’s a very good chance that their opinions are based on completely different physical manifestations of the same content, even if their software and settings are identical.
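Today, the best a stylesheet can do is triangulate from pixel density and viewport width, neither of which actually reveals physical size. A minimal sketch of that guesswork, with breakpoint values that are nothing more than assumptions:

@media (max-width: 30em) and (min-resolution: 2dppx) {
  /* Probably a small, dense screen held close to the eye, but
     only probably: the query says nothing about how large those
     device pixels physically are. */
  p {
    line-height: 1.4;
    letter-spacing: 0.01em;
  }
}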

Resolute resolution, absolute absolution

One of the most crucial factors in the size equation is resolution. And when I say resolution, I don’t just mean “how many pixels is this?”, or even “how many device pixels is this?”, but also “how large are these pixels?”

This is very different from the W3C’s “resolution” media feature in the current draft of the Media Queries Level 4 spec. You will note that the spec refers to resolution in terms of “CSS ‘inches’”—the quotes around “inches” are theirs, implying that they are not actually inches at all.
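For reference, the feature as currently specified looks like this, where dpi counts dots per CSS “inch” of 96 CSS pixels rather than per physical inch:

@media (min-resolution: 192dpi) {
  /* “high resolution” in CSS terms: two device dots per CSS
     pixel, regardless of how large those dots really are */
}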

For an example of why physical resolution matters, imagine you are rendering text on a digital billboard with a physical resolution of one pixel per inch (1 PPI). Now imagine you are rendering the same text on a 200 PPI mobile device display. Even if you knew the actual number of device pixels that would be used to render your type (which itself is difficult to do with confidence these days), you would want to treat the two compositions very differently, both in the choice of typeface and in the typographic layout. The billboard type would likely require less space between letters. The letterforms themselves would benefit from narrower proportions, and could endure a higher ratio between thick and thin strokes. The type might even require different colors to optimize contrast at that size. These are all basics of typography and typeface design.

Unfortunately, in the current landscape of media query features, there is no way to know the difference between 16 device pixels on a crude LED billboard and 16 device pixels on a high-density mobile display. Heck, there isn’t even a reliable way to know if your type is 16 device pixels at all, regardless of how large the pixels are!

Pixels still rule, for better or worse

I know what some em-based enthusiasts might be thinking: “But you shouldn’t be specifying type sizes in pixel units to start with! All type sizes should be spec’d abstractly in relation to each other or a base font size!” However, in the current world of web typography, no matter what unit of measure you use to spec your onscreen type sizes—em, rem, px, pt, in, %, vh, or whatever else—at the end of the line, your specification is being mapped to pixels. Even if you leave the base size of your document to the defaults and specify everything else with em, there is still a base size which all other sizes will ultimately refer to, and it is defined in pixels.
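Here is a minimal sketch of that chain, assuming the typical browser default root size of 16 pixels:

html { font-size: 100%; }  /* resolves to the browser default, usually 16px */
body { font-size: 1em; }   /* 1 × root = 16px */
h1   { font-size: 2rem; }  /* 2 × root = 32px; still pixels in the end */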

This is because, currently, the only unit of measure that can be rendered onscreen by any operating system with absolute confidence is the lowly pixel. Until we have media query features that allow us to spec for situations like:

@media (physical-resolution: 1device-pixels-per-physical-inch) { … }

or:

@media (device-width: 10physical-centimeters) { … }

… any compensation for physical size is based entirely on rough guesses about the devices our content will be presented on.²
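To be clear, the syntax above is purely hypothetical; no browser understands physical units like these today. But if it existed, a physically aware stylesheet might read like this:

@media (physical-resolution: 1device-pixels-per-physical-inch) {
  /* crude billboard pixels: open up the letter spacing */
  h1 { letter-spacing: 0.05em; }
}

@media (device-width: 10physical-centimeters) {
  /* a palm-sized screen: favor larger relative type */
  html { font-size: 125%; }
}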

It’s absurd that the official CSS spec allows so-called “absolute” units of measure like inches, points, and centimeters to be mapped to anything but actual physical units. Ironically, previous versions of CSS treated these things as you would hope and expect, but a change was made “because too much existing content relies on the assumption of 96dpi, and breaking that assumption breaks the content.” Call me idealistic if you will, but I am more of the mind that a spec should be written based on what is best for the future, not to cater to things that were made in the past.³
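As things stand, those “absolute” units are defined as fixed multiples of the CSS pixel: 1in is exactly 96px, 1pt is 1/72 of that “inch,” and 1cm is 96px ÷ 2.54. So a rule like the following promises nothing about the physical world:

.ruler { width: 1in; }  /* always 96 CSS pixels, whatever their actual size */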

Getting physical

Any ability to leverage physical variables for web design will require a joint effort by several groups:

  • Device manufacturers will need to provide APIs that can inform the operating system—and, by extension, web browsers and web designers—of the actual physical properties of the hardware being used to present content to the user. Some device APIs are already beginning to show up in the world, but there is a long way to go before functionality and adoption are anywhere near dependable.
  • Standards organizations—the W3C in particular—will need to establish specifications for how to reference physical properties when formatting content. They will need to update (or at least augment) their existing “absolute” units of measure to be more meaningful, so they are more than just multipliers of sizeless pixels.
  • Software manufacturers will need to implement support for new specs relating to physical media features. Browsers are the most obvious software that will need to implement support, but the biggest challenge might be in getting native support for device APIs in operating system software.
  • Type manufacturers and type services will need to provide more diverse ranges of typefaces that have been optimized for a variety of physical properties. Ideally, many of the needed variations could even be provided on the fly using a broader approach to the ideas of font hinting.
  • Web designers and developers, last but not least, will need to build their sites to respond to physical properties, leveraging all variables to the benefit of their users.

Size and resolution are just the tip of the iceberg of physical variables that could be considered when improving web typography. Things like viewing distance, ambient light, display luminance, contrast ratio, black levels, etc., etc., could all be factored in to improve the reading experience. Even the ability to know some variables within the realm of software, like the user’s rendering engine or the presence of subpixel positioning, would go a long way toward helping web typographers design a better reading experience.

In the meantime, I’d love to see more of the players mentioned above start to at least experiment with what’s possible when physical features can be specified, detected, and factored into responsive designs in structured, meaningful, and predictable ways. Until we can do that, we’re all just ordering pizza without knowing exactly what will end up on our plate.


Grid systems are a key component of graphic design, but they’ve always been designed for canvases with fixed dimensions. Until now. Today you’re designing for a medium that has no fixed dimensions, a medium that can and will shape-shift to better suit its environment—a medium capable of displaying a single layout on a smartphone, a billboard in Times Square, and everything in between. You’re designing for an infinite canvas—and for that, you need an infinite grid system. Discover techniques and guidelines that can help bring structure to your content whatever the screen size.


If I had to make a list of the top 10 things I've done in my life that I regret, "writing a book" would definitely be on it. I took on the book project mostly because it was an opportunity to work with a few friends whose company I enjoy. I had no illusions going in about the rapidly diminishing value of technical books in an era of pervasive high speed Internet access, and the book writing process only reinforced those feelings.

In short, do not write a book. You'll put in mountains of effort for precious little reward, tangible or intangible. In the end, all you will have to show for it is an out-of-print dead tree tombstone. The only people who will be impressed by that are the clueless and the irrelevant.

As I see it, for the kind of technical content we're talking about, the online world of bits completely trumps the offline world of atoms:

  • It's forever searchable.
  • You, not your publisher, will own it.
  • It's instantly available to anyone, anywhere in the world.
  • It can be cut and pasted; it can be downloaded; it can even be interactive.
  • It can potentially generate ad revenue for you in perpetuity.

And here's the best part: you can always opt to create a print version of your online content, and instantly get the best of both worlds. But it only makes sense in that order. Writing a book may seem like a worthy goal, but your time will be better spent channeling the massive effort of a book into creating content online. Every weakness I listed above completely melts away if you redirect your effort away from dead trees and spend it on growing a living, breathing website presence online.

A few weeks ago, Hyperink approached me with a concept of packaging the more popular entries on Coding Horror, its "greatest hits" if you will, into an eBook. They seemed to have a good track record doing this with other established bloggers, and I figured it was time to finally practice what I've been preaching all these years. So you can now download Effective Programming: More Than Writing Code for an introductory price of $2.99. It's available in Kindle, iPad, Nook, and PDF formats.

Effective Programming: More Than Writing Code (Jeff Atwood)

I've written about the ongoing tension between bits and atoms recently, and I want to be clear: I am a fan of books. I'm just not necessarily a fan of writing them. I remain deeply cynical about current book publishing models, which feel fundamentally broken to me. No matter the price of the book, outside of J.K. Rowling, you're basically buying the author a drink.

As the author, you can expect to make about a dollar on every copy that sells. The publisher makes several times that, so they make a nice profit with as few as, say, five thousand copies sold. Books that sell ten or fifteen thousand are rare, and considered strong sellers. So let's say you strike gold. After working on your book for a year or more, are you going to be happy with a payday of ten to fifteen grand?

Incidentally, don't expect your royalty check right away. The publisher gets paid first, by the bookstores, and the publisher may then hold on to your money for several months before they part with any of it. Yes, this is legal: it's in the publisher's contract. Not getting paid may be a bummer for you, but it's a great deal for the publisher, since they make interest on the float (all the money they owe to their authors) - which is another profit stream. They'll claim one reason for the delay is the sheer administrative challenge of cutting a check within three months (so many authors to keep track of! so many payments!)... a less ridiculous reason is that they have to wait to see whether bookstores are going to return unsold copies of your book for a full refund.

Here's one real world example. John Resig sold 4,128 copies of Pro JavaScript, for which he earned a grand total of $1.87 per book once you factor in his advance; that works out to roughly $7,700 for the entire run. This is a book which still sells for $29.54 on Amazon new.

[Image: John Resig's royalty check]

Tellingly, John's second book seems permanently unfinished. It's been listed as "in progress" since 2008. Can't say I blame him. (Update: John explains.)

When I buy books, I want most of that money to go to the author, not the publishing middlemen. I'd like to see a world where books are distributed electronically for very little cost, and almost all the profits go directly to the author. I'm not optimistic this will happen any time soon.

I admire people willing to write books, but I honestly think you have to be a little bit crazy to sit down and pound out an entire book these days. I believe smaller units of work are more realistic for most folks. I had an epic email discussion with Scott Meyers about the merits of technical book publishing versus blogging in 2008, and I don't think either of us budged from our initial positions. But he did launch a blog to document some of his thoughts on the matter, which ended with this post:

My longer-term goal was to engage in a dialogue with people interested in the production of fast software systems such that I could do a better job with the content of [my upcoming book]. Doing that, however, requires that I write up reasonable initial blog posts to spur discussion, and I've found that this is not something I enjoy. To be honest, I view it as overhead. Given a choice between doing background research to learn more about a topic (typically reading something, but possibly also viewing a technical presentation, listening to a technical podcast, or exchanging email with a technical expert) or writing up a blog entry to open discussion, I find myself almost invariably doing the research. One reason for this is that I feel obliged to have done some research before I post, anyway, and I typically find that once I'm done with the research, writing something up as a standalone blog entry is an enterprise that consumes more time than I'm willing to give it. It's typically easier to write the result up in the form of a technical presentation, then give the presentation and get feedback that way.

Overhead? I find this attitude perplexing; the research step is indeed critical, but no less important than writing up your results as a coherent blog entry. If you can't explain the results of your research to others, in writing, in a way they can understand, you don't understand it. And if you aren't willing to publish your research in the form of a simple web page that anyone in the world can visit and potentially learn from, why did you bother doing that research in the first place? Are you really maximizing the value of your keystrokes? More selfishly, you should always finish by writing up your results purely for your own self-improvement, if nothing else. As Steve Yegge once said: "I have many of my best ideas and insights while blogging." Then you can take all that published writing, fold in feedback and comments from the community, add some editorial embellishment on top, and voilà – you have a great book.

Of course, there's no getting around the fact that writing is just plain hard. Seth Godin's advice for authors still stands:

Lower your expectations. The happiest authors are the ones that don't expect much.

Which, I think, is also good life advice in general. Maybe the easiest way to lower your expectations as an author is to attempt to write one or two blog entries a week, keep going as long as you can, and see where that takes you.




Editor’s Note: TechCrunch columnist Semil Shah currently works at Votizen and is based in Palo Alto. You can follow him on Twitter @semil.

“In the Studio” opens its doors this week to one of Silicon Valley’s most quietly active venture capitalists who, after years working in technology operations for major networking companies, a stint with an Asian telecom giant, and nearly a decade investing in mobile, gaming, digital media, and networking companies, is paying particular attention to the implications of big data and the potential opportunities they create.

For the past decade, Ping Li has been investing across a broad range of technology companies with Accel Partners, where he is a general partner. Since its defining Series A investment in Facebook, the firm has been on a roll, opening offices in New York City and expanding its footprint overseas, all while maintaining its anchor in the middle of Palo Alto’s University Avenue. And, over the past few years, Accel has also developed an interest in “big data.”

The term “big data” is thrown around often in conversation or at tech conferences, but despite the generalizations and hype, significant opportunities exist for entrepreneurs and investors alike. Last year, I attempted to analyze how big data impacted the consumer web and concluded that while opportunities were abundant, very few were in a position to capitalize on them given the scarcity of talent in these specific areas of the consumer web.

Li and his partners at Accel are certainly looking at big data as it applies to consumer products — the massive amounts of unstructured social data we are all generating through social media and applications, waiting to be harvested. On the enterprise side of things, however, Li believes big data is on the verge of going mainstream, where datasets and analytical tools will soon be available to everyone, igniting new waves of innovation that could disrupt major public companies from the platform all the way to the application layer.

In this conversation, Li shares his views on the big data landscape and also offers subtle advice to potential founders looking into the space. Having had the benefit of seeing many big data technologies and applications over the past few years, he has developed a keen sense of the minefields founders need to look out for when creating these technologies. To take things a step further, Li and his partners at Accel launched a $100M Big Data Fund, invested in creating an ecosystem of academics, technologists, and thought-leaders, and are hosting a private conference at Stanford on May 9 on this topic (technologists working on big data who would like to attend can contact Accel directly through the conference site).
