web users

Pew Research Center recently conducted a survey of 792 web users and found that the urge for privacy is more common than it seems. A full 86 percent of respondents had covered their digital tracks in some way, whether with encryption software or simply by using a browser's incognito mode, although only 14 percent went as far as using Tor or VPN proxy servers to cover their tracks. More telling, a full 68 percent of respondents said current laws were not doing a good enough job of protecting privacy online, suggesting a growing base for new legislation. As one study author told The New York Times, "it's not just a small coterie of hackers. Almost everyone has taken some action to avoid surveillance."

Original author: Jon Brodkin


Can Google's QUIC be faster than Mega Man's nemesis, Quick Man?

Josh Miller

Google, as is its wont, is always trying to make the World Wide Web go faster. To that end, Google in 2009 unveiled SPDY, a networking protocol that reduces latency and is now being built into HTTP 2.0. SPDY is now supported by Chrome, Firefox, Opera, and the upcoming Internet Explorer 11.

But SPDY isn't enough. Yesterday, Google released a boatload of information about its next protocol, one that could reshape how the Web routes traffic. QUIC—standing for Quick UDP Internet Connections—was created to reduce the number of round trips data makes as it traverses the Internet in order to load stuff into your browser.
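QUIC is a full transport protocol with encryption and multiplexing, but the round-trip savings start with its choice of UDP: a client can send data immediately rather than first spending a round trip on TCP's connection handshake. The Node.js sketch below only illustrates that underlying property of UDP; it is not QUIC, and the host, port, and payload are made up.

```typescript
import * as dgram from "dgram";

// UDP is connectionless: the very first packet we send can already carry data,
// with no SYN/SYN-ACK/ACK exchange beforehand. (Illustration only; not QUIC.)
const socket = dgram.createSocket("udp4");
const payload = Buffer.from("hello, server");

socket.send(payload, 4433, "example.com", (err) => {
  if (err) console.error(err);
  socket.close();
});
```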

Although it is still in its early stages, Google is going to start testing the protocol on a "small percentage" of Chrome users who use the development or canary versions of the browser—the experimental versions that often contain features not stable enough for everyone. QUIC has been built into these test versions of Chrome and into Google's servers. The client and server implementations are open source, just as Chromium is.


Original author: Peter Bright

Aurich Lawson (with apologies to Bill Watterson)

Google announced today that it is forking the WebKit rendering engine on which its Chrome browser is based. The company is naming its new engine "Blink."

The WebKit project was started by Apple in 2001, itself a fork of a rendering engine called KHTML. The project includes a core rendering engine for handling HTML and CSS (WebCore), a JavaScript engine (JavaScriptCore), and a high-level API for embedding it into browsers (WebKit).

Though known widely as "WebKit," Google Chrome has used only WebCore since its launch in late 2008. Apple's Safari originally used the WebKit wrapper and now uses its successor, WebKit2. Many other browsers use varying amounts of the WebKit project, including the Symbian S60 browser, the BlackBerry browser, the webOS browser, and the Android browser.



By Dominique Hazaël-Massieux: People used to stare at me and laugh, back in 2005 when W3C launched its Mobile Web Initiative to advocate the importance of the web to the mobile world. Now I am the one smiling much of the time, as I did most recently during the 2013 edition of the Mobile World Congress (MWC) in Barcelona, one of the largest events to focus on mobile devices and networks.

This year W3C had a huge HTML5 logo splashed across its booth to emphasize the impact of the Open Web Platform across industries and devices. But the real adoption story was told by the HTML5 logos prominent at many, many other booths. The web has gained real visibility on mobile, and we should all be smiling because we are all getting closer to a platform for reaching more people on more devices at lower cost.

MWC 2013 also confirmed that HTML5 has broken out of the browser. We are seeing more and more HTML5-based development platforms, such as PhoneGap, Windows 8, BlackBerry, and Tizen. Mozilla’s big announcement at MWC 2013 centered on Firefox OS, its mobile operating system entirely based on web technologies. W3C and Intel partnered to create a T-shirt that says “I See HTML5 Everywhere.” And indeed, I do.

The challenge of mobile

Not only does the web have a big role to play on mobile; mobile also has a key role to play for the web. As more and more of our connected interactions start or end on mobile devices, we must ensure that the web platform adapts to our mobile lives. I believe this is critical for the future of the web.

For many years W3C has designed technology to make the experience of web users on mobile ever more rich, adapted, and integrated. For example, CSS media queries provide the basis for responsive web design. There is already a lot for mobile, and a lot more is coming. To help people follow all the activity, every quarter I publish an overview of web technologies that are most relevant to mobile.
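Media queries themselves are written in CSS, but the same breakpoint logic is exposed to scripts through the standard window.matchMedia API. Here is a minimal TypeScript sketch; the 600px breakpoint and the single-column class name are invented for illustration.

```typescript
// Minimal sketch: reacting to a viewport breakpoint from script.
// The 600px breakpoint and the "single-column" class are illustrative only.
const narrowViewport = window.matchMedia("(max-width: 600px)");

function applyLayout(query: MediaQueryList | MediaQueryListEvent): void {
  // Toggle a hypothetical single-column layout when the query matches.
  document.body.classList.toggle("single-column", query.matches);
}

applyLayout(narrowViewport);             // apply once on load
narrowViewport.addListener(applyLayout); // re-apply when the viewport crosses the breakpoint
```

In practice, the pure-CSS form of the same query is what responsive layouts are built on; the script form is useful when a layout change needs to drive behavior as well as style.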

These technologies are the tools designers can rely on to build the user experience they need. But technologies are only a small piece of the puzzle when it comes to making the web user experience work on mobile devices. The number of A List Apart articles about mobile development provides a clear sign that this challenge is driving creativity in the design community. Responsive web design, mobile first, future friendly, and just-in-time interactions are some of the trends that have resonated with me over the years. The creativity is fantastic, but we still want our lives to be easier. Where web technologies do not yet provide the hooks you need to practice your craft, please let us know. Feel free to write me directly: dom@w3.org.

Closing the gap

Another challenge that we, the web community, face on mobile is the amazing energy devoted to native development.

The web has displaced a lot of the native software development on traditional computers; on mobile, the reverse trend has happened. Content that users had enjoyed on the web for years started to migrate to native applications: newspapers, social networking, media sharing, government services, to name a few. And to add insult to injury, a number of these content providers are pushing their users away from their website toward their native application, with obtrusive banners or pop-ups.

It is unclear where the world is going on mobile: some statistics and reports show a strong push toward moving back to the web (e.g., the recent Kendo UI survey), while others argue the opposite. What is clear to me, though, is that we cannot afford to let mobile become a native-entrenched ecosystem.

What has made the web unique and popular in so many hearts is not the technology (some great, some terrible), nor even its ubiquity (which a lack of interoperability can reduce). I believe the much more fundamental importance of the web comes from its structural openness: anyone can publish the content they see fit, and anyone can participate in defining the future of the web as a platform.

Native ecosystems on mobile have historically been very closed ecosystems, under the control of single commercial entities. A world where the majority of our information and infrastructure would be trapped inside these ecosystems is not something we should accept lightly. Mind you, I appreciate the innovations spawned by these platforms, but we need to encourage the cycle where innovations become standards, and those standards prime the platform for the next innovations.

Of course the best way to shift the balance to the web is to make the web the best platform for mobile. Achieving this will require ideas and energy from many people, and web developers and designers play a critical role in shaping the next generation of web user experiences. I am leading a focused effort in W3C to assess what we can and should do to make the web more competitive on mobile, and welcome feedback and ideas on what the missing pieces in the puzzle are.

Beyond mobile

I believe a key part of making the web the “king of mobile” is to realize that mobile devices are a means to an end. In our connected world—computers, phones, tablets, TVs, cars, glasses, watches, refrigerators, lightbulbs, sensors, and more to come—mobile phones will most likely remain the hub for a while. The only platform that can realistically be made available on all these devices is the web.

We have a unique opportunity to make the Open Web Platform a success. I realize getting it right will not be trivial. Building user experiences that scale from mobile (or watches!) to TV is complex. Building user experiences that adapt to these very different types of interaction will be hard. Matching the needs of users in a growing diversity of contexts will make us cringe. Creating user experiences that abolish the device barrier (as I explored some months ago) is guaranteed to create more than a few headaches.

But there is unprecedented momentum to create an open platform for the planet. And that has me smiling a lot.


The Web font revolution that started around two years ago has brought up a topic that many of us had merrily ignored for many years: font rendering. The newfound freedom Web fonts are giving us brings along new challenges. Choosing and using a font is not merely a stylistic issue, and it’s worth having a look at how the technology comes into play.

While we cannot change which browser and OS our website visitors use, understanding why fonts look the way they do helps us make websites that are successful and comfortable to read in every scenario. Until recently, there were only a small handful of “Web safe” fonts we could use. While offering little variety (or means of expression), these fonts were very well-crafted and specifically adjusted—or even developed—for the screen, so there was little to worry about in terms of display quality.

Now that we have a great choice of fonts that can be used on websites, it becomes clear that the translation of a design into pixels is not something that happens naturally or consistently. OS makers apply different strategies to how typefaces are rendered, and these have evolved greatly over time (and continue to do so). As we look at fonts on screen more closely than ever before, we realize that the rendering of glyphs can differ significantly between systems and font formats. What’s more, it has become clear that even well-designed fonts may not look right on Windows if they are missing one crucial added ingredient: hinting.

This article presents the mechanisms of type rendering, how they were developed, and how and why they are applied by the various operating systems and browsers—so that when it comes time to choose a font for your next project, you know what to look out for to ensure the quality of the typography is consistently high.

Rendering Strategies

Ideal shape, black-and-white and grayscale rendering.

Rasterization

In digital type, characters are designed as abstract drawings. When text is displayed on screen, this very precise, ideal shape has to be expressed with a more or less coarse grid of pixels. As screens turned from mere preview devices for printing output into the actual medium we read in, more and more sophisticated rendering methods were developed in order to make type on screen easy and pleasant to read.

Black And White Rendering

The earliest method of expressing letter shapes used only black and white pixels, an approach sometimes referred to as bi-level rendering. Printers are still based on this principle, and thanks to their high resolution, the result is a very good representation of the design. On screen, however, the small number of available pixels does not convey the subtleties of the drawn shapes very well. And although we might not be able to see the individual pixels, the steps found in round contours are noticeable.

Grayscale Rendering

In the mid-1990s, operating systems started employing a very smart idea. Although screens have a rather low resolution, they can control the brightness of each pixel. This allows more information to be brought into the rasterized image.

In grayscale rendering, a pixel that is on the border of the original shape becomes gray, its brightness depending on how much it is covered by the ideal black shape. As a result, the contour appears much smoother, and design details are represented. The type on screen is no longer merely about being legible—it has its own character and style.

This principle—also called antialiasing—is the same one used when photos are resampled to a lower resolution. Our eyes and brain interpret the information contained within the gray pixels and translate it back into sharp contours, so what we perceive is fairly close to the original shape. A similar effect is at work in a relatively coarse newspaper image, which can appear nicely shaded if we hold it far enough away (or, similarly, in the art of Chuck Close). Recently, Gary Andrew Clarke took this to the extreme with his “Art Remixed” series.
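To make the mechanism concrete, here is a toy TypeScript sketch of coverage-based grayscale rasterization. The circle standing in for a glyph outline, the 4×4 sampling grid, and the 0–255 brightness scale are all assumptions for illustration; real rasterizers compute coverage analytically and far more efficiently.

```typescript
// Toy sketch of coverage-based grayscale rendering (antialiasing).
// An "ideal shape" is any function telling us whether a point lies inside the glyph;
// the circle used here is just a stand-in for a real outline.
type IdealShape = (x: number, y: number) => boolean;

const circle: IdealShape = (x, y) => (x - 8) ** 2 + (y - 8) ** 2 <= 36;

// For each pixel, estimate how much of it the shape covers by sampling a 4x4 grid
// inside the pixel, then map coverage to a brightness of 0..255
// (0 = fully inside the shape / black, 255 = fully outside / white).
function rasterizeGrayscale(shape: IdealShape, width: number, height: number): number[][] {
  const samples = 4;
  const image: number[][] = [];
  for (let py = 0; py < height; py++) {
    const row: number[] = [];
    for (let px = 0; px < width; px++) {
      let covered = 0;
      for (let sy = 0; sy < samples; sy++) {
        for (let sx = 0; sx < samples; sx++) {
          if (shape(px + (sx + 0.5) / samples, py + (sy + 0.5) / samples)) covered++;
        }
      }
      row.push(Math.round(255 * (1 - covered / (samples * samples))));
    }
    image.push(row);
  }
  return image;
}

// Example: rasterizeGrayscale(circle, 16, 16) yields a 16x16 grid of brightness values.
```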

Subpixel Rendering

Apparently colored pixels increase the resolution.

The third generation of rendering technology is characterized by apparently colored pixels. If we take a screenshot and the edges appear red and blue when enlarged, then we know that we are looking at subpixel rendering.

On LCD screens, the red, green and blue subpixels that control the color and brightness of the pixel are located side-by-side. Since they are so small, we don’t perceive them as individual colored dots. Having a closer look at the “red” pixel marked by the white dot reveals the strategy: all subpixels are switched on or off individually, and if the rightmost subpixel of the “whitespace” happens to be a red one, then the corresponding full pixel is technically red.

Subpixel rendering on an LCD screen.

The benefits of this technique become clear if we desaturate the image. Compared to plain grayscale rendering, the resolution has tripled in the horizontal direction. The position and weight of vertical stems can be reflected even more precisely, and the text becomes crisper.
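As a toy sketch of the same idea for an RGB-striped display, coverage can be sampled separately for the left, middle, and right third of each pixel and written to the red, green, and blue channels. This ignores the color filtering that real implementations such as ClearType apply to reduce fringing, and the sampling numbers are again only illustrative.

```typescript
// Toy sketch of subpixel rendering on an RGB-striped LCD: the same coverage idea
// as in grayscale rendering, but sampled once per subpixel stripe, which triples
// the effective horizontal resolution.
function rasterizeSubpixel(
  shape: (x: number, y: number) => boolean,
  width: number,
  height: number
): [number, number, number][][] {
  const samples = 4; // vertical samples per subpixel stripe
  const image: [number, number, number][][] = [];
  for (let py = 0; py < height; py++) {
    const row: [number, number, number][] = [];
    for (let px = 0; px < width; px++) {
      const rgb: number[] = [];
      for (let sub = 0; sub < 3; sub++) {          // red, green, blue stripes
        const x = px + (sub + 0.5) / 3;            // horizontal centre of this stripe
        let covered = 0;
        for (let sy = 0; sy < samples; sy++) {
          if (shape(x, py + (sy + 0.5) / samples)) covered++;
        }
        rgb.push(Math.round(255 * (1 - covered / samples)));
      }
      row.push([rgb[0], rgb[1], rgb[2]]);
    }
    image.push(row);
  }
  return image;
}
```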

Current Implementations

For the display of text, almost all browsers rely on system rasterizers. When looking at Web font rendering, the key distinction we have to make is the operating system. However, there are differences between the browsers in terms of the support given to kerning and ligatures, as well as the application of underline position and thickness, so we cannot expect perfectly identical rendering in all browsers (even on one platform). What’s more, on Windows, the browser can have the font rendered by either of the system technologies—GDI or DirectWrite.

Before we look at these in detail, let’s first get an overview of where each one is used:

Rendering modes used by Windows browsers.

Windows

On Windows, the font format has a significant impact on the rendering. The crucial difference is between PostScript-based and TrueType-based fonts, and not the way these are brought into the browser—JavaScript vs. pure CSS, raw fonts vs. “real” Web fonts, etc. We will see identical rendering as long as the underlying font is the same.

File formats can give us a clue as to which underlying rendering technology is being used, although it’s best not to rely completely on the naming conventions. For example, EOT and .ttf files always contain TrueType, whereas .otf fonts are typically PostScript-based. But then there’s the wrapper format WOFF, which can contain either “flavor” of font, so we can’t tell which one it holds (and therefore what kind of rendering may be used) just by looking at the file name. Unless you’re using EOT or .ttf files and can be sure the font is TrueType-based, more investigation when purchasing fonts is always recommended.
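If you do want to check what a particular WOFF wraps, the answer sits in the file header: per the WOFF specification, bytes 4–7 hold the sfnt version of the wrapped font, 0x00010000 (or "true") for TrueType outlines and "OTTO" for PostScript/CFF outlines. Here is a minimal Node.js sketch; the file name in the usage comment is hypothetical.

```typescript
import { readFileSync } from "fs";

// Minimal sketch: peek at a WOFF file's header to see which outline flavor it wraps.
// Bytes 0-3 are the signature "wOFF"; bytes 4-7 are the sfnt version of the wrapped
// font: 0x00010000 or "true" for TrueType outlines, "OTTO" for PostScript/CFF outlines.
function woffFlavor(path: string): "TrueType" | "PostScript (CFF)" | "unknown" {
  const data = readFileSync(path);
  if (data.length < 8 || data.toString("latin1", 0, 4) !== "wOFF") {
    throw new Error(`${path} does not look like a WOFF file`);
  }
  const flavor = data.readUInt32BE(4);
  if (flavor === 0x00010000 || flavor === 0x74727565 /* "true" */) return "TrueType";
  if (flavor === 0x4f54544f /* "OTTO" */) return "PostScript (CFF)";
  return "unknown";
}

// Example (hypothetical file name):
// console.log(woffFlavor("MyWebFont.woff"));
```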

TrueType and PostScript fonts differ in the mathematics used to describe curves—something that rasterizers don’t care much about; it only makes a difference for the type designer when editing the glyph shapes. What is more relevant is the different approach to hinting. PostScript fonts only contain abstract information on the location of various elements of each letter (and rely on a smart rasterizer to make sense of this), whereas TrueType fonts include very specific low-level instructions that directly control the rendering process. Curiously, however, the effective differences in rendering are not due to these differences in concept, but rather stem from Microsoft initially deciding to apply its new rendering engine only to TrueType fonts.

Windows: TrueType Fonts

TrueType font rendering with Windows grayscale.

On Windows XP, text is rendered as grayscale by many browsers. Although not as crisp as the subpixel rendering used by Mac OS, the letters are nicely smoothed and look great in large sizes.

TrueType font rendering with Windows GDI ClearType.

ClearType is Microsoft’s take on subpixel rendering. It was first made available for GDI, the classic Windows API. Although available in Windows XP, it is not used by all browsers. In Windows 7 and Vista, ClearType is the default, which makes it the most widely used rendering technology (if we were to consider all internet users). However, it is important to note that this applies only to TrueType-based Web fonts—GDI-ClearType is not applied to PostScript-based fonts.

One curious property of this rendering technology is that along with adopting the advantages of subpixel rendering in the horizontal direction, Microsoft gave up smoothing in the vertical direction entirely. So ClearType is effectively a hybrid of subpixel and black-and-white rendering. This results in steps within the contour, which is particularly noticeable in large sizes. These jaggies at the top and bottom of the curves are unpleasant, but unavoidable—even the best hinting cannot make them disappear.

For type in large sizes, ClearType is a step backwards in rendering quality. The gains in horizontal precision are not significant, while the rough contours spoil the overall result.

TrueType font rendering with DirectWrite.

The future is bright, at least in terms of Windows font rendering. In DirectWrite (the successor of GDI), Microsoft added vertical smoothing to ClearType. This new rendering mode (so far used by Internet Explorer 9) gives us smooth and precise rendering in all sizes. The main remaining difference from Mac OS is that it still tries to align contours to full pixel heights, which leads to even better rendering provided the font is well hinted. What’s more, DirectWrite allows for subpixel positioning, which gives the characters exactly the spacing they were designed with, improving the overall rhythm and evenness of the texture.

Windows: PostScript Fonts

PostScript font rendering with GDI grayscale.

In GDI-based browsers, PostScript-based Web fonts are displayed in grayscale. Unlike the prevalent GDI-ClearType, this gives smooth contours. And unlike TrueType hinting, PostScript hinting is easy to create, even automatically.

PostScript font rendering with DirectWrite.

DirectWrite not only gives smoother outlines, it also applies subpixel rendering to PostScript fonts. Unlike TrueType rendering, however, it allows for more gray pixels in order to reflect stroke weights more realistically. That makes it well-balanced, and similar to Mac OS rendering.

At some point in the future—browser makers and users will not switch as quickly as we wish—DirectWrite will succeed the older Windows rendering methods, and we will indeed be spoilt for choice between TrueType- and PostScript-based Web fonts.

Windows: Unhinted Fonts

Unhinted font rendered with grayscale.

In the old Windows grayscale mode, completely unhinted fonts look surprisingly good. Since the font does not “align itself” to full pixels via hinting, and the rasterizer does not enforce this either, we have a rendering that is similar to that of iOS. Unfortunately, unhinted fonts are currently not an option, as the next example shows:

Unhinted TrueType font in GDI-ClearType rendering.

As noted in many discussions of Web font rendering quality, GDI-ClearType is extremely dependent on good hinting. Horizontal strokes have to be precisely defined by means of hinting; otherwise they might be rendered at an inappropriate thickness. Even in larger sizes, hinting is crucial. Unhinted fonts will show “warts” sticking out where contours are not correctly aligned to the pixel grid, as in the example above.

Unhinted font rendered with DirectWrite.

In DirectWrite, unhinted PostScript and TrueType-based Web fonts show virtually the same rendering. Text fonts of either flavor will still need good hinting in order to keep the strokes crisp and consistent. Display fonts may even get away with sloppy or no hinting, since this does not show much in large sizes.

Mac OS X

Font rendering in Mac OS X.

On Mac OS, all browsers use the Quartz rendering engine. TrueType and PostScript fonts are rendered in exactly the same way, since hinting—the biggest conceptual difference between the two formats—is ignored. The subpixel rendering on Mac OS is very robust, so this platform is typically the one we need to worry about the least. The rasterizer doesn’t try to understand the strokes and features that make up a font, as everything is represented by more or less dark pixels. Since the letter shapes are not interpreted, they cannot be misinterpreted. Quartz rendering is reliable because it doesn’t try to be smart. As a side note, Apple does seem to apply some subtle automagic to enhance the rendering, but this is entirely undocumented and beyond our control.

In some cases, however, this leads to less-than-ideal results. In the above example, the large size “T” has a fuzzy gray row of pixels on top because the theoretical height is not a full pixel value, and Mac OS does not force its alignment. Unfortunately this cannot be controlled by the font maker. However, the blurriness occurs only in certain type sizes. So typically, choosing a slightly different font size fixes the problem. With a bit of trial-and-error, one can find a type size that looks comfortable and crisp.

Another difficult-to-control phenomenon is that on the Mac, type tends to be rendered too heavy. This difference is most noticeable in text sizes, where the same font can look a bit “sticky” on Mac OS while appearing almost underweight on Windows.

iOS

Font rendering in iOS.

The rendering on iOS follows the same principles as on Mac OS—the main difference is that it currently does not employ subpixel rendering. The reason might be that when the device is rotated, the system would have to re-think and update the rendering because the subpixels are physically oriented in a different way, and the makers wanted to minimize CPU use.

Conclusions

Website visitors use a great variety of systems and browsers. Some are not up to date, and sometimes it’s not even the user’s fault, but rather a company’s policy to stick with a certain setup. My personal opinion is that we should try to give them the best rendering we can, instead of blaming OS makers or demanding that users switch to better systems.

On Mac OS and iOS, we hardly have any control over the rendering, which is acceptable (since it’s generally very reliable). One problem is that fonts generally render too heavy. Maybe some day, Web font services can improve the consistency by serving slightly heavier or lighter fonts depending on the platform.

On Windows, hinting matters—especially for TrueType-based fonts (the only Web fonts Internet Explorer 6–8 will accept). Apart from that, one significant control we have over the rendering is the choice between TrueType and PostScript. Except for very well-hinted fonts at smaller sizes, the latter renders as well or better, and is easier to produce. Even though DirectWrite is making Windows rendering more pleasant, it will not remove the need to provide well-hinted fonts.

Practical Application: Improving Display Font Rendering

Some Web font providers (such as Typekit or Just Another Foundry) have started serving display fonts in PostScript-based formats.

JAF Domus Titling Web in different rendering environments: GDI ClearType, DirectWrite, Windows grayscale, and Mac OS X.

While the GDI ClearType jaggies are unavoidable in IE 6–8, all other scenarios produce nice, smooth results. This also means that we will still need fonts with decent TrueType hinting—the browser share of IE 6–8 is still too big to serve fonts that don’t at least render cleanly there.

Bello—by Underware on Typekit—served as PostScript-based Web fonts (right), which gives smoother rendering than TrueType (left).

Typekit has also started to implement a hybrid strategy by serving display fonts as PostScript in order to trigger smoother rendering in Windows GDI. This requires some decisions to be made on the basis of visual judgement.
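As a rough illustration of the kind of decision involved, here is a hypothetical TypeScript sketch; it is not Typekit's actual implementation, and the interface and file-naming scheme are invented for this example.

```typescript
// Hypothetical sketch of the hybrid serving decision described above:
// display fonts get a PostScript-flavored file (smooth grayscale rendering in
// GDI-based browsers), while text fonts keep their carefully TT-hinted file.
interface HostedFont {
  family: string;
  isDisplayFont: boolean; // decided by visual judgement, as the article notes
}

function fileToServe(font: HostedFont): string {
  return font.isDisplayFont
    ? `${font.family}-ps.woff`  // PostScript (CFF) outlines
    : `${font.family}-tt.woff`; // hinted TrueType outlines
}

// Example: fileToServe({ family: "SomeDisplayFace", isDisplayFont: true })
// would return "SomeDisplayFace-ps.woff".
```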

“How do you define a display font?”, you may ask, and it is indeed difficult to draw the line. Some of the foundries offer high-quality, manually hinted TrueType fonts that look great at text sizes (and it would be a pity to lose this sophistication by converting them to PostScript). Some text fonts may well be used at very large sizes. So ideally, we would have to offer them in two different formats. However, the increased complexity of the UI (as well as of the back-end handling) has so far kept us from doing this.

Future Developments

More and more type designers are becoming aware of the technical issues that arise when fonts are used on the Web, particularly TrueType hinting. As the Web font business grows, they are willing to put some effort into screen-optimizing their fonts. In the near future, we will hopefully see a number of well-crafted new releases (or at least updates to existing fonts).

With increasing display resolutions—and, more importantly, improving rasterizers—we will slowly have to worry less about the technical aspects of font rendering. GDI-based browsers will certainly be the boat anchor in this respect, so we won’t be able to use TrueType fonts that aren’t carefully hinted for yet another few years. Once this portion of Web users has become small enough, the process of TrueType hinting (which is time-consuming and requires considerable technical skill) will become less crucial. While most Web fonts currently on the market are TrueType-flavored, I expect the industry will largely switch to PostScript, which is the native format nearly all type designers work in (and the easier one to produce).


Note: A big thank you to our fabulous Typography editor, Alexander Charchar, for preparing this article.

© Tim Ahrens for Smashing Magazine, 2012.


Photo: tobias.munich/Flickr

Think your three-second page loads are “just fine”? Think again.

According to engineers at Google, even the blink of an eye — which takes around 400 milliseconds — is too long.

That’s the word from the New York Times, which makes an unusual foray into the world of web development with its article “For Impatient Web Users, an Eye Blink Is Just Too Long to Wait.”

Some web developers may remember the days of the two-second rule (and no, not the one that applies to dropping food on the floor). The established wisdom — well-tested at the time by usability experts like Jakob Nielsen and others — was that after two seconds the number of users willing to wait for your page to load dropped off significantly.

That rule still holds; it’s just the amount of time that’s changed. Nowadays, the Times claims, users drop off after a mere 400 milliseconds, and a difference in page load time of just 250 milliseconds is enough to give you a distinct advantage over your competitors.

It’s that last number that’s perhaps most interesting. Anyone who’s browsing the web via a 3G connection can tell you that if you’re only willing to wait 400 milliseconds for a page to load, you aren’t going to see much of the web. On mobile networks bandwidth constraints are even more of an issue than they were when the two-second maximum was popularized. Users seem to understand this, but they don’t see it as an excuse. Now, perhaps more than ever, slight differences in page load time can give your site a significant advantage over competitors, according to the Google and Microsoft engineers quoted in the Times piece.
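If you want to see where your own pages land relative to these numbers, the Navigation Timing API that browsers expose makes the relevant timestamps available. A minimal sketch follows; the two derived figures and their labels are our own choice, not anything Google prescribes.

```typescript
// Minimal sketch: measuring your own page load time with the Navigation Timing API.
window.addEventListener("load", () => {
  // Defer one tick so loadEventEnd has been filled in before we read it.
  setTimeout(() => {
    const t = performance.timing;
    const network = t.responseEnd - t.navigationStart;  // request + response
    const total = t.loadEventEnd - t.navigationStart;   // until the load event finishes
    console.log(`network: ${network} ms, full page load: ${total} ms`);
  }, 0);
});
```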

In other words, users may still, in some circumstances, be willing to wait a second, but if your competitor’s page is even 250 milliseconds faster, you can kiss your users goodbye.

Don’t believe us? Head over to the Times article and see if Google’s engineers don’t convince you. When you’re done, we’ve got a few tips on how to speed up your website and make sure that no one has the edge on you. Here are a few helpful articles from the Webmonkey archives:
